You have an awesome bot idea that’s going to save you and your teammates a ton of time. You’ve decided on the Automation Anywhere Robotic Process Automation (RPA) ecosystem. And, you want to build your bot in a way that doesn’t result in a security incident. You’ve come to the right place.
The Open Web Application Security Project (OWASP) developed the Security by Design Principles — 10 principles to consider when designing secure software. Applying these 10 principles while designing your bot is a great way to ensure security is built in from the start. Let’s walk through how to apply them to Automation Anywhere bots.
An attacker only needs to find one successful avenue of attack to exploit a system, whereas a defender must defend every avenue. The fewer avenues of attack that exist in a system, the less effort is required to defend it. For bots, this means limiting:

- The number of applications and services the bot connects to
- The data inputs and file formats the bot accepts
- The credentials, permissions, and features the bot holds or exposes
Doing these things will go a long way toward reducing the complexity of your bot and minimizing the potential avenues of attack.
When a bot user installs and configures your bot following the setup and configuration guides, the result should be secure by default. In other words, all bot credentials, connections, and configuration settings must use the most secure available option by default.
Taking extra time to design the secure solution from the beginning is an investment that will pay off down the road. It will also save you from having to go back and implement additional security controls on an already-developed bot.
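As a minimal sketch of secure defaults, consider a hypothetical bot configuration object (the class and field names here are illustrative, not part of any Automation Anywhere API). A user who accepts every default gets the most secure option; weakening anything requires an explicit, visible choice:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BotConfig:
    """Hypothetical bot configuration: every default is the most secure option."""
    verify_tls: bool = True             # certificate validation on unless explicitly disabled
    use_credential_vault: bool = True   # secrets come from a vault, never a plaintext file
    allow_plaintext_passwords: bool = False
    session_timeout_seconds: int = 300  # short timeout unless explicitly relaxed

# A user who changes nothing gets the secure settings.
config = BotConfig()
```

The point of the pattern is that insecure settings such as `allow_plaintext_passwords=True` must be opted into in code or configuration, where a reviewer can see them.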
A bot should only have permission to access the exact files and resources it needs to perform its duties. Ensure administrative rights and other elevated permissions are used only when explicitly needed. In addition, any accounts your bot uses to connect to remote resources should be provisioned with only the permissions necessary for the bot to perform its duties.
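One way to enforce least privilege at the file level is to combine an allowlist with read-only access. This sketch assumes a hypothetical bot whose only legitimate input directory is `/opt/bot/input`; the path and helper name are illustrative:

```python
import os

# Hypothetical: the only directory this bot ever needs to read from.
ALLOWED_DIRS = {"/opt/bot/input"}

def open_bot_input(path: str):
    """Open a file read-only, and only if it lives in an allowed directory."""
    real = os.path.realpath(path)  # resolve symlinks before checking
    if os.path.dirname(real) not in ALLOWED_DIRS:
        raise PermissionError(f"bot is not permitted to read {path}")
    return open(real, "r")  # read-only: the bot never needs write access here
```

Even if an attacker tricks the bot into receiving a path like `/etc/passwd`, the allowlist check refuses it before any file handle is opened.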
Proper software security requires a layered approach known as defense in depth. This means having multiple strategies in place to protect bot assets from attacks. For an input file asset, it might mean layered data validation and input verification, restrictive permissions on the file, and output encoding before data is passed to a console or displayed on screen.
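The layering for an input value can be sketched like this, assuming a hypothetical record field that should be a bounded positive integer (the field format and limits are illustrative). Each layer is independent, so a bypass of one does not defeat the others:

```python
import html
import re

def process_record(raw: str) -> str:
    """Three independent layers: format validation, range check, output encoding."""
    # Layer 1: the field must match a strict format (digits only, bounded length).
    if not re.fullmatch(r"\d{1,6}", raw):
        raise ValueError("invalid record format")
    # Layer 2: even well-formed values are range-checked before use.
    value = int(raw)
    if not 0 < value <= 100000:
        raise ValueError("value out of range")
    # Layer 3: encode before the value reaches any display surface.
    return html.escape(f"processed {value}")
```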
In physical systems such as door locks and industrial control systems, designers practice the notion of failing safely. Machines won’t operate if they have a failure, and door locks often unlock on failure so that they don’t trap someone in the event of a fire.
Software designers follow a similar notion of failing securely. Bot failures should never result in data disclosure, data corruption, improper data access, or any other form of security impact on application assets.
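In code, failing securely usually means failing closed: when anything unexpected happens, the answer is “deny,” never “allow.” This sketch assumes a hypothetical approval check inside a bot; the record fields and the 10,000 limit are invented for illustration:

```python
def is_transfer_approved(record: dict) -> bool:
    """Hypothetical approval check that fails closed: any error means 'deny'."""
    try:
        # Imagine a lookup against an approvals service here. Missing fields,
        # bad types, or a service outage all raise an exception.
        return record["status"] == "approved" and record["amount"] <= 10000
    except Exception:
        # A failure never grants access; the bot denies and logs the error.
        return False
```

The insecure alternative — returning `True` from the exception handler so the workflow “keeps running” — is exactly the kind of failure mode this principle warns against.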
A key feature of bots is that they can integrate with diverse services from a variety of sources, which can mean multiple data input sources. No service from which the bot gathers or processes data should be trusted to contain “safe” data.
When you assume all API/service data is untrusted, you naturally come up with additional security controls and data validation to ensure the bot can properly process the untrusted data.
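In practice, that means validating the shape and contents of every service response before the bot acts on it. This sketch assumes a hypothetical invoice payload returned by an external API; the field names and the currency allowlist are illustrative:

```python
def parse_invoice(payload: dict) -> dict:
    """Treat the service response as untrusted: validate shape and types first."""
    required = {"id": str, "amount": (int, float), "currency": str}
    for field, expected in required.items():
        if field not in payload or not isinstance(payload[field], expected):
            raise ValueError(f"untrusted payload rejected: bad field {field!r}")
    if payload["currency"] not in {"USD", "EUR"}:  # hypothetical allowlist
        raise ValueError("unsupported currency")
    # Only validated, normalized data leaves this function.
    return {"id": payload["id"],
            "amount": float(payload["amount"]),
            "currency": payload["currency"]}
```

Everything downstream in the bot then works only with the validated, normalized record rather than the raw response.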
Often, developers create bots with too much functionality in an effort to have the bot do “everything.” From a cybersecurity perspective, this creates the risk of “everything” being compromised if the bot is compromised.
Where possible, resist this temptation. Building smaller, individual bots, each responsible for a specific task, results in a separation of duties and makes your bots more interoperable. Smaller bots are also easier to audit and secure.
The security of a bot should not hinge on a secret piece of information the attacker doesn’t know. You should assume your bots will be reverse engineered and that all areas of the software will be audited. Obscuring or obfuscating application details can make it more difficult for an attacker to attack a bot, but that tactic shouldn’t be relied on for security.
The more complex a bot is, the more difficult it will be to protect. Similar to limiting the size of the attack surface, keeping the bot security controls as simple as possible will improve its overall defense.
A common mistake in addressing software vulnerabilities is for developers to think they’ve fixed an issue when, in reality, the issue was only partially fixed or the fix itself introduced new security issues. As bot issues arise, ensure your fixes adhere to the other nine principles above and don’t introduce new security issues.
To prevent such regressions and partial fixes, bot development teams should integrate static scans into the software development lifecycle, conduct security peer reviews, and get trained on secure coding. One way to do that is through the Secure Bot Developer learning trail at Automation Anywhere University (AAU).