It’s a question often posed about autonomous solutions: is this okay to run in production?
The answer isn’t complicated (hint: it’s yes), but answering why may take a minute. Or a blog post. So here goes.
Let’s set aside the question of autonomous red teaming and just focus on old-school, manual security testing.
When you red team, what you’re really doing is uncovering your unknown security holes. Red teaming is, by definition, the process of playing devil’s advocate. So you’re poking and prodding your systems in unintended ways, the same ways an adversary does, hoping to uncover a weakness.
This doesn’t come without risk. If you poke your system too hard, it may topple over. If you do this in production, your customers may feel the weight of the fall.
But consider this: if you get attacked in the middle of the night by someone using the same techniques as your internal red team (or software), you’ll still topple over, except this time you won’t have control. Worse, your customers may be compromised too. In other words, by trying not to topple your system, you may be setting up an even bigger fall.
Most organizations have multiple environments. There is the development environment for engineers to build solutions. The QA environment for testers. The staging environment to mirror production, just in case you need it. And finally, the production environment to serve the public.
Why not just red team your test environments? Isn’t that safer? On the surface, yes. But remember the earlier point: that safety may be a façade right up until you get attacked, at which point it flies out the window faster than your data being exfiltrated.
"If my staging environment mirrors production, isn't that the same as testing prod but safer?"
If you find yourself asking this question, consider whether your staging environment really is exactly the same as production. Your staging environment is likely a test bed for flushing out bugs that are hard to debug in production. Most staging environments run the same software as their production counterparts, but the scale (number of servers, data centers, etc.), network rules, and infrastructure are far different, to lessen the cost. Even a simple difference, such as production spanning three data centers while staging runs in a single DC, is significant to an attacker, who sizes up your attack surface when planning a strike.
Why do you think organizations with top-dollar defensive tools get hacked every day?
You should expect quality systems at your organization. Ones that do not topple over during a red team assessment. Your systems should be resilient to testing and get stronger over time. Your engineers will respect this and you’ll be doing more long term good - for you and your customers.
When using Operator against your production environment (or any environment, really), note that we do not create the harmful effects of a real adversary. We take several safety precautions, such as never encrypting your files during a simulated ransomware attack; instead, we copy a file and encrypt the copy, to prove we can. This doesn’t mean your systems won’t topple over during an exercise, however. The point is this: if a safe security assessment topples your servers, be glad you discovered it during the test instead of during a real-world attack!
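To make the "copy then encrypt the copy" precaution concrete, here is a minimal sketch of what such a harmless capability check could look like. This is not Operator's actual implementation; the function name is hypothetical, and the XOR keystream stands in for a real cipher purely for illustration. The key property is that the original file is never modified:

```python
import secrets
import shutil
from pathlib import Path

def safe_encrypt_probe(target: Path) -> Path:
    """Prove we *could* encrypt a file without harming the original.

    Copies the target file, then encrypts only the copy. The original
    is left byte-for-byte untouched, so the exercise demonstrates the
    capability (read + write + encrypt) without a destructive effect.
    """
    # 1. Copy the file, preserving metadata, next to the original.
    copy_path = target.with_suffix(target.suffix + ".probe")
    shutil.copy2(target, copy_path)

    # 2. Encrypt the copy with a throwaway key. A real tool would use
    #    a proper cipher; XOR with a random keystream is illustrative.
    key = secrets.token_bytes(32)
    data = copy_path.read_bytes()
    keystream = (key * (len(data) // len(key) + 1))[:len(data)]
    copy_path.write_bytes(bytes(a ^ b for a, b in zip(data, keystream)))

    # 3. Return the encrypted artifact as evidence for the report.
    return copy_path
```

A real assessment tool would also clean up the probe artifact afterward and log the action for the final report; the sketch keeps only the core safety idea.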
Now for the question of autonomous systems.
Manual red teams rarely get asked the ‘can I do this in production?’ question. People are (for good reason) wary of automated processes, especially those built for offensive security. But this reluctance will change. It will take time. It will take effort. Most of all, it will take reforming how you think about your security.
Red teaming is great to do, but the time and cost involved make it unachievable for most companies. For those remaining, running several red team exercises a year just isn’t enough. It’s better than none, but continuous red teaming is better. It’s also the future. The only way to get there is to embrace autonomous red team software.
Okay, you get it. You’re technical, you see this as the optimal strategy. But how do you get your manager, the one who takes the blame when things go south, to see this as well?
Move into the space slowly. Instead of going full-throttle and deploying an autonomous red team into your network, use a tool that allows you to have full manual control. Do your first several security assessments without turning on any of the autonomous bells & whistles. Plan them in small doses. Not every assessment needs to be a full-blown endeavor. Then, over time, it will become easier to introduce autonomy as you build trust, both with your organization and with your tool of choice.
In the end, choosing autonomous red teaming means choosing the future. Your attackers are already using autonomous tools because they make hacking you more efficient, so be prepared to defend yourself with a dose of autonomy of your own.