The rise of automation
How autonomy is plotting to upend the offensive security space
To be clear, this post in no way suggests there is no value in a manual red team. In fact, quite the opposite: there is tremendous value in having an expert red teamer evaluate the nooks and crannies of a system. Instead, this editorial describes the broader push toward automation in the industry. Not every organization can afford a manual red team, and for them, advances in automated capabilities are critical to getting the best security possible.
Manual-only red teaming has an expiration date.
Security testing has gone through a series of adaptations, improving over time. That is, until recently, with the advent of adversary emulation. This was the first step backward in the security assessment process: adversary emulation is a regressive move because it hurts those looking for security more than it helps them.
Adversary emulation promises an elusive goal: the ability to emulate, with any real accuracy, Advanced Persistent Threat (APT) groups. Trusting a team of hackers less advanced than the group they are emulating is a fool's errand.
Because security testing already exists almost exclusively on computers, it can be automated using the same technology it is intended to test. In fact, we should expect automated security testing to be more accurate, more accessible, and less expensive than manual testing.
Are there exceptions to this? Of course. Like anything else, there are exceptions to the rule. In the case of automated security testing, there is a difference between government-level Offensive Cyber Operations (OCO) and red team security testing. The former will always (at least in the foreseeable future!) require manual eyes and creativity. In the OCO space, trained government operators spend months creatively uncovering “0-day” vulnerabilities. These are unpredictable hacks that haven’t been discovered or disclosed yet.
No automated test could find them because, until the moment they’re found, they don’t exist.
This post also doesn’t touch on physical penetration testing which, almost by definition, requires a human element.
However, basic red team security testing makes up the lion's share of the assessment space, and this is the area that is ripe for automation. Red teaming is the process of using existing, already-known security bugs and vulnerabilities to hack a system. The known can be automated. The unknown will continue as a manual exercise.
When will manual red teaming die? It won't, really. Advanced manual testing will always be needed, and it complements automation nicely. But as automated tools gain prevalence, the movement should quicken. Being fully autonomous allows these tools to significantly lower the cost and time barriers to red teaming, making it possible for an organization of any size to take part in the process.
Think of it this way. As a red teamer, when doing a port scan, do you check every port one by one over manual TCP socket connections, or do you rely on an automated script to do that work? Automation doesn't always mean fully replacing human behavior. It often means leveraging technology to perform repetitive tasks.
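The port-scan analogy above can be sketched in a few lines. This is a minimal illustration, not the implementation of any particular tool: it assumes a reachable target host and uses only the Python standard library, with a thread pool doing the repetitive connection attempts a human would never perform by hand.

```python
import socket
from concurrent.futures import ThreadPoolExecutor


def check_port(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def scan(host: str, ports: range) -> list[int]:
    """Try every port concurrently; the machine does the repetitive work."""
    with ThreadPoolExecutor(max_workers=64) as pool:
        results = pool.map(lambda p: (p, check_port(host, p)), ports)
    return sorted(p for p, is_open in results if is_open)
```

A human typed roughly twenty lines once; the script then performs thousands of identical checks tirelessly. That is the pattern automated red teaming generalizes: encode the known, repetitive procedure, and free the human for the creative work.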
We are still at the beginning of this industry taking off, however. The key is the brain. Fully autonomous red teaming can only be effective if the brain powering the autonomous actions is strong and "human-like". Without that, all you're doing is chaining together a series of pre-programmed atomic procedures.
As the brain gets smarter, the red teaming process becomes more automated. As autonomy becomes commonplace, advanced security becomes more accessible.
And access to advanced security should encourage us all.