Embracing robots

How Operator chains are ushering in the new era of red team automation

Fighting the future is futile. You can kick and scream and naysay about how the existing ways have worked for ages and new things won't work. But when push comes to shove, the wave of the future eventually washes in.

When it comes to cybersecurity (like many other industries), the future is automation.

Most sectors in the industry have accepted this fact and have gone full-force in adopting autonomous technologies: Security Operations Centers (SOCs) rely heavily on automated log curation and alerting, DevSecOps teams use continuous integration and infrastructure-as-code pipelines, Cyber Threat Intelligence (CTI) analysts use data analysis to understand current threats... the list goes on. But red teams have been hesitant to make the move to automation.

Manual red team exercises have been done for ages - I've personally been doing them for 15 years - and not much has changed. Sure, the terminology has shifted from penetration testing to adversary simulation/emulation. But the core premise is the same: pick up your favorite command-and-control (C2) tool, physically head to the location under test, manually enter keystrokes to attack the system and move within it.

Many red team members will say this process is too complex and requires too much intuition for a computer to do it. A computer is too noisy. It'll get caught, it'll make the wrong decisions. But I think, if you step back, you'll realize that computer hacking is actually ripe for automation.

Let's get one thing clear: red team automation will not take the jobs of red team members. First, advanced (manual) red teaming will always be needed, as there is no replacement for human intuition in sensitive environments. Second, as autonomous red team systems start covering 90%+ of the red teaming needed, these systems will need to be built and controlled by red teamers who understand how to do so. Red team members should rejoice at the idea of extracting themselves from the repetition of the "boring" tasks and focusing more on the exciting ones.

To start, the entire process is done on a computer. Everything typed can be scripted. Every result can be parsed. And autonomous systems, by removing the human element, are able to close three "holes" in manual red teaming:

  • They only know what they know

  • Decisions are made based on personal bias, not facts

  • Information easily slips past human operators

Let's break this down, using the Operator C2 as a solution.

Collection of abilities

Hole #1: Manual red teams only know what they know

A red teamer has an arsenal of options when manually testing a network. Naturally, they have skills honed over years of experience. They may be gifted in Python programming or in cross-compiling binaries for efficiency, or they may have conducted mostly Windows-based assessments and know that operating system like a whiz.

But there are limitations. Everyone has limitations.

A talented red team will have the combined skill sets and experiences of its members. One team will be better than another. And every team will have gaps: places where they either don't know how to do something or can't do it in the most efficient manner.

Autonomous red teams can fill this gap nicely.

Operator, as an autonomous C2, is injected with the collective abilities (skills) of dozens of red team members from varying backgrounds. This happens through open- and closed-source repositories of Tactics, Techniques and Procedures (TTPs). When Operator starts up, it collects these abilities from every data source it has access to, importing hundreds (if not thousands) of procedures, each representing a specific part of a larger attack.

The more people contribute to open-source projects, like Prelude Community or Red Canary's Atomic Red Team, the more autonomous systems can leverage the combined skill sets of red teams everywhere. Now, instead of a red team being limited to the skills in their own heads, they can leverage experience from people all over the world.

The art of making decisions

Hole #2: Manual red teams make bad decisions

No matter how many abilities a red team has - whether gained through personal experience or collected from online repositories - their next step will be decision making. Which procedure should they execute first? Why? How does that lead to the second procedure they run?

This is normally referred to as human intuition. And this is the biggest barrier preventing a red team from accepting automation.

In Operator, we rely on the built-in brain, or automated planner, to form cyber kill chain ordering within a multi-layered finite state machine. Sound complicated? It's not for everybody, but if you're inclined to learn more, you should check out our post on how the brain works.

Regardless of how the brain actually operates, there is the technical problem of taking the output of one procedure and using that as input into a future, unknown one.
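At a high level, that chaining problem can be sketched as a simple loop (a hypothetical sketch with a flat dictionary of facts; names here are illustrative, and Operator's real planner is far richer than this):

```python
import re

# A procedure becomes runnable once every #{variable} in its command
# is a fact we have already learned.
VAR = re.compile(r"#\{([\w.]+)\}")

def run_operation(procedures, facts, execute):
    """Greedy chaining loop: run every runnable procedure, fold the facts
    it produces back into the store, and stop when nothing new can run."""
    done = set()
    progress = True
    while progress:
        progress = False
        for proc in procedures:
            if proc["name"] in done:
                continue
            if not set(VAR.findall(proc["command"])) <= facts.keys():
                continue  # unmet dependency; defer this procedure for now
            facts.update(execute(proc, facts))  # execute returns new facts
            done.add(proc["name"])
            progress = True
    return done
```

The key design point is that the ordering is never hard-coded: procedures run whenever the information they depend on becomes available.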

Let's say our first procedure looks like this:

id: 6469befa-748a-4b9c-a96d-f191fde47d89
metadata:
  version: 2
  authors:
    - privateducky
    - MITRE
  tags:
    - Crown Jewels
name: Create new directory
description: |
  Creating a staging directory is often a precursor to copying files into it. Hackers will do this action in order to
  exfiltrate important files without getting caught.
tactic: collection
technique:
  id: T1074
  name: Data Staged
platforms:
  darwin:
    sh:
      command: |
        mkdir -p /tmp/staged && echo /tmp/staged
  linux:
    sh:
      command: |
        mkdir -p /tmp/staged && echo /tmp/staged
  windows:
    psh:
      command: |
        New-Item -Path "." -Name "staged" -ItemType "directory" -Force | foreach {$_.FullName} | Select-Object

Depending on the operating system we're on, a different command will be automatically selected. Regardless, the end result is the same: a new directory is created and its full path is printed to console. Let's say the output is /tmp/staged.

Operator will immediately store this data as a fact, or a key/value pair which describes a single, identifiable - and objective - piece of information about a computer. The fact will contain a root and the technique it was discovered from. So in this case, the fact would be:

  • Key: directory.T1074

  • Value: /tmp/staged

The technique is obvious: we can read the procedure above and see it. But how did Operator know to apply the directory root?

When Operator starts, it loads procedures from our API, called GateKeeper. GateKeeper normally handles your login process and provides the training materials, but it also supplies Operator with parsers, which are capable of extracting valuable data from arbitrary text blobs. Don't worry: parsing happens only on your own computer, and your data is never sent elsewhere, even to us. We respect your privacy too much.

A parser is a simple regex which we run the output of every TTP through, looking for Indicators of Compromise (IOCs) in order to generate facts.

In this case, a directory parser matches the output, resulting in the fact root above being directory.
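As a rough illustration (not Prelude's actual parser code; the regex and function name are hypothetical), a directory parser can be as small as one pattern applied to a command's output:

```python
import re

# Hypothetical parser: match Unix or Windows paths on their own line.
DIRECTORY_PARSER = re.compile(r"^(/[\w./-]+|[A-Za-z]:[\\\w.-]+)$", re.MULTILINE)

def parse_facts(output: str, technique_id: str) -> list:
    """Turn raw TTP output into key/value facts like directory.T1074."""
    return [
        {"key": f"directory.{technique_id}", "value": match.group(0)}
        for match in DIRECTORY_PARSER.finditer(output)
    ]

print(parse_facts("/tmp/staged", "T1074"))
# → [{'key': 'directory.T1074', 'value': '/tmp/staged'}]
```

Feeding the output of the procedure above through this parser yields exactly the directory.T1074 fact described.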

Now we have learned a piece of information about the computer we're on. How can we leverage it? Easy: variables.

An Operator variable is a fact key wrapped in #{} syntax. Within any command in the adversary profile, Operator will automatically look to replace #{directory.T1074} with the learned value. In this case, maybe it finds this one.

id: 300157e5-f4ad-4569-b533-9d1fa0e74d74
metadata:
  version: 1
  authors:
    - privateducky
    - MITRE
  tags:
    - Crown Jewels
name: Compress staged directory
description: |
  Compressing a directory has many purposes, mainly making the contents smaller and condensing them to a single file.
  A hacker will tend to do this before attempting to steal files from a computer because it is less noticeable to
  steal a small file than a large number of bigger files.
tactic: exfiltration
technique:
  id: T1560.001
  name: Archive Collected Data
platforms:
  darwin:
    sh:
      command: |
        tar -P -zcf #{directory.T1074}.tar.gz #{directory.T1074} && echo #{directory.T1074}.tar.gz
  linux:
    sh:
      command: |
        tar -P -zcf #{directory.T1074}.tar.gz #{directory.T1074} && echo #{directory.T1074}.tar.gz
  windows:
    psh:
      command: |
        Compress-Archive -Path #{directory.T1074} -DestinationPath #{directory.T1074}.zip -Force;
        sleep 1; ls #{directory.T1074}.zip | foreach {$_.FullName} | select

Operator will replace all instances of #{directory.T1074} with /tmp/staged, fulfilling the dependencies of this procedure. In turn, the procedure may output /tmp/staged.tar.gz, which will generate a new directory.T1560.001 fact.

This process will repeat as many times as Operator is capable of using the information it has learned - or until the deployed adversary has achieved its goals.
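The substitution step at the heart of this repetition can be pictured in a few lines (a sketch assuming a flat fact store; the function name is hypothetical, not Operator's real API):

```python
import re

def apply_facts(command, facts):
    """Replace every #{key} variable with its learned fact value.
    Returns None when a dependency is unmet, so the procedure can be deferred."""
    try:
        return re.sub(r"#\{([\w.]+)\}", lambda m: facts[m.group(1)], command)
    except KeyError:
        return None

facts = {"directory.T1074": "/tmp/staged"}
cmd = "tar -P -zcf #{directory.T1074}.tar.gz #{directory.T1074}"
print(apply_facts(cmd, facts))
# → tar -P -zcf /tmp/staged.tar.gz /tmp/staged
```

Returning None for an unresolved variable is one simple way to signal "not yet": the procedure stays queued until a later step produces the fact it needs.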

Adversary goals

Adversary goals are a new concept, released in version 0.9.10. A goal is a fact value you are hoping to collect. In the above example, the adversary (hacker 1) is looking for a file.T1005 fact with a value containing passwords. This would match on /tmp/passwords.txt, C:/Users/passwords.xls or even ~/passwords.

When an adversary has a goal, it will execute only until it achieves that goal.
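Goal matching can be pictured as a substring check over the facts learned so far (a minimal sketch; Operator's real matching logic may differ):

```python
def goals_achieved(goals, facts):
    """True once every goal can be matched against a learned fact.
    A goal names a fact key plus a value fragment it hopes to see."""
    return all(
        any(f["key"] == g["key"] and g["value"] in f["value"] for f in facts)
        for g in goals
    )

goals = [{"key": "file.T1005", "value": "passwords"}]
print(goals_achieved(goals, [{"key": "file.T1005", "value": "/tmp/passwords.txt"}]))
# → True
print(goals_achieved(goals, [{"key": "file.T1005", "value": "/tmp/notes.txt"}]))
# → False
```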

Humans can't catch everything

Hole #3: Information easily slips past human operators

As you just saw, the chaining process Operator implements is capable of connecting procedures through the information it learns. This is automatic and precise, based on the strength of the parsers and the breadth of the procedures you have available.

When humans execute manual red teams, they're usually not as precise. In other words, we miss things.

For example, let's say you execute a procedure with significant output: one that prints all active network connections on a computer, resulting in hundreds of lines printed to console, each explaining which local host (interface) and port is talking to which host and port on a local (network) or remote machine.

Set aside the fact that it would take a human a long time to parse this list (during which the connections would be constantly changing) and focus on the high probability that the human misses something important.

Maybe they miss an IP address that could have allowed them to complete a lateral movement technique. That alone could alter the results of their red team exercise significantly. And there is the crux of the issue: by trusting a human to catch everything, you are accepting a high level of risk, because what your security team misses, an adversary may exploit.
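This is exactly where a parser never gets tired. A hedged sketch of automatically pulling remote hosts out of netstat-style output (the sample data and regex are illustrative, not a real GateKeeper parser):

```python
import re

# Fabricated netstat-style sample for illustration only.
NETSTAT_SAMPLE = """\
tcp4  0  0  10.0.0.5:49221   172.16.3.9:445    ESTABLISHED
tcp4  0  0  10.0.0.5:49300   93.184.216.34:443 ESTABLISHED
tcp4  0  0  127.0.0.1:8080   127.0.0.1:51000   ESTABLISHED
"""

# Match the remote address column (IP:port followed by the state at line end).
REMOTE = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3}):\d+\s+\S+$", re.MULTILINE)

def remote_hosts(output):
    """Collect unique, non-loopback remote IPs from connection listings."""
    return {ip for ip in REMOTE.findall(output) if not ip.startswith("127.")}

print(sorted(remote_hosts(NETSTAT_SAMPLE)))
# → ['172.16.3.9', '93.184.216.34']
```

Every IP surfaced this way is a candidate fact for a future lateral movement procedure, and the machine evaluates all of them, every time.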

Because of this information leakage, it's best to leverage computers for what they do best: crunching data quickly and making decisions based on the findings.

A human can intercept and approve decisions, sure, but they don't need to be responsible for executing each command manually, reading the (often long) arbitrary output themselves, and trying to think about what future attacks it may open up.


Moving toward the future is fast for some industries and slower for others. In cybersecurity, there is a mixture of both, depending on which sector you're in. It's a natural evolution of adapting to change that we've seen time and again.

At Prelude, we are investing in the future and providing solutions that can work for you today, but are optimized for the future. By using Operator, we hope to help you future-proof the security posture of your organization. We're pretty open about what we do, why we do it and exactly how we code it up.

We love red teaming and are excited to be part of a future where all of us 'red at heart' operators can extract ourselves from the mundane data analysis (and reports, but that's a future post!) and focus on what makes everyone safer: better security.