Action-oriented red teaming
Using Prelude Operator to generate automatic security recommendations for your environment
In a manual red team exercise, when all work is done, a "hot wash" ensues. During this meeting, the red and blue teams get together and review what occurred. The red team typically drives the conversation, going through the tactics taken, white-cards used and successes/failures (from their perspective). Then the blue team does the same but this time centering on what behaviors they detected and how they responded.
Essentially, everyone is attempting to determine what occurred, what false positives were seen and, at the end of the day, what should be done.
After the meeting, the red team creates an extensive report (often dozens of pages long) summarizing the event. Arguably the most important part of this report is the recommendations.
Red team recommendations can range from generic (e.g., you need to add centralized logging) to specific (e.g., you need to identify an attempt to run Invoke-Mimikatz from PowerShell). The intent of the recommendations is for the blue team to close potential holes.
Automating recommendations
Most automated security tools struggle with recommendations. When I designed the MITRE CALDERA framework, I intentionally ignored the problem altogether. Why? Because of their fluidity, it can be difficult to deliver recommendations of value.
To illuminate the problem, let's examine the dichotomy between recommendations generated by antivirus and those produced by a red team exercise.
One of the core functionalities of an antivirus program is looking for file signatures. Using a predefined list of "bad" hashes, the program will compare every entry on the target file system against the list, looking for "bad." If a match is found, the security recommendation is to quarantine the file.
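To make the contrast concrete, here is a minimal sketch of that signature model; the empty hash set and the quarantine note are placeholders, not any vendor's real feed or behavior.

import hashlib
from pathlib import Path

KNOWN_BAD_HASHES = set()  # would be populated from a vendor signature feed

def scan(root):
    # Flag any file whose SHA-256 digest matches a known-bad signature
    flagged = []
    for path in Path(root).rglob('*'):
        if not path.is_file():
            continue
        try:
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
        except OSError:
            continue  # unreadable file, skip it
        if digest in KNOWN_BAD_HASHES:
            flagged.append(path)  # recommendation: quarantine this file
    return flagged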
When red teaming, you're almost by definition looking for security holes that exist by accident. Everything you can possibly do on a computer system can be tested, and each possible combination of actions can be attempted. This effectively infinite set of test scenarios can "pop" any unknown vulnerability, including zero days (vulnerabilities no one is publicly aware of yet).
So does this make automating recommendations from red teaming impossible?
How Operator solves recommendations
If you guessed no, then you're right!
In Prelude Operator, we decided to tackle automated security recommendations as a research problem. The easy solution would have been to hard-code recommendations, with something like this pseudocode:
def is_recommended(self, procedures):
    # Hard-coded recommendation: fire only if every listed TTP succeeded
    recommendation = dict(name='Rec1', ttps=['1', '2', '3'])
    if all(ttp in procedures for ttp in recommendation['ttps']):
        return recommendation
    return None
In this example, we create a "recommendation" dictionary containing a name for the recommendation and a list of TTPs in our platform that would all need to succeed for it to be generated. Playing this scenario out, if I deployed an adversary which successfully completed TTPs 1, 2 and 3 (made-up identifiers), I would get the "Rec1" recommendation.
The obvious flaw in this approach is the hard-coding. We would need to hire a large team dedicated to creating every combination of malicious procedures, then update these lists every time we add a new TTP to the platform. It would start out unsustainable, and get worse from there.
We instead opted for a research-oriented approach.
It works like this.
1. We programmatically read our internal training materials, which contain hundreds of combinations of malicious techniques.
2. We build predictive lists out of the malicious combinations, classify them by their ATT&CK techniques, and store them in RAM.
3. When you load recommendations from the Reports section, we dynamically compare your filtered results to the malicious combinations in order to determine recommendation matches.
4. When multiple recommendations match, we run a scoring algorithm based on the tactical impact of the included procedures. The recommendation with the highest cumulative impact will be the one shown (a sketch of this matching and scoring follows the enum below).
from enum import IntEnum

class Impact(IntEnum):
    # Kill-chain ordering by ATT&CK tactic; higher values carry more weight
    UNKNOWN = 0
    DEFENSE_EVASION = 1
    COMMAND_AND_CONTROL = 2
    DISCOVERY = 3
    COLLECTION = 4
    PERSISTENCE = 5
    CREDENTIAL_ACCESS = 6
    PRIVILEGE_ESCALATION = 7
    LATERAL_MOVEMENT = 8
    EXECUTION = 9
    EXFILTRATION = 10
    IMPACT = 11
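To make the matching and scoring steps concrete, here is a minimal sketch of the idea. It is not Operator's actual implementation: the recommendation names, technique IDs and data structures are invented for illustration, and it assumes the Impact enum above is in scope.

# Hypothetical data, for illustration only
RECOMMENDATIONS = {
    'Detect staged collection': {'T1074', 'T1560'},  # Data Staged, Archive Collected Data
    'Detect exfiltration over C2': {'T1041'},        # Exfiltration Over C2 Channel
}

# Each technique maps to the Impact of its ATT&CK tactic
TECHNIQUE_IMPACT = {
    'T1074': Impact.COLLECTION,
    'T1560': Impact.COLLECTION,
    'T1041': Impact.EXFILTRATION,
}

def best_recommendation(executed_techniques):
    # Return the matching recommendation with the highest cumulative impact
    best_name, best_score = None, -1
    for name, required in RECOMMENDATIONS.items():
        if not required.issubset(executed_techniques):
            continue  # every required technique must have succeeded
        score = sum(TECHNIQUE_IMPACT[t] for t in required)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# A run that staged, archived and exfiltrated data
print(best_recommendation({'T1074', 'T1560', 'T1041'}))

Both candidates match in this example, and the exfiltration recommendation wins because its tactic sits later in the kill chain (an Impact of 10 versus a cumulative 8 for the two collection techniques).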
Ok, enough definition. Let's take this for a spin.
Taking a test drive
Start Operator and head to the Emulate section. From here, click to create a new adversary and add the following procedures:
FIND RECENT FILES
CREATE NEW DIRECTORY
STAGE COLLECTED FILES
COMPRESS STAGED DIRECTORY
COLLECT ARTIFACT (HTTP)
You can likely tell from these procedure names that this adversary will hunt for recently used files, stage them into a new directory on the target computer, then compress and exfiltrate the collected data back to the C2.
Seems like something your security solutions should catch, right?
Next, click the save icon in the top right to open the configured publishers. Ensure the Cloud option is enabled. This backs up your results in the Prelude cloud, which allows us to crunch recommendations.
When you publish data to the cloud, all requests and results are encrypted using your private encryption key, which is generated on your computer when you first install Operator. We cannot see or decrypt your data. We do, however, use the ATT&CK tactic and technique metadata to generate recommendations.
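As a rough illustration of that model, here is a minimal sketch assuming a symmetric key held only on your machine; Operator's actual cryptography and payload format may differ.

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in Operator, a private key is generated at install time
cipher = Fernet(key)

published = {
    'technique': 'T1041',                             # ATT&CK metadata stays readable
    'output': cipher.encrypt(b'raw command output'),  # result content is opaque to the cloud
}

Only the holder of the key can decrypt the output, so the cloud side sees the technique identifiers it needs for recommendations and nothing else.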
Next, ensure that your 'home' range is selected and click deploy, sending the adversary to your local ThirdEye agent for immediate execution.
Click into your agent, named after your hostname, and you should see the results streaming in. Once the agent turns green, it is complete.
If your run is like mine, the adversary executed successfully: it found several recently modified files on my system and ultimately exfiltrated them without setting off alarm bells from my antivirus.
Not a surprise: antivirus is not designed to detect this type of exfiltration.
When I filter my Reports to view today's activity, I can see the generated recommendations. In my case, two were generated. Each shows the recommendation name and the ATT&CK techniques that triggered it.
Clicking a recommendation will display a summary of the action I should take as a result. In addition, I can see all the links (commands) executed around the time period of my recommendation. Those highlighted indicate they were part of what generated the recommendation. Those not highlighted were just incidental.
Classifying by technique
I mentioned this earlier but want to emphasize it here, now that you've seen recommendations end-to-end: part of the power here is that we're operating (pun intended) on techniques, not procedures.
This is important for two reasons:
Adversaries constantly vary their procedures (i.e., technique implementations). If recommendations were hard-coded to specific procedures, keeping up would be a scaling nightmare.
You can import your own procedures (whether they come from Red Canary's Atomic Red Team or your own private collection) and they'll automatically work with our recommendations (see the sketch below).
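Here is a small, hypothetical sketch of that point: because a built-in procedure and an imported one carry the same ATT&CK technique ID, the matching logic never cares which implementation actually ran.

# Hypothetical procedure records; only the technique ID matters for matching
procedures = [
    {'name': 'Compress staged directory (tar)', 'technique': 'T1560'},
    {'name': 'Compress staged directory (PowerShell)', 'technique': 'T1560'},
]

executed_techniques = {p['technique'] for p in procedures}
print('T1560' in executed_techniques)  # True, regardless of which implementation ran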
Looking ahead
Recommendations are one of two heavy research areas for the Prelude development team, the other being autonomous decision making. As we build on what we've created so far, we will look to apply machine learning to the existing scoring algorithms that generate the recommendations.
Today, this scoring algorithm is powered by a kill chain, organized by ATT&CK tactic, to calculate a recommendation's impact. We will soon allow users to approve or discard recommendations from the Reports section, which will allow us to qualify the value of a recommendation as applied to real people. As we gather more data, we will feed this user-interaction data into the scoring in order to create recommendations that are community-powered and customized to your environment.
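Purely as an illustration of where that could go (the weighting below is invented, not a committed design), user feedback could scale the kill-chain score:

def community_adjusted_score(base_impact_score, approvals, discards):
    # Scale the kill-chain score by how often users found the recommendation useful
    total_votes = approvals + discards
    if total_votes == 0:
        return base_impact_score  # no feedback yet, fall back to impact alone
    approval_rate = approvals / total_votes
    return base_impact_score * (0.5 + approval_rate)  # dampened so feedback cannot zero it out

print(community_adjusted_score(10, approvals=8, discards=2))  # 10 * (0.5 + 0.8) = 13.0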
If research is an area you are interested in, reach out. At Prelude, we know that our success relies on working with the greater security community and building tools together.