A red teamer hacks into computers to find their flaws. A blue teamer defends machines in their network from attacks. So what exactly does a purple teamer do?
In this post, we'll walk through a day in the life of a purple teamer. We'll cover the tools of the trade, the goals and the processes they apply. Along the way, we'll highlight Operator, a command-and-control (C2) platform built in large part for purple team exercises.
Purple teaming is still fairly new. It emerged from failures in red-and-blue team processes: despite having both red and blue teams, large companies have still suffered successful attacks. When the reasons are analyzed, the root cause has generally been that the offensive and defensive teams are not working together effectively. Each side has specialized skills, and the lack of common ground between them has resulted in misses.
I'd recommend reading Daniel Miessler's post defining purple teaming, as we won't redefine it here.
Ok, let's get started!
The scenario
You are a purple teamer at a Fortune 100 telecommunications organization. Your company has a small internal red team, consisting of 5 individuals, including you and one other purple team member. You also have an extensive blue team, with a few dozen SOC analysts, a couple threat intelligence analysts and several incident responders.
Every quarter, you (collectively) are tasked with running a security assessment. As the de facto bridge between the offensive and defensive sides, you are expected to organize the structure and execution of the exercises.
Your particular organization is structured like this:
The red team is in charge of manual security testing, conducting reconnaissance and the initial access attack. They then pivot to manual post-compromise activities.
The purple team is in charge of creating a repeatable "playbook" for the blue team to execute after the assessment. They are also charged with building the Rules of Engagement with the red team, ensuring the blue team understands the after-action report and generally acting as the bridge between the offensive and defensive sides.
The blue team is in charge of defending against the attack during the assessment and implementing any recommendations from the hot-wash.
Your CISO barked down an order that this quarter you are to emulate APT39, an Iranian threat group known for attacking telecommunications companies in order to perform surveillance activities.
APT39's focus on the telecommunications and travel industries suggests intent to perform monitoring, tracking, or surveillance operations against specific individuals, collect proprietary or customer data for commercial or operational purposes that serve strategic requirements related to national priorities, or create additional accesses and vectors to facilitate future campaigns. Government entities targeting suggests a potential secondary intent to collect geopolitical data that may benefit nation-state decision making. Targeting data supports the belief that APT39's key mission is to track or monitor targets of interest, collect personal information, including travel itineraries, and gather customer data from telecommunications firms. — FIREEYE
With your mission established, you move on to the first step of planning: Rules of Engagement.
Phase 1: Rules of engagement
The very first thing you should do, once you have your marching orders, is to create a Rules of Engagement document. Depending on the scope and depth of your security test, this could be a single-page document or it could contain dozens of pages of details. At a minimum, it should contain the mission, the machines in and out of scope, a detailed look at the techniques you will be testing and a clearly defined objective: what you hope to learn by the end.
For now, let's outline the components we know from our CISO: the mission and the objective.
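It can also be handy to keep the Rules of Engagement in a small, machine-readable form alongside the written document, so the scope and technique list can be dropped straight into later reports. A minimal sketch might look like this (the field names are purely illustrative; they are not a standard or an Operator format):

```python
# Illustrative only: a minimal machine-readable Rules of Engagement skeleton.
# Field names are my own convention, not a standard or an Operator format.
rules_of_engagement = {
    "mission": "Emulate APT39 post-compromise behavior on the corporate network",
    "objective": ("Measure how well the blue team logs, detects and responds "
                  "to APT39-style surveillance and collection techniques"),
    "in_scope": [],       # filled in during the scoping step below
    "out_of_scope": [],
    "techniques": [],     # filled in after the threat intelligence phase
}
```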
Next, let's determine who should (and shouldn't!) be part of the test, the scope.
Most organizations' gut instinct is to drop a beacon on every computer in the network. Fight this tendency. An attack at that scale is both strenuous on your network and unrealistic. Most adversaries attempt to fly under the radar and avoid making much noise (in terms of traffic). They compromise only the machines they need in order to strategically achieve their mission.
Generally speaking, I recommend a post-compromise test using 3-5 initial workstations or servers. Go ahead and do this now: pick a handful of servers and laptops in your network and outline them in your Rules of Engagement. Make sure you include some rationale for why you selected them. In addition, make sure you talk to the owner of each machine, explain the assessment you'll be running and get their permission.
Done? Time to move into the next phase of your planning: threat intelligence. This is where we'll add your anticipated techniques to the Rules of Engagement.
Phase 2: Threat intelligence
Since your organization doesn't have an internal threat intelligence department, you must use Open Source Intelligence (OSINT) resources to design your emulation plan.
Naturally, you head to FireEye's threat report on APT39. After getting a thorough understanding of their operational intent and motivations, you head to the MITRE ATT&CK entry describing their behaviors (tactics and techniques) during an attack. You note down the following (a short script for pulling this list automatically appears after it):
T1071: Application Layer Protocol
T1560: Archive Collected Data
T1136: Create (local) account
T1547: Boot or Logon Autostart Execution
T1110: Brute Force
T1115: Clipboard Data
T1059: Command and Scripting Interpreter
T1555: Credentials from Password Stores
T1005: Data from Local System
T1056: Input Capture (Keylogging)
T1036: Masquerading
T1046: Network Service Scanning
T1135: Network Share Discovery
T1027: Obfuscated Files or Information
T1003: OS Credential Dumping
T1090: Proxy
T1021: Remote Services
T1018: Remote System Discovery
T1053: Scheduled Task/Job
T1113: Screen Capture
T1505: Server Software Component
T1033: System Owner/User Discovery
T1569: System Services
T1204: User Execution
T1078: Valid Accounts
T1102: Web Service
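As promised above, you don't have to transcribe these IDs by hand: the minimal Python sketch below reads the enterprise-attack STIX bundle from the public mitre/cti GitHub repository and prints the techniques attributed to APT39. The URL and field names reflect that repository's layout at the time of writing, so verify them before relying on the output.

```python
# Sketch: list the ATT&CK techniques attributed to APT39 using the public
# STIX bundle from https://github.com/mitre/cti. Verify the URL and field
# names against the current repository before trusting the output.
import requests

ATTACK_URL = ("https://raw.githubusercontent.com/mitre/cti/master/"
              "enterprise-attack/enterprise-attack.json")

def apt39_techniques():
    objects = requests.get(ATTACK_URL, timeout=120).json()["objects"]
    by_id = {o["id"]: o for o in objects}

    # The group is represented as an "intrusion-set" object named APT39
    group = next(o for o in objects
                 if o["type"] == "intrusion-set" and o.get("name") == "APT39")

    techniques = {}
    # "uses" relationships link the group to "attack-pattern" objects
    for rel in objects:
        if (rel["type"] == "relationship"
                and rel.get("relationship_type") == "uses"
                and rel["source_ref"] == group["id"]):
            target = by_id.get(rel["target_ref"], {})
            if target.get("type") == "attack-pattern" and not target.get("revoked"):
                ref = next(r for r in target["external_references"]
                           if r.get("source_name") == "mitre-attack")
                techniques[ref["external_id"]] = target["name"]
    return techniques

if __name__ == "__main__":
    for tid, name in sorted(apt39_techniques().items()):
        print(f"{tid}: {name}")
```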
With the techniques identified, it's time to pop open Operator and see if we can build our adversary. Start by heading to the Emulate section, adding a new adversary (call it APT39) and clicking ADD TTPs.
We'll start top-down on our technique list above. First up is T1071. Using the search bar, enter the technique ID to filter all available procedures.
We can click Add on one or more procedures to add them to our profile.
We should iterate through all the techniques above, going through a similar process. As you do this, pay special attention to the platforms, executors and commands. Each procedure will work on one or many operating systems (platforms), be runnable on one or many execution programs (executors) and contain a code block which is the actual instruction. You should read our post all about TTPs before continuing on.
Have you noticed that you're not required to order the procedures in your adversary profile? This is because Operator uses an autonomous decision-making library to automatically select the best TTP at the right time.
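To make that idea concrete, here's a toy Python sketch - not Operator's actual schema or planning engine - of how procedures that declare their platform, executor and required facts can be selected automatically in whatever order becomes possible:

```python
# Toy sketch only: NOT Operator's real schema or decision engine. It shows
# why explicit ordering isn't needed when each procedure declares what it
# requires and what it may produce.
from dataclasses import dataclass, field

@dataclass
class Procedure:
    technique: str                                # e.g. "T1003"
    platform: str                                 # e.g. "windows", "linux"
    executor: str                                 # e.g. "psh", "sh"
    command: str                                  # the actual instruction block
    requires: set = field(default_factory=set)    # facts needed before running
    produces: set = field(default_factory=set)    # facts it may discover

def run_until_stuck(procedures, platform, executor, starting_facts=()):
    """Keep running any not-yet-run procedure whose requirements are satisfied."""
    facts, done, progress = set(starting_facts), [], True
    while progress:
        progress = False
        for proc in procedures:
            if proc in done or proc.platform != platform or proc.executor != executor:
                continue
            if proc.requires <= facts:        # every required fact is known
                done.append(proc)             # "execute" it
                facts |= proc.produces        # pretend the command yielded facts
                progress = True
    return done
```

In a real engine the facts come back from agent output; here they're simulated, but the property is the same: you describe the procedures, and the order works itself out at runtime.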
As you build your adversary profile, you're likely to come across a technique which is unavailable in Operator. You have a few options:
Ignore it. Just because a threat actor is known to execute a specific technique does not mean you are required to add it to your profile. Of course, this is the easy way out and doesn't give you the full test you want. Just keep in mind: there are thousands of procedures for any given technique, so having coverage for one does not mean you are protected. It only gives you a degree of confidence that is above zero.
Find it elsewhere. Did you know you can import TTPs into Operator from other open-source repositories? If you find a technique procedure inside Red Canary's Atomic Red Team, MITRE Stockpile or another open-source location, you can import it directly from the Editor section.
Write it yourself. If you have the time, you can write the procedure yourself. Head to the Editor section, click to add a new TTP and build your own using resources you find online. There are many databases of pre- and post-compromise attacks, such as Exploit-DB, which you can write into a procedure file. Remember that you can attach payloads to your procedures, which means you can easily execute binaries as attacks.
Any time you locate code online, make sure you give it an exhaustive security code review before using it.
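As one example of the "find it elsewhere" route, the sketch below reads the tests for a single technique out of a local clone of Red Canary's Atomic Red Team repository so you can review them before importing or adapting anything. The folder layout and field names (atomics/<TID>/<TID>.yaml) follow that repository's convention but should be double-checked against the version you clone; this is a review helper, not Operator's import path.

```python
# Sketch: list the Atomic Red Team tests for one technique from a local clone
# of https://github.com/redcanaryco/atomic-red-team. Folder layout and field
# names follow that repo's convention; verify against the version you clone.
from pathlib import Path

import yaml  # pip install pyyaml

def load_atomics(repo_root: str, technique_id: str):
    path = Path(repo_root) / "atomics" / technique_id / f"{technique_id}.yaml"
    doc = yaml.safe_load(path.read_text())
    for test in doc.get("atomic_tests", []):
        executor = test.get("executor", {})
        yield {
            "name": test.get("name"),
            "platforms": test.get("supported_platforms", []),
            "executor": executor.get("name"),
            "command": executor.get("command"),
        }

if __name__ == "__main__":
    for test in load_atomics("atomic-red-team", "T1115"):  # Clipboard Data
        print(test["platforms"], test["executor"], "-", test["name"])
```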
Depending on the scope and depth of your planned assessment, the adversary profile building stage may take a day or a couple of weeks. You may be interested in pulling in your blue team to assist throughout the process, so they can give advice and get eyes on the profile, or you may want to keep it a secret to ensure the test is closer to a real-world attack. Either way, since you are working with your internal red team, you should include them in your process so they can help build the profile and offer suggestions.
While you're building the automated adversary for the security assessment, your red team is busy conducting reconnaissance against the target network. This recon will be used to launch an initial access attack on the first day of testing. Since your adversary profile is geared toward post-compromise, not initial access, you avoid getting involved with the recon and instead focus your energy on the automation.
Once complete, you should ensure the profile is clearly defined in your Rules of Engagement before moving to the next step: Day #1 of the assessment.
Day #1
The excitement level is high: it's the first day of a red team event!
Typically, on the first day of an assessment people are flying in from remote locations and setting up the hardware required for the testing. Monitors, cables and laptops are strung everywhere and ad-hoc networks are being stood up to secure the connections the red & purple teams will be using. Kali Linux is feverishly being installed on all attacker computers, as are piles of other proprietary and commercial security tools.
One such tool is Prelude Operator.
As a purple teamer, you used Operator to build the adversary profile you'll be using during the security assessment. You'll use this again to launch the adversary later on in the assessment, supplementing the manual work the red team is plotting.
Start by ensuring you and the red team have installed and configured Operator. Then review the adversary profile with the team and map out the manual red team process for the next day.
Since Day 1 is all about preparation, go ahead and compile a handful of Pneuma agents and ensure they work properly. You'll be using these shortly.
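Pneuma is a Go agent, so producing binaries for several platforms is mostly a cross-compilation exercise. A small helper along these lines can batch the builds; the source path and output names are illustrative, and you should follow the project's own build instructions for anything official.

```python
# Sketch: cross-compile a Go-based agent (such as Pneuma) for a few target
# platforms. Assumes the agent source is cloned into ./pneuma and the Go
# toolchain is on PATH; paths and output names are illustrative.
import os
import subprocess

TARGETS = [
    ("windows", "amd64", "pneuma-windows.exe"),
    ("linux",   "amd64", "pneuma-linux"),
    ("darwin",  "amd64", "pneuma-darwin"),
]

def build(src_dir: str = "pneuma") -> None:
    for goos, goarch, out in TARGETS:
        env = {**os.environ, "GOOS": goos, "GOARCH": goarch, "CGO_ENABLED": "0"}
        subprocess.run(["go", "build", "-o", out, "."],
                       cwd=src_dir, env=env, check=True)
        print(f"built {out} for {goos}/{goarch}")

if __name__ == "__main__":
    build()
```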
Day #2
On the morning of the second day, the assessment begins.
The red team, using the recon data they gained prior to the exercise, launches an initial access attempt to gain a foothold into the network. After several attempts - leveraging brute-force SSH sessions against an accidentally open port 22 and a series of SQL-injection tries - the red team waves the white flag. They surrender the first phase of the attack.
The blue team acknowledges the white flag and dutifully drops a handful of Pneuma agents on internal workstations and servers, emulating the effect of a successful initial access attempt.
Initial access is often the first white flag raised during an event. Gaining the initial foothold is a time-consuming process, sometimes taking a real threat actor weeks or months. It is usually best to "assume breach" and start the assessment in a post-compromise state.
With the agents dropped, the red team goes to work.
They start by quickly establishing persistence on the workstations, if possible, and then look to extend their reach by laterally moving to a handful of additional machines. Both steps ensure that if they are caught in one place, they can survive on the network by having a beacon elsewhere.
As the red team moves, they attempt to follow the steps of the adversary detailed in the Rules of Engagement. They slowly advance, running surveillance-related techniques: dropping audio recording packages and taking screenshots of internal programs. Anything to get at the "Crown Jewels" of the organization.
At your organization, the purple team assists the red team during the first day of the attack. So for now, you are just monitoring both the red and blue sides and logging everything going on.
The blue team is scrambling to catch them. They're one step ahead in some places and completely blind in others. At times, it resembles a game of "whack-a-mole."
The red team notices the defense is only running Splunk Universal Forwarders - which feed Splunk, the SIEM of choice for your organization - on Windows computers, leaving all Linux servers invisible to the centralized logging server. So they pivot to compromising the Linux boxes and use them to leap-frog their way around undetected.
As the day comes to an end, the blue team has caught on and makes their move.
The defenders locate the IP address of the Operator redirector the Pneuma agents are using and apply a firewall rule to block TCP traffic to it. Without the ability to talk to Operator, the Pneuma agents go into hibernation mode and beacon infrequently to attempt to re-establish the connection. By the end of the day, the agents are unable to move forward and the event comes to an end.
In an extensive test, the red team would create several redirector IP addresses and deploy Pneuma as a UDP, TCP, HTTP and gRPC agent to ensure a firewall rule in one place would not shut down the entire operation.
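Conceptually, that resilience is just a fallback loop: try each redirector and protocol, and only hibernate when every channel fails. The sketch below illustrates the idea with plain sockets; it is not Pneuma's actual implementation, and the addresses, ports and payload are placeholders.

```python
# Conceptual sketch only (not Pneuma's real logic): cycle through several
# redirectors and protocols so a firewall rule on one channel doesn't end
# the operation. Addresses, ports and payload are placeholders.
import socket
import time

REDIRECTORS = [
    ("tcp", "198.51.100.10", 2323),
    ("udp", "198.51.100.11", 4545),
    ("tcp", "198.51.100.12", 8443),   # e.g. an HTTP/TLS channel in real life
]

def try_beacon(proto, host, port, payload=b"ping"):
    """Return True if the redirector answered on this channel."""
    try:
        if proto == "udp":
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.settimeout(3)
                s.sendto(payload, (host, port))
                s.recvfrom(4096)
        else:
            with socket.create_connection((host, port), timeout=3) as s:
                s.sendall(payload)
                s.recv(4096)
        return True
    except OSError:
        return False

def beacon_loop():
    while True:
        if any(try_beacon(*r) for r in REDIRECTORS):
            time.sleep(30)     # normal beacon interval
        else:
            time.sleep(300)    # every channel blocked: hibernate, retry later
```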
The red team gathers the exfiltrated data and starts writing a report.
Day #3
With the manual phase behind you, it's time for the purple team to introduce automation.
You ask the blue team to reset the environment to its original state and then work with them to deploy Pneuma agents on the workstations and servers outlined in the Rules of Engagement. Because you saw the red team get shut down the prior day by a simple firewall rule, you deploy Pneuma over UDP, which is harder for the defense to catch because it blends in with the large volume of DNS and other UDP traffic running over the network.
As the agents beacon in, you give the blue team a thumbs up that the attack is imminent. Then you click deploy.
Within 90 seconds your adversary has successfully executed its attack, succeeding in a number of high-profile thefts of intellectual property and dropping several long-running, undetected surveillance packages. The blue team had barely gotten their Splunk dashboard refreshed by the time the attack ended, so they ask for a redo.
This time, they apply the same TCP firewall rule that thwarted the red team the prior day. You redeploy your attack, the UDP packets slip past the firewall rule untouched and your agents have the same success as the first run.
From Operator, you click into the Reports section and download the complete log of events for both runs. This is a great resource for the blue team to match up each command run with their Splunk analytics to determine what they logged, detected and alerted on.
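One low-effort way to prepare that comparison is to flatten the downloaded log into a per-technique timeline that analysts can line up against their Splunk searches. The export format and field names in this sketch are hypothetical; map them onto whatever the actual download contains.

```python
# Sketch: summarize an exported run log into a per-technique timeline.
# The export format (JSON lines) and field names here are hypothetical;
# adjust them to whatever the report download actually contains.
import json
from collections import defaultdict

def timeline(path):
    by_technique = defaultdict(list)
    with open(path) as fh:
        for line in fh:
            event = json.loads(line)
            by_technique[event["technique_id"]].append(
                (event["timestamp"], event["host"], event["command"])
            )
    return by_technique

if __name__ == "__main__":
    for tid, events in sorted(timeline("apt39_run1.jsonl").items()):
        print(f"\n{tid}")
        for ts, host, cmd in sorted(events):
            print(f"  {ts}  {host}  {cmd}")
```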
Where the red team's manual attack was a full-day activity, the purple team's automated version was complete by 10am. This gives you the rest of the day to evaluate the results and prepare for the final day: the hot-wash.
Day #4
The last day of most security assessments is known as the hot-wash. It's a day of review and reflection.
The all-day meeting serves two purposes:
It gives the red team a chance to describe what they did, and the blue team a chance to counter with what they caught.
It gives the blue team an actionable set of recommendations they can apply to their network.
As the purple team representative, you run the meeting. You give the red team the floor so they can describe their initial goals and their execution. After each technique, you mediate a discussion with the blue team. Did they see the technique? Was it just logged or actually detected? Did they initiate a response action or were they hesitant because they weren't sure if it was a false positive? These questions help guide the conversation so everyone can learn what did and didn't work.
Following the discussion, you describe the automated attack you ran on day three.
You open Operator and run through the commands executed and how they linked together. You then walk the defenders through installing Operator on their own laptops and you share the adversary profile. You finish by walking them through how to schedule it for automated runs until the next security assessment, effectively giving them the ability to security test themselves.
With the event concluded, you power down Operator, shut your laptop and head home.
While fictitious, this day-in-the-life of a purple teamer is an accurate depiction of how a purple team member may interact with the offensive and defensive sides of their organization. Every organization is unique - applying roles and responsibilities differently - but the goal of the purple teamer remains the same: be the bridge between the red and blue teams to enhance the security posture of the organization.
Prelude Operator is a command-and-control platform designed to be accessible by all security disciplines and all levels of experience. I hope this Day in the Life gave you a better understanding of the purple team role and how you can leverage Operator in your own security assessments.