This post is not about Prelude but I am inclined to mention that we built Operator around the problem described here. If you use it, you’ll notice we don’t build adversary profiles. Instead, we design strong TTPs and tag them with threat groups we’ve seen execute them. We want you to build threat profiles that secure your environment, not that emulate APT groups for the sake of emulation.
Adversary emulation is a misnomer. It is a made-up phrase, offering security where there is none. The industry itself is brand-new, capitalizing on the fraudulent nature of the computer security business. It goes by multiple names today: adversary emulation, adversary simulation, breach and attack simulation, next-generation antivirus. The list goes on. Each name takes security professionals on a ride further from actual safety than the last.
Let’s take a (virtual) walk around Black Hat, RSA or any other security conference. Notice any trends? Throw a dart and you will likely puncture a balloon dangling off a nearby adversary simulation company, promising you they can emulate any Advanced Persistent Threat (APT) group in the world.
Think about that claim for a moment.
APT groups are generally defined as well-funded, government-run hacking organizations spearheaded by the most talented computer hackers in the world. Many groups hack for political reasons (tilt an election, anyone?). Others hack as an act of war. Still others combine motivations, hacking to build non-attributable piles of cash by funneling the proceeds of ransomware and other attacks into government programs.
Whether autonomous or manual, small security shops mostly staffed by software developers are promising they can provide nation-state hacking capabilities. Entire governments - China, Russia, Iran, the United States - fund these operations, yet the 6-person company with a flashy website and 30 marketing consultants promises the very same capabilities. Right.
One of two things must be true. Either every government funding advanced hacking is getting ripped off. Or you are being lied to.
While you decide which, let’s detour into a history lesson. George Santayana is credited with the famous saying, “Those who cannot remember the past are condemned to repeat it.” To understand how to navigate the fraud that is adversary emulation, we first need to understand how we got here.
The simple history of computer hacking
Dating back to the age of computers themselves, but more relevantly gaining the limelight with the Morris Worm in 1988, computer security has gone through several iterations.
First, there was penetration testing.
Pentesting, as it is known today, is very much the Microsoft Excel of computer hacking. Pentesters are given a Spreadsheet of Doom (SOD) which contains a row for each security measure they are required to test. Meticulously, they are to follow the columns and rows described, checkmarking each completed “hack”.
Pentesting was the de facto way white-hat (hacking for good) security assessments were executed. It has since been pushed to the background, surviving as a regulatory necessity for some larger organizations.
What are the failures of penetration testing?
In a pentest, you are required to execute very specific tasks, without variance or creativity. In reality, good hackers like to use fuzzy logic and poke at systems for days to uncover a foothold that the spreadsheet creator never would have imagined. It can be summed up as this: Hackers take pride in finding the unfindable. People building regulatory spreadsheets take pride in, well, creating nice rows and columns. In other words, in hacking, creativity is king.
Then there was red-teaming.
Despite the term being tossed around for centuries, it wasn’t until the 2000s that “red teaming” took off in computer security.
A red team was aimed at extending penetration testing and all white-hat hackers adopted the new term in stride. Gone were the spreadsheets of doom. Gone were the boring check mark days of computer “hacking”. This new breed of hackers worked in small groups to creatively attack - a network, an application, a company - in any way possible.
Here’s how it would work:
A red team, usually consisting of 2-4 people, would create a Rules of Engagement document, outlining what was and (more importantly) was not in scope for the assessment.
This engagement document would be shared with a limited number of people at the organization, to ensure when the hack went down, the defensive team would be surprised (therefore, adding realism).
The red team would come in for the specified time period and go through a series of tactical operations: passive reconnaissance, active reconnaissance, initial access, persistence, lateral movement and more.
Upon completion of the exercise, the red team would sit with the blue (defensive) team for a day in a meeting called a “hot wash”. At this meeting, the offensive unit would share how they attacked and what did, and did not, work. In turn, the blue team would take fastidious notes and fix the problems before the red team came back, at some unspecified time in the future.
This all sounds pretty great, right?
While a major improvement over pentesting, several significant issues have cropped up as the practice of red-teaming has unfolded.
To understand the fallacies, let’s mirror the four points above, about what a red team engagement should look like, with what they actually look like:
The process of creating a Rules of Engagement - in reality - can take weeks or months. Why? Often, the red team creates the document quickly but the holdup is on the approving side. In most engagements, there is at least one individual with an emotional stake in the operation. They agonize over every tiny, unimportant detail in the engagement document. It’s almost as if they’re against doing the engagement itself, or at least that’s how it feels from the red team side. The end result of this time sink: cost. Beyond the time delay itself, this is where a lot of money is wasted. This also hinders the ability to do continuous security testing.
Because only a few people at an organization know the red team operation is looming, that means there can be serious implications if a red teamer gets caught. More than once, a red teamer has gone to jail because not enough people have been informed of the plan. Because of this, the red team must possess a “get out of jail free” card, in case they get caught. What does this mean? More time and money spent managing the situation.
Executing tactical goals in a red-team assessment in practice leads to a significant amount of “white-carding”, which means waving a white flag in surrender. The most common white card to be thrown is initial access. Because of the time and complexity of creating an initial foothold, most red teamers will ask for this white card up front. But it doesn’t stop there. Any time the red team is stuck - whether it is moving laterally, creating persistence, dumping credentials… - they throw a white card, which means the defense will let them continue as if they had completed that step. It is easy to see the failure here: by white-carding events, the validity of the assessment is compromised. It has lost its value. If the red team cannot proceed past a step, then that means the defense is doing well and they have thwarted the attack.
While well intentioned, the hot wash is usually a hot mess. The red team is often guarded and cocky (despite getting white cards!). They do not share enough information with the defensive team because they want to keep their hacks for the next assessment. The blue team is then, understandably, frustrated by a lack of useful actions to take. The end result? Usually the blue team sits, listens and fumes as a cocky red team relives its highlight reel. And then the blue team does nothing in the weeks and months that follow because there really isn’t anything to do.
The practice of red-teaming is still the most popular form of security assessment (beyond antivirus and vulnerability scanning). But in the mid-2010s, the concept of adversary emulation was born.
Adversary emulation was designed to address the problem that it was not feasible for a 2-4 person team to test every possible computer hack against a system. Instead, it was deemed more applicable for the team to condense their assessment by emulating a specific threat (APT group) and only execute the attack using their known behaviors.
So in short, adversary emulation is exactly the same as red teaming, except the team limits their actions to those only publicly attributed to a chosen group.
By definition, adversary emulation carries over the same set of problems defined above for red teaming. In addition, it brings to light a few more significant issues.
First, the expectation that even the most experienced red team can emulate a well-funded APT group is laughable. The very thought that the red team’s assessment will represent what would occur if the actual group attacked the system is a misleading road to journey down. It is insane to think your red team can be APT-33 one day and APT-39 the next.
In reality, why further limit what your red team can do when they are already limited by skill set?
Second, the red team plan is only as good as the threat intelligence it uses. In most unclassified spaces, this means open-source intelligence (OSINT). The red team builds its rules of engagement by reading threat intelligence blogs online, studying the hacks attributed to the APT in the past, and the like. This fallible approach does not account for the iceberg of attacks lurking beneath the tip of the visible ones. Nor for the possibility that the known, attributed attacks were misattributed in the first place.
This leads us to an overall failure of manual security testing, which we have not yet broached: consistency. You can bring in a pentester, red-teamer or adversary emulator today, and maybe they find 10 vulnerabilities. If you bring in another 6 months from now and they find 5 vulnerabilities, was the first one better, or did you fix 5 of your prior issues? How can you be certain?
Looking at this history of white-hat computer hacking, you should reach the conclusion that there is one core problem: the cost (time + money) of red team assessments is too high for the value they provide.
Ok, so what do I do if everything is a bust?
The components of a hack
Regardless of the methodology of security assessment you are or want to use, a good security assessment should have the same ingredients.
Threat intelligence
Command and control
Remote Access Trojans (RATs)
A results pipeline
Let's walk through the concepts.
What do you want to do? Before you can conduct an assessment, you need to understand what you are testing and why. This means gathering threat intelligence to build a picture of your attacker.
Threat intelligence is the act of profiling your attacker. You can use OSINT, as discussed before. Or, if you are in the government, you may have access to classified information to build your attack plan.
These are great approaches but if you’re looking for the most efficient and effective way to build your attack plan, look at your business first. Only you know where the crown jewels reside. Only you know intrinsically what an attacker would be after if they targeted you. Maybe it’s ransomware? Maybe it’s access to an accounting database? Maybe the CEO’s emails?
Outlining this inside an attack plan is step number one.
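A minimal sketch of what that outline might look like as structured data, so the rest of the assessment can be driven from it. Every field and value here is hypothetical, invented for illustration rather than taken from any real tool:

```python
# Illustrative attack-plan structure: capture your crown jewels and the
# objectives an attacker would likely pursue against them. All names are
# made up for this sketch.
from dataclasses import dataclass, field

@dataclass
class AttackPlan:
    crown_jewels: list                      # assets an attacker would target
    likely_objectives: list                 # e.g. ransomware, data theft
    in_scope: list = field(default_factory=list)
    out_of_scope: list = field(default_factory=list)

plan = AttackPlan(
    crown_jewels=["accounting-db", "ceo-mailbox"],
    likely_objectives=["ransomware", "exfiltration"],
    in_scope=["corp-network"],
    out_of_scope=["production-systems"],
)
print(plan.likely_objectives)  # → ['ransomware', 'exfiltration']
```

However you record it, the point is that the plan flows from your business, not from a list of APT groups.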
Command and control
Next, you need to establish a base for command and control (C2). In computer security, the C2 is the location where the attacker launches attacks and interacts with the compromised computer network. The C2 itself is usually a combination of technology products which allow Remote Code Execution (RCE) on the computers under adversarial control.
C2 technology establishes listening posts: open ports and protocols that malware can interact with from (usually) anywhere in the world.
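As a rough illustration of the listening-post concept, here is a toy TCP version: a socket that waits for an implant to check in and hands it tasking. Real C2 frameworks layer encryption, staging and indirection on top of this; everything below, including the "whoami" tasking, is a simplified stand-in.

```python
# Toy C2 listening post: accept one implant connection, send it a command.
import socket
import threading

def listening_post(host="127.0.0.1", port=0):
    # port=0 lets the OS pick a free port; a real C2 uses well-known or
    # blended-in ports (443, 53, ...)
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    return srv

def serve_one(srv, command):
    conn, _ = srv.accept()           # wait for an implant to check in
    conn.sendall(command.encode())   # task it; execution happens implant-side
    conn.close()

srv = listening_post()
port = srv.getsockname()[1]
threading.Thread(target=serve_one, args=(srv, "whoami"), daemon=True).start()

# A stand-in "implant" connecting back to the listening post:
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
tasking = client.recv(1024).decode()
client.close()
print(tasking)  # → whoami
```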
On the heels of command and control are agents, or Remote Access Trojans (RATs for short). While the C2 is where an attacker can launch an attack, the RAT is the boots-on-the-ground malware that is running on a compromised computer which can actually execute the instructions.
Think of the C2 as the lieutenant and the RAT as the infantry member.
In practice, RATs, once deployed, establish a persistent connection to the C2 through a “beacon”. This communication protocol can vary wildly, from direct (example: TCP) to indirect (example: communicating through blog postings). And the technology powering the RAT itself is highly variable; it can be written in any computer language.
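The beacon described above boils down to a simple loop: sleep (with jitter), check in, run whatever tasking comes back, repeat. A sketch, with an in-process function standing in for the real transport and invented tasking names; the sleep interval is shortened so the example runs instantly:

```python
# Sketch of a RAT beacon loop. fake_c2_checkin stands in for the real
# channel (TCP, HTTP, blog comments, ...); tasking names are invented.
import random
import time

def fake_c2_checkin(results):
    """Stand-in for the C2 channel: returns the next tasking, or None."""
    queue = ["hostname", "list-users", None]
    return queue[len(results)] if len(results) < len(queue) else None

def beacon(sleep=0.01, jitter=0.5):
    results = []
    while True:
        # Sleep with jitter so the check-in cadence is harder to fingerprint
        time.sleep(sleep * (1 + random.uniform(-jitter, jitter)))
        task = fake_c2_checkin(results)
        if task is None:
            break                      # no tasking; a real RAT would keep looping
        results.append(f"ran:{task}")  # a real RAT would execute the task here
    return results

print(beacon())  # → ['ran:hostname', 'ran:list-users']
```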
Finally, a pipeline needs to be established to record the results of the red team assessment so they can be acted upon. This pipeline needs to be efficient, easy to grok and have few false positives.
More than this, the recommended remediations need to be useful.
The failure of what exists
With such a decorated past, and such an unadulterated sexiness exuding from the industry, it is no surprise we have reached the road we are on. Computer security is more important than ever (year over year, that statement rings truer), yet our systems remain vulnerable.
Think about it this way: if existing security products worked, their customers would never be hacked by APT groups. In reality, you’ll find many examples to the contrary.
Why is this? Creating a solution is difficult.
Fast changing attacks
First and foremost, attacks are constantly changing. For every hack, there are thousands of variations to it. If hackers get thwarted one day using one variation, they pivot into a brand new, potentially never seen before version.
The complexity of this problem is compounded by the fact that hacks can be made in any computer language, leveraging any computer application or even social construct (i.e., person hacking or social engineering).
Because these hacking variations change daily, creating a software solution to emulate and defend against them is a cat-and-mouse game. It is impossible to get ahead of the hacker; you can only focus on closing the gap.
Chaining benign effects
Existing defensive solutions are great at catching singular effects of an attack. Encrypting a file may trigger a ransomware detection. Running the Mimikatz program may trigger antivirus. However, as an attacker, you’re trained to bypass antivirus and other protective measures.
For example, copying a file from one place on your computer to another isn’t malicious. Nor is creating a new directory or compressing that directory. However, these three steps in succession are usually a precursor to an attacker stealing files off a system (exfiltration).
Defensive solutions today have a hard time identifying these chains of attacks without introducing false positives.
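To make the chaining problem concrete, here is a toy detector that only alerts when the benign events above (copy a file, create a directory, compress it) occur in order within a short window. The event names and window size are assumptions for the example, not drawn from any real product:

```python
# Toy chain detector: individually benign events only raise an alert when
# they appear in order within a sliding window of recent events.
EXFIL_CHAIN = ["file_copy", "dir_create", "compress"]

def detect_chain(events, chain=EXFIL_CHAIN, window=10):
    """Return True if `chain` appears in order within `window` events."""
    idx, start = 0, None
    for i, event in enumerate(events):
        if start is not None and i - start > window:
            idx, start = 0, None       # chain went stale; reset and retry
        if event == chain[idx]:
            start = i if idx == 0 else start
            idx += 1
            if idx == len(chain):
                return True            # full precursor chain observed
    return False

# Benign on their own, suspicious in sequence:
print(detect_chain(["login", "file_copy", "dir_create", "compress"]))  # → True
print(detect_chain(["file_copy", "login", "browse"]))                  # → False
```

Even this toy shows the tension: shrink the window and you miss patient attackers; grow it and false positives climb.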
Lack of modularity
Most existing software solutions conducting adversary emulation lack in the most critical area: modularity. Because of the fast-changing attack patterns, the only way to close the time gap between hacking variation and software is to ensure the latter is modular to the point where new attacks can be emulated quickly. The quicker the emulation capability, the smaller the gap.
Some software solutions promise modularity but they compromise realism to get there. Look for this. If an adversary emulation solution promises next-day attack generation from zero-day hacks, check how realistic their simulated attack is. An unrealistic adversary emulation tool is worth less than the time you spend working with it.
How do you determine realism? Flexibility mostly. Look for solutions that allow you to expand the system itself and “plug in” solutions to problems. Remember, modularity is the key. Because predicting the next type of attack is impossible, you want to use a software system that allows extreme flexibility in things like attacks/hacks, communication protocols, levels of obfuscation and types of RATs.
Intersection of security & engineering
The last reason why security solutions in this space are severely lacking is the (lack of) intersection between security and software engineering. Most of the adversary emulation systems on the market today were written by software engineers, with security professionals playing a backseat role.
This is a natural problem. Software engineers build, security engineers destroy.
One of the largest problems in the security industry today is the lack of computer programming knowledge. Most security professionals do not code. They can work with code but they are not builders. This singular fact is the root of most of the corruption in the industry. Because security pros cannot build, they often cannot distinguish between a good and a misleading product. “Security snake-oil salesmen” are then able to sell flashy, seemingly effective tools to unwitting CISOs and blue teamers.
A lack of code-writing security pros is also the driving reason why most open-source software (OSS) programs in security are short-lived. Many popular projects pop up, run into severe scaling or code-design flaws, and fail within a couple of years. Only a select few security open-source projects have excelled over the course of years.
Computer security is one of the fastest growing industries worldwide. The prevalence of critical computer networks, work-from-home organizations and individual digital footprints is driving the industry forward. And the solutions that actually protect us are not keeping up. Worse, in many cases they are actively fraudulent.
To find a solution, you have to circle back to the problem. Emulating adversaries should not be the goal. Protecting computers, networks and applications from realistic threats should be. Adversary emulation is computer security’s greatest fraud.
Solving this problem is not possible with a buzzword but instead with creative problem solving. Look for security solutions that offer you modularity, bundled with the components described here: threat intelligence, command-and-control, configurable agents and actionable results. Maybe you can find this in a single tool. Maybe you need several to cover your organization.
The fact is, there is no one-stop shop for everyone when it comes to your security. Based on your assets and your risk/reward ratio, you can find a security solution that works for you. Just don't buy into the hype.