15 Comments

Like all the others said, this has nothing to do with red teaming; the concept is far from what this article explains. This is basic pentesting. Also, the other article linked in the article (snake oil) is so wrong, I just can't.

Sad to see this from a CTO.

author

Sorry you feel that way, but I'm happy to engage/chat anytime! The goal of this editorial is to challenge the notion that red teaming is a purely manual process. Automation exists in the industry today, and the industry is consistently moving further in that direction.

If we can agree on the redteam.guide definition of red teaming, it's stated as:

"Red teaming is the process of using Tactics, Techniques, and Procedures (TTPs) to emulate real-world threats with the goal of training and measuring the effectiveness of the people, processes, and technology used to defend an environment."

There's room for both manual and automated solutions in this definition - and nearly all engagements include strong elements of automation (even something as simple as an Nmap scan).


What you refer to as red teaming is actually penetration testing. There are clear and distinct differences between vulnerability assessments, penetration testing, red teaming, adversary simulation, and adversary emulation. Each has its place in security testing and assessment. Here's a good reference: https://redteam.guide/docs/definition-lexicon/

author

This is great, thanks for sharing!

The phrase I used could certainly be applied to penetration testing, as well as to red teaming, adversary emulation, etc. I aimed to use generic wording around offensive security testing. I personally like this definition from the link you posted:

“A Red Team is an independent group that, from the perspective of a threat or adversary, explores alternative plans and operations to challenge an organization to improve its effectiveness.”

This is obviously generic as well, but I like the incorporation of the adversary's viewpoint.

author

As I think about it though, for a formal definition of the term, maybe the adversary viewpoint edges too close to adversary emulation, whereas classical red teaming is more like this:

“A Red Team is an independent group that explores alternative plans and operations to challenge an organization to improve its effectiveness.”

What do you think?


I don't think rephrasing what has been phrased by the best in the domain is something you should do here. Also, a red team is about assessing business risk; that's what this sentence means.

author

The prior commenter left a link to a set of definitions, from which I pulled out the red team definition so others wouldn't have to dig through the link to find it.

Assessing business risk is certainly an element of a red team engagement for a business, but the general definition of a "red team" typically doesn't include it, as a red team can be applied to nearly any aspect of life. I won't speak for the person who wrote the definition linked above, but I'd suspect that's why "red team" doesn't include the element of business risk while "red teaming" does.

I'd love to hear your definition of a red team, however, if you don't like the one from redteam.guide. There are a lot of interpretations of the phrase, and it's always fun to hear how a person would define it.


"Red teaming is the process of using existing, already known security bugs and vulnerabilities to hack a system." - You misunderstand Red Teaming, its purpose, and the benefits.

author

That sentence is certainly a simplistic definition of the term, as you point out. I definitely don’t want to downplay the role of red teaming, and I hope that’s apparent in this post. However, I would say red teaming is the process of using known “things” and approaches to security-test a system. Definitions are very fluid in this space, but I would separate exploit development and research (the unknown) from red teaming.

Because there are so many definitions of the phrase, it’d be fun to hear your one-sentence definition - this is a great one to crowd-source!


Lol! What a joke! Thanks for wasting precious reading time.

author

Differing opinions are great! This is certainly a fun topic... how autonomous security tools will intersect with manual teams as the decision-making/autonomy in the available tools gets stronger. Happy to engage anytime.


I'm sorry, but aren't you guys covering the physical security aspects of red teaming? We can't bring humanoids for those. Not yet :p

Not all attacks come as packets.

author

Great point! We are talking specifically about the purely cyber side of red teaming from the post-compromise state. Physical pen tests (as far as I can tell!) are almost by definition human. I'll update the post with a note on that actually.


Even on the cyber side, the environment and lateral movement variables may vary drastically from client to client - custom privileged user groups not called Domain Admins, for example. Plus, dev teams are on an ever-rolling tech upgrade journey to provide users more features and convenience. I don't mean to be rude; I'm just speaking from things I've seen.

author

Never apologize for your opinions; they're great to express. Let me take lateral movement as an example. Like you said, usernames, passwords, IP addresses, etc. differ drastically between environments. At face value, it seems like automation would fail here.

However, if you run a handful of discovery TTPs, you may discover local usernames, passwords, and IP addresses. Then you can pivot this learned knowledge into additional procedures, so the output of one becomes the input of the next.

For example, I have a lateral movement command like this, which SCPs my RAT from one host to another:

"scp rat.sh username@1.2.3.4~"

Unless I know the username and IP address ahead of time, this can't work. So instead I write the command using variables like this:

"scp rat.sh #{username}@#{ip}~"

Now this means I have preconditions (username/ip). So maybe I run "arp -a" and "whoami", then use regex to parse the arbitrary text output, which unlocks the preconditions of my lateral movement command and allows me to automatically engage it and copy the agent to the remote computer.
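Here's a minimal sketch of that chain in Python (hypothetical, not our actual implementation; it assumes a Unix-like host where "arp" and "whoami" are on the PATH, takes the first IP found in the ARP table, and prints the rendered command instead of executing it):

import re
import subprocess

facts = {}

# Discovery TTPs: run them, then regex-parse the arbitrary text output into facts.
arp_out = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
ip_match = re.search(r"\((\d{1,3}(?:\.\d{1,3}){3})\)", arp_out)  # e.g. "? (1.2.3.4) at ..."
if ip_match:
    facts["ip"] = ip_match.group(1)
facts["username"] = subprocess.run(["whoami"], capture_output=True, text=True).stdout.strip()

# The lateral movement TTP, with its preconditions written as #{variable} placeholders.
template = "scp rat.sh #{username}@#{ip}:~"
required = re.findall(r"#\{(\w+)\}", template)

# Engage the TTP only once every precondition has been learned.
if all(key in facts for key in required):
    command = re.sub(r"#\{(\w+)\}", lambda m: facts[m.group(1)], template)
    print("engaging:", command)  # a real agent would execute this rather than print it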

Neither the username nor the IP was known ahead of time, meaning I non-deterministically applied automation to compromise a computer I wasn't even aware of before the test. This is a very small example, obviously, but you can extrapolate to larger examples with bigger volumes of TTPs and better parsing automation.
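To extrapolate, the same idea scales to a whole library of templated TTPs: loop over the library, run anything whose preconditions are satisfied, parse new facts out of its output, and stop when nothing new unlocks. A rough sketch of that planner loop (hypothetical structure, not any particular tool's API):

import re

def requires(template):
    # A TTP's preconditions are whatever #{variable} placeholders it contains.
    return set(re.findall(r"#\{(\w+)\}", template))

def render(template, facts):
    return re.sub(r"#\{(\w+)\}", lambda m: facts[m.group(1)], template)

def plan(library, facts, execute):
    # library maps a TTP name to (command_template, parser), where parser(output)
    # returns a dict of newly learned facts.
    done = set()
    progress = True
    while progress:
        progress = False
        for name, (template, parser) in library.items():
            if name in done or not requires(template) <= set(facts):
                continue
            output = execute(render(template, facts))  # run the unlocked TTP
            facts.update(parser(output))               # its output feeds later TTPs
            done.add(name)
            progress = True
    return done

Discovery TTPs with no placeholders run first, their parsers populate facts like username and ip, and those facts automatically unlock the lateral movement step - nothing environment-specific is hardcoded.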

Hope this helps you wrap your head around it. I'd recommend reading this post: https://feed.prelude.org/p/how-decisions-are-made
