Dichotomy of buttons

There are two types of security professionals: those who build the buttons and those who click them.

When I first entered the security industry, it was intimidating.

There was a seemingly unending stream of information you needed to know to succeed. You had to track every computer language, every syntax update, every CVE, every technology product… Because offensive security meant creatively - and systematically - finding holes hidden in technology or people’s behavior using it, you had to learn the inner workings of everything to be the best.

Over the years, I scrambled to stay up-to-date on everything.

I worked into the wee hours of most mornings. I pored over books, read the release notes of popular computer languages when new versions came out, and tinkered with code to ensure I understood where programmers and sysadmins would take shortcuts.

Is it more important to know how memory is allocated in C++ or how to properly validate user input in Python? Is it better to understand how dependencies are chained in a Ruby on Rails project or to deeply understand how sharding works in an Elasticsearch cluster? How impactful is the addition of JSON storage to PostgreSQL versus how HDFS deals with the small-file problem?

There is no manual or education that can answer questions like these, so I defaulted to ensuring I understood everything possible.

Have you read my post on the cybersecurity education problem?

How did I do this? I worked. I read. I strategically built my career around exposing myself to as many variations as possible. I worked at startups. I worked at large enterprises. I worked on dozens of projects across 10 companies over 15 years, exposing myself deeply to every technology from offensive security to crypto mining to big data to telecommunications to… you name it.

And what did I learn?

There are two types of security professionals out there: button clickers and button builders.

Button clickers learn how to use the tools at their disposal and available in the marketplace. They understand deeply how the products work and how they can be applied to accomplish a goal. Button builders are those who understand the underlying technology and build and support the tools for the clickers.

Most of us in the industry are button clickers. But most of us also think everyone else is a button builder.

I learned that the amount of focused energy I put into the industry over that period was actually not the average. Most people in the security industry put in significant time and effort to build their careers - but they focus on the button-clicking side.

This is where the money is, after all. When you get a job in security, you're probably not getting paid to build a product but to use the tools at your disposal to actually secure an organization.

Button clickers and button builders are both important to the industry. We need people in both camps. If everyone were a builder, no one would do the security work. If everyone were a clicker, we'd be cornered into using tools built by marketing companies and software engineering shops without security experience.

What is the correct ratio of button-clickers to button-builders? I have no idea.

That said, button clickers have a responsibility.

Button clickers must be capable of understanding:

  1. How their tools work

  2. What trade-offs they make for each tool they use

Above all, clickers should demand transparency from every tool they rely on. Whether they choose to or not, they should be able to peel back the UI and see if they’re being lied to.

There are too many tools in the security industry that are black boxes, promising security against this or that. If you just install and run this or that solution, you'll be protected from nation-state actors. The "secret sauce" within the black box will keep your endpoints secure and you never have to think about it. Only those "other guys" without a black box will be susceptible to attackers.

Overpromising isn't a unique marketing strategy in the cybersecurity industry. But where cybersecurity differs from other industries is that there is no standardized way to validate the claims.

Because cybersecurity has gotten so complicated (another problem entirely), you can install a security product and not know whether it's working until you actually get attacked.

Now, that’s a problem.

So what options do you have?

  1. You can become a button builder. Based on my own experience, this is not for everyone.

  2. You can blindly trust security products. No one should ever do this.

  3. You can rely on an unbiased third-party to validate a security product for you. MITRE Engenuity started an initiative to this effect but it is still in its infancy.

  4. As a button clicker you can learn how to discern good from bad. Yep, this is what you want to do.

You don't need to be a button builder to discern good from bad. Mostly, this is a transparency process. There are two steps to being a good button clicker: demanding transparency and asking questions.

First, you should rely on tools which give you a baseline of transparency you are comfortable with. Security people love open source because it is the ultimate transparency, down to every line of code.

Do you need this level of transparency from every product you use? Probably not. But the tool should not be hiding anything. If there is something in the compiled version of the product that you don't understand, the builder should be able to answer every question down to the deepest technical rationale. If they cannot, run.

Second, you need to understand how to ask the right questions. As a button clicker, you haven’t put in the time that the builders have to understand the underlying technologies. But you don’t have to - you just need to understand the underlying technology well enough to probe the builders on where the trade-offs may be.

For example, if you’re using a security tool, the very first thing you should know is the computer language it is written in. This will be telling. Depending on the tool, there could be real consequences if it is written in C++ or Ruby or Python or Java or Rust or PHP or… you get the point.

After you know the language, do you understand how the tool handles and stores data? Is it in-memory storage or on disk? What encryption methods are being used? If the tool is doing detection (for example) is this signature-based detection or behavior-driven? If signature-driven, how is it - and how often is it - getting refreshed with new signatures? Are these YARA rules from a specific source? If behavior-driven, how is the tool generating behaviors to match against?
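To make the signature-versus-behavior trade-off concrete, here is a toy sketch. Everything in it - the indicator list, the event names - is hypothetical, invented purely for illustration, and reflects no real product's internals:

```python
# Toy illustration of the signature-vs-behavior trade-off.
# All names below are hypothetical, not from any real product.

SIGNATURES = {"mimikatz.exe", "evil_dropper.bin"}  # static known-bad indicators

def signature_detect(filename: str) -> bool:
    # Signature-based: exact match against a known-bad list.
    # Cheap and precise, but blind to anything not yet catalogued,
    # and only as good as how often SIGNATURES is refreshed.
    return filename in SIGNATURES

def behavior_detect(events: list[str]) -> bool:
    # Behavior-based: flag a suspicious *sequence* of actions,
    # regardless of what the binary happens to be called.
    suspicious = ["open_lsass", "read_memory", "exfil"]
    it = iter(events)
    # True only if the suspicious steps appear in order in the event stream.
    return all(step in it for step in suspicious)

print(signature_detect("mimikatz.exe"))      # caught: name is on the list
print(signature_detect("renamed_tool.exe"))  # missed: renaming evades it
print(behavior_detect(["spawn", "open_lsass", "read_memory", "exfil"]))
```

The questions in the paragraph above map directly onto this sketch: asking how often signatures are refreshed is asking how stale `SIGNATURES` gets, and asking how behaviors are generated is asking where `suspicious` comes from.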

It's no secret that I manage Operator, a free and largely open-source security tool. While we're very open with our code and will often point you to specific lines, here are a few questions I'd pose to myself if I were a clicker:

  1. What language is Operator written in?

  2. Can I see a list of the dependencies Operator relies on?

  3. Where are you getting the TTPs which are loaded into Operator?

  4. Can I see the code which makes up the TTPs?

  5. Where on disk are the TTPs stored?

  6. The agent executes arbitrary instructions; can I see the code for the agent to see what it does? What language is it written in? Can I talk to the developer in charge of maintaining this code?

  7. Can you walk me through how Operator executes a sequence of TTPs within a kill chain? What should I expect when I enable Protect Mode?

  8. What telemetry is collected from the computer running Operator? Can you walk me through this exactly and prove it to me?

  9. What external/outbound connections (if any) are made by Operator and to where?

As the security industry continues to grow, the number of both button clickers and button builders will increase.

We're a small industry, built on the trust and transparency earned from the open-source community. Let's work to maintain this trust as the industry expands, ensuring that we - as button clickers - demand transparency and become capable of discerning good from bad.

For the security of us all.