Why cybersecurity’s future may be in the hands of the devil’s advocate (Q&A)

Micah Zenko has a piece of advice for frazzled security executives: Start thinking like the enemy.

Easier said than done. In his newly published book, Red Team, Zenko notes that organizations’ institutional bias prevents them from recognizing their defensive blind spots. That’s great news for hackers; bad news for companies trying to protect information.

That’s where red-teaming promises to help. The practice, which first came into vogue during the Cold War, uses a variety of simulations and analyses to gauge the intentions and capabilities of an institution or nation-state. The basic idea is to challenge the conventional wisdom so that an institution can improve its routines with the help of a fresh perspective.

Although red-teaming first achieved popularity in the military, it has since become a tool for organizations facing complicated decisions or threats, such as those posed by cyberattacks. Red teams often use penetration tests, also known as pen tests, to find vulnerabilities in computer networks or Internet applications before hackers can breach the systems and inflict damage.
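
To make the idea concrete: a pen test typically starts with simple reconnaissance, such as checking which network ports on a target accept connections. Below is a minimal Python sketch of that first step, assuming a hypothetical in-scope host; probes like this should only ever be run against systems you are authorized to test.

```python
# Minimal sketch of a pen test's first reconnaissance step: checking which
# TCP ports on a host accept connections. The target and port list are
# hypothetical; run this only against systems you are authorized to test.
import socket

TARGET = "198.51.100.10"           # hypothetical in-scope host
COMMON_PORTS = [22, 80, 443, 3389, 8080]

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in COMMON_PORTS:
    state = "open" if is_open(TARGET, port) else "closed/filtered"
    print(f"{TARGET}:{port} {state}")
```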


The Parallax recently caught up with Zenko, a senior fellow at the Council on Foreign Relations, to learn how red teams help organizations uncover their blind spots. Here is an edited transcript of our conversation.

Q: Your book traces the origins of red-teaming to the medieval concept of the devil’s advocate. Explain that.

A: The devil’s advocate was a trained, empowered position charged with finding damaging and contrarian information about someone who was up to become a saint. They were supposed to question the candidate’s miracles and examine whether they truly professed the faith.

Some of these trials lasted decades—sometimes more than a century—as various devil’s advocates kept coming up with new information. It was becoming extremely difficult to become a saint in the church, and in 1983, Pope John Paul II basically did away with the position.

There are still checks, but in the last 30 years, more people have become saints than in the previous 1,800 years.

When did the more modern version, as exemplified by red-teaming, come into existence?

The term “red team” comes specifically from the red of the Soviet Red Army. It started in the 1960s, when people began applying econometric war game methodologies, where you have incomplete information about adversaries and want to assess potential responses.

Some of this comes out of game theory, some comes out of growing security studies literature at places like Harvard and Rand, and in the Pentagon under Robert McNamara.

Before red-teaming can reveal shortcomings in strategies, what first needs to happen?

The boss must buy in. If the senior leader doesn’t care about or doesn’t want the red team, it won’t receive the resources, the funding, the time, or the reach it needs to do its job. The senior leader must make it known that the red team matters.

The other thing is how the red team is situated relative to the targeted institution with which it’s going to work. The red team has to be independent enough so that it doesn’t become institutionally captured. That’s a very delicate balance to strike.

“[M]achines are just not as devious and resourceful and creative—and potentially malicious—as humans.” — Micah Zenko

How do red teams decide which cyberthreats to defend against?

The most important discussion is the initial scoping discussion with the CSO or the senior vice president, where the red team comes to them and says, “Tell me what’s most important to you.” And it’s quite fascinating that oftentimes, the leader doesn’t even know what’s most important to them.

In the private sector, executives start by saying quarterly profits are the most important thing, so the red team will ask, “If hackers got access to your customers’ information and documents and put it on the Internet, would preventing or responding to that be less important than profits?” At that point, they’ll say, “No, no, no, that’s more important.”

But when the red team asks, “So is that more important than market share?” the answer will be, “Well, actually, market share is really important to us.” And then they’ll get asked, “OK, is that more important than you, as CEO, being humiliated because I find personal information about you that’s illegal or embarrassing, post it online, and it costs you your job?” Executives often don’t have a prioritized sense of what’s critical and what’s peripheral.

So they often don’t know what needs to be protected?

People describe the process as something like a therapy discussion. The red teams need to have a series of conversations to identify what needs protecting. At that point, they can discuss the method they’ll use to challenge the assumptions of the strategy, and identify any blind spots or assume the role of the adversary that the organization’s worried about. But unless you get that scoping conversation correct, nothing else is going to matter.


Is red-teaming having an impact on the cybersecurity world?

It’s growing, but red teams are only as red, or as effective, as the targeted institution allows them to be. Often, the pen test scope is so narrow as to be pointless. Or the people who shouldn’t know about the test get tipped off in advance. So you see new intrusion monitors introduced into the network, or employees receiving intensive phishing scare notices just before the pen test. That heightens security artificially and makes it less likely that the team will be able to demonstrate vulnerabilities and breach the systems.

Can’t an institution do this without calling in a red team?

You can’t grade your own homework. We shouldn’t assume that the IT staff or the security team can know its own vulnerabilities. It cannot. The point is to reveal shortcomings and vulnerabilities and security culture weaknesses that on your own you cannot determine, and to provide a series of specific, prioritized corrective measures that the institution can take.

Could red-teaming have helped avoid something like the U.S. Office of Personnel Management data breach?

Yes, and red teams do help. The challenge with red-teaming is that it’s not considered a core function or core business practice, so how do you demonstrate that it had a clear impact and that it averted an attack? Nobody knows that. But it generates “aha” insights and uncovers vulnerabilities that, on your own, you could not find. We’re poor judges of our own performance, and the same is true of institutions.

There are some clear examples, as was the case with the Target breach, where it likely would have helped. There were issues with passwords that weren’t hashed and weren’t encrypted properly. A red team also would have tested the HVAC company that the hackers came through.
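
For readers unfamiliar with the distinction, “hashed properly” means storing passwords as salted, deliberately slow digests rather than as plaintext or a bare fast hash. Here is a minimal sketch using Python’s standard library; the iteration count and other parameters are illustrative, not a vetted policy.

```python
# A minimal sketch of salted, slow password hashing with the standard
# library's PBKDF2. Parameters here are illustrative, not a vetted policy.
import hashlib
import hmac
import os

ITERATIONS = 600_000  # deliberately slow to frustrate brute-force attacks

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); store both, never the plaintext."""
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```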

You write about a Marine colonel who went to Afghanistan and presented a review to General McChrystal, who then accused him of trying to tell him how to run his war. Do red teams risk becoming casualties, where the response is to shoot the messenger?

Sometimes they do. There is a certain skill and finesse with which you should present red-team findings. For instance, hackers should not present findings to CSOs themselves, because hackers tend to be antisocial and arrogant about their capabilities, and can be really dismissive of the IT staff they just hacked.

You should probably have screenshots, or maybe record with a GoPro camera, so you can demonstrate what you did. There are guys who beam their stream live to the CEO as they break into a building. But there’s no absolute guarantee that the red team will be heard.

“Some people can’t take the role of the enemy.” — Micah Zenko on the difficulty of building red teams

Why can’t this be done through automation, by using a software program that can scan for vulnerabilities in a network?

Many people actually try to do this. But when machines look over software code, they are programmed to find certain consistently repeating vulnerabilities; their scanning is only as good as the algorithms built into them. I think there is clearly a role for automated pen testing, but machines are just not as devious and resourceful and creative—and potentially malicious—as humans.
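
Zenko’s point is easy to illustrate: a signature-based scanner can flag only the patterns someone has already taught it to look for. The toy Python sketch below uses entirely hypothetical signatures; a novel flaw with no matching signature passes silently.

```python
# Toy signature-based code scanner: it can flag only patterns that are
# already in its signature list. Signatures and sample code are hypothetical.
import re

SIGNATURES = {
    "hardcoded credential": re.compile(r"password\s*=\s*['\"].+['\"]", re.IGNORECASE),
    "SQL built by string concat": re.compile(r"SELECT .*\" \+ "),
    "insecure deserialization": re.compile(r"pickle\.loads\("),
}

def scan(source: str) -> list[str]:
    """Return the names of known signatures found in the source text."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(source)]

sample = 'db_password = "hunter2"\nquery = "SELECT * FROM users WHERE id=" + user_id'
print(scan(sample))  # flags both known patterns...
# ...but a novel flaw with no matching signature would pass unreported.
```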

You write about how corporate cybersecurity personnel responsible for defending an institution from hackers have sometimes been unable to participate fully in penetration tests when they get assigned to take the opposing side. Why is that?

This is also true in a number of fields. People try to break in all the time, assuming the role and capabilities of the most likely adversary. But there are a lot of people who cannot conceive of harming their colleagues. They just can’t do it. And it’s similar in the pen-testing world.

There are some people who are really good coders, and really good network administrators, who are really good at building and defending. But they have no sense of how to think and act deviously and maliciously. Some people can’t take the role of the enemy.

When companies seek to test their cyberdefenses, they often have trouble thinking about potential attacks from the point of view of a creative adversary. Is that due to institutional bias?

I spent a bunch of time with red teamers who have to deskill themselves when they break into computer networks. If they use their most proficient hacks, the IT staff or the CSO and the security officers would say, “Oh, you’re too good. The kind of attacks we face every day are really low-level.”

So one of the things they have learned is that they really need to use dumb attacks. They wind up using commercially available malware, or exploiting vulnerabilities, whether in Word or Adobe, that everyone knows about. And they have to spear-phish and try to get to the CEO personally. (Spear phishing is an attempt to steal information from a specific group or organization by using targeted emails that lure recipients to malware-laden websites.)
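
As a concrete illustration of what such an exercise looks like in practice, the Python sketch below assembles a simulated spear-phishing lure for an authorized awareness test. The names, addresses, and tracking URL are all hypothetical, and the “payload” link points to the organization’s own training page rather than to anything malicious.

```python
# Minimal sketch of a *simulated* spear-phishing lure for an authorized
# awareness exercise. All names, domains, and URLs are hypothetical; the
# link leads to the organization's own training page, not real malware.
from email.message import EmailMessage

def build_test_lure(target_name: str, target_addr: str, campaign_id: str) -> EmailMessage:
    msg = EmailMessage()
    msg["From"] = "it-helpdesk@example-corp.test"  # internal test sender
    msg["To"] = target_addr
    msg["Subject"] = "Action required: password expires today"
    # The "payload" is a tracking link to an internal training page that
    # records the click and shows a "you've been phished (safely)" lesson.
    link = f"https://security-training.example-corp.test/clicked?c={campaign_id}"
    msg.set_content(
        f"Hi {target_name},\n\n"
        f"Your password expires in 4 hours. Verify your account here:\n{link}\n\n"
        "IT Helpdesk"
    )
    return msg

print(build_test_lure("Alex", "alex@example-corp.test", "q3-awareness-test"))
```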

After one penetration, you found that the IT staff members at the targeted institution wound up demoralized. Why was that?

If it’s done correctly, red-teaming can be a very painful thing that calls into question what you do every day. Nobody shows up for work and decides that morning what to do. We have a series of expected behaviors that leads to unit cohesion. People become closer and just want to do their jobs well.

The red team comes in and challenges all of that. It doesn’t just challenge your job, but it also challenges your relationship with each other. It challenges the unit and the mission. So people can become really demoralized.
