CrowdStrike CEO on political infosec lessons learned (Q&A)

LISBON, Portugal—The last few years have been a time of chastening for people working at the intersection of information security and politics: We’ve realized too late what we didn’t know about our threat models, and now we’re trying to figure out what we should have done then and what we should be doing now.

Consider, for instance, some of the flagship infosec discussions at the Web Summit conference here each November.

In 2016, Facebook’s then chief information security officer, Alex Stamos, spoke about how the social network worked to ensure nobody else could get into your account—when the latent risk was somebody getting into your head via disinformation campaigns. At this year’s summit, Facebook executives had vanished from the agenda, while the conversations brimmed with concern over the risk of that sort of social engineering at scale.

One of those Web Summit speakers was George Kurtz, chief executive and co-founder of CrowdStrike. The Sunnyvale, Calif., security firm landed in front-page headlines for documenting Russian hacking of the Democratic National Committee in 2016 and earlier helped uncover North Korea’s role in the 2014 Sony Pictures hacks. Attribution of cyberattacks remains tricky, and other security experts haven’t always agreed with CrowdStrike’s calls.

I spoke with Kurtz on November 7 about these and other issues, from security consciousness in political campaigns to hacking campaigns by nation-state adversaries. An edited transcript of our conversation follows.

Q: We’ve spent much of the last two years realizing how little we knew about things like influence operations on social networks, ways to exploit a platform that don’t fall into the traditional definition of security. How much do you think we’ve woken up to that risk?

I certainly think that over the last two years, there’s been an awakening to the risk of influence operations, if you will, and in particular, when we think about elections, the impact it can have.

There’s always a lot of commentary about, can the ballot system be hacked? I think it’s far easier to influence someone to change their vote in their head than to actually change their ballot.

You’ve talked about how defining value by number of users discourages a platform like Twitter from taking action against bots. And yet that’s what the market values—daily active users. How do we get out of that box?

When you look at the financial incentives of daily active users and the pressure to clean up the platform, I think there’s gotta be some meeting in the middle.




If you have a platform that has a lot of bots, and you have a lot of negative interaction, you’re going to have a lot more churn of users, as opposed to having more value for users and having users flock to the platform.

There may be a temporary decrease. But I think that if the platform is actually cleaned up, and you get rid of a lot of the automated bots, people will look at that as a net positive and see more value in the platform itself.

EU competition commissioner Margrethe Vestager said at the Summit that the EU is moving to have rules requiring platforms to take down terrorist propaganda within the hour. Does that point to a future where artificial intelligence is the only tool we can resort to?

I think it’s very difficult to have humans in the loop of deciding what content is good or bad. I think you can search out for patterns of speech that might be indicative of hate or terrorism. I also think you can search out for bots: There are certain characteristics of bots that are out there—what they do and how they interact.

I think you’ve gotta focus on the low-hanging fruit first. It’s a pretty complex topic of platforms trying to assert themselves on what’s good and what’s bad, and what should be on there and what shouldn’t, particularly when we think about free speech.

I would drive a lot of automation into cleaning up the automated bots, and certainly using AI to determine, you know, hate speech and potentially other offensive conversations that are out there.

Once you start getting into that realm of deciding what’s good or bad outside of what’s automated, you get into a slippery slope.
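[Editor’s illustration, not CrowdStrike’s or any platform’s actual method: the behavioral bot signals Kurtz describes (posting rate, reply speed, repeated content, round-the-clock activity) can be sketched as a simple scoring heuristic. All names and thresholds below are hypothetical.]

```python
# Hypothetical sketch of behavior-based bot scoring; thresholds are invented
# for illustration and are not any real platform's detection rules.
from dataclasses import dataclass


@dataclass
class Account:
    posts_per_hour: float   # sustained posting rate
    reply_latency_s: float  # median seconds from mention to reply
    duplicate_ratio: float  # share of posts that are near-duplicates
    active_hours: int       # distinct hours of the day with activity (0-24)


def bot_score(a: Account) -> float:
    """Combine a few behavioral signals into a 0..1 likelihood-style score."""
    score = 0.0
    if a.posts_per_hour > 20:    # humans rarely sustain this rate
        score += 0.35
    if a.reply_latency_s < 2:    # near-instant replies suggest automation
        score += 0.25
    if a.duplicate_ratio > 0.5:  # mostly repeated content
        score += 0.25
    if a.active_hours >= 24:     # no sleep pattern at all
        score += 0.15
    return score


likely_bot = Account(posts_per_hour=50, reply_latency_s=0.5,
                     duplicate_ratio=0.8, active_hours=24)
print(round(bot_score(likely_bot), 2))  # 1.0
```

In practice, platforms combine many more signals and use trained models rather than fixed thresholds; the point is only that automation leaves measurable behavioral traces.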

But the algorithm itself has to have some sort of founding principles?

Exactly. And whose founding principles?

When you have those false positives, you need some sort of appeals process. But in Twitter and Facebook’s case, it seems like you need somebody like me to cover one of these cases to get it fixed.

It’s a challenge, for sure. When you look at the transparency that a lot of these platforms are trying to drive into the user base, I think they have to get a little better at the appeals process. There’s certainly going to be folks, legitimately, who get caught up in these algorithms.

There needs to be a vetting mechanism with a certain service-level agreement so that users understand the process they can go through to get reinstated, they understand the time frames around it, and they understand the process that the platform will actually go through to determine whether they should be on there or not. And that’s still in its infancy.

The midterm election was yesterday. Obviously, it’s too soon to say whether it proceeded without large-scale attempts at interference. When do you think we will be able to make that determination? When will those results come in?

Unclear.

I heard multiple people in D.C. over the last three to four weeks say they wouldn’t worry about the Russians in this election; they’re going to save their best tricks for 2020. Is that your assessment?

I think that any election needs to take influence operations seriously—whether it’s a local election, whether it’s a midterm or a presidential election. At least over the last couple of years, we have a heightened awareness of what could happen.

And the good thing is that a lot of these organizations who don’t have a full-time security team—or full-time folks that think about this—are consulting with organizations like CrowdStrike to help them protect against these attacks.

I dread to think about how many campaign managers two years ago would have said, “Yes, I have two-step authentication enabled on both my work account and my personal Gmail.” How far have we come since then?

I think there’s still a long way to go. I’ve been coming to these events for 25 years, and people still pick poor passwords. You’re not going to solve that overnight. The reality is, when political campaigns ramp up, they have a lot of volunteers who aren’t necessarily IT experts or security experts.

They’ll go down to a Best Buy and buy a computer, plug it in, and now you’re part of the campaign. That can be very problematic.

It used to be, “Hey, I don’t have to worry about that; it’s not me.” Or, “The government’s going to protect us against that.” There’s since been a realization that you have to think about this protection yourself, and you have to take some action.

Let’s move to Chinese hacking. I’ve heard more than one person say China doesn’t seem to be honoring the whole deal we worked out under the Obama administration anymore. What should we be doing?

I think that’s accurate. Things abated a bit, and now they’re back in full force, given the different regimes. Obviously, there’s a different interaction between the two countries at this point in time.

We’ve put out some blog posts on the recent uptick in Chinese activity, particularly around intellectual-property theft in various sectors that are out there—manufacturing, energy, retail. I think we’ve seen oil and gas—it’s across the board.

[CrowdStrike co-founder and Chief Technology Officer Dmitri Alperovitch went into more detail at an October 2 event hosted by The Washington Post. “Intrusions into private industry had dropped by 90 percent,” he said then of the Obama deal. “Unfortunately, now the Chinese are back.”]

Who else should we be worried about? Iran, Saudi Arabia? Who else is on your radar?

If you look at North Korea, they’ve been very active in ransomware. Their capabilities have dramatically increased over the last couple of years, so we’re seeing a lot of activity from North Korea. We’re also seeing a lot of activity out of Iran.

Trump and Kim Jong Un falling in love, as the president described it at a recent rally, has not stopped that?

Governments are going to do what governments do, no matter what happens in front of a camera.
