Episode 6  |  36:55 min  |  07.22.2020

Can ML Solve the Cybersecurity Skills Gap?

Takeaway 1 | 01:15 MIN
Coffee Risks and Nespresso Pods
Takeaway 2 | 02:12 MIN
Roytman's Hot Take on The Cybersecurity Skills Shortage
Takeaway 3 | 01:02 MIN
Skills Gap Fast Facts
Takeaway 4 | 04:36 MIN
Why Is There a Skills Gap?
Takeaway 5 | 01:22 MIN
Defining What Security Is
Takeaway 6 | 02:52 MIN
The Inertia of Gaining Visibility
Takeaway 7 | 02:08 MIN
Crazy to Expect Humans to Analyze Data Sets at Internet Scale
Takeaway 8 | 03:09 MIN
The State of ML Automation Today
Takeaway 9 | 02:21 MIN
Wicked Problems
Takeaway 10 | 01:55 MIN
What Problems Are Solvable?
Takeaway 11 | 01:39 MIN
Toyota QA Is A Source Of Hope
Takeaway 12 | 02:41 MIN
What Roles Can ML Fill?
Takeaway 13 | 01:18 MIN
What Is ML Bad At?
Takeaway 14 | 02:54 MIN
Humans Are Good At Complexity
Takeaway 15 | 01:45 MIN
So Can ML Solve The Cybersecurity Skills Gap? Yes, but...
Takeaway 16 | 02:03 MIN
Future State of ML in Cybersecurity
Year after year we read about millions of unfilled cybersecurity jobs as incidents increase and Twitter experiences one of the most public cyber meltdowns in history. We talk with data scientist extraordinaire Michael Roytman about whether machine learning can fill the cybersecurity skills gap.

Dan Mellinger: Today on Security Science: can machine learning solve the cybersecurity skills gap? Hello, and thank you for joining Security Science. As always, I'm Dan Mellinger, and I have with me our data scientist extraordinaire, Michael Roytman. How's it going, Michael? Last time we chatted, I believe you took a risk on volume of coffee. How's your coffee supply going today?

Michael Roytman: A lot better today. I'm ready and fueled up on cup eight or nine and that's not risky at all for me.

Dan Mellinger: Oh, nice. I think I've done three Nespresso pods? Like the big ones, the Vertuos. I'm really, really fiending for my Starbucks lately, though.

Michael Roytman: You're automating a lot of your coffee delivery. I feel like you should scale it back and do it yourself some more.

Dan Mellinger: We've been honing down the ordering process in terms of the volume that I drink. So we get two different types of coffee, and this is not a plug for Nespresso, by the way. We get two types of pods, one for my wife, one for me, and she is constantly astounded, because she does the orders, by how much more I drink on a monthly basis than she does.

Michael Roytman: You know you can make your own Nespresso pods in-house?

Dan Mellinger: I did not know that.

Michael Roytman: It's a thing.

Dan Mellinger: Oh man, I'll have to look into that. You just opened a new wormhole for me to geek out on. Anyway, jumping off the coffee topic, we're looking at the cybersecurity skills gap. If anyone's followed cybersecurity news for the last five years or so, you've probably heard this: roughly 3.5 million unfilled cybersecurity jobs globally by 2021, according to a recent Cybersecurity Ventures report, which we will link on the podcast page so you can go take a look. But we also have some other stats from (ISC)², which does a yearly workforce study they've been running for 10 years. We will also link that, and it shows the number was already at 4.07 million, call it four million, as of last November. So, Michael, can you give us just an overview? What are your thoughts on the cybersecurity skills shortage?

Michael Roytman: I think it is very true and real, but also maybe an artifact of the way that we've been thinking about security for the past 20 or 30 years.

Dan Mellinger: How do you mean?

Michael Roytman: Well, the definition of security skills is also evolving. If you think back in the day, it used to be administering a system. Maybe admins started transitioning to being security officers or information officers. If you think about it today, most security jobs are managing some tooling, or working within some tooling, to perform tasks that either remediate vulnerabilities, run investigations, or mitigate risks. And of course, then there are all those consultative security jobs like pentesting and researching. I think a lot of the work that's done is expanding because organizations are realizing security's a problem, attacks are going up, and there're more vulnerabilities than ever. But a lot of the reason teams are expanding is that they're hiring more people to do the same tasks we've done over and over again. It's a new industry. We haven't had time to automate the repetitive tasks so that folks can actually do the higher-level thinking above those repetitive tasks.

Dan Mellinger: Example?

Michael Roytman: Accounting, I think very few people now do pivot tables by hand. When the practice of accounting started, that's all it was.

Dan Mellinger: Pivot tables.

Michael Roytman: Pivot tables, data entry, data auditing. If you really think about it, an IR investigation, a vulnerability remediation, or a vulnerability prioritization is that kind of exercise. It's just that we haven't had a common language to define things like: what is standard scan data? How do you prioritize a vulnerability? What are the steps in an investigation? So those are things that people are doing with their understanding of computer systems and with their security skills, whether they learned them on their own, through bootcamps, or through some of the new formal programs. But my hot take is that a lot of the manual labor can and will be automated. And it's a question of shifting how we think about the gap and how we apply machine learning to security.

Dan Mellinger: That's a very interesting hot take you've got there. So let's step back a little bit and go over the current stats. According to the (ISC)² survey I mentioned earlier, they peg the number we're missing, unfilled positions that should be filled, at around four million as of November 2019. Of those surveyed, 42% of respondents said this was their first job in cybersecurity. So people are coming from a lot of other careers, almost half, right? They're not coming from a cybersecurity background per se. A lot of the professionals are very young: 65% are between the ages of 25 and 44, and a little more than a third have a bachelor's degree in something, and it doesn't need to be anything IT- or technology-related. And then, I think encouraging but it can get a lot better, 30% of the respondents are women. So hopefully we can get that number up. And then one of the questions I wanted to ask, just to kind of start off, because it seems like there are a lot of opportunities: why is there a gap? Cybersecurity has virtually no unemployment, right? They cannot fill these roles. Workers don't necessarily have to go through a formal education path like being a lawyer or a doctor, so there's a diverse range of skill sets, from communications to business management to I'm-a-geek-who-started-doing-this-when-I-was-13-because-I-got-angry-in-Fortnite. And they all make good money; the average salary in North America is $90K for a cybersecurity professional. So why does this gap exist?

Michael Roytman: Well, first of all, from a social perspective, this gap is great, right? It means there are tons of jobs available that we could fill with domestic workers, with employees who might be recently unemployed. There's tons we could do there. There are two reasons gaps exist: either the problem is getting bigger, or there aren't enough people who can work on it. And I refuse to believe there aren't enough people who can work on it. I think the problem is growing very quickly, and recognition of the problem is growing really quickly. If you think about it, the field is really young. Maybe 10 years ago, only the Fortune 500 really had formal security teams that were bigger than a few folks. Whereas today, most organizations in the Fortune 500 or the Global 5000 probably have a team, or some semblance of a team. More organizations are recognizing they need a security team, formalizing it, looking for those skills. And so we can't hire fast enough, is I think what the problem is. But there's also a nuance there, which is the vendor evolution, the specialization that's emerged in security. Any new field will start to specialize pretty quickly once it emerges, and that has been creating domains of security where you might have a vendor or tool, and then you need somebody to administer it, to work within it. Think about most security vendors; I would say 80% of them or more are tooling that people use, right? Think about the IDS world: there are Splunk analysts being hired now. Think about the endpoint world: there are Carbon Black analysts. Think about the vulnerability world: there are folks who spend their entire days working with Qualys outputs and pivot-tabling those spreadsheets. All of those are new jobs created by the types of tools and the vendor community, and all of them are getting more and more specialized. And of course, the skills you need to work within that are being able to use a piece of SaaS or non-SaaS tooling and understanding how computers interact with one another, which is pretty broad, right? That's why you don't need specialized skill sets. You look at that 4.5 million and you think, "We could fill that with folks. We could retrain them. We could create bootcamps. These things are happening. We finally have bachelor of science in cybersecurity programs coming about." But we could also think about the problem a little differently than we have been for the past 30 years. For the past 30 years, we've been saying security is an issue. How do we get our heads around it? How do we quantify the vulnerabilities, figure out the threats, figure out what the chain looks like, manage that risk? And a lot of the tools we've built have been sensors that collect data. Whether it's network traffic, vulnerability data, scanning infrastructure, or scanning endpoints, it's sensing and collecting that data. And then we hire people to work with that data to make decisions. I think we're now in a world where we can start to build tooling that makes decisions for us, or at least offers decision support. Because if you think about cybersecurity, it's all about machine data. It's data created by machines, about machines and how they behave. And it is kind of silly to say, let's collect all the data about the four million assets an organization might have and then have people look at it. The easiest way to look at structured data about machines is to build machines that can do that at scale.

And then of course people can make risk tolerance decisions, or a lot of the things we talked about in our first podcast: how do you decide what the right risk trade-off is for your organization? But you don't want to be making that decision by looking at a snapshot of data with your human eyes, or even using the tooling to look at accurate statistics. You want a machine to be able to look at every single vulnerability and every single asset and every single alert for you, and then give you your decision options. So I think that's where the opportunity is. Let's figure out what's automatable, and it's not just workflow but some of those decisions too. Once we do that, I think we have a much smaller gap.
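To make that decision-support idea concrete, here is a minimal sketch in Python of the pattern Michael describes: the machine scores every vulnerability, and the human only sets the risk tolerance threshold. The records, field names, and scoring weights below are hypothetical, not any vendor's actual model.

```python
# A minimal, hypothetical sketch of machine decision support over vulnerability
# data. The records, field names, and weights are illustrative only.
vulns = [
    {"cve": "CVE-2020-0001", "asset": "web-01", "exploited_in_wild": True,  "cvss": 9.8},
    {"cve": "CVE-2020-0002", "asset": "db-02",  "exploited_in_wild": False, "cvss": 7.5},
    {"cve": "CVE-2020-0003", "asset": "hr-07",  "exploited_in_wild": False, "cvss": 3.1},
]

def risk_score(v):
    # Toy scoring: weight observed exploitation far above raw severity.
    return v["cvss"] / 10.0 + (1.0 if v["exploited_in_wild"] else 0.0)

# The machine scores everything; the human only sets the tolerance threshold.
RISK_TOLERANCE = 1.0
to_remediate = [v for v in vulns if risk_score(v) > RISK_TOLERANCE]
for v in sorted(to_remediate, key=risk_score, reverse=True):
    print(f'{v["cve"]} on {v["asset"]}: score {risk_score(v):.2f}')
```

The point of the sketch is the division of labor: the loop assesses every record at machine scale, while the only human input is the single threshold constant.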

Dan Mellinger: To restate, I just want to make sure I'm on the right track here. We've spent a lot of time in cybersecurity trying to gain visibility, understand, and collect the data points of what's happening. Thus far, we've used specific tools to do the specific data collection that we need. We've employed individual humans to go look through, parse, and add all this data to pivot tables, and/or review firewall logs or whatnot, and then analyze, correlate, and make decisions, all paid for in headcount, in human time. Is that correct?

Michael Roytman: Yeah. And we're just beginning to define what security is, right? You used to have one security function. Now you might have a vulnerability management function, an application security function, an incident response function. All of these are different parts of the organization that need different tooling, and we're building all of those. Now, of course, that's not everything. You also have architectural decisions that might impact security, which we make way before any of the sensing and tooling happens, but we haven't established those feedback loops yet within the organization. Even the feedback loop between incident response and vulnerability management is just now emerging. We're just now starting to look at our incidents and saying, "Which vulnerabilities do we fix to reduce the number of incidents?" We used to just say, "Here are all the vulnerabilities in your network. Here's a spreadsheet. Go fix them."

Dan Mellinger: There're several spreadsheets.

Michael Roytman: Yeah. Some of those spreadsheets from several tools. And then of course in that world, you need a couple of dozen folks to do the data analysis work to make that work. But now that we have all of this tooling and we have all the infrastructure, we can start to look across them and say," How can we create feedback loops within them?" And a lot of those feedback loops are going to be machine learning- powered.

Dan Mellinger: Interesting. Okay. So use machines to start to automate some of the tasks that you would consider a little more admin, right? Data entry, correlation analysis. We're using, right now, humans to do that. Because it's, one, a really technical endeavor and you can train people to do these kinds of things. And two, because what? Alternatives don't exist, or should exist, or?

Michael Roytman: Because of inertia. I would say it's because of inertia. You're right that visibility is what we've been working towards for a long time. If you look at the showroom marketing language around Black Hat and RSA for 10 years or so, it's been all about visibility, exposure, identifying. We've built machine infrastructure to do that. And simultaneously, folks have been making decisions as their visibility increases. Now the visibility has increased to a place where we have too much data to deal with; people are dealing with the deluge of that data. And I think it's time to build up the other side, to build up the analytic and reasoning machines, which is machine learning, to deal with the outcome of all that data. I'll add one caveat to the whole what-can-and-can't-be-automated question. I don't think there's a world where we can automate people's security jobs out of existence. I really like this quote from Arthur Conan Doyle, the Sherlock Holmes writer: "When you have eliminated the impossible, whatever remains, however improbable, must be the truth." In security, we do a really bad job of eliminating the impossible. We have 150,000 vulnerabilities in NVD. The average organization might see 10 million vulnerabilities. You look at those vulnerabilities, and a lot of them could not possibly be responsible for the intrusions and the risks that you're seeing in the organization. And yet we're not eliminating them, because our goal has been visibility. And if your goal is to find more vulnerabilities, then you write those signatures, you run your network scans and your authenticated scans, and you find as many as you can. But if the objective function is instead separating the signal from that noise, and we start to build machines that derive signal from noise, which is all about binary classification, random forests, training the right models that can help people make those decisions, then we can eliminate a lot of the impossible stuff and take every single decision an IDS analyst makes and give them a little more time to make it, so that they're making better decisions.
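As an illustration of that signal-from-noise framing, here is a hedged sketch of binary classification with a random forest, using scikit-learn on entirely synthetic vulnerability features; a real model would be trained on much richer, real-world data, and the feature names are invented.

```python
# A sketch of "eliminate the impossible": train a binary classifier to predict
# which vulnerabilities are likely to be exploited, so humans only review what
# remains. Features and labels here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Illustrative columns: CVSS score, exploit code published, days since disclosure.
X = np.column_stack([
    rng.uniform(0, 10, 5000),
    rng.integers(0, 2, 5000),
    rng.integers(0, 365, 5000),
])
# Synthetic label: exploitation correlates with severity plus public exploit code.
y = ((X[:, 0] > 7) & (X[:, 1] == 1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Anything scored near zero never reaches an analyst's queue.
probs = clf.predict_proba(X_test)[:, 1]
print(f"{(probs < 0.05).mean():.0%} of vulnerabilities auto-eliminated from review")
```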

Dan Mellinger: If we can build the tooling. It's just inertia, right? We have the data. For what, the last five, 10 years, threat intelligence has been the technology du jour in the industry, right? We can tell you exactly what's going on externally. We know what the hackers are doing. You can leverage this with a security analyst, a human, to go make yourself safer. And you're saying that a lot of this stuff doesn't need to be worked on, right? And humans aren't necessarily the best judge of that kind of data set.

Michael Roytman: That's insane to me. It's insane to me that you even call that a technology. What we're saying is we created an industry around generating data sets that describe threats. These are threats at internet scale, right? You look at GreyNoise, you look at Shodan, you look at things like Intrigue scanning the whole internet for these threats. And then we take that data, we aggregate it, and we give it to a person to think about. Of course that individual is going to make some mistakes and isn't going to be able to think about all of it. It's difficult to apply internet-wide intelligence data, however summarized for an individual, to an enterprise, which might have 100,000 assets in a complex architecture. But you've created a data set. You already have a data set, and what's in between is a human. I think what's in between needs to be something that executes on objectives that humans set. One of those objectives could be risk reduction. One of those objectives could be a reduction in alerts on the IDS system. One of those objectives could be not having any risk of ransomware within your system, because you know which of the threat intelligence you've collected is linked to ransomware, and you can then spider across your organization and look for the vulnerabilities that cause those ransomware intrusions. Those kinds of things I'm describing are workflows that humans do today. But they're workflows that are incredibly repeatable, because they're about structured entities: vulnerabilities, platform enumerations, machines, assets, signatures, alerts. Whatever we have structured in security over the past 30 years, we can start to use to build decisions on top of.
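The ransomware objective Michael describes boils down to a very repeatable join between threat intelligence and an asset inventory. A toy sketch of that workflow follows; the asset names and vulnerability sets are invented, though the two example CVEs are real ones historically associated with wormable or ransomware campaigns.

```python
# A minimal sketch of the ransomware objective: intersect threat-intel tags
# with the asset inventory. All inventory data here is hypothetical.
ransomware_cves = {"CVE-2019-0708", "CVE-2017-0144"}  # e.g., BlueKeep, EternalBlue

asset_vulns = {
    "hr-laptop-12": {"CVE-2017-0144", "CVE-2020-1234"},
    "build-server": {"CVE-2021-9999"},
}

# The repeatable workflow a human does today, expressed as a set intersection.
for asset, cves in asset_vulns.items():
    hits = cves & ransomware_cves
    if hits:
        print(f"{asset}: remediate {sorted(hits)} to meet the no-ransomware objective")
```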

Dan Mellinger: I mean, that makes a ton of sense, but where are we today, then? We've been focusing primarily on discovery, collection, visibility, and then narrowing this list down, and/or just trying to defend against quite literally everything, using human hours to do so. So where are we today? What are the roles that machine learning can fill right now?

Michael Roytman: Well, the answer is we're just at the beginning. We have tooling that answers parts of the problem. Scanners answer part of the vulnerability problem. Threat intelligence answers part of that problem. Ticketing systems and workflow systems answer part of that problem. We're just at the stage where all of these vendors are exposing their database APIs, and I think security teams, internally, are now building systems using those to automate some of the work they'd been doing manually before. Of course, as vendors we carve out a space for ourselves and work on a part of that. There are vendors that do that on the detection and response side of the house, and there are reverse engineering vendors that will try to do that with malware as well. We are now at the stage where we have too much visibility for any security team to handle, and they're in the prioritization stage of security. Once we've automated the workflow, where prioritization is largely a solved problem, that's when security analysts and security teams will have the time freed up to pursue the kinds of things that are more esoteric, more difficult, require architecture rewrites, or might not be solvable by machines in the first place. Right now we can't even answer the question of what can and can't be automated, because we haven't tried to automate all the workflows that are out there. So...

Dan Mellinger: So this is like a market maturity stage, right? That's where we're at.

Michael Roytman: Yes, absolutely. I think it's an industry maturity stage. The industry is maturing to a point where everything has been quantified, or where we've at least attempted to quantify everything. Even pentesters, to some extent, are attempting to quantify the number of different attack paths that are possible. If you look at that data over the past 20 years, this is how MITRE ATT&CK came to be. We now understand the kinds of techniques that adversaries can use against a system. It's been quantified and discussed, and we're doing a much better job of that. Now we can start to use those tools to automate the manual work. Security teams themselves are doing some of it: there's automated prioritization at some sophisticated organizations, the ones that have thousands of security employees, and there's automated alert handling and workflow built in for some of those organizations as well. And now we're starting to see the rise of vendors that aren't identifying anything; instead, they're making decisions for security teams at scale.

Dan Mellinger: If that's where we're at on the maturity curve, ML right now is just starting to fill the analysis gap. So we have the data collection, as far as you're concerned, solved. We've discovered enough that we need to figure out what we should prioritize and take care of, versus trying to constantly discover more or quantify more.

Michael Roytman: We've solved enough of it, where we've created a new problem. If that makes sense.

Dan Mellinger: A data problem.

Michael Roytman: A wicked problem. Wicked problems are problems where, as you try to solve them, they shift, so your solution no longer applies. That's a lot of where this workforce shortage is coming from. We had a problem where we didn't have visibility into our security posture. Then every individual we hired to write a new scanner signature found more vulnerabilities, quantified more of what's within an enterprise, and the security problem itself got bigger and shifted under our feet. We've created enough sensor infrastructure and enough data collection infrastructure in security that the problem is no longer identification. Now we've identified too much. What do we do with it? How do I act? It's decision paralysis, instead of a lack of visibility into what decisions to make.

Dan Mellinger: Awesome. And I do want to read off the Wikipedia intro to wicked problems; I think it's a pretty cool concept. A wicked problem, as defined in planning and policy, is a problem that is difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often difficult to recognize. That sounds exactly like what we're describing in cybersecurity at the moment.

Michael Roytman: That is security, right? The first problem, in the seventies and eighties, was how do we structure this problem? What is the problem? And now that we've gotten our heads around it, it's: wait, what are the actual problems that we need to solve? Is it that people are writing insecure code? Is it that the architecture is wrong? Is it that we're migrating to the cloud? Is it that user access inherently trades off with security? All of these problems are now shifting under our feet, especially as technology shifts. By the time we've solved one problem in server infrastructure, a different problem pops up in containers. And so the process of how we solve problems is probably more important in security than any one individual piece of data or workflow we've created.

Dan Mellinger: Would you argue that in an end state we'll have automated away a lot of the machine-learning-capable tasks, and we're freeing up the same humans to solve these more complex problems, right? Using the flexibility of human intellect to address something that's a little more difficult. So...

Michael Roytman: Well, you're making me think about what a solvable problem is. Analyzing one particular vulnerability? Surely solvable, right? It has an outcome. Reversing a piece of malware? Surely solvable. Deciding which architecture is correct for an application that processes insurance claims? Probably never solvable by itself. It's an open-ended question, and a lot of the business or technology problems that will never be solvable are the ones that cause downstream security issues we then have to pick up and deal with. An end state is a silly thing to think about, because there is no end state, but there's a process that will evolve, I think, to where we're more integrated into the original decision making that's causing downstream security problems. And if you start to think about it in terms of feedback loops, there are feedback loops inside of security that I think about day to day, like the feedback loop between vulnerability management and incident response. And there are also feedback loops between the business and security. Decisions the business is making are causing security problems down the line, some intentional, some not. If you think about writing applications, decisions made in code are causing SQL injections down the line, and that might be a mistake, or that might be the way the applications are designed. The tighter we can get to the beginning of that process, the fewer downstream effects we'll have to deal with.

Dan Mellinger: And you believe we can automate some of these guides and/or flags early in the process, which would negate the need for someone at the back end of that process to go fix something, ultimately.

Michael Roytman: Yeah. Toyota has given me a lot of hope. The Taguchi methods in quality control that came about in the mid-eighties and early nineties were all a reaction to American auto manufacturing; the Japanese manufacturers decided that quality control was a lot better if you did it early in the process. The general rule in engineering is that it's 10 times cheaper to solve something at the beginning of the process than at the end. The same thing is probably true in security. Before the Toyotas and Hondas and Nissans came about, you would have cars rolling off the line at Chevy plants that might have a part from a different car attached. That would cause tons of problems, and the mechanics would have to go solve those problems. But as quality control got better, OEM cars rolling off the line didn't have issues for 80,000, a hundred thousand miles; only through wear and tear or changes in technology would those need to be addressed. Similarly, we now have a handle on what the process looks like and all the defects it's causing; look at all the vulnerabilities and all the breaches happening today. Now it's time for us to decide how we can intervene earlier in that life cycle so that we have fewer downstream errors, because we've quantified the errors, we've figured out which ones are bad, and we know how much they cost us. So how do we decide to make investments in quality control earlier on?

Dan Mellinger: Okay. That brings us back around, then. What kinds of roles can machine learning fill that help address the cybersecurity skills gap?

Michael Roytman: I think really structured ones that are large scale. So in vulnerability management, slam dunk: you've got millions of vulnerabilities, each one needs its own assessment, and each of those individual assessments needs to be done by a machine, so that the decision the human makes is where the threshold is, what we should and shouldn't be spending money on. Threat intelligence: collecting that data is sometimes done by folks writing up PDF reports about something they read on websites. Well, now we have NLP. We can crawl those sites, we can extract those entities, we can figure out where the signal is. Reversing malware is quickly evolving to where a lot of it is helped by tools or automation. Any of these I've-got-to-do-a-million-of-these-little-assessments-per-day-to-get-a-picture-of-the-whole-system tasks can eventually be automated. Even crawling code across GitHub to help find mistakes or vulnerabilities in that code is something application security vendors are getting better and better at, and they're recognizing those issues inside of IDEs, earlier in the process, rather than later when they're doing dynamic code analysis.
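As one small example of the entity-extraction step mentioned above, here is a sketch that pulls CVE identifiers out of an unstructured threat report with a regular expression; a production pipeline would layer NLP models on top for threat actors and malware families. The report text is invented.

```python
# A sketch of entity extraction: pull structured identifiers (here, CVE IDs)
# out of unstructured threat-report text. The report is hypothetical.
import re

report = """Analysts observed exploitation of CVE-2019-0708 against exposed RDP
services, with follow-on activity resembling earlier CVE-2017-0144 campaigns."""

CVE_PATTERN = re.compile(r"CVE-\d{4}-\d{4,7}")
entities = sorted(set(CVE_PATTERN.findall(report)))
print(entities)  # ['CVE-2017-0144', 'CVE-2019-0708']
```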

Dan Mellinger: Can you explain an IDE real quick, just for the audience?

Michael Roytman: As somebody is writing code, we can start to identify some of the issues that will cause downstream vulnerabilities later on. So while they're programming in their environment, you can spot mistakes and alert them: hey, fix this now, before you deploy the code, so that it doesn't actually make it into production. So those individual assessments, whether of users or vulnerabilities or malware or threats or threat actors, are starting to become automated. That's an analysis that, of course, an individual can do, but enough individuals have done them now that we can start to train machines to do them. I don't think that decisions about risk tolerances, or anything that actually touches the business, are decisions machines are going to be able to make, because we have a very hard time understanding cost of remediation, cost of investment in security, cost of rebuilding something or changing application code. And so that's where people get involved, and there's a lot of estimation that happens. This is why risk management is a practice to begin with. But there's also a lot of ambiguity, uncertainty, and trade-offs that have everything to do with time and usability. That is why we need folks to be aware of security as they're making business decisions.
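To illustrate the kind of in-IDE check Michael is describing, here is a toy sketch that flags string-concatenated SQL, one naive pattern behind the downstream SQL injections mentioned earlier; real editor tooling and linters are far more sophisticated, and the analyzed snippet is invented.

```python
# A toy static check in the spirit of in-editor analysis: flag string
# concatenation that looks like SQL being built from user input.
import ast

SOURCE = '''
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
'''

for node in ast.walk(ast.parse(SOURCE)):
    # Flag string concatenation whose left side looks like a SQL statement.
    if (isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add)
            and isinstance(node.left, ast.Constant)
            and isinstance(node.left.value, str)
            and node.left.value.lstrip().upper().startswith("SELECT")):
        print(f"line {node.lineno}: possible SQL injection, use parameterized queries")
```

Catching this while the developer is still typing is exactly the "intervene earlier in the life cycle" point from the Toyota analogy: the fix costs a code review comment instead of an incident.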

Dan Mellinger: Okay. That's actually a really good transition. So you went over what machine learning is good at and what kinds of roles it can fill, especially today, but also, I imagine, going forward. You started touching on some of the stuff machine learning is bad at, roles that it will likely not be able to fill. Could you go a little higher level on that? What is machine learning bad at? What will it likely never replace, from a human standpoint?

Michael Roytman: I think we're very far away from it ever writing code for us.

Dan Mellinger: We're not going to have AI writing AI?

Michael Roytman: We already do, but I think they're mostly for show, not for production. And I think that anything you think is 10 years in the future is actually a hundred years in the future for other parts of the world.

Dan Mellinger: Don't tell Elon, although I think his worst threat right now might actually be a Twitter admin console versus anything else, but still...

Michael Roytman: Right. I thought it was the stock market, but Twitter admin console and the stock market might be the same thing, who knows?

Dan Mellinger: True, especially when it's related to him anyway.

Michael Roytman: Those kinds of decisions, the ones that have a lot of ambiguity and require a lot of ingenuity, are going to be hard for machines to make. I really like framing it in that Sherlock Holmes way: let machines eliminate the impossible, or the very risky, or the improbable, so that humans can focus on the things that have ambiguity in them. We're very good at making decisions about things that are ambiguous or risky or might have uncertain outcomes in the future, whereas machines need to be able to quantify something in a very precise way in order to make better and better decisions.

Dan Mellinger: Okay. Large data sets, things that we can clearly quantify, things that may be too complex for us to easily make correlations on, right? Surface connections.

Michael Roytman: Humans are actually very good at complexity. Think about what writing a piece of software like Office 365 takes. There are tons of dependencies and complexity and architectural decisions that get made throughout, from deploying it to the cloud to writing some specific widget within Word, with thousands of people involved. Those architectural decisions are always going to be human-based, and decisions about where to invest resources to secure pieces of those large, complex architectures we're building are probably better made by humans than by machines. Machines are going to have a lot of trouble working across domains, and security is a very cross-domain industry. A good example: even in vulnerability management, half of the decision is about the value of the asset where the vulnerability is. And the easiest way to really understand the value of the asset is to go talk to the business owner who is running their application on that infrastructure and ask them: what is this really doing? That's a point-in-time assessment that maybe machines will get better at; we have some survey tooling and instrumentation. But ultimately you have to understand the entire enterprise and how important that one application is in relation to all the other parts of the business and the business's strategy moving forward, just to be able to make a decision about a vulnerability. There are parts of that decision that are super easily automatable, like: is this thing at risk of being exploited by an adversary? You've got your threat intelligence data, you've got your scanner data, you've got your vulnerability data. But there are other parts of that decision that have everything to do with the two-year strategy of Zynga and how much they want to invest in this particular game versus that particular game, and which asset infrastructure it's running on, which is not a security problem at all. It's very cross-domain.

Dan Mellinger: That's very interesting. I never thought about the complexity of defining the risk or importance of different pieces of infrastructure, or even data, relative to the rest of the business.

Michael Roytman: Yeah. Well, you should do another podcast, and you should talk to Ed Bellis about how the board views security, because that is where complexity and strategy and all of these things start to add a lens to every decision you're making, and machines aren't nearly context-aware enough to make those decisions.

Dan Mellinger: You're saying we're not going to have machine-learning-based boards anytime soon?

Michael Roytman: To dream, to dream.

Dan Mellinger: I think you heard it here first.

Michael Roytman: I think Elon Musk would be happy about that, but I don't know about anybody else.

Dan Mellinger: Interesting. Okay, let's bring this back around, because we're starting to get out into the stratosphere. Can machine learning fill the cybersecurity skills gap? That's the topic of the episode; it's what we've been driving towards. Can it ultimately fill that gap, the four million, according to (ISC)²?

Michael Roytman: Yeah, the four-and-a-half-million-individual question. Yes, but we don't want it to. I think it can fill the gap that exists today, which is that we need to hire a lot of people to do a lot of decision making that might be automatable. The reality is that we probably still need four and a half million people to be making much more ambiguous decisions on top of machine learning data. There's a class of security problems, mostly everybody who works within a specific vendor or a specific tool, where workflow can be automated, where investigation can be automated, where some decision making can be very easily supported. And so you need fewer people to make the same number of decisions. But what that introduces is a whole lot of other ambiguity: let's do some interfacing with the business, let's bring in data about applications and assets that we don't have, let's make smarter architectural decisions, let's work with developers to make better decisions at the forefront. Those are the things we'll be free to work on only if we've solved the things that machine learning really can solve today.

Dan Mellinger: Interesting. We were talking about industry maturity, right? So if we address this current wicked problem, we move to a whole new stage of industry maturity where we can open up some new wicked problems to try to solve.

Michael Roytman: Yeah. We're going to leapfrog into a world where humans are doing less assessment and more decision making. But once we're in that stage, then maybe we can start to build Toyotas and not 1983 Chevys.

Dan Mellinger: Good analogy. You hit on it. Is there anything else, a future state? Where are we going past this? Just lay it out: what does the Toyota look like for cybersecurity?

Michael Roytman: I think the Toyota for cybersecurity already exists in some organizations, right? The future is not evenly distributed. The best security teams are the ones that have the skill sets and the kind of resiliency and speed to build their own tooling and automate the pieces that they're doing time and time again. I think a really good future state for a security team is one where you've got a team resilient enough to evolve as the problem evolves: data scientists internally, teams that can automate some of the workflow security teams are doing, internal tooling being built and distributed across other enterprises as well. You see some of this with open source projects from Square, Facebook, and the like. Today we have structured problems that security teams have to solve. But in the future, as there's more uncertainty for humans to deal with and more architectural decisions for them to make, maybe security teams go back to being generalists, people who can work across domains instead of specializing so much in vulnerability management or incident response.

Dan Mellinger: And start addressing strategic issues, problems, challenges?

Michael Roytman: Or working alongside developers, getting earlier in that development life cycle to make sure the vulnerabilities never get there in the first place.

Dan Mellinger: All right. Super interesting. Michael, thank you for joining us today. We pulled a ton of different research for this, and I will go ahead and link it all: the Cybersecurity Ventures report we mentioned, some really cool Dan Geer articles, the Wikipedia entry for wicked problems, the (ISC)² surveys, and some Forbes articles. This will all be on the podcast page. Michael, thanks for joining us; we look forward to our future conversations. And everyone else, thanks for tuning in.

Michael Roytman: Thank you.
