A Chronological Journey Through Risk-Based Vuln Management

Episode 7  |  53:49 min  |  07.29.2020


This is a podcast episode titled "A Chronological Journey Through Risk-Based Vuln Management." The summary for this episode is: Picking up where we left off on the history of vulnerability management, Ed Bellis walks us through the history of risk-based vulnerability management (RBVM) to current times and the near future.
Takeaway 1 | 00:31 MIN
Warning: A Lot Of The Sources For This Podcast Are Gated
Takeaway 2 | 01:56 MIN
A Definition of Risk-Based Vulnerability Management
Takeaway 3 | 05:59 MIN
2005: Vulnerability Scanners Start to Become Vulnerability Management
Takeaway 4 | 05:10 MIN
2010: SCAP v1.0 Lays Foundation For Automating Vuln Management
Takeaway 5 | 02:28 MIN
Ed and Jeff Start Kenna Security
Takeaway 6 | 01:02 MIN
Bellis' Stages of Vulnerability Management Pain
Takeaway 7 | 02:21 MIN
2011: Anton Lays Out The Challenge of Prioritization
Takeaway 8 | 03:46 MIN
2010 - 2015: Collecting ALL THE DATA
Takeaway 9 | 01:06 MIN
Ed Begins Automating Using SCAP
Takeaway 10 | 01:45 MIN
2015: Anton and Augusto Lay Out a Framework for VM
Takeaway 11 | 02:00 MIN
2016: Oliver Lays Out Different POVs For Prioritization
Takeaway 12 | 01:32 MIN
Adam Shostack, Threat Modelling, and Star Wars
Takeaway 13 | 01:35 MIN
Targets of Opportunity vs. Targeted Attacks
Takeaway 14 | 03:51 MIN
2018: The Business Context For Cyber Risk
Takeaway 15 | 04:28 MIN
2018: Theory > Implementation > Measurement
Takeaway 16 | 04:37 MIN
2019: Measuring the Enterprise Vulnerability Landscape and Patch Rates
Takeaway 17 | 02:57 MIN
Performance Factors That Correlate To RBVM Success
Takeaway 18 | 01:03 MIN
Exploit Prediction Scoring System v1.0
Takeaway 19 | 03:21 MIN
2020: Measuring Risk of The Assets Themselves
Takeaway 20 | 01:42 MIN
Where We Are Today and What's Next

Dan Mellinger: Today on Security Science, a not-so-brief history of risk-based vulnerability management. Welcome to Security Science. I'm Dan Mellinger, and today we're following up on our very first episode and discussing the history and evolution from vulnerability management to the relatively new concept of risk-based vulnerability management. Guiding us along this journey is our resident vulnerability management historian, none other than the duke of risk-based vulnerability management himself, Ed Bellis. Ed, how is it going?

Ed Bellis: Going well. Thanks for having us again, Dan.

Dan Mellinger: Awesome. I'm excited for this one. It should be a pretty in-depth discussion, but I do want to be upfront: there's a lot of history on this, unlike the vulnerability management episode, and a lot of it we're pulling from Gartner, from research and a couple of blogs. Unfortunately, that does mean that a lot of the content we link is going to be gated unless you have a Gartner subscription. We'll try to find alternatives if possible, but you can trace the history back from there if you'd like to. As always, we'll link back to a bunch of the sources we used to build this. So I'm going to kick off real quick with the definition of risk-based vulnerability management, just to level set and guide where we're going for the rest of this episode. Risk-based vulnerability management, or RBVM, which we'll probably use a lot during this episode, is a cybersecurity strategy in which organizations prioritize remediation of software vulnerabilities according to the risk they pose to the organization. There are several components you should look at, and we'll actually see some of the evolution of that as we go through the history: using threat intelligence to identify vulnerabilities attackers are discussing, using, or otherwise taking action on externally; using that intelligence to generate risk scores based on the likelihood of exploitation internally; and from there, using the business context of various assets, because intrusion into some segments of a network may be more or less damaging or likely than others. Then you combine all of this together to create some kind of asset criticality. Risk-based vulnerability management programs should focus patching efforts on the vulnerabilities that are most likely to be exploited and reside on the most critical systems. So with that out of the way, Ed, agree, disagree with any of that stuff?

Ed Bellis: Yeah, I totally agree with that stuff. I would add or even simplify it all and say, look, really what you're trying to do is take a step back and say, "What is the most likely thing to happen here? And what's the impact if that happens?" And then organize from there. There might be some things that are less likely to happen, but if they do happen, they're catastrophic. There might be things that are very likely to happen, but if they happen could be next to meaningless, and a lot of gray areas in between. And ultimately you just want to be able to manage that to some sort of acceptable level for your organization.
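The simplification Ed describes, likelihood times impact, can be sketched in a few lines. This is a hypothetical illustration of the idea, not Kenna's actual scoring model; the CVE labels, likelihoods, and impact figures are all made up.

```python
# Illustrative sketch of risk-based prioritization: rank vulnerabilities by
# expected loss, i.e. likelihood of exploitation times impact if exploited.
# All inputs below are hypothetical.

def risk_score(likelihood: float, impact: float) -> float:
    """Expected-loss style score: probability of exploitation (0-1)
    times impact if it happens (arbitrary units)."""
    return likelihood * impact

vulns = [
    {"cve": "CVE-A", "likelihood": 0.90, "impact": 10},    # very likely, near-meaningless
    {"cve": "CVE-B", "likelihood": 0.02, "impact": 5000},  # unlikely, catastrophic
    {"cve": "CVE-C", "likelihood": 0.30, "impact": 200},   # the gray area in between
]

ranked = sorted(
    vulns,
    key=lambda v: risk_score(v["likelihood"], v["impact"]),
    reverse=True,
)
print([v["cve"] for v in ranked])  # the catastrophic-if-it-happens vuln ranks first
```

Note how the ordering differs from sorting on likelihood alone, which is exactly the point Ed makes about catastrophic-but-unlikely versus likely-but-meaningless.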

Dan Mellinger: Makes a ton of sense. And you almost sounded like Michael Roytman there.

Ed Bellis: Oh yeah. We're going to have to change that throughout.

Dan Mellinger: All right. Okay, so let's jump into some of the history. I think we touched on this more in depth in the very first history of vulnerability management episode, but in 1999 the Common Vulnerabilities and Exposures (CVE) list was started. So we start to see some form of metrics or ways for us to measure and identify vulnerabilities and create, I guess, a dictionary for everyone else to follow, so we all start getting on the same page. From there, it looks like we go into 2005. At this point, as you were saying, Ed, we were mostly in this kind of vulnerability assessment or scanning mode, with scanning vendors, and now we're starting to see them become vulnerability management vendors. So could you give us a little bit of an overview of this 2005 era of vulnerability assessment to management?

Ed Bellis: Yeah, I remember it well. So really, I always like to categorize vulnerability management in general into the various stages of pain it represents. So in the early days, the halcyon days, there was ignorance is bliss, which was that pre-vulnerability-assessment time. And then you got into, okay, so where are my vulnerabilities? Let's start to find these. Well, at first you're doing some very manual assessments involving people's time. You're finding some, but you ultimately want to get much better coverage across all of your infrastructure and understand where your vulnerabilities are. You start to implement automated tools and scanners and these types of things. And that's where you get to that stage of, oh my God, I have vulnerabilities and they're everywhere, and there's way too much to deal with here, so I should probably start fixing some of these. And that's when you started to see some of these vendors, and some of the orgs who were using them, go from, okay, we keep scanning and we keep scanning and we keep scanning and we're producing these reports, but we're not really fixing much of anything yet. We should start to get into that. So vulnerability management comes into the actual, let's create a process around this. Let's start to remediate and push out patches and do these different things, and then go back and measure again and report on it and see, am I making progress? Am I opening more vulnerabilities? Am I closing more vulnerabilities? What if we looked at CVSS? Am I closing the CVSS tens, the nines? How am I doing here? And start to measure the efficacy, if you will, of at least your remediation efforts. And that's really where you started to see the vulnerability management vendors focus, and start implementing more reporting around that.
It became, instead of just a point-and-shoot scanner often sitting on somebody's desktop, we've got a server that runs this, or multiple servers or an appliance, and it's going out and scanning things. And you start to actually maintain the state of vulnerabilities. So yes, I have this vulnerability. It was opened three weeks ago. We saw it was closed yesterday. Maybe in some cases I even track the reopening of that vulnerability, so I start to actually manage these vulnerabilities, not just find them.
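The open/closed/reopened state tracking Ed describes, the step from point-in-time scan reports to managed vulnerability state, could be sketched roughly like this. The class, field names, and dates are hypothetical, not any vendor's actual data model.

```python
# Rough sketch of maintaining vulnerability state across scans, rather than
# treating each scan as a standalone report. Dates and findings are made up.
from datetime import date

class VulnInstance:
    """One vulnerability on one asset, with open/closed/reopened tracking."""

    def __init__(self, cve: str, asset: str, found: date):
        self.cve, self.asset = cve, asset
        self.status = "open"
        self.opened, self.closed = found, None
        self.reopen_count = 0

    def mark_fixed(self, when: date) -> None:
        # A later scan no longer sees the finding: close it out.
        self.status, self.closed = "closed", when

    def mark_seen_again(self, when: date) -> None:
        # Scanner finds it again after it was closed: a regression we track.
        if self.status == "closed":
            self.reopen_count += 1
            self.status, self.opened, self.closed = "open", when, None

v = VulnInstance("CVE-2005-0001", "web01", date(2005, 6, 1))
v.mark_fixed(date(2005, 6, 20))      # closed after ~3 weeks
v.mark_seen_again(date(2005, 7, 5))  # patch regressed; reopened
print(v.status, v.reopen_count)      # open 1
```

This is the difference between visibility and management: the record carries history (opened, closed, reopened) rather than just a snapshot.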

Dan Mellinger: Not just visibility and reporting.

Ed Bellis: That's right. To be clear, certainly in the mid two thousands, there was still a lot of that going on. Even on the vulnerability management side, you would see the reporting often lead to depression, because it was, okay, yeah, I get it. We're creating more and more of these vulnerabilities, and I'm not fixing that many, which was another issue altogether.

Dan Mellinger: Okay. So from a market dynamic standpoint, there were a couple of companies that we'd recognize today, like the Qualys and Tenables of the world, and then there were some, like ISS and nCircle, that didn't survive. What were the differences between what some of the people who didn't make it were offering versus some of the mainstays that still exist?

Ed Bellis: So I believe all of the big three that we refer to today, Tenable, Qualys, Rapid7, were all around at this point in time, but we did see ISS, who was an early... I think they got acquired by IBM. And nCircle, who was also acquired. I think, correct me if I'm wrong, you might know this one, was that a Tripwire acquisition?

Dan Mellinger: I think so. I saw some references, but it was hard to dig that far back into that archive.

Ed Bellis: Yeah. Well, I mean, it's far back and at the same time, it's not. We had an integration with nCircle for a while in our own product. So there were a lot of those. There was still some of the open source stuff, so OpenVAS, which is still around today, but there were other open source things. A lot of the one-off stuff was more project-based, like I think we referred to on our previous podcast episode. SAINT, I think, is still around; you don't see them nearly as much anymore. SATAN I don't ever see anymore. I'm not even sure if that's in any way a supported scanner or not. But the ones that really truly moved from vulnerability assessment to vulnerability management early on in this timeframe, this mid-2005-ish timeframe, are the ones that are ultimately now the big three in vulnerability management today, which is the Qualys, Tenable, Rapid7s of the world.

Dan Mellinger: Or became a component technology within someone else.

Ed Bellis: That's right.

Dan Mellinger: Okay. Awesome. And it looks like the next big point-in-time change was with SCAP. So we talked about this briefly in the VM episode as well, but it looks like in the July 2010 timeframe the National Institute of Standards and Technology rolled out SCAP, the Security Content Automation Protocol, and that was version 1.0. So that was 2010. I did find, and I believe you shared, a little blog on guerilla-ciso.com, so I'm going to link that one for sure. But Ed was already messing around with SCAP just before v1.0 officially launched in 2010. Ed, can you talk a little bit about those projects?

Ed Bellis: Yeah. So I remember actually getting introduced to the SCAP world, I think originally by Mike Smith, who is the one that wrote that blog post. I believe he goes by rybolov on Twitter. We had had a number of discussions, and SCAP is really just kind of... it's a group of different standards. So we talked about CVE and CVSS already, which are part of it. CPE, which is the Common Platform Enumeration, CWE, CCE, all these... I won't bore you with all of the acronyms, but hence, coming out of NIST, you're going to get a lot of that. But really it was a way to commonly describe things and then ultimately automate assessment, aggregation, prioritization, et cetera. How do I take all of this data that's coming in about all of these security issues? And they could be vulnerability issues; in the case of CCE, they could be configuration issues. But all of these things that represent some sort of security issue or risk within my environment, how do I describe them commonly across multiple tools, which I can then use to aggregate? And then how do I prioritize those? And in that case, a lot of it was CVSS for the vulnerability stuff. I believe they actually had a standard for configuration scoring. Now I'm probably just going to guess on the acronyms. I won't. But it was similar to-

Dan Mellinger: Extensible Configuration Checklist Description Format.

Ed Bellis: No, that's something else. It was-

Dan Mellinger: Man, that is a lot of acronyms.

Ed Bellis: Yeah. It was something much closer to CVSS, like CCSS or something like that, Common Configuration Scoring something-or-other, but anyway, I digress. So they had their way to prioritize configuration issues as well, and application security weaknesses as well. Now obviously we've evolved quite a bit since then, but it was really the early start of saying, okay, I want to get a handle on what all this stuff is, regardless of all the tools I have. And it became actually, I don't know if I'd say popular, but certainly you'd see it a lot more often in government agencies and things like that. It was very much led by NIST and MITRE and folks like that. In fact, I presented what we were working on and built out at Orbitz at a couple of different conferences, which I think was the blog post that Mike was talking about. But we also did something at Metricon about that. And pretty much everywhere that we went and were talking about some of these SCAP standards and how we used them to automate, it turned out we were a unicorn in the commercial environment. It was very much a government thing at the time.

Dan Mellinger: So essentially SCAP was this conglomeration of definitions and mechanisms of description, so everyone was able to describe and speak the same language as it related to vulnerabilities. It's kind of the seed of, I guess, providing some foundation to be able to ultimately, long-term, automate some of this, because everyone's talking the same way, using the same metrics to describe the same things.

Ed Bellis: Oh yeah. I mean, and to be very clear, what we do at Kenna today is rely on some of those SCAP standards even today. And we probably couldn't do everything that we do today without some of those early on efforts that came out of that.

Dan Mellinger: Awesome. Well, I mean, that brings us to what, 2010, SCAP version 1.0. And I know Kenna started roughly around 2010. So that project, it looks like, spurred some kind of thinking in your head and you decided to create a solution. So what was, I guess, the founding of Kenna, just real quick, based off of all this?

Ed Bellis: Yeah. So to go back, again, you're right. It was 2010, and basically it came out of the work I was doing when I was the chief information security officer over at Orbitz prior to this. And it was ultimately frustrations around the vulnerability management world. So I'd gone through the vulnerability assessment cycle. I'd gone through the vulnerability management cycle. So going back to the stages of pain that we were referring to, you went from ignorance is bliss to, where are my vulnerabilities? Oh, yep, they're everywhere. Oh my God, there's a ton of them. How do I go about fixing all these? Whoops. You know what, it's impossible to fix all these. No one has the resources to fix all of these, so which ones should I fix? Oh, well, we could rely on things like CVSS. One nightmare after the other, you end up realizing that CVSS is all skewed towards high, which meant that, okay, now I don't have to fix 100% of them, I have to fix 80% of them. That still can't be done, so how do I make the right decisions about this stuff? And what matters in terms of not only the vulnerability, but my own environment? I have other controls in my environment. I have things that are more important than other things in my environment, so how do I consider all of that? At the time I was talking to a bunch of my peers, asking them what they were doing to ultimately address this, and finding that they were doing a lot of the same things we were doing, which were massive spreadsheets or homegrown databases. And we did our own SCAP project at Orbitz as well. But in the end it was a whole lot of people and time, and we were still not sure if we were doing the right things. So that's when I called Jeff Heuer, who's our cofounder, and said, "Hey, we've got to go out and build something to solve this and figure all this out." Again, that was probably towards the end of 2010, and we went out to form what is now Kenna Security today.

Dan Mellinger: Huh, that's interesting. Well, in October 2011 we have the next big jump, with Anton Chuvakin. So he's a friend of ours. We talk to him quite frequently. He was a Gartner analyst at this time. I believe he's now at Google, correct?

Ed Bellis: Yeah, that's correct. He walks over to-

Dan Mellinger: Chronicle.

Ed Bellis: Chronicle. That's right.

Dan Mellinger: Yeah. So Anton kind of laid out this prioritization and scoring challenge, and this is way back in 2011. It's clear you guys had been chatting with him a little bit; he'd been interacting with some of the content that you were putting out on the SCAP projects. But I'll read a quick quote, because I think this is really what moves the industry towards RBVM externally. He wrote: "Vulnerability prioritization for remediation presents the critical problem to many organizations operating the scanning tools. The volume of data from enterprise scanning for vulnerabilities and configuration weaknesses is growing due to both additional network segments and new asset types." So he's saying this problem is big, it's getting bigger, there's no way humans can do this; we need a better way to prioritize efforts effectively. And so that was 2011. Can you talk about some of the work between that and, say, 2015?

Ed Bellis: Sure. Yeah. And we had been talking on and off with Anton, as you mentioned. Going back, I think I first met him at a security conference in the, I don't know, 2007, 2008 timeframe or something like that. I think at the time, he was doing a lot of work in the PCI world, and I was doing a lot of work in the vulnerability management space, but we often would meet and see very eye to eye. In fact, we had the honor of co-authoring, with many other authors actually, a book of which he wrote a chapter, I wrote a chapter, Mark Curphey, a bunch of other folks. And that was actually one of the weird ones, where I wrote a chapter on payment card security of all things. He may have had something to do with that, but yeah. So we've been in touch on and off over the years, and talked a lot about this problem, and saw very much eye to eye on the fact that there's a lot of pain in the, yeah, I can't fix everything, so how do I go about prioritizing these things? Again, so 2011 through, say, 2015, I would say for me personally, working through this problem space, there are kind of table stakes that you have to, as a vendor certainly, come out and say: "Look, before I can prioritize everything, I have to know about everything. And in order for me to know about everything, when you're working in a large enterprise, that means you've got to integrate with a lot of different tools. You've got to pull in a lot of different data from a lot of different sources." Some of those are the vulnerability assessment guys. Finding those vulnerabilities is obviously key. You can't prioritize the vulnerability if you don't know about it. But it's also understanding the assets, and pulling in all of that information and understanding a bit about how important this asset is. What does it do? Are there mitigating controls, compensating controls upstream at the network layer or somewhere else that would prevent something like that?
These are all things that you need to know about. In fact, I would say we skipped over it a little bit, but in the mid-to-late two thousands you started to see a different, kind of network-centric approach taken to prioritizing vulnerabilities. So folks like Skybox and RedSeal started to come out and add vulnerability data into a lot of their network mapping tools, so they could start to understand firewall rules and router rules and different ACLs and stuff at the network layer and say, "If I had this vulnerability compromised, if I was able to exploit it, where else could I go on the network?" So that was one way to look at it. It was a bit complicated at times for users, and I'd say a lot of folks that were consumers of that space, while it was very valuable, I would also categorize as the security 1%. It's the people who have a lot of resources in the first place that are going to be able to implement that sort of thing. But being able to ultimately aggregate that data so you could ultimately prioritize it, a lot of the work we were doing was that aggregation piece, taking advantage of some of those SCAP standards to deduplicate and meld things together. And then ultimately what you wanted to do is say, okay, so what else would I use to prioritize what I'm working on? And so you've got to go out and get that data as well. And obviously one of the things that comes to mind is, well, what the heck are people attacking? So what's the most likely thing to happen? And is it because it's easy, because it's spray and pray, because whatever the case may be, why are people attacking this? So being able to understand that, to measure some form of likelihood, and you're going to look at things like, what are the attackers doing? What are the upstream compensating controls in place that maybe would make that likelihood go down or up? And then if it did happen, what's the impact?
And so ultimately understanding those assets and how important they are and what they do, that all comes into play. So a lot of work went into those first few years of building out and understanding all of those different data sources, being able to consume them all, being able to deduplicate them all, and ultimately using that data to get into that risk-based vulnerability management process.

Dan Mellinger: Interesting. So it seems like '99 through roughly 2010, you guys were figuring out how to universally describe vulnerabilities themselves, taking that vulnerability perspective and figuring out how to share this data broadly and universally. And now we get into 2011 and you're starting to look for the other perspectives: the asset perspective, the activity perspective, the network perspective. So pulling in different perspectives to fill in the gaps between the technical components of a vulnerability and the actual execution of an exploit in the wild. Correct?

Ed Bellis: Yeah, absolutely. I mean, the way I think about it, and again, I could go back to my Orbitz days and say, well, we had a bunch of people doing this. What were they looking at? What went through their mind to say this is important, or this is not important? And that's ultimately what you want to do in an automated fashion as much as possible. There'll be some things where a person just needs to be involved, but if they don't, let's not involve them, let's collect that data and then do something with that data.

Dan Mellinger: Okay. What was the first thing that you were able to automate or you tried automating?

Ed Bellis: So clearly, I would say again, it goes back to the SCAP stuff. So even then, just being able to aggregate what we were doing at Orbitz, because we had multiple scanners, as an example. We had scanners on the inside, we had scanners externally, and then we had assessors that came in and they used something else altogether. I want to make a decision as a CISO to say, "Okay, of the things that we're going to work on, what's the most important?" I don't want to say what's the most important in this list, and then what's the most important in this other list. I want to say what's the most important thing, and again, get whoever's responsible for remediating that working on that. So the SCAP stuff, being able to use things like CVE, CVSS, being able to automate all of the collection of this data and dumping it, in our case, into just a common database to dedupe. There was certainly a lot of automation going on back then too.
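The aggregation-and-dedupe step Ed describes, multiple scanners dumped into one store and keyed on common SCAP identifiers, might look something like this minimal sketch. The scanner names, assets, and findings are hypothetical; real feeds would come from each tool's export or API.

```python
# Rough sketch of cross-scanner deduplication using common identifiers (CVE).
# Two hypothetical scanners report overlapping findings; keying on
# (asset, CVE) collapses duplicates into one record per vulnerability instance.

internal_scan = [
    {"asset": "web01", "cve": "CVE-2010-0001", "source": "scanner-a"},
    {"asset": "web01", "cve": "CVE-2010-0002", "source": "scanner-a"},
]
external_scan = [
    {"asset": "web01", "cve": "CVE-2010-0001", "source": "scanner-b"},  # duplicate finding
    {"asset": "db01",  "cve": "CVE-2010-0003", "source": "scanner-b"},
]

merged = {}
for finding in internal_scan + external_scan:
    key = (finding["asset"], finding["cve"])      # CVE + asset = one vuln instance
    merged.setdefault(key, []).append(finding["source"])

print(len(merged))  # 3 unique findings from 4 raw records
```

Because both tools speak CVE, the merge needs no vendor-specific mapping, which is exactly the value of the shared SCAP vocabulary.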

Dan Mellinger: Interesting. Cool. Well, I think the next piece laid some of the foundation for this process. We talked about it in 2005: VA and scan vendors are starting to apply vuln management capabilities to things, and you see that proliferating all the way through 2015. And then our good friend Anton, and Augusto Barros, actually came out with a framework for implementing vulnerability management. And I thought this was interesting because it lays out some of the concepts that we ultimately end up researching and putting some hard metrics to later on to guide the process. The key takeaway from the report is that vulnerability management practices, including VA, represent a proactive layer of enterprise cyber defense, but it remains very challenging to large organizations. And we would actually find also small ones. And this guidance represents a structured approach to VM best practices. And the interesting step forward is they laid out a process workflow for VM that starts to provide a seed for the automation of this kind of task: what to look for, how to do it, in what order. But we start to see some interesting concepts as well. The maximum patching speed is what they call it, and we later end up actually finding out there is a relative remediation capacity, and we can touch on that a little bit later. I think it's interesting that they lay out the concept of SLA-based, or setting context-based, SLAs, which we just started really looking at and rolling out as a company as well. And so you start to see some of these conceptual ideas form in mid-2015 and 2016. And then, to your point, we go into 2016: threat-centric vulnerability remediation prioritization. Wow, that's a mouthful. From Oliver Rochford, and they're talking about different models. So the asset-centric model of remediation, which is gradual risk reduction. So business criticality, which ultimately becomes a big driving factor.
Vulnerability-centric remediation, which is also gradual risk reduction. So criticality of vulnerability, which we've been talking about for a while, and CVSS tries to lay out. And then the threat-centric view, which is imminent threat elimination as they call it. So active targets.

Ed Bellis: Yup.

Dan Mellinger: In 2016, Ed, they're talking about basically breaking out separate models. What was your feeling at the time? Was this the way to go, or were you already talking about combining models? I'm curious to get your take.

Ed Bellis: I think those are all good steps forward, and I wouldn't argue with any of them, other than to take our step all the way back to the beginning of this conversation, which is to say, ultimately what I want to know is what's the likelihood of this thing happening, and if it happens, how bad is it? Now, there's a lot that goes into that, and some of it is that threat-centric view. And by the way, there are really two different approaches you have to think about there too. So even taking a threat-centric view, there is the likely thing to happen, which is often the targets of opportunity, and things that I can plan for to say, hey, someone's going to come after me because I have this vulnerability and it's exposed to the internet or something like that. And then there are just targeted attacks, which is they're coming after you because of who you are, which is a different thing that you have to deal with. And often a much more expensive thing, frankly, to deal with the targeted attack than the target of opportunity. So keep both of those in mind as you're going through that threat-centric view. I mean, Adam Shostack has done a ton of different stuff on threat modeling. And going through different threat modeling exercises, that's all part of it, which is to think of not only how you could be attacked, but who is coming after you and why. And thinking through all of your different exposures, your attack surface, and making sure that you've plugged those things, which as I mentioned, can be a much more time-intensive, resource-intensive and expensive endeavor to protect yourself against.

Dan Mellinger: That makes a ton of sense. And I do want to take an aside: Adam Shostack is one of the few people who I think knows more about Star Wars lore, both Legends and canon, than I do. So we had a really, really good conversation a couple of years ago at Black Hat. Anyway.

Ed Bellis: That's important though. And just to be clear, that means that you do know a lot more than Wade Baker, which is also impressive.

Dan Mellinger: Oh, man. We'll have to have a little nerd off and see what happens. He has way more swag than I do.

Ed Bellis: He can't buy his way out of it, though.

Dan Mellinger: It's true. We should have a little competition between all the Star Wars fans here.

Ed Bellis: So yeah. I mean, to get back to what we were talking about, there's the threat-centric view, and I think that is the right view, but again, there are multiple layers to that. There are multiple layers to what Anton and Augusto were talking about, which is really operationalizing this. And they talk about things that we ultimately start thinking about: capacity, and how much an organization can do in terms of remediation. Which, by the way, you should think about in your risk-based vulnerability management approach as well, because there are things where maybe they do or they don't eliminate much risk, but the cost of doing so is negligible. So we looked at, in volume five, which I think we're going to get into with some of the P2P reports, the capacity for people to actually remediate a bunch of Microsoft vulnerabilities all at once. Some of those vulnerabilities probably aren't that risky at all. Some of them are extremely risky. But if they can operationalize deploying the patch through SCCM, and push that patch out, and it's little to no cost for them to do it, then do it. And then you've eliminated from the pile the stuff that you actually have to put a lot more effort into prioritizing, going through that RBVM approach. If you can shrink that pile and then take that RBVM approach, all the better.

Dan Mellinger: Hmm. Interesting, and that makes total sense. And that actually leads us to January 2018, where we start to see the first use of risk-based in a description from a Gartner report. So this was a risk-based approach to manage enterprise vulnerabilities in highly regulated industries. And I think this report really moved forward this concept of looking at risk from a business perspective. So some of their takeaways were for CIOs. They're talking to the IT side of the house, not just security necessarily, but about engaging with business stakeholders that were nontechnical to understand the risk. They may not know that a certain R&D file is worth way more than the rest of them, or that there's some proprietary technology they're using on the finance side to forecast and plan out their operations, things like that. So what was your take on this emergence of starting to engage really on the business context?

Ed Bellis: I'd say finally, this is what I've been talking about for... it felt like I'd been talking about it for years. Absolutely. Obviously, that's what we as security professionals are ultimately paid to do: understand the risk, whether it's vulnerability management or anything else quite frankly, figure out the cost to mitigate that risk, and decide whether that cost is worth it to the business. Where do we want our risk threshold to be? We talked a little bit about SLAs; feeding risk into an SLA is extremely important. I remember, and this is going to age me again, a Black Hat talk that I went to, I want to say, don't quote me on this, maybe 2008 or 2009. It was the last talk of the show for that particular track, and it was Alex Hutton and David Mortman. They basically walked through an exercise of, okay, here are all of the vulnerabilities that were disclosed at this year's Black Hat, and we're going to walk you through why you don't have to worry about any of these. It was a risk-based approach, and ultimately they were saying, yeah, you might have to worry about this in a couple, two or three years, but there's a bunch of other stuff out here that you have to worry about right now that's not being talked about at Black Hat, because it's so common and it's so likely to happen to you. We'll look it up and put it in the show notes, but it was a great talk about implementing risk, and specifically in that case, risk as it related to security vulnerabilities. But yeah, you have to be able to take the business context into account and understand your risk appetite as a business, because a too-big-to-fail bank is going to have a very different risk appetite than the mom-and-pop down the street that maybe doesn't face targeted attacks but presents plenty of targets of opportunity, and everything in between.
So vertical matters, size matters, what you're protecting matters, how your industry is regulated matters. All of this context about your business and what it is that you do as a business matters when it comes to vulnerability management.

Dan Mellinger: And I think that's a great transition. I would say that, just from my research, leading up to 2018 we're trying to identify the scope of the problem, create common terminology, and lay down processes, foundations, and these conceptual, theoretical strategies for solving these challenges. Then 2018 flips, and all of a sudden we start trying to measure performance, actually putting metrics to the realities of these things. We're seeing that in external research, like the Gartner risk-based approach piece. And then in May 2018, we came out with our first volume of Prioritization to Prediction. In that one, we found that most reported vulnerabilities aren't used by hackers: the vast majority, 77%, never have exploits developed, and less than 2% are actively used in attacks. That was one of the big takeaways from that report. The second one is that we actually looked at the effectiveness of machine learning models, applying machine learning to help automate prioritization and comparing that against 15 other strategies. I know we did a P2P volume one deep dive, and we will do the rest of the P2P reports as well. But that research led us to go brief some of the Gartner analysts on it and work with Anton, and this is where Craig Lawson comes into the picture as well. He writes a piece called "Implement a Risk-Based Approach to Vulnerability Management." So where are we now?

Ed Bellis: Yeah. I mean, that's a mouthful and a lot, but one of the things that... and I was just listening to the podcast with you and Michael Roytman that you most recently published, and you guys talk a little bit about solving a problem that ultimately creates another problem. One of the big things we learned and discovered was, as we were saying, "Hey, you need to take a risk-based approach to vulnerability management, and here's how and why," and all of that, people would do it, and then they'd come back and say, "Oh, by the way, we're measuring all of our remediation teams based on the risk score, and we're going to bonus them, or do something, based on what their risk is." And that created a little bit of concern for us. I was like, "Wait, you're doing what? Can you tell us a little bit more about what you're doing?"

Dan Mellinger: You're going to give bonuses based off risk scores?

Ed Bellis: Yeah. So part of the problem with that is your risk is not entirely in the hands of the remediation team, or even your business, frankly. We talked about the threat-centric approach to risk management. So what happens if a threat actor goes out and does something tomorrow that suddenly creates risk within the group of assets you're responsible for? Does that mean you don't get a bonus this year? How does that work? That's when we started on this, and we worked quite a bit with the Cyentia team on it too, as part of some of the P2P research, which we'll get into: measuring those performance metrics, where risk feeds performance. You want risk to ultimately be part of the equation when you measure how a team is performing, but your risk, the likelihood and impact of something happening to your environment, in and of itself is not performance. So how do we measure the performance of these remediation teams? We looked at the capacity that you talked about earlier, but also velocity. And then, we didn't talk about it much in this episode, coverage and efficiency; a lot of the P2P reports are built around coverage and efficiency, meaning: all of this stuff that you are fixing, does it matter ultimately?
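The coverage and efficiency pair Ed describes can be sketched as simple set arithmetic. This is a minimal illustration, not code from the P2P reports; the function names and sample CVE identifiers are made up for the example.

```python
# Sketch of the coverage/efficiency idea from the P2P reports:
# coverage asks "of the vulnerabilities that matter, how many did we fix?",
# efficiency asks "of the vulnerabilities we fixed, how many mattered?".

def coverage(remediated: set, high_risk: set) -> float:
    """Fraction of high-risk (e.g. exploited-in-the-wild) vulns remediated."""
    if not high_risk:
        return 1.0
    return len(remediated & high_risk) / len(high_risk)

def efficiency(remediated: set, high_risk: set) -> float:
    """Fraction of remediation effort spent on vulns that mattered."""
    if not remediated:
        return 1.0
    return len(remediated & high_risk) / len(remediated)

# Hypothetical example: six CVEs closed, four CVEs known to be exploited.
remediated = {"CVE-A", "CVE-B", "CVE-C", "CVE-D", "CVE-E", "CVE-F"}
high_risk = {"CVE-B", "CVE-C", "CVE-G", "CVE-H"}

print(coverage(remediated, high_risk))    # 2 of 4 exploited CVEs fixed -> 0.5
print(efficiency(remediated, high_risk))  # 2 of 6 fixes mattered -> ~0.33
```

The tension between the two is the whole point: fixing everything maximizes coverage but tanks efficiency, while fixing only sure things does the reverse, which is why the reports track both.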

Dan Mellinger: Yeah. That makes a ton of sense. And just so everyone else knows, I consider this the era of measurement. So for the rest of this podcast, we're going to do some high-level passes over the P2P and other research that's come out in the wake of really identifying what some of these metrics are and how they matter to actual performance. It'll get a little technical, but we're not going to go super deep, because we do want to do deep dives on each one of these reports, so we'll rapid-fire through them. So January 2019, P2P volume two. In volume one, we were looking at everything through the lens of CVEs, so individual vulnerabilities. In volume two, we start looking at actual enterprise environments: what vulnerabilities exist in enterprises? Could you do some highlights of that report, Ed?

Ed Bellis: Yeah. You talked earlier about the very small percentage of those vulnerabilities that end up with an exploit or end up being exploited. That's looking at the broad base of all of the CVEs in the National Vulnerability Database, but what if you didn't have to do that? What if you started with a subset and said, "Okay, if I don't have those vulnerabilities, I don't care about those vulnerabilities. So which ones do I have here in this enterprise environment? Now, of those, which ones are being weaponized? Which ones are being exploited?" You start with a smaller pile, like we talked about earlier, and then you make that even smaller. So that's a way volume two differed greatly from volume one. Volume one was almost an academic exercise, in the sense that we were looking at all CVEs that exist, but most of those we don't even care about. Volume two got much more practical and said, "Okay, you're managing an enterprise environment. What do you need to do? How do you need to prioritize? And what do you need to think about?" That's what volume two is about.

Dan Mellinger: Okay. Yeah, and I'll call out some of the key stats there. That 2% of active exploits seen in the wild actually grows to 5%, which makes sense, because people in general probably aren't trying to hack things that don't actually exist in businesses where they can make some profit. We also found that about a third, 32.3%, of vulnerabilities are remediated within 30 days of discovery. So people are remediating, just not necessarily the right ones, and half of all vulnerabilities aren't patched within 90 days. And then, not a big surprise, but of the 10 largest software vendors within enterprise environments, three were responsible for about 70% of the vulnerabilities, and we'll get back to some cool stuff about that a little bit later as well. So that was January 2019 and volume two; then in March 2019, we came out with volume three, a rapid turn right around RSA that year. In volume three, we start trying to look at the difference between the average, middle, and bottom quartiles and these "top performers." So what are the best enterprises doing compared to the rest? Could you give us a little overview on that?

Ed Bellis: Yeah. So that's where we really started to get into this performance measurement stuff. It's like, okay, wait, let's not just consider risk. Let's consider a lot of these other things about performance. And we looked at capacity: how many vulnerabilities are you capable of remediating in a 30-day timeframe? One of the remarkable things that came out of that was that it didn't matter the size of the organization. Whether it was a small SMB or the largest Fortune 10 organization in the world, they pretty much fixed one in 10, regardless. 10%. Then you get into the top performers, and those top performers did do a little bit better, but even the very best of the best were fixing only about 25% of those vulnerabilities. So 75% still remained open, which gets back to all of our early conversations: you can't possibly fix everything, so you've got to take this risk-based approach.

Dan Mellinger: Anton was saying that there's probably a patch capacity people could hit, and we actually quantified that, right?

Ed Bellis: Yup. And it was amazingly linear. I couldn't believe it; again, it didn't matter the size of the organization, it was one in 10 almost across the board. So there's that. We also looked at velocity: how quickly are people able to eliminate any given vulnerability from their environment? And while we saw, for certain vendors like Microsoft and others, a very quick out-of-the-gate velocity to remediate a bunch of stuff, there is a long tail of things that just never go away at all. Which might be okay if you're taking that risk-based approach; it depends on what the vulnerability is, but it also depends on where the vulnerability is. And again, we looked a lot at coverage and efficiency across the board for all of these folks. Those are very quantitative metrics that we're looking at in volume three.

Dan Mellinger: Awesome. Now we're jumping to volume four, August 2019. A couple of big things happen. We start looking at the correlating factors that lead to success, so we're going down a rabbit hole. Now we're looking at factors like VM maturity, pairing survey data from these actual enterprises with the actual performance measures we can track in the source data, to see what matters. And it looks like VM maturity rating, or perceived maturity rating, was one of the biggest correlating factors, but we also looked at things like where assets were hosted and how that impacts coverage, efficiency, and velocity on high-risk vulnerabilities. So we're growing this gamut of metrics that we can start to measure performance with. So Ed, just real quick, feedback on the nice takeaway grid on that one.

Ed Bellis: Yeah. As you mentioned, real quickly here, we went from, okay, I've got all of these quantitative metrics to say these are the good performers, these are the low performers, and here are the average folks, but we didn't know why. We didn't understand what they were doing behind the scenes that made them good or not good. So we implemented this survey and started asking them a bunch of questions: what do they use for, I don't know, patching? Do they have automated tools in place? What do they do for remediation? How many teams are involved in remediation? How complex is it? What are their budgets like? What is their self-perception, as you mentioned with the VM maturity rating? I was actually a little surprised by this. I'm always leery, quite frankly, of survey data, but people were largely in line and self-aware about where they were. There were a few outliers that thought they were great and weren't so great, and people that thought they were bad and were actually pretty good. But for the most part, they pegged exactly where they were. It was interesting to see; there were some surprises in there around complexity and things like that, and how many teams were involved. I was actually thinking the fewer teams or the fewer people involved in remediation, the quicker you would be in terms of velocity, but that was not the case at all. The more people you got involved, up to a point, the faster you were, which was a surprise to me. But then you could always put the other hat on and say, "Oh, well," I mean, if you're just one guy or one team and you're responsible for security and administration and everything else, and you're wearing a bunch of different hats, I could see where you wouldn't be all that quick around vulnerability remediation either. So it spurred more questions, as most of those reports do.

Dan Mellinger: Absolutely. Well, and we get into it a little bit more as well, so I'm sure we'll keep digging. I think at Black Hat 2019 as well, Michael Roytman worked with Jay Jacobs and Ben Edwards at the Cyentia Institute, along with Sasha Romanosky, who was actually on the original team that developed CVSS and is currently at RAND Corp, and Idris Adjerid at Virginia Tech. They worked out and measured the Exploit Prediction Scoring System, EPSS. That was their take on applying machine learning to correlate the variables that ultimately lead to an exploit being developed and used within 12 months. So EPSS predicts the risk of exploitation for a given vulnerability, and it uses 16 public variables. Anyone can actually go look at these variables, and it's a pretty simple calculation overall. We will link to that paper, because you can literally implement it in a spreadsheet if you know what you're doing. We created a calculator, so you can go play around and see how that works. And I know they're actively working on that as well. And then that brings us up to the current day. In April 2020, we did the latest, volume five, which takes more of an asset-centric view. Thus far, we've done a ton of measurement that was vulnerability-centric, so CVE-based; we've looked at enterprise-centric measures; and we've looked at performance measures to see how culture and organizational factors impact things. Now we're actually looking at the assets themselves. So could you do a nice highlight takeaway for that one as well?
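The "simple calculation" behind EPSS v1 is a logistic regression over its public variables, which is why it fits in a spreadsheet. Here is a hedged sketch of that shape: the feature names and coefficient values below are illustrative placeholders, not the published EPSS weights; see the EPSS paper linked in the show notes for the real model.

```python
import math

# EPSS v1-style scoring sketch: a logistic regression over public,
# mostly binary vulnerability features. INTERCEPT and COEFFICIENTS
# are HYPOTHETICAL values for illustration only.
INTERCEPT = -5.0
COEFFICIENTS = {
    "exploit_code_published": 2.2,
    "vendor_microsoft": 0.8,
    "remote_code_execution": 1.1,
}

def epss_style_score(features: dict) -> float:
    """Estimated probability of exploitation within 12 months:
    p = 1 / (1 + e^-(b0 + sum(bi * xi)))."""
    z = INTERCEPT + sum(
        COEFFICIENTS[name] * float(value)
        for name, value in features.items()
        if name in COEFFICIENTS
    )
    return 1.0 / (1.0 + math.exp(-z))

score = epss_style_score({
    "exploit_code_published": True,
    "vendor_microsoft": True,
    "remote_code_execution": False,
})
print(round(score, 3))  # sigmoid(-5.0 + 2.2 + 0.8) = sigmoid(-2.0) ≈ 0.119
```

Because the model is just an intercept plus a weighted sum pushed through a sigmoid, a spreadsheet row per CVE with one column per variable reproduces it exactly, which matches the calculator Dan mentions.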

Ed Bellis: Yeah. Ultimately, an asset-centric view is where you want to be; we talked about this. Just scoring a vulnerability, even with a risk-based score, is not how an organization ultimately needs to prioritize what it's doing. They need to understand how risky this asset, or group of assets, or organization is. That's what I want to understand. So in this case, we looked at the assets in terms of the platform vendors. What do Microsoft assets look like? What do Apple assets look like? What do the various Linux assets look like? And what do the network appliances and devices and IoT things look like? And ultimately, what's the risk there? What are the performance metrics? How easy or hard is it to keep those assets at a lower risk? So again, looking at coverage and efficiency, but also capacity and especially velocity. Some of the bigger takeaways probably weren't terribly surprising: Microsoft far and away had way more vulnerabilities, and more high-risk vulnerabilities, than any of the other platform vendors. Not a surprise. But they've also done so much work to make it so much easier for their customers to remediate and eliminate a lot of that risk, so the velocity that we saw among remediation teams working on those Microsoft assets was far and away faster than most. Mac OS X held its own against Microsoft as well. But the Linux folks, and certainly the appliance folks, were much slower in terms of remediation, though the volume was incredibly low on those too. We would see things where the vulnerability half-life of an appliance was, what, in the hundreds of days, near a year. But looking at the volume, they only had a handful of high-risk vulnerabilities to deal with in the first place.
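The half-life figure Ed cites can be back-of-the-envelope estimated if you assume remediation roughly follows exponential decay. This is a sketch of that assumption only, not the estimator used in the report, and the fleet numbers are made up.

```python
import math

# Vulnerability half-life sketch: assuming open instances decay as
# open(t) = open(0) * 0.5 ** (t / T), solve for the half-life T
# from two observations. All numbers below are hypothetical.

def estimate_half_life(open_at_start: int, open_later: int, days: float) -> float:
    """Solve open_later = open_at_start * 0.5 ** (days / T) for T (in days)."""
    fraction_remaining = open_later / open_at_start
    return days * math.log(0.5) / math.log(fraction_remaining)

# Example: an appliance fleet starts with 200 open instances of a
# vulnerability and still has 160 open after 90 days -- slow decay,
# putting the half-life in the hundreds of days.
print(round(estimate_half_life(200, 160, 90)))  # ≈ 280 days
```

A fast-moving Microsoft patch wave would show the opposite shape: most instances closed in the first month, so the same formula yields a half-life of days or weeks rather than most of a year.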

Dan Mellinger: That reminds me, I need to go patch my wireless router. It's been a while. Kind of like a couple of years.

Ed Bellis: Yeah, exactly. When's the last time you've done that?

Dan Mellinger: I don't remember. I'm going to do it right after this.

Ed Bellis: I can tell you, your half-life is probably at least in the hundreds of days.

Dan Mellinger: Awesome. Well, I think that brings us through today, and then as we sign off, I just wanted to ask: what's next? What do you see on the horizon? I know it started off slow, with years between major advancements, and then the last five years or so it's been process, metrics, performance. It's coming really, really fast and starting to really mature. We're also seeing a lot of other vendors enter the space, trying to do the same kinds of things, because people are recognizing that this is the solution to a very, very old challenge. So where are we today? What's next for risk-based vulnerability management?

Ed Bellis: Yeah. I think we hinted at a lot of what's next. We talked a bit about performance and things like that, but ultimately the goal is to be proactive about this. We talked about measuring and understanding risk, and then applying your resources to the stuff that drives risk down. You want that bang-for-your-buck approach, if you will: "Here are the resources I have; how much risk can I reduce, and how can I get down to an acceptable level for the org?" Because this has a bit of a threat-centric view to it, it's like, okay, what are attackers doing right now? Now I'm going to go respond to that. But ultimately you want to be proactive and do things before the attackers even start attacking. It takes time and resources to get to and ultimately remediate a set of vulnerabilities. Can I start working on it before there's an exploit in the wild, before something is weaponized, before there's a proof of concept? Can I start to predict and make sense of: hey, there's no proof of concept for this vulnerability yet, but I think there's going to be one, and in the very near future, so I'm going to start deploying resources to remediate it now? I really believe the future of all of this is to become much more proactive, so that by the time attackers start creating proofs of concept and exploiting vulnerabilities in the wild, you've already remediated.

Dan Mellinger: Let's hope we all get there. Well, I think that was a good and pretty thorough history of risk-based vulnerability management. Ed, thank you for joining us. Like we said, we're going to have a ton of resources linked for this one, so please check out the podcast page if you want to look at any of the reports yourself, all that good stuff. And if you have any questions, feel free to DM me on Twitter; I'm @dtmellinger. Thanks for joining us. Have a nice day.
