Analyzing Vulnerability Remediation Strategies w/ Cyentia Institute
The first in a multi-part dive into the Prioritization to Prediction (P2P) research series by Kenna Security and the Cyentia Institute - guests Ed Bellis and Wade Baker discuss P2P Volume 1, which quantifies the performance of vulnerability prioritization and remediation strategies for the very first time.
Dan M.: Today on Security Science, we're kicking off our multi-part dive into the Prioritization to Prediction research series. Thank you for joining Security Science. I'm Dan Mellinger and today I have the Prince of risk-based vulnerability management and Kenna Security co-founder and CTO, Ed Bellis. Thanks for joining Ed.
Ed Bellis: Thanks for having me again, Dan. Pleasure to be here.
Dan M.: Awesome. We also have a special guest today to discuss the topic of our research series. So we have the Verizon DBIR creator, RSA Conference and FAIR Institute advisory board member, Virginia Tech professor, partner and co-founder of the Cyentia Institute and fellow Star Wars nerd, Dr. Wade Baker. Hey Wade.
Dr. Wade Baker: Hey, the last one I think is most important, the Star Wars piece.
Dan M.: And just for the record, Wade does have, what, a Millennium Falcon, a TIE fighter, BB-8, a Rebel Blockade Runner...
Dr. Wade Baker: Two TIE fighters.
Dan M.: Two TIE fighters.
Ed Bellis: I see an X-wing and what looks like the Millennium Falcon.
Dr. Wade Baker: Snowspeeder down here, kind of crosstalk screen.
Dan M.: Oh man, I am very jealous of that collection. Well, thanks for joining us guys. Just real quick, I know that we're kicking off this kind of history and overview of Prioritization to Prediction volume one, but I thought we'd get started with a little bit of background on how you guys know each other, how the relationship started between Cyentia and Kenna. I'm just going to go ahead and assume that Ed swiped right on Wade's profile and that's how you guys met.
Dr. Wade Baker: I think it was a dating app called DD Sec or something like that. I don't know. Security science dating app.
Ed Bellis: I'm not sure if I remember, and that sounds bad, but I mean, we've met several times, Wade and I, over the years, I think at various events. I think we even spoke together on some panels, maybe the Securosis thing way back when... but yeah, I'm not sure the first time I swiped right. I'm not sure I recall.
Dr. Wade Baker: It all blends in together. Yeah, that's a good question. I can't remember when we first started talking about this. Obviously this has been within the last few years, because Cyentia, we started in 2016, 2017, really actually doing anything. So it's within that timeframe. I think you probably spoke to Jay first. Maybe you guys started working a little bit and that spawned all the awesome research that we've done since then together.
Ed Bellis: Yeah, I do remember, both myself and Michael Roytman having a couple of conversations with Jay Jacobs very early on.
Dr. Wade Baker: inaudible you probably swiped right with Jay first and then...
Ed Bellis: I think Mike was probably swiped with Jay.
Dan M.: Oh God, this is going to keep going long crosstalk-
Dr. Wade Baker: Yeah, it's going to get worse and worse.
Dan M.: ... past this podcast, isn't it? And real quick, Wade, can you give us a little overview of the Cyentia Institute? What you guys do and how you guys do it a little bit?
Dr. Wade Baker: Yeah, we are a security data science and research firm. I mean, our short story is we really enjoy doing data analysis and telling data-driven stories. And a lot of that goes back to, you mentioned in the intro there that I was involved with the Data Breach Investigations Report at Verizon. In the context of that, if you open that report and look at all the vendors that contribute to that thing, there's a huge number. And Jay, my partner at Cyentia, was with me there at Verizon. And we just really, really enjoyed the process of learning from vendor-supplied information, trying to figure out what it informed about cyber risk management, and we always thought, okay, if we did this outside of Verizon, what would it look like? That's how Cyentia was born.
Ed Bellis: I understand you are the godfather of the DBIR.
Dr. Wade Baker: In the same way that you're the Prince of risk based vulnerability management. I think those two titles are similar. They're perfect.
Dan M.: Very, very true. Well, if anyone is listening, you should definitely check out the Cyentia Institute's research library. It's a really, really cool research resource that has a bunch of stuff. You guys have been cataloging all this research for, what, a couple of years now, at least, right?
Dr. Wade Baker: It has been a couple of years, and it's admittedly a side project, so it hasn't gotten the focused love and attention like the Prioritization to Prediction series has gotten. But yeah, I think we're getting near 2,500 reports. And the unique thing about that is these are all reports created on the industry side. We're not doing academic literature or anything like that. And these are not marketing documents or anything like that; at least we try to have a criterion: it's got to be some kind of study that has statistics, findings, charts, et cetera in there. And when you look at the numbers, you realize that there's a lot of good research being put out in the industry.
Dan M.: Absolutely, cool. Well, that brings us to Prioritization to Prediction. So that's a joint report that's done on behalf of Kenna Security in partnership with Cyentia, and we're on volume five now, spanning two and a half years of research. So we're going to be digging into volume one. And Ed, I think the biggest thing that struck me when we were working on volume one was there wasn't a lot of this data available. Like, there wasn't a ton of this research that had been done prior. So Ed, would you mind just giving us an overview of what some of the goals were when we first started this one?
Ed Bellis: Sure. And I do recall having some conversations with Wade about this, and he's like, really, are you sure? Hasn't this already been done? But we were basically looking at, and we started to see this early on, even within Kenna, prior to doing some of the research, that there's an overwhelming number of CVEs, or Common Vulnerabilities and Exposures, within the National Vulnerability Database. Every year there tends to be more and more. And then we see, obviously, on the asset side, thousands if not millions of instances of these CVEs across many, many assets. And it just became very apparent that of all of the vulnerabilities that are out there, very few seemed worthy of a lot of attention. And by that, I mean, I've got to jump on something, I need to remediate this right away, it's likely to end in some sort of exploitation event if I don't. So we wanted to kind of dig into this and figure out, well, what does end in exploitation events? Could we predict or model out the characteristics of a vulnerability that could ultimately lead to one of those exploitation events? And then we see a lot of different remediation strategies across the industry on how people are actually going about fixing or determining what vulnerabilities to fix, because it was very apparent that nobody was able to fix all of them. And we wanted to see how that related. Are they doing the right things? How do they compare and contrast to each other, are some better than others? If so, what made them good or bad? That's kind of the gist of when we really got started on this series, and really some of the questions that we set out to answer in volume one.
Dan M.: Yeah. That makes a ton of sense. And that was, I guess, the first step down that rabbit hole as well, now that we're, what, five volumes deep. So I guess let's start with the data, right? You brought it up. So end of 2017, I believe, is inaudible one, the data stopped being collected from CVE. So I have some notes here. It looks like there were over 120,000 total Common Vulnerabilities and Exposures, so the CVE list that existed at that time. Now, at the end of 2019, there were over 136,000. So that shows how much it's grown in the intervening years. But one of the quotes I found in the report that was really cool, and I just wanted to read this out because it is pretty interesting, I believe Wade penned this one: "The CVE list is neither comprehensive nor perfect. Many vulnerabilities are unknown, undisclosed or otherwise never assigned a CVE ID. Furthermore, CVE details are subject to biases, errors and omissions." So I thought that was interesting, because there's a whole section that goes over data sources and some of the enrichment that goes into it. So Wade, do you mind taking a stab at some of the additional information that helped kind of classify the data for us?
Dr. Wade Baker: Yeah. And partly that statement was just trying to be transparent about what we were doing because I love the security industry, but it tends to be kind of a skeptical lot and which-
Dan M.: You don't say.
Dr. Wade Baker: ... is a good thing, right. I mean, that kind of comes with the territory, I think, given the mindset that many have. So we do base a lot of the analysis around CVE, and the objection someone raises is, "Oh, well, there's a lot of other vulnerabilities that aren't on the CVE list." That's true, but the fact of the matter is, when you talk about the lion's share of vulnerability management, I think we've measured it since then, something like between 95 and 99% of the scanned vulnerabilities that are detected by any scanner that Kenna supports have a CVE. So again, it's most of them, but not all of them. But there's a lot of things around the CVE process and tooling that really go in that enrichment category. You have, of course, the National Vulnerability Database. I mean, I wish other areas of security would document and collect information like CVE/NVD has over the last 20 years. That has been one of the longest running data collection and sharing projects in our industry, and it shows. There's a lot of people that contribute to it, and no, it's not perfect, not completely comprehensive, like we said, but you can do a lot of really, really good stuff with it. And it's a great resource for people that are managing vulnerabilities. And you have other kinds of things surrounding that. There's the scoring system, CVSS, that has been in place for some time. That's one of the things I'm sure we'll talk about a little bit later. It's also not perfect. We find a lot of issues with what it's trying to accomplish and how it goes about it, but it's still there and it's a resource. CPE is great for the equipment that's associated with these vulnerabilities. It allows us to do things like, hey, Microsoft products have more vulnerabilities than other operating systems or product types, which is what we dug into a lot in volume five.
Then with Metasploit and Exploit DB, you've got not just the vulnerabilities themselves, but the signals that we use to know which of the vulnerabilities have been exploited. Because the thing that we really focus on in these reports is that subset of vulnerabilities that we know have been weaponized or are being actively used in the wild to compromise organizations; clearly we want to prioritize those. So how do we know that? Most organizations don't just have that working knowledge, right? We can all quote, oh yeah, inaudible exploited BlueKeep. And there's certain ones that rise to that level, but for the majority of them, you wouldn't know if it weren't for these other types of efforts tracking that kind of information and sharing it.
Dan M.: That's actually a really good lead-in as well. So in the first report we did this inaudible life cycle, right? So inaudible is created, it's discovered, right, people find out about it. It's disclosed, which means typically it's published in the CVE database. There's exploit code developed, which we see can happen before or after that published date. There's actual exploitation, which can be hard to track down, when it's actually exploited and we can actually see that in the wild. And then the detection signature is typically how we figure out it's been exploited in the wild. So could you go over a couple of the challenges to collecting this kind of vulnerability data? I know there's a lot, and you already talked about kind of being transparent with... it's hard to get all these definitions. We have to work with the data that we have. What are some of those challenges?
Dr. Wade Baker: Yeah. I think this is really interesting. I already mentioned that we really enjoy analyzing data and telling stories and that kind of thing. That's what we do at Cyentia. But I find this one particularly interesting because the data that we're using comes from so many different places. And if you just kind of walk through that chain, right, you've got a vulnerability that's discovered. That's some security researcher, or whoever, finding a vulnerability. It's a person, and they report this. So then it goes into CVE. That's one dataset. Exploit code comes from somewhere totally different. Someone sees that, and then they create an exploit that they... whether they put that code on GitHub or wherever it lives, it's, "Hey, here's a working exploit." Sometimes those are "white hats." Sometimes those are the other side of the fence, using it for malicious purposes, but that code exists in lots of different places. And we need to figure out where it is so that we can know that vulnerability has been exploited. And then it's a completely different tool set to know that that exploit has been used in the wild, because now you're talking about organizations or entities that have sensors on the Internet, IDS, IPS, and can see that, "Hey, this is being used." And a lot of that tends to be, say, the network providers and those types of organizations, or individual companies as well. And then there's the rolling out of detection signatures. You would not know that an exploit has been used if the detection signatures to recognize it weren't created and deployed by all the tool vendors out there. So when you look at that as a chain of what all has to happen so that we can answer a simple question of, is this exploit being used in the wild, it's pretty amazing. There's a lot that goes into that.
Dan M.: I've never heard it quite laid out with that much complexity that I could understand. So when you put it like that, yes, that is an extraordinary chain of events and data that needs to be correlated for every one piece of information you want to find. Right?
Dr. Wade Baker: Right.
Dan M.: What that results in ultimately or hopefully is this kind of case for prioritization. So I think this is a good spot where we kind of call out, I guess, the break points for what matters to us and what doesn't so much. So Ed, do you mind doing a kind of overview real quick?
Ed Bellis: Yeah. Absolutely. So one of the things that we call out, certainly in volume one, is what we categorized as a high-risk vulnerability. That's essentially a vulnerability that either has some sort of weaponized code, whether that be a POC that sits in something like Exploit DB, or something that's a little bit more point-click-and-shoot like Metasploit or some of the other black hat kits and different things that we were tracking, or that we're seeing the exploit in some way used in the wild. Sometimes those intersected; in fact, oftentimes, as we'll find out, they did, meaning we saw both of those cases, but sometimes they did not. So we wanted to figure out, right, for all of these vulnerabilities that sit within the National Vulnerability Database, how many of them are being exploited in the wild? How many of them have some sort of weaponization, some sort of code that's out there that enables them to be exploited? And then try to dig deeper from there and answer some of the questions around the remediation stuff that we get to later. But we really categorized it very simply into: of these, how many of them are high risk? And then we started to measure the remediation, I think we'll get into this a little bit later, in terms of coverage and efficiency, and started to introduce some of those definitions around the overall remediation capacity, I guess, of an organization.
Dan M.: Ultimately I think the breakdown at the time of this report was just over 120,000 vulnerabilities. Of those, 77% had no published exploits or code or anything that we could see from all those various data sources that would say, "Hey, this is easy to take advantage of." Of that number as well, if you kind of filter it down, roughly 21% had an exploit that was publicly released. So it doesn't mean people are using it, but someone figured out a way to easily take advantage of the inaudible via an exploit, so you should probably care about those. And I think that's where the cutoff for coverage and efficiency gets in there. And then out of that subset, the things you really want to worry about were either we can see an exploit in the wild or we've seen some kind of activity related to that vulnerability being exploited. And that number, I believe, comes to about 1.8% total between the two buckets that they had there.
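For readers following along, the funnel Dan describes can be sketched in a few lines of Python (the percentages are the report's approximate figures; the resulting counts are illustrative, not exact numbers from the data):

```python
# Rough funnel from P2P Volume 1 (percentages approximate).
total_cves = 120_000
no_exploit = round(total_cves * 0.77)        # no published exploit code
exploit_released = round(total_cves * 0.21)  # exploit code publicly available
exploited_wild = round(total_cves * 0.018)   # observed exploitation activity

print(f"{exploited_wild:,} of {total_cves:,} CVEs "
      f"({exploited_wild / total_cves:.1%}) fall in the highest-risk bucket")
```

The point of the arithmetic is the ratio: only on the order of a couple thousand CVEs, out of 120,000, ever show exploitation activity.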
Ed Bellis: That's right. That's kind of the buckets of where we start in volume one. You'll see, as you progress through some of the other volumes, those percentages fluctuate a little here and there, but they remained fairly consistent overall. We're not seeing wild jumps between, like, 2% and 10% for exploited in the wild. It's more single-digit percentages regardless.
Dr. Wade Baker: And I've seen that in other datasets too, that have looked at this and tried to study it; it's fairly in line.
Ed Bellis: Yeah. That's right.
Dan M.: Well, it's interesting because I've actually seen a couple of other reports reference it and get roughly the same split. I think we've even pulled a couple of updates to our numbers to look, since it's grown by roughly 18,000 in the last couple of years or so. The numbers seem to stay relatively even as a percent, which is interesting. So I wonder if it's just down to human effort and ability/bandwidth to actually build stuff or...
Ed Bellis: I'll bet there were some other volumes of P2P that maybe dug into that a little bit later.
Dan M.: Awesome teaser. We can get into that. So I did want to do a little sidebar. You mentioned it, Ed, coverage versus efficiency, because this will come up a couple of times throughout the series, I'm sure. But it was kind of a brand new measure that Cyentia actually worked out with Michael Roytman. Can we do a little explainer on what's the difference between coverage and efficiency and how they're interrelated?
Ed Bellis: I'll start, Wade, but would love for you to dive in here. So they're new in the sense of how we describe them and certainly how they relate to vulnerability management. That said, if you're familiar with machine learning, there's something called precision and recall, which is basically what these things are based on. When you're thinking about it in terms of remediation of high-risk vulnerabilities, which is exactly what we're looking at in this report, you can think of coverage as: okay, if I've got 100 high-risk vulnerabilities out there, meaning I've got 100 vulnerabilities that either have a weaponized exploit that we know about or are being exploited in the wild, how many of those 100 did I remediate given any remediation strategy that we cover within this report? So if I fixed 70 out of 100, then my coverage in this case is 70%. Kind of the flip, or the yin and the yang here, is the efficiency side. Which is to say, if I went out and fixed 100 vulnerabilities, how many of those ended up being high-risk vulnerabilities, ended up with some sort of exploitation events? So again, if I went out and fixed 100 and 50 of them ended up being high risk, then I would have a 50% efficiency in this case.
Dan M.: Wade, anything to add there?
Dr. Wade Baker: Yeah. You mentioned the yin and the yang, Ed. I think that is an important aspect of this, because as we began to dig into the performance of organizations along those two measures, it's very clear that it's hard to maximize both of those things. I mean, ideally you'd have 100% coverage and 100% efficiency, meaning you fixed every single high-risk vulnerability and you didn't waste your time with any that were not on that list. But the fact of the matter is it never works out that way. So you're kind of presented with this, I don't want to say dilemma. I mean, it really is a strategy: if you are risk averse, for instance, you probably want to maximize coverage and you're okay sacrificing efficiency. You're okay fixing some things that may not necessarily need to be fixed, just for the sake of being cautious. On the flip side of that, if you're budget constrained or risk tolerant, maybe you maximize efficiency and work on the worst of the worst of the worst vulnerabilities first. In which case, you know you're not going to get coverage, but you also know you're not wasting your time and resources on stuff that doesn't matter, and you kind of work from there. So I just find it fascinating when the analysis leads toward managerial strategies like that, and you can actually measure them. I mean, that's the really cool thing here: you can objectively measure the performance of any vulnerability management program along those two lines and get meaningful results.
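The definitions above map directly onto recall and precision. As a minimal sketch (the CVE identifiers here are placeholders, purely to reproduce Ed's worked example):

```python
def coverage(remediated: set, high_risk: set) -> float:
    """Recall: fraction of high-risk vulns that were remediated."""
    return len(remediated & high_risk) / len(high_risk)

def efficiency(remediated: set, high_risk: set) -> float:
    """Precision: fraction of remediated vulns that were high risk."""
    return len(remediated & high_risk) / len(remediated)

# Ed's example: 100 high-risk vulns, 70 of them fixed -> 70% coverage.
high = {f"CVE-{i}" for i in range(100)}
fixed = {f"CVE-{i}" for i in range(70)}
print(coverage(fixed, high))   # 0.7

# 100 vulns fixed, only 50 of which were high risk -> 50% efficiency.
fixed = {f"CVE-{i}" for i in range(50)} | {f"OTHER-{i}" for i in range(50)}
print(efficiency(fixed, high))  # 0.5
```

The tension Wade describes falls out of the shared numerator: pushing coverage up usually means remediating more low-risk vulns, which drags efficiency down, and vice versa.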
Ed Bellis: Yeah. And there's two really practical things to look at here. One, you talked about the differences between coverage and efficiency. Oftentimes what you'll see an organization do is, as they're getting started, they'll lean towards efficiency, because they want to make sure that everything they're doing early on is stuff that matters. They want to get as much bang for the buck as they can. And as they start to mature and get a little bit more ahead of it, they start to shift that remediation strategy more towards coverage, is what we often see, and you get a lot more reduction of your overall risk. The other thing to call out that's important here is there's a difference in coverage and efficiency when you're looking at individual CVEs versus patches. So it's really important to say, in the real world, when I go out and apply this patch, I may be fixing 10 CVEs, of which one might be high risk and nine are not. Now, I believe, if I recall, Wade, we did actually account for that by looking at the remediation strategies via patch or fix.
Dr. Wade Baker: In volume two.
Ed Bellis: That's right.
Dr. Wade Baker: crosstalk we started doing that. I do not think we did that in this volume. So efficiency is kind of dampened in volume one because of that.
Ed Bellis: Yeah. So that's a really important thing to call out, because I want to fix that high-risk vulnerability, and the only way I can fix it is by fixing that one and these nine others, which I shouldn't be penalized for in terms of an efficiency metric. Because I'm actually doing the right thing, both in terms of coverage and efficiency in that case. And I'm applying a patch where it takes no more effort for me to fix these 10 than it does to fix the one.
Dan M.: So you're saying, like, say Microsoft Patch Tuesday, they give you a patch and it's got one of what we would consider, I guess, a high-risk or critical vuln, but it's got a ton of ones that no one's ever going to exploit. You're going to apply that patch. And in this original model, you're actually going to get dinged in terms of efficiency, because it sees you plugged nine things that don't matter, one thing that does.
Ed Bellis: That's right. By applying that one patch effectively, you have 10% efficiency in that example.
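The accounting difference being described here can be sketched as follows (the patch-to-CVE mapping is hypothetical, invented purely to illustrate the math):

```python
# Hypothetical mapping: one vendor patch remediates ten CVEs.
patch_to_cves = {"PATCH-1": [f"CVE-{i}" for i in range(10)]}
high_risk = {"CVE-0"}  # only one of the ten is high risk

applied = ["PATCH-1"]
fixed = [c for p in applied for c in patch_to_cves[p]]

# CVE-level efficiency: 1 high-risk CVE out of 10 fixed -> 10%.
cve_eff = sum(c in high_risk for c in fixed) / len(fixed)

# Patch-level efficiency: the single patch applied addressed at least
# one high-risk CVE -> 100%, with no extra remediation effort.
patch_eff = sum(bool(set(patch_to_cves[p]) & high_risk)
                for p in applied) / len(applied)

print(cve_eff, patch_eff)  # 0.1 1.0
```

Counting at the patch level rewards the action the organization actually took, which is the correction the later volumes introduce.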
Dr. Wade Baker: And that goes back to the whole data challenge behind all of this, because now in order to make that correction, you have to have data on all the patches that exist and which CVEs they address and tie that in to everything else that we've already talked about. So it gets more and more complex as you go along.
Ed Bellis: Not to mention patch supersedence as well, when you throw that in there, and like, oh.
Dan M.: Well, I think that's kind of interesting that we're talking about these limitations that we're actively discovering while you're doing this research. And you referenced volume two, which I'm sure will be one of our next topics and how you identified, " Hey, there's a little gap between this kind of theory and the reality of what people do. Let's go try to figure out that piece."
Ed Bellis: One of the great things about these reports is that we go out to answer five questions and we answer those five questions and come back with 10 new ones.
Dan M.: I'm sure we'll get a little bit deeper into coverage and efficiency realities in some of the later reports as well. So the next section goes over kind of the timelines of exploitation. Now that we have a rough idea of what's important to deal with, how fast do you need to do it? Wade, do you mind going over a couple of the timeline reveals that we found in the research?
Dr. Wade Baker: Yeah, certainly. This part narrows in on that subset, roughly 22% of CVEs, that have exploit code, because again, we're asking questions of these: how fast is that code released? How fast is it weaponized? If you haven't read the report, I would suggest looking at it. This chart is figure three on page nine, and to me it always reminds me of the Empire State Building or something, because it looks like a skyscraper. Basically, when a CVE is published, exploitation ramps up very quickly, and that's important because it means that you don't have very much time. That is not to say that it's used in the wild; we'll talk about that in a minute, but at least the weapon is created very quickly after the CVE. So attackers are doing the same thing we're doing as defenders, right? We monitor CVE and the NVD and those kinds of things so that we can take action. They do too, and they know when those things are published, and they're quick. Now, not all of them are attackers. Again, some of these are people trying to do the right thing, because they're pen testers and they're trying to use the latest bug to see if it exists in the environment, and things like that. Whole different discussion there. But bottom line: once it's published, if it's going to be exploited, usually it's exploited within a week or two.
Dan M.: Published means the CVE, or the vulnerability, is actually published in the database, so essentially the world knows about it. And then you're talking about an exploit being released, so now we know that, hey, someone can go out and take advantage of this easily.
Dr. Wade Baker: Right.
Ed Bellis: So there's a publishing of the CVE, but there's also really a publishing, in some way or another, of an exploit. Maybe that's an open public publishing in some cases, and sometimes it's private, but it's somewhere we have tracked weaponized code for that particular exploit.
Dan M.: Wade, you were saying it looked like kind of a skyscraper. It reminds me of some of the buildings in New York or something like that, the skyline, because it's within roughly two weeks before or after the CVE published date, I think, that we see that code published.
Dr. Wade Baker: Yeah. And that published date kind of has some gray area and uncertainty around it. So yeah, it's just a rule of thumb: if there's going to be published exploit code, it happens very close to when the CVE is published. Now, exploitation in the wild is different, though. There's an upsurge in exploitation in the wild within the first month or so of exploit code being developed, but then there's kind of a plateau. If you look at figure six in the report, it's equally likely that a CVE might be exploited in the wild two years out as in the more near term. And that kind of steady likelihood of exploitation goes about two years out until it starts to drop off. If it's been sitting out there for three years and hasn't been exploited in the wild, it's not likely to be; the world has forgotten about it, so to speak. But that behavior is very different. It's kind of unfortunate, because it means you can't forget about a vulnerability just because it's been out a couple of months and assume that it's not going to be exploited because it's not interesting to the folks out there. You have to keep your eye on it for a couple of years. And that means intelligence and more data wrangling and that kind of thing.
Ed Bellis: Yeah, if I remember correctly, Wade, that year and a half or two years out, it wasn't just that they were being exploited a year and a half to two years out, but sometimes it was because they were being exploited, or we were seeing them being exploited, for the first time crosstalk.
Dr. Wade Baker: Correct. That's what I mean.
Ed Bellis: crosstalk.
Dr. Wade Baker: Yes. That's right. That's what that is measuring. The first known exploitation is just as likely to be two years out, pretty much as six months out.
Dan M.: So there's a really long tail from the published date. So it could be immediately to 1.5 years, two years away?
Dr. Wade Baker: Yeah. Two and a half. Yeah.
Dan M.: Oh, wow. What do people do about that? How do you...
Dr. Wade Baker: Like I mentioned, I think a lot of it has to do with: you've got to keep these things on your radar. Whether you do this kind of intelligence internally or you work with a third party to do it, that becomes very important, because you need to know, two years down the road, that, oop, that vulnerability switched from kind of lying dormant to being exploited in the wild. And if you haven't chased it out of your environment yet, because it's been one of the ones you've deprioritized, now you've got to move it to the top of the stack and do something about it.
Ed Bellis: Yeah. But speaking from the real-world perspective of what people do about it, right? One of the things that we found in this report, too, is once that code is weaponized, the likelihood of seeing exploitation in the wild jumps dramatically. So assume that once you're sitting on a vulnerability that has some sort of weaponized exploit associated with it, it will get used eventually in the wild. Whether it's in the next 30 days or the next two years, it's eventually going to get used. So your clock is ticking and it's time to go remediate.
Dr. Wade Baker: Yeah. And that's the whole notion, I don't think we're getting into it on this segment, but the Exploit Prediction Scoring System and the models that we did a year or two after this report published were: all right, if we don't know that it's been exploited yet, can we, from the characteristics of that vulnerability, predict whether or not it will likely be exploited in the next year? So you can take action before things get out of hand.
Ed Bellis: Yeah. We've gotten to the point now where we've got a couple of different models available to us: one, a model which predicts the weaponization of that exploit, the code being developed, and then other models, such as EPSS, to predict whether or not we will see an exploit in the wild being used against it over the next 12 months. So you can really start to get a little bit more proactive and start to understand what might be coming down the road.
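To make the idea concrete: a prediction model of the kind described here takes observable vulnerability features and outputs a probability of future exploitation. Below is a toy sketch; the features, data, and training scheme are invented for illustration, and the real EPSS model is far richer than this:

```python
import math

# Toy feature vectors per CVE: [exploit_code_published, cvss_score/10, in_metasploit].
# Labels: 1 if later seen exploited in the wild. All values are synthetic.
data = [
    ([1, 0.9, 1], 1), ([1, 0.7, 0], 1), ([1, 0.5, 1], 1),
    ([0, 0.9, 0], 0), ([0, 0.6, 0], 0), ([0, 0.3, 0], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=2000, lr=0.5):
    """Plain logistic regression via per-example gradient descent."""
    w, b = [0.0] * len(data[0][0]), 0.0
    for _ in range(epochs):
        for x, y in data:
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

w, b = train(data)

def predict(x):
    """Estimated probability this vulnerability gets exploited in the wild."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
```

The payoff is the workflow Ed describes: score every open vulnerability, then remediate the high-probability ones before exploitation is ever observed.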
Dan M.: The prediction of the Prioritization to Prediction. Got it. That makes so much sense now. Okay. I'm sure we will dig into EPSS and some of the models here, but I think this was the first kind of predictive model that we built some research around at the end of this report. So let's keep chugging through it. I know the next section of this report was called the rules of remediation, but essentially it was going down and looking at the actual theoretical performance of remediating vulnerabilities by vendor, or CVSS, or a couple of different factors, a reference list, Metasploit, things like that. So let's start with, I guess, the vendor-driven vulnerabilities. Ed.
Ed Bellis: We looked at a bunch of different vendor lists, obviously Microsoft, Oracle. I mean, basically we took, I forget, Wade refresh my memory on the top X vendors, and looked at their reference lists and their mailing lists. So if you subscribed to Microsoft and you get Patch Tuesdays or whatever, what if I went out and fixed every single Patch Tuesday? How would I do both in terms of coverage, which we talked about, and efficiency for each of those?
Dan M.: I think this was actually a really, really unique piece of this research, in that it seemed like you guys wanted to provide a baseline for people to understand what's good from a coverage and efficiency standpoint. And you did it by chance, right? So you looked at, if you were just to randomly select things to remediate, what would be the efficiency? Which in this case, if you were to just roll the dice and pick things, your efficiency would be roughly 23%. Because about 22% of the 120,000 vulnerabilities at the time had exploits developed, an efficiency in that ballpark makes sense. And then the coverage would vary depending on the overall data set. So I thought that was interesting, rolling dice. And I believe the quote for vendor-led remediation for the top 20 vendors, I took a specific note here, was "a pair of dice would trounce a purely vendor-driven decision model."
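For listeners who want to see the two metrics being discussed in code, here's a minimal sketch of coverage and efficiency against a random-chance strategy. The CVE counts and exploit rate are illustrative stand-ins for the report's figures, not its actual dataset:

```python
# Sketch of the coverage/efficiency math discussed above.
# The population sizes and ~22% exploit rate are illustrative,
# not the report's actual dataset.
import random

def efficiency(remediated, exploited):
    """Share of remediated CVEs that actually had exploits (a precision measure)."""
    return len(remediated & exploited) / len(remediated)

def coverage(remediated, exploited):
    """Share of exploited CVEs that were remediated (a recall measure)."""
    return len(remediated & exploited) / len(exploited)

random.seed(0)
all_cves = set(range(120_000))                             # ~120K CVEs at the time
exploited = set(random.sample(sorted(all_cves), 26_400))   # ~22% with exploit code

# A "roll the dice" strategy: remediate a random 10% of everything.
random_fix = set(random.sample(sorted(all_cves), 12_000))

print(efficiency(random_fix, exploited))  # lands near 0.22, the base rate
print(coverage(random_fix, exploited))    # lands near 0.10, scaling with effort
```

The point the hosts are making falls out of the math: a purely random strategy's efficiency converges to the base rate of exploited vulnerabilities, so any strategy below that bar is doing worse than dice.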
Ed Bellis: Yeah. Fortunately, with the exception of the Microsoft example, I would tell you that in the real world, most people aren't just sitting there and saying, okay, Oracle published something, I'm going to go out and fix everything that Oracle just published.
Dan M.: Unless Dark Reading decided to write something on it. Right?
Ed Bellis: Yeah. But then they're going to write about a specific CVE within that list, not all of the CVEs that were published in the quarterly update. But it was actually one of the more entertaining pieces, to look at all of these different remediation strategies and figure out, well, which ones actually performed worse than random, and there was...
Dan M.: A lot of them did.
Ed Bellis: Yes, there was certainly more than one.
Dan M.: I think the next one that we looked at was the CVSS-based remediation strategy, which actually, I believe, is common. A lot of companies do use CVSS crosstalk.
Ed Bellis: Probably more than anything else out there. The Common Vulnerability Scoring System, which we covered earlier on, is really part of the SCAP standards, the Security Content Automation Protocol, which includes CVE, CVSS, CWE, CPE, all of these different things, mostly out of MITRE and NIST. But yeah, we looked at all the different CVSS remediation strategies: what if you just fixed CVSS 10 and above? For those who don't know, it's a 10-point scale. Or I fixed everything that's a CVSS nine and above, and seven and above, and so forth. And then looked and said, well, what's the coverage and efficiency of each of these? One of the things that was interesting to see for me, and I'll let Wade talk about the actual results, was how skewed the CVSS scoring system is. Seven and above was a big differentiator both in terms of coverage and efficiency, because there are so many vulnerabilities that are actually scored a CVSS seven. If you look at the calculator and how you actually get to the different scores, that's a very common score to end up at. In fact, if I recall, there was one score that was nearly, or actually, impossible to get. I don't remember what it was, I want to say it was like a CVSS three or some oddball score, where maybe one out of 100,000 CVEs scored it. It's almost impossible to get.
Dr. Wade Baker: Yeah. I thought it was pretty fascinating that CVSS seven and above is, I would say, the consensus strategy, if you want to call it a strategy/recommendation, of what should be fixed. There are a few things that do reference that. And that's actually the one that performs best of all the different levels of CVSS. I mean, it kind of makes sense that if you just fixed 10 and above, you'd be the most efficient, though it's still not great efficiency, basically the same as random chance, oddly enough, and pretty terrible on coverage. Whereas if you fixed five and above, you'd have great coverage, 80% coverage. So that makes sense. But yeah, seven and above was the best. There's actually only a couple of them that beat random chance, and that one performed better than random. So it's actually good that people are recommending it. Of course, the models that we built on this blew that away by a pretty healthy margin.
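The threshold tradeoff Wade describes can be sketched with synthetic data. The scores and exploit flags below are made up for illustration (exploit likelihood is assumed to rise with score), so the exact percentages won't match the report, only the shape of the tradeoff:

```python
# Illustrative sweep of "fix everything scored >= t" strategies.
# Scores and exploit flags are synthetic, chosen only to show the
# coverage-vs-efficiency tradeoff shape; they are not the report's data.
import random

random.seed(42)
cves = []
for _ in range(10_000):
    score = random.randint(1, 10)                      # CVSS-like score 1..10
    is_exploited = random.random() < 0.05 + 0.03 * score  # higher score, more exploits
    cves.append((score, is_exploited))

total_exploited = sum(1 for _, e in cves if e)

for t in (10, 9, 7, 5):
    fixed = [(s, e) for s, e in cves if s >= t]
    hits = sum(1 for _, e in fixed if e)
    print(f"CVSS >= {t:>2}: fix {len(fixed):>5} CVEs, "
          f"efficiency {hits / len(fixed):.0%}, coverage {hits / total_exploited:.0%}")
```

Running this shows the pattern the hosts describe: a high threshold buys efficiency at the cost of coverage, a low threshold buys coverage at the cost of fixing nearly everything.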
Ed Bellis: Yeah, I mean, even when you talk about CVSS five being good at coverage at 80% or whatever, that was also more than 80% of the vulnerabilities. So at some point-
Dr. Wade Baker: Correct.
Ed Bellis: ...you're still ultimately not doing better than random. If you fix 80% of the vulnerabilities and you get a coverage of 79% or something, that's not good.
Dan M.: So you're saying it performed better than random chance, but the total volume of things you would need to fix at a CVSS seven plus was, I think at the time, what, 46% of the entire MITRE list or something like that.
Ed Bellis: I mean, for any sort of organization, that's a Herculean effort, I would say, to fix that. Although I do remember very early on, this ages me a bit, but when PCI first came out, in fact I think it was even before it was PCI, when Visa had their standard and Amex or MasterCard had a separate standard, SDP. But they used to say CVSS four and above was required for PCI compliance, which is essentially saying you have to fix all of your vulnerabilities, because there was almost nothing crosstalk.
Dan M.: 89% or something crazy.
Dr. Wade Baker: And just so everyone knows, PCI is a standard that retailers need to adhere to crosstalk.
Ed Bellis: Anyone processing credit cards, the payment card industry is what it stands for.
Dr. Wade Baker: So technically if they were to be audited, they should be remediating 90% of all the vulnerabilities that exist on all their systems.
Ed Bellis: Yes. I believe they have since changed that standard. In fact, to my pleasure, I think in some of the more recent ones, they actually use the term risk-based approach, so we got that crosstalk.
Dan M.: Very cool. Well, let's go into why they would say that and possibly change some of those standards. So the end of this report looks at trying to figure out a model that would predict the best way to prioritize using a combination of factors. I think the net-net of the rules of remediation was that none of the attributes was a good predictor just by itself. Wade, would you mind doing an overview?
Dr. Wade Baker: So the idea in this is, as you mentioned, the different "strategies", vendor, CVSS, et cetera, don't work very well in isolation, but we wondered, could you use these things in combination to do a better job of predicting exploitation? So we trained a machine learning model using those as inputs. And we added some things we haven't talked about here, like reference lists. The various lists that exist that talk about security things, if they mention a CVE, a lot of times that's, "Hey guys, X is being exploited in the wild, watch out for it." There's lots of things like that. So we pulled that in, along with words that are in the CVE description. The way that we describe things when someone enters it might be a key, and some of this was just pure hypothesis. Like, okay, well, maybe that'll work. Some of it was more led by intuition, the words, for instance. You know, somehow people pick out some subset of CVEs to create proof-of-concept code and exploit in the wild. So presumably a lot of that has to do with, they read that one and say, "Ooh, that'd be a good one. I'm going to write an exploit for that."
Ed Bellis: Remote code execution.
Dr. Wade Baker: Exactly, exactly. So things like that can be signals of, hey, this is probably going to be exploited, or is at least attractive for that purpose. So long and short of it is, we wrapped all of that in and used it to train a model. And again, all of this was possible because we had the two objective outcome measures of coverage and efficiency. So you can just flat out measure and compare all of these different strategies. What does it do under these conditions? What kind of level of coverage, what kind of level of efficiency do you get? Et cetera. And at the end of this report, we plot it all on a map, and we eventually came up with three different approaches to this "everything model," I think we called it at the time: one that is optimized for efficiency, one that is optimized for coverage, and one that is designed to be more balanced. And all of these beat anything else on the spectrum by a healthy margin. There was some cheering and celebrating, maybe tearing up, when we saw these kinds of results. crosstalk.
Ed Bellis: Ed cried because he was so happy.
Dr. Wade Baker: Yes. Those are definitely tears of joy.
Dan M.: Yeah. Excellent. Awesome. Well, here, I'll go ahead. I took some notes on how the model performs. So, compared to a strategy of remediating all CVEs with a CVSS score of seven or more, and we established earlier that seven-plus CVSS was probably the best single strategy found thus far, the everything model, the model you guys worked on that incorporated all these different attributes and a few extras, was twice as efficient: 61% versus 31% efficiency. Half the effort, which is interesting because we've kind of alluded to that in terms of the raw number of things you need to do and how big that effort would be. So half the effort was 19K versus 37K, or almost half the effort, 46%, something like that. One third the false positives. We didn't really talk about this much, but that's another issue, right? False positives that pop up requiring extra work or stress when none is needed. It looks like the model found what, 7K versus 25K. And then a better level of coverage: 62% coverage versus 53%. And I believe that was the balanced model. Some extra numbers here as well: eight times more efficient than remediating vulnerabilities from the top 20 vendors, which we also referenced earlier, and which was particularly bad and should not be used in and of itself. So just want to get some of those stats out there. Ed, can you explain the tears of joy you cried when these results popped up?
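For anyone checking these stats along at home, the false-positive counts follow directly from effort and efficiency: false positives are just the remediated CVEs that turned out to have no exploit. A quick sanity check using the rounded figures quoted here (so expect small rounding gaps against the ~7K and ~25K cited):

```python
# Sanity-check the model-vs-CVSS7+ numbers quoted above. The inputs are
# the rounded values from the discussion, so small rounding gaps are expected.

def false_positives(fixed: int, efficiency: float) -> float:
    """CVEs remediated that turned out to have no exploit."""
    return fixed * (1 - efficiency)

# Balanced "everything" model: ~19K fixed at ~61% efficiency
print(round(false_positives(19_000, 0.61)))  # about 7,400, close to the ~7K quoted

# CVSS 7+ strategy: ~37K fixed at ~31% efficiency
print(round(false_positives(37_000, 0.31)))  # about 25,500, close to the ~25K quoted
```

So the quoted efficiency, effort, and false-positive figures are internally consistent with each other, which is a nice property of having objective outcome measures.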
Ed Bellis: Yeah. I mean, obviously we've seen a lot of different remediation strategies. I would say by far the CVSS one was the most common. It was good to see the seven-or-above, which is also probably the most common of the CVSS strategies, perform better than most of the standard strategies we looked at out there. But when this model was put to the test, what I really liked about it was that not only did it outperform all of those other remediation strategies by far, but you could tune it. So we talked earlier about how some organizations are going to lean towards efficiency. Others are going to lean towards coverage for various reasons. You can tune that model up or down so that you are better at coverage or better at efficiency, and even if I tuned towards coverage, my efficiency would still have been better than any of these other remediation strategies that we were measuring, by quite a bit.
Dan M.: Nice. And then Wade, what were your thoughts coming out of this first report?
Dr. Wade Baker: For me, it opened my eyes, because I've been counting and measuring things a long time in security, and most of it's counting, right? Like, you can say, ooh, this is a trend because this is happening, or this is happening at an increasing rate, or things like that. But you just don't have many things, at least that I've been involved with, where you have such a clear outcome measure and you can compare two different things and demonstrably prove that one is better than the other. And this was so strong in that, that I mean, it gave me hope that we can not only do this for vulnerability management and which things we should patch, but a much wider range of security processes and tools and all of those things as well. It also set us up: in volume one, when it closes, we have two measures, coverage and efficiency. I think by volume five, I don't know, we're dealing with six or more. So we kept adding these objective measures, all of which say something slightly different about the performance of a remediation program. And that's awesome, because you just wouldn't think that we would get there. It's fun to be able to decide, this is clearly a better strategy, and people need it. We've been doing too much guesswork. And just what makes a successful program, no breaches? I don't know. Is it really that? Maybe you got by, by the skin of your teeth. Maybe it was dumb luck. But this is something you can measure and hang a hat on. So I like it.
Dan M.: Very cool. Well, I think this rounds out the podcast for the first report, volume one. We'll definitely get farther down into it and talk about a couple of the teases that happened on this very podcast. But just to round things out real quick: on the page, we will link to the Prioritization to Prediction report, so you can go get it there. I will also link to the Cyentia Institute resource library, because that thing is just really cool to go nerd out on and dig into. Then we look forward to having you guys, Ed and Wade, back on a little bit later, so thanks guys.
Ed Bellis: Sounds good. Thank you, Dan.
Dr. Wade Baker: Thank you.