Measuring and Minimizing Exploitability w/ Cyentia Institute

This is a podcast episode titled, Measuring and Minimizing Exploitability w/ Cyentia Institute. The summary for this episode is: We hop on the line with the Cyentia Institute to discuss our latest joint research, Prioritization to Prediction, Volume 8: Measuring and Minimizing Exploitability. The new report reveals that exploitability for an organization can, in fact, be measured and reveals the best strategies to minimize it.
Jay Jacobs: R Values Personified
00:02 MIN
Dr. Wade Baker: The Dickens of Data Science
00:07 MIN
Ed Bellis: Putting the Chief in CTO, Now Part of Cisco
00:14 MIN
Follow Along: Download The Report @ KennaSecurity.com
00:33 MIN
We Can Calculate Exploitability at the Organizational Level
00:49 MIN
Quick Primer on EPSS
01:03 MIN
Record Setting Year w/ 20,000+ CVEs
00:19 MIN
16% of Vulns Are Considered High-Risk
00:24 MIN
Quantified Data: Billions of Vulns, Across Millions of Assets, Across Hundreds of Organizations
00:49 MIN
How We Define "High-Risk"
01:41 MIN
Some of the Data Biases
01:00 MIN
Organizations Are Getting Quantifiably Better At Remediating High Risk Vulns
03:53 MIN
EPSS Risk Distribution is Not a Nice Bell Curve
02:56 MIN
Turns Out Attackers Attack Vulnerabilities That People Have
00:11 MIN
Exploitability By Vendor
03:04 MIN
Survival Curve of Vendors
02:32 MIN
When You Aggregate a Bunch of Rare Things... They Become Less Rare
01:07 MIN
Exploitability of an Asset is Often Near 100%
01:25 MIN
Wade Stumbles On Dan's 401K Strategy
00:29 MIN
Over 50% of Assets Will Have a Vuln That Is Being Attacked Somewhere, Sometime
01:40 MIN
Using the KISS Method Or Things Get Squishy
00:36 MIN
Remediation Capacity Performance Quartiles
02:19 MIN
Mean Remediation Capacity Has Jumped 55% But Top Quartile Hasn't Moved
01:40 MIN
The Bellis Constant Is Less Than Constant
00:18 MIN
Simulating Remediation Strategy Effectiveness
00:33 MIN
Describing the Remediation Strategies
03:27 MIN
Twitter is a "Decent" Vuln Prioritization Strategy
00:50 MIN
Findings Are a Bit Less Surprising Than They Appear At First
00:58 MIN
Prioritization Strategy Beats Remediation Capacity But Do Both If You Can
02:36 MIN
Does Exploit Code Exist? That's The Decision Tree.
01:35 MIN
Closing Thoughts
02:30 MIN

Dan Mellinger: Today on Security Science, we figure out how to measure, then minimize the exploitability of an entire organization. Thank you for joining us. I'm Dan Mellinger. And today, we are covering our latest Prioritization to Prediction research report created in partnership with the Cyentia Institute: P2P Volume 8, Measuring and Minimizing Exploitability. We have a full house today, starting with the physical manifestation of R values, chief data scientist, founder, and partner at the Cyentia Institute, Jay Jacobs.

Jay Jacobs: Our values. I mean, great to be here, Dan.

Dan Mellinger: Okay. Next up, we have the Charles Dickens of data science, founder and partner of Cyentia Institute, Dr. Wade Baker. How's it going, Wade?

Dr. Wade Baker: I think that's very rude to Mr. Dickens, but I'll take it.

Dan Mellinger: I thought it was nice. I like alliteration too. It helps out.

Dr. Wade Baker: I do too.

Dan Mellinger: Yeah, it's a good mnemonic device. And last, but not least, the man who puts the chief in CTO, co-founder and CTO of Kenna Security, now a part of Cisco, folks, so am I, by the way, Ed Bellis. How's it going, Ed?

Ed Bellis: It's going well, Dan. Always a pleasure.

Dan Mellinger: Yeah. You like how I snuck in that we got acquired by Cisco? Because I think this is our second podcast and I didn't mention it on the other one.

Ed Bellis: Yeah. I did notice that.

Dan Mellinger: Excuse our data sources completely.

Ed Bellis: I didn't even bother to change my title this time around.

Dan Mellinger: This is true. It is very, very true. As a quick reminder to all of our listeners, you can find relevant links to everything we're discussing today on the kennasecurity.com/blog page. So this will be a launch. There's going to be a full report. You can go download it. We'll have some cool analysis there as well. I would encourage everyone to download and read along because we are going to walk through the report, and we typically reference specific charts and all that good stuff. We do describe it all, obviously, but it's a little bit easier if you can visualize as well, especially for people like me who are dumb. If you've been following along, P2P volume four ended like many do, with another question: is it possible to determine the relative exploitability or remediability of an entire organization? Not wanting to bury the lede in volume eight, we ran a simulation that shows that yes, we can. We can indeed extrapolate exploitability to the organizational level. So let's dig in on that one. As normal, we kick off with a couple goals, and we looked to achieve two things: measure the exploitability of individual vulns and, far more importantly, entire organizations. And then we created a simulation, which is funny because the Matrix reboot just came out and wasn't that good, that seeks to minimize exploitability under varying scenarios, combining vulnerability prioritization strategies with remediation capacity. So ultimately, looking at what makes the biggest dent overall. And normally as we kick off, we like to go over our data sources. In this one, it's really important that people understand the concept of the Exploit Prediction Scoring System, which is a FIRST.org special interest group. It's open, it's out there, there's a ton of data, and we've done two different podcast episodes on it. So if you want a quick... well, not so quick primer on both of them, you can go back and listen. But yeah, let's get that kicked off with Jay. Let's go over some of the data that we include in this report.

Jay Jacobs: Sure. So there's a couple of different things that you mentioned there. First is EPSS, the Exploit Prediction Scoring System, which uses a variety of data to look at, essentially, exploitability. We're talking about measuring exploitability, and we naturally went to EPSS in this series.

Dan Mellinger: EPSS. You can go check it out at FIRST.org, I think it's /epss. And basically, we did a white paper and we looked at some of the correlating factors, and ultimately it gives a percentage of how likely a vulnerability is to be exploited within the next 12 months. And some of the data sources that we run into along the way: we've done a measurement of the overall CVEs by year, and so the last few years there's been 18,000-plus CVEs submitted every single year, which, I think, Jay, you've been measuring them, I think we actually cracked 20,000 this year.

Jay Jacobs: We did.

Dan Mellinger: Yeah, which is an insane amount. If we look a little bit later on, we'll talk about remediation capacity. Companies cannot remediate that many vulnerabilities. They just can't do it, ultimately. And so as of right now, 16% of all CVEs have some sort of exploit code available or activity that we're seeing externally, which typically is how we classify things as high risk.
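
For listeners following along in code, EPSS scores are published openly by FIRST. Here's a minimal sketch of pulling a score for one CVE, assuming the public API's documented response shape (a "data" list with "epss" and "percentile" fields returned as strings); this is our illustration, not the report's code:

```python
# Minimal sketch: query the public FIRST EPSS API for one CVE.
# Assumes the documented JSON shape: {"data": [{"epss": "...", "percentile": "..."}]}
import requests

def get_epss(cve_id: str) -> dict:
    resp = requests.get(
        "https://api.first.org/data/v1/epss",
        params={"cve": cve_id},
        timeout=10,
    )
    resp.raise_for_status()
    records = resp.json().get("data", [])
    if not records:
        raise ValueError(f"no EPSS record found for {cve_id}")
    record = records[0]
    return {
        "cve": cve_id,
        # probability the vulnerability sees exploitation activity, per the model
        "epss": float(record["epss"]),
        # how that probability ranks against all scored CVEs
        "percentile": float(record["percentile"]),
    }

if __name__ == "__main__":
    print(get_epss("CVE-2021-44228"))  # Log4Shell, as an example
```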

Jay Jacobs: The thing I want to add too is that as we looked at the vulnerabilities themselves, we looked at billions of vulnerabilities across millions of assets. And then I think we have hundreds of companies. Five, six hundred, I can't remember. We're talking about an enormous quantity of vulnerabilities and an enormous quantity of assets. And as we look at the exploitability across companies and assets and all these different things, that's the scope that we're talking about. It's not like a survey of 30 people or something. This is really massive. And it's device data. This isn't, like I said, a survey or anything like that.

Dan Mellinger: Yeah. Yeah. That's really important. I like to call that out because this is actual quant data. It's not qualitative. It's not people guessing or, "I think we have this many phones." It's actual production data, so it's a big deal.

Jay Jacobs: Yep.

Dan Mellinger: All right. Well, I think, were there any other stats? I think there's a couple other ones I wanted to cover off real quick. So most organizations, 87%, have open vulnerabilities on at least a quarter of their assets. I don't think this is a surprise to anyone, but it's just fun to put that out there as a context basis. And then, this is an interesting one, so I might have to have Wade break this down in a little more detail, but 75% of organizations have more than one in four assets with a high risk vuln. How do I interpret that?

Dr. Wade Baker: We'll talk a little bit about what a high risk vuln is, but that stat is consistent if you've been keeping up with P2P reports in the past. We look at high risk vulns as having known exploit code available for that vulnerability or exploitation active in the wild. So if it meets either of those two conditions, that's "high risk." And one of the things that we have consistently promoted in this series is that that's where to start. If you're looking at all the vulnerabilities in your environment and trying to think, well, which ones do I remediate first? Which assets should I concentrate on? Start with those that have high risk vulnerabilities. And that stat is just meant to give you a sense of, what does that look like across your infrastructure? And if you look across all organizations, like you said, most organizations have a good portion of their assets that have at least one high risk vuln open to exploitation. This is not a narrow, tiny slice of your infrastructure. It's a good part of it. And I think people know that, but it's nice to have a number on it.

Ed Bellis: And just to clarify all of this, just so we know what data we're talking about here, this is data that is coming directly from enterprises: vulnerability scans, all of their information about assets and things like that. So just have that lens on as we talk through the data. As an example, we talked about the percentage of assets that have an open vulnerability on them. Well, naturally, when you're scanning for vulnerabilities, you tend to find them, especially on the assets you're scanning. There could be assets that they're not scanning as well. There's a little bit of bias in the data set in the sense that probably 99% of it is coming from automated vulnerability scanners across infrastructure, and it's coming from enterprises that have already, in some way, decided to apply some sort of risk lens to all this.

Dan Mellinger: That's a really good call out, Ed. This data set is definitely biased because it's Kenna customers. So they're biased to at least try to understand or create some kind of prioritization mechanism. I don't think any company would not want to do that if it was available to them. But as such, we've seen improvements as we've improved our product, as we've gotten better about helping our customers, teaching them how to go about prioritization and all this. So our first chart under measuring exploitability is a good example of that. We've tracked since, I think, P2P volume two or three, the monthly change in high risk vulnerabilities: are organizations remediating more high risk vulnerabilities or fewer in any given month than are coming in? Wade, I love when you talk about this chart, do you want to just give a quick overview on that real quick?

Dr. Wade Baker: Sure. Ever since volume one, we've focused on this notion of keeping up. There's this avalanche or tsunami, whatever metaphor you want, an overwhelming number of vulnerabilities that are found in the environment. And it's not static, of course. There's new ones that show up every day. Every time you do a scan, you find more, and organizations are remediating during that time. So it's always this give and take. You can almost get lost in the numbers, especially with the numbers of assets and vulnerabilities that Jay mentioned a while ago. And we wanted to net it out and simplify it down to: hey, when it comes to open high risk vulnerabilities, vulnerabilities that really matter, is that trending down or trending up over time? Are companies closing more than they're adding or opening? That's the gist of figure four in the report. It says 60% of orgs are improving. That means they're reducing the number of open high risk vulnerabilities over time. That's what we want to see. We want to be chipping away at it. They don't disappear overnight, but you want to chip away at them over time. 60% of orgs are doing that really well. Another 17% are treading water. They're maintaining the number. If you can't chip away at it quite yet, at least you don't want to add to the number and be steadily drowning. And when you combine those, you get over three quarters of organizations that are either reducing or maintaining. That's awesome. And Dan, like you said, that has steadily increased over the last couple years as we've measured the statistic, which means you're coaching customers and helping them well, and I'm not just trying to shill what you're doing. But you can see that in the data, and that's exactly what we want to see. We want to see an improved capability to drive down high risk vulnerabilities over time.

Ed Bellis: Yeah. And in fact, I'd say the very first time that we measured this, it was very much the opposite. We had, I think, if I recall, two thirds that were either at that treading water stage or falling behind stage. And now if we measure that same group of stats, we have 40% there now as opposed to two thirds, so things are definitely improving.
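
As a rough illustration of the "keeping up" idea behind figure four, a classification might look like the sketch below. The netting logic and the treading-water band are our own simplification, not the report's exact method:

```python
# Toy sketch of the figure-four classification: net closed vs. newly
# opened high-risk vulns over a period. The 'treading water' band is an
# illustrative threshold of ours, not the report's definition.
def keeping_up(opened: list[int], closed: list[int], band: float = 0.02) -> str:
    """opened/closed: monthly counts of high-risk vulns added and fixed."""
    total_opened, total_closed = sum(opened), sum(closed)
    net_rate = (total_closed - total_opened) / max(total_opened, 1)
    if net_rate > band:
        return "improving"        # closing more than they add (~60% of orgs)
    if net_rate < -band:
        return "falling behind"
    return "treading water"       # roughly maintaining (~17% of orgs)

print(keeping_up(opened=[120, 130, 110], closed=[150, 140, 160]))  # improving
```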

Dan Mellinger: Yep, yep. Another stat, and it speaks to the bias, but we like to say it's a good bias in this one, from the data set. But they are getting better. I'll bring this up just because I noticed it and took a quick note: remediation capacity has actually gone up as well from when we first measured it. Not sure why, maybe companies are getting better at automation, maybe more tools are coming out, more software has automated patching, who knows, but things have definitely shifted up by 50%. The average remediation capacity, which we'll dive into in a little more depth, used to be 10%. Now it's 15%. That's a big jump over, what, two years, I want to say, for that one? So anyway, I do want to call that out as some bias in the data because it's good to have that context when we're looking at the greater landscape. But let's jump in. So we're measuring exploitability. 77% of organizations are actually either treading water or improving, which is great. Let's get into a little bit of the distribution of EPSS. So where does that lie for the exploitability of published vulnerabilities? Jay, you want to take this? We're on figure five in the report.

Jay Jacobs: Sure. Yeah. So figure five is looking at just the EPSS scores across all published CVEs. Everybody knows that very few of these CVEs are actually exploited in the wild. And therefore, exploitability, if you average it out, is going to be very low. And that's what we're seeing on this chart. When you look across all CVEs, most of the EPSS scores are extremely low. This is not a nice bell curve that you might be used to. Things are generally under a 1% chance of being exploited in the wild. And so that's figure five, it just shows that type of distribution. We broke it out essentially into three categories: under 1%, 1% to 10%, and then 10% to 100%. And here, we're seeing about two thirds in the lower category, about one third in the middle, and then 5% in the upper. So it's just heavy to the left on the chart as you're looking at it. Now, the next one, I think this is where it starts to get interesting. When we looked at the actual observed vulnerabilities that companies are reporting in, that they're finding, we see this shift where that lower category that was, what was it, 63%? Now, it's about 40%. It dropped by about 23 points on that lower end. And so it shifts more to the middle, and now we're seeing about 50% of the CVEs in the middle, having an exploitability score between 1% and 10%. And then that above category also shifts up. And then later, I don't want to put any spoilers out there, but we've got another chart just like this later on. It's figure 11, where we looked at an asset-based view. And as everybody knows, depending on the type of asset, an asset may have literally hundreds of vulnerabilities on it. And so naturally, if there's a whole bunch of vulnerabilities on an asset, the exploitability of that asset is going to be through the roof. And that's what we see: at the asset level, about 95% are in that highest category. When we started out just looking at CVEs, it was like 5% in that upper category, just on the CVE level.

Dan Mellinger: Yep. And just real quick, I'll plug one of our old episodes that we did on how vulnerability risk is power-law distributed. It's not a nice average bell curve that humans like to think about, which is mimicked in these charts, so you'll see the long tail. If you want an example and you're not reading the report, check out any of our vuln of the month blogs. You'll see the same chart on every single one of them, and it shows the Kenna risk score distribution: super long tail, very low probability, the bulk is very low risk overall. So that's a similar take, if you want some additional content. Sorry, Ed.

Ed Bellis: Yeah. I was just going to say, what I relearned from some of these charts is, lo and behold, it turns out that attackers attack vulnerabilities that people have.

Dan Mellinger: Weird, huh? Smarter than we think, right? So moving on to exploitability but through the lens of product. Wade, do you want to take a stab at these charts here? Because they're very colorful and there's a lot of stuff on them.

Dr. Wade Baker: Yeah. I'll take figure seven and start there. Jay just talked about CVEs overall and what that looks like, and we thought, "Well, it's hard for people to connect with just a CVE, so let's break it down to product." And so that chart is showing the same information essentially, but grouped by the vendor that is responsible for that vulnerability. And a couple things here just to get your bearings. When you look at figure seven, the X axis on the horizontal is observed vulnerabilities across assets. So that gives you a sense of which of these vendors are more common and which are more rare in terms of the assets under management that we studied in this research. So no surprise, Microsoft is the furthest to the right, meaning that Microsoft vulnerabilities, or Microsoft products, were the most prevalent of any vendor. Of course, they have a lot of assets, desktop infrastructure, servers, everything. So that won't be surprising. And then on the other dimension here, on the Y axis, we look at exploitability, and this is just an average of EPSS scores. So if you can think about, okay, we take all Microsoft vulnerabilities, and each of those vulnerabilities individually has an EPSS score, and you average them, where do you land? You could have all kinds of arguments about, is averaging the right way to do it, and this and that. But we're talking about probabilities here, and we're just trying to stack vendors according to, do the vulnerabilities specific to that vendor tend to be more exploitable than other vendors? Just to take Microsoft again, they are among the top 10 in terms of average exploitability across all vulnerabilities. Apache is up there. Adobe is up there. And then there's some that maybe are a little bit less familiar, the Sudo Project, Slackware. I can't even read that one, FasterXML? I don't know. So if you look at this chart, it's just an explosion of color, but it really does a good job of separating out, is this a highly prevalent, highly exploitable set of products? Microsoft and Adobe are in that upper right quadrant. Or is it more rare and less exploitable? And you see down there Samsung and Juniper, Dell, any number of things, and everywhere in between. That's the point of that chart. And it's a pretty cool way to think about your infrastructure. This is why we did it. As you look across your assets, all of your assets are somewhere on this plot, and it gives you a sense of what you're up against.

Dan Mellinger: Yeah. Which transitions into figure eight as well, which shows how quickly people are able to remediate these vulnerabilities within their environments, how long they survive.

Ed Bellis: One thing I found interesting though about figure seven, I guess, one that stood out for me, there's a lot of, yeah, things we've covered before. I think volume five, we covered a lot about the volume from the vendors and different remediation velocity and things like that. However, one thing that stood out here for me is the prevalence across assets versus exploitability. Google really stood out to me as somebody who is across a whole lot of different assets, but the exploitability, at least from an EPSS standpoint, is quite low.

Dr. Wade Baker: Yep. And that's where you'd want to be. If you can maintain a super wide scale of deployment and low exploitability, that's doing a good job.

Dan Mellinger: Do you have any off-the-napkin interpretation of why that might be, Ed?

Ed Bellis: Well, one, I would say that probably my guess here is the vast majority of this is Google Chrome that we're talking about. I'm going to assume that things like sandboxing and a lot of the different things that they've put in place in Google Chrome, along with auto updates so that people are automatically updating that software, have all played well for them.

Dan Mellinger: Yeah. That makes sense. And then actually, could we get your read on this area under the curve, the survival curve plot on figure eight? So figure eight shows the vendors, but this time it's mapping them based off of how long the vulnerabilities take to be remediated. So the lower you are, the faster ultimately, I guess... well, exploitability is top to bottom. The farther left you are, the faster enterprises are able to remediate vulnerabilities from a given vendor.

Ed Bellis: And as you can see again, Google is quite low, which is good in terms of how fast people are remediating, but so is Microsoft. And that's something that I know we covered a lot in volume five as well of the P2P research series is that Microsoft represents probably more vulnerabilities than any other vendor out there, but they're also one of the best in terms of enabling their customers to remediate those vulnerabilities through the Patch Tuesday process, through patch management systems like SCCM. People have very much operationalized, I think, Patch Tuesday and how to go about patching those Microsoft vulns, which really stands out versus a lot of other vendors.

Dan Mellinger: Yep. Well, let's jump ahead, because I know we're going to be a little pressed for time to get through all of these, because these are always good episodes. But exploitability within assets now. This is actually interesting. So I'd like to have you, Jay, take a stab at figure 11 and describe this, because this is the inverse of the power law that we see for risk, right?

Jay Jacobs: Yeah. I touched on this earlier when we were talking about these distributions of exploitability and that when you look at the asset level, you're going to get this aggregation of rare things. And when you aggregate a bunch of rare things, they become a lot less rare. And so when you look at the asset level, most of these are rated at over 10%. And actually, most of them are in the top half of that range between 10% and 100%. But the interesting thing here is that a lot of these assets are vulnerable. And I think anybody working in vulnerability management knows what that's like. You've got so many vulnerabilities in your network and you're always fighting that fire. And that's what this chart is showing, that when you actually look at the asset level, there's a lot of exploitability.

Ed Bellis: Hey Jay, a question for you on this one. Talk, if you can, just a little bit about exploitability of an asset. So I'll use an example. I've got an asset. You mentioned before you might have 100 or more open vulnerabilities on it, and each of those vulnerabilities has a different exploitability rating or EPSS score. What's the math behind aggregating that together to determine, for this asset, what is its exploitability? And put it in layman's terms.

Jay Jacobs: Yeah, that's super hard. There's a good half dozen ways to approach this. EPSS is meant to be a probability, and so the first way we approached it was treating it like a probability. But the problem is that while it's mostly calibrated, I think where it's not calibrated is on that very low end, under a 1% chance. I think a lot of those are rated higher than they should be. And so when you're aggregating across hundreds of vulnerabilities on a single asset, these things that each have a 1% chance of exploitation, and you aggregate 300 of those on an unpatched Windows box, you're going to get the probability of exploitation to be near 100%. And that's what we found: almost all of these had a near-100% chance of being exploited at some point in the next year.

Dr. Wade Baker: Sorry, I was going to break in and say exploited anywhere is a good caveat there, because we're not saying that particular asset has an almost 100% chance of being exploited. It's that a vulnerability on that asset most likely will be exploited at some organization somewhere. So we took this and multiplied it by Jay's weight and then divided it by the current stock market value and averaged that across-

Dan Mellinger: It sounds like my 401k strategy.

Jay Jacobs: What we did end up doing is that for every asset, we would take the highest rated vulnerability, I believe, on there and just use that one. So rather than try to do insanely fun and complicated math where it just showed every asset was at 100%, we looked at this top-ranking one instead. And so as we slide into figure 12, for example, what we found is that over half of the active assets have a near-certain chance that at least one of their open vulnerabilities will be actively attacked. Again, what Wade clarified is that this is attacked anywhere, not on that specific asset. We did try to do some interesting mathematical approaches to talk about the probability of this asset being exploited, but we just found that we were doing a lot of, well, if we assume this, and then we assume this, and then these two go together, and we've got another assumption... It just became way too squishy. So we tried to keep it simple.

Dan Mellinger: Yeah. Less quant, more qualification.

Jay Jacobs: Yeah. So it got just too squishy.
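
To make Jay's point concrete, here's a small sketch (ours, not the report's code) contrasting the independence-assumption rollup, which saturates toward 100% on a vuln-heavy asset, with the max-score approach the report settled on:

```python
# Sketch of the two asset-level rollups discussed above. Treating EPSS
# scores as independent probabilities, P(any exploited) = 1 - prod(1 - p),
# saturates quickly; the report instead scores by the highest open vuln.
import math
import random

def rollup_independent(scores: list[float]) -> float:
    """Probability at least one vuln is exploited, assuming independence."""
    log_none = sum(math.log1p(-p) for p in scores)  # log P(none exploited)
    return 1.0 - math.exp(log_none)

def rollup_max(scores: list[float]) -> float:
    """The report's approach: score the asset by its worst open vuln."""
    return max(scores)

# ~300 low-scoring vulns on a hypothetical unpatched box, ~1% each
random.seed(8)
scores = [random.uniform(0.005, 0.015) for _ in range(300)]
print(f"independence rollup: {rollup_independent(scores):.3f}")  # ~0.95
print(f"max-score rollup:    {rollup_max(scores):.3f}")          # ~0.015
```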

Dan Mellinger: Well, which makes sense, because that's the way Kenna's found we have to look at it. So on an asset basis, it's the highest risk vuln on it that matters, because ultimately you don't want to get exploited. That's the end goal. And so there's a nice quote to finish out this section: nearly all, so 95% of assets, have at least one highly exploitable vulnerability. Again, that's not a surprise really, but it's just interesting to put that out there.

Jay Jacobs: Yep, definitely.

Dan Mellinger: You guys want to cover anything else before we jump into the meat of this issue, which is minimizing set exploitability?

Jay Jacobs: I think we jump there.

Dan Mellinger: All right. So I kind of prefaced this earlier, but we defined remediation capacity back in, what, P2P volume three, which, by the way-

Jay Jacobs: I think so, yeah.

Dan Mellinger: March, 2019 is when that report came out.

Ed Bellis: Ah, the [inaudible] days of P2P.

Dan Mellinger: Yeah, exactly. The pre-COVID and post-COVID, PC... Anyway, back then, like I said, it was originally any organization in any given month had the capacity to patch roughly 10% of their vulnerabilities. I think it was a 30-day period that we looked at, just rolling. That's actually increased. So organizations on average have the capacity to patch roughly 15% of vulnerabilities, and you guys took it a step further this time and broke it down into distributions. Actually, Wade, let's have you talk about this.

Dr. Wade Baker: Cool. So we wanted to revisit that, because honestly, that's still one of my favorite charts to talk about, the chart that shows the phenomenal correlation between observed vulnerabilities and close rate and how consistent it is across all organizations. It really speaks to a broader issue. Anyway, I won't go down the road of talking about that right now, but we wanted to add some context to it, and we also needed it for this simulation that's coming up. Every organization doesn't remediate exactly 15%. We actually found some that are remediating over half the vulnerabilities in their environment, and we found some that are remediating more like less than 5%. So there's a wide range, and we set some bounds trying to answer, what does the middle look like? If you're in the bottom 25% of organizations, that's about 6.6%. If you're at the median, exactly the half, which we said earlier, that's 15.5%. And if you're in the 75th percentile, it's 27% of vulnerabilities on average over time. So that gives you some kind of sense of where organizations line up, and hopefully begs the question of, "Hey, where does my organization fit in here? Where do I stand when it comes to remediation capacity?"
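
If you want to answer Wade's question for your own shop, here's a quick sketch. The quartile cut points are the ones quoted above; the capacity formula is our simplification of the report's rolling 30-day "share of open vulns closed" measure:

```python
# Sketch: place a monthly remediation capacity against the quartile
# bounds quoted in the episode (6.6%, 15.5%, 27%).
def monthly_capacity(closed: int, open_at_start: int) -> float:
    """Share of the vulns open at the start of the month that got closed."""
    return closed / max(open_at_start, 1)

def quartile_placement(capacity: float) -> str:
    if capacity < 0.066:
        return "bottom quartile"
    if capacity < 0.155:
        return "second quartile (below median)"
    if capacity < 0.27:
        return "third quartile (above median)"
    return "top quartile"

cap = monthly_capacity(closed=1200, open_at_start=10000)  # 12%
print(f"{cap:.1%} -> {quartile_placement(cap)}")  # 12.0% -> second quartile
```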

Dan Mellinger: Yeah. I think actually, Wade, you guys ran some of those numbers for our customers back in the day when we first put out the report. I just noticed this. What's interesting is back when we first ran this, the average was one in 10, or 10%, but the top quartile was still one in four. Now, we saw a 50% jump in the average, up to 15.5%, but the top quartile only jumped up by 2.1 percentage points. So it does seem like there is a little bit of a cap. We might want to look into that later on. But yeah, there's not as much growth; even though we saw spectacular growth in the average, the top performers are not growing at the same rate.

Ed Bellis: Save it for volume nine, Dan.

Dr. Wade Baker: Interesting.

Dan Mellinger: I'm just doing teases for the next reports. Anyway, Ed, did you actually have any feedback on before we jump into the next section?

Ed Bellis: No, this one was actually both interesting and new to me, because I had previously been focused very heavily on that median number, which previously, as mentioned, was 10%. And then we were talking about top performers. In fact, we do a number of half-day training sessions with customers, and we talk a lot about those metrics, and we talk about top performers and what it means to be a top performer. So it's good to see that even while the top performers are only increasing a little bit, everybody... I guess the rising tide quote, something, something, comes in here. But yeah. No, this was all new to me and it was very interesting. I think it might be something we want to include as we expand our state of the union metrics, those different quartiles.

Dan Mellinger: Yeah. Fun P2P trivia for all those following along throughout the years: I tried to have remediation capacity labeled as the Bellis Constant. I'm glad that Jay didn't let me do that, because it's less than constant actually, now that we're looking at it. Anyway, we've got roughly 15 minutes to get into the meat of this. So you guys ran a simulation model. Jay, I'm just going to hand it off to you, because this is well beyond me. You guys have fun and jam.

Jay Jacobs: Yeah. The simulation, we started with factual data. So we didn't start and say, "Let's simulate what we think a company would look like." We started with the data that we've got: the company and the data of their current state. And so we said, "All right, here's a company. Here's all the vulnerabilities that they have open. Now, if their capacity was at the 25th percentile, 50th, or 75th percentile, they're going to close X number of vulnerabilities this month. And what effect would different strategies have on their exploitability at the beginning versus the end?" And so if they did nothing, obviously their exploitability would stay exactly the same. Figures 17 and 18 cover this. So 18 has a dashed line that says, "Do nothing." And then we've got different strategies. A random strategy is going to put all of the vulnerabilities in a hat and pick them out until they hit their capacity. And this is a baseline of doing something, but doing something very dumb. And it's good to have that baseline, because then you look at something like CVSS and you see a minute improvement over a random strategy. And so that's interesting. If you randomly pick things, you're doing slightly worse than CVSS. So CVSS may slightly improve on that, but that's not a great thing. Then we've got another strategy we looked at called quickest, which is essentially, let's take a look at things that are easy to remediate. So given how fast these are to remediate, maybe it's a Patch Tuesday thing, maybe Microsoft has some auto-patch stuff, maybe it's no problem to do it. If you just did things that were easy, up to your capacity, you would be outperforming CVSS. The next strategy that I think is super fascinating is Twitter. And we looked at essentially, how many mentions on Twitter did a CVE have? And this is absolutely just insanely stupid counting of CVEs mentioned on Twitter. And this includes like [inaudible]'s Twitter account that says, "We published CVE X." This is insanely dumb. Let's just count on Twitter. And this outperforms CVSS and the easy-to-remediate things.

Dr. Wade Baker: Sorry, just to break in, just to put a number on that: Twitter outperforms CVSS by 2x. So it's twice as good.

Jay Jacobs: Yeah. So if you'd rather not do CVSS and you just want to count up Twitter mentions, that might be more effective, twice as good from a CVSS perspective. So the next one we had was prevalence, and this is basically, if a vulnerability is widespread, it might also be widespread in attacks. And that's what we actually are seeing: if you focused on your most popular vulnerabilities, you're going to reduce exploitability even more than with the Twitter mentions.

Dan Mellinger: And that's most popular for specific organization or...

Jay Jacobs: It was across our entire data set. And so obviously, an individual company may have to adjust, but I think they would probably see a similar gain if they just looked at popularity. But then we get into the really, really good one, and that is: is there exploit code published for this? And we're using Metasploit, Exploit DB, and GitHub, and we're slowly adding some other sources in there. But largely, this one just completely outperformed everything else. And then finally at the end, we had perfect information, and this is using EPSS, which isn't fair, because we're using EPSS as the exploitability score. So that's why we're calling it perfect information: if you had perfect information, this is how good you could do. And one of the interesting things, if you look at figure 17, on the perfect information line, there are still a few dots way over to the right, meaning even if they had perfect information, their environment has so much work to do that they're not shifting the needle even with perfect information. So they probably have several months of really digging into their vulnerabilities, trying to fix them, until they're going to see a benefit.
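
For the curious, the skeleton of a strategy-versus-capacity simulation like the one Jay describes might look like the toy sketch below. The field names, synthetic data, and exploitability rollup are our assumptions, not the report's code; note that with real data, exploit-code availability correlates with exploitation, which is what drives the report's results, whereas this synthetic data only shows the mechanics:

```python
# Toy skeleton of the strategy simulation: each strategy is just a
# ranking function, and capacity caps how many top-ranked vulns close.
import random
from dataclasses import dataclass

@dataclass
class Vuln:
    epss: float             # probability of exploitation
    cvss: float             # CVSS base score
    twitter_mentions: int   # raw count of CVE mentions
    has_exploit_code: bool  # seen in Metasploit / Exploit DB / GitHub

STRATEGIES = {
    "random":       lambda v: random.random(),
    "cvss":         lambda v: v.cvss,
    "twitter":      lambda v: v.twitter_mentions,
    "exploit_code": lambda v: v.has_exploit_code,
    "perfect_info": lambda v: v.epss,  # "cheating": ranks by the measure itself
}

def exploitability(vulns: list[Vuln]) -> float:
    """Asset-style rollup from earlier: the highest EPSS score still open."""
    return max((v.epss for v in vulns), default=0.0)

def simulate(vulns: list[Vuln], strategy: str, capacity: float = 0.155) -> float:
    """Close the top-ranked `capacity` share of vulns; return what remains."""
    budget = int(len(vulns) * capacity)
    ranked = sorted(vulns, key=STRATEGIES[strategy], reverse=True)
    return exploitability(ranked[budget:])

random.seed(8)
org = [Vuln(epss=random.betavariate(0.3, 8.0),
            cvss=random.uniform(2.0, 10.0),
            twitter_mentions=random.randint(0, 50),
            has_exploit_code=random.random() < 0.16)
       for _ in range(5000)]
for name in STRATEGIES:
    print(f"{name:>12}: {simulate(org, name):.3f}")
```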

Ed Bellis: I would say for the most part, this isn't actually surprising to me. It's certainly funny sometimes. Certainly comparing things like Twitter to CVSS as an example. The exploit code is stuff that we've covered in previous volumes. In fact, in volume seven, we talk very heavily about," Hey, maybe it's never a good thing to actually publish exploit code because it does indeed shift the attacker's momentum to the left." But at the same time, I guess I could say," Well, now that I know that there's exploit code available, if I was to take this strategy, I could focus on this and I could do quite well compared to all these other strategies."

Dan Mellinger: Absolutely. And also, I petition that if we're going to brand any of these strategies, that we brand randomly patching stuff as the Dan strategy. I would love to go down as that. That'd be great.

Dr. Wade Baker: So long as it's not- [inaudible].

Dan Mellinger: The Dandam. Wade, any thoughts here?

Dr. Wade Baker: I'll move us a little bit forward, because we not only wanted to compare strategies, but we wanted to compare capacities. So there's two things in this that we're looking at. One, what strategy are you using? Are you randomly picking vulnerabilities? Are you picking vulnerabilities with exploit code, for instance? Also though, how does that capacity that we mentioned before factor in... Okay. We know the median is 15%. We know 6.6% is the bottom 25th percentile, and the 75th percentile is 27%-ish. So how does that factor into all of this, the combination of a strategy and your capacity? How do you manage these two things? What's more important in terms of seeking that goal of minimizing exploitability? And that's where I think this really gets interesting, as this report closes out and as this podcast closes out. We start comparing things. There's some interesting stuff. Going back to exploit code: you could have exploit code as your strategy and be at the bottom in terms of capacity, you could be in the low 25th percentile in terms of capacity, and you're still going to outperform high capacity combined with a CVSS-led strategy. And things like that, I think, I hope, are really informative to organizations that are trying to figure out, "Hey, what should I focus on? Is it more important for me to focus on increasing my capacity to remediate vulnerabilities at higher volume and more quickly? Or do I need to first focus on my strategy, which vulnerabilities should we remediate?" And I think this pretty firmly says strategy trumps capacity in most cases here. You want them both, but if you've got to pick one, focus on strategy.

Dan Mellinger: Yeah. I find it very interesting, because basically we defined these kinds of speed limits: low, medium, and high performers. But this chart, and we're talking about figure 18, really breaks it down. We see what you could do at all of those break points with the perfect info, and then the next best thing is exploit code, and everything else is not even close. So if you are a low capacity, low remediation organization, you don't have the resources to patch this stuff, it's really, really hard; if you're focusing on vulnerabilities that have exploit code, you're going to far and away do better than any other strategy that we measured here, which is just mind-blowing.

Dr. Wade Baker: Work smarter over work harder. Do both if you can.

Jay Jacobs: And that gets called out explicitly in figure 19, the one after the chart you mentioned, Dan, where we look at the changes between changing capacity versus strategy, and the strategy just generally blows it away.

Dan Mellinger: Yeah. Can you actually walk through this, I guess, rainbow jump chart? I don't know how to describe it.

Dr. Wade Baker: I call it the Volkswagen.

Dan Mellinger: The Volkswagen.

Jay Jacobs: The Beetle. Yeah. Essentially, if you start with low capacity and CVSS, for example, you're at like a 0.75 exploitability. And then if you go to a high capacity, you get about a seven-point reduction, as opposed to, if you went with the same amount of capacity to exploit code, for example, you get about a 32-point reduction. That's a huge change. You see a big arrow shifting, and that is low capacity with exploit code gets you down to 0.4 or so. And then if you go from low to high capacity with exploit code, you get another 10-point reduction. So with CVSS, changing capacity, you get seven points; with exploit code, you get 10 points; but when you go from CVSS to exploit code, you're getting a 32-point jump. It's really quite a big difference having that strategy. And again, these are relatively dumb strategies. Does exploit code exist? This is not an analyst who has spent 20 minutes looking at this vulnerability and making a claim. This is yes or no: do we have evidence that exploit code is out there? And that's it. That is the entire logic decision tree for exploit code.
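
Netting out the arithmetic Jay just walked through (points of average exploitability, as quoted in the episode; treat these as illustrative), the 32-point swing from switching strategy dwarfs either capacity bump:

```python
# Worked arithmetic from the figure-19 walkthrough (values as quoted):
low_cvss     = 0.75                # low capacity + CVSS strategy
high_cvss    = low_cvss - 0.07     # raise capacity, keep CVSS      -> 0.68
low_exploit  = low_cvss - 0.32     # keep capacity, switch strategy -> 0.43 ("0.4 or so")
high_exploit = low_exploit - 0.10  # then raise capacity as well    -> 0.33
print(high_cvss, low_exploit, high_exploit)
```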

Dan Mellinger: Yeah. You ground it out pretty well. You need the data set to understand if that happened, or you can go check out EPSS for free online now and go download the data. Or if you want to automate it, come hit up Kenna. I'll do my marketing spiel real quick. But Ed, actually, I wanted to close out with you here, because we've been working, like you basically have your whole career, on how to prioritize vulnerabilities to move the needle, move the market. And it seems like things are moving that way. You just wrote a byline on CISA and their list of prioritized vulnerabilities. So, in a nutshell, any final thoughts from your perspective?

Ed Bellis: Yep. No, absolutely. And to hammer that home, like you mentioned with CISA, we're starting to see organizations take more of, whether it be a risk approach or an exploit-driven approach, where I know that there's exploit code that's available, or even better yet the perfect info, which is, I know that this is actually being exploited in the wild somewhere, and I want to go remediate this first, before I become one of those exploited in the wild. And then, we always talk about how an organization essentially has two levers to solve this problem, and this problem being: I have more vulnerabilities than I can possibly fix. So given that, I can either, one, prioritize these, and that's much like what we're talking about here with these different strategies, or two, I can use automation in terms of remediation, which is much more of the other half of this, which is increasing my capacity. And from the things that we've seen in the previous reports, the top performers are, in reality, doing both. They are applying a risk-based strategy and they are doing as much automation as they can, to remediate as much as they can.

Dan Mellinger: Awesome. No, that's a great, great summary. Oh, man. A lot to digest. This was another great report. So I'd like to thank all of our listeners at home. Go check it out, go download the report. If you have any questions or spot a typo (by the way, we crowdsource typos every single time), feel free to tweet me, let me know: DT Mellinger, Ed Bellis, Jay Jacobs, Wade. And if you have any suggestions, questions, comments, or concerns, feel free to hit us up. We're all available and love getting feedback, and we love people who challenge the data as well. So let us know. And with that, thank you, everyone. Have a nice day.

DESCRIPTION

We hop on the line with the Cyentia Institute to discuss our latest joint research, Prioritization to Prediction, Volume 8: Measuring and Minimizing Exploitability. The new report reveals that exploitability for an organization can, in fact, be measured and reveals the best strategies to minimize it.