The Attacker-Defender Divide w/ Cyentia Institute
Dan Mellinger: Today on Security Science, we analyze the attacker-defender divide. Thank you for joining us as we discuss the sixth and newest report in our ongoing dive into the Prioritization to Prediction research series that we at Kenna Security have partnered with the Cyentia Institute to put out there. We're doing Prioritization to Prediction Volume Six: The Attacker-Defender Divide. With me today, I have the president-elect of risk-based vulnerability management, Kenna Security co-founder and CTO, Ed Bellis. How's it going, Ed?
Ed Bellis: It's going great, Dan. Thanks again for having me.
Dan Mellinger: I have a feeling that title's going to be a little Easter egg if people are listening to this episode in like five years. The context may be lost. Anyway, I'm also pleased to welcome our next guest, he's the quantifier of cyber loss, analyzer of exploits, partner and co-founder of Cyentia Institute, Dr. Wade Baker. Dr. Baker, how's it going?
Dr. Wade Baker: It is going well, sir. Thanks for letting me join in on this shindig.
Dan Mellinger: Awesome, and thanks for helping us with these reports. A quick caveat, as always, you can find any of the materials that we reference here on the podcast episode page at kennaresearch.com/podcast. This one will be a little bit different because this report's actually launching a week from today, and we will be promoting the hell out of it, I'm sure. So, this will be across Kenna Security's home page. I'm sure if you've ever interacted with us, we will email you, you'll probably see it on social media, anything we could think of to get this out in the world, it'll probably be there. So, feel free to yell at me via Twitter if you'd like to.
Ed Bellis: Super Bowl commercial.
Dan Mellinger: Absolutely. Anyway, I'll do a quick recap on the previous reports. We've done five up till this one. Just a quick reminder to set context and orient everyone: there's a ton of vulnerabilities, we're at just shy of, I think, 145,000 in NVD that have been tracked thus far. Remediation takes a ton of time. About 45% of vulns can be handled within the first month, two-thirds within three months, and just under 20% are still open after a year; organizations just can't fix them all. On average, 10% can be closed in any given month, which is the Bellis rate of remediation, I think we titled that one. But not all vulns need to be fixed right away. Only about 5% of vulnerabilities exist in corporate environments and have exploit code available to them. So, those are the ones we're typically telling you to go attack first, and organizations can fix all their high-risk vulns. So, 51%, and we've actually seen this number change over the couple years that we've been tracking this stuff, but 51% of organizations are actually reducing their high-risk vulnerability debt, 16% are maintaining, and only 33% are falling behind. And when we first looked at that, those numbers were almost exactly flipped. So, that's awesome. And then from the last report, the volume of assets per organization and the density of vulnerabilities per asset vary extremely widely. So, there's no rule of thumb for this one, but ultimately, we can say that vendor-led VM programs and patch cadences seem like a key to success in addressing the volume of vulnerabilities quickly. Whew, that's a lot of stuff going on. We're launching this report on the 18th, a week from when we're actually recording this episode, and I want to do a quick shout-out. We take a first-of-its-kind look at CVEs, and particularly with this report, we're looking at observed evidence of exploitation.
And anyone who's tried to look at threat intel, or been involved in cybersecurity data analysis for a while, will know that that's a really hard data set to get, and it's a really hard data set to have cleanly, and we couldn't track and/or analyze this without the partnership of Fortinet and their exploit data. Fortinet is one of the biggest cybersecurity companies on the planet. They do hardware, software, and they have probably the world's largest network of sensors out there: UTM devices, next-gen firewalls, enterprise firewalls, all that good stuff. So, I just wanted to give a massive shout-out to Fortinet, because otherwise, this data would not exist.
Ed Bellis: Hear, hear.
Dan Mellinger: Yeah, hear, hear. All right, now that I got all of that out of the way: for this report, we looked at the gamut of 473 CVEs from the overall list of roughly 18,000 that were published in 2019. And those 473 were important because, like I mentioned earlier, they have observed evidence of exploitation. So, we can tell something happened where someone tried to exploit them in the wild. Wade, why is that important? Why did we look at that this time?
Dr. Wade Baker: Well, if you look at what we've been doing in the Prioritization to Prediction series, this is a logical next step. And I want to emphasize that for a second: these are not planned out. When did we start this? Two years ago? Two and a half years ago? It is not as though we had a roadmap figured out of what volume six would look like way back then. We're learning and asking questions and seeking answers, obviously a little bit ahead of the people that read these reports once they're published, but not far in advance. I mean, we're doing this live, so to speak. So, this seemed like a logical next step. And if you think about some of the things that Dan just highlighted, one that sticks out is that the difference between an organization falling behind on the vulnerabilities in their environment, and gaining ground on them and reducing that security debt, is the ability to focus on the vulnerabilities with known exploits. Using that as a prioritization mechanism, you can actually go from no hope of keeping up to actually doing it. So, exploitation is an important thing to know about. And if you couple that with what we've been doing outside the report, if you're following the Exploit Prediction Scoring System, you'll know that this is another thing that Kenna and Cyentia and other organizations have collaborated on to figure out, well, what are the characteristics of a vulnerability that make it more likely that it will be exploited in the wild? So, you put these two things together, and that's where we arrive at volume six. We really, really, really want to know as much as we can about these vulnerabilities that are exploited in the wild, because they're a fantastic mechanism for prioritizing vulnerability management.
Dan Mellinger: Awesome. Ed, you look like you wanted to say something.
Ed Bellis: Oh, no, I mean, what Wade was saying is absolutely right. And it's funny when we talk about the roadmap because, and we even talk about it at the end of volume six here, we introduce more questions with every volume of this report. So, we answer a bunch of things, and I feel like we introduce probably twice as many questions as the ones that we answer. They're all new questions, which is great, and which guides further research. And I feel like that's how this roadmap has evolved over time for sure.
Dan Mellinger: Farther and farther down the rabbit hole. Yeah, and I think, Ed, you've mentioned like, hey, we wanted to look further into that when we were doing volume three, and then volume four, hey, look, we're looking into that next. So, that's a very, very good point. But, yeah, let's start with a little bit of context, right? I think there's a pretty big final, I don't want to say conclusion to this one, because there's a cool finding, but there's still some more investigation, I will tease that now. The beginning of this report is really setting context, because this is a new data set, this is kind of a new look, and we're trying to work through and figure out how do we measure some of this stuff, I guess, is the best way to say it. So, you start off with enumerating exploited vulnerabilities, and doing a breakdown of the 473 for which we've seen evidence that they've been exploited in the wild. When you look at this gamut, I don't think there's any surprise that the majority are Microsoft and Adobe, right? If you've followed any of the reports, they tend to have the highest volume. Right?
Ed Bellis: Yep.
Dr. Wade Baker: Yep.
Dan Mellinger: And then they have a really large count. But what I thought was interesting is that when you look at prevalence, the prevalence of exploited vulnerabilities is actually really low. You would think it's high, but when we look at this distribution, I think 75% of CVEs are detected in fewer than one in 11,000 organizations.
Ed Bellis: Now, just to clarify on that one, when we talk about prevalence, we're talking about prevalence of exploitation, not prevalence of the vulnerability existing. Correct, yeah.
Dr. Wade Baker: Yep. Yeah, and this is one of those things, Dan, as you were mentioning earlier, having the visibility to see something like this is really the domain of Fortinets and other large network providers that have that global footprint of sensors, where you can see lots and lots of organizations and detect the spread of exploitation across that population. And it really is like, I mean, we've all been living in the pandemic, and a lot of the same principles apply, right? So, something goes live, it's first seen in the wild, and then it starts spreading. And sometimes it spreads really fast, sometimes it spreads slowly, sometimes it spreads within a localized area, sometimes it spreads globally. You never know. And I was actually surprised that a lot of these vulnerabilities that are exploited in the wild, they're not exploited everywhere. That exploitation is fairly limited. And I find that hopeful, in the same way that knowing that 5% of vulnerabilities are both exploited and seen in organizations is hopeful, because it means we have a smaller problem to actually deal with. When we talk about exploitation, knowing that there are relatively few that see massive widespread prevalence is also like that, because it means not many of them are going to hit that uber-spread tier.
Ed Bellis: Yeah. I mean, to get back to EPSS and all of the exploit predictions and things like that, there's a big difference between predicting whether or not there will be an exploitation somewhere, and whether you would be exploited, and we can see that big difference here.
Dr. Wade Baker: Yeah.
Dan Mellinger: Ooh. And, Wade, you talking about that kind of epidemiology, shout-out to our epidemiology for cybersecurity episode. I just got an idea, I want to get you on with Sam Scarpino, our resident epidemiologist. That should be fun. That dude's awesome. You guys would have a lot of fun jamming on the podcast. Anyway, future episodes aside, I think the max prevalence you saw was one in three, which was for less than 1% of the 473 CVEs, and just under 6% were even detected by one in a hundred organizations. So, pretty wide distribution, and that's pretty rare, ultimately.
Dr. Wade Baker: It is. And I think it's something for people to remember. We tend to have this thing of, "Oh, there's a vulnerability out. We've got to fix everything right now." Or, "Oh, no, a vulnerability is being exploited in the wild. That means I'm going to get hit tomorrow." And, I mean, maybe that's true, but for most organizations, it's not true. And we need to understand, there's a lot of gray area between an exploit that's in the wild and an exploit that's knocking on your door.
Dan Mellinger: Yeah. Well, and that's why risk is so hard to calculate, right? I mean, the risk reward. The timelines are long, the chance of something actually happening is pretty low overall, but the ramifications, and this is actually a plug for your IRIS report, the IRIS Xtreme, ramifications can be pretty severe, to the tune of billions. Right?
Dr. Wade Baker: Absolutely.
Ed Bellis: Our friend, Michael Roytman, might say there's a power law somewhere in there.
Dan Mellinger: Distribution. Yep. We're just plugging every concept we've gone through in all the podcast episodes. But I think that's actually a pretty good segue, because now we're talking about the likelihood and kind of the attack chain, or the sequence of events that starts to happen, and that's where we jump to next: the vulnerability lifecycle. Ed, I don't know if you have it up. Do you want to read through some of those stages real quick, just to show what the various things were that we're trying to track from a timeline perspective?
Ed Bellis: Yeah, yeah. I mean, I was looking through some of the findings. It goes from CVE reserved, which we can think of as usually the first thing that happens, but not always, to, I think, what was it? There's a patch published; part of the lifecycle was being detected by a scanner; part of it was actually being exploited in the wild; being patched or remediated by the companies after they've detected it with the scanner. What were some of the-
Dr. Wade Baker: Exploit code dropping.
Ed Bellis: That's right. Exploit code versus exploitation in the wild, which were different and pushed in either direction. I think that was it. Am I missing one, Wade?
Dr. Wade Baker: Those were the ones we focused on most. Yeah. We tried to get information on when it was first discovered, and when it was disclosed, but that doesn't tend to exist in a structured data format that's easily extracted. So, we had to push that to the side for now. Maybe later.
Ed Bellis: It's also very vendor-specific as to how much they publish in terms of when they became aware of a vulnerability, because a researcher discovered something, and then we were told about it, and Microsoft, versus Google, versus Adobe, versus Apple, they'll all be very different.
Dan Mellinger: And there's a little bit of nuance with some of this stuff as well. Like, CVE reserved can be seen as one of the day-zero events when you look at stuff. But we were discussing this earlier this week, I believe: sometimes companies or some of the CNAs reserve blocks of CVEs well in advance, nothing planned for. They just know, "We're Microsoft, we're going to find stuff. So, let's have these queued up so we can just populate them." So, there's a little bit of nuance to that stuff. But it's interesting because you guys start to lay out, I don't want to call it the typical order of operations for these attacks because-
Dr. Wade Baker: Yeah. It's not really typical, but it is fuzzy. As typical as you can get.
Dan Mellinger: Yeah. So, there's a ton of different sequences of these milestones that they looked at. The highest one was just, what, 15.9% of the, I guess, incidents that you guys tracked.
Ed Bellis: They start with the 473. Yeah.
Dan Mellinger: Yeah, the 473. So, almost all of them start with the CVE reserved, right? Because, like we said, those are typically reserved in advance, all that good stuff, but then the order changes. So, 16% of the time, it goes from a reservation for the CVE, to a patch being available, which is important later on, to being seen in the wild by vulnerability scanners. So, organizations are finding this vuln within their environment, not exploited, nothing like that, but they're finding it from a scanning perspective. Then the CVE's actually published, and then there's exploitation in the wild. And out of all of the different sequences that could happen, that is the most common, and it's only 16% of the time. So, it falls off very quickly from there. We're looking at figure four in the report, and I just thought it was interesting to look at how varied all of these steps were. Sometimes steps are omitted, sometimes things jump around in ways that you wouldn't expect, but it just shows that there really is no set order of operations for an attack cycle.
Ed Bellis: Yeah. I mean, you talked a bit about the milestones, and I think we covered a total of six. And the most common sequence only has five milestones. It's missing, I think, the exploit code available milestone as part of the lifecycle at all.
Dr. Wade Baker: Yeah. And I was fascinated looking at all of these because, again, this just goes to: the better we know this, the better we as defenders can manage what we're doing, to try to line up with what's happening. Because a lot of these things happen, and vendors don't have control over it, a lot of these defenders don't have control over it, but they all play and they're all doing their things and they have different goals and objectives, and the attackers also play into this, they're doing things. So, it's just interesting to see it's all over the place. Like, the last one on that figure four goes from CVE reserved and jumps straight to exploitation in the wild. That's the 10th most common pattern, and-
Dan Mellinger: At 2.3% for 2019.
Dr. Wade Baker: At low volume, but still, that's... You can't just assume, "Oh, that CVE was reserved, I'll wait for the patch to even worry about it," because there's-
Ed Bellis: Yeah. You point out later in this report that some of these milestones overlap. And we'll see this later in some of those other figures; the difference between some of these milestones is often same day. So, yeah, it may have come in this order, then this order, then this order, but the difference, in some cases, could be hours.
Dan Mellinger: Yeah. And I think a lot of people have kind of a set order they would think of the lifecycle in, in their head. I know, I think it was volume one, we laid out this kind of lifecycle ourselves, and we set these things in an order, and now we're finding, man, sometimes they happen in that order, sometimes not so much. And as we've actually found out in several of these, I mean, the distribution, we talked about that earlier as well, is wide. It's very, very broad. And one of my big takeaways is actually figure five. So, we're finding all these different sequences, and I think someone did the permutation: there's like 10,000 different orders that could potentially happen. But when you look at the distribution, exploited in the wild has the widest from a timeline perspective. So, it can happen anywhere within the sequence. And it just shows you how complex this is, and probably why a lot of people haven't tried to tackle this type of analysis before. That would just be my guess.
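To get a feel for why the sequence space is that large, here's a minimal sketch. The milestone names, day offsets, and CVE labels below are hypothetical stand-ins, not the report's data; it just illustrates counting possible orderings and tallying observed sequences the way figure 4 does:

```python
from collections import Counter
from itertools import permutations

# Six lifecycle milestones tracked in the report (names paraphrased here).
MILESTONES = ["reserved", "published", "patch_available",
              "scanner_detected", "exploit_code", "exploited_in_wild"]

# Any non-empty subset of the six milestones can occur, in any order.
# That alone yields sum over k of P(6, k) = 1,956 distinct sequences;
# allowing same-day ties between milestones grows the count to roughly
# 9,000+, near the ~10,000 figure mentioned above.
possible = sum(len(list(permutations(MILESTONES, k)))
               for k in range(1, len(MILESTONES) + 1))
print(possible)  # 1956

# Tallying observed sequences, figure-4 style (hypothetical day offsets):
events = {
    "CVE-A": {"reserved": 0, "patch_available": 30, "published": 31,
              "scanner_detected": 32, "exploited_in_wild": 60},
    "CVE-B": {"reserved": 0, "exploited_in_wild": 5},  # straight to exploitation
}
sequences = Counter(tuple(sorted(days, key=days.get))
                    for days in events.values())
print(sequences.most_common(1))
```

Sorting each CVE's milestones by their day offset recovers its observed sequence, and the `Counter` gives the frequency ranking of sequences across the data set.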
Dr. Wade Baker: Yeah, it's a messy domain for sure. But at the same time, one of the things that struck me after we went through all of this is, there's a lot of disorder, but if you zoom out and you don't worry about the fine differences between milestones, because like you said, a lot of them are on the same day, if you just generally look at it: a CVE is reserved, time passes, coordination is occurring, and within a relatively short amount of time, you get the CVE published, you get a patch, vuln scanner signatures are launched, people start remediating, exploit code is dropped, and then exploitation in the wild typically comes after that. Not always, like we've said, but you have this flurry of coordinated activities. And that's one of the takeaways as I look at these things. It really impresses upon me that a lot of those events are timed that way, because you have the people that are finding and disclosing, and the vendors, and everybody in between, trying as best they can to make sure that by the time that CVE becomes public knowledge, that stuff's ready to go.
Dan Mellinger: Yep. And there's a reason for that, or there should be.
Dr. Wade Baker: Yep.
Dan Mellinger: Another tease. But I'm sure, I mean, knowing the cybersecurity community, people will find specific examples that buck the trend and all that good stuff, but, yeah, to your point, when you look at things at kind of the 10,000-foot level, you can start to really identify some of these patterns. I think we'll skip over some of this stuff. If you really want to go into some of this process detail, you should all download and check out the report; there's a ton of good analysis on the timelines, where things happen and when. And then the big part is you start to break down these timelines by different, I guess, day-zero events. Right? So, the first one was the CVE reserve date, because that seems to be the only one that has very strong correlation of things typically starting there. And so, when we look at that, I think figure nine is kind of the... Well, here, let me step back. CVE reservation tends to be the day-zero, for lack of a better word, for relative tracking purposes; most, but not all, events occur after the CVE is reserved. So, just to get that out there. But then we jump into exploit code being available or released relative to the CVE publish date. So, I don't know, Wade, do you want to take a stab at that one real quick?
Dr. Wade Baker: Yeah. So, if you can imagine this section, you've got a series of charts and you have a timeline, right? Like Dan said, you've got CVE published, we know when that happened, and when do various things happen around that. So, this one is exploit code. And a little under 40% of the time, exploit code is published before the CVE is published; another 18% is almost same day, like within a day of publication. And there is some overlap there. But you get a sense that, again, these things are often timed. So, if you put those together, you're getting up around 50% of the time: by the time you get to when a CVE is published, or a day or so after, about half of them have exploit code available.
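That before / same-day / after split can be sketched as a simple bucketing of date offsets. The dates below are hypothetical, not Fortinet's data, and the one-day "same-day" window is an assumption for illustration:

```python
from datetime import date

# Hypothetical (cve_publish_date, exploit_code_date) pairs.
pairs = [
    (date(2019, 3, 12), date(2019, 2, 1)),   # code out well before publication
    (date(2019, 5, 14), date(2019, 5, 14)),  # same day as publication
    (date(2019, 8, 13), date(2019, 9, 30)),  # code trails publication
]

def bucket(publish, code, same_day_window=1):
    """Classify exploit-code timing relative to CVE publication."""
    offset = (code - publish).days  # negative means code came first
    if offset < -same_day_window:
        return "before"
    if offset > same_day_window:
        return "after"
    return "same-day"

counts = {}
for publish, code in pairs:
    b = bucket(publish, code)
    counts[b] = counts.get(b, 0) + 1
print(counts)
```

The same bucketing works for any pair of milestones (patch vs. publication, detection vs. patch), which is how the later figures in this section line up too.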
Dan Mellinger: And you guys looked at CVE publish date, because that's effectively when the world knows that an exploit exists, here are the details, both good and bad, right?
Dr. Wade Baker: Yeah.
Dan Mellinger: Then attackers and the defenders.
Dr. Wade Baker: At least ideally. Yes.
Dan Mellinger: Yeah. That's a good point. And so, we start looking at a few of these to build out, it looks like, some context for the final charts. So, in figure 10, we look at the patch date relative to the CVE publish date. And there's actually some hope here. Roughly 60% of patches are actually available before the CVE is ever published. Another 37.3% are available within a day or so of it being published. So, we're talking about what, 97%-ish?
Dr. Wade Baker: Yeah. There's not very many that trickle out after the CVE is published, which is, again, it's by design. That's the way we want it to work. By the time the world knows about this thing, you can fix it.
Dan Mellinger: Was that surprising to you, Ed, that a lot of patches are available right around the time the CVE's published?
Ed Bellis: Not at all. I mean, given the coordination effort that we've seen over the last several years, there's very few where you look at a CVE in the National Vulnerability Database and there isn't a fix available somewhere there.
Dan Mellinger: That makes sense. Well, and that's a good thing, and I think we can give another shout-out to Microsoft because they're probably responsible for like half of those, if we look at the distribution-
Ed Bellis: Shout-out for being responsible for half of those CVEs.
Dan Mellinger: And patches as well. And then we jump into, what, figure 11? The first detection relative to the patch availability. So, this is a little bit different, but this is where we see the numbers start to go a little backwards from what we would expect, or at least what I was thinking. So, from first detection, the first time it was scanned in an environment, correct?
Dr. Wade Baker: Yep.
Dan Mellinger: Right? Okay, relative to when the patch was available, only 4.1% of vulns are actually scanned or identified before a patch is available. And I think there's probably some nuance there, like, do they have signatures to scan for it?
Dr. Wade Baker: Exactly.
Ed Bellis: I'm actually surprised that it's as high as 4%, to be honest.
Dr. Wade Baker: Yeah. One of the things we didn't do is check out, okay, what is the exact list of CVEs for all of those cases? But we could, and sometimes it is a matter of when signatures are available. And we've all seen vulnerabilities where one drops and a patch can't be created immediately; it takes some time. And they say, "Hey, if this is in your environment, do X, Y, and Z." So, that's another thing that's going on here.
Dan Mellinger: Yeah. Well, and, Ed, I think now looking at this, it speaks to the coordination, right? Because before the patch, it's 4.1%, but within that one-to-two-day period of when the patch is dropped, it's 73%. Right? So, it goes from, doesn't exist, to, everyone knows, we're scanning it, and we have a patch available almost immediately. Interesting.
Ed Bellis: There's a big common pattern here, I think, between the last several figures, which is that straight line right around day-zero of the patch becoming available. A lot of different things are happening within that very small window of time.
Dan Mellinger: And I would really encourage people, even more than with any of the previous reports, to follow along on this one, because I think the charts do a really good job of illustrating from that kind of relative day-zero, whatever you're looking at at that time, and showing that context. And so, now we're looking at exploit code being available relative to the patch date. And 24% of the time, exploit code is available before the patch date. Now, is that surprising?
Dr. Wade Baker: I don't know if that surprises me. I mean, we'll talk about this one more in just a bit here. We keep foreshadowing it. But, again, you have this idealistic way that vulnerability lifecycle should proceed, and this is one of those that you would rather not see. You don't want exploit code available before you can fix it, which is usually represented by the patch. But that is not uncommon.
Ed Bellis: And so, I would say at first, when I first saw this chart and was thinking it through, I thought, "Oh, that is surprising. That seems like a large amount." But then let's remember that we've whittled this data set down to vulnerabilities that are actually being exploited in the wild, which, as we know, is a very small number of vulnerabilities. And then we're looking at exploit code associated with that. So, the percentage of vulnerabilities across all CVEs in 2019 that had exploit code before the patch is much, much smaller.
Dan Mellinger: Yeah, that makes sense. And it's always good to point out, and I think we'll reinforce this later, correlation does not necessarily mean causation, and the causation could go the other way. Like, Ed, you're talking about, for this particular one, that's a possibility if there are causal effects. Right? I will jump through figures 13 and 14 real quick so we can get to the meat of this discussion. So, in figure 13, we look at the first exploitation relative to the patch being available. So, not exploit code, but observed exploitation evidence in the wild. Before the patch is available, it's 10.8%, which seems kind of high, but when we compare this to the other curves, even a month after, it's only at 44.2%, I think, of the overall number of attacked orgs. The entire gamut. So, that's actually relatively small compared to, say, patch rates, where they hit 93% a month afterwards. And then in figure 14, they look at the first exploitation relative to the exploit code being available. And 21.1% is before, so someone's working on that before everyone else knows about it. It only jumps another 3.2% right around the same time, and a month afterwards it's at 55.8%. So, with some of this context-setting out of the way, let's get into the fun stuff: measuring momentum. Wade, to do this, we're trying to look at the momentum between attackers and defenders. Can you go over kind of the three timelines that you guys needed to establish to start to analyze this?
Dr. Wade Baker: Yeah. I've always had this question in the back of my mind of who has the advantage. And we actually purposely chose momentum instead of advantage, because it's hard, I mean, advantage is a much larger picture than what we're measuring here, but we want to know: who moves first? Who is growing, whatever it is that they're doing, at a rate? And how does that relate to the other groups? So, the first thing we measure here is how quickly vulnerability scanners are detecting vulnerable assets. You can view this as sort of the find rate, how quickly organizations are getting after it in terms of finding where that vulnerability exists in their environment. And it actually goes pretty quick: within about a month after the patch is available, 80% of them are already found. So, it happens pretty quick. The second thing we measure is remediation. Finding is one thing, fixing is another. And so, the next curve that we look at is how quickly they remediate those vulnerabilities once they're found. And we've done this in prior P2Ps. So, if you're familiar with survival analysis, and, I don't know, we've probably done 50 different charts of a curve like this, this should not be a shocker. But it takes about five months to remediate 80% of vulnerabilities across the environment, five months after the patch is available, I should say. And then the third thing that we measure is the attacker side, the exploitation timeline. And this is the new one here, so it probably deserves a little bit of explanation. Think about this as trying to answer how quickly exploitation spreads across whatever proportion of organizations detected it. We already talked about prevalence, but at some point, there's the first organization that ever registered an exploit attempt against that vulnerability. That's day one. And then there's the last organization that ever detected that vulnerability.
And we're looking at how time progresses between those points. And so, we find it typically takes six months for attackers to reach 80% of their target population. That is, after six months, 80% of the organizations that will ever detect exploitation in the wild have detected it.
Dan Mellinger: And to be clear, that's not 80% of organizations who have that vulnerability in their environment, it's 80% of the organizations who we can identify had some form of exploit behavior, some kind of trigger.
Dr. Wade Baker: Yeah, they detect it.
Dan Mellinger: Okay.
Dr. Wade Baker: Yeah, exactly. Exactly. It does not mean compromise; no incidents here. And in many cases, this is opportunistic activity, attackers or even red teams scanning for vulnerabilities, and some IPS sensor registered, "Ooh, this exploit was attempted against this vulnerability."
Dan Mellinger: Got it.
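That six-months-to-80% measure can be sketched as an empirical cumulative calculation over first-detection dates. The day offsets below are hypothetical, not the report's data:

```python
import math

def days_to_reach(first_seen_days, fraction=0.8):
    """Day offset (from the first detection anywhere) by which `fraction`
    of the organizations that will ever detect exploitation have done so."""
    ordered = sorted(first_seen_days)
    k = math.ceil(fraction * len(ordered))  # orgs needed to hit the fraction
    return ordered[k - 1] - ordered[0]     # offset from day one

# Hypothetical first-detection offsets (in days) for ten organizations.
detections = [0, 2, 10, 15, 40, 60, 90, 150, 170, 400]
print(days_to_reach(detections))  # 150: 8 of 10 orgs reached by day 150
```

Note the denominator is only the organizations that ever detect exploitation of that CVE, which matches Dan's clarification above: it is not 80% of everyone who has the vulnerability.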
Ed Bellis: And in some ways, you can look at that as another form of defense, right? If I have an IPS signature to defend against it.
Dr. Wade Baker: Yeah.
Dan Mellinger: But it's also an opportunity, right? Because it's a trigger that something happened. Whether good, bad, or indifferent, someone's probing and finding that. Okay. So, you start to layer these things on top of each other.
Dr. Wade Baker: Yeah. And so, I think it's probably not surprising that the discovery of vulnerabilities by vulnerability scanners is pretty quick, remediation, by and large, is next, and then exploitation in the wild. But there is some intermingling of those last two, and that is the most fascinating part of this study for me. I don't know if we can skip ahead, I'm looking at figure 20 right now, is that okay?
Dan Mellinger: Yeah, yeah.
Dr. Wade Baker: But this is trying to, going back to that question of momentum, who has the momentum? So, in general, what we see is this pattern. Before the patch is available, attackers have the momentum. The red exploitation line lifts quicker than the blue defender line, the remediation curve, indicating that exploitation activity is happening before the patch is available, and before defenders start remediating systems. Right? Not a whole lot, it's still only about 10% at that point in time, but attackers, or exploitation, gain the momentum to begin with.
Ed Bellis: Yeah. It makes sense that not a lot of remediation is going on when there isn't a patch available.
Dr. Wade Baker: Exactly, exactly. And I think that is an important point to make is that defenders are clearly handicapped until that patch is available. The longer it is until that patch is available, the more and more momentum attackers will gain over defenders.
Dan Mellinger: Yeah, there's not much they can necessarily do about it before the patch is available. And as we actually found out earlier, for a vast majority, a lot of them can't even identify that they have the vulnerability until patches are available, because patch availability tends to coincide with CVE publish dates and the date a vulnerability is first detected within an enterprise.
Dr. Wade Baker: Yep. No, I have to say, this is the thing that probably surprised me the most in this entire report. We try not to make too big of a deal about exact dates in this, because this is an overall timeline for both remediation and exploitation here, but from what we see in general, typically, about one to two weeks after the patch is available, defenders actually gain the momentum. I did not expect that. I didn't expect it that quick, and I didn't expect that they'd maintain that momentum through six or seven months, which I think is shocking, that they actually are remediating systems at a faster rate than attackers are exploiting new organizations, or attempting to exploit new organizations, during that window of time. And, again, it's encouraging, it's surprising, but it really shows that you see those defenders engage and they go to work trying to remediate those systems.
Dan Mellinger: So, in theory, defenders, while they get a slower start, it's relatively rare that they will actually be behind the bell curve overall, that they'll get popped before they have a chance to remediate. And we're finding, which is pretty cool, that companies in general, once a patch is available, they are patching stuff, they're getting work done, and they're doing it at a really fast, I guess, velocity, right? The survival curve.
Ed Bellis: At least for that first 80%.
Dan Mellinger: Yes.
Dr. Wade Baker: Yeah. They get 80% of the job done, and that is where things change again, after about six or seven months, the attackers catch up and overtake defenders on momentum. And I think that 80% mark probably represents, you've found all the obvious assets in the environment, you fix the things that you can, and then there's just the stuff that's either legacy equipment or things that, " Oh, don't touch that because it's too critical," or whatever. We've all experienced this. There's the plateau of remediation, and you've seen it in lots of different survival curves that we've done in this research series, but that is what gives attackers back the control of momentum, and whatever we can do to avoid that as defenders is a good thing.
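The momentum idea Wade keeps coming back to boils down to asking which cumulative curve is climbing faster on a given day. A rough sketch of that comparison is below; the curve shapes and timescales are purely illustrative stand-ins for the report's remediation and exploitation lines, not the actual data.

```python
import numpy as np

days = np.arange(-90, 365)  # days relative to patch availability

# Hypothetical cumulative curves (fraction of final population reached):
# exploitation can start pre-patch; remediation only after the patch ships.
exploitation = 1 - np.exp(-(days + 90) / 110)
remediation = np.where(days < 0, 0.0, 1 - np.exp(-days / 75))

# Momentum = whichever curve is rising faster on a given day.
exp_rate = np.gradient(exploitation, days)
rem_rate = np.gradient(remediation, days)
defender_has_momentum = rem_rate > exp_rate

# Window (relative to the patch date) where defenders hold momentum:
held = days[defender_has_momentum]
print(f"defenders gain momentum around day {held.min()} "
      f"and lose it around day {held.max()}")
```

Even with made-up curves, the qualitative pattern from figure 20 falls out: attackers lead before the patch, defenders take over shortly after it ships, and the remediation plateau eventually hands momentum back to attackers.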
Dan Mellinger: Yeah. And that seems to speak to this long tail of exploitation that we've also seen in previous reports as well. Sorry, I cut you off there.
Ed Bellis: Yeah, no, that's exactly right, a long tail of exploitation, and then kind of a long tail of not remediating. That survival curve never ends. We've always seen in all the previous reports that you never quite get to 100%, there's always that long tail out there, and it seems like there's almost a steady state of exploitation that just continues along that path.
Dan Mellinger: Well, and this kind of speaks to the case for prioritization, right? If you know that these are the 473 vulnerabilities from 2019 that people are going after, go patch 100% of those 473. I don't care what you have to do, get these ones done, and then you can worry a little bit less about some of the other stuff and get that stuff done-
Ed Bellis: As we've seen in six volumes of this stuff, easier said than done.
Dan Mellinger: Yes. I love making it sound really easy because I don't actually have to patch and/or manage any of these programs.
Dr. Wade Baker: But to bring out one of our metrics, this is a coverage problem, right? This goes back to maximizing coverage insofar as you can, because then you're remediating a high number of these vulnerabilities with known exploits.
Dan Mellinger: Yep.
Ed Bellis: That's right.
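The coverage metric Wade brings up here (paired with efficiency in earlier volumes of the series) has a simple definition: coverage is the share of high-risk vulns you remediated, efficiency is the share of your remediations that were high-risk. A minimal sketch, using hypothetical CVE identifiers purely for illustration:

```python
def coverage_efficiency(remediated, high_risk):
    """Coverage: share of high-risk vulns that were remediated.
    Efficiency: share of remediated vulns that were high-risk."""
    remediated, high_risk = set(remediated), set(high_risk)
    hit = remediated & high_risk
    coverage = len(hit) / len(high_risk) if high_risk else 1.0
    efficiency = len(hit) / len(remediated) if remediated else 1.0
    return coverage, efficiency

# Hypothetical vuln lists (identifiers are made up for the example):
high_risk = {"CVE-X-0001", "CVE-X-0002", "CVE-X-0003", "CVE-X-0004"}
remediated = {"CVE-X-0001", "CVE-X-0002", "CVE-X-9999"}

cov, eff = coverage_efficiency(remediated, high_risk)
print(f"coverage={cov:.0%} efficiency={eff:.0%}")  # 2 of 4 covered, 2 of 3 efficient
```

Maximizing coverage, as Wade says, is exactly the "patch the 473 first" argument: push the numerator of that coverage ratio toward 100% before spending effort elsewhere.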
Dan Mellinger: Well, and then we jump into the interesting implications for some of this stuff. So, does early exploit code shift the momentum of attack and defense? Do they wait?
Dr. Wade Baker: They do indeed. And this is where this report gets very interesting, because for the vulnerabilities where exploit code was made public before the patch was available, that exploitation-in-the-wild line shifts 47 days earlier on average across the duration of that line. Sometimes it's less, sometimes it's more, but on average, exploitation occurs 47 days earlier. And that's because there's a relationship between exploit code and exploitation in the wild. If you have early exploit code, you have earlier exploitation. Not always the case, we've already talked about how these things are all differently sequenced, but in general, that happens. And when that happens, it changes the attacker-defender divide substantially. You go from a period where defenders clearly have the momentum to one where that shift left in the exploitation timeline erodes defender momentum down to not much.
Dan Mellinger: So, ultimately, what we see is, when exploit code is made public before a patch is available, we see, just so everyone knows, I think we're looking at kind of combination of figure 21 and figure 22 here, which figure 22 is the big one for this report. If you look at one figure and understand it, go look at this one, if you're listening in. But not only does it increase the velocity for exploitation for those specific vulns, it looks like it also, like you said, Wade, shifts everything towards the attacker's advantage by a month and a half, almost two months.
Ed Bellis: Yeah. Figure 21 actually shouldn't be a surprise to anyone. As Wade's talked about before, the relationship between exploit code and exploitation in the wild, we've covered that before. Obviously, if exploit code is available earlier, the chances of exploitation in the wild happening earlier are high. Figure 22, there are definitely some surprises here for me. Not so much on the left, but on the right. And probably the biggest surprise to me is actually the remediation rates on the right. That is to say, if the exploit code gets published earlier, we not only see momentum changing, where the attackers have momentum for a longer period of time than defenders, but we actually see that remediation curve suffer a bit as well, which was interesting.
Dr. Wade Baker: Yeah. And some of that might be the data here, because, again, we start off with 473 vulns, and for the majority of them, exploit code did not come before the patch. So, we're looking at a minority of those in that right panel of figure 22. But still, maybe there are reasons. This is something where we want to expand our data set and study in a lot more detail in the future. But what I don't take this as, and I want to be careful when I say this because I know this is a touchy subject, we start to dance around explosive issues in our domain, but-
Dan Mellinger: What? Like the moral and ethics of vulnerability and exploit code disclosure?
Dr. Wade Baker: Yeah, there are a lot of people that say, "Hey, drop exploit code as fast as you can, because that lets defenders start doing defender things earlier. It ultimately helps defenders." And I see no evidence of that in the data that we've been able to study. Now, there's also the argument that dropping that exploit code early helps defenders in the sense that they can begin detecting exploitation quicker, and I think that's an open door here. This may be what we're seeing. It's not that attackers are necessarily starting their activity earlier, defenders are just able to begin detecting it in the wild earlier, because IPS detection signatures or AV detection signatures used that exploit code to get there quicker. And I hate to say it, but as Ed mentioned already, we have a lot of open questions at the end of this report, and that's one of them. That's a really important question that we need to answer, and we'll try to do that later. But lots of things could be happening here.
Dan Mellinger: Opens up a ton of questions primarily around causation. We see the data and we can see what it's showing us thus far, to your point, it could be a smaller sample size. So, I'm sure, like we did with, what? P2PV1, when we're trying to wrap our arms around how do we analyze and measure these things, we'll do a smaller set, make sure that that's out, and then expand that scope. So, I'm sure, and I'm giving you guys extra work for the next one.
Ed Bellis: Well, fortunately, this is a long solved debate in our industry and we won't have any questions or anything arise.
Dan Mellinger: Yeah, absolutely. No one will ever challenge these results. But, I mean, from what we can tell, and even going back to the other reports, patches being available seems to be one factor that really leads to success overall, from what we can see, from an exploitation standpoint. And that's reinforced here as well, almost inversely, at least, from the data that we can see.
Ed Bellis: Yeah, there's tons of coordination going on around that patch availability, as we saw throughout many of those charts, which is definitely a good thing for defenders.
Dan Mellinger: Absolutely. And just to ground things out real quick: when a patch is available before exploit code is readily available, attackers get a very small advantage early on, and then the remediation rate for companies crosses over the, what, 5% mark within like a week after the patch is available. When we're looking at detection of exploits in the wild, when exploit code is available before the patch, whether they're detecting this or whatever causation we can find, that crossover doesn't happen until month one, and attackers have hit, what, 30% of their target population? And defenders have only patched 30% of those vulnerable systems. Right?
Dr. Wade Baker: Yeah. If you look at the months on those figures, and this is eyeballing it with round numbers, that's about a 15-month period, 12 months after the patch, three months before. And in that first scenario, attackers had the upper hand, had the momentum, for nine months, compared to six months for defenders. And when you flip it and you release the exploit code before the patch, attackers now own 12 months out of those 15, which is a big difference. And defenders only have three. So, now you've really flipped the tables.
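Wade's eyeballed arithmetic can be laid out explicitly. The month counts below are the rough numbers from his description of the two figure-22 scenarios, so this is a back-of-the-envelope check, not the report's precise data.

```python
# 15-month observation window: three months before the patch, twelve after.
WINDOW = 3 + 12

# Months each side held the momentum, per the eyeballed read of figure 22:
scenarios = {
    "patch before exploit code": {"attackers": 9, "defenders": 6},
    "exploit code before patch": {"attackers": 12, "defenders": 3},
}

for name, months in scenarios.items():
    assert sum(months.values()) == WINDOW  # the two shares cover the window
    share = months["attackers"] / WINDOW
    print(f"{name}: attackers hold momentum {share:.0%} of the window")
```

So early exploit code moves attackers from holding momentum 60% of the window to 80% of it, which is the "flipped the tables" point.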
Dan Mellinger: Your probability on the defender side is not in your favor for a majority of the time.
Dr. Wade Baker: Right.
Dan Mellinger: Fun. Well, I know that's kind of the big takeaway here. I mean, again, we can't draw causation per se, but I think the data shows that, in some way, shape, or form, it really puts an impetus on a patch being available before, ideally, people know how to take advantage of a vulnerability, which I think just makes sense logically. We'll try to vet out whether or not that actually maps out from a more scientific standpoint later on, I'm sure. But at least as far as the data's showing us right now, what would be your takeaway, Ed? Your advice to the industry based off of this?
Ed Bellis: Please make the patch available as soon as possible. That's probably the number one learning here, obviously. Yeah, I mean, the three different questions that we lay out at the end of volume six, I think, are something that we need to dive into. And keep in mind that this is not a debate about vulnerability disclosure per se, because we're not talking about disclosing the vulnerability, we're talking about exploit code actually being available prior to a patch, which is very different. And there are great uses for that exploit code, we already discussed some of them. Signatures for IPS, or how signatures for even vulnerability scanners get developed sometimes, although we see great coordination there around patch and CVE publish dates, and all of that, earlier on. I think we've got a lot of things to dive into here, but I 100% agree that there's a lot of data here to show that it becomes much more difficult on average for the defender if that exploit code is available for a long time before the patch is.
Dan Mellinger: Yeah. Wade, any thoughts on this?
Dr. Wade Baker: I would love this to start a positive, friendly, and helpful conversation. We've struggled with disclosure for a number of years.
Dan Mellinger: I want it to be violent and dogmatic, personally. I think that would be fun to watch on Twitter. Anyway, sorry, continue.
Dr. Wade Baker: We've talked about disclosure for a number of years, and, again, my takeaway here is that it's predominantly working. For the most part, patches are available when the CVE publishes, and all of that kind of stuff happens as you would want to see in coordinated disclosure. But the same kinds of discussions that we had leading up to that, where I think we're in a decent place overall right now, maybe there are some things we need to talk about for what happens after, in the exploitation phase. Like, is there a "responsible disclosure" for exploit code? Is that a thing? Do we need to make it a thing? Recognizing most of the disclosure debate is about vendors, and when they release patches, getting vendors to own that there's a vulnerability, pushing them toward patching. And, again, we're at a much better place now than we were a long time ago. But I think we need to think about remediation a little bit more. Now, okay, the patch is available, that didn't seem to be the biggest problem to me. The biggest problem seems to be cutting that remediation cycle down as much as we can, to where systems get fixed. And that's not the vendors doing that, although the vendors can help defenders accomplish it, but that's after the patch, after the CVE is published. What do we need to talk about and do in that window of time?
Ed Bellis: You've got to look at the entirety of it, right? It's not just, "Oh, the patch is published, now we don't have to worry about anything." Because, as you mentioned, Wade, there's this huge time difference, and getting to 80% or 90% remediation can be months for these organizations.
Dan Mellinger: And someone's dog really agrees with Wade.
Ed Bellis: Yeah, that would be mine.
Dan Mellinger: Well, to me, with my corporate comms kind of security hat on, I think I can unequivocally say, "Hey, patches should be available, otherwise, what are you going to do? You discover a vuln." To me, overall, I think it shifts a little bit of the impetus. If you're a vuln researcher, you're finding new exploits, new vulnerabilities, you're doing bug bounty programs, right? Maybe you have to demonstrate that you can exploit this vulnerability to get paid. It shifts the responsibility a little more squarely onto those people who are discovering the exploit code, whether they're good or bad, for whatever reason, to try to work with vendors to get patches available. So, maybe we're saying, "Hey, discovery's nice, writing exploits is nice," but maybe shift a little bit of your talents and help with some of the remediation, some of the patching, getting that out to the world, because there's clearly, at least from a data standpoint, a reason to do so. People are more successful when the patch is available first, widely. So, that's my takeaway, at least, from this one. But I'm biased as hell and I know it, so, I'll call it out there.
Ed Bellis: And Dan Mellinger's Twitter handle is...
Dan Mellinger: Yeah, God, I'm going to lose all 200 of my followers. It'll be terrible. Ed, Wade, I know we're rounding out the top of the hour, any other things you'd like to close with before we hop off here?
Dr. Wade Baker: No, we've got a bunch of things we'd like to look at in the future, but if this sparks any ideas or... we have an open call at the end of this one, to say, " Hey, we have these three hypotheses of what could be going on here, we need this data in order to answer this, who wants to play?" And I'll just throw that out there to any listeners that have data that can help us answer these questions, let us know.
Ed Bellis: I always love ending a podcast with an open call for data.
Dan Mellinger: Always. So, if you agree, disagree, or have some ideas to help us improve these reports, you're going to want to DM Ed Bellis, I think that's @ebellis on Twitter.
Ed Bellis: Yeah, go ahead. Sure.
Dan Mellinger: No, but seriously, do so, hit us up. You can find all of us on Twitter. You can hit us up on the Kenna Security page, Cyentia, Jay Jacobs, I'm sure he loves getting DMed all the time. Hit up any one of us and we're always looking for new interpretations, new data, all that good stuff. And I think this report, more than most, should have some cool input from the community. So, Ed, Wade, thanks, guys, appreciate you hopping on today. And, everyone, take it easy this Wednesday. Bye now.