Vulnerability Disclosure and Responsible Exposure
We discuss and add some quantifiable data to a hot-button issue in the cybersecurity industry: responsible disclosure of vulnerabilities and exploits.
Dan: Today on Security Science, vulnerability disclosure and responsible exposure. Hello and thanks for joining us. I'm Dan Mellinger, and today we're discussing a hot-button issue in the cybersecurity industry: responsible disclosure of vulnerabilities and exploits. So this should be a good one. With me, as the holy spirit of risk-based vulnerability management, Kenna Security co-founder and CTO, Ed Bellis. What's up, Ed?
Ed: That's a really nice one, Dan. I didn't even have to modify it.
Dan: I know, I know. I thought that was a fun one. I just watched a South Park episode, so anyone who knows that tie-in, it was a good two-part series. I do want to note that we're digging in here. This is based off part of the Prioritization to Prediction volume six research, and specifically a blog that you drafted, Ed, called Responsible Exposure and What It Means for the Industry. So overall, there's been this hot-button issue: what is responsible disclosure? How should people work with vendors, software makers, and researchers to disclose vulnerabilities publicly? There's always been this undercurrent that publishing vulnerabilities and/or proof-of-concept exploits makes people in general less safe, but on the other side, it may also be necessary to get some vendors or software makers to patch stuff. So we want to dig in, because as far as we know, this is the first time there's been any quantification of the actual ramifications of publishing before or after exploitation occurs. So we're going to dig into this one part of it. I will link to the blog and the P2P report, so if you want to go do some digging into all the fun charts, you can follow along.
Ed: Yeah, absolutely. And just to clarify, there's disclosure of vulnerability and there's also disclosure of exploit. And while we looked at some of the evidence of vulnerability disclosure as part of this, we're really focusing a lot on the disclosure of exploit here too.
Dan: Yeah. And just to clarify for the audience, because that's a really good point: a lot of people confuse the two. I know I do from time to time. The vulnerability is the actual flaw in software. The exploit is how you make use of that flaw. And so there's a pretty big difference between when a vulnerability is disclosed, like a CVE, where people know that, hey, a flaw exists, versus an exploit, which makes people easily able to take advantage of it. So just because a CVE exists, as we know, doesn't necessarily mean it is exploitable or should even be worried about. In some cases it should be the first thing you think about that morning. But the exploit is really one of those go moments. If there's an exploit available and people know about it, typically you want to concern yourself with that one.
Ed: Yep. Yep. And we've seen evidence of this in some of our other P2P research, and even just from monitoring the data that's fed into our platform: once an exploit, a weaponized exploit of some sort or a POC, is disclosed, that is very much a leading indicator of it actually being used in the wild, as you would expect.
Dan: Absolutely. Well, and Michael and the [inaudible], the EPSS, the exploit prediction scoring system. That's the whole point: to look at whether POC code exists, because it's a highly correlated factor that there will likely be some kind of activity on that vulnerability in the next 12 months.
Ed: Who knew that if you published it, they would use it?
Dan: Weird, right? Well, while we're talking about publishing, Ed, do you mind going over the typical responsible disclosure process? How does that typically function? And in general, what we found in P2P volume six is that it actually works correctly most of the time.
Ed: Yeah. In fact, it's remarkable, the evidence that came out of that. The vast majority of the time, there's very clear evidence of, and I'll stay away from the hot-button word of responsible, a coordinated disclosure that happens, where the researcher who discovered the flaw, the vendor who is affected by it, the vulnerability assessment vendors, and the IDS/IPS vendors all seem to be working together and to know about this before some sort of public publication. We see a lot of evidence around that. So, the typical process you ultimately see, and I'll just throw out an example: a researcher finds a vulnerability, maybe with a large vendor that everybody uses, which likely has some sort of vulnerability disclosure program (VDP) or process, or a bug bounty or something like that, so there's a method for the researcher to communicate with the vendor's team to disclose what they found, with some back and forth and questioning around how it gets used and that sort of thing. Ultimately the vendor will take that in. And then, if everything is working right, they get it prioritized and start working on it. As they get closer to a fix or a patch, as is usually the case, they're probably reaching out and coordinating with those other vendors we talked about, the vulnerability assessment vendors, the IDS/IPS vendors, et cetera, so that those vendors are prepared to create the signatures that are needed to identify the flaw, to block the exploit, and all of that sort of thing.
Dan: Got it. And just so everyone's clear on the IDS and IPS vendors: how companies basically find these kinds of vulnerabilities is they go scan, and to even scan for and find these vulnerabilities in your environment, you need some kind of signature to look for. That typically needs to be developed by security vendors, and they are not aware unless the software vendor itself pushes out, "Hey, there's this known vulnerability, here's a CVE, here are the details."
Ed: Yep. And then obviously some vendors are better than others at this, both on the kind of patching and coordinated disclosure side, but then also on the assessment and defense side as well.
Dan: Gotcha. So a researcher, or someone, figures out there's a vulnerability, and sometimes they'll develop a proof-of-concept exploit to notify the software maker. So ideally, the next step is to go let the software maker know: "Hey, I found this hole. It's a pretty big deal. Remote code execution, buffer overflow combined. We can do a lot of bad stuff with this from my office here." The software maker's like, "Oh man, that's a good one. We're going to give you money. Please don't tell anyone yet. Let's go work on a patch." And then once they get the patch developed, they'll go let all the other security vendors know, so people can identify the vulnerability and then apply the patch. Ideally all of this is happening behind closed doors until they get the patch and everything available. Then they make it public, the researcher can go live, they get a ton of internet points, and everyone's happy.
Ed: And maybe some cash, if it's a good enough bounty.
Dan: Yeah, absolutely. Cool. I did want to note why this is a hot button: there are some caveats to this process. So what happens if the vulnerability details leak before a patch can get out there? Another piece: what if these evil software makers don't care? They don't respond. They're like, "This is a non-issue," or worst case, they just don't say anything. What do you do then? And then another caveat: what if researchers are actually seeing evidence that a bad guy already found this and is using it actively across the internet before the software vendor really knows what's going on? Those are some of the caveats to this process.
Ed: Yeah. And I would add that it can get really complicated, and all those scenarios you mentioned probably call for different reactions. Probably the biggest problem is a software vendor who is literally just ignoring you and not doing anything. I can understand that that's going to be really frustrating for the researcher, but it's also going to be really frustrating for all those people using that software to find out, "Hey, I'm vulnerable to this. Turns out, maybe it's something that I care about, but you don't." And that's a difficult problem. Now, for something being exploited in the wild already, there are some ways to coordinate around that. Certainly Microsoft and others do a pretty good job about alerting people to that, patching out of band, escalating instances like that, but not everybody's that good about it.
Dan: Yeah, absolutely. Well, and I think what's interesting about all this is there's this inherent tension. If I were a researcher and I found this in, say, a Microsoft product, because let's face it, they have the highest volume overall, and I knew it was pretty bad, it could be uncomfortable just to have that sitting out there and know about it for a period of time. I think Google's Project Zero set a pretty rigid 90-day deadline: respond within this timeframe or we go public with it.
Ed: Yeah. Well, it's not even... I think they do something around the fix as well. You have to have a patch within a certain timeframe, not just respond.
Dan: Yeah. We'll link to Project Zero's timelines as well. But in general, from the time they notify a company about a flaw, a vulnerability, they give them 90 days. And the only way to extend that is if there will be a patch available within, I think, 14 days past those 90 days. So the software vendor's like, "Oh, my 90 days came up, and we're going to get a patch out soon, in two weeks." Two business weeks. They'll extend it that way. And that's ultimately where the debate lies, why this is a hot button. Is that good? If a software vendor doesn't ultimately respond, is it better that the world knows the vulnerability exists, or is security through obscurity a viable tactic?
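The disclosure clock Dan describes is simple to sketch. This is a minimal illustration of a 90-day deadline plus a 14-day grace period; the notification date here is hypothetical, and this is a paraphrase of the policy as discussed above, not Project Zero's official tooling.

```python
from datetime import date, timedelta

# Hypothetical date a vendor was notified of a flaw.
notified = date(2019, 1, 10)

# 90 days from notification, the details go public...
deadline = notified + timedelta(days=90)

# ...unless a patch is imminent, which buys a 14-day grace period.
grace_deadline = deadline + timedelta(days=14)

print("disclosure deadline:", deadline)        # 90 days out
print("with grace period:  ", grace_deadline)  # only if a patch is imminent
```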
Ed: And I'll answer this with a big, giant gray "it depends." To me, if they're not responding at all, that's one thing. If they are responding and, to your point, "Oh, we'll give you an extra 14 days": well, what if it takes 17 or 18 or 20 days? To me, I'd much rather it not be disclosed, even if it takes 17 or 18 days or a month, versus being disclosed. And then you hear a lot of arguments like, "Yeah, but it's better for people to know that they're vulnerable to this so they can do something to protect themselves, even if it's not a patch." I think that's great for the security one-percenters. But 99% of the companies out there don't have the resources to, one, figure out what they could do, and then, two, actually employ whatever that non-patch mitigation is.
Dan: Yeah. Yeah, that requires hyper-proactive compensating controls, shutting off ports, all sorts of workarounds.
Ed: The majority of companies in the world don't even have a security team. So you're going to tell me that they're going to put in a bunch of compensating or mitigating controls around a vulnerability that they can barely understand?
Dan: Probably not would be my guess. I mean, it's super interesting, and that's why we get into the whole, I guess, "responsibility," air quotes. What is the responsible thing to do? And we're going to dig in a little bit, because we actually have some data that shows at least the outcomes. I don't think we're making a call on ethics or whatnot, but we have a data set. We have evidence, from a quantifiable standpoint, that there are actual repercussions when it comes to timelines of exploitation. So let's dig into it. Is there such a thing as responsible disclosure? How does the sequence of disclosure events impact overall security? And what are the consequences of releasing vulnerability details ahead of mitigation? Those are the three things we're looking at. So I'll go back and call out that we did this work in P2P [inaudible] volume six. We looked at 473 vulnerabilities from 2019. We did that because we were finally able to see evidence of exploitation for all 473 of those, and we were able to look at them over a 15-month period. So we could use hindsight and really nail down this very specific sample. From there we looked at the life cycle, and there are a couple of key events. One: the vulnerability is discovered, which we typically never know about; there's not a good way to identify that. Then a CVE is reserved, and that happens first pretty much a hundred percent of the time, because CVEs are reserved well in advance. They may not even-
Ed: Sometimes in bulk.
Dan: In bulk. Microsoft, as a CNA, has a direct line: "We can just reserve these at will." So it typically happens first in almost every single sequence, because it's not addressing anything specific at the time it's reserved. It's just sitting there waiting to be filled. From there, we get the CVE published, and that's actually a big deal. That's typically go mode.
Ed: That's where the majority of the details of the vulnerability are actually disclosed out to the world at that point.
Dan: And even if there isn't an exploit, people know that there is a way in, and they could start proactively trying to develop an exploit if one isn't already public, right?
Ed: For sure. You see it all the time... I mean, there's a reason why they call it Patch Tuesday and Exploit Wednesday. Vendors go out and patch, or disclose all those vulnerabilities, on Patch Tuesday. And then there's a whole series of people out there trying to reverse-engineer those patches and figure out how to exploit those vulnerabilities.
Dan: Yep. And then from there, the next big milestone, and this is a good one: when a patch is released. So hopefully that's happening roughly around the same time the CVE's published, which, when we did look back, happens the vast majority of the time, which is great. Then we have what you were talking about, IDS/IPS: when these vulnerabilities are first detected in a company environment. So now it's not theory. There's a signature, a vulnerability scanner found this thing, and it exists in a business. That's the next big one. That's when you're like, "Oh, this applies to me now. Now I need to figure out what I should do about it." And then the last and worst part is when it's exploited in the wild, which we do have stats on for all 473 of these vulnerabilities.
Ed: Yep. Yep. And while not part of the vulnerability life cycle per se, there's also something that shifts all of these things around, which we'll talk about later. Usually a precursor, not always, but usually a precursor to the first evidence of exploitation in the wild is some sort of publicized or published exploit. Much like a published vulnerability, someone actually publishes the details of how to exploit this vulnerability.
Dan: Absolutely. And that can actually be harder to nail down as well, because sometimes... We did an episode with Jay Jacobs. Sometimes the exploits are published to GitHub and people are just finding the stuff.
Ed: A lot of times they're published to GitHub, now more than ever.
Dan: So a lot of the time, it's hard to nail down when an exploit is actually developed, per se, because it could be on the dark web. Typically these things are found in marketplaces. So that's a harder one to nail down definitively from a data set perspective, just because they can be anywhere, including places that are harder to track down. But what's interesting is when you look at the top 10 sequences. There are hundreds of these permutations, but out of the 473, we're looking at what the sequence of events was. What happened? And 16% of the time, 15.9% to be exact: the CVE is reserved, which is kind of a given, that's always going to be first; a patch is available, which is awesome; it's seen by vuln scanners, so a patch is available, you scanned and found it; and then the CVE's published. That means disclosure is working very well.
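The sequence analysis Dan describes amounts to ordering each vulnerability's observed lifecycle events by date and counting how often each ordering occurs. Here's a rough sketch with two made-up CVE records; the event names, dates, and CVE IDs are illustrative, not Kenna's actual pipeline or data.

```python
from collections import Counter
from datetime import date

# Hypothetical per-CVE lifecycle event dates. In the real study these
# would come from CVE records, scanner telemetry, and attack evidence.
cves = {
    "CVE-2019-0001": {
        "reserved": date(2019, 1, 2),
        "patched": date(2019, 3, 1),
        "scanner_detect": date(2019, 3, 2),
        "published": date(2019, 3, 3),
        "exploited": date(2019, 5, 1),
    },
    "CVE-2019-0002": {
        "reserved": date(2019, 1, 5),
        "exploited": date(2019, 2, 1),   # exploited before disclosure
        "published": date(2019, 2, 10),
        "patched": date(2019, 2, 20),
        "scanner_detect": date(2019, 2, 21),
    },
}

def sequence(events):
    """Return the observed lifecycle events ordered by date."""
    dated = [(d, name) for name, d in events.items() if d is not None]
    return tuple(name for d, name in sorted(dated))

# Tally how often each ordering occurs, like the report's top-10 list.
counts = Counter(sequence(ev) for ev in cves.values())
for seq, n in counts.most_common():
    print(f"{n / len(cves):5.1%}  {' -> '.join(seq)}")
```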
Ed: That is a highly coordinated disclosure when it goes in that order.
Dan: Yeah. And when we're looking at these, all of these events are typically happening within a two-day period. So it's all happening, boom, boom, boom. Things are working as they should. We would call this, I guess, responsible disclosure. Coordinated, there we go. CYA. But companies are getting patches, they're getting published, the security vendors are able to update their signatures and their scanners, so companies can go find this stuff and apply those patches. And then the CVE's published, and now it's kind of a race to go patch, ultimately, is how that goes. And then we see our first exploitation in the wild soon to follow. I will say that out of all these permutations, you go down to number 10, and 2.3% of the time, at least out of this data set in 2019, it goes from CVE reserved straight to exploited in the wild. And there are some ramifications to this piece. So ultimately, this is the big takeaway of the P2P V6 report: when exploit code is released after the patch, defenders typically have momentum. They're patching all instances of a specific vulnerability faster than attackers can really exploit it, and they hold that advantage for roughly eight months. So the speed to remediate is winning. What's interesting, and a first of its kind as far as we know, and no one's come forward yet to say they've done this research before us... Ed, what happens when exploit code is released before a patch is available?
Ed: Yeah. To probably nobody's surprise, it was bad for defenders. And to clarify this a bit: when we're measuring defenders, we're looking at their velocity of remediation. So a patch comes out, they start scanning for that vulnerability, finding it in their environment, pushing out that patch. We basically looked at how long it took them to get to 100%, 100% meaning of all the vulnerabilities found, 100% are patched. If you're looking at P2P V6, it's almost the reverse of a survival curve in that sense. It jumps way up: we see a bunch of people start identifying vulnerabilities and then remediating them pretty quickly out of the gate. We also measured the attackers. In that case, what we're measuring is how many times we saw evidence of exploitation events in the wild, and how long it took the attackers to get to 100% of those attacks within the 15-month window we were looking at. And to your point, when an exploit was published prior to the patch, we saw on average a 47-day shift to the left for attackers, meaning they had effectively a 47-day head start where their velocity was faster than the defenders'. Of course, that's going to be kind of a no-brainer for a while, because I don't have a patch yet, so I can't fix anything, and the attackers have already started; they've got this exploit they can start to use. But it was actually a significant jump for the attack side of the house.
Dan: Yeah. That's really interesting. And to recap what Ed was saying: using hindsight, that's why we limited the range to 15 months, and that's why we limited it to 2019, because then we had the full body of attack and exploit events. So for individual attacks, we know how many times malicious actors, bad actors, hackers, whatever, actually engaged in an attack; here's the total volume. We treated that as the population, zero to 100%, and we could see the timeline it took them to execute all 100% of those attacks. In the same vein, we could also see every single company that scanned and identified that they had the same exact vulnerabilities, and how fast they were able to get to a hundred percent patch rate over the 15-month period. And so, Ed, to your point, it's super interesting: when the exploit code is released after the patch, the second the patch is available, businesses go ham. I think within the first month they're up to almost a 50% patch rate.
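The velocity comparison Dan and Ed describe can be sketched as two cumulative curves, attacks completed versus patches completed, measured against the same clock and compared day by day. The day offsets below are made up for illustration; this is not the actual P2P volume six data or methodology.

```python
import numpy as np

# Hypothetical day offsets (relative to patch release) of individual
# events. Negative attack days model an exploit circulating pre-patch.
attack_days = np.array([-30, -20, -10, 5, 15, 40, 90, 200, 300, 400])
patch_days = np.array([1, 2, 5, 10, 20, 30, 60, 120, 250, 450])

def cumulative_pct(day_offsets, horizon):
    """Fraction of all observed events completed by each day 0..horizon-1."""
    days = np.arange(horizon)
    done = np.searchsorted(np.sort(day_offsets), days, side="right")
    return done / len(day_offsets)

horizon = 455  # roughly the 15-month observation window, in days
attackers = cumulative_pct(attack_days, horizon)
defenders = cumulative_pct(patch_days, horizon)

# Count the days on which the attackers' cumulative share of attacks
# exceeds the defenders' cumulative share of patches ("momentum").
attacker_ahead = int(np.sum(attackers > defenders))
print(f"attackers ahead on {attacker_ahead} of {horizon} days")
```

Shifting the attack offsets left by some number of days (the report's 47-day figure) and recounting shows how a pre-patch exploit release extends the attacker-advantage window.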
Ed: They patched faster than I even realized, actually. And to be fair, we're looking at a subset of all the vulnerabilities from 2019, the ones that had exploits associated with them. So theoretically, you're talking about the high-risk ones of 2019. But organizations, we saw, were on average patching faster than attackers were attacking. And it gets really interesting when you overlay the two graphs to talk about momentum, but I'm probably stealing some thunder here.
Dan: So one caveat: this is our data set, so it's weighted toward the patch data and all that good stuff. Ideally our customers are doing this stuff, because that's kind of what we do.
Ed: A little bit of bias.
Dan: Yeah. A little bit of bias in the data, so I do want to call that out. But ultimately it's a good outlook, because out of 2019, these are the vulnerabilities people should have cared about, and they got out of the gate strong and fast, patching these things at a super high velocity. And overall, from a momentum standpoint, they had the upper hand, so to speak, for eight months from the patch-available date. They were patching at a faster velocity than hackers were exploiting these things.
Ed: Even above and beyond that. For eight months, they were patching faster than the attackers were attacking, and I would say significantly faster. And in the times when they were outpaced or lost that momentum, it was still incredibly close. The volume of attacks versus the volume of patching was very close, with only a slight edge for the attacker at that time.
Dan: Yeah. And this is the whole long-tail-of-remediation problem, and we found the same thing with exploitation. It's really hard to get a hundred percent patch rate for anything overall.
Ed: Neither one of those do we ever see. Nobody ever gets to zero for a given vulnerability, and the attackers never quite stop attacking that vulnerability.
Dan: Absolutely. Obviously because it works, or it's easier to execute, or a combination of factors, there's this long tail. So you do want to try to be the group that gets to a hundred percent patched, but that's super, super hard, nearly impossible in some cases, and the long tail will likely never go away. But I mean, the big shocking moment was just how much of an advantage attackers got when exploit code was released before the patch was available. And I think that's interesting because when we talk about responsible and/or coordinated disclosure, we typically think about just the CVE. But when we dig down, the patch is really the thing you can do something about as a company, and the exploit is the thing you can do something about as a hacker. So looking at those two key milestones just seems like a different take. Anyway, Ed.
Ed: Yeah. Not only was it a shift left, if you will, for the attacker when the exploit was published before the patch, but then they maintained that advantage. Like we talked about before, I think you said the defender had eight months of advantage in the other case, and even when they didn't, it was super close. That is not the case when the exploit gets published before the patch. It's almost all advantage to the attacker. There was maybe a month and a half, two months of defender advantage, and even then it was barely a defender advantage. Then the attackers took over, and they had a significant advantage in terms of the rate and velocity of exploitation versus the rate of patching.
Dan: Yeah. And what's also interesting, and I don't think we've figured this out yet, we might be looking into it in the next report: with an exploit available before the patch, patch rates for companies were significantly slower. They get off to a strong start as well, but then that long tail becomes a whole lot thicker.
Ed: Yeah. For sure.
Dan: On the other side.
Ed: And I would say, to your point, we are actually digging into that right now; hopefully we'll answer it in volume seven, we'll find out. And there are some other caveats, beyond bias in the data set, that we should talk about. We talked about the number of vulnerabilities we're actually looking at, which was the 400-and-some-odd exploited vulnerabilities, versus the, I don't know how many in 2019 off the top of my head, maybe 18,000-ish-
Dan: Yeah. I think it was 18,000.
Ed: -CVEs that were published that year. So it's a much smaller subset of the overall data set. And then we're narrowing that down even further, saying, "Okay, of those that were exploited in the wild, which ones had an exploit published before a patch?" which is even significantly smaller. So you're dealing with a very small number of vulnerabilities here. And we didn't dig into what those vulnerabilities were. Were these harder to patch? Were they things you had to compile, affecting Java or something where you had to do a lot of regression testing? Or was it a straightforward push-button deploy through SCCM or something? We don't know.
Dan: Yeah, absolutely. Well, and there's another caveat: were we only detecting this stuff because signatures existed? That means the vulnerability was found, there was a disclosure process, and people could actually go look for it. So did these exist anyway, and people just didn't know about them, or the systems didn't know about them?
Ed: We'll dig into that too. Although I would say there's less evidence of that, only because we think that if that were the case, then once the signature was published, you would see a significant jump immediately: if this was already being exploited widely, suddenly people would deploy these signatures and detect those attacks. And we didn't really see that. So it's a theory we want to disprove, but at least the data points to that probably not being the case.
Dan: Got it. That makes a lot of sense. Well, ultimately all of this goes to show that, at a minimum, I don't think we can really state causation yet, but there is a correlation, an incredible link between remediation, patch availability, and exploit development or exploitation timelines. These factors share some kind of connective tissue; there's a quantifiable measurement showing they relate to one another. And that brings up the whole point: we can see that having exploit code published before a patch is available, whatever the intentions or situation, is ultimately a detriment for defenders.
Ed: Yeah. Yeah. I mean, one of the questions we raised at the end is whether publishing the exploit at least enables the IPS/IDS vendors, and folks like that, to develop a signature so that you could block that attack. In theory, that's certainly possible. Although if you're doing coordinated disclosure with the software vendor, you would think you could do coordinated disclosure with those vendors as well. But I'll leave that for other people to argue.
Dan: We can go talk about that at some other point, maybe after we've validated some data around it.
Ed: Yeah. Good point.
Dan: Yeah. I mean, ultimately there does seem to be a way to responsibly disclose. I think this puts the impetus on researchers and vulnerability discoverers to try pretty hard to get vendors to respond, to get software companies to release patches and signatures, because ultimately patches need to be developed for this stuff. Otherwise, to flip it on its head, if there's no patch available and someone finds an exploit, hopefully not a bad guy first, it still puts defenders on the back foot, because there's nothing they can do about it.
Ed: Yep, agreed. And again, we can go back to the beginning of our conversation, where people would argue, and this is more about disclosing the vulnerability, is that good for people? Can they at least start to defend themselves? We talked about how, well, yes, the security one-percenters can probably do that. It's even fewer than the security one-percenters when you're publishing the exploit.
Dan: Yep. Ah, that makes sense. Well, from my layman's perspective, and you know how I like to simplify all this stuff, because I'm a comms guy and don't actually have to do any of the work or solve any of these problems: ultimately, software vendors, if there are legit vulnerabilities in your software, take them seriously, make a patch, help your customers be safer. I know a lot of companies are taking that seriously. Microsoft is kind of the comeback-kid story of the century, but there's still such a high volume.
Ed: There's a volume of them.
Dan: Yeah. They're so far behind, it's hard. But then, on the other side, if you're a security researcher, don't go to Twitter first. Make a best-faith effort, work with these software companies. I know it can be frustrating, but try to be heard. I'm sure there are tons of Slack channels and things like that where you can leverage your contacts, or find another interested researcher who might know someone at the software company. Make a best effort to get them to take this stuff seriously as well. I think that's ultimately how everyone gets more secure long-term. We know that there's a correlation here.
Ed: Do the right thing, Dan.
Dan: Do the right thing. Awesome. Well, I think, Ed, do you have any last takes other than do the right thing?
Ed: It's hard to beat that, but-
Dan: I know, that was a good close.
Ed: It was actually, while not surprising, refreshing to see data back this up, because these types of arguments have been going on in security for as long as I've been in security, and very few people have actually backed anything up with data. So I'm really hoping, as we dig into those last questions for volume seven, that we can dispel some of that. But man, I highly encourage anybody to go read that report, because, like you said, it's not necessarily causation, but there's a 100% clear correlation there.
Dan: Yep. Absolutely. Well, we look forward to the next piece of research as well. And while we're talking about being in security for your entire lifetime: you can actually get (ISC)² credit for listening to this podcast. Go to kennasecurity.com/blog, you'll see this podcast up there, and there'll be a form. Fill out your (ISC)² email address and member code, and you will get credit for listening to this very podcast. So in the meantime, looking forward to chatting some more. But Ed, thank you very much.
Ed: Thanks, Dan.