A Brief History of Vulnerability Management

Episode 1  |  42:20 min  |  06.23.2020

Takeaway 1 | 02:26 MIN
The Dark Times Before CVE and Commercial Scanners
Takeaway 2 | 01:29 MIN
Scanning Signatures Before CVE
Takeaway 3 | 00:36 MIN
Defining Vulnerability Management
Takeaway 4 | 01:39 MIN
"Ignorance is Bliss" Stage of Vuln Management
Takeaway 5 | 02:04 MIN
Vulnerability Assessment and Long Lists of Vulns
Takeaway 6 | 03:47 MIN
Emergence of Pen Testing and Application Security
Takeaway 7 | 03:47 MIN
Scanning Accuracy and Confidence
Takeaway 8 | 03:12 MIN
Who is Responsible for Cloud Vuln Management?
Takeaway 9 | 03:43 MIN
The Benefits of CVE and CNAs
Takeaway 10 | 03:19 MIN
Generating Really, Really Long Lists
Takeaway 11 | 04:55 MIN
Processing Vulnerabilities In The Old Days
Takeaway 12 | 00:35 MIN
Beginning to Operationalize Vulnerability Management
Takeaway 13 | 00:30 MIN
Dumping PDF on IT's Desk
Takeaway 14 | 01:37 MIN
CVSS and Prioritizing Vulns
Takeaway 15 | 02:51 MIN
Vulnerability Management Top Performers Today
Takeaway 16 | 01:55 MIN
Closing Thoughts

In the very first episode of Security Science, Ed Bellis, the father of risk-based vulnerability management, walks us through a brief history of vulnerability management.

From the dark times before the CVE list and open-source scanners to the inclusion of application testing, complications from the cloud, and what today's best-performing vulnerability management programs are able to achieve. This is a fun walk back through vulnerability management memory lane.

Dan: Today on Security Science, we're going to be talking about vulnerability management and providing a brief history. Thank you for joining Security Science. My name is Dan Mellinger. We've also got Ed Bellis, the father of risk-based vulnerability management, today. We want to have a nice talk, a brief history of vulnerability management, how it started, where we're going, and how we got there over time. So, Ed, how are you doing today?

Ed: I'm good, Dan. Excited to kick this thing off.

Dan: Yeah. This is a really interesting topic, because I was looking into some of the history and there seems to be a ton of gaps. There doesn't seem to be a clear history of where we started and how we got there. I kicked off with the CVE list, MITRE, and their community effort to create a list of things we should really care about. It looks like the end of 1999, we had 194 entries. Now what, 20 years later, we're tracking right around 136,000. Just to kick things off there, how do you even start breaking that stuff down?

Ed: Well, I certainly miss those ignorance-is-bliss days of old. In fact, you could even predate it a little bit to some of the first vulnerability scanners that came out, even pre the existence of CVEs. The first one I can recall, and this is certainly going to age me a bit, is SATAN, which was an open-source tool for scanning for vulnerabilities. I think SAINT ultimately was based off of SATAN. What else was going on back then? Nessus, back when it was free and open source. These are all mid-to-late nineties. And then, as you said, MITRE came in at the end of the decade in '99 and started tracking vulnerabilities via CVEs. Since then, we've seen what I would call an almost astronomical volume of CVEs as of late.

Dan: Dang. Yeah. Well, just because something doesn't have a CVE doesn't mean that a vulnerability doesn't exist. That's actually really interesting that you're getting back into the SATAN and SAINT kind of open-source scanning, early Nessus. What were they looking for? How were they identifying something that was a risk to systems back then, before CVE even existed?

Ed: Yeah, I certainly don't recall the full list of signatures in something like SATAN, but, obviously, even today you see vulnerability scanners finding things that have nothing to do with a CVE, whether it's a vulnerability or some sort of common misconfiguration, or checking a system's hardening, things like that. You'll often see that as part of the output of any of these scanners as well. When MITRE did ultimately come out with CVEs and that common language to define and say, "This is this particular vulnerability," you were ultimately able to standardize that across scanners. That was a big win for practitioners, I think. In fact, there are a lot of various standards, CVE being one of them, that ultimately rolled up into SCAP, the Security Content Automation Protocol, which I think also included CPEs. CVSS was part of that, CWE, all of these different ways to dictionary and describe things more commonly across the industry.

Dan: A technical description of a vulnerability.

Ed: Yep.

Dan: Something that exists, how you could take advantage of it, all that good stuff. And while we're on definitions, it probably helps to define what vulnerability management means. I'll just take a quick stab at that. We're really talking about the process of, ideally, proactively identifying, tracking, prioritizing, and then going ahead and remediating security weaknesses or flaws in IT systems and software. The goal of that is to prevent malware outbreaks, data theft, all the scary stuff you read in the newspapers and see all over the tech trade pubs all the time. Ed, you come from a practitioner's background, right? Back in the day when you were starting this, I assume that it was an important goal to try to proactively prevent things from happening. Can you walk through some of the processes back then?

Ed: Sure. If you thought about it in stages, we talked really early on about the ignorance-is-bliss stage of, "I don't even know where my vulnerabilities are." Regardless of scanners, back then it was more that you hired somebody to come in and maybe do a vulnerability assessment, or a penetration test, or some sort of security assessment on your infrastructure or applications. But ultimately you got to a point where the industry started to evolve quite a bit. You started to see more commercial offerings from vulnerability scanners. Again, I would still say at this point, it's not vulnerability management, it's more vulnerability assessment, which is, where are my vulnerabilities? Can you at least tell me where they are? You saw the likes of the big three, Qualys, Tenable, and Rapid7, but there were others. ISS was around back then. There were a number of vulnerability scanners on the commercial side that came up, and you started to see more of that. But still, even then, you were probably, at best... You saw a lot of things like, "Oh, we did a quarterly assessment or a quarterly scan," or, "We hired a professional services team to come in." They might have done a pen test, and as part of that, they also did a vulnerability assessment or scan or something like that. But it was quarterly, maybe even annually. So even if you had a large set of infrastructure, effectively you then worked off of this list and said, "All right, we're going to try and work this list down before next year's assessment." So you've got all kinds of time.

Dan: That's interesting. So you start the process with an assessment, basically, whatever that is. A scanner would be some kind of software, right? I'm assuming that automatically goes around and scans the infrastructure, tries to find these things. But ultimately the goal is just the assessment piece, getting a list. Am I correct there?

Ed: That's right. And at best, what you were seeing in the early days as you... You had a list of vulnerabilities, what systems they affected, and maybe a high, medium, and low rating. I don't recall, actually, the year that CVSS came out, but people were primarily just using a high, medium, low, red, yellow, green, maybe a scanner assessment score of some sort, but it was really kind of rudimentary.

Dan: I mean, what year roughly, would you say?

Ed: I mean, at this point you're in the single-digit 2000s, like 2005-ish or something. You started to see a lot more of the commercial folks make a go of it.

Dan: How long were those lists? Just looking at the number of reserved CVE mentions, back in the early 2000s you're talking about less than 5K a year. Compare that to now, we're at 18K. How big were these lists, even though we were only covering a quarter of the vulnerabilities we are today?

Ed: Yeah. Yeah. Certainly much less to deal with back then than now. That said, we're also talking about the difference between a definition of a vulnerability and an instance of a vulnerability. So if I had that one CVE on 5,000 different systems, effectively, I have 5,000 vulnerabilities, not one. So I still have to deal with this list. And frankly, the patch management tooling back then was much more difficult too. So if you actually did decide to do remediation... By the way, I would say, at this point in time in the early 2000s, the remediation piece was pretty rare. There was a lot of vulnerability assessment. There was not much in the way of vulnerability management.
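
Ed's distinction between a vulnerability definition and a vulnerability instance can be sketched in a few lines of Python. The CVE ID and host names here are placeholders chosen for illustration:

```python
# A CVE is a *definition*; each (CVE, asset) pair is an *instance*
# that a remediation team actually has to deal with.
def expand_instances(cve_to_assets):
    return [(cve, asset)
            for cve, assets in cve_to_assets.items()
            for asset in assets]

# One definition observed on 5,000 systems -> 5,000 instances.
instances = expand_instances({"CVE-2017-0144": [f"host-{i}" for i in range(5000)]})
```

This is why CVE counts understate the workload: remediation effort scales with instances, not definitions.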

Dan: I'm curious how many people would admit that back then. We're jumping now into more of the remediation or what do you do with these giant lists? I am curious, though, when we get back to the assessment side a little bit, how accurate were these scans? Did you use multiple sources? There was this open source movement that still exists today, but I don't think quite as powerful as the big three, like you mentioned. Just give us all a sense of, did it work? Were you catching most things? Did you have good confidence in what you were getting out of these lists?

Ed: No, no, and no. Part of that was just there's a limited number of signatures. Certainly there's a limited amount of awareness of the vulnerabilities. We have a lot more people trying to find vulnerabilities today than we did 15 years ago or so. Part of it was the users themselves, the lack of awareness. Hey, I would be willing to bet that the majority of all of the scanning that was going on at that timeframe was all unauthenticated scans, which meant that you were getting false positives and false negatives as part of that.

Dan: What's the difference between unauthenticated, authenticated, remote, internal, all that good stuff?

Ed: You have access to something, or you don't have access to something. An authenticated scan is: I have credentials on the machine that I am actually scanning. So it logs into the machine. It's able to see a lot more things. It's able to see what's installed, what's running on it, what processes are running, all of that sort of thing, that you wouldn't get from an unauthenticated scan, where you're just hitting it across the network and you're probably throwing some different types of requests at it to see what it returns. Some of it is fuzzy. Some of it is frankly a little bit more guesswork than a legitimate, "We know this is certainly a vulnerability." So there's a lot more of, "This might be a vulnerability on this machine," versus, "We know that this is a vulnerability on the machine."
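
A minimal sketch of why the two scan types differ in confidence. The version check, field names, and "known-bad" OpenSSH line are hypothetical, not any real scanner's logic:

```python
import re

VULNERABLE_MAJOR = "7."  # hypothetical "known-bad" OpenSSH line, for illustration only

# Unauthenticated scan: all we have is a service banner grabbed over
# the network, so the best we can report is a *potential* finding.
def finding_from_banner(host, banner):
    m = re.search(r"OpenSSH[_-](\d+\.\d+)", banner)
    if m and m.group(1).startswith(VULNERABLE_MAJOR):
        return {"host": host, "issue": "outdated OpenSSH",
                "confidence": "potential"}
    return None

# Authenticated scan: we logged in and read the installed-package list,
# so the same check produces a *confirmed* finding.
def finding_from_packages(host, installed):
    version = installed.get("openssh-server")
    if version and version.startswith(VULNERABLE_MAJOR):
        return {"host": host, "issue": "outdated OpenSSH",
                "confidence": "confirmed"}
    return None
```

The same check run against a banner yields a guess; run against ground truth from inside the box, it yields a fact. That gap is the false-positive problem Ed describes.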

Dan: Just the way it was described there, it seems like an authenticated scan would be the best route. Is there like pros and cons between doing both? Should you do both or should you pick one over the other or?

Ed: Yeah, there's definitely pros and cons. I would say there's a lot more pros for authenticated scan from a security or from a... How do you put it...

Dan: Confidence in the output, yeah.

Ed: Exactly. There are many reasons why you might not want to do an authenticated scan. Maybe you've got something where you're hitting it from the outside, you're hitting an external machine. There's some comfort level around getting somebody to allow you to authenticate into the machine in order to do that in its [inaudible]. And you can also think of it as the scanner itself becoming a security hole. If it has permissions to do something on that machine, and that scanner or that scanner machine is compromised, then theoretically you could compromise a whole lot of other machines as well.

Dan: Huh. Okay. Then on the unauthenticated side of things, what are some of the pros to that? I think the cons were kind of apparent to me. You can't get as much information, right?

Ed: Yeah.

Dan: Not as much confidence in the data you get out of it.

Ed: Or you might get just as much information, or frankly, sometimes more. But you could get some bad information, or at least certainly some fuzzy information. So it tends to, for the most part, not be as accurate. We see a lot more authenticated scans these days than we did back then. But back then, it was almost entirely, I would imagine, with the exception of some environments, certainly in the places where I worked, a lot of unauthenticated scanning going on. That even extended later on, as you started to see AppSec evolve; you had a lot of those same problems there too.

Dan: That's also interesting. If you're doing an assessment, sometimes that could be someone you hired to do a pen test who comes out with the list. Okay. So we're still on the authentication/understanding-your-footprint side of things. Would you say that's improved over the years?

Ed: It most definitely has. It's funny to watch the industry. We've been doing network security for a really long time, and infrastructure vulnerability scanning grew up around that. Then later on, we saw AppSec start to catch up and do more things there. You saw things evolve to where people started to feel more confident, I would say. Not totally confident, but more confident in their infrastructure scanning than on the AppSec side. Then there would be the positives and negatives of authenticated versus unauthenticated. Another way to think about it in the AppSec world is dynamic versus static. With static, you have access to the source code, so theoretically you should be able to find a lot more vulnerabilities. But tracing how that application works is a lot harder from the source code side. So you end up in a situation on the AppSec side where you might have more false negatives, or you miss things from a dynamic perspective, because it's more from the outside in, is one way to think about it.

Dan: Just so everyone is following, we're talking about application security. Dynamic testing, static testing, those approaches, basically, to [crosstalk]-

Ed: Ultimately, you can roll this all up into: how do I find vulnerabilities? How do I prioritize vulnerabilities? How do I remediate vulnerabilities? Whether it's on the operating system, on an off-the-shelf application that I've purchased and installed, or on an application that I wrote myself. You get a lot of the balancing trade-offs that you see in AppSec now. We saw a lot of that on the infrastructure side very early on, and you start to see a little bit more maturity there.

Dan: Got you. On the hardware and operating system side, scanning has gotten pretty mature, basically. We're saying we have pretty high confidence; we can see most things on the infrastructure, for the most part.

Ed: If you ever say that in front of a security audience, I guarantee you'll get some people that push back. But there is no perfect. I can't pick up certainly any automated scanner today and say, " I will find all of the vulnerabilities," or, " Every vulnerability I find will be real."

Dan: I think, yeah. That's one of the laws of cybersecurity: there's no certainty.

Ed: Yeah. That's certainly true.

Dan: Certainty does not exist. Essentially, you're starting to see some similar market trends and/or maturity of tooling on the application security side, as you saw back in the day on the infrastructure scanning side.

Ed: In some ways it's completely different; in some ways it's like, "Uh-huh, yeah, I remember this five, six years ago on the infrastructure side, and it's now a problem here."

Dan: Well, it's funny too that it's moving so fast, that five or six years ago things are almost completely changed, honestly, in a lot of ways.

Ed: Geez, when you start to think about the cloud, and then you start to think about the way people work with things like Ansible, and Chef, and Puppet, or Terraform, or things like that, it's one and the same. Infrastructure, application, it's all melding together.

Dan: How does that work from a VM standpoint? Let's go with Amazon, AWS.

Ed: Yeah. I mean, there are a million different ways that you can assess your vulnerabilities right now. You can do traditional scanning. Obviously, if you're on Amazon, you're not going to provision any sort of appliance there to do it. But whether that's "I'm going to drop an agent or something on an EC2 instance and do an authenticated scan via my agents," or "I'm going to use some of the native tooling that Amazon or GCP or Azure offer today in terms of assessing my vulnerabilities." I've seen some new tools coming out that are more like side-channel techniques, where they're plugging into APIs within AWS, as an example, to assess not only vulnerabilities but, oftentimes, what you'll see is more on the misconfiguration side, or understanding the hardening, or lack thereof, within your cloud environment as well.

Dan: When we start to talk about essentially leasing infrastructure from another company, where does the responsibility lie? In terms of like scanning and the vulnerability assessment side and AWS, Google, Microsoft, they provide some tooling to do that today, basically. Where does that split of responsibility go? Is it still on the end user?

Ed: Yeah, it depends on, when you say cloud, what you mean by cloud. If you're talking about infrastructure as a service, that's going to be a shared responsibility model, where typically the users are responsible for, you can think of it as, the operating system on up. Hardware, environmentals, all of those things, that's on Amazon, that's on Google, that's on Microsoft to provide. But if there is a vulnerability in the operating system that I'm running on an EC2 instance, that's my responsibility. Then you get a little bit higher up. You've got the platform-as-a-service guys, like Heroku or something like that, where there's a little bit more responsibility on Heroku and a little bit less responsibility on me, all the way up to SaaS, where it's almost entirely on the SaaS provider, with the exception that I might have to manage users and authentication and different things like that. Then basically everything else, certainly all the vulnerabilities and stuff like that, is the responsibility of the SaaS provider.
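
The split Ed walks through can be captured as a simple lookup. The layer names are illustrative, and the exact division varies by provider, so treat this as a caricature rather than any vendor's official matrix:

```python
# Illustrative shared-responsibility matrix: who owns patching at each
# layer under each cloud service model. Real provider documents are
# more granular, but the overall shape is the same.
RESPONSIBILITY = {
    "iaas": {"hardware": "provider", "os": "customer",
             "application": "customer", "users": "customer"},
    "paas": {"hardware": "provider", "os": "provider",
             "application": "customer", "users": "customer"},
    "saas": {"hardware": "provider", "os": "provider",
             "application": "provider", "users": "customer"},
}

def who_patches(model, layer):
    return RESPONSIBILITY[model][layer]
```

Note that "users" stays with the customer in every model, matching Ed's point that even SaaS leaves identity and access management on your plate.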

Dan: Got you. Then if you develop, say, a web app on Azure or whatnot, then you would probably be responsible for pen testing and finding all the vulns in basically your own cloud-based app or whatever you created, right?

Ed: Yeah. You certainly get into some really [inaudible] where you start to meld a lot of different applications together using APIs and whatnot. You might be responsible for a sub-piece of that. And then you're making all sorts of calls to different services that are certainly outside of your purview.

Dan: Some new dynamics in play. We're getting a little bit better at application security, scanning, testing, all that good stuff, but still relatively early days there. Infrastructure scans, we can be relatively confident that if you have one of the predominant solutions out there, you're going to get, I guess, good lists of things to handle. So now we've got a list of vulnerabilities. What's next, right?

Ed: Yep. Yep. As you pointed out early on, using MITRE as an example with CVEs, we've seen that list grow and expand over time. As you mentioned, I think, what did you say, 300-some CVEs in the first year or something like that?

Dan: Yeah. It launched with, I believe, 381 vulnerabilities, and then by the end of 1999 it was 894.

Ed: 894 total CVEs.

Dan: Wow.

Ed: Yeah. Yeah. And now what are we up to? 15, 16, 17,000 a year or something like that?

Dan: Yeah. If you look at reserved, it's right around 18K. Yeah. So that's through the end of 2019, so 20 years, 1999 to 2019. Went from 894 entries in '99 to 136,051 at the end of 2019.

Ed: Actually, from a security practitioner's point of view, for the most part, that is actually good news, not bad news. What I mean is, it doesn't mean that we're getting more vulnerable. It means that we are doing a better job of identifying and tracking these vulnerabilities. Over the last three years, MITRE expanded the number of CNAs, the numbering authorities that can actually create new CVEs. So there's a bunch of different software vendors out there now that can create their own CVEs, and I think that will continue and we'll actually see more of that. So it's not like those vulnerabilities didn't exist before, but now we're tracking them, we know about them, and we can create signatures for them. And we can start to understand remediation and patching and all of that.

Dan: We have a higher number of CVEs that are being issued because we're finding more. We're getting better at identifying that stuff. You said the CNA certificate and numbering authority?

Ed: That sounds right. Yes. I know it's CNA; I was trying to figure out what the C is for.

Dan: That essentially allows software vendors like Microsoft and things like that to submit their own CVEs without going through a longer process with the [crosstalk].

Ed: That's right. And we're seeing more and more of that. I mean, in some ways, things are getting a lot better because there's a lot more time and attention and resources being spent on security. And some things are just... The world is infinitely more complicated than it was 20 years ago. We were talking about all this stuff with cloud and containers and all of these things. Due to security, even folks like GitHub are now a CNA, so they can actually reserve their own CVEs. Think of all of the repos and stuff, and all of the software components, and all of the reused code that's out there that ultimately gets CVEs and gets attention, and then hopefully people are prompted to remediate those. That's a good thing.
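
For anyone who wants to poke at the CVE corpus directly, NVD publishes it over a public JSON API. The endpoint and the response shape sketched here are my understanding of the v2 API and should be checked against the current NVD documentation:

```python
# Sketch of working with NVD's JSON API (v2). The URL template and
# response structure below are assumptions based on the public docs.
NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0?cveId={cve_id}"

def cve_ids(nvd_response):
    """Extract CVE identifiers from an NVD v2 response body (a dict)."""
    return [item["cve"]["id"]
            for item in nvd_response.get("vulnerabilities", [])]

# A trimmed-down example response, for illustration only:
sample = {"vulnerabilities": [{"cve": {"id": "CVE-1999-0001"}}]}
```

Fetching the real payload is a plain HTTP GET against `NVD_URL`; the parsing step is the same either way.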

Dan: Yeah. That's an interesting perspective. On the one hand, we have a lot of stuff that the average CSO, and security, and IT have got to worry about. But on the other hand, we want to have visibility. We want to know that these exist so we can do something about them, basically, is what it comes down to, right?

Ed: Yeah, absolutely. It's a world where it's become very difficult to be a generalist anymore, because there are just so many different things. You see people going deep in specific areas, including within security. So it's not that my specialty is security; it's that my specialty is container security, or something like that. It's going deep in a particular area.

Dan: Complexity is ever increasing in today's world. And so, we've got, what, roughly 136,000 individual vulnerabilities that the world at large generally knows about. You just did a combination of authenticated and unauthenticated scans on your infrastructure, did a little bit of static and dynamic testing on some of your applications. You have a list. How big is that list? What does it look like?

Ed: The answer is, whatever it is, it's too big. It's more than you can deal with, and likely more than you should deal with. As a CSO, I've got a limited number of resources that I can use to go out and remediate, fix things, put in new controls, invest in detect-and-response technologies, whatever it is. There are limited resources. So how am I going to spend my time and my resources? I've got this list now; I need to prioritize that list. A lot of factors go into that, and a lot of this is somewhat what we do at Kenna, so we won't go too deep into it. But just think about: how likely is this vulnerability to be exploited, and if it gets exploited, what's the impact? What happens? What asset is that on? Is it damaging to my business? Is it a relative non-event, or somewhere in between?
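
Ed's two questions, how likely is exploitation and what is the impact on that asset, are the core of risk-based prioritization. A toy sketch follows; the field names, weights, and example findings are made up for illustration and are not any vendor's scoring model:

```python
# Toy risk-based ranking: likelihood of exploitation times the business
# criticality of the asset the vulnerability sits on. Real models weigh
# many more signals (exploit code availability, exposure, and so on).
def risk_score(vuln):
    return vuln["exploit_likelihood"] * vuln["asset_criticality"]

def prioritize(vulns):
    return sorted(vulns, key=risk_score, reverse=True)

findings = [
    {"cve": "CVE-A", "exploit_likelihood": 0.9, "asset_criticality": 2},   # internal dev box
    {"cve": "CVE-B", "exploit_likelihood": 0.2, "asset_criticality": 10},  # crown-jewel database
    {"cve": "CVE-C", "exploit_likelihood": 0.8, "asset_criticality": 9},   # internet-facing app
]
```

Even this crude product reorders the list: the likely-to-be-exploited bug on a critical, exposed asset outranks both the easy-to-exploit bug on a throwaway box and the severe-sounding bug nobody is attacking.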

Dan: When you get these lists as well, what are some of the characteristics of each line item? I've never seen a scan output in my life.

Ed: Congratulations?

Dan: So what does that look like? Yeah. Yeah. I'll count myself thankful. But what are some of the characteristics that pop out just for people like me who have no idea what these things look like?

Ed: Well, we'll talk about the 2000s, because that was my world of dealing with this on a regular basis, and some of the pain. At that point, it was typically a really large PDF report. And when I say really large, I mean usually hundreds of pages of vulnerabilities, lists and lists of vulnerabilities, some graph on there, probably a pie chart of high, medium, and low. Lo and behold, almost everything was a high or a medium; almost nothing was a low. It was three inches thick, so dealing with that was not fun for anybody, certainly not for security folks, and even less so for the practitioners responsible for patching and remediation. They couldn't consume something like that. Prior to that, I mean, if we go back to the '90s, they were pretty small. It was usually probably just a file, or it could have been... Typically, it was associated with an engagement. So I hired somebody to do a vulnerability assessment, or a pen test, or a combination thereof. It was probably also a PDF, but much thinner, with an executive summary and things like that. They might have identified the vulnerabilities and also identified some weaknesses in other areas, some manual assessment work that went into that as well. But you know what? Even if it was thick, typically I had three months to 12 months to deal with it, because I wasn't doing another one of these for a while.

Dan: Your annual security review.

Ed: Yeah. So if I had to go out and fix a hundred vulnerabilities over 12 months, that sounds like cake now.

Dan: Versus every month?

Ed: Yeah. Although, to fix those vulnerabilities, I probably didn't have much in the way of patch automation and things like that too. And frankly, where we've also seen a big difference in vulnerability management is patch management, and the reliability of those patches. Still, depending on the area of the stack, if you're doing something to, for example, Java or some of the Oracle stuff, where you've built a whole lot of stuff around it or compiled things against it, that can be pretty scary. Generally speaking, applying something like a Microsoft patch is routine for organizations. But back in the '90s and the 2000s, there were a whole lot of blue screens of death.

Dan: Oh yeah. I can imagine. Now Microsoft does it for us. We don't get a say in the matter.

Ed: Oh yeah, with the auto updates and Windows 10, for sure. Yeah.

Dan: Okay. That was the old list, which, if you printed it, would be three inches thick, which, I mean, honestly just scares me to death.

Ed: Awesomely enough, everybody printed them.

Dan: Sometimes printing's fun, old school. Save the trees though. Come on. Okay. What do those outputs look like today?

Ed: Today, they're going into a lot of other systems that are consuming them, for the most part. You typically don't do a point, scan, shoot, print a report. You see a lot more maturity around it. It's more of a continuous process. Maybe I'm scanning these systems every day, every week, whatever, and it's continuously feeding into some other system that's doing some sort of analysis on it and doing some sort of prioritization. I might have a separate piece of software that's specifically for reporting, or I might pipe things into a data warehouse. Things have become much more mature in that respect. So you're doing a lot more of the, "Hey, I'm mining all of this data, looking for the stuff that I think I need to remediate quickly." A lot of what's consuming this now are things like ticketing systems, and workflow, and different things like that. You might even put things into your SOAR to do some sort of auto-updating for something. There's a whole lot of different technology after the scan actually happens. It used to be point, scan, shoot, generate a big report, and then hopefully do something, most of the time doing very little.
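
The "scans continuously feeding other systems" flow Ed describes can be caricatured as a small pipeline. The ticket format, severity field, and threshold here are invented for illustration:

```python
# Caricature of a continuous VM pipeline: each scan's findings are
# deduplicated against what has already been ticketed, and only new,
# sufficiently severe items generate remediation tickets.
seen = set()            # (cve, host) pairs already ticketed
TICKET_THRESHOLD = 7.0  # arbitrary severity cutoff, for illustration

def process_scan(findings):
    tickets = []
    for f in findings:
        key = (f["cve"], f["host"])
        if key in seen or f["severity"] < TICKET_THRESHOLD:
            continue  # already tracked, or below the cutoff
        seen.add(key)
        tickets.append(f"Remediate {f['cve']} on {f['host']}")
    return tickets
```

Run weekly, a loop like this is what turns "generate a big report" into an operational feed: the second scan of an unchanged environment produces zero new tickets instead of a second three-inch PDF.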

Dan: Back in the day, it was a project. You'd do it once a year, maybe once a quarter, if you had the budget, team, time, all that good stuff. Nowadays, it's more of a continual process.

Ed: It's much more operationalized now than it used to be.

Dan: Okay. You've got that list, back in the early aughts. What do you do now? What's your process to figure out what to do with that three-inch-thick PDF?

Ed: Cross your fingers, pray, and hope. Honestly, I mean, it was a difficult time and very frustrating. I'll speak to my time at Orbitz. We had a team of people on the security team wading through this, because, as we talked about with authenticated and unauthenticated scans: how many of these were actually real? How many were false positives? The more false positives you put in front of somebody who's responsible for fixing them, the faster that frustration goes up, to the point where they just start to ignore you. If you throw five false positives their way, it's like, "I don't have time to weed through your report and decide if this is legit or not." So there's a whole lot of that that goes on. Now, I've whittled that list down to the stuff that we believe is real. Now, how many of these real things are actually important things? Because I know roughly how much patching and remediation work we can get done in, say, the next 30 days before my next scan. So which ones am I going to prioritize? Then you start to socialize that with the teams that are responsible for patch management, probably kicking it off in ticketing systems or putting in different tickets, things like that, and tracking the remediation workflow. Depending on your organization, how big it is, and how much process there is, there could be change management meetings and all sorts of different things that flow along there as well.

Dan: Is security parsing this giant three-inch-thick printed-out list?

Ed: I would say the most effective security orgs were parsing that out. I would also argue that there were probably a lot that were just dumping and running with those PDF reports. And that's where we definitely saw a lot of lack of remediation.

Dan: Where were they dumping them? IT's desk, just like, " Here you go," and run?

Ed: Yeah. Basically, " Hey, here's all this. This is ridiculous. Go fix your stuff."

Dan: Wow. I'm sure IT loved that.

Ed: Yeah. Obviously, we see a lot of that leftover hostility between operations and security for things like that.

Dan: I mean, how have things changed today? I know that we do something very specific on the risk-based VM side of things. But what does that in general look like? What are the differences with this more operationalized kind of process?

Ed: A few things. Volume: there's a lot more volume today than there ever has been, like we talked about before. Not only because there are a lot more CVEs; there are lots more signatures in the scanners, there's a lot more infrastructure, there are a lot more applications that organizations are dealing with. Everything keeps growing. Almost nothing ever dies. You say, "Oh, well, we're not going to patch this thing because we've end-of-lifed it. We plan on sunsetting it in 12 months." Twelve months comes around, it's still there. Everything keeps growing. So the volume has significantly increased. There's been a lot more automation put in place, not only on the scanning side, but more on the workflow side of the house, the processing side of the house, the remediation and patch management side of the house. So we can remediate more vulnerabilities. That's the good news. The bad news is that there are a lot more vulnerabilities as well. I know in our recent joint research that we did with Cyentia, we talked about how, oh my God, look at how many vulnerabilities are just coming out of the Microsoft platform. But then, look at how many are actually getting resolved on the Microsoft platform as well. I'd say that is almost the definition of how this industry has changed. We're identifying more. We have a lot more knowledge about what vulnerabilities are out there. We're also remediating a lot more. It's just that the volume has gone up significantly on both sides.

Dan: You're talking about Prioritization to Prediction, volume five, which looked at some of the, I guess, more software-based vulnerabilities and showed that when vendors themselves were identifying and leading these patching efforts, it was a massive correlating factor in how quickly, and with what volume and velocity, organizations could patch their systems, as far as people could see from scan results. From a prioritization standpoint, what would you say is the biggest change? It sounds almost like it was a shot in the dark in the early aughts. Were you using anything else? I know CVSS, the Common Vulnerability Scoring System. When did people start using that as a way to prioritize?

Ed: Yeah, I don't remember the exact year, but I certainly remember a big push for SCAP, the Security Content Automation Protocol, which CVSS is one of the standards within. That came out of NIST, NIST and MITRE I think, if I recall correctly. I want to say mid-2005, 2006, 2007 we certainly started to see more of that... I don't know the exact year, but roughly that timeframe. At my previous employer, we started using CVSS because it was certainly more effective than, one, using the raw scanner score, which was a little more brute force. But also because at that point we had multiple assessment tools running, so it made no sense to standardize on a specific scanner's rating when we had multiple scanners reporting different things and using different rating systems.

Dan: Okay. Do you think that's still a common practice today?

Ed: CVSS is definitely a common practice today. I would say when we're working with our customers or prospects when they first come in, more than anything else, they typically use CVSS to prioritize.
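[Editor's note: the CVSS-based approach Ed describes, using one common score to rank findings that come from multiple scanners with different rating systems, can be sketched roughly like this. All CVE IDs, asset names, and scores below are made up for illustration.]

```python
# Minimal sketch of CVSS-based prioritization: normalize findings from
# multiple scanners onto one scale (the CVSS base score, 0.0-10.0), then
# sort the remediation queue highest score first. Hypothetical data only.

from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    asset: str
    cvss_base: float  # CVSS base score, 0.0-10.0

def prioritize(findings):
    """Return findings ordered by CVSS base score, highest first."""
    return sorted(findings, key=lambda f: f.cvss_base, reverse=True)

queue = prioritize([
    Finding("CVE-2020-0001", "web-01", 5.3),
    Finding("CVE-2020-0002", "db-02", 9.8),
    Finding("CVE-2020-0003", "app-03", 7.5),
])
print([f.cve_id for f in queue])
# → ['CVE-2020-0002', 'CVE-2020-0003', 'CVE-2020-0001']
```

As the episode goes on to discuss, this severity-only ordering is exactly what risk-based approaches later refine with exploit likelihood and asset context.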

Dan: Just general trends. We're seeing more, we're figuring out more, there's more scary stuff out there. But we're also able to, one, have a better process around figuring out which ones to take care of, and then prioritize workloads, creating more of an operationalized process around that. Now we're getting to the whole patching side. I think we already touched on this, leaving the stack of paper on IT's desk and running. What's that like today, with some of the other tools and things you were talking about? How quickly do teams work together on these?

Ed: Yeah. Once you see this process fully operationalized, I'll sometimes see that security has taken themselves out of the mix because they've automated a lot of this, certainly the assessment pieces on their side. They might come in and do some manual assessments as part of a broader effort, or if somebody's messing with a key component of an application: we've got to check out how authentication works, or something like that. But the general vulnerability management and scanning process has become largely automated by the security team. And in the more mature orgs, you get to a point where it becomes more of a self-service process. The operations teams or the application teams say, hey, we're going in, we're understanding our own vulnerabilities. We already have some prioritized ranking system to understand what needs to be fixed first and what needs to be fixed second. We start to look at it from a more risk-based approach. We get that automated, and it can go into things like my automated workflow, my ticketing systems. It could go into workflow management or SOAR tools. It might get kicked off right into patch management, depending on the environment and whether it's corporate or something sitting in a data center. With all of these things, there's a lot more automation, a lot more self-service, and almost real-time or near-real-time visibility into where vulnerability risk is across my org.
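[Editor's note: the self-service routing Ed describes, feeding prioritized findings into per-team queues or tickets, can be sketched as below. Team ownership, the risk threshold, and the data shape are all assumptions for illustration, not any particular product's workflow.]

```python
# Hypothetical sketch of self-service routing: findings above a risk
# threshold are grouped by owning team, so each team can pull its own
# remediation queue instead of security handing over one giant report.

from collections import defaultdict

RISK_THRESHOLD = 7.0  # assumed cutoff; real programs tune this

findings = [
    {"cve": "CVE-2020-0002", "asset": "db-02", "team": "dba", "risk": 9.8},
    {"cve": "CVE-2020-0003", "asset": "app-03", "team": "app", "risk": 7.5},
    {"cve": "CVE-2020-0001", "asset": "web-01", "team": "app", "risk": 5.3},
]

def route_queues(findings):
    """Group above-threshold findings into per-team remediation queues."""
    queues = defaultdict(list)
    for f in findings:
        if f["risk"] >= RISK_THRESHOLD:
            queues[f["team"]].append(f["cve"])
    return dict(queues)

print(route_queues(findings))
# → {'dba': ['CVE-2020-0002'], 'app': ['CVE-2020-0003']}
```

In practice the `route_queues` step would be a ticketing or SOAR integration rather than an in-memory dictionary, but the shape of the workflow is the same.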

Dan: That's interesting. You were talking earlier about security no longer being generalists. I believe that's how you put it. And that applies to application security, some of the IT folks managing that side, basically different groups self-serving. So as you get application developers, if they know where some of these vulnerabilities lie, maybe in their code, they can go fix that early on because they're specialists. They know their code. They can do this quickly and easily. Same thing with some of the infrastructure people and/or different groups. That's interesting. Yeah, I'd never thought about that before.

Ed: Yeah. I was really trying hard not to say shift left anywhere in this podcast, but [crosstalk] we definitely do that, right?

Dan: People like trend words. It's fine.

Ed: It's catching things earlier, is ultimately what you're trying to do because it becomes a lot faster and a lot cheaper to actually fix those things when you find them early.

Dan: As things become more process-driven and tool-driven, with more confidence, like we talked about, in the initial scan data and more confidence in the prioritization mechanisms, that enables people to actually go in and deal with some of this stuff themselves, instead of security being very heavily involved in finding, prioritizing, and then telling other people what they need to work on, basically.

Ed: That's right. Obviously, if you've gotten to the point where it's so operationalized that it's more of a self-service process, think about all the points of complexity you take out of the process, which makes it more streamlined, which makes you able to remediate more, and more quickly.

Dan: Got you. Would you say that that is common today or that's like the dream?

Ed: I would hesitate to use the word common. But I wouldn't say it's only a dream, either. We definitely see it. I would say it's certainly the more mature orgs, the top performers, if you will.

Dan: That seems like a pretty radical shift, honestly. I mean, reflecting back, is there anything else that jumps out to you as a big leap forward in this whole VM process?

Ed: Yeah. The fact that we're finally, after talking about it for 15-plus years, actually able to measure risk, real risk, not just severity, I think that's huge. There's so much data out there now that just didn't exist. I remember, at my previous employer, and across the whole industry, everyone complaining: where is the data? We have no data to make any sort of informed decisions. If anything, the complaint these days is, "Oh my God, I'm buried in data. What am I doing with all this?" But that's a good thing in the sense that I'm able to actually make real, informed, data-driven decisions about what I'm doing, what's going to lower that risk, to understand my likelihood, to understand the overall impact to my broader systems. That's a big deal. The other thing, and this is not just the infrastructure side but the application side too, is the speed of remediation. I remember security being in an uproar when Agile was first being touted and used around development organizations: "Where are we going to have our gates? Where are we going to have our checks? How is it going to be secure?" All that sort of thing. But the good news with all that is, if something did get in, my ability to quickly fix it goes to minutes or hours instead of what could have been weeks or months. So my speed to remediation, if it's critical, is greatly increased.

Dan: We did a nice brief overview, if 45 minutes is brief. I got a bunch of really good history on the CVE and NVD background from PortSwigger, so I'll make sure to link that on the podcast landing page. We also referenced some of the Prioritization to Prediction research that we do here, and I pulled a definition from our blogs. We'll make sure to link all those resources so you can go check it all out on our blog page at kennaresearch.com. I'm Dan Mellinger. Ed, thanks for joining me again. We'll have you back soon. Have a nice day.
