Why Hasn't Cybersecurity Been Automated?
Dan Mellinger: Today on Security Science, we discuss the topic of security automation, or the lack thereof. Hello, and thank you all for joining us. I'm Dan Mellinger, and today we discuss why the promise of automating cybersecurity has yet to be fulfilled. This is based on a couple of articles from my co-host here today, Ed Bellis, who's CTO and co-founder at Kenna Security, now part of Cisco. What's up, Ed?
Ed Bellis: Hey, Dan. Thanks for having me.
Dan Mellinger: It's always a pleasure. So the two articles that you worked on were why data confidence is the key to unlocking security automation, and also the much more provocative, is it okay to take your CEO offline to protect the network? To start this off, you didn't exactly bury the lede here, so I wanted to do a little bit of a preamble from the CSO article, and we'll link both of those on the show notes page as well. Automation has long been something of a pipe dream among security professionals. Sure, it sounds great: get more done faster without growing your team. That would be a welcome change for security teams that are bogged down with thousands of alerts per day, endless vulnerabilities to investigate, and a growing number of assets to defend. Few teams are large enough to cover everything, and automation would free up a lot of time and resources to focus on the few events that actually require their attention. So with that said, we seem to have a perfect use case for automation. We have a limited number of humans skilled enough to do the work, far too high a volume and velocity of incidents for those humans to handle, and a massive dataset with several work streams that we should be able to automate. So what's holding us back, Ed?
Ed Bellis: Just as you said, I did not bury the lede. Confidence in the data, right? So some of the things I've talked about over the last several years really come back to that problem you mentioned, which is I've got a lot more that I need to do than I have the resources to actually do, right? And because of that, there are really these two levers that an organization has at their disposal. One, which we've talked about going all the way back through our content, is prioritization: being able to relentlessly prioritize what I have down to something that is manageable. But the other half of that, the other lever, is automation, right? Doing more with less human interaction. And really, a mature organization is doing both of those things. That said, what we often see is that they attempt to jump the gun and go straight to automation, and that leads to a whole lot of problems. As you can imagine: messy data in. If you're using data to analyze what needs to get done and then automating what gets done, and your data is bad, if you lack any sort of confidence in the data, if there are false positives or false negatives in there, if you can't trust the data set that you're relying on to drive automation, then you're going to be automating all the wrong things and it's going to end up being a disaster. So really, the big thing that holds people back is often the fear of automating the wrong thing. Go back to the standard vulnerability management world of, "All right, the next step in the process is I want to generate a ticket to get these things patched," right? But if you don't trust the data that's generating those tickets, you end up generating thousands of tickets in an organization, and you start angering everybody in operations who's getting all of them.
Suddenly they're not going to do anything, because they don't trust what you're sending them, and nothing gets done, right? And that's probably the least consequential thing that could happen when you start to automate. So having confidence in that data is super critical.
Dan Mellinger: Oh man, I hadn't even thought of that. So you're saying accidentally setting the wrong parameters to automate something as simple and benign as cutting tickets for IT staff to go patch stuff could result in literally hundreds of thousands of tickets being cut, per patch per system, just bogging down the system.
Ed Bellis: You could probably denial-of-service your ticketing system or ITSM system of choice pretty quickly if you mess up automation on the vulnerability management side.
Dan Mellinger: Automate equals cut a ticket per Microsoft vulnerability per system.
Ed Bellis: Yeah. Good luck with your 120 million tickets.
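The math behind Ed's joke is straight multiplication. A quick sketch, with the caveat that both inputs here are hypothetical round numbers chosen only to show the scale, not figures from the episode:

```python
# Naive "one ticket per vulnerability per system" automation.
# Both inputs are assumed round numbers, purely to illustrate the blowup.
systems = 1_000_000        # assumed Windows endpoints across a large fleet
vulns_per_system = 120     # assumed open Microsoft vulnerabilities per system

tickets = systems * vulns_per_system
print(f"{tickets:,} tickets")  # 120,000,000 tickets
```

A per-vulnerability-per-system policy multiplies two already-large numbers, which is exactly how a well-meaning automation rule turns into a denial of service against your own ITSM.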
Dan Mellinger: Got it. That's super interesting. And what are some of, I guess, the riskier ramifications of automating based off of data you have less confidence in, dirty data?
Ed Bellis: Yeah. So we talked about the fairly benign case of generating a bunch of tickets and angering people, worst case bringing down your ITSM ticketing system. It can get a lot worse, right? Automate the actual patching, or automate the blocking of something, and suddenly you have a business-critical service that is no longer available. Maybe a business-critical service that is your main cash register, the one that generates the majority of your revenue. Taking that down for an extended period of time is obviously a bad thing, and we talked a little bit in that article about weighing both the security risk and the business risk. There's a business risk to doing all of these things on the security side, and taking systems down is one of them.
Dan Mellinger: Well, and it seems that organizations may have, and we actually know this from P2P research, right, Prioritization to Prediction, different risk tolerances, and that's for patching vulnerabilities, right? But how does that translate over into more of the operational side of the automation? Because security automation tools are not new, right? There's SOAR, for example, as a whole class. The promise of security automation has been around forever, right? There's a lot of stuff that seems easy to automate, but in reality, maybe not so much.
Ed Bellis: Well, I mean, technically speaking, it is easy to automate, and that's part of the problem, right? Bad data in, bad data out. The problem is if you automate the wrong things, or you start to automate things based on data that is incorrect or bad, that's where it can go sideways really quickly. Don't get me wrong, I think automation is not only a great thing in security, it's really a necessary thing, as we talked about. I mean, the biggest problem that we have in security is that we don't have the resources to do everything that needs to get done. But you have to be mature enough, and you have to have confidence in that data, and it's kind of a maturity life cycle, right? We talk about the stages of grief within vuln management all the time. The first is ignorance is bliss: I don't even know that I have any vulnerabilities, I'm not assessing anything, everything's great. Then I go through the maturity stages where I start to actually assess, and then I start to find these things, and I find that I have security issues all over the place. And then I realize I don't have enough resources to fix all the problems that I have. So the next step is to get into that prioritization. Well, what data do I need at my fingertips in order to decide whether this should be prioritized or deprioritized, and how do I make sure I have all of the right data in place to do that? Once I get to that point, then I can decide, "Okay, this is something that's right for automation. I'm confident that the data is correct, that it's accurate, that it's timely, and I understand not only the security risks before I automate, but also the business risks of automating this process."
Dan Mellinger: Interesting. Well, so I know we're going to get into it, but first, I think it would be interesting to get into some of these use cases, both good and bad, right? You have a very cool quote in the article, and that is: automation is most helpful at the point at which security risk outweighs business risk. And the use case for that is the headline of that second article, is it okay to take your CEO offline to protect the network, and when is that important? Walk us through the ramifications there. What does that scenario look like?
Ed Bellis: Well, it depends on who your CEO is.
Dan Mellinger: Very, very true, but still.
Ed Bellis: Yeah. That said, you're right. So in that case, there's a great fine-grained use case where we can identify: what's going to happen? What's going to go wrong? What could go right? Also, what's going on in your business at the time? Yes, I could take my CEO offline, but when are you doing it? What's going on in the business? How are you doing it? How long are you doing it? What's the security risk that you're avoiding by doing this? And again, does it outweigh the business risk? Another example: we're in the middle of our financial quarterly close, can you take your CFO offline right now, in the middle of this? It had better be a really high likelihood, high impact event from a security standpoint in order to do that, because the business risk is extremely high in that case, right?
Dan Mellinger: Yeah. The downtime of productivity right before a quarterly earnings or something could have far, far, far more detrimental financial impacts than say them getting hit with an attack that may not pull them offline immediately.
Ed Bellis: Yeah. Yeah.
Dan Mellinger: Which is interesting too, because to your point that lays down the whole underpinning is if you're going to do that, you really should have very high confidence in the back- end data that's saying you should have, right? Your CFO or your CEO is going to be like," Why?" And you better have a very, very good reason, right?
Ed Bellis: So having some sort of quantifiable data that includes both likelihood and impact in that scenario is going to be really important, because you can probably quantify the cost of taking your CFO offline in the middle of quarterly close for several hours. So you had better be able to do something very similar on the security risk side.
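The comparison Ed describes is an expected-loss calculation on each side of the decision. A minimal sketch, where every number is a hypothetical placeholder chosen purely to show the structure of the call, not a figure from the discussion:

```python
# Weighing security risk vs. business risk as expected loss (all inputs assumed).
p_exploit = 0.0001           # assumed chance the CFO's machine is exploited if we wait
impact_exploit = 5_000_000   # assumed dollar impact of a successful exploit

p_disruption = 1.0           # taking the CFO offline mid-close is a certainty
impact_disruption = 50_000   # assumed cost of lost CFO time during quarterly close

security_risk = p_exploit * impact_exploit        # expected loss of waiting
business_risk = p_disruption * impact_disruption  # expected loss of acting now
act_now = security_risk > business_risk           # automate only when security risk wins
```

With these made-up inputs, the expected loss of waiting (500) is far below the certain cost of disruption (50,000), so the automation should hold off; crank up the likelihood or impact of the exploit and the decision flips.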
Dan Mellinger: Interesting. Okay, now that we're past the more provocative use case, on the other side, automation can work well in business, and we have a really cool example of that from our Prioritization to Prediction reports, and that's Microsoft vulns, right?
Ed Bellis: Yeah. I mean, they've actually done a really good job, and we talked about this in several P2P volumes, I think volume five is where we did a lot of focus on this, where we looked at a lot of the vendors, and Microsoft in particular amongst others. And one, because they kind of had to, right? When you looked at the volume of vulnerabilities being produced in Microsoft software, they had to be able to do something. So making it operational, making it easy for their customers to ultimately patch that software and remediate those vulnerabilities was crucial, and they did that, right? You look at the Patch Tuesday process, you look at their automation software, SCCM, and their patch management systems. When we look at the data around people remediating their Microsoft vulns, not only do they do that really well, in some cases they do it better than auto-update software like Google Chrome. We saw both were really fast, but Microsoft was actually slightly faster, which of course is where I like to throw out the quote that it's funny people can patch something through SCCM faster than they can restart their browser, because that's all it really takes for Chrome.
Dan Mellinger: Well, we were talking about it earlier. You never close Chrome. There's just too many tabs to lose.
Ed Bellis: That's right. Although they've even done a good job of reopening all those same tabs that you had when you restart.
Dan Mellinger: It's the business risk.
Ed Bellis: There's so much business risk to shutting that browser down.
Dan Mellinger: Losing those tabs.
Ed Bellis: So Microsoft, to your point, has done this really good job of that, and we saw in the P2P volumes that people were able to remediate just a large bulk of vulnerabilities. And what we see in the data is that this is where those organizations use both of those levers, right? They go and they say, "You know what? For these Microsoft vulns, it's the cost of doing business. We run it through our Patch Tuesday process, we patch everything. We don't care, because it doesn't take us any more effort to patch one or all of the vulns, so we just patch all of them."
Dan Mellinger: All of them.
Ed Bellis: Now, they might do some prioritization where, if something bumps up to a really high level, they run that out of band and get it patched quicker. But for Microsoft vulns, they kind of try to just take them all off the table, and then prioritize the ones that are more difficult to patch.
Dan Mellinger: Yeah. And it's an interesting example, because this is almost exactly counter to the CEO case, right? There you have one high-value target, whereas with Windows machines, most enterprises have an asset mix of 85% Windows-based systems, right? So now you're going from a target that is high value to a very large attack surface with varying value across it. I actually have some really cool stats as well, but I think the biggest thing is Microsoft has helped create this system of automation, in their patch timing and people and process, to make it super effective, so much so that Microsoft vulnerabilities are patched on average within 36 days. So holding to Patch Tuesday.
Ed Bellis: Yeah. You compare that to some of the longer poles in the tent where we see average remediation times well over a year, so that's incredibly fast comparatively speaking.
Dan Mellinger: Yeah. We were looking at network devices, right. Some of that stuff was 3.6 years on average to get patched from...
Ed Bellis: I think that's just them actually replacing that network device, not patching it, but...
Dan Mellinger: It goes with the refresh cycles.
Ed Bellis: That's right, yeah.
Dan Mellinger: The switch comes out, the new one goes in. Yeah. Well, I mean, it's just interesting because these are targets of opportunity as well, right? So 70% of Windows systems have at least one open vulnerability with a known exploit, which we know is rare, right, 2% of all CVEs, and they have an average of roughly 119 vulnerabilities in any given month. And so Microsoft had to automate this, because there's just literally no way for people to go individually patch this stuff.
Ed Bellis: Yeah. If you were not automating your Microsoft patches, your vulnerability debt would get so high so quickly that doing that via refresh cycle would mean almost a guaranteed exploitation event.
Dan Mellinger: Absolutely. Well, and what's another scenario? You brought up the consequences of something like taking down your e-commerce site to patch a bunch of vulns. What is...?
Ed Bellis: Imagine automating that, right? That's certainly not something you're going to do at all. You're going to have something in place that says, "Okay, how do we take downtime, or how do we cycle over to a backup, or how do we load balance this across, then patch, and then load balance it back?" You're going to have a plan in place, and even the prioritization process itself, right? When I think back on what we're doing under the hood at Kenna, in terms of the algorithm, the machine learning we do to figure out and predict which vulnerabilities are going to be exploited in the future, all of these types of things, we have a human in the middle, right? We have a supervised learning process, because you need that sort of expertise to understand and make sure the robots aren't going off the rails, to ensure that you are properly prioritizing the things that you're prioritizing. So we looked at a lot of data, right? We looked at not only any given vulnerability and the chances of seeing an exploitation event for that particular vulnerability, we started to look at it on a per-asset basis, like you were talking about: how many vulns are on this asset? How many of those are exploitable? Now look at how many assets there are across an entire enterprise. And then, what is the likelihood ultimately of any one machine being exploited, or having an exploitation event due to one of those vulnerabilities? And it's extremely low. Now, I'll throw a big caveat out here: what we've been talking about almost this entire time on the security risk side is targets of opportunity. If it is a targeted attack, if, to go back to the CEO article, somebody is going specifically after your CEO for who they are or who you are, that's a different calculus, right?
Dan Mellinger: Yep.
Ed Bellis: But for these targets of opportunity, which by the way, from the things that we see in all of these various reports, tend to actually be the majority of cases: it's either targets of opportunity or, in some cases, actual human error that ends up in these types of exploitation events or breaches.
Dan Mellinger: Well, and that makes a lot of sense, right? Because in general, targets of opportunity, the wide net cast, that's where most of the cybercriminal activity seems to be concentrated, at least as it relates to earning money nowadays, right? Ransomware campaigns, all that good stuff. They're trying to cast wide nets overall.
Ed Bellis: That's right. And fortunately or unfortunately, attackers are just as lazy as the rest of us, right? If they find something that works, they keep using it until it doesn't work anymore.
Dan Mellinger: It's interesting because that makes that side of the automation process even more compelling, right, because that frees up these very, very highly, technically skilled resources to go track down maybe the stuff that's actually targeting you, right?
Ed Bellis: That's right.
Dan Mellinger: Some of the other things, like what may be an APT adversary.
Ed Bellis: That's right. I mean, we always talk about how it frees you up to take your high-value resources and focus on strategic things. Some of those strategic things could be preventing a targeted attack, or a bunch of other security things you could be doing across your enterprise, but make sure that your most valuable people are working on the most valuable things.
Dan Mellinger: I want to dig in really quick and tap your experience on getting the data clean in order to get these kinds of analyses done. How do you clean up the backend data to get a clean enough data set to be able to correlate this stuff, right? You've had kind of 10 years, right, several versions of the algorithm. I've heard the stories that early on, you guys found out it was not a lack-of-data problem, it was a dirty-data problem. So what are some best practices to look out for, to filter that data down?
Ed Bellis: Yep, absolutely true. In fact, it matured over time too, right? There was always the messy data problem, specifically in vulnerability management, and I would argue in a lot of areas of security in general. I remember going way back to my days running security at Orbitz, and it was a different problem, right? We talked about the lack of data: I don't have enough data in order to make informed quantitative decisions, which is why we were always in this qualitative, "this feels bad," red, green, yellow approach, right?
Dan Mellinger: Yep.
Ed Bellis: Fast forward, we now have a ton of data. In fact, it's the opposite problem: we have too much data, right? And within that data, yes, it's messy, there are false positives, and we probably still have false negatives amongst all of the noise, but it's a different world and a different problem that we have to deal with now. So one of the things, going back to Kenna, is you've got to have some ground truth that you're measuring yourself against, right? For us, it was successful exploitation events in the wild. That was ultimately what I'm trying to prevent, so if I have that as ground truth data, I can compare and contrast against it and figure out which features of the model are going to be predictive toward it. And I'm weeding out false positives; all of these different sensors will sometimes trigger false positives, so you've got to have that human in the loop who's looking at these things and making sure that doesn't happen. So even on that side of the house, on the cleaning-up-your-data and prioritizing side, there's a lot of work that has to be done manually so that you can automate, right? The irony is that you need people to manually do work in order to automate that work downstream. And there's just this constant state of data cleaning. Ask any data science team around: probably 90% of that team's job is actually cleaning data. It's not doing all of the exciting research we associate with machine learning or natural language processing or anything else they're doing. It's, "Well, I spent most of my day cleaning data." But it's necessary, right? And obviously, the other thing is we've got the fortunate situation of being at the center of activity across hundreds or even thousands of enterprises, so you get to see 100x, 1,000x more, right?
I forget the analogy, something about rising tides and floating boats, but ultimately you can use that to the benefit of all of those organizations, which is certainly an advantage as well.
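The human-in-the-loop cleaning pass Ed describes can be sketched in a few lines. Everything here is hypothetical for illustration: the detection records, field names, and the ground-truth set are invented, not Kenna's actual schema or pipeline.

```python
# Hypothetical cleaning pass: automated checks keep confident findings,
# ambiguous ones go to an analyst queue, and ground-truth exploitation
# data anchors the labels the supervised model trains against.
detections = [
    {"cve": "CVE-2021-0001", "source": "scanner-a", "confirmed": True},
    {"cve": "CVE-2021-0002", "source": "scanner-b", "confirmed": False},
    {"cve": "CVE-2021-0001", "source": "scanner-b", "confirmed": True},
]
ground_truth = {"CVE-2021-0001"}  # CVEs with observed in-the-wild exploitation

clean, review_queue = set(), []
for d in detections:
    if d["confirmed"]:
        clean.add(d["cve"])      # the set dedupes repeat findings automatically
    else:
        review_queue.append(d)   # a human analyst adjudicates these

model_positives = clean & ground_truth  # label source for the supervised model
```

The point of the structure is the split: nothing unconfirmed feeds the model directly, and the overlap with ground truth, rather than raw scanner output, is what the learning process treats as signal.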
Dan Mellinger: Well, building CVSS off of CVE, building EPSS off of some of the CVSS characteristics, Kenna's algorithms built off of a lot of this stuff as the data's been refined and better articulated over time. That's super interesting. And speaking of data, I did want to get into this, because you guys crunched the numbers on what the odds are for any individual. So the likelihood of any employee, in this case it could be the CEO, or it could be me, or it could be you, being compromised as a target of opportunity, right? Getting caught up in that wide net. At a baseline, we know that roughly 2% of all CVEs are ever exploited. And within that 2%, only about 6% are actually seen in more than 1% of organizations, right? So most organizations won't even have one of that very small number of CVEs. And these are whole numbers, right? We're talking millions and millions. Ultimately, when you boil this down, for any individual as a target of opportunity, the chance of them being popped is about 0.012%.
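The chain of filters compounds multiplicatively. A sketch of that arithmetic: the first two rates come from the discussion, but the per-asset factor is an invented illustration, chosen only so the product lands at the quoted order of magnitude; it is not the actual model from the report.

```python
# Conditional filters compound multiplicatively down to a tiny probability.
p_cve_exploited = 0.02   # ~2% of all CVEs ever see exploitation in the wild
p_widespread = 0.06      # ~6% of those show up in more than 1% of organizations
p_asset_hit = 0.10       # assumed per-machine factor (illustrative only)

p_opportunistic = p_cve_exploited * p_widespread * p_asset_hit
print(f"{p_opportunistic:.3%}")  # 0.012%
```

Each stage is a conditional narrowing, so even modest percentages multiply down to hundredths of a percent, which is why opportunistic compromise of any one individual is so unlikely.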
Ed Bellis: Yeah. Now, obviously, there are some variables in there, so I wouldn't say that's a hard and fast number. In fact, even the 2% number fluctuates year over year by a percent or two. But the point is, and we tried to get as quantitative as we could, the real takeaway is that it's easily down in small fractions of a single percent, so far less than a 1% chance that this would happen, right? So that said, do you have quantitative metrics on the business impact side, right?
Dan Mellinger: Yup.
Ed Bellis: So we talk about the security risk. There's your security risk of any one individual being exploited at any given time, so what is the business risk of that CEO being taken offline for, whatever it is, several hours, one hour? That all depends, right? It depends on the timing and when you're going to do it. Maybe pushing a patch out to my CEO's laptop at 9:00 p.m. tonight isn't that bad: it's not going to take that long, and they'll be back up and running in an hour. But automating that is a different story, right?
Dan Mellinger: Yeah, absolutely. Well, given those numbers, and knowing that with wide nets, right, the large attack surfaces, attackers are playing the numbers game just as much as defenders should be, what will it take to get businesses comfortable automating these security actions?
Ed Bellis: Baby steps, like everything in security. Honestly, it's: first we're going to do this manually, and we're going to do it manually many times until we're comfortable with it. Then we think, okay, this is working, let's automate a piece of it and see how that goes. Then that works, and they automate the next step in the process and see how that goes, and they supervise it and make sure it's working well. And when it's working well, they feel comfortable and they continue. Baby steps, no magic.
Dan Mellinger: Yep. Absolutely. Well, and then to pull in the latest industry buzzword, right, the newest hot tech that promises-
Ed Bellis: That sounds exciting already.
Dan Mellinger: Yeah. Yeah. Promises to help companies do everything and anything and/or nothing related to the automation of security: the promise of XDR, extended detection and response. It seems like most, if not all, of the platform-based security players are talking about XDR, which, I mean, is no surprise, right? It's driving a lot of business right now, and the premise is to help automate detection across a wide range of devices, so you can investigate, respond faster, be safer, all that good stuff. Give us a little download.
Ed Bellis: Well, I mean, XDR has all the right building blocks there, right? Because ultimately, when you think about what it is and what its purpose is, it's detect and respond using all of these different data sources. So one, I've got all of the data sources available to me. I'm going to go through all of this, and hopefully I'm using it in some sort of way to prioritize, right? You don't take that step out just because you're talking about multiple sources. In fact, it should add more and more context the more sources you have, which is going to tell me a lot about how risky this incoming event or incident is: is it affecting important assets or important business processes, how many, and what's the likelihood of it spreading beyond where it is right now, based on all of this different fidelity I'm getting from all of these different tools? So one, it's got the building block of being able to consume all of this different data and context to ultimately, hopefully, give you confidence in the data, but also to help you prioritize and use some sort of risk-based approach. And it typically has some sort of automation capability as well, right? We talked about the SOAR market earlier; a lot of the XDRs have some sort of SOAR capability built into them as well. So it's basically taking all of that stuff we've been talking about and trying to put it all together. It doesn't preclude you, though, from doing all of the prerequisite things we've been talking about: to take those baby steps, to understand, "Okay, I trust this data, this is real, it's not a false positive. I understand the risk. I understand the likelihood of something further happening in terms of an incident. And it's worth me now automating this, because I've gone through this process several times manually, and I feel comfortable with what steps B, C, and D are."
Dan Mellinger: Got it. So just breaking this down: at Kenna, we're typically looking at vulnerabilities, that's the lens we look at a lot of stuff through, and it's primarily infrastructure-based, right?
Ed Bellis: Mm-hmm (affirmative).
Dan Mellinger: And XDR is arguably an evolution out of the EDR space, so endpoint detection and response, endpoint telemetry, which is huge. We look at that; we looked at the asset basis in, I think, volume five as well, right?
Ed Bellis: That's right. That's right. Yep.
Dan Mellinger: And then it goes beyond that, right? So more baseline data: telemetry from servers, networking, cloud environments, hopefully applications, email, all that stuff, right? So a larger base data set, which I would also imagine takes a much larger effort to normalize.
Ed Bellis: And it's messier, right?
Dan Mellinger: Yeah.
Ed Bellis: If we talk about a messy data problem, that definitely sounds like one to me. Now, the purposes they're built for are different, right? We always talk about the vulnerability management side of the house: we're in the predict-and-prevent business, the hygiene side of the house, making sure those exploitation events do not occur. The XDR side is the detect-and-response side, and usually what you're seeing there means at least something has occurred, right? How bad that something is depends. And the more diverse sources you have, the more fidelity you're going to have to understand how bad it is or how likely it is to spread. But they do feed each other, right? I can think of Kenna and all the data on the vulnerability management side of the house; pushing that into an XDR is going to give you that much more context. So if an exploitation event, God forbid, occurred, what's the likelihood of spread? What's the impact of that exploitation event happening? Where is it happening? What is it happening to? All that sort of thing. Is it malware? Et cetera. And now I have the context of everything else coming into my XDR, whether it's the network detection side of the house, file detection, or endpoint protection systems, email, et cetera. I can piece together the entire event and figure out what's happening and what I need to do about it.
Dan Mellinger: Interesting. So if we think about the VM side as hygiene, right, let's take care of the vast majority of things before they can happen, then on the XDR and/or EDR side, there's response; detect and respond is in the name, right? So you're looking for smoke, ultimately, and then you've got to go triage which of all these signals is a fire.
Ed Bellis: That's right. Or how bad is the fire?
Dan Mellinger: Hopefully not that bad, but that addresses the other side of the equation, right? We talked about the 0.012%, right? But then the EDR side may handle something that is an active incident based off of a targeted attack, right? They're purposefully trying to find something.
Ed Bellis: I mean, likelihood feeds both equations of risk, but likelihood means something different on each side, right? On the hygiene side, it's the likelihood of that incident or exploitation event happening. On the detect and response side of the house, something has already happened; the likelihood of it becoming bigger or having worse impact is what you're trying to gauge to figure out: what do I automate to take care of this right away? What do I bring a triage team in for? Et cetera. It's different depending on the equation.
Dan Mellinger: Gotcha. So on the VM side, would it be fair to say that successful automation means basically letting the systems decide what most of the high-risk stuff is, the things that have limited business risk, and then allowing you to cut some remediation pathways right out of that?
Ed Bellis: Yeah.
Dan Mellinger: And then on the EDR XDR detect and response side, automation would be essentially triaging things down to a point where mostly meaningful things are going to humans to investigate.
Ed Bellis: Yes. In fact, you might even say there's some automation opportunity even for those meaningful things that go to an analyst, right? You might want to do quarantining or cut off network access or do whatever. Limit the scope of damage of what may have already occurred until that analyst can manually come in and do something about that too.
Dan Mellinger: Pull your CEO offline?
Ed Bellis: That's right.
Dan Mellinger: "Why couldn't I talk to the CFO? Why wouldn't my emails go out?" "You were quarantined." Interesting. Well, I guess the last question before we sign off here is: do you see a future where we have automation from VM through response?
Ed Bellis: I don't think we have a choice, right? The attack surface continues to grow. And the good news is, and we profess this all the time over at Kenna, things are getting better, right? It's becoming easier to secure things now than it was 10, 15 years ago. Things are becoming a little more secure by default, which is also a great thing. Teams are doing a better job at remediation, people are getting quicker, they're making things easier. But on the downside, there are way more assets today than there were a year ago, than there were five years ago, than there were 10 years ago. Part of that is the proliferation of everything that we're doing, whether it's traditional IT or IoT or whatever the case might be, but the other thing we see often is that almost nothing ever goes away, right? It's like, "That system's been around for 20 years, we can't do anything about it. The team that managed it left a long time ago and everybody's afraid to touch it." And those things go on in perpetuity as well. So we continue to improve, but the attack surface grows and grows, and with the amount of resources that we have, we can't just rely on a manual process; it's got to be end to end. We've got to be able to use all of this data we have in order to both prioritize, but ultimately to automate as well.
Dan Mellinger: Automate or else. Awesome. Well, thanks so much, Ed. Really, really reassuring, honestly. And to your point, we've actually seen things improve, even within our own customer base, over time. And we know automation works, so how can we do it more?
Ed Bellis: Yeah. I mean, we know automation works because we proved that in Prioritization to Prediction volume four. That was where we married the survey data with the client data. One of the things we did was figure out who the top performers in remediation are, and then we started asking a bunch of things about their remediation programs. It turned out automation was one of the biggest factors, right? So we talked about people being able to automate their Microsoft remediation through SCCM, and there are a bunch of other tools out there to do that, and people do vulnerability remediation very well. The same can be said for a whole bunch of areas of security, including the detect and response side of the house. But you've got to be able, ultimately, to trust that data, to clean that data, and to make sure you have good data going in, or you're going to be automating some really bad things.
Dan Mellinger: You're going to be automating yourself out of a job.
Ed Bellis: That's right.
Dan Mellinger: Awesome. Well, thanks so much, Ed, as always, and I will go ahead and link all of the resources we talked about on our show notes page. And also, if you want to get some (ISC)² credits, go to the show notes page as well; you'll get some just for listening. So Ed, again, thank you very much. And everyone, have a secure day.
DESCRIPTION
We discuss why the promise of automating cybersecurity has yet to be fully realized.