Around the Virtual Table with Chris, Jeremiah & Ed
Dan Mellinger: Today on Security Science, we have a special around-the-virtual-table discussing some of today's hot security topics. Thank you for joining us. I'm Dan Mellinger, and I have with me on the line the sultan of risk-based vulnerability management, co-founder and CEO of Kenna Security, Ed Bellis. How's it going, Ed?
Ed Bellis: Doing well, Dan, how about yourself?
Dan Mellinger: Doing really good, and we're particularly excited to have a couple special guests joining us today. So our first guest is the founder of White Hat Security and advisory board member for multiple cybersecurity companies, the current CEO of Bit Discovery, and most important to our conversation here, Brazilian Jiu-Jitsu black belt, Jeremiah Grossman. How's it going, Jeremiah?
Jeremiah Grossman: Good. It's good to be with you guys.
Dan Mellinger: Oh, awesome. Thanks so much for joining us. And then last but not least, we have a gentleman who has testified before a Senate committee, is known for his contributions to responsible disclosure, is a reviewer of Black Hat CFP submissions, is the founder and CTO of Veracode, and previously researched vulnerabilities and built password crackers in some sort of loft. Of course, everyone, we've got Chris Wysopal. Thanks for joining us, Chris.
Chris Wysopal: Hey, thanks, Dan, it's great to be here.
Dan Mellinger: We really appreciate it. And just so everyone knows, I'm going to be handing off to Ed to really drive the discussion here today. We have a very on-brand-for-2020 situation in California where we're on fire a little bit. Our skies are this nice burnt umber color, and they're ripping up my street out in front of the recording studio. So, Ed, take it away.
Ed Bellis: Thanks, Dan, appreciate it. Hey guys, thanks for joining us today. I wanted to pick up our conversation where we left off pre-recording, where we were talking a little bit about not only vulnerability management and patch management, but some of the changes that we've seen across cybersecurity since this whole pandemic thing went down earlier this year, right? We're seeing pretty much everyone working remote now, which has certainly changed the landscape in terms of who we're employing and who we're hiring, but also how we're securing these things, what sort of preventions and controls we're putting in, since we can't rely as much on the network layer and things like that. One of the things that came up earlier today was that while the how has changed, the amount of work hasn't. I feel like it's shifted quite a bit in terms of how we're going about securing things, but the amount of work that security professionals have in front of them hasn't changed that much, right? There's still a lot of it. The good news, I think, for those of us looking to hire security talent, is that a lot of us have decided, hey, we don't necessarily have to hire people in the same locations as our offices anymore, right? So that's really expanded the talent pool. And Chris, I know you specifically were talking a little bit about some of the things you guys are doing at Veracode. I would love to hear a little bit more about that.
Chris Wysopal: Yeah. So typically, it would take us a few months to hire a security person. That's because we have a strong preference for being in the Boston area, and just part of our company culture was if you... The default is you come into the office. We have a nice office building. We like collaborating there. We like the culture it brings. But that all got blown up in March, and we haven't had that. And we're not quite sure when we're going to get it back, but it'll probably be sometime next year, right? At least I think we're going to be out of the office at least until the spring. So that's a long time. And if you think about a lot of security people, they change jobs every few years. So if you get one year of someone remote, that could be half of their tenure. So we made the decision, of course we're going to hire remote during this. And it was basically a month to hire. I filled three positions, and it only took about a month to fill them. So that's a dramatic change, and it just shows me that there's a big talent pool of security people out there that are probably being underutilized because of these requirements to come into the office. And just, frankly, you don't really need to do that unless you actually have to come in and reboot a machine. But you don't need security people to do that. You can have an IT person do that, so you could leverage a few people. So I kind of think this is a wake- up call that more security jobs probably can be done remote. And I don't know once we go back in the office if we're even going to put that requirement back on people.
Ed Bellis: Yeah. I mean, you brought up a great point about how every now and then you might need somebody to go in there and physically reboot a machine. And Jeremiah, we were talking about this earlier. There are some of these orgs that were kind of born in the cloud, where the people who are rebooting the machines work at Amazon and Google and Microsoft anyway. And I know that you're over at Bit Discovery now, which I believe has always been kind of decentralized and remote.
Jeremiah Grossman: Yeah. Well, I lived in Hawaii, and other people lived in Texas, and we had no reason to have an office. And so it gave us the ability to take world-class people, wherever they were, and keep them exactly where they were. Lower cost. When you have an office, yeah, you have to go in, but you also have to manage an office, and you have that cost. You have commute. So doing an all-virtual company, we thought, was advantageous. We could just focus on what it is that we do and want to be doing. What I wonder about is, as an employer, like Chris at Veracode or me at Bit Discovery, I think the competition amongst employees might increase. Before, if I had a company policy that said we could only hire people in the general area, that was my pool of people. Now, a candidate must compete against a worldwide population of other talent. You're going to have to be world-class because your physical geography is going to give you no advantage over somebody else. You're going to have to be really good at what you do on a world stage, and that's going to be a very different world.
Chris Wysopal: Yeah, that's a good point. We made the decision a while ago, and this was with our security research team, we haven't done this with our information security team, to go global because we couldn't find application security researchers with deep expertise just in the United States. So we started going global, and we now have people that are in the UK, in Europe, and it makes us better to have a more global talent pool. So I think you're absolutely right.
Ed Bellis: Yeah. I totally agree. In fact, I'd add there are certain jobs where timezones don't matter either. Like you mentioned, Chris, with your security research team, you kind of went global, but for some other reasons you may have stuck with North America, as an example, for your information security team, right? So I find that even at Kenna, we run into things where it's like timezones don't matter in this case, right? So let's go find the talent wherever that talent may lie. There's other times where you need to interface with people who are at least awake at the same time that you're awake, right? And that makes things a little more challenging.
Jeremiah Grossman: I guess one thing I've also heard from HR departments, at companies like Elastic and GitHub that were hiring internationally all over the place, is the burden of local laws and regulations for how you treat employees: tax withholdings, how you hire, working hours, and things like that. It gets pretty complicated to keep up with. There was a company I was talking to, a smaller kind of company, but it had people in 52 different geos. That's the tough part. I think it's easier on the hiring, harder on the operationalizing.
Ed Bellis: Yeah. Well, there are also companies out there that specialize in doing just that, right? The PEOs and all of these different companies that are all about employment law and hiring people around the globe and dealing with all of those employment laws and payroll and everything that might come up with that. So it's almost sprouted other cottage industries as well. So in addition to that, some of the other things that we were talking about related to everything that's gone down, and obviously a lot has changed since probably all of us were, I think, at RSA earlier this year right before all hell broke loose. But some of the things around what has changed in terms of how you actually secure these corporate networks. Jeremiah, you had mentioned that Bit Discovery had done a report, and we were talking a little bit about what's the cloud makeup of these companies, and how many have actually picked up and moved things to the cloud versus just continuously adding new things to the cloud while keeping the legacy stuff on-prem, which I would imagine makes things even more complicated for them, not less.
Jeremiah Grossman: Best I can tell, very few, if any, systems get cloudified. Legacy systems don't move to the cloud. Companies build a brand new system in the cloud, and then they slowly, very slowly, decommission legacy systems. And so when I'm looking across, let's say, the Fortune 500 companies, a very tiny percentage of their overall external attack surface, if you want to call it that, is hosted in the cloud. So we have legacy, the on-premises stuff, then we have cloud. And then as security people, we have to manage both and the migration between the two. And I think what's going to be especially difficult is we have to have these strategies for securing stuff in the cloud, and then we're going to have to manage the security around legacy using very few resources. Because if they're legacy, the business is not going to want to spend on them. They want to spend on the new stuff in the cloud. So how do we defend the old stuff when you've got no money?
Ed Bellis: Yeah, good point. And not to mention, you talked about all the employment talent and things like that. Nobody actually wants to work on legacy anyway.
Jeremiah Grossman: Yeah. Whether it's old WordPress stuff. I think Chris mentioned old JBoss stuff. Yeah, can you redo this and support this old JBoss4 stuff? Is JBoss the new COBOL in the app sec world or what?
Chris Wysopal: Yeah. I think just shifting to the cloud, there's not much advantage there, right? The advantage really is rebuilding. If the application is important and you're going to be running it for the next decade, it's really important to rebuild it. But it's hard. It's really hard. There are so many advantages to it though: you're going to get more talented people who want to work on the new stuff, so you're going to get better retention of people. It's going to be more reliable. You're going to have higher velocity for new features using the new architectures. Those are all the reasons to do it, and that's why people do it, but it's really hard and expensive, and it takes a really long time. I mean, we've been working for a few years now, migrating all of our old legacy stuff, which we built in 2007, 2008, 2009, to cloud-native infrastructure. The other important thing about it is almost all of your security controls have to change too. And so it's a lot of work for the security team, and they have to take classes in all the different cloud technologies. And then you're probably going to be switching vendors too, because some vendors can support the cloud-native architectures better than other vendors do. Right now, we have two MSSPs at Veracode, because we have one that we chose because they supported AWS very well, but we liked the other one for our data center and our corporate network. So that's more expense too, right? So I think you end up with a lot of duplication of security controls, and it's a challenge. And so I think we're in this decade-long process now where the leaders have been doing it for maybe two, three, four years. But this is a big transition for the security industry to become cloud native.
Jeremiah Grossman: It's going to be interesting. We'll get to this whole cloud migration thing, or cloud adoption thing, one way or another. And then as it happens, IT will shift the other way and want to go back to on-prem. And if we think it's really hard to cloudify things, imagine trying to take something off of App Engine or AWS and all those services and moving it on-prem. That's just not going to happen. You think moving to the cloud is hard? Wait until we have to move off.
Ed Bellis: Isn't it just a Docker container? Don't you just pick it up and move it?
Jeremiah Grossman: Yeah. Yeah, sure.
Ed Bellis: Yeah. It's funny. One of the things that we see a lot, because at Kenna we're playing in both the cloud and on-prem worlds, but we also play, to you guys' point, in both the infrastructure and application worlds. And we're starting to see a lot more melding of those two worlds too, where it's really hard to say, "Oh, this is an infrastructure vulnerability," or "This is an application vulnerability." It's almost like everything is becoming an application vulnerability in some way, right? It's just various different services that you're providing, and not only is it cloud native, but you've got a container, and you're responsible for this part of the service. And when you go to remediate that vulnerability, regardless of what it is, it's almost always some form of developer who is actually doing the remediation work.
Chris Wysopal: That's a huge change that we're seeing because of infrastructure as code, and everything is written as code, and that means it's beneficial if it's controlled by the development team, right? Because all those changes and all those configurations are version-controlled, and you can roll back, and you can stage it all and test it all before going to production. So there are huge advantages to that. So we're seeing this kind of shift. All these people who did operational security, right? They scanned live machines: what needs to be patched, what configuration is now known bad, or how something has drifted to a bad configuration. There's a huge industry, right? Vulnerability management is probably one of the biggest parts, and they're scanning all this stuff. But what happens when you can actually understand that you have something known bad or a known bad configuration while it's still at the code stage, because it's a configuration file? So we're seeing that management of infrastructure security shift left, and you have these companies that span both, right? You have these container companies that are partly in the SDLC, looking at the configuration of the container when it's being built, and then they also have some things that are monitoring and looking at it as it's running. So we're starting to see things span development and operations, and you have purely operational companies coming into the development space to check configuration at build time as opposed to in production. And, I mean, we're doing this now at Veracode. We're looking at container configuration files, so it's not just the framework configuration files. It's starting to be the container configuration files, and I can see us doing more and more configuration over time. Kubernetes configuration, what are your security groups in your cloud. I think the world of vulnerability management is going to be shaken up over time.
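The shift-left idea Chris describes, catching a known-bad configuration while it is still just text in version control, can be sketched as a minimal lint over a Dockerfile. The two rules below are illustrative assumptions, not any vendor's actual policy.

```python
# Minimal sketch of "known bad configuration at the code stage":
# lint a Dockerfile's text before it is ever built or deployed.
# The two rules here are illustrative, not a complete policy.
def lint_dockerfile(text: str) -> list[str]:
    findings = []
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    for line in lines:
        # Floating or missing image tags make builds non-reproducible.
        if line.upper().startswith("FROM") and (":" not in line or line.endswith(":latest")):
            findings.append(f"unpinned base image: {line}")
    # Without a USER directive, the container process runs as root.
    if not any(line.upper().startswith("USER") for line in lines):
        findings.append("no USER directive: container runs as root")
    return findings

print(lint_dockerfile("FROM ubuntu:latest\nRUN apt-get update -y"))
```

Run as a CI step against every pull request, a check like this fails the build before a bad configuration ever drifts into production, which is the whole point of moving the check to the code stage.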
Jeremiah Grossman: As you were talking there, Chris, there's a term I recall, and you guys will know this very well, called patch exhaustion. We're exhausted from patching. You guys have been around technology and using computers long enough. I remember in the early days of the web, there was a time when my computer, the one I'm using right now, was fully patched. I patched my browser. I patched my operating system. I was fully patched, and I could go on about my day. It seems now I have to patch constantly. My browser gets patched daily or weekly. My operating system gets patched daily or weekly. My plug-ins on top of them. That's just on one computer. My phone next to me and the apps on it. Are we ever fully patched at any moment in time on one machine? Then we're talking about all the containerization that we're doing. At any moment in time, these mission-critical systems that we're using are never ever ever patched, and the patches are coming from 1,000 different companies. That complexity is daunting. It's nuts. And so patch management, it's an easy thing to say, "Hey, just patch." The people that say it, I don't think they know what they're talking about.
Chris Wysopal: I think it's moving to just becoming continuous, right? So you have to think of it as a completely continuous process. I mean, there are still companies that are like, "Well, we patch once a month." I mean, Patch Tuesday's once a month, right? But for the people that are trying to maintain a specific risk posture, it really is becoming continuous. And as you say, it's really complex because it's at all different levels, right? It's the open source components. It's the configuration of my framework. It's my app server. It's my OS.
Jeremiah Grossman: So it's very easy to miss. So when you patch continuously, do you literally mean we're patching all the time, like there's never a moment in time where we're not patching? Is that what we're saying now? Because it could be.
Chris Wysopal: Well, people are moving to a continuous development model, okay? So for the things that are changing in a continuous development model, like the example of the Netflixes or the Facebooks where they're pushing multiple things every day. Why can't they be changing their infrastructure as code at the same time to be pulling out a new version of something? Or why can't that new version pull down a new open source component? So I think as soon as you start to automate the patching process and you're changing your code every day, you get this continuous nature. But then there's this infrastructure that is just stuck there that's not part of the process, like VPNs and endpoints and firewalls and all these other things, they can't go continuous, right? And those are a big sore point.
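The continuous patching loop Chris describes boils down to diffing what's deployed against the latest known-good versions and applying the bumps automatically. The package inventories below are made-up examples; a real pipeline would read them from a package manifest and an advisory or release feed.

```python
# Sketch of continuous patching as a pipeline step: diff the deployed
# dependency versions against the latest releases and emit the bumps
# to apply. The inventories below are hypothetical examples.
def parse(version: str) -> tuple[int, ...]:
    # Turn "2.14.1" into (2, 14, 1) so versions compare numerically.
    return tuple(int(part) for part in version.split("."))

def pending_upgrades(installed: dict[str, str], latest: dict[str, str]) -> dict[str, str]:
    # Upgrade anything whose deployed version trails the latest release.
    return {
        name: latest[name]
        for name, current in installed.items()
        if name in latest and parse(latest[name]) > parse(current)
    }

installed = {"openssl": "1.1.1", "log4j": "2.14.1"}
latest = {"openssl": "1.1.1", "log4j": "2.17.2"}
print(pending_upgrades(installed, latest))
```

Run on every build rather than once a month, a step like this is what turns Patch Tuesday into the continuous process Chris is talking about, at least for the code that flows through the pipeline.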
Jeremiah Grossman: So I guess then it comes down to, as security professionals, we have to protect systems that are never going to be up-to-date on their patches. So that's a difficult world.
Ed Bellis: Yeah. And, I mean, you guys are talking about patches that are actually written by somebody else for you to apply, not to mention all of the patches that you have to write yourself for your own software.
Jeremiah Grossman: So yeah, it's going to be a world where, whatever amount of software we have in the world, we're going to have 10 times more in probably the next 10 years. I don't know if anybody could have been more right, ever, than when Marc Andreessen said, "Software is eating the world." Has there ever been a more prophetic statement? We are witnessing it right now. All we do is software, and there's going to be so much more of it. And furthermore, I saw a tweet somewhere that said, "When you load a page in Internet Explorer or Edge, no one really knows how it works." Now we have systems that are so complex, there's no one in the world who knows how a browser loads a webpage. I mean, so we have constantly evolving software in a world where no one knows how it all works. Okay, sure.
Ed Bellis: It is definitely getting more complex. I mean, Chris, you were talking a little bit about all the different services and infrastructure as code and things like that, but imagine the big push. We went from, we've got to break up these monolithic apps and make them microservices, right? And now we've got millions of services all over the place, and nobody knows what relies on what service anymore. So if something goes wrong in one of those services, you have these cascading failures, right? The complexity is just making things really difficult, I think, for security people.
Chris Wysopal: The other thing we're seeing, and this is sort of along the same line as the microservices, is we see a lot of shared code within large organizations, where they build their own frameworks, they build their own components. And obviously people like Google and Facebook are doing this, but we're seeing this across all of our enterprise customers. And so you have this sort of second-party code. One team is building it, and another team is consuming it. And they have to make sure that when they fix a bug it gets out to all of those places. So you have shared code and also microservices. So even within the application portfolio within one organization, it's just super complex.
Ed Bellis: Yeah, and that's not even to mention, you talk about them writing their own functions and things like that, and then you talked about the open source libraries. We also see people who are just forking those libraries and making modifications to them, and then they have to maintain that over time, and things go wrong there as well. Just the complexity blows my mind sometimes.
Jeremiah Grossman: I guess, within a group like ours, or in the industry, we don't talk so much about the supply chain, about where software comes from. There's a finite number of programmers in the world developing all this software. Are we seeing more and more software developed by more and more programmers? Is there a significant uptick in the number of people contributing code, or is it more or less flat?
Ed Bellis: Minecraft.
Chris Wysopal: Yeah, probably Minecraft. And he published it, and someone's consuming it. So I don't know if he has a bug in there that could actually be exploited by anybody, but I think it's an example of how easy it is to start contributing a little bit of code and write a library that gets consumed. I mean, I think we talked about the top 1,000 libraries, but there are literally millions of open source libraries out there. So I think the pool is expanding. Maybe it's not all professional programmers, but you're getting some of what we call citizen developers. It's just when an employee decides to learn programming and start writing code. It could be a doctor doing a research project, or it could be a stockbroker trying to automate something. The tools are there to make it easier and easier to program now.
Ed Bellis: And the good news is that once something works and people are using it, it never goes away.
Jeremiah Grossman: That's a good thing.
Ed Bellis: One of the things that we talked about earlier, speaking of things that never go away, is this kind of cottage industry of things that it doesn't seem like anybody, certainly on the security vendor side, is doing much in terms of checking. Like we were talking about the WordPress plug-ins, and how there's a million of them out there, and there are so many different vulnerabilities in them. Everybody seems to look at it as though, ah, it's WordPress. I'm not putting anything too important on here. I'm not going to spend a whole lot of money on maintaining this. I'm certainly not going to be doing vulnerability scanning against it, and I don't have a whole lot of options to do vulnerability scanning against it anyway, right? I mean, what are you guys seeing out there? I feel like there's a million opportunities for just various takeovers of all the different content providers and things like that.
Jeremiah Grossman: Well, WordPress has its reputation, good, bad or otherwise. WordPress, I think, is like 20% of the web when you look at it, something like that. It's ridiculous, the deployment. WordPress core security, it's had its problems in the past, but it's actually pretty solid. Where we run into issues predominantly is with the WordPress plug-ins that people use and deploy. When it comes to info sec vendors and vulnerability management, you have, let's say, the Qualyses and the Tenables of the world. They generally look after vulnerabilities with CVEs, network-based vulnerabilities. That segment of the market doesn't really scan for WordPress vulns, and when they do, they certainly don't go after the WordPress plug-in vulnerabilities. So then you go to the other half of the vulnerability management market, the app sec companies, of which Chris at Veracode is one, White Hat and others. They look for custom web app vulnerabilities. Very rarely, in my experience, do you see them scanning for even WordPress core vulnerabilities, let alone the plug-ins. So this world of WordPress plug-ins is not well-served by the vendors of the world. No one generally has a third-party WordPress plug-in scanning vendor that they consume data from. So those who use things like WPScan, which I highly recommend, are doing it on their own. That always signals to me that the vast majority of WordPress instances out there that companies run don't really get touched. There's vulns everywhere out there. And you could say, "Yeah, these WordPress things are just blogs with more or less static content." But you never really know where these things are deployed, and if you hack that one point, that one server, you get a beachhead into the rest of the organization. So that's a source of risk that I'm curious about, and I'll be looking at some more research later on.
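The plug-in audit Jeremiah describes reduces to matching installed plug-in versions against an advisory feed. The plug-in names and fixed-in versions below are invented purely for illustration; WPScan does this for real against a maintained WordPress vulnerability database.

```python
# Sketch of a WordPress plug-in audit: flag any installed plug-in
# running a release older than the version that fixed a known vuln.
# Plug-in names and advisory entries are invented for illustration.
def parse(version: str) -> tuple[int, ...]:
    # Turn "5.1.9" into (5, 1, 9) so versions compare numerically.
    return tuple(int(part) for part in version.split("."))

# Advisory feed: the plug-in is vulnerable below its fixed_in version.
ADVISORIES = {
    "contact-form-xyz": "5.2.0",  # hypothetical plug-in
    "gallery-lite": "1.4.3",      # hypothetical plug-in
}

def vulnerable_plugins(installed: dict[str, str]) -> list[str]:
    return [
        name for name, version in installed.items()
        if name in ADVISORIES and parse(version) < parse(ADVISORIES[name])
    ]

print(vulnerable_plugins({"contact-form-xyz": "5.1.9", "gallery-lite": "1.4.3"}))
```

The hard part in practice isn't this comparison; it's maintaining the advisory feed and enumerating what's actually installed across every forgotten WordPress instance, which is exactly the gap Jeremiah says the vendors aren't filling.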
Chris Wysopal: I think a lot of info sec teams sort of ignore the WordPress stuff. They're like, "Oh, the marketing team takes care of that. They hire someone to host it somewhere else. It's not on our network, and I'm not going to worry about that." But then I think there's also a lot of places that don't have an info sec team that are just hosting it on their own corporate network or their own cloud area, and it could be a potential entry point into that network. It does seem like something that just constantly falls through the cracks. And it's not just WordPress. It's a lot of these marketing tools. It's all the CMS systems. It's a lot of these marketing systems that are all sort of outsourced. And the thing that's in common for marketing systems, which is scary, is it's customer data, right? So if that stuff gets breached, those are all reportable events. It's not like an employee's laptop getting popped, which you don't have to report. So marketing systems, I guess, scare me. We do a lot of supply chain security on those vendors because there are a lot of vendors out in that space that don't take security seriously, because the marketing teams typically don't know what to ask unless you have a robust vendor management program.
Jeremiah Grossman: So this ends up being a compelling source of risk, or a compelling entry point, for the average adversary because of the way the market dynamics respond. For instance, Qualys is not going to go after, or Tenable is not going to go after, or whoever, all the other network scanning guys, are not going to go after these plug-ins, because it's really hard to enumerate and test for these particular types of web app vulnerabilities. And generally in the app sec market, you're not going to pay the application scanning rates that you would for a bank for some WordPress site that the marketing department has that really isn't worth a lot to you. So the only way for a company to get their arms around this is to do it themselves, which is exactly what we tell them not to do. You really don't want to run your own scanners, your own vulnerability management program, so we're stuck in this zone. So this is why we see a lot of WordPress things getting popped, because the incentives in the system are such that these things are going to be constantly deployed and constantly unpatched. And right back to where we started with patch exhaustion. How do you keep all these WordPress plug-ins patched? Or how do you even stop anybody from installing them in the first place?
Chris Wysopal: I doubt that there was a vendor management process around adding the file system plug-in to WordPress, right? A lot of companies worry that any developer can pull down some open source package and put it into the container or link it into the app, and they say, "Oh, I need a control around my open source usage," right? So that's where you get software composition analysis and other things like that. I don't think anyone's really thinking about controls around plug-in usage. That's just an individual person's decision, "I want to use this plug-in," and the CSO doesn't know about it.
Jeremiah Grossman: Yeah. To possibly raise a contentious little point: one of the things I've been thinking a lot about when we talk about these topics is the point of peak prevention. Has our industry reached the point of peak prevention, where we spend billions of dollars annually, hacks still happen all the time, and even if we doubled our budget and doubled our efforts, we wouldn't really see fewer breaches? And the only real way forward from here to make a substantial impact is fast detection and response, or minimizing the damage. I don't know how you guys feel about that. I'm really interested. But have we reached the limit of find-vulnerability-and-patch-it as a way of stopping breaches?
Chris Wysopal: I think a nuance in there, besides detection and response, is limiting the blast radius of different things, right? So if a host gets popped, that's it, right? It's just that one thing. If one user gets popped, it's just what they have access to. So I think that's somewhat of a nuance, and that kind of gets to network architecture, identity and access management, other architectural issues. So I think it's not pure prevention. To some degree it is, but maybe the money is more valuably spent, instead of stamping out individual bugs, on minimizing the blast radius of a particular bug. Because that's really the only way you're going to help with a zero day, besides quick detection and response.
Ed Bellis: Yeah. I mean, you have to do both in the end, right? I've seen this industry pendulum back and forth between predict-and-prevent and detect-and-respond. And the reality is, if you're just focused on detect and respond, then you're constantly in firefighting mode. If you're just on prevention, you're going to miss something, and something's going to happen, so you've got to do both. But it's interesting, because we have been talking a lot about WordPress and content management solutions and these things that are being managed by marketing, outside the purview of security, and all of these things. And like, "Oh, it's okay though. It's just marketing. It's just a public website," these sorts of things. But sometimes that's probably the most popular website or post that you have across your company. And it's not your team that's hitting it, it's your customers, and they're all visiting it. So if suddenly my marketing site gets compromised and I'm serving up malware to everyone who visits it, there's a huge impact, because suddenly I have a million different users hitting this who are all getting malware because of my website.
Jeremiah Grossman: I wonder when the business cuts off info sec and goes, "You've gotten enough money now. Make it better with what you have." And they kind of have a point. I think, what is our industry spending, $120 billion annually? Only for everything to get hacked all the time, and the best we can tell them is, "Hey, we tried our best," that sort of thing.
Chris Wysopal: Yeah. I just think the counterpoint, though, is the Equifax-type situation or the Capital One, where it runs to hundreds of millions of dollars. And so it's really hard for us to, what's the insurance industry term? Estimate the expected loss there and balance it with the spend. And it's because we don't have the data. We don't know if we're spending enough to serve as the insurance policy that keeps us from having the $100 million-plus loss. So I think it's economic.
Jeremiah Grossman: One of the things that I was looking at, and I've been studying cyber insurance for six, eight-plus years, is that what seems to matter in terms of dollar loss is dwell time of the adversary. Let's say you give somebody root access on a bank for a day. How much money are they going to steal? Not a whole lot. If you give it to them for a year, they're going to rob the joint blind. Same hack. The only difference between the two end losses is dwell time, so fast detection and response. That's where I get back to peak prevention. If we can detect a hack, detect a breach, and get the adversary off the system quickly, we can take all the breaches we want. If they only get an hour, a day, a week on the system, they're not going to cause too much harm. I think the reason Equifax loses hundreds of millions of dollars in that breach is because the bad guys were there for, what, six, nine months plus? If that wasn't the case, they would be another footnote in the DBIR and that's it.
Chris Wysopal: So I think that, to take Ed's point though, it still has to be a balance because if you're constantly, " Oh, a new intruder today, oh, a new intruder later in the day, new intruder tomorrow, let's clean up after them, let's get rid of them," the only way that you can cut down the dwell time is if it's manageable, right? So if you had a breach a day, even if you had really good detection and response, I think it would be hard to clean up all of those intruders. So you're absolutely right. It is dwell time, but it has to be a manageable number of intrusions to keep that dwell time low.
Jeremiah Grossman: Okay. So then it brings up the really interesting, difficult, almost impossible conversation: what's the appropriate split of spend between the two categories, prevention versus detection and response, and how do you approach that exactly? That one's difficult.
Ed Bellis: I mean, there are models out there like FAIR and different things like that where, at a broad level, you can kind of estimate and say, "All right, here's kind of your expected loss ratios," and different things like that. And there is going to be a point of diminishing returns, right? And I think Michael Roytman from our Data Science Team had looked at a lot of these various breaches, and it almost follows a power law, same as venture capital does, right? So you've got these VCs out there that are investing in a bunch of different companies. Most of them are going to fail. A couple of them are going to be singles and doubles, and then they're aiming for one or two to make their entire portfolio by hitting a home run, right? The same thing happens with these breaches. Of all these different incidents that happen, tons of them are just really small. Either you don't hear anything about them, or you hear a little bit about them or, to your point, Jer, it's a footnote in the DBIR, and then there are a few that are these mega-breaches, right? That are in the news, that cost the company tens of millions or even more. It follows this power law, almost a hockey stick, where there are a few that are really driving a lot of the average breach costs across the industry.
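The power-law intuition Ed describes can be sketched in a few lines of Python. This is a toy illustration with a hypothetical Pareto tail standing in for incident losses, not real breach data: with a heavy enough tail, the mean loss far exceeds the median, and a handful of incidents carry a large share of the total.

```python
import statistics

def pareto_quantile(q: float, x_min: float = 1.0, alpha: float = 1.1) -> float:
    """Inverse CDF of a Pareto distribution; small alpha means a heavy right tail."""
    return x_min / (1.0 - q) ** (1.0 / alpha)

# Deterministic grid of quantiles standing in for a population of incidents.
quantiles = [i / 1000 for i in range(1, 1000)]  # 0.001 .. 0.999
losses = [pareto_quantile(q) for q in quantiles]

mean_loss = statistics.mean(losses)
median_loss = statistics.median(losses)
# Share of total loss held by the 10 largest incidents out of 999.
tail_share = sum(sorted(losses)[-10:]) / sum(losses)

print(f"median={median_loss:.1f}  mean={mean_loss:.1f}  top-10 share={tail_share:.0%}")
```

The typical (median) incident is small, but the average is dragged far upward by a few mega-losses, which is exactly why "average breach cost" figures say so little about any individual company's risk.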
Chris Wysopal: Those breaches also decide what the regulators say you have to do, right? So because of that breach, which is sort of the once- a- year, the regulation gets put in place. And then everyone has to follow. So anyone in a regulated industry kind of has to deal with that. It's not risk- based necessarily.
Jeremiah Grossman: So that's where you get to a point where I tend to make fun of the info sec industry and how we do budgeting, how we choose our priorities. We take last year's budget, add 10%, go to RSA or ask Gartner what the new threat of the day is, and we add a new security control around that, a new point solution, and that's how we do info sec. There really is no strategy behind it. And then we're in this world of COVID where we're not going to conferences and things like that, so what's the new threat of the day? What's the new product short list going to be when we're not going to conferences anymore? How does that ethos evolve? Because it's all very new. Maybe we get more strategic these days, but where are CSOs getting their information about what to do next? How are they doing that now?
Chris Wysopal: So I think they have networking groups, right? So we're a member of a few different groups in sort of the Boston area. I think, of course, they listen to podcasts. But I think the CSOs talk amongst themselves. They're like, " What's working for you?" I mean, when COVID hit and people were trying to deal with how do I secure that remote workforce, a lot of CSOs started talking to each other. Because they're like, " How are you doing it? How are you going about it?" So I saw that more interpersonal networking that wasn't related to conferences at all or Twitter or whatever, just person- to- person or small groups chatting, happen more.
Jeremiah Grossman: So maybe the ethos of info sec now is how you secure the remote workforce, because that's the one constant change. And anything we do in this industry must be tied to that, because all our developers, all our IT people, all our info sec professionals, all our salespeople, all our marketing people, everything they're doing with the tech is all going to be remote. I mean, is there anything else that was a singular greater change about the things that we're protecting than that? Even the customers and where they're buying stuff from us, they're all remote too. That's the paradigm shift.
Ed Bellis: Good thing we all went zero trust earlier this year.
Chris Wysopal: Yeah. So it's a huge challenge. I think it's also a big wake-up call, because we were slowly walking into a world that was all remote. I know we were allowing more and more remote employees full-time. People who came into the office with hour-plus commutes were like, "Well, I'm going to come in four days a week. I'm going to come in three days a week." We were slowly going more remote, and it was just a wake-up call when everyone went remote, when people started to say, "Wait a minute. How are my security controls working when everyone's on a laptop at home?" If it's not endpoint security, I don't have it, right? So how am I doing patch management? If I was using the VPN for authorization, I can't do that anymore.
Jeremiah Grossman: How does that change the threat models for the average adversary? How much does it really change if you're targeting an organization, or even in the mass blast approach? If everybody at the organizations you're targeting is remote, what does that change about the style of attacks that we're going to see?
Chris Wysopal: Well, I can tell you one thing: a lot of organizations aren't patching as regularly or hitting 100% of their endpoints like they would, because people have to connect to the VPN to update, unless you've moved to cloud-based, zero trust patching. So I would think going after the vulnerabilities that were supposed to be patched last Patch Tuesday would be a good strategy. Better now than it was six months ago.
Jeremiah Grossman: Oh, okay. So patch management only happens when you're connected to the VPN. But just because you're not connected to the VPN doesn't mean you can't get hacked. Yeah, that's fun.
Ed Bellis: Yeah. Well, even better is to look at all those machines that are only going to get patched if they connect in via the VPN. And even if you looked at them pre-pandemic, right? Dan would know this from some of the research we did with Cyentia: they're probably doing an okay job of patching all of those vulnerabilities from Microsoft. It's everything that's not from Microsoft, right?
Chris Wysopal: Right. Because then you need some sort of third- party tools, and those third- party tools require you to be on a VPN to manage those endpoints. So yeah, it's gotten worse.
Jeremiah Grossman: I don't know why it just occurred to me. There are tools out there that companies use, including info sec, that when somebody lands on your website, you can see what company that they're from based upon their IP address. Now that everybody is remote using probably split- tunnel VPNs, that doesn't work anymore. You don't know who's visiting your websites because they're all from home.
Chris Wysopal: Well yeah, so a lot of those open source security scoring solutions that would put a rating on a company based on different things, their endpoint understanding of a company just went out the window, right? Because they were looking at that IP address. That was an exit of the VPN.
Ed Bellis: All that bot traffic is no longer coming out of your network, so you must be secure.
Chris Wysopal: Exactly. Everyone's endpoints got more secure.
Ed Bellis: Well, there is one other thing that's been kind of changing the landscape for the security folks this year other than the pandemic, and that is we have an upcoming election. And some of the things that we are starting to see, both pre- and probably post- election as well, I'd be curious as to your guys' thoughts as to what is going to change this year because it's an election year, what will change post- election if things kind of change in the White House.
Jeremiah Grossman: One of my curiosities I was talking about on Twitter a moment ago is where the campaign websites and their supporter websites are hosted, just doing general analysis on the inventory of the campaign websites. Are they on the same hosting providers? Are they in Google? What countries are they from? Just anything that might stick out. Just on cursory analysis, I noticed a ton of websites behind Cloudflare. I think that tells us what we can already modestly assume: we're going to see a tremendous amount of DoS coming up, if we haven't already, leading up into the election, a lot of DoS attacks everywhere.
Chris Wysopal: But I think the DHS is more aware of the attacks now than they were four years ago. I think four years ago we were kind of caught with our pants down. Things like attacking the voter registration systems were angles of attack that people hadn't really thought about, or certainly not pervasively on a national scale. So I think that we learned a lot from the election four years ago, and there's a lot more hardening and monitoring in place. Obviously, it's not perfect, but DHS is saying that they're not seeing the kind of attacks they even saw last time, and I think they're looking more. So, I mean, maybe we've kind of scared the enemy into not doing anything, or maybe they're just waiting for the last minute. I don't know.
Jeremiah Grossman: I think, and this is just me speculating here, it might be more disinformation campaigns, meaning if you wanted to attack a democracy, you just have to destroy people's trust in it. When the election happens, whatever the outcome is, is anybody going to trust the outcome? And is anybody going to be able to prove one way or the other that the election and the voting system was fair? Do we have good audit trails? Can we prove to ourselves that the voting system was fair? If we can't do that, then that's a difficult spot. That's what I don't want for this country, that we can't prove that it was fair.
Chris Wysopal: Yeah. So that's why I follow Matt Blaze, and he really knows what he's talking about on this stuff. And states like Colorado have these audit controls in place where statistically they can see if something is getting messed with, right? Because it can't happen naturally that way. I think Colorado might be one of the only states that has those kinds of sampling audits that they have to do by law. Other states only do it when the election is super close, right? When it's within 0.1% or something. So I think it just should be a standard thing to audit the results of every election. It might cost a little bit more money, but what's the price of democracy, right?
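The sampling audits Chris mentions are risk-limiting audits (RLAs). The real math (e.g., the BRAVO procedure) is more involved, but the scaling intuition can be sketched with a rough, hypothetical rule of thumb: the number of ballots you need to hand-check grows roughly with the inverse square of the margin, which is why close races need far larger audits.

```python
import math

def approx_audit_sample_size(margin: float, risk_limit: float = 0.05) -> int:
    """Very rough ballot-polling sample-size heuristic: tighter margins and
    stricter risk limits both require auditing more ballots. This is only
    the scaling intuition, not the actual BRAVO formula."""
    return math.ceil(2.0 * math.log(1.0 / risk_limit) / margin ** 2)

for margin in (0.10, 0.01, 0.001):
    print(f"margin {margin:>6.1%}: ~{approx_audit_sample_size(margin):,} ballots")
```

A 10-point race needs only a few hundred sampled ballots, while a 0.1% race needs millions, which in practice means a full recount, consistent with Chris's point that the closest races get recounted anyway.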
Jeremiah Grossman: And see, that's my concern there. We might be able to secure it. We might not be. But either way, was the election cheated? Was it not? We've got to know one way or the other. Otherwise, what's the point of any of it? I mean, that we've got to know.
Chris Wysopal: And I also think part of it is the willingness to accept that the results might take a while to become certified, right? Because until these audits take place, until the recounts and double-checks happen, we won't know. So this idea that we know instantly because we live in a digital society, I think we have to let go of that notion, because we have to have paper and it has to be double-checked, or I have a hard time believing in the integrity if those aren't there.
Ed Bellis: Is there enough time for a manual paper audit between November and January when the transition would have to occur?
Chris Wysopal: Absolutely. I think they just have to set the expectations that the results are known in two weeks.
Jeremiah Grossman: I would be happy with whatever time it took, provided we can actually do that. Can it be audited? Are there the paper trails? If there are, a day, a week, a month, fine. As long as we can physically audit it, great.
Ed Bellis: Yeah. No, that's a great point. It's well beyond my pay grade. I haven't done too much on the election security stuff, but I will definitely be following Matt Blaze after this conversation.
Jeremiah Grossman: Oh yeah, Matt's great. He's not the only one. There's another name. I'm blanking on it now. Matt Blaze and-
Chris Wysopal: The guy from the University of Michigan. Professor Hellman or something like that?
Jeremiah Grossman: Yeah. Those guys are really good. Thoughtful conversations, even a little humor thrown in, one of the two. Chris is absolutely right. Those are great guys to follow.
Ed Bellis: Awesome. Well, with that, I know that we are starting to run a little low on time. Dan, you want to come back in and wrap us up?
Dan Mellinger: Yeah, sure thing. It's been a super interesting conversation, guys. And just to kind of tie things off, so Jeremiah, can you kick off? What's something that you're working on right now that you're just super excited about, you think people should be paying attention to?
Jeremiah Grossman: I spent the last, oh, I'm dating myself here, 20 years in application security, and the number one problem I found is most companies don't know what websites they own. So the problem that I'm tackling with the next phase of my life here is something called asset inventory. I want companies to be able to know all the assets that they have, their external attack surface. That's part of what I want for them. But I've also gone after trying to do an inventory of not only the Fortune 500 but well down the list, to have an inventory of every single company out there so I can understand what the internet is doing, what they have out there, and find signal in the noise. It's fascinating stuff. So that's the research I'm working on right now. I don't know what the lessons are going to be. I'm keeping an open mind. The one I was touting about today on Twitter just happened to be how no one really moves to the cloud, they do cloud also.
Dan Mellinger: Nice. Very, very interesting. Chris, same question. What are you super excited about right now?
Chris Wysopal: Yeah. So I've been in the finding-vulnerabilities business for a long time, and I actually don't think we have to get better at finding vulnerabilities. We have to get better at fixing vulnerabilities, and I think that's the challenge. And there are SOAR products out there that we're starting to see, especially in the cloud environment. If you detect a misconfiguration in the cloud environment, with an API call you can just fix that configuration, right? So we're seeing it there. But it really needs to move to application security. We really need auto-remediation. So one of the places we're doing it first is on open source components. If we see that you have a component that has a vulnerability in it and you're actually exercising that code, we'll look at what's the updated component that is more secure, that doesn't have that vulnerability, and is it compatible with the way you're using it? Is the API call the same? We can do an auto pull request and basically just automatically patch that open source component. So I think that type of thinking is really what's needed. And some of the ideas we're working on is that there are some vulnerabilities that are really simple to fix in the code. So if you wrote the code this way, maybe templatize the approach, where we see how you're doing output encoding to prevent cross-site scripting. In a lot of cases, that can just be a templatized approach. You did it this way. We're going to make you do it this way. I don't think developers are going to want that to be fully automated, but they're going to like to see it presented to them as a diff: look this over, do a code review here, is this a correct diff? Do it. Just like they might be checking their buddy's code change, they're just going to check the machine's code change. So I think that auto-remediation and auto pull requests are sort of the next wave of actually making more secure software.
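The auto pull-request flow Chris describes can be sketched in outline. Everything here is hypothetical (the advisory data, the library name, the function names); a real tool would query an actual vulnerability database and a dependency manifest, and would also verify API compatibility before opening the pull request.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical advisory data; a real tool would query a vulnerability
# database such as the NVD or a commercial feed.
ADVISORIES = {
    "example-lib": {"vulnerable_below": (2, 4, 1), "fixed_version": "2.4.1"},
}

@dataclass
class Dependency:
    name: str
    version: str

def parse(version: str) -> tuple:
    """Split a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def propose_upgrade(dep: Dependency) -> Optional[str]:
    """Return the patched version to open a pull request for, or None if safe."""
    advisory = ADVISORIES.get(dep.name)
    if advisory is None:
        return None
    if parse(dep.version) < advisory["vulnerable_below"]:
        # A real system would also check that the upgrade is API-compatible
        # with how the component is actually used before generating the PR.
        return advisory["fixed_version"]
    return None

print(propose_upgrade(Dependency("example-lib", "2.3.0")))  # vulnerable version
print(propose_upgrade(Dependency("example-lib", "2.4.1")))  # already patched
```

The key design choice, per Chris's point, is that the output is a reviewable diff (a version bump in a manifest), so the developer stays in the loop rather than trusting a fully automated change.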
Jeremiah Grossman: So just to jump in there, Chris is speaking my language, and this sets him up here, because I've been tracking remediation rates forever. I think at White Hat it was stable at 50%. But, Chris, if I recall, some of your stats recently, sorry, I'm blanking on the name here. What's the name of your report?
Chris Wysopal: The state of software security.
Jeremiah Grossman: The state of software security. Your remediation rates that you're seeing are going up, aren't they?
Chris Wysopal: They are going up, and so what we did was we wanted to correlate what activities a development team is doing with that rate going up. And the highest correlation was the number of times they push a new version of their product, the number of builds per day. So the number of changes and pushes correlates with them fixing the code. It just goes to show, if you find a vulnerability and you're going to put out a release six months after you've found it, what's the likelihood that you're going to fix it before then? But if you found it and you're putting out a release tomorrow, it's actually more likely that you'll fix it for tomorrow's release than you would for one six months out. Because there are a lot of factors, like, oh, we can do it later, kick the can down the road, or the person who knows how to fix the bug is now on a different team, so it becomes harder. So the immediacy of the churning of the code is actually what we found makes remediation rates go up.
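Chris's observation that deploy cadence drives remediation can be put in toy form. The model and its parameters are made up for illustration: treat each release as an independent chance to ship the fix, so more frequent releases compound into a much higher probability the fix lands before a deadline.

```python
def fix_probability(release_interval_days: int, deadline_days: int,
                    per_release_fix_chance: float = 0.3) -> float:
    """Chance a known vulnerability is fixed by the deadline, under the toy
    assumption that each release is an independent opportunity to ship
    the fix with the given per-release probability."""
    opportunities = deadline_days // release_interval_days
    return 1.0 - (1.0 - per_release_fix_chance) ** opportunities

# 90-day remediation window under three release cadences.
for interval, label in [(1, "daily deploys"), (30, "monthly"), (180, "semiannual")]:
    print(f"{label:>14}: {fix_probability(interval, 90):.0%}")
```

Under these assumptions, a daily-deploy team is near-certain to ship the fix within 90 days, a monthly cadence gets there about two-thirds of the time, and a six-month release train gets zero opportunities, matching the correlation Chris describes between push frequency and remediation rate.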
Dan Mellinger: That's super interesting. I think we've found similar good news on the infrastructure side for remediation rates as well. But I think that's going to wrap it up for us today. So I will go ahead and link both your guys' Twitter profiles on the podcast page. Because if you're not already following them, you should be, and I'm actually surprised you're listening to this podcast if you aren't. But we'll make sure to link that. We'll definitely link the Veracode report as well. And with that, thanks, gentlemen. I appreciate you joining us and giving us such an enlightening conversation.
Jeremiah Grossman: Our pleasure. Guys, thanks very much for a good time.
Chris Wysopal: Thanks a lot.
Ed Bellis: Thanks all.