The State of Threat Intelligence w/ GreyNoise

This is a podcast episode titled, The State of Threat Intelligence w/ GreyNoise. The summary for this episode is: We chat about the state of everyone's favorite buzz technology, Threat Intelligence, with our favorite internet fingerprinter, Kenna's head of research, JCran. Joining us is a special guest: longtime pentester, infamous internet listener, and founder of GreyNoise Intelligence, Andrew Morris.
GreyNoise and the Background Noise of The Internet
01:56 MIN
Legitimate Scanning and Legality
02:07 MIN
Congrats to GreyNoise for Its Seed Round
01:10 MIN
"TI Feeds Suck, Don't @ Me"
02:06 MIN
Threat Intelligence Is Supposed to Give Humans Less Work
02:56 MIN
It's Possible to Enumerate Good Intentions
03:34 MIN
Use Case as A Vuln Intelligence Tool
03:10 MIN
Factors Leading to The Rapid Scanning of Vulnerabilities
02:25 MIN
The Internet As One Big Internal Pen Test
03:45 MIN
Security Analysts and Opportunity Costs
03:06 MIN
Quantifying Malicious
05:43 MIN
Three Things Driving The Future of Threat Intelligence
02:53 MIN

Dan: Today on Security Science, we talk about the state of threat intelligence with a special guest. Hello, and thank you for listening to the podcast. Today, we're chatting about the state of everyone's favorite buzz technology, threat intelligence. On the line I have our favorite internet fingerprinter, Kenna's head of research, JCran. How's it going?

JCran: Hey, Dan.

Dan: We also have a special guest today. Longtime pen tester, security advisor, internet listener, and founder of GreyNoise Intelligence, Andrew Morris.

Andrew Morris: Hey Dan, I'm glad to be here. Thanks for having me.

Dan: Awesome. We're excited too. We've done a previous podcast here with JCran, where he covered a tool that essentially scans and fingerprints assets and software versions for specific companies. And Andrew, GreyNoise seems to fill the exact opposite use case of that: listening to the internet to generate a baseline of "normal" traffic. Can you talk us through GreyNoise and what it is?

Andrew Morris: Yeah, absolutely. For one of a number of different reasons, there are tons of different people and organizations that are constantly scanning and crawling the internet. So that means that for every single computer that you hook up directly onto the internet, or that's directly routable on the internet, if you're running a packet sniffer, or you look at your iptables logs, or you look at anything like that, you will see traffic hitting you. Unsolicited scan, attack, and crawl traffic hitting you, even though you haven't even told anybody that this system exists yet. This happens to every single device that's routable on the internet. And it's really easy to look at this as attacks, or as someone coming out to get you. But the truth is, this is just the background noise of the internet. This is the scan and attack traffic that every single device that is sitting on the internet is constantly subjected to. That's generated by good guys, bad guys, nobody-really-knows-who guys.

JCran: Everything in between.

Andrew Morris: Everything in between, that's exactly right. And the problem with it is that if you don't have a firm handle on it, then you have to assume that everything is coming after you, specifically, and that it's an attack. And that's not true. It's not coming after you, specifically. It's not an attack. This is just what everybody on the internet is subjected to, and nobody is putting any effort into understanding internet background noise, except for us at GreyNoise, as far as I know. So, shameless plug right there, right?

Dan: Well, I mean, you should be. I mean, we're bringing you on for the expertise. And that's kind of interesting. We chatted with Jerry before on Zero Trust. Right? And you're assuming that if things are pinging, any system, technology, all that good stuff, it's probably bad unless you authorized it. Right? And so that's kind of your role, is to understand, hey, all this stuff's going to be pinging it. Not all of it's bad, but you should only let these things through. Is that correct?

Andrew Morris: Yeah. I mean, that's right. I was going to say, I mean, there's lots of different... There's a lot of legitimate reasons for organizations and individuals to scan and crawl the entire internet. What's bizarre is that there has been litigation around the legality of port scanning, and of internet scanning and things like that. Which is insane to me. Because the way that every search engine was ever born is because they scanned and crawled the internet for content. Like Google and...

Dan: I was going to say SEO, right? I don't want to block a Google scanner if I'm trying to get my blog to number one on the search ranking, right?

Andrew Morris: That's exactly right. But you... It's mind-blowing to me. Because you've got the Googles of the internet, who have been doing it for a kajillion years, for a well-established, legitimate use case. But the technology is the same thing for the researchers, and for a lot of the good guys. And sometimes, obviously, the bad guys are going to do a lot of the same thing too. And so there's ways that you can differentiate the two. But in this case, yeah. I mean, a super easy example is going to be: look, we use uptime monitoring services, like Pingdom or PagerDuty, right? And they're going to be pinging your network from one of a bunch of different places. We're going to want to do performance monitoring tests, using things like maybe Prometheus, and maybe you're going to have lots of other different services that are going to be testing your things, web assets and things on your network. And you don't necessarily always know exactly where those things are going to come from. But if you assume that everyone who's pinging your network or something like that is a bad guy, then you're going to live in constant fear for your entire life, and your security team's going to be exhausted, and alerts are going to mean nothing because everything is a red alert. And it's a bad way to live.

Dan: Got it. Well, here, before we get too far into some of the other topics, I did want to say congratulations. If anyone's listening, we're actually recording on the day that GreyNoise announced a $4.8 million seed round. So congrats on the funding, that's awesome.

Andrew Morris: Thank you so much. I'm really, really excited about the funding. It's going to enable us to do a lot of stuff that would've just taken us a really long time to do otherwise. Taking funding is always, I'm not being a wet blanket, but it's always a double-edged sword. On one hand, we have all this money to do all this cool stuff. And on the other hand, the expectations go way up, and we have to deliver with this money. And I take that really, really seriously. So you celebrate a tiny little bit, but it's kind of like a mortgage. It's not just free money. It's very expensive free money.

Dan: Absolutely.

Andrew Morris: Yeah. But thank you. I really appreciate it.

Dan: Hey, I really appreciate that you came on the show the same day that this went live. We'll make sure to link the TechCrunch article that you guys got on the podcast page itself.

Andrew Morris: Thank you. Thank you so much.

Dan: Awesome. So let's transition here a little bit. You started to dig into that, but there's a couple problems with threat intelligence today. So I assume you created GreyNoise to solve some of those. Could you kind of elucidate: what are some of the challenges with a lot of the threat intelligence feeds, and why did you create GreyNoise? And just for the audience's edification, how I got introduced to Andrew was a tweet. Here at Kenna, we've been working with threat intelligence quite frequently. We have our VI product we recently launched, all that good stuff. And I see this tweet from this guy named Andrew, which, we'll also link your Twitter on the blog page: "The vast majority of threat intel products and data feeds on the market right now are an effin' joke, don't @ me." And I was like, we should get this guy on the podcast. So that's how this all came up. I want to understand, what's the context behind that tweet?

Andrew Morris: The context behind that tweet in particular, why I was so... I've felt like that for some time, and for a host of different reasons. But the actual thing that set me off to actually write that tweet on that day was that I got access to a threat feed. And it was the threats of the day, of the threat feed. And I took everything in that threat feed and I enriched it against GreyNoise. So I just threw it in our analysis page to enrich all the IP addresses against everything that we know something about. And I found 75 IP addresses in this threat feed of, not even that many, it was a thousand that day or something, that were like: some belong to Google's crawler, some belong to Censys, some belong to Shodan, some belong to Bing. And I was just like, why are these in this threat feed? What am I supposed to do with this, if it has all this definitely good, benign stuff in here? What am I supposed to do with this? This is insane. I've had other cases before where I've seen threat feeds that included well-known public DNS servers, like Google Public DNS. Is the customer supposed to block that? What's the customer supposed to do? Is it bad if your network is talking to that? If so, how am I supposed to listen to any of this stuff? It's insane. Sorry.
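The enrichment step Andrew describes can be sketched as a simple lookup of feed IPs against a registry of known-benign scanners. The registry and addresses below are invented for illustration (RFC 5737 documentation ranges), not GreyNoise data or real netblocks:

```python
import ipaddress

# Hypothetical registry of known-benign scanner ranges (illustrative only).
BENIGN_SCANNERS = {
    "GoogleBot": ["198.51.100.0/24"],  # placeholder range (RFC 5737)
    "Censys":    ["203.0.113.0/24"],   # placeholder range (RFC 5737)
}

def enrich_feed(feed_ips):
    """Split a raw threat feed into (ip, owner) benign hits and unknowns."""
    nets = [(name, ipaddress.ip_network(cidr))
            for name, cidrs in BENIGN_SCANNERS.items()
            for cidr in cidrs]
    benign, unknown = [], []
    for ip in feed_ips:
        addr = ipaddress.ip_address(ip)
        owner = next((name for name, net in nets if addr in net), None)
        (benign if owner else unknown).append((ip, owner))
    return benign, unknown

benign, unknown = enrich_feed(["203.0.113.7", "192.0.2.55"])
```

In Andrew's anecdote, the `benign` bucket is exactly the 75 IPs that should never have shipped in the feed.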

JCran: Well, I can picture you in the room, screaming at this while you're writing this tweet.

Andrew Morris: Yes, that's literally, exactly what happened. Yeah.

JCran: And why do you think that is? I mean, is it the fact that there just isn't context from these feeds? Where, what is going wrong here?

Andrew Morris: That is a really good question. There's a lot of ways to unpack it. The surface-level thing is just going to be: guys, do some research on every single thing before... Make sure you've got some guardrails in place before you tell somebody that something's bad. You need context. And you need to make sure that you're really doing your homework on this stuff before you ship it to a customer. That's just the super surface-level thing, right? I would say that there is a bigger problem with companies that are charging based on the volume of indicators that they are handing to people, because it incentivizes the threat intelligence company to produce the...

JCran: More threats.

Andrew Morris: The large... More threats, right. To report on more threats and to deliver as many IOCs, or indicators, or threat-y indicators in their threat feed as possible, because it's going to make them more money. And maybe that's going to come from an indicator perspective, or maybe that's going to be because, in the platform, they charge based on how much you put into it. I don't know. It can be any number of those things. But I think that that is at the root of a lot of the problem.

JCran: Yeah. It makes sense. It puts more work on the analyst. I mean, there's no real disincentive to do that, but ultimately the analyst...

Andrew Morris: I was going to say that, it's their job.

JCran: Yeah. Yeah, it makes sense.

Andrew Morris: Yeah.

Dan: Well, when we're talking about trying to mitigate things at internet scale, though, right, the analysts can only, humans in general can only handle so much.

Andrew Morris: I was actually, when I said it's their job just now, I meant the threat intelligence providers. It's their job to make sure that that stuff doesn't give the analyst in the SOC more work. They're supposed to give them less work.

Dan: Yeah. That makes sense now. Well, and JCran, I'm curious to get your feedback on this too, right. We were just talking about Patch Tuesday, and diffing, and enriching kind of the volumes that occur, right? That's a whole different ball game when we're talking about external scan data, right?

JCran: Yeah, there's definitely... I mean, if you look at any typical SOC in any one of our customers, or really, any organization of significant size, those analysts are probably overwhelmed. They're probably dealing with a bunch of alerts that quite honestly don't matter, but you can't reasonably filter them out. So the analog to the Kenna problem is: focus on what matters. And without intelligence that tells you who's behind the actual alert... I would summarize what's being said as: there's not enough context to be able to filter it out, so let's put in more context. And really, the problem GreyNoise is solving is putting more context behind the actor that's driving that particular alert. And the more we can know about them, and the more we can know about their intentions and what they're trying to do, the faster we can triage and focus on what actually matters.

Andrew Morris: I agree. And the thing is, I don't want to sound flippant, like I don't understand why or how we got here, or like I have all the answers. There was a time when, when someone reached out unsolicited to a device that belongs to you on the internet, and did one of a number of different things, that was probably an indicator that they were up to no good. I'll be the first one to tell you that there was a time where that was the case. That time was long ago. And we need to adapt now. Because now there are so many people, so many organizations and researchers, that are doing good work. They have opt-out lists they're advertising. They have contact pages where they're saying: this is who I am, this is what I'm doing, this is in my user-agent, this is in my RDNS, this is in my whatever. That day has passed now. When someone talks to a device that you have on the internet, there's ways to enumerate the devices that have good intentions, the organizations that have good intentions.

JCran: And just to call out a couple of folks there, I mean, Rapid7 drove a lot of that with their policy folks, as well as the scans.io project.

Andrew Morris: I agree.

JCran: Which eventually turned into the discovery work that they do, Project Sonar. And I do the same thing with intrigue.io. And it's intended positively, and we offer a lot of that data to folks just for free, because we're trying to do research projects on the internet. And so, having the ability to say we're not doing anything harmful... And to come back to the legal question, there's a particular court case, hiQ versus LinkedIn, where, effectively, it's gone to the Supreme Court. And they've said terms of service, you can't really enforce those for public data. And a lot of these systems, and a lot of these researchers, myself included, are looking at public systems, looking at publicly-exposed services, and not doing malicious things. So it's pretty interesting to be able to go to GreyNoise and say: do we know anything about this particular actor? That's a pretty powerful thing to do.

Andrew Morris: Yeah. And they can be enumerated. It's harder and longer, it's a more difficult process to enumerate all of the known-good people that are scanning and crawling the internet. But the outcome of it is really powerful. Because you can get to the point where you can tell a customer, "Look, if anybody's talking to you other than these folks..." For example, okay, here's a good one. Say you see vuln checks on your network, or something. You see somebody checking for a vulnerability. It's like, oh man, I've got to freak out about that. But if you have the ability to say: who is vuln checking on my network that isn't a known, benign scan-and-crawl organization? That is powerful. Who's checking for BlueKeep on my network that isn't in this registry of known, good, internet-wide scanners and crawlers? And what's even more powerful than that is: who is checking for the existence of this vulnerability that's not scanning and crawling the entire internet at all? Right? Because that's someone that's coming after you. Right?

JCran: Yeah.

Andrew Morris: Forget about this known benign thing. Let's just talk about who's coming after you, specifically. Right? That is not internet- wide. That's, they're coming for you. That is valuable.

JCran: So that's an incident, right? That's a real incident that should trigger a whole bunch of work.

Andrew Morris: Yeah.

JCran: Yes. 100%.
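The triage logic in this exchange can be sketched in a few lines: an alert only becomes an incident when the source is neither a registered-benign scanner nor an opportunistic internet-wide one. The context fields below are hypothetical, not a real GreyNoise schema:

```python
# Hypothetical source context: what background-noise data tells us about
# the IP behind an alert (e.g. a BlueKeep vuln check).
def triage(source_context):
    if source_context.get("benign"):         # registered good-faith scanner
        return "suppress"
    if source_context.get("internet_wide"):  # opportunistic background noise
        return "deprioritize"
    return "incident"                        # not internet-wide: targeting you

label = triage({})  # unknown, targeted source -> the case that matters
```

The last branch is Andrew's "they're coming for you" case, the one that should trigger real incident-response work.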

Dan: Yeah, there's kind of multiple actual use cases I heard here. Right? So for example, intrigue.io, right? They're trying to fingerprint and tell you, hey, you probably have these vulnerabilities open to the internet. In a good way. That looks, to most people, if you were to just take it from a technical standpoint, like a bad thing. But it's not, and you guys can help identify, hey, you actually want to let this happen. Because you might actually want to use Intrigue, and figure out if you have this thing, and then go take care of it. And then on the converse is, hey, this one guy from this one IP is scanning for this one thing that is really bad on your systems. You should go do this immediately, right? This is something you want to pay attention to.

JCran: So this is a pretty good segue into GreyNoise as a vulnerability intelligence tool. And I know it's not the primary thing that Andrew does and that GreyNoise is putting out there. But your Twitter feed is pretty powerful. Because, oftentimes when you're seeing this sort of intelligence, you announce it on greynoise.io, and this is a pretty good tip for those of us in the vulnerability management space to go, hey, there's something that's actively being probed. And so I kind of use it as an early warning system. Like, hey, there are people out there looking for this particular vulnerability. It's time to enumerate those vulnerabilities for folks.

Andrew Morris: Yeah. So the reason we do that on Twitter is a few different things. The first thing is, it's just really powerful and really valuable for people to know, right off the bat, as soon as a vulnerability, especially a new vulnerability, starts being aggressively scanned for, crawled for, vuln checked for, and then opportunistically exploited. It's really powerful for people to know that that is happening at scale. Which basically means: hey everyone, this impacts all of us now. Right? This is now all of our problem, objectively. From a data-driven perspective, this is now objectively all of our problem. So that's really powerful. The reason that we do it for free and we don't charge any money for it is that it's hard to package that kind of thing at our size. And so I thought about how to turn that into a product offering. And I was just like, man, I don't really know. Because honestly, you're at the mercy of the vulnerabilities coming out. You're at the mercy of the bad guys. And so it's like, I don't know if this would be able to provide consistent value. I don't know how we would be able to price and package this thing.

JCran: So I'll say this. I mean, it's very similar to the feeds that we integrate into the platform. At least one of the methods we use is those sorts of notifications that a particular vulnerability has been seen through a signature, right? So you have a signature that's tied to a CVE, and that tells you, effectively, that it was used in the wild. And so, in that way, we often think of GreyNoise when we think about things that have gone very, very public. And often this happens very, very quick. I mean, the F5 bug, CVE-2020-5902.

Andrew Morris: That was a really good one because of how insanely fast it was. This was one of the fastest ones I have ever seen.

JCran: It's not going to slow down. I mean...

Andrew Morris: No, you're exactly right.

JCran: I mean, it's been increasing for years. And I think the prioritization to prediction reports have laid this story out pretty well. But what's happening here is you have these kind of scannable or wormable bugs where... And what has also happened is the tooling to scan the internet has gotten exponentially better.

Andrew Morris: And JCran, everyone and their grandma can spin up devices in the cloud that have recyclable IP addresses. I mean, IP addresses are cheap as dirt. They don't cost anything now, really. And the internet connections themselves, the internet is faster than ever. So it's the combination of those three things. The tooling's there, the recyclable IP addresses are there, and the internet isn't slow anymore. The internet's really fast now.

JCran: I would even throw in a fourth. Which is, the simplicity of these bugs is pretty real. I mean, it's oftentimes just a single HTTP request, or a single set of HTTP. And you're not looking at a memory corruption bug where you have to understand the layout of memory, understand objects...

Andrew Morris: Oh yeah, and understand the nuances of the...

JCran: Continuation, [crosstalk 00:19:21].

Andrew Morris: Yeah, it's this stack layout of all these different architectures. It's all just web requests, it's web servers.

JCran: Let me insert an admin account, I'll be right back.

Andrew Morris: Yeah, yeah, it's all, it's literally, it's all just, it's just web servers. And it's string expansion bugs. It's...

JCran: Path traversals.

Andrew Morris: Dude, path traversals.

JCran: Yep.

Andrew Morris: OS command injection. It's like a new class of bugs that's reasonably...

JCran: We thought they were dead.

Andrew Morris: Yeah, yeah. And they're back. It's reasonably universal.

JCran: Yeah.

Andrew Morris: Because everyone's building on top of web technology now. So that's just kind of how it is.

JCran: Yeah. Completely. And this sort of thing, at least with the ASAs and with other technology, I think people are being surprised by the technologies that are embedded inside these network devices. You thought they were very hardened. I mean, they were hardened, right? They've been through tons and tons of pen tests. But bugs sometimes get missed. And I think in the case of the ASA bug, which is the most recent one that comes to mind, there's like Lua embedded inside of this web VPN client. And there's another one that moved really, really quickly. Now that one hasn't been RCE'd yet, but that's another one where it's two hours from release to everybody's scanning for it.

Andrew Morris: This is going to sound really weird, but it's like, 10 years ago there was a concept of an internal pen test and an external pen test, right? For better or worse. I mean, whether that's the right or the wrong way to think about doing security testing, there's an internal pen test and an external one. And it's kind of like now the bad guys are thinking of the internet as one big, giant internal pen test. Where the name of the game is just: find the device that has the bug that you're talking about, and then pivot from there. And then maybe you get lucky and it's on a network that's really juicy. But at the end of the day, it's bug first. Find where it's all at, right? And the bugs happen. They're going to keep happening. They're not going to stop happening.

JCran: Yeah. And I would even beat the drum a little bit harder. I mean, if you're really trying to do this in a big way, you're going to pre- collect all that data and just have it available.

Andrew Morris: Oh, yeah! You're not going to scan the internet. You're just going to, you're going to run one of a number of services that's constantly grabbing the info for as many of these different services as possible so that you don't have to do that early warning thing. Or you can do as little of it as humanly possible, as the bad guy.

Dan: Let's go correlate against our cache.

Andrew Morris: Yeah. That's exactly right. And that's, what's funny is, that's how good guys do it and that's how bad guys do it too. Because it's the right way to do it. It's the smart way.

Dan: It works for both of them.

Andrew Morris: Yeah.

JCran: And so, I think that just reinforces the point that scanning is not going to be a threat. It's the actual usage of that data. That's the threat.
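The "correlate against our cache" idea from this exchange can be sketched as a lookup: when a new bug drops, query pre-collected service banners instead of scanning the internet live. The cache contents, product name, and version logic below are all invented for illustration:

```python
# Hypothetical pre-collected banner cache (the "early warning" dataset both
# good guys and bad guys maintain). IPs are RFC 5737 documentation addresses.
BANNER_CACHE = {
    "192.0.2.10": {"product": "ExampleHTTPd", "version": "2.4.1"},
    "192.0.2.11": {"product": "OtherSSHd",    "version": "8.0"},
}

def affected_hosts(product, fixed_in):
    """Return cached hosts running `product` at a version below `fixed_in`."""
    def ver(v):
        return tuple(int(part) for part in v.split("."))
    return [ip for ip, banner in BANNER_CACHE.items()
            if banner["product"] == product
            and ver(banner["version"]) < ver(fixed_in)]

hits = affected_hosts("ExampleHTTPd", "2.4.2")
```

The point of the design is speed: the expensive collection happens continuously ahead of time, so answering "who runs the vulnerable thing?" is a cache query, not a fresh internet-wide scan.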

Andrew Morris: This is where like my eyes roll into the back of my head sometimes when I'm talking to people. Where some people really do think that, when you put something on the internet, if someone comes and talks to it they're up to no good. And there's a whole other school of thinking, which is, don't put anything on the internet that you don't expect people to talk to. And then what it becomes is, okay, we have all these things on the internet. We did not realize they were on the internet. And I would like to yell at the person who made me aware that they're on the internet.

Dan: On the internet.

Andrew Morris: And my message to everyone is, that is a bad way to look at things. You should be thanking these people for finding these things that are on the internet.

Dan: Yeah, that's JCran's MO, right?

JCran: Yeah. I mean, it's using the legal system instead of using technical controls, right? And using legal threats. And that's only so effective.

Andrew Morris: You're exactly right. You're exactly right. It's not even a question of, is that the right or the wrong way to do it? It's: is it effective or not? Right? And it turns out, maybe it was at a point, but not anymore.

JCran: And nobody's saying you shouldn't use legal threats in the appropriate circumstances, but I think everybody's saying there are good technical controls now that allow you to build your networks in such a way that prevent unauthorized parties from being even able to see them. And that's where we need to go with this. I mean, that's got to be sort of the end state. And so I think everything's moving in the right direction. The incentive is generally correct for researchers to be able to do this. And the more we can do this from a security perspective, encourage researchers to be able to go out and enumerate these problems, and get them in front of organizations, it sort of sets this incentive for organizations to pull their systems to more of a zero- trust model, and that that is definitely the right direction for the vast majority of IT systems.

Dan: Yeah. Well, for me, the ship is almost already sailed, right? Collection of data, and optimizing people's pathways and time to data, right? That's the name of the game on the internet right now. Right? So to do that you need to have these scanners, you need to know where things sit, and you need to understand all that. So it just, it's already there.

JCran: It seems so logical, right?

Dan: It's whether or not you're okay with that.

JCran: Go look at AWS. I mean, it is a mess.

Dan: Well, and that's what I'm curious about. So I want to take this back around. We were talking about GreyNoise as a vulnerability intelligence tool, but you're talking about how there's so much data right now, inherently. I think where we're driving to is: threat intel feeds before were okay with just throwing all this data at you. But like you were saying, how do you productize that? Right? You can't. The value comes from providing the context, enriching that data, giving actionable intel.

Andrew Morris: Right. So here is my biggest problem with threat intel right now. My biggest problem with threat intel right now is that you can't quantify the value. They don't even try. And so, in this case in particular, it's like you can't rattle someone's cage and tell them that there's bad guys out there doing bad things, and tell them that if they buy your product, the bad thing won't happen. And the bad thing costs one kajillion-zillion dollars, so we're going to charge you 10 bucks less than that and you're safe now, right? And as long as the undesired outcome doesn't occur, then they will feel good about it. And look, I get that there is an amount of that in security just generally, right? That's definitely true of security products, a hundred percent. But you've got to at least put some effort into quantifying the value that you're providing to the customer. And that's why, for GreyNoise, it all goes back to: how much time are we saving the analyst, based on their average incident time-to-close, or ticket time-to-close, multiplied by how many analysts you have, multiplied by the regional salary of the area, or of the SOC. You end up with the exact amount of money that we're saving you. And I get that that level of mathematical soundness is not achievable with everything in security, especially as it relates to threat intelligence. But give it a shot. Put some effort into quantifying the value that you're providing to people. It's too opaque right now. And they don't even try.
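Andrew's back-of-the-envelope value math, as a sketch: time saved per suppressed alert, times alert volume, times analyst cost. All numbers below are illustrative assumptions, not GreyNoise figures:

```python
def annual_savings(alerts_suppressed_per_day, minutes_per_alert,
                   hourly_analyst_cost, workdays=260):
    """Estimate yearly dollars saved by suppressing noise alerts."""
    hours_saved = alerts_suppressed_per_day * minutes_per_alert / 60 * workdays
    return hours_saved * hourly_analyst_cost

# e.g. 40 noise alerts/day, 15 minutes each, $60/hr fully-loaded analyst cost
savings = annual_savings(40, 15, 60)  # 10 hours/day * 260 days * $60
```

Even a rough model like this gives a vendor and a customer a shared number to argue about, which is Andrew's whole point.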

JCran: Completely agree, and very similar to how Kenna does it, where basically we look at the amount of vulnerabilities that get delayed instead of fixing them right away. It's all about that... The analyst is the most expensive thing out there when it comes right down to it. I mean, the time that these folks are spending on these alerts, the more you can save with that, the better. And so I think this is really incredibly important work.

Dan: Yeah. An opportunity costs, where they could be doing something much more valuable. With all that time they're doing hygiene admin work, which is all necessary, right. But we can cut that down.

Andrew Morris: I'm not advocating for buying GreyNoise and then firing all the analysts that would have been in the SOC responding to the alerts that they don't have to respond to anymore because GreyNoise just saved them all that time. That's not it at all. There's so much work to be done elsewhere in the SOC: automating, tuning, building, hunting, all this other stuff. So, yeah, it's the opportunity cost of all the other work that they can't be doing, because they're busy getting absorbed into looking at alerts that [inaudible 00:27:24].

JCran: Non- incidents.

Andrew Morris: Non- incidents, that's right.

Dan: Well, that's actually a really good segue into these analysts, right? They should be doing something a little bit more advanced with their human skills, and we can automate a lot of this stuff. How intelligent are some of these bad actors out there? Are you seeing them modify their behavior based off of what they're trying to go after? Any insights on some of the stuff you're seeing out there?

Andrew Morris: Oh, that's very hard to answer, only because it's broad. So you're talking about the bad guys that GreyNoise sees. How smart are they?

JCran: Yeah. And ultimately, how do you quantify something as malicious? What is the thing that's causing... Because, as I understand, GreyNoise will go in and label certain IP addresses, domains, et cetera as either benign, safe, and ultimately give more context around who it is. But what is it that drives malicious?

Andrew Morris: So malicious is defined by doing any one of a number of things that we have, albeit, I don't want to call it arbitrary, but that we have defined as: this is something you shouldn't do on someone else's machine without going through a set of best-practice steps for being a good internet citizen. You need to let them know who you are. You need to let them be able to contact you, be able to opt out. And you need to come from a predictable set of ranges, or advertise who you are. If you do those things, then it's one thing. If you do one of a number of different things that only a bad guy would do, or you exhibit behavior that only a compromised device would do, on a number of different systems, without going through the steps of announcing who you are, et cetera, you're bad. You're malicious.

JCran: Nice. Interesting.

Andrew Morris: And the thing is, it doesn't actually have to be some crazy ML sort of AI. The list of bad things that an IP address can do, at the scale that we look at, is enumerable. It's not the kind of thing where you have to just throw TensorFlow at it. You can literally write out all of the cases, or many, many of them. You end up with hundreds of rules. But it's achievable. It's not rocket science.
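[Editor's note: a minimal, hypothetical sketch of the rule-based approach Andrew describes — an enumerable list of bad behaviors plus good-citizen checks, no ML required. The rule names, fields, and thresholds are illustrative assumptions, not GreyNoise's actual schema.]

```python
# Toy rule-based classifier: an IP is malicious if it exhibits any behavior
# from an enumerated deny list; benign only if it passes every good-citizen
# check (identifies itself, offers opt-out, scans from published ranges).

MALICIOUS_BEHAVIORS = {
    "telnet_default_creds",   # tried factory passwords on port 23
    "smb_exploit_attempt",    # sent an exploit-style SMB payload
    "web_shell_probe",        # requested known web-shell paths
}

GOOD_CITIZEN_CHECKS = (
    "reverse_dns_identifies_org",  # PTR record says who they are
    "opt_out_contact_published",   # a way to ask them to stop
    "announced_source_ranges",     # scans come from published ranges
)

def classify(observed_behaviors: set, citizen_attrs: set) -> str:
    """Return 'malicious', 'benign', or 'unknown' for one source IP."""
    if observed_behaviors & MALICIOUS_BEHAVIORS:
        return "malicious"
    if all(attr in citizen_attrs for attr in GOOD_CITIZEN_CHECKS):
        return "benign"  # a known, well-behaved scanner
    return "unknown"

print(classify({"telnet_default_creds"}, set()))  # malicious
print(classify(set(), set(GOOD_CITIZEN_CHECKS)))  # benign
print(classify(set(), set()))                     # unknown
```

The point of the sketch is the shape of the system: hundreds of hand-written rules like these are tractable at internet-listener scale, which is why no TensorFlow is needed.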

JCran: I have to ask the question, is there still a ton of Mirai? Is that still a thing?

Andrew Morris: So much. So much Mirai.

JCran: Didn't we end that?

Andrew Morris: One would think. There are still devices that ship vulnerable by default to the things that Mirai targets. Straight from the factory. There are still devices... I mean, not a few of them, a ton of them, that you can go and buy from whatever website, an IP camera. Plug it into your network, or plug it straight onto the internet, and it might as well come pre-infected. It will get owned in five minutes.

Dan: And just so everyone knows, Mirai is a very, very famous piece of IoT-centric malware, basically. A botnet that spreads through vulnerable devices.

JCran: And it's splintered, right? There's a bunch of different variants now.

Andrew Morris: That's right, yep.

JCran: And do you see those different variants? I'm curious, kind of like...

Andrew Morris: Yeah.

JCran: Yeah, yeah.

Andrew Morris: Yeah, I mean, and we... There are ways to fingerprint Mirai as, this is a Mirai variant, for sure. And then there are even ways to subcategorize and sub-catalog which Mirai variant it is. I'm not going to go into an insane amount of detail on it, but the best way that I can talk about that intelligently is basically, they're all programmed to look for slightly different things within a small window. And so you basically just say, "Okay, what things are you looking for?" Because they're all taking C2 orders from the same place. So you just ask, who's looking for the same stuff? Who got the same orders? Or, who has the same capabilities, right?
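[Editor's note: a hedged sketch of the grouping idea Andrew outlines — bots taking orders from the same C2 probe for the same things, so IPs with an identical "probe fingerprint" likely belong to the same variant. The IPs and paths below are illustrative examples, not real observations.]

```python
from collections import defaultdict

# Each observation is (source_ip, path_probed). IPs sharing the exact same
# set of probed paths got the same orders, so we cluster on that set.
observations = [
    ("203.0.113.5",  "/shell?cd+/tmp"),
    ("203.0.113.5",  "/GponForm/diag_Form"),
    ("198.51.100.7", "/shell?cd+/tmp"),
    ("198.51.100.7", "/GponForm/diag_Form"),
    ("192.0.2.9",    "/boaform/admin/formLogin"),
]

probes_by_ip = defaultdict(set)
for ip, path in observations:
    probes_by_ip[ip].add(path)

# Fingerprint = frozen set of probed paths; same fingerprint, same variant.
clusters = defaultdict(list)
for ip, paths in probes_by_ip.items():
    clusters[frozenset(paths)].append(ip)

for fingerprint, ips in clusters.items():
    print(sorted(ips), "->", sorted(fingerprint))
```

Here the first two IPs fall into one cluster and the third stands alone; at real scale the same grouping separates variant families by what they hunt for.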

JCran: So you can kind of see them change in real time, too.

Andrew Morris: Oh, yeah.

JCran: If they add a new path?

Andrew Morris: Yeah, yeah.

JCran: That's super interesting.

Andrew Morris: Yeah. And there's some that's automated, ongoing, spreader behavior. And then there's some that's on-demand: the botmaster just told me to look for this thing. And all of a sudden, everyone starts looking for it at the exact same time. And you're like, "Cool. I don't know anything about you other than the fact that you're all controlled by the same person, because you all just started doing the exact same thing at the exact same time." Which actually is really cool, especially when you're trying to tie different families together, and you're trying to tie who's controlled by the same actor. Or, my IP lawyer is about to come in here, kick the door down, and tell me to shut up too. So I'm not going to go too much further on it.
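[Editor's note: the temporal signal Andrew describes can also be sketched. If many previously unrelated IPs begin probing the same path within the same short window, one operator likely issued the order. The window size, threshold, IPs, and timestamps here are arbitrary illustrations.]

```python
from collections import defaultdict

WINDOW = 60  # seconds; bucket "first seen probing X" times into minutes

events = [  # (ip, probed_path, unix timestamp of first sighting)
    ("203.0.113.5",  "/new-exploit", 1_700_000_005),
    ("198.51.100.7", "/new-exploit", 1_700_000_031),
    ("192.0.2.9",    "/new-exploit", 1_700_000_038),
    ("192.0.2.44",   "/old-path",    1_700_003_600),
]

buckets = defaultdict(set)
for ip, path, ts in events:
    buckets[(path, ts // WINDOW)].add(ip)

# Flag any path that several distinct IPs started probing in the same window:
# simultaneous onset is the tell that they share a controller.
bursts = {key: ips for key, ips in buckets.items() if len(ips) >= 3}
for (path, bucket), ips in bursts.items():
    print(f"likely coordinated: {len(ips)} IPs began probing {path} together")
```

Three IPs starting on `/new-exploit` within one minute get flagged; the lone `/old-path` scanner does not. This is the "you all just started doing the exact same thing at the exact same time" heuristic in miniature.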

JCran: Yeah, yeah. It makes sense. Well, this is really good context and really good detail, and I'm really excited to see more of it come out. And as you talk through vulnerabilities on the GreyNoise Twitter account, keep them coming. That's useful intelligence for Kenna customers, but pretty much everybody in the space can benefit from that sort of intelligence.

Andrew Morris: Yeah, it's always good when somebody who has no skin in the game is objective... Because the Twitter, obviously, doesn't just go to our customers. It goes to thousands of people. So it's like, we have no skin in the game for any of these people, other than building a good reputation and making sure that people trust us. And if they click the link and they trial the product and they buy it, maybe we'll make a little bit of money. But we do that specifically because it's like, look, we're objective. All we do is look at internet-wide scanning attack traffic, and when we see something that we feel is relevant to be told to everybody, we're going to do that. And we don't have a product for that specific use case yet. So we'll just keep doing it. And a lot of people find a lot of value in that.

Dan: Very, very cool. So this has been an awesome conversation. I kind of want to tie things together at the end with, what do you see for this future of threat intelligence? What is an ideal state for you, and what's it going to take to get there?

Andrew Morris: Oh, that's a really good question. Two big things jump out at me right off the top. Actually, I have three. Three big things jump out at me right off the top of my head. One, we need to see more of a focus on allow lists, and on good. Everyone's trying to enumerate the threats. But the threats multiply, and you're eventually going to get to a point where nobody's trying to enumerate the good, and that is really, really important. And that is just as much threat intelligence as the bad guys are threat intelligence. You're just thinking about it the exact other way around. You can't know if something is bad without having a reasonably firm handle on what's good or what's expected.

Dan: And that's kind of the core tenet of Zero Trust, right? Let's know what's a good thing to let communicate.

Andrew Morris: Yeah. I actually, I haven't done my homework on what Zero Trust means to a lot of people right now, so I couldn't tell you.

Dan: I just learned it three weeks ago, it's all good.

Andrew Morris: I'll do my best. So I would say, bigger enumeration of what's good, what's expected. Right? What should be allowed? Because everyone already has an internal process for that anyway. The second thing is, I want to see more marriage between vulnerability and threat. Which, obviously, everybody in this room right now is like, yes.

JCran: Yes, please.

Andrew Morris: Right. But the idea of atomic risk ratings for a threat just needs to die. Because the rating of a vulnerability depends on how prevalent that vulnerability is in your organization, and on if and when bad guys are opportunistically exploiting it. You just can't look at a vulnerability in a vacuum. And I know that you have to for certain things, to figure out how bad it could be. But we need to know how bad it is. Right? So I would say, more of a focus on that. And I'm not just saying that because I know that you guys do a lot of that.

Dan: I was going to say, you're kind of preaching to the choir. I'm blushing a little bit.

Andrew Morris: Well yeah, I know. We're all in here patting ourselves on the back. Like, we all got it, pack it up boys. But no. So then the third thing is just more collaboration. We just need to see more organizations sharing data. And that's the for-profit companies. That's the SOCs. That's the organizations. That's the ISACs, right? I mean, I don't know. We need to see more collaboration. We need to see more open dialogue, more sharing of information. Because everybody wins when we do that. Right? And if we define the good outcome as preventing bad things from happening, or staying on top of the threats as an industry, then we've got to work together better. We've got to collaborate more. We absolutely have to.

JCran: Completely agree.

Dan: Totally agree there. Well, Andrew, thank you so much for joining us. Again, congratulations on the seed round, that is amazing.

Andrew Morris: Thank you very much.

Dan: Anyone listening, I will put links in the podcast show notes, along with the blog that we'll use to promote this. Go follow GreyNoiseIO on Twitter. You should go follow Andrew too, if you don't mind some cuss words in his tweets. They're very entertaining. Go check it out. We'll link to GreyNoise.io and any relevant content as well. Thanks for listening, everyone. Have a nice day.
