Security Unlocked


Unpacking the New ML Threat Matrix

Ep. 9

Yeehaw! “Data Cowboy” is in the building. Join us as Nic Fillingham and Natalia Godyla sit down with Ram Shankar Siva Kumar, aka “Data Cowboy” at Microsoft, for an exciting conversation about the release of a new adversarial ML threat matrix created for security analysts. Have no fear, we made sure to find out how Ram acquired the name, “Data Cowboy”, so saddle up and get ready for the ride!


Stick around to hear Nic and Natalia explore the urgency of surfacing threats at a faster rate with Justin Carroll, a Threat Analyst at Microsoft, and why it is more important now than ever before.


In This Episode, You Will Learn: 

• How Microsoft is using the new ML threat matrix against cyber attacks 

• The approach and philosophy for putting the threat matrix on GitHub  

• ML applications in healthcare and why they can be worrisome

• What needs to happen in order to be successful in combating certain threats   


Some Questions We Ask:

• What is an adversarial ML threat matrix? 

• How will the community on GitHub contribute to the evolution of the ML threat matrix? 

• What resources are available to learn about all things VM? 

• What techniques are being used to find threats at a faster speed? 

• How do AI and ML factor into the role of managing data and collaborating with other teams? 


Resources

Ram’s Blog:  

https://www.microsoft.com/security/blog/2020/10/22/cyberattacks-against-machine-learning-systems-are-more-common-than-you-think/ 


Microsoft Security Blog:  

https://www.microsoft.com/security/blog/ 


Nic’s LinkedIn 

https://www.linkedin.com/in/nicfill/ 


Natalia’s LinkedIn 

https://www.linkedin.com/in/nataliagodyla/  


Ram’s LinkedIn 

https://www.linkedin.com/in/ram-shankar-siva-kumar-7b04a73a/ 


Justin’s LinkedIn 

https://www.linkedin.com/in/justin-carroll-20616574/  


Related:

Security Unlocked: CISO Series with Bret Arsenault

https://SecurityUnlockedCISOSeries.com


Transcript

(Full transcript can be found at https://aka.ms/SecurityUnlockedEp09)


Nic Fillingham:

Hello, and welcome to Security Unlocked, a new podcast from Microsoft where we unlock insights from the latest in news and research from across Microsoft security engineering and operations teams. I'm Nic Fillingham.


Natalia Godyla:

And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft security, deep dive into the newest threat intel, research and data science.


Nic Fillingham:

And profile some of the fascinating people working on artificial intelligence in Microsoft security. If you enjoy the podcast, have a request for a topic you'd like covered, or have some feedback on how we can make the podcast better.


Natalia Godyla:

Please contact us at securityunlocked@microsoft.com or via Microsoft security on Twitter. We'd love to hear from you. Hi Nic. Welcome back. How were your holidays?


Nic Fillingham:

Yes. Thank you, Natalia. Welcome back to you as well. Mine were great. You know, normally you drive somewhere or you fly somewhere, you go visit people, but this was all the FaceTimes and the Zooms and the Skypes, staycation, but it was still nice to eat too much and drink too much over the holiday period. How about you?


Natalia Godyla:

Yes, it was... to quote my boss. "It was vegetative." It was definitely just... well actually you know what? I did have a big moment over the holidays. I got engaged.


Nic Fillingham:

Oh, what!


Natalia Godyla:

I know.


Nic Fillingham:

Congratulations.


Natalia Godyla:

Thanks.


Nic Fillingham:

That's amazing.


Natalia Godyla:

I feel like it was absolute relaxation, really high point during the five minute proposal. And then we went back to our natural state and just absolute relaxation, lots of video games.


Nic Fillingham:

Hang on. So were you both sitting on the couch, playing some switch, eating your 95th packet of Doritos, and then all of a sudden your partner pauses and says, "You want to get hitched?"


Natalia Godyla:

There was a little bit more pomp and circumstance to it. Though I think that would have been very fitting for us.


Nic Fillingham:

Wow! Good on you guys. That's awesome.


Natalia Godyla:

I'm sure that, like us, everyone has forgotten what they were doing at work, and I'm sure also what this podcast is doing. So why don't we give everyone an after-the-holidays refresher?


Nic Fillingham:

So just before the holidays, we partnered with Petri, who run the Petri.com site, Thurrott.com, First Ring Daily, and a bunch of other great blogs, podcasts, and email newsletters. So welcome to all our new listeners who've come to us from Petri, from Thurrott, from First Ring Daily. Yeah. So what is Security Unlocked? Well, first and foremost, Natalia and I, your co-hosts, are Microsoft employees, and we will be interviewing, and we do interview on this podcast, other Microsoft employees, but we talk about security topics that hopefully are relevant to all security professionals and those who are interested in the state of cybersecurity.


Nic Fillingham:

And what we'll do in each episode is the first half is we'll pick a sort of a recent ish topic and we'll speak to a subject matter expert or an author of a recent blog post and ask them about the thing that they're working on, or that they've announced in the AI and ML space, hopefully try and demystify some new terms or concepts that may be either nascent or sort of difficult to wrap one's head around. And then in the second half...


Natalia Godyla:

We talk to again, another Microsoft security expert, this time more focused on the individual and their path to cybersecurity. So we'll ask them about what interested them about cyber security, what compelled them to join the industry, what jobs they've had, how they've come to Microsoft or their current role. In addition, we also have a new announcement about the podcast, which is we'll be switching to a weekly cadence. So prior to this, we were bi-weekly, now more goodness coming your way.


Nic Fillingham:

More pod in your pod app. What is the collective receptacle for pod? What is it? More pods in your cast, more cast in your pod?


Natalia Godyla:

More beans in your pod.


Nic Fillingham:

I like that. More beans in your pod. And I think the other thing that's worth reiterating Natalia is if you have a cyber-security topic you would love to learn more about, or a perspective you'd like to hear from, please let us know, we'll go after it for you and try and bring that to a future episode.


Natalia Godyla:

Yes, absolutely. We're really thankful to everyone who has reached out thus far and just keep it coming.


Nic Fillingham:

On today's episode in the first segment, which we call our deep dive, we speak with Ram Shankar Siva Kumar, whose title I will not give away in the intro because we talk about it in the conversation. And it's an awesome one. Ram works in the Azure Trustworthy ML team. And he's here to talk to us about a blog post that Ram co-authored with Ann Johnson that announces a new adversarial ML threat matrix that has been built and published up on GitHub as a collaboration between Microsoft, MITRE, IBM, Nvidia, Bosch, a bunch of other organizations as a sort of open source approach to this upcoming sort of nascent threat category in adversarial machine learning. And it was a great conversation. And then after that, we speak with...


Natalia Godyla:

Justin Carroll of the Microsoft Threat Intelligence Global Engagement and Response team. He started in networking, very much on the ground, and only got his education in cybersecurity later in his career, which I think says to anybody out there who's looking to transition to security, who comes from a different background and is wondering whether they can make it: you can. He also chats a little bit about what inspired him to join cybersecurity. Some of it came from video games, which is a theme we're seeing again and again.


Natalia Godyla:

So he had a unique spin on vigilantism within video games, ensuring that those who had an unfair advantage from using mods were checked, and trying to level the playing field for all the rest of the players of that game. And of course we touch on Ninja Turtles, which is really the highlight of the episode, I think. With that, on with the pod.


Nic Fillingham:

Ram Shankar Siva Kumar, thank you for joining us on Security Unlocked.


Ram Shankar Siva Kumar:

Hey, thanks for having me, Nic and Natalia. Really appreciate it.


Nic Fillingham:

So we're going to talk about a blog post that you co-authored with the wonderful Ann Johnson. The title is, it's a great title. I'll get straight to the point. Cyber attacks against machine learning systems are more common than you think. Before we get into that, though, I just have to ask, you list your title as data cowboy, which is fantastic. I would love data cowboy, anything cowboy. I would love that for my title. Could you explain to people, what does a data cowboy do and what is the Azure Trustworthy ML group?


Ram Shankar Siva Kumar:

Oh, totally. First of all, this is like every kid's dream is to be Woody from Toy Story. It's just like, I realize it in my own way. So when I joined Microsoft in 2013, there really wasn't an ML engineer position. So my boss was like, "You can be whatever you want. You can pick your own title." I was like, "Yes, Toy Story comes to life." So it was like, this is a brown version of this Woody that you kind of get. So basically what the Trustworthy Machine Learning group does is our promise to Microsoft is to essentially ensure we can enable engineers and customers to develop and deploy ML systems securely. So it's kind of a broad promise that we make to Microsoft and our customers.


Nic Fillingham:

Got it. I would love to come back to just the data cowboy one more time. Tell me what you do. I mean, I have visions of you riding around the office on a hobby horse. Lassoing errant databases. Tell us about your day to day. What does it look like?


Ram Shankar Siva Kumar:

Yeah. So what really happens is that, like I said, I really wish I could ride it around my office; right now I am at my home in my 500-square-foot apartment, definitely not recommended. But what we end up doing most of the time is this: the wonderful Hyrum Anderson, who's part of our team, is militantly looking at how we can detect attacks on machine learning systems. So I'm really working with him and the rest of the Microsoft community to keep our eyes and ears on the ground, see what sort of attacks on machine learning systems we are seeing across our various different channels, and figure out how we can detect and respond to and remediate those sorts of attacks. So that's the first big one. The second thing is that I get to work with the wonderful Will Pearce. I get to work with him to think about actively attacking, red teaming, Microsoft's machine learning systems. So even before attackers can find and exploit the vulnerabilities, Will and Hyrum go and actively attack Microsoft ML systems.


Natalia Godyla:

So how does the work you do connect to the different product groups? As you're identifying these cyber attacks, are you then partnering with our products to build those into the detections?


Ram Shankar Siva Kumar:

Yeah, that's a great question. So one of the things I really like about Microsoft is that it's super easy to meet with somebody from another product team. The amazing Mira Lane, who heads Azure Cognitive Services, we work very closely with her. And I believe you've had Holly Stewart on your podcast as well, so we work very closely with her team too. So it's really a big partnership, working with leaders from across Microsoft, shopping around what we're doing and seeing how we can help them and also learn from them, because they have sensors that we might not necessarily have.


Nic Fillingham:

Let's talk about this blog post. So you and Ann both announced this really interesting sort of consortium of 11 organizations, and you're releasing an adversarial ML threat matrix. It's open source, it's on GitHub. Very exciting. Tell us about it.


Ram Shankar Siva Kumar:

So the goal of the adversarial ML threat matrix is essentially to empower the security analyst community so that they can start thinking about building detections and updating their response playbooks in the context of protecting ML systems. And one of the ways we wanted to be mindfully different is that, of all the attack techniques we could have seeded this framework with, we only put in the ones that Microsoft and MITRE jointly vetted as effective against production machine learning systems.


Ram Shankar Siva Kumar:

So first of all, the whole area of attacking machine learning systems goes all the way back to 2004. In fact, you can find Daniel Lowd, whose Twitter handle is dlowd, on Twitter today. He continues to work on this super cool field, and there's a wonderful timeline by another researcher, Battista Biggio, that we also linked to in the blog. You can basically see that this work has gotten immense academic interest for the last 16 years, and especially in the last four years after a very seminal paper was released in 2014.


Ram Shankar Siva Kumar:

So when a lot of people think about this field, they think, oh, this is something that is really theoretical. This is something that... oh great, you're working in an academic setting. But no, that's not true. There are marquee companies who've all had their ML systems subverted for fun and profit. So the whole point of this blog post with MITRE and this whole corpus of industry organizations was: this is real. Attacks on machine learning systems are real, and you need to start thinking about this.


Ram Shankar Siva Kumar:

Gartner released a report in 2019 saying 30% of all cyber attacks in 2022 are going to involve attacks on machine learning systems. So this is not pie in the sky, "oh, I'll get to it when I get to it." 2022 is not a year and a half away, it's a year away from now. So we got together in this blog post to really empower our security analyst community and help them orient to these new threats.


Natalia Godyla:

Can you talk a little bit more about what exactly is the adversarial ML threat matrix and how you envision security analysts using this tool?


Ram Shankar Siva Kumar:

Yeah, totally. So before we even put this matrix together, we conducted a survey of 28 organizations. We spoke to everybody from SMBs to governments to large organizations, and we spoke to the security analyst persona as well as the ML engineer persona. We asked them, "Hey, how do you think about securing ML systems? This is a big deal. What are you doing about it?" And they were like, "Well, we don't have the tools and processes in place to actually go and fix these problems." So the first thing we realized is that we wanted the security analyst community to be introduced to adversarial ML as a field, and to condense the work that's happening into a framework that they already know. Because the last thing we want to do is put yet another framework, another toolkit, on their plate.


Ram Shankar Siva Kumar:

And they're just going to be like, "Nope, this is not going to work out. This is one more thing for them to learn." So we took MITRE's ATT&CK framework, something that is, again, bread and butter for any security analyst today, and we said, "Hey, wouldn't it be really cool if we took all the ML attacks and put them in this framework?" And that's exactly what we did. So if you look at our threat matrix, it's modeled after the MITRE ATT&CK framework.


Ram Shankar Siva Kumar:

So the wonderful folks from MITRE's ML research team and us, we got together and we basically aligned the attacks on machine learning systems along reconnaissance, persistence, model evasion, exfiltration. So if you look at the top of our matrix, the column headers are essentially tactics and the individual entries are techniques.


Ram Shankar Siva Kumar:

So let's say that an attacker wants to gain initial access to a machine learning subsystem, let's say that's her goal. She has a couple of options to execute her goal, a couple of techniques in her kit. The first thing is that she can just send a phishing email to an ML engineer. That's very valid. Phishing is not going to go away. The second thing she can do is take a publicly available pre-trained ML model that people generally download, and backdoor it. So the whole point of this threat matrix is to build a common corpus of attack techniques and tactics in a framework that a security analyst already has knowledge of.
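To make that structure a little more concrete for readers, here is a tiny illustrative sketch in Python of the "tactics as columns, techniques underneath" idea Ram describes. The tactic and technique names below are paraphrased from the conversation; they are not the actual schema or wording of the GitHub matrix.

```python
# Illustrative only: a hand-written slice of the "tactics as columns,
# techniques underneath" structure described above. Names are paraphrased
# from the conversation, not copied from the real adversarial ML threat matrix.
adversarial_ml_matrix = {
    "Reconnaissance": [
        "Study the victim's public ML research and artifacts",
    ],
    "Initial Access": [
        "Phish an ML engineer",
        "Backdoor a publicly available pre-trained model",
    ],
    "Exfiltration": [
        "Replicate the model by querying it and training a copy",
    ],
}

# Print each tactic (column header) with its techniques underneath.
for tactic, techniques in adversarial_ml_matrix.items():
    print(tactic)
    for technique in techniques:
        print(f"  - {technique}")
```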


Natalia Godyla:

Are you seeing any trends? What's the most common combination?


Ram Shankar Siva Kumar:

Oh, that's a great question. So before I just step into this, I first want to tell you about this attack called model replication. So the easy way to think about this and Natalia, I will get to this, I promise.


Natalia Godyla:

I love the excitement. I'm so ready for it.


Ram Shankar Siva Kumar:

We're going to take a little detour, like Virgil and Homer. So essentially, the best way to think about model replication is this: OpenAI is a very famous ML startup, and last year they released a model called GPT-2. They said, "Hey, you know what? We're not going to release the entire model immediately. We're going to release it in a staged process, because we want to do our own verification." And before they could release the entire model, these spunky researchers, I love that, they're so cool, Vanya Cohen and another person who goes by Skylion with an O, they replicated GPT-2, which was like a 1.5 billion parameter model, and they released it on the internet, on Twitter. They call it OpenGPT-2, and I love their tagline, which is "GPT-2 of equal or lower value."


Ram Shankar Siva Kumar:

So even before the company could release it, they replicated the ML model based on the data sets that were available and on what was known about the architecture. At the end of the day, and we also reference this in our case study, they basically tweaked an existing model to match GPT-2 and published that for everybody to use. Now, it does not have the same accuracy or the same metrics as the original GPT-2 model. But the fact that an attacker can even replicate an ML model using publicly available data sets and some insight into the architecture is something for people to think about.


Ram Shankar Siva Kumar:

So now, to come back to your excellent question: what exactly is a common pattern? What we essentially see attackers doing is that they go interact with the machine learning system. They might send some data, they might get some responses back, and they keep doing that enough times that they now have sufficient data to replicate the ML model. So the first step is that they go and replicate the ML model, and with the replicated model they do an offline attack. Because now they have their own ML model, they try to evade it, and they find a way to evade it. Then they take the test points that evade their offline model and use them against the online, real ML model that's out there, and boom, they fool the real online ML model. So that's a common pattern, and there are three case studies on our adversarial ML GitHub page that actually show this.
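For readers who want to see that replicate-then-evade pattern in code, here is a minimal, hedged sketch in Python. The "victim" is a toy scikit-learn classifier standing in for a production model that an attacker could only query through an API, and the single-step perturbation is a deliberately crude stand-in for real evasion techniques; it does not reflect the actual case studies in the repository.

```python
# A rough sketch of the replicate-then-evade pattern described above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# The "online" model an attacker can only query, not inspect.
X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
victim = LogisticRegression(max_iter=1000).fit(X, y)

# Step 1: replicate. Send queries, record the victim's answers, fit a surrogate.
queries = rng.normal(size=(1000, 20))
answers = victim.predict(queries)
surrogate = LogisticRegression(max_iter=1000).fit(queries, answers)

# Step 2: evade offline. Nudge positive samples against the surrogate's
# decision boundary until the surrogate is fooled (a crude perturbation).
direction = surrogate.coef_[0] / np.linalg.norm(surrogate.coef_[0])
positives = X[victim.predict(X) == 1][:50]
adversarial = positives - 2.5 * direction   # step away from the "positive" side

# Step 3: transfer. How many perturbed points now fool the real online model?
success = (victim.predict(adversarial) == 0).mean()
print(f"evasion success against the online model: {success:.0%}")
```

The design point the sketch illustrates is that the attacker never needs the victim's weights: query access plus a surrogate is often enough for the offline evasion to transfer.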


Nic Fillingham:

So the sort of takeaway from that: if your data set is public, don't make your ML architecture public, and vice versa?


Ram Shankar Siva Kumar:

That's a great question. And I've been thinking about this a lot. First of all, we definitely want to be transparent about the way we build our ML models, right? Marcus Sanovich, oh gosh, he's such an amazing guy, for the last so many years at RSA has been militantly talking about how we build our ML models for security purposes, because we want to give our customers insight into how we actually build ML models. And machine learning as a field has norms of opening up data sets. In fact, one can attribute the entire deep learning revolution to Dr. Fei-Fei Li's ImageNet dataset, which really sparked this whole revolution. So I really don't want anybody to think that being open with our data sets or being open with our ML platforms is a bad idea.


Ram Shankar Siva Kumar:

Because even if you think of traditional cybersecurity, right? Security by obscurity is never a good strategy. So the way we want to push people to think is: how are you thinking about detection? How are you thinking about response? How are you thinking about remediation? Really taking the assume-breach mindset and feeding it into your ML systems is where we want to push the field. So if you take away anything from this, it's: continue opening your systems for scrutiny, because that's the right thing to do, that's the norm we've set, and that's important to advance research in this field; and think about detection strategies, and assume-breach strategies, for building ML systems.


Ram Shankar Siva Kumar:

We wanted to distinguish between traditional attacks and attacks on ML systems. So one thing to keep in mind is that the threat matrix contains both traditional attacks and attacks on ML systems, whereas the taxonomy only contains attacks on ML systems. The second difference is that, like I said, the matrix is meant for security analysts; the taxonomy is meant for policymakers and engineers. The third, and more important, difference is that in the threat matrix we are only putting in attacks that we have validated against commercial ML systems. It's not a laundry list of attacks. We're not trying to taxonomize.


Nic Fillingham:

I wonder if you could talk about the approach and the philosophy here for putting this on GitHub and making it open to the community. How do you hope folks will contribute? How would you like them to contribute?


Ram Shankar Siva Kumar:

Yeah, absolutely. So Mikel Rodriguez, who runs the MITRE team we collaborated with, a wonderful team over there... before putting this out on GitHub, there was a little bit of angst, right? Because this is not a fully baked product. This is something that 13 organizations found useful, but that doesn't mean everybody in the community will find it useful. And I think he said something to the effect of-


Nic Fillingham:

It's almost as if you're a cowboy.


Ram Shankar Siva Kumar:

Yeah. There you go, herding people. It was like, we're putting this out, acknowledging this is a first cut attempt. This is a living document. This is something that we have found useful as 13 organizations, but we really are hoping to get feedback from the community. So if you're listening to this podcast and you're excited about this, please come and contribute to this matrix. If you think there are attacks that are missing, if you would like to spotlight a case study on a commercial ML system, we are super looking to get feedback on this.


Ram Shankar Siva Kumar:

And we also kind of realized that we wanted almost a safe space to talk about attacks on ML systems. So we were like, you know what? We're just going to have a little Google group. And the membership of the Google group is extremely diverse. You've got philosophers who are interested in adversarial machine learning. We've got people looking at this from various perspectives, joining our Google group and giving us feedback on how we can make it better.


Natalia Godyla:

Yeah. As you mentioned, there are tons of different perspectives coming into play here. So how do you envision the different roles within the community interacting? What do you think needs to happen for us to be successful in combating these threats?


Ram Shankar Siva Kumar:

Yeah. This is a great question. The one thing that I've learned is that this topic is immensely complex. It's mind-boggling to wrap your head around the different personas here. So I'll just give you a rundown, right? So far we know that policymakers are interested in securing ML systems, because every national AI strategy out there says securing ML systems is a top priority. ML engineers are thinking about this, and academic researchers too; there were like 2,000 papers published on this topic in the last, I want to say, five or six years, so that's a hotbed of research we want to rope in. We've got security analysts from the companies we're talking to who are interested. CISOs are also thinking about this, because this is a new threat for them, so as business decision makers, how should they think about this?


Ram Shankar Siva Kumar:

One thing I got the opportunity to do with Frank Nagle, who's a professor at HBS: we wrote a piece in Harvard Business Review asking, is it time to insure ML systems? ML systems are failing, so if your ML-powered vacuum cleaner burns a home down, what do you do about it? We're trying to rope in the insurers to come and participate in this too. So, Natalia, this is such a green field, and the only way we're going to get ahead is to really get people excited and strive for clarity together as a community.


Nic Fillingham:

How would an ML powered vacuum cleaner work?


Natalia Godyla:

I was going to say, that sounds like a 2020 headline: ML-powered vacuum cleaner burns down house.


Ram Shankar Siva Kumar:

Oh my gosh. So, okay-


Nic Fillingham:

Man bites dog.


Ram Shankar Siva Kumar:

There you go. It's funny because this was not an example that I made up. I wish I did. I know. Yes, Nic. I see, yes.


Nic Fillingham:

What?


Ram Shankar Siva Kumar:

Yes.


Nic Fillingham:

All right.


Ram Shankar Siva Kumar:

There's a well-documented paper called "Concrete Problems in AI Safety," and they walk through the worst of it. It's like Final Fantasy: everything that can go wrong is going wrong. They're like robots that are burning down homes, breaking the things they're supposed to clean up. So if your machine learning system is not trustworthy, there are going to be problems. And you really need to think about that.


Nic Fillingham:

I can't even get my kettle to boil.


Ram Shankar Siva Kumar:

But the thing that really worries me is ML applications used in health care. You keep seeing headlines about machine learning systems being used by radiologists, assisting radiologists, when it comes to identifying malignant tumors and things like that. There's fantastic work by Samuel Finlayson from Harvard. He showed that if you take an X-ray image, just slightly rotate it, and give it to the ML system, it goes from very confidently thinking that it's malignant to very confidently judging that it's benign. And that is really scary.
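For readers who want to see what that kind of fragility check looks like, here is a small hedged sketch in Python. It trains a toy classifier on synthetic "scans" and sweeps small rotations of a single image to watch the confidence move; Finlayson's actual work targets large medical-imaging networks, so this only illustrates the shape of the test, not his results.

```python
# A toy rotation-robustness sweep: train on upright synthetic images only,
# then watch the model's confidence as one image is rotated.
import numpy as np
from scipy.ndimage import rotate
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_scan(positive):
    """Synthetic 16x16 'scan': positive cases have a bright blob near one corner."""
    img = rng.normal(0.0, 0.1, size=(16, 16))
    if positive:
        img[2:6, 2:6] += 1.0
    return img

labels = np.array([0, 1] * 200)
X = np.array([make_scan(bool(lab)).ravel() for lab in labels])
clf = LogisticRegression(max_iter=2000).fit(X, labels)

# Sweep small rotations of one positive image and report the confidence.
scan = make_scan(True)
for angle in (0, 10, 20, 30, 45):
    rotated = rotate(scan, angle, reshape=False, mode="nearest")
    p = clf.predict_proba(rotated.ravel().reshape(1, -1))[0, 1]
    print(f"rotated {angle:>2} degrees -> P(positive) = {p:.2f}")
```

Because the toy model only ever saw upright images, its confidence degrades as the rotation grows, which is the fragility pattern being described.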


Ram Shankar Siva Kumar:

In the beginning of the podcast, we spoke a lot about how an adversary can subvert machine learning systems for fun and profit. Oh boy, there is an entirely separate world of how machine learning systems can fail by themselves, what we call unintentional failure modes. And trust me, you will want to go live in a cabin in the middle of the North Cascades after you read that work. You'll be like, I am not getting anything ML-powered until they figure this out. But the good news is there are extremely smart people, including Hyrum and Will from my team, who are looking into this problem. So you can feel a little bit assured that the true Avengers are out there.


Natalia Godyla:

I love all the head nods from Nic. I feel like it underscores the fact that we only know a fraction of what there is to know about ML. So we just need a community behind this. No one company or person can know all of it.


Ram Shankar Siva Kumar:

Absolutely. Oh my gosh. Yeah. When we opened the adversarial ML threat matrix Google group, we felt like nobody was going to join. It was going to be like a pity party where I email Mikel from MITRE and he responds back to me. But no, we went from zero to 150 members over just the last four days.


Natalia Godyla:

Ram, thank you for giving us all of this context on the adversarial ML threat matrix. So what's Microsoft's continued role. What's next for you in ML?


Ram Shankar Siva Kumar:

First of all, we are hiring. So if you'd like to come and join us, we are looking for developers to come and join us in this quest. So please email anybody, even Nic, and he can forward your resume.


Nic Fillingham:

Do you need to have a cowboy hat? Is a cowboy hat a necessity?


Ram Shankar Siva Kumar:

Not at all. We will accept you for who you are.


Natalia Godyla:

Do you provide the cowboy hats?


Ram Shankar Siva Kumar:

We will provide everything. Anything to make you feel comfortable. So we are growing and we'd love to work with folks. With the adversarial ML threat matrix, like I said, we really are looking for feedback from the community. We really think, like Natalia very correctly pointed out, that this is a problem so big we can only solve it if we all come together. So please go to our GitHub page; I'm sure Nic and Natalia will put up the link to it. We'd love to get your feedback.


Ram Shankar Siva Kumar:

The second thing is, we are especially looking for people to come in and contribute case studies. If you think we're missing a tactic, or if you think you've seen an attack on an ML system, on a commercial ML system, please reach out to us and we'd be happy to include that in the repository.


Nic Fillingham:

If your autonomous vacuum cleaner has attempted to undermine democracy, let us know.


Ram Shankar Siva Kumar:

And the one thing that I want everybody to take away is that when we did our survey, 25 out of 28 organizations did not have tools and processes in place to secure their ML systems. So if you're listening to this podcast and you're like, "Oh my gosh, I don't have guidance," do not feel alarmed. You're tracking with the majority of the industry. In fact, only three organizations in our survey, all of them large, had even thought about this problem. So there are tools and processes for you that we've put out. In our docs at Microsoft.com there's threat modeling guidance, there's a taxonomy, there's a bug bar that you can give to your incident responders so that they can track bugs. And for the security analyst community, there is the adversarial ML threat matrix. So please go read them and please give us feedback, because we really want to grow.


Natalia Godyla:

I love it. Thank you for that. That's a great message to end on.


Ram Shankar Siva Kumar:

Awesome. Thank you, Nic and Natalia for having me. Really appreciate it. This was really fun.


Natalia Godyla:

And now let's meet an expert in the Microsoft security team to learn more about the diverse backgrounds and experiences of the humans creating AI and tech at Microsoft. Today we're joined by Justin Carroll, a threat analyst on the Microsoft Threat Intelligence Global Engagement and Response team. Well, thank you for joining us, Justin.


Justin Carroll:

Thanks for having me.


Natalia Godyla:

Well, can we kick things off with you just sharing your role at Microsoft? What does your day-to-day look like?


Justin Carroll:

So my role is related to threat hunting across large data sets to find advanced adversaries and understand what they're doing. Look for detection opportunities and communicate out the behaviors of the specific threats that we're finding to partner teams or to our customers to help them understand the threat landscape and kind of staying on top of what attackers are doing.


Natalia Godyla:

That's super interesting. And can you talk a little bit about any recent patterns that you've identified or interesting findings in your last six, eight months?


Justin Carroll:

Well, it's been a busy six or eight months, I would say, because everybody's been very busy with COVID. We've been seeing quite a large increase in human-operated ransomware and stuff like that. So I've been working really hard to try and figure out different ways to try and surface their behaviors as early as we can to customers to help them take action before the ransom happens. And we've been seeing quite a few other different really advanced adversaries compromising networks.


Justin Carroll:

A lot of it's kind of the same old, same old, just more of it, but it's always interesting and there's never a shortage of new findings each day and kind of moments of, "Oh, that looks like this, or they're doing this now." Awesome. Great.


Natalia Godyla:

You mentioned you're constantly trying to find new ways to identify these faster. What are the techniques that you're trying to use to find the threats quicker?


Justin Carroll:

There's a whole bunch of different ways that you try to surface the threats quicker. Some of it's research and reading other people's work and blogs and stuff like that. I tend to live in the data most of all, where I'm constantly looking at existing attacks and then trying to find similar related behaviors or payloads or infrastructure, and pivoting on those to try and find the attack and be ready to find it as early as possible in what's called the kill chain.


Justin Carroll:

So from the time that the attacker gets in the network, how quickly can we find them before they've had a chance to conduct their next set of actions? If they're stealing credentials or something like that, can we surface them before they've had a chance to do the credential theft? And then we're always trying to move earlier and earlier in the kill chain to understand how they got there, what some of the first things they did when they got there were, and how we surface those next.


Justin Carroll:

Because a lot of those are a little bit more difficult to surface because it can kind of tend to blend in with a lot of other legitimate activities.


Nic Fillingham:

What kind of tools do you use, Justin? Are you in network logs writing queries? Is there a big giant futuristic dashboard that you sit in front of, with virtual reality gloves moving big jumps of numbers left and right? What are the tools of your trade?


Justin Carroll:

So one of the tools that we use a lot... there is a bunch of data that's stored, customer facing it's usually called Azure Data Lake. It's these huge databases with large amounts of information where you can construct queries with what's called KQL, Kusto Query Language. So there's a specific tool for deep diving into all of that data across our many different sources, and then using that to basically structure and create different queries or methods of finding interesting data, and then pivoting on that data.


Justin Carroll:

Then in addition, I've built some of my own tools to kind of help improve my efficiency or automate some of the stuff that I have to do all the time and then just to make me faster at hunting for the things that I'm looking for.
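To picture the kind of pivot Justin is describing, here is a minimal sketch in Python with pandas over a made-up process-telemetry table. In practice this would be a Kusto query over far larger data sets, and every table and column name below is illustrative, not the schema he actually queries.

```python
# Toy hunting pivot: start from a known-bad payload hash, find the devices it
# ran on, then surface what executed shortly before it on those devices.
import pandas as pd

events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2020-10-01 10:00", "2020-10-01 10:02",
        "2020-10-01 10:05", "2020-10-02 09:00",
    ]),
    "device":    ["host-a", "host-a", "host-a", "host-b"],
    "file_hash": ["aaa111", "bbb222", "ccc333", "bbb222"],
    "file_name": ["outlook.exe", "dropper.exe", "ransom.exe", "dropper.exe"],
})

known_bad = {"ccc333"}  # hash of a payload already confirmed as ransomware

# Pivot 1: which devices ran the known-bad payload?
hits = events[events["file_hash"].isin(known_bad)]
hit_devices = hits["device"].unique()

# Pivot 2: what ran on those devices in the hour before the payload appeared?
first_seen = hits.groupby("device")["timestamp"].min().rename("bad_time")
candidates = events[events["device"].isin(hit_devices)].merge(
    first_seen, left_on="device", right_index=True)
precursors = candidates[
    (candidates["timestamp"] < candidates["bad_time"])
    & (candidates["timestamp"] >= candidates["bad_time"] - pd.Timedelta("1h"))
]
print(precursors[["device", "file_name", "file_hash"]])
```

The earlier-stage payloads surfaced this way (the dropper here) become the next thing to pivot on, which is how the hunt keeps moving earlier in the kill chain.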


Nic Fillingham:

Is it an AI version of yourself? Is it a virtual Justin?


Justin Carroll:

No. We work with the ML team to try and share as much knowledge with them as possible. There is no tool for an AI Justin, as of yet.


Nic Fillingham:

Well, let's back it up a bit. So one of the things we like to do in these interviews with the security SMEs... I'm not even sure if we've explained what an SME is yet. We call it a Subject Matter Expert. That's an acronym we use a lot here at Microsoft. I think it's pretty broadly known, but if you've heard "SME" or "smee," that's what it means.


Nic Fillingham:

Now, you and I crossed paths about a year ago for the first time, when Jessica Payne, who actually hasn't been on the podcast yet, introduced me to you and said, "You have to talk to Justin." And she gave me three very disparate but intriguing bits of data about you. She said, "Justin used to climb telegraph poles. He is a big Star Wars fan and is in a metal band." And I'm sure I've gotten those three things slightly wrong. Could you talk about your journey into the security space and how you found yourself working for Microsoft? But first of all, these three things that Jessica told me, are any of them true?


Justin Carroll:

Mostly they are. Some of these kind of combine. For the pole climbing aspect, I used to work for a wireless internet provider that had leases on specific towers, cell phone towers or other towers on top of mountains, essentially, where we would have wireless radio dishes that would communicate with each other. So I was occasionally tasked with installing and/or fixing said towers, which is okay if you are fine with heights. I wasn't at first, but you just kind of get used to it, and you realize once you're above 20 feet it really doesn't make any difference; if you fall, it's going to hurt. But climbing a tower in the winter, in the wind, where you can barely feel your hands and all that, wasn't great.


Justin Carroll:

I was a pretty big Star Wars fan growing up as a kid, even more of a Ninja Turtle fan. And as for metal, I used to be in a band with some friends and have been playing guitar for 25 or 26 years. And music has been a very huge part of my life and remains to be.


Nic Fillingham:

I think we'll circle back to Ninja Turtles. I'm not going to let that one go. But let's talk about your path into security. So, working for the wireless internet provider, was that your first job? Was that mid-career? Where does that fit in your sort of LinkedIn chronology? And at what point did you move formally into security?


Justin Carroll:

So it's been a long and winding road to get here I would say. So the internet provider was what I would guess I'd call my first career job of sorts. I had started there in my early 20s and worked for them for about... sorry my cat is right in front of the microphone. One second.


Nic Fillingham:

There's a cat there.


Justin Carroll:

She wanted to say her piece. So I worked for the internet company for just under a decade. I used to do some networking-type fun stuff in Halo 2, to maybe garner a little bit of an advantage, I guess I would say, and I used those learned skills to land that first job. And I did that for quite a while, but realized I was kind of stuck in this job. It was in a city that I didn't want to live in, and I had kind of maxed out my capabilities there. I had attempted to move to Portland because I wanted a bigger-city experience. I applied to 254 jobs and got one interview, for basically an office tech support role, which was the only position I was offered, but it wasn't feasible to live in Portland on that.


Justin Carroll:

So after quite a bit of soul searching, and realizing that basically nobody cared that I had eight years of on-the-job experience because I didn't have a college degree, there weren't any doors open for me for the most part. I then decided to take a pay cut and get a job at a university that was just a city over, working full-time while going to school for a degree in cybersecurity, doing technical work for the university, helping them understand their... sorry, my cat is a whole thing right now.


Nic Fillingham:

Your cat's just trying to interject with, like, "Don't... hey, you glossed over that Halo 2 thing, you better come back to that."


Justin Carroll:

Aria, come here.


Nic Fillingham:

We're leaving all this in, by the way.


Natalia Godyla:

Yeah. We're very much enjoying it.


Justin Carroll:

So, kind of advising the university on different technologies that they could use for their students. I did that for about three and a half years while going to school, then graduated top of my class and applied for another 150-some-odd jobs, mostly in the Seattle area this time, and was about to give up. Because even though I now had a degree and almost 10 years of experience, it still wasn't enough, and everybody that I kept losing out to had between 10 and 20 years of experience. It just wasn't an option for folks with less specific cybersecurity experience to enter the field.


Justin Carroll:

There were a lot of walls that were put up. I had a friend of a friend who worked in cybersecurity at a company somewhere in Arizona, who I'd never met. And he decided to go out of his way, even though I'd never met him, and looked for some cybersecurity-type jobs in my area that he thought maybe I'd be good for, and helped me look at my resume and stuff like that. And that helped me land a vendor role for Microsoft, where I kind of started my path and career toward cybersecurity-specific stuff.


Justin Carroll:

I had basically given up at that point on ever working in cybersecurity, and had kind of thought that it just wasn't meant for me. So that was a big break, and I almost closed the application for the job, but then figured the worst they can say is no. That is kind of how I finally got to Microsoft and cybersecurity, where I was able to work as a vendor for the team evaluating telemetry. I was given an opportunity to learn a lot, and that eventually transitioned, when a position became available, into working full-time as a Microsoft employee, and I went from there.


Natalia Godyla:

So what in your soul searching brought you to cybersecurity? Was it your background, the fact that you already had those foundations as a network admin, or was there something in particular in the cybersecurity world that just attracted you?


Justin Carroll:

I'd always found it fascinating. When I started university, they had just launched the cybersecurity program the quarter that I started there, and one of my friends who was a computer science major basically called me up immediately and was like, "Hey, they just launched this. You need to do this." And there's the pop culture aspect of it, where everybody thinks it's fascinating, and sure, there was a little bit of a draw with that. But I like learning how computers work, and I like the constant problem-solving nature of everything. The first class I took on it, I was hooked, and that remains true to this day; it's fascinating, and it's really fun to continually work to see what attackers are doing. But there's also a huge aspect of it where I like helping people. I think it's important, and having a role where I'm able to help millions or even potentially billions of people through better detections or stopping malware feels pretty great.


Nic Fillingham:

What other aspects, Justin, of your path to security, your path to Microsoft, do you feel you're sort of bringing forward? I want to ask about... you very briefly mentioned something about Halo 2, and I want to know what that was. And then I wonder if there were other, dare I say, maybe unorthodox or non-traditional things that you worked on where you learned a bunch of tools or tricks of the trade that you're bringing forward to your work right now.


Justin Carroll:

So Halo 2 was a fun one. Back in those days, there were lots of what were called modders, who would mod their Xboxes to gain an unfair advantage. So I would use my networking know-how, and I learned a lot of it too, when encountering a modder, to kick them out of the game. I think it was possibly a little frowned upon, but I was tired of having cheaters constantly win. So I did a lot of research, and I didn't know a whole lot about networking at that point, but I tried not to use it as a competitive advantage, more to just level the playing field. It was a great way to learn how firewalls and network traffic worked and to build more on my understanding of computers.


Justin Carroll:

And then that kind of set a foundation for me, of understanding there's always going to be stuff that I don't know. What I did all through college, and continued all the way until basically getting full-time employment at Microsoft, was set up a lab environment where I would set up servers and clients, attack them, and monitor the logs on my own little private lab on my machine, see what worked and what didn't, try and figure out why, and try to build different tools to see how I could make it more effective or deal with different issues.


Justin Carroll:

Just kind of playing both attacker and defender at the same time on my network, all by myself essentially, and learning from all of that data was massively important. And for anybody who's looking to get into security, I highly recommend both learning how to attack, in a safe place, your own little lab environment where you're not hurting anybody, and learning what it's like to try and defend and find those attacks, because both sides are-


Nic Fillingham:

Red Justin versus blue Justin.


Justin Carroll:

Exactly. Yes.


Natalia Godyla:

You noted earlier that just the sheer amount of data can be overwhelming, especially as you moved through your career and then came to Microsoft where we have billions of signals. So the same transition happens from Halo to now just the sheer scale and scope of your role and the amount of good that you can do. So, how did you handle that overwhelming amount of information, amount of impact that you can have?


Justin Carroll:

So when I was first brought on one of the things that made a significant difference was I had somebody that kind of instructed me in a lot of the ways of kind of how to work with the data, but I was also given quite a bit of an area for trial and error. So there was lots of opportunity to fail and to learn from what didn't work and to kind of keep building on that. And then any time that I got stuck or I would kind of just do everything I could to attempt to solve the problem or work with the data. If I kind of hit a wall that I couldn't climb on my own, I could go to him and then we would solve it together. So it was kind of both a mentoring and a guidance thing, but also kind of given that ability to experiment and try and learn. So that was kind of one of the biggest ways of learning to pivot on that data and understand it and consume it.


Justin Carroll:

And then honestly, collaboration with other folks on my team and other teams was massively instrumental, to be able to learn what they had already learned or pass on my knowledge to them. And just that constant sharing and understanding, because there is so much data, it's almost impossible to be an expert at all of it. So having those folks you can reach out to, who are each experts in their own set of data, matters, so you can understand what the data is trying to tell you. Because that's one of the things that is particularly difficult: taking the data and actually gleaning understanding from it. The data is trying to tell you something, you just need to make sure you're interpreting the message correctly.


Natalia Godyla:

How do AI and ML factor into your role, in helping you manage this data and collaborate with other teams?


Justin Carroll:

So I work quite a bit with a lot of different data science folks on a few different teams, to either use a lot of the models that they're creating to surface a lot of the malicious information or particular attackers or stuff like that, and then also to collaborate back, sharing my knowledge and intelligence with them to say: this is what an attack looks like, this is what it should look like in the data, and giving them the ideas and signals for what they should be looking for in their data to train those models.


Justin Carroll:

It's really important to have that partnership between security and data science for AI and ML to kind of help them understand the security sphere of it. And then they can kind of take the real math and data prowess that they've got and turn our knowledge into ML or AI to detect and surface a lot of these things.


Nic Fillingham:

If it's possible, Justin, how would you summarize your guidance to the other Justin Carrolls out there? They want to get into security, they're fascinated by cybersecurity in a macro sense, but they either don't have a degree, or they're not sure what they should go study, or they're trying to work out how they can translate their current career experience and skills. Can you summarize that into some guidance on what folks should do to try and break in?


Justin Carroll:

Sure. One, if you're in school, remember that school is not going to teach you a lot of the stuff that you need to know. It's a lot of taking what you're learning and building upon it outside. So if cybersecurity is the interest, try and experiment and fail. Cybersecurity is huge; there are so many different facets of it. Find the thing that scratches the itch and piques your interest. For me, that was setting up a lab, right? Where I could play both the attacker and the defender, the person monitoring logs, the person setting up all the configurations to try and stop the attacks, and I was able to see all the different aspects of the industry.


Nic Fillingham:

So just jumping in, was that literally just a bunch of VMs on your machine, or did you have multiple PCs networked together? Just very quickly, what did that look like? How accessible is setting up a lab, I guess, is what I'm asking.


Justin Carroll:

It is pretty accessible. While I was in college, it was actually multiple machines: I had four different machines, and I set up a router that you can pick up for 50 bucks, and a smart switch that I could mirror the traffic on to understand everything, for 100 bucks. So there's a little bit of cost. That was kind of my college setup, and where I was at that point, it made a little more sense to do it with actual machines. And for extra clarity, my college was only a couple of years ago; I did not go to college young. The next route I took was once I had done my vendor role and was kind of like, security is for me and I want to keep building on it.


Justin Carroll:

I did it all with VMs. I just had a desktop computer with okay specifications, and I configured two clients, a domain controller server on the device, and then a mail server. And then basically you just connect to each client and network them all together. At that point you can use VirtualBox, you can use lots of different stuff. So the accessibility of doing that is actually pretty good. There isn't a lot of overhead cost or anything like that. You just have to have an okay computer.


Natalia Godyla:

What about resources to learn how to do all of that? Are there organizations or sites that someone could turn to, if they're interested in starting to do some of this starting to experiment with what they're interested in?


Justin Carroll:

Honestly, I would say one of the best resources that I had throughout was YouTube. It was a great place to get walkthroughs for every different thing. I wanted to learn how to set up a VM and configure its networking to another VM: I turned to YouTube. I wanted to learn how to attack the VM using Kali Linux: YouTube. There's a whole bunch of different channels out there that specifically focus on that. And the other thing is, because it's so much more open for creators to share content, you can find people who are at a similar level or maybe just a few steps ahead of you, so you can really join along with other people.


Justin Carroll:

There are also a few websites, I think one's called Hack The Box, for attacking different things. And that was also kind of fun, where a lot of the devices that need to be attacked were already pre-configured for you. But for me, honestly, a lot of the fun was setting up those devices and then learning what I did that worked and didn't, what allowed them to be attacked, and what I could do to stop that.


Natalia Godyla:

Quick plug: Microsoft Security also has a YouTube channel, in case somebody would like to get any how-to content on our products.


Nic Fillingham:

Natalia may or may not have been involved in that channel, just full disclosure there.


Natalia Godyla:

Yeah. I couldn't help myself. But it is also great to hear that you found people to work with in the community as well. That's something that's been noted by a few of our guests, like Michelle Lamb, that as she was entering the space, she found mentors. She found conversations, people readily available to either work on a problem alongside her, or just answer questions. So I'm glad that you've also been able to turn to the community for that. So what's next for you? Is there a new challenge that you'd like to solve?


Justin Carroll:

Definitely want to work on the toolkit that I'm building and continue that growth. It's been interesting to see the hurdles I run into. Even last week I ran into one that felt insurmountable, and I was able to chat with one of the devs, solve it in a few minutes, and learn a whole lot, and going forward, now I have that in my pocket. And then both-


Nic Fillingham:

Hang on. Did you say you found a new challenge, thought, "Oh, this is insurmountable," and then a few minutes later you solved it?


Justin Carroll:

With a little support from people who knew how to solve the problems. So, collaborating with one of the other devs on the team and basically having him explain that part... it felt like a giant wall, but really, once you have somebody break it down a little bit for you, it was just like, "Oh, okay. I see what I'm missing here." And then it was just like, "Got it. Okay. Moving forward."


Nic Fillingham:

Oh, I see. So that's more an endorsement. Yeah, I got it.


Justin Carroll:

Yeah. Yeah. It's more an endorsement of others' teaching abilities, and of those times when being able to reach out to others when you really get stuck makes such a difference. I had spent an hour on something and was just like, this is ridiculous, this should work, why isn't it working, what's wrong with me, I'm not smart. And then just chatting with them a little bit and figuring it out, and then, "Oh, okay. That's actually pretty simple." I wasn't thinking about it in the right way, and I needed that other perspective.


Justin Carroll:

And then what's next going forward is continued partnership with a lot of the data science folks. I think we've only scratched the surface in many ways as an industry on how data science and cybersecurity can work together. So I am very excited to see what kind of stuff we can accomplish, whether it's surfacing attacks shortly after they happen, very early in the kill chain, or understanding related behaviors and trying to understand who they might be, or, I think most of all, the intent of the attack or adversary.


Justin Carroll:

Intent can sometimes be very difficult to suss out, even for SOCs with their entire security operations center, all these folks who are trying to figure out: what happened, why did it happen, what does it actually mean? So if we can have data science that provides a lot of context on that, through understanding existing attacks and modeling what future ones might look like, I think there are some pretty exciting opportunities there.


Nic Fillingham:

All right, I'm doing it. We're coming to Teenage Mutant Ninja Turtles. You're a fan. How much of a fan are you, Justin?


Justin Carroll:

I'd say quite a fan. I do have a couple of figurines and a mint, unopened package from '87, I think, something like that. And then I have a Ninja Turtles tattoo of Raphael on my back. That was kind of one of those moments where I was trying to think about what steps I wanted to take forward in life and things like that, and I had kind of thought about what are the things that actually make me happy.


Justin Carroll:

This was probably my mid 20s quarter life crisis kind of thing. And I was like, "I always liked the Ninja Turtles as a kid." They always brought me great joy. I still get excited about watching them. The movies are definitely a guilty pleasure. I realized they're not great. But now I'm talking about the original movies, not the new ones. We won't talk about the new movies. And it was just one of those like, "Yeah, I identify with this. This is a huge part of my life. It's been around since I was... it was started the year I was born." So I was just like, "All right, let's do it." And haven't regretted it at all.


Nic Fillingham:

I was going to ask who your favorite turtle was, but obviously, if you've inked Raphael on your back, that question is moot. I'm a Donatello guy. I've always been a Donatello guy.


Justin Carroll:

I would think of myself as Raph, but really I'm more of a Donatello. Raph was kind of the cool guy with a little bit of an attitude, but really I was Donatello. When I was 10 and dressed up for Halloween, I was Donatello. I'm definitely Donatello with a little bit of Raph thrown in for good measure.


Nic Fillingham:

Well, this has been a blast. Thank you, Justin, for walking us down Teenage Mutant Ninja Turtles memory lane and Halo 2 memory lane, and for sharing your story with us. It was great. Wonderful to get your perspective. Great to have you as part of the threat hunter team here at Microsoft, contributing in all the ways that you do. Thanks for joining us. I'm sure we'll talk to you again at some point on the Security Unlocked podcast, but keep doing you. Cowabunga, dude.


Justin Carroll:

Thanks very much for having me. I appreciate it. It was great to talk to you all.


Natalia Godyla:

Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode.


Nic Fillingham:

And don't forget to tweet us @msftsecurity or email us at securityunlocked@microsoft.com with topics you'd like to hear on a future episode. Until then, stay safe.


Natalia Godyla:

Stay secure.
