Discovering Router Vulnerabilities with Anomaly Detection
Ready for a riddle? What do 40 hypothetical high school students and our guest on this episode have in common? Why, they can help you understand complex cyber-attack methodology, of course! In this episode of Security Unlocked, hosts Nic Fillingham and Natalia Godyla are brought back to school by Principal Security Researcher Jonathan Bar Or, who discusses vulnerabilities in NETGEAR firmware. During the conversation, Jonathan walks through how his team recognized the vulnerabilities and worked with NETGEAR to secure the issue, and helps us understand exactly how the attack worked using an ingenious metaphor.

In This Episode You Will Learn:
- How a side-channel attack works
- Why attackers are moving away from operating systems and towards network equipment
- Why routers are an easy access point for attacks

Some Questions We Ask:
- How do you distinguish an anomaly from an attack?
- What are the differences between a side-channel attack and an authentication bypass?
- What can regular users do to protect themselves from similar attacks?

Resources:
Jonathan Bar Or's blog post: https://www.microsoft.com/security/blog/2021/06/30/microsoft-finds-new-netgear-firmware-vulnerabilities-that-could-lead-to-identity-theft-and-full-system-compromise/
Jonathan Bar Or's LinkedIn: https://www.linkedin.com/in/jonathan-bar-or-89876474/
Nic Fillingham's LinkedIn: https://www.linkedin.com/in/nicfill/
Natalia Godyla's LinkedIn: https://www.linkedin.com/in/nataliagodyla/
Microsoft Security Blog: https://www.microsoft.com/security/blog/

Related:
Security Unlocked: CISO Series with Bret Arsenault: https://thecyberwire.com/podcasts/security-unlocked-ciso-series
Securing the Internet of Things
There used to be a time when our appliances didn't talk back to us, but it seems like nowadays everything in our home is getting smarter. Smart watches, smart appliances, smart lights, smart everything! This connectivity to the internet is what we call the Internet of Things (IoT). It's becoming increasingly common for our everyday items to be "smart," and while that may provide a lot of benefits, like your fridge reminding you when you may need to get more milk, it also means that all of those devices become susceptible to cyberattacks. On this episode of Security Unlocked, hosts Nic Fillingham and Natalia Godyla talk to Arjmand Samuel about protecting IoT devices, especially with a zero trust approach. Listen in to learn not only about the importance of IoT security, but also what Microsoft is doing to protect against such attacks and how you can better secure these devices.

In This Episode You Will Learn:
- What techniques are used to verify explicitly on IoT devices
- How to apply the zero trust model in IoT
- What Microsoft is doing to protect against attacks on IoT

Some Questions We Ask:
- What is the difference between IoT and IT?
- Why is IoT security so important?
- What are the best practices for protecting IoT?

Resources:
Arjmand Samuel's LinkedIn: https://www.linkedin.com/in/arjmandsamuel/
Nic Fillingham's LinkedIn: https://www.linkedin.com/in/nicfill/
Natalia Godyla's LinkedIn: https://www.linkedin.com/in/nataliagodyla/
Microsoft Security Blog: https://www.microsoft.com/security/blog/

Related:
Security Unlocked: CISO Series with Bret Arsenault: https://thecyberwire.com/podcasts/security-unlocked-ciso-series

Transcript:
[Full transcript can be found at https://aka.ms/SecurityUnlockedEp36]

Nic Fillingham: (music) Hello and welcome to Security Unlocked, a new podcast from Microsoft where we unlock insights from the latest in news and research from across Microsoft's security, engineering and operations teams. I'm Nic Fillingham.

Natalia Godyla: And I'm Natalia Godyla.
In each episode, we'll discuss the latest stories from Microsoft Security, deep dive into the newest threat intel, research and data science.

Nic Fillingham: And profile some of the fascinating people working on artificial intelligence in Microsoft Security.

Natalia Godyla: And now, let's unlock the pod. (music)

Natalia Godyla: Welcome everyone to another episode of Security Unlocked. Today we are joined by first-time guest Arjmand Samuel, who is joining us to discuss IoT security, which is fitting as he is an Azure IoT security leader at Microsoft. Now, everyone has heard the buzz around IoT. There's been constant talk of it over the past several years, but now we've all also already had some experience with IoT devices in our personal lives. What about you, Nic? What do you use in your everyday life? What types of IoT devices?

Nic Fillingham: Yeah. I've, I've got a couple of smart speakers, which I think a lot of people have these days. They seem to be pretty ubiquitous. And you know what? I sort of just assumed that they automatically update and they've got good security in them. I don't need to worry about it. Uh, maybe that's a bit naïve, but, but I sort of don't think of them as IoT. I just sort of, like, tell them what music I want to play and then I tell them again, because they get it wrong. And then I tell them a third time, and then I go, "Ugh," and then I do it on my phone.

Nic Fillingham: I also have a few cameras that are pointed around the outside of the house. Because I live on a small farm with, with animals, I've got some sheep and pigs, I have to be on the lookout for predators. For bears and coyotes and bobcats. Most of my IoT, though, is very, sort of, consumer-y. Consumers have access to it and can, sort of, buy it, or it comes from the utility company.

Natalia Godyla: Right. Good point. Um, today, we'll be talking with Arjmand about enterprise-grade IoT and OT, or Internet of Things and operational technology.
Think the manufacturing floor of, uh, plants. And Arjmand will walk us through the basics of IoT and OT through to the best practices for securing these devices.

Nic Fillingham: Yeah. And we spent a bit of time talking about zero trust and how to apply a zero trust approach to IoT. Zero trust, there's sort of three main pillars to zero trust. It's verify explicitly, which for many customers just means, sort of, MFA, multi-factor authentication. It's about utilizing least privilege access and ensuring that accounts, users, devices just have access to the data they need at the time they need it. And then the third is about always, sort of, assuming that you've been breached and, sort of, maintaining the philosophy of, of let's just assume that we're breached right now, and let's engage in practices that would, sort of, help root out a, uh, potential breach.

Nic Fillingham: Anyway, so, Arjmand, sort of, walks us through what is IoT, how does it relate to IT, how does it relate to operational technology, and obviously, what that zero trust approach looks like. On with the pod.

Natalia Godyla: On with the pod. (music) Today, we're joined by Arjmand Samuel, principal program manager for the Microsoft Azure Internet of Things group. Welcome to the show, Arjmand.

Arjmand Samuel: Thank you very much, Natalia, and it's a pleasure to be on the show.

Natalia Godyla: We're really excited to have you. Why don't we kick it off with talking a little bit about what you do at Microsoft. So, what does your day to day look like as a principal program manager?

Arjmand Samuel: So, I am part of the Azure IoT engineering team. I'm a program manager on the team. I work on security for IoT, and, uh, me and my team, uh, we are responsible for making sure that, uh, IoT services and clients, like the software and runtimes and so on, are, are built securely. And when they're deployed, they have the security properties that we need them to have and that our customers demand.
So, so, that's what I do all along.

Nic Fillingham: And, uh, we're going to talk about, uh, zero trust and the relationship between a zero trust approach and IoT. Um, but before we jump into that, Arjmand, uh, we, we had a bit of a look at your, your bio here. I've got a couple of questions I'd love to ask, if that's okay. I want to know about your, sort of, tenure here at Microsoft. Y- y- you've been here for 13 years. Sounds like you started in, in 2008, and you started in w- what was called the Windows Live team at the time, as the security lead. I wonder if you could talk a little bit about your, your entry into Microsoft and being in security at Microsoft for, for that amount of time. You must have seen some, sort of, pretty amazing changes, both from an industry perspective and then also inside Microsoft.

Arjmand Samuel: Yeah, yeah, definitely. So, uh, as you said, uh, 2008 was the time, was the year when I came in. I came in with a, a, a degree in, uh, security, in- information security. And then, of course, my thinking and my whole work when I was hired at Microsoft was, hey, how do we actually make sure that our product, which was Windows Live at that time, is secure? That it has all the right security properties that, that we need that product to have. So, I- I came in, started working on a bunch of different things, including identity, and, and these were early times, right? I mean, we were all putting together this infrastructure, reconciling all the identity runtimes that we had. And all of those were things that we were trying to bring to Windows Live as well.

Arjmand Samuel: So, I was responsible for that, as well as I was, uh, working on making sure that, uh, our product had all the right diligence and, and security diligence that is required for a product at scale. And so, a bunch of, you know, things like SDL and threat modeling and those kinds of things.
I was leading those efforts as well at, uh, Windows Live.

Natalia Godyla: So, if 2008 Arjmand was talking to 2021 Arjmand, what would he be most surprised about, about the evolution over the past 13 years, either within Microsoft or just in the security industry?

Arjmand Samuel: Yeah. Yeah. (laughs) That's a great, great question, and I think in the industry itself, the e- evolution has been all around us. We are now engulfed in technology, connected technology. We call it IoT, and it's all around us. That was not the landscape 10, 15 years back. And, uh, what really is amazing is how our customers and partners are taking this on and applying it in their businesses, right? This meaning the whole industry of IoT, uh, Internet of Things, and taking that to a level where every piece of data in the physical world can be captured or can be acted upon. That is a big change from the last, uh, 10, 15 years to where we are today.

Nic Fillingham: I thought you were going to say TikTok dance challenges.

Arjmand Samuel: (laughs)

Natalia Godyla: (laughs)

Nic Fillingham: ... because that's, that's where I would have gone.

Arjmand Samuel: (laughs) That, too. That, too, right? (laughs)

Nic Fillingham: That's a (laughs) digression there. So, I'm pretty sure everyone knows what IoT is. I think we've already said it, but let's just, sort of, start there. So, IoT, Internet of Things. I mean, that's correct, right? Are there multiple definitions of IoT, or is it just Internet of Things? And then, what does the definition of an Internet of Things mean?

Arjmand Samuel: Yeah, yeah. It's a... You know, while Internet of Things is a very recognized acronym these days, I think talking to different people, different people would have a different idea about how Internet of Things could be defined. And the way I would define it, and again, not, not, uh, necessarily the authority or the, the only definition.
There are many definitions, but it's about having these devices around us. "Us" is not just people but also our, our manufacturing processes, our cars, our, uh, healthcare systems; having all these devices around, uh, these environments. These devices, uh, could be big, could be small. Could be as small as a very small temperature sensor collecting data from an environment, or it could be a robotic arm trying to move a full car up and down an assembly line.

Arjmand Samuel: And first of all, collecting data from these devices, then, uh, uh, using the data to do something interesting and insightful, but also beyond that, being able to control these devices based on those insights. So, now there's a feedback loop where you're collecting data and you are acting on that, that data as well. And that is how IoT is manifesting itself today in, in, in the world. And especially for our customers, who tend to be more industrial enterprises and so on, it's a big change that is happening. It's, it's a huge change that, uh, they see, and we call it the transformation, the business transformation happening today. And part of that business transformation is being led or is being driven through the technology which we call IoT, but it's really a business transformation.

Arjmand Samuel: It's really that our customers are finding that in order to remain competitive, and in order to remain in business really, at the end of the day, they need to invest. They need to bring in all these technologies to bear, and Internet of Things happens to be that technology.

Nic Fillingham: So, Arjmand, a couple other acronyms. You know, I think, I think most of our audience are pretty familiar with IoT, but we'll just sort of cover it very quickly. So, IoT versus IT. IT is, obviously, you know, information technology, or I think that's the, that's the (laughs) globally accepted-

Arjmand Samuel: Yeah, yeah.

Nic Fillingham: ... definition.
You know, do we think of IoT as a subset of IT? What is the relationship of, of those two? I mean, clearly, there are three letters versus two letters, (laughs) but there is a relationship there. Wh- wh- what are your thoughts?

Arjmand Samuel: Yeah. There's a relationship as well as a difference, and, and it's important to bring those two out. Information technology, IT, as we have known it now for many years, is all about enterprises running their applications, uh, business applications mostly. For that, they need the network support. They need databases. They need applications to be secured and so on. So, all these have to work together. The function of IT, information technology, is to make sure that there is availability of all these resources, applications, networks and databases, as well as that you have them secured and private and so on.

Arjmand Samuel: So, all of that is good, but IoT takes it to the next level, where now it's not only the enterprise applications, but it's also these devices which are now deployed by the enterprise. I mentioned robotic arms. Imagine in a conference room you have all this equipment in there, projection and temperature sensors and occupancy sensors and so on. So, all of those are now the, the add-on to what we used to call IT, and we are calling it the IoT.

Arjmand Samuel: Now, the interesting part here is in the industrial IoT space. Th- this is also called OT, operational technology. So, you know, within an organization there'll be IT and OT. OT is operational technology, and these are the people or the, uh, function within an organization who deal with the, with the physical machines, the physical plant. You know, the manufacturing line, the conveyor belts, the robotic arms; these are called OT functions.

Arjmand Samuel: The interesting part here is the goal of IT is different from the goal of OT. OT is all about availability. OT is all about safety, safety so that it doesn't hurt anybody working on the manufacturing line.
OT is all about environmental concerns. So, it should not leak bad chemicals and so on. And if you talked about security, and this is, like, a few years back when we would talk about security with an OT person, the, the person who's actually... You know, these are people who actually wear those, uh, hard hats, you know, on, uh, a manufacturing plant. And if you talked about security to an OT person, they would typically refer to that guard standing outside and, and, uh, the-

Nic Fillingham: Physical security.

Arjmand Samuel: The physical security and the, the walls and the cameras, which would make sure that, you know... and then a key card, and that's about all. This was OT security. But now, when we started going in and saying that, okay, all these machines can be connected to, to each other and you can collect all this data, and then you can actually start doing something interesting with this data, that is where the definition of security and the functions of OT evolved. And by evolving, I mean different companies are at different stages, but they're now evolving to where they're thinking, okay, it's not only about the guard standing outside. It's also the fact that the robotic arm could be taken over remotely, and somebody around the world, around the globe could actually be controlling that robotic arm to do something bad. And that realization, and the fact that now you actually have to control it in the cyber sense and not only in the physical sense, is the evolution that happened in OT.

Arjmand Samuel: Now, IT and OT work together as well, because the same networks are typically shared. Some of the applications that use the data from these devices are common. So, IT and OT, this is the other, uh, thing that has changed, and, and we are seeing that change: they are starting to work and come closer. Work together more. OT is really different, but at the same time requires a lot of stuff that IT has traditionally done.

Natalia Godyla: Hmm.
So, what we considered to be simple just isn't simple anymore.

Arjmand Samuel: That's life, right? (laughs) Yeah.

Natalia Godyla: (laughs)

Arjmand Samuel: (laughs)

Natalia Godyla: So, today we wanted to talk about IoT security. So, let's just start with, with framing the conversation a little bit. Why is IoT security important, and what makes it more challenging, different than traditional security?

Arjmand Samuel: As I just described, right, I mean, we are now infusing compute into every environment around us. I mean, we talked a little bit about the conveyor belt. Imagine the conference rooms, the smart buildings and, and all the different technologies that are coming in. These technologies, while they're good, they serve a scenario. They, they make things more efficient and so on, but they're also now a point of, uh, of failure for that whole system, as well as a way for malicious actors to bring in code if possible. Imagine a scenario, or an attack, where a malicious actor goes into the conveyor belt and knows exactly the product that is passing through. And imagine that someone either takes the data and sells it to somebody or, worst case, stops the conveyor belt. That is millions of dollars of loss, uh, that the company might be incurring.

Arjmand Samuel: So, now that there's infused compute all around us, we are now living in an environment which can be attacked, and which can be used for bad things, much more than when it was only applications, networks and databases. Easy to put a wall around. Easy to understand what's going on. Easy to lock down. But with all these devices around us, it's becoming much and much harder to do the same.

Nic Fillingham: And then, if we think about IoT and IoT security, one of the things that, sort of, makes it different, I- I th- think, and here I'd love you to explain this, sort of...
I- I'm thinking of it as a, as a, as a spectrum of IoT devices that, I mean, they have a CPU. They have some memory. They have some storage. They're, they're running an operating system in some capacity, all the way through to, I guess, m- much more, sort of, rudimentary devices that do have some connection, some network connection, in order for instruction or data to, sort of, move backwards and forwards. What is it that makes this collection of stuff difficult to protect? Or, you know, is it difficult to protect? And if so, why? And then, how do we think about the, the, the potential vectors for attack that are different in this scenario versus, you know, protecting laptops and servers?

Arjmand Samuel: Yeah, yeah. That's a good one. So, uh, what happens is, you're right. Uh, IoT devices can be big and small, all right. They could be a small MCU-class device with a real-time operating system on it. A very small, very, uh, single-purpose device; imagine it collecting temperature or humidity only. Then we have these very big, what we call the edge or heavy edge devices, which are like server-class devices running a robotic arm, or, or even a gateway-class device which is aggregating data from many devices, right, and then taking the data and acting on it.

Arjmand Samuel: So, now with all this infrastructure, one of the key things that we have seen is diversity and heterogeneity of these devices. Not just in terms of size, but also in terms of who manufactured them and when they were manufactured. So, many of the temperature sensors in environments could be very old. Like, 20 years old, and people are trying to use the same equipment and not have to change anything there. Which they can; technically they could, but those devices were never designed for a connected environment, for this data to actually, uh, be aggregated and sent on the network, meaning they per- perhaps did not have encryption built in.
So, we have to do something, uh, additional there.

Arjmand Samuel: And so now, with the diversity of devices and when they came in, the, the feature set is so diverse. Some of them are more recent, built with the right security principles and the right security properties, but then some of them might not be. So, this raises a, a challenge: how do you actually secure an infrastructure where you have this whole disparity, and many different types of devices, many different manufacturers, many different ages for these devices? Security properties are different, and as we all know talking about security, the attack will always come from the weakest link. So, the attacker will always find, within that infrastructure, the device which has the least security as an entry point into that infrastructure. So, we can't just say, "Oh, I'll just protect my gateway and I'm fine." We have to have some mitigation for everything on that network. Everything. Even the older devices. We call them brownfield devices because they tend to be old devices, but they're also part of the infrastructure.

Arjmand Samuel: So, how do we actually think about brownfield and the, the newer ones, which we call greenfield devices? Brownfield and greenfield: how do we think about those, given they will come from different vendors, different designs, different security properties? So, that's a key challenge today that we have. So, people want to keep those devices, as well as make sure that they are secure, because the current threat vectors and, uh, attacks are, are much more sophisticated.

Natalia Godyla: So, you have a complex set of devices that the security team has to manage and understand. And then you have to determine at another level which of those devices have vulnerabilities or which one is the most vulnerable, and then, uh, assume that your most vulnerable, uh, will be the ones that are exploited. So, is that, that typically the attack vector?
It's going to be the, the weakest link, like you said? And h- how does an attacker try to breach the IoT device?

Arjmand Samuel: Yeah, yeah. And, and this is where we, we started using the term zero trust IoT.

Natalia Godyla: Mm-hmm (affirmative).

Arjmand Samuel: So, IoT devices are deployed in an environment which cannot be trusted, should not be trusted. You should assume that there is zero trust in that environment, and then, when all these devices are in there, you will do the right things. You'll put in the right mitigations so that the devices themselves are robust. Now, another example I always give here, uh, to your question around the attack vectors and, and how attacks are happening: typically in the IT world, now that we, we have the term defined, in the IT world you will always have, you know, physical security. You will always put servers in a room and lock it, and, and so on, right? But in an IoT environment, you have compute devices. Imagine these are powerful edge nodes doing video analytics, but they're mounted on a pole next to a camera outside on the road, right? So, which means the physical access to that device cannot be controlled. It could be that an edge node, again, a powerful compute device with lots of, you know, CPU and, and so on, is deployed in a mall, looking at video streams and analyzing those video streams, again, deployed out there where any attacker physically can get a hold of the device and do bad things.

Arjmand Samuel: So, again, the attack vectors are also different between IT and OT or IoT, in the sense that the devices might not be physically contained in a, in an environment.
So, that puts another layer of what do we do to protect such, uh, environments?

Nic Fillingham: And then I want to just talk about the role of, sort of... if we think about traditional computing or traditional, sort of, PC-based computing and PC devices, a lot of the attack vectors and a lot of the, sort of, weakest link is the user and the user account. And that's why, you know, phishing is such a massive issue: if we can socially engineer a way for the person to give us their user name and password or whatever, we, we, we can get access to a device through the user account. IoT devices and OT devices probably don't use that construct, right? They're probably, they're userless. Is that accurate?

Arjmand Samuel: Yeah. That's very accurate. So, again, all of the attack vectors which we know from IT are still relevant, because, you know, if there's a phishing attack and the administrator password is taken over, you can still go in and destroy the infrastructure, both IT and IoT. But at the same time, these IoT devices typically do not have a user interacting with them, typically in the compute sense. You do not log into an IoT device, right? A sensor with an MCU doesn't even have a user experience, uh, a screen on it. And so, there is typically no user associated with it, and that's another challenge. So you still need to have an identity of the device, not a user on the device, but an identity of the device, and that identity has to be intrinsic to the device. It has to be part of the device, and it has to be stable. It has to be protected, secure, and o- on the device, but it is not typically a user identity.

Arjmand Samuel: And, and that's not only true for temperature sensors, you know, the smaller MCU-class devices. That's true for edge nodes as well. And by the way, when I say edge node, an edge node is a full-blown, rich operating system.
CPU, tons of memory, even perhaps a GPU, but it does not typically have a user screen, a keyboard and a mouse. All it has is a video stream coming in through some protocol, and it's analyzing that and then making decisions based on AI. And, and, but that's a powerful machine. Again, there might never ever be a user interactively signing into it, but the device has an identity of its own. It has to authenticate itself and its workload to other devices or to the cloud. And all of that has to be done in a way where there is no user attached to it.

Natalia Godyla: So, with all of this complexity, how can we think about protecting against IoT attacks? You discussed briefly that we still apply the zero trust model here. So, you know, at a high level, what are best practices for protecting IoT?

Arjmand Samuel: Yeah, yeah. Exactly. Now that we, we just described the environment, we described the devices and, and the attacks, right? The bad things that can happen. How do we do that? So, the first thing we want to talk about is zero trust. So, do not trust the environment. Even if it is within a factory and you have a guard standing outside and you have all the, you know, the physical security, uh, do not trust it, because there are still vectors which can allow malicious actors to get into those devices. So, that's the first one, zero trust.

Arjmand Samuel: Uh, do not trust anything that is on the device unless you explicitly trust it, you explicitly make sure that you can go in and you can attest the workload, as an example. You can attest the identity of the device, as an example. And you can associate some access control policies, and you have to do it explicitly, and never assume that because it's an environment in a factory, you're good. So, you never assume that. So, again, that's a property or a principle within zero trust that we always exercise.

Arjmand Samuel: Uh, the other one is you always assume breach.
You always assume that bad things will happen. I- it's not if they'll happen or not. It's about when they're, uh, going to happen. So, with that thinking, you're putting in place mitigations. You are thinking, okay, if bad things are going to happen, how do I contain the bad things? How do I contain? How do I make sure that, first of all, I can detect bad things happening? And we have, and we can talk about, some of the offerings that we have, like Defender for IoT as an example, which you can deploy into the environment. Even if it's brownfield, you can detect bad things happening based on the network characteristics. So, that's Defender for IoT.

Arjmand Samuel: And, and once you can detect bad things happening, then you can do something about it. You get an alert. You can, you can isolate that device, or take that device off the network and refresh it, and do those kinds of things. So, the first thing that needs to happen is you assume that it's going to be breached. You always assume that whatever you are going to trust is explicitly trusted. You always make sure that there is a way to explicitly trust, uh, either the workload or the device or the network that is connected to the device.

Nic Fillingham: So, if we start with verify explicitly: in the traditional compute model, where it's a user on a device, we can verify explicitly with, usually, multi-factor authentication. So, I have my user name and password. I add an additional layer of authentication, whether it's, you know, an app on my phone, a key or something, some physical device; there's my second factor, and I'm, I'm verified explicitly in that model. But again, no users, or the user's not, sort of, interacting with the device in, sort of, that traditional sense. So what are those techniques to verify explicitly on an IoT device?

Arjmand Samuel: Yeah. Exactly.
So, we, in that white paper, which we are talking about, we actually put down a few things that you can actually do to, to, en- ensure that you have all the zero trust requirements together. Now, the first one, of course, is you need, uh, all devices to have strong identity, right? So, because identity is at the core. If you can not identi- identify something you can not, uh, give it an access control policy. You can not trust the data that is coming out from that, uh, device. So, the first thing you do is you have a strong identity. By a strong identity we mean identity which is rooted in hardware, and so, what we call the hardware-based root of trust. It's technologies like TPM, which ensure that you have the private key, which is secured in the hardware, and you can not get to it, and so on. So, you, you ensure that you have a, a strong identity.Arjmand Samuel:You always have least privilege access, so you do not... And these principles have been known to our IT operations forever, right? So, many years they have been refined and, uh, people know about those, but we're applying them to the IoT world. So, least privilege access: if a device is required to access another device or data or to push out data, it should only do that for the function it is designed for, nothing more than that. You should always have some level of, uh, device health check. Perhaps you should be able to do some kind of attestation of the device. Again, there is no user to assess the device health, but you should be able to do that, and there are ways, there are services which allow you to measure something on the device and then say yes it's good or not.Arjmand Samuel:You should be able to do a continuous update. So, in case there is a device which, uh, has been compromised, you should be able to reclaim that device and update it with a fresh image so that now you can start trusting it. And then finally you should be able to securely monitor it.
And not just the device itself, but now we have technologies which can monitor the data which is passing through the network, and based on those characteristics can see if a device is attacked or being attacked or not. So, those are the kind of things that we would recommend for a zero trust environment to take into account and, and make those requirements a must for, for IoT deployments.Natalia Godyla:And what's Microsoft's role in protecting against these attacks?Arjmand Samuel:Yeah, yeah. So, uh, a few products that we always recommend. If somebody is putting together a new IoT device right from the silicon and putting that device together, we have a great secure by design device, which is called Azure Sphere. Azure Sphere has a bunch of different things that it does, including identity, updates, cert management. All these are important functions that are required for that device to function. And so, a new device could use the design that we have for Azure Sphere.Arjmand Samuel:Then we have a gateway software that you put on a gateway which allows you to secure the devices behind that gateway for on time deployments. We have Defender for IoT, again as I mentioned, but Defender for IoT is on-prem, so you can actually monitor all the attacks on the network and on the devices. You could also put an agent, a Micro Agent, on these devices, but then it also connects to Azure Sentinel. Azure Sentinel is an enterprise-class user experience for security administrators to know what bad things are happening on, on-prem. So, it, the whole end-to-end thing works all the way from the network, brownfield devices to the Cloud.Arjmand Samuel:We also have things like, uh, IoT Hub Device Provisioning service. Device provisioning service is an interesting concept. I'll try to briefly describe that.
So, what happens is when you have an identity on a device and you want to actually put that device, deploy that device in your environment, it has to be linked up with a service in the Cloud so that it can, it knows the device, there's an identity which is shared and so on. Now, you could do it manually. You could actually bring that device in, read a code, put it in the Cloud and you're good to go because now the Cloud knows about that device, but then what do you do when you have to deploy a million devices? And we're talking about IoT scale, millions. A fleet of millions of devices. If you take that same approach of reading a key and putting it in the Cloud, one, you'd make mistakes. Second, you will probably need a lifetime to take all those keys and put them in the cloud.Arjmand Samuel:So, in order to solve that problem, we have the device provisioning service, which is a service in the Cloud. It is, uh, linked up to the OEMs or manufacturing devices. And when you deploy your device in the field, you do not have to do any of that. Your credentials are passed between the service and the, and the device. So, so, that's another service. IoT Hub Device Provisioning Service.Arjmand Samuel:And then we have, uh, a work, the, uh, a piece of work that we have done, which is the Certification of IoT Devices. So, again, you need the devices to have certain security properties. And how do you do that? How do you ensure that they have the right security properties, like identity and cert management and update ability and so on? We have what we call the Edge Secured-core Certification as well as the Azure Certified Device Program. So, any device which is in there has been tested by us and we certify that that device has the right security properties. So, we encourage our customers to actually pick from those devices so that they, they actually get the best security properties.Natalia Godyla:Wow. That's a lot, which is incredible.
What's next for Microsoft's, uh, approach to IoT security?Arjmand Samuel:Yeah, yeah. So, uh, one of the key things that we have heard our customers, anybody who's going into IoT, ask the question, what is the risk I'm taking? Right? So, I'm deploying all these devices in my factories and robotic arms connecting them, and so on, but there's a risk here. And how do I quantify that risk? How do I understand th- that risk and how do I do something about that risk?Arjmand Samuel:So, we, we got those questions many years back, like four, five years back. We started working with the industry and together with the Industrial Internet Consortium, IIC, which is a consortium out there and there are many companies part of that consortium, we led something called The Security Maturity Model for IoT. So, so, we put down a set of principles and a set of processes you follow to evaluate the maturity of your security in IoT, right? So, it's an actionable thing. You take the document, you evaluate, and then once you have evaluated, it actually gives you a score. It says you're level one, or two, or three, or four. And then based on th- that level, you know where you are, first of all. So, you know what your weaknesses are and what you need to do. So, that's a very actionable thing. But beyond that, if you're at level two and you want to be at level four, and by want to I mean your scenario dictates that you should be at level four, it is actionable. It gives you a list of things to do to go from level two to level four. And then you can reevaluate yourself and then you know that you're at level four. So, that's a maturity model.Arjmand Samuel:Now, in order to operationalize that program, in partnership with IIC, we also have been, and IIC's help, uh, has been instrumental here, we have been working on a training program where we have been training auditors.
These are IoT security auditors, third party, independent auditors who are now trained on the SMM, the Security Maturity Model. And we tell our customers, if you have a concern, get yourself audited using SMM, using the auditors, and that will tell you where you are and where you need to go. So, it's evolving. Security for IoT's evolving, but I think we are at the forefront of that evolution.Nic Fillingham:Just to, sort of, finish up here, I'm thinking of some of the recent IoT security stories that were in the news. We won't mention any specifically, but there, there have been some recently. My takeaway hearing those stories, reading those stories in the news, is that, oh, wow, there's probably a lot of organizations out here and maybe individuals at companies that are using IoT and OT devices that maybe don't see themselves as being security people or having to think about IoT security, you know, OT security. I just wonder, do you think there is a, a population of folks out here that don't think of themselves as IoT security people, but they really are? And then therefore, how do we sort of go find those people and help them go, get educated about securing IoT devices?Arjmand Samuel:Yeah, that's, uh, that's exactly what we are trying to do here. So, uh, people who know security can obviously know the bad things that can happen and can do something about it, but the worst part is that in OT, people are not thinking about all the bad things that can happen in the cyber world. You mentioned that example with that treatment plant. It should never have been connected to the network, unless required. And if it was connected to the, uh, to the network, to the internet, you should have had a ton of mitigations in place in case somebody was trying to come in, and should have been stopped. And in that particular case, y- there was a phishing attack and the administrative password was, was taken over.
But even with that, with the, some of our products, like Defender for IoT, can actually detect the administrative behavior and can, can detect if an administrator is trying to do bad things. It can still tell other administrators there's bad things happening.Arjmand Samuel:So, there's a ton of things that one could do, and it all comes down, what we have realized is it all comes down to making sure that this word gets out, that people know that there are bad things that can happen with IoT and it's not only your data being stolen. It's very bad things, as in that example. And so, let's get the word out, uh, so that we can, uh, we can actually make IoT more secure.Nic Fillingham:Got it. Arjmand, again, thanks so much for your time. It sounds like we really need to get the word out. IoT security is a thing. You know, if you work in an organization that employs IoT or OT devices, or think you might, go and download this white paper. Um, we'll put the link in the, uh, in the show notes. You can just search for it also probably on the Microsoft Security Blog and learn more about cyber security for IoT, how to apply the zero trust model. Share it with your, with your peers and, uh, let's get as much education as we can out there.Arjmand Samuel:Thank you very much for this, uh, opportunity.Nic Fillingham:Thanks, Arjmand, for joining us. I think we'll definitely touch on cyber security for IoT, uh, in future episodes. So, I'd love to talk to you again. (music)Arjmand Samuel:Looking forward to it. (music)Natalia Godyla:Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode.Nic Fillingham:And don't forget to Tweet us @MSFTSecurity or email us at securityunlocked@Microsoft.com with topics you'd like to hear on a future episode. (music) Until then, stay safe.Natalia Godyla:Stay secure. (music)
Looking a Gift Card Horse in the Mouth
Is it just me, or do you also miss the good ole days of fraudulent activity? You remember the kind I'm talking about, the emails from princes around the world asking for just a couple hundred dollars to help them unfreeze or retrieve their massive fortune, which they would share with you. Attacks have grown more nuanced, complex, and invasive since then, but because of the unbelievable talent at Microsoft, we're constantly getting better at defending against it. On this episode of Security Unlocked, hosts Nic Fillingham and Natalia Godyla sit down with returning champion, Emily Hacker, to discuss Business Email Compromise (BEC), an attack that has perpetrators pretending to be someone from the victim's place of work and instructing them to purchase gift cards and send them to the scammer. Maybe it's good to look a gift card horse in the mouth? In This Episode You Will Learn: Why BEC is such an effective and pervasive attack What are the key things to look out for to protect yourself against one Why BEC emails are difficult to track Some Questions We Ask: How do the attackers mimic a true-to-form email from a colleague? Why do we classify this type of email attack separately from others? Why are they asking for gift cards rather than cash? Resources: Emily Hacker's LinkedIn: https://www.linkedin.com/in/emilydhacker/ FBI's 2020 Internet Crime Report: https://www.ic3.gov/Media/PDF/AnnualReport/2020_IC3Report.pdf Nic Fillingham's LinkedIn: https://www.linkedin.com/in/nicfill/ Natalia Godyla's LinkedIn: https://www.linkedin.com/in/nataliagodyla/ Microsoft Security Blog: https://www.microsoft.com/security/blog/ Related: Security Unlocked: CISO Series with Bret Arsenault https://SecurityUnlockedCISOSeries.com Transcript: [Full transcript can be found at https://aka.ms/SecurityUnlockedEp35] Nic Fillingham:Hello, and welcome to Security Unlocked, a new podcast from Microsoft, where we unlock insights from the latest in news and research from across Microsoft security engineering and operations teams.
I'm Nic Fillingham.Natalia Godyla:And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft security, deep dive into the newest threat intel, research and data science.Nic Fillingham:And profile some of the fascinating people working on artificial intelligence in Microsoft security.Natalia Godyla:And now, let's unlock the pod.Nic Fillingham:Hello listeners, hello, Natalia, welcome to episode 35 of Security Unlocked. Natalia, how are you?Natalia Godyla:I'm doing well as always and welcome everyone to another show.Nic Fillingham:It's probably quite redundant, me asking you how you are and you asking me how you are, 'cause that's not really a question that you really answer honestly, is it? It's not like, "Oh, my right knee's packing at the end a bit," or "I'm very hot."Natalia Godyla:Yeah, I'm doing terrible right now, actually. I, I just, uh- Nic Fillingham:Everything is terrible.Natalia Godyla:(laughs)Nic Fillingham:Well, uh, our guest today is, is a returning champ, Emily Hacker. This is her third, uh, appearance on Security Unlocked, and, and she's returning to talk to us about a, uh, new business email compromise campaign that she and her colleagues helped unearth focusing on some sort of gift card scam.Nic Fillingham:We've covered business email compromise before or BEC on the podcast. Uh, we had, uh, Donald Keating join us, uh, back in the early days of Security Unlocked on episode six. The campaign itself, not super sophisticated as, as Emily sort of explains, but so much more sort of prevalent than I think a lot of us sort of realize. BEC was actually the number one reported source of financial loss to the FBI in 2020. Like by an order of magnitude above, sort of, you know, the second place, third place, fourth place. You know, I think the losses were in the billions, this is what was reported to the FBI, so it's a big problem.
And thankfully, we've got people like, uh, Emily on it.Nic Fillingham:Natalia, can you give us the TLDR on the, on the campaign that Emily helps describe?Natalia Godyla:Yeah, as you said, it's, uh, a BEC gift card campaign. So the attackers used typosquatted domains and socially engineered employees, impersonating executives to request that they purchase gift cards. And the request is very vague. Like, "I need you to do a task for me," or "Let me know if you're available." And they used that authority to convince the employees to purchase the gift cards for them. And they then converted the gift cards into crypto at, at scale to collect their payout.Nic Fillingham:Yeah, and we actually discuss with Emily that, that between the three of us, Natalia, myself and Emily, we actually didn't have a good answer for how the, uh- 
And you were one of the folks that co-authored a blog post from May 6th, talking about a new campaign that was discovered utilizing gift card scams. First of all, welcome back. Thanks for being a return guest. Second of all, do I get credit or do I get blame for the tweet that enabled you to, to- Emily Hacker:(laughs) It's been so long, I was hoping you would have forgotten.Nic Fillingham:(laughs) Emily and I were going backward and forward on email, and I basically asked Emily, "Hey, Emily, who's like the expert at Microsoft on business email compromise?" And then Emily responded with, "I am."Emily Hacker:(laughs)Nic Fillingham:As in, Emily is. And so I, I think I apologized profusely. If I didn't, let me do that now for not assuming that you are the subject matter expert, but that then birthed a very fun tweet that you put out into the Twitter sphere. Do you wanna share that with the listeners or is this uncomfortable and we need to cut it from the audio?Emily Hacker:No, it's fine. You can share with the listeners. I, uh- Nic Fillingham:(laughs)Emily Hacker:... I truly was not upset. I don't know if you apologized or not, because I didn't think it was the thing to apologize for. Because I didn't take your question as like a, "Hey," I'm like, "Can you like get out of the way?" I did not take it that way at all. It was just like, I've been in this industry for five years and I have gotten so many emails from people being like, "Hey, who's the subject matter expert in X?" And I'm always having to be like, "Oh, it's so and so," you know, or, "Oh yeah, I've talked to them, it's so-and-so." And for once I was like, "Oh my goodness, it me."Natalia Godyla:(laughs)Emily Hacker:Like I'm finally a subject matter expert in something. It took a long time.
So the tweet was, was me being excited that I got to be the subject matter expert, not me being upset at you for asking who it was.Nic Fillingham:No, I, I took it in it's, I did assume that it was excitement and not crankiness at me for not assuming that it would be you. But I was also excited because I saw the tweet, 'cause I follow you on Twitter and I'm like, "Oh, that was me. That was me." And I got to use- Emily Hacker:(laughs)Nic Fillingham:... I got to use the meme that's the s- the, the weird side eye puppet, the side, side eye puppet. I don't know if that translates. There's this meme where it's like a we-weird sort of like H.R. Pufnstuf sort of reject puppet, and it's sort of like looking sideways to the, to the camera.Emily Hacker:Yes.Nic Fillingham:Uh, I've, and I've- Emily Hacker:Your response literally made me laugh a while though alone in my apartment.Nic Fillingham:(laughs_ I've never been able to use that meme in like its perfect context, and I was like, "This is it."Emily Hacker:(laughs) We just set that one up for a comedy home run basically.Nic Fillingham:Yes, yes, yes. And I think my dad liked the tweet too- Natalia Godyla:(laughs)Nic Fillingham:... so I think I had that, so that was good.Emily Hacker:(laughs)Nic Fillingham:Um, he's like my only follower.Emily Hacker:Pure success.Nic Fillingham:Um, well, on that note, so yeah, we're here to talk about business email compromise, which we've covered on the, on the podcast before. You, as I said, uh, co-authored this post for May 6th. We'll have a, a broader conversation about BEC, but let's start with these post. Could you, give us a summary, what was discussed in this, uh, blog post back on, on May 6th?Emily Hacker:Yeah, so this blog post was about a specific type of business email compromise, where the attackers are using lookalike domains and lookalike email addresses to send emails that are trying, in this particular case, to get the user to send them a gift card. 
And so this is not the type of BEC that a lot of people might be thinking of in terms of conducting wire transfer fraud, or, you know, you read in the news like some company wired several million dollars to an attacker. That wasn't this, but this is still creating a financial impact in that the recipient is either gonna be using their own personal funds or in some cases, company funds to buy gift cards, especially if the threat actor is pretending to be a supervisor and is like, "Hey, you know, admin assistant, can you buy these gift cards for the team?" They're probably gonna use company funds at that point.Emily Hacker:So it's still something that we keep an eye out for. And it's actually, these gift card scams are far and away the most common, I would say, type of BEC that I am seeing when I look for BEC type emails. It's like, well over, I would say 70% of the BEC emails that I see are trying to do this gift card scam, 'cause it's a little easier, I would say for them to fly under the radar maybe, uh, in terms of just like, someone's less likely to report like, "Hey, why did you spend $30 on a gift card?" Than like, "Hey, where did those like six billion dollars go?" So like in that case, this is probably a little easier for them to fly under the radar for the companies.
But in terms of impact, if they send, you know, hundreds upon hundreds of these emails, the actors are still gonna be making a decent chunk of change at the end of the day.Emily Hacker:In this particular instance, the attackers had registered a couple hundred lookalike domains that aligned with real companies, but were just a couple of letters or digits off, or were using a different TLD, or used like a number instead of a letter or something, something along those lines, to where you can look at it and be like, "Oh, I can tell that the attacker is pretending to be this other real company, but they are actually creating their own."Emily Hacker:But what was interesting about this campaign that I found pretty silly honestly, was that normally when the attacker does that, one would expect them to impersonate the company that their domain is looking like, and they totally didn't in this case. So they registered all these domains that were lookalike domains, but then when they actually sent the emails, they were pretending to be different companies, and they would just change the display name of their email address to match whoever they were impersonating.Emily Hacker:So in one of the examples in the blog, they're impersonating a guy named Steve, and Steve is a real executive at the company that they sent this email to. But the email address that they registered here was not Steve, and the domain was not for the company that Steve works at. So they got a little bit, I don't know if they like got their wires crossed, or if they just were using the same infrastructure that they were gonna use for a different attack, but these domains were registered the day before this attack. So it definitely doesn't seem opportunistic, in that it doesn't seem like some actors were like, "Oh, hey look, free domains. We'll send some emails." Like they were brand new and just used for strange purposes.Natalia Godyla:Didn't they also fake data in the headers?
Why would they be so careless about connecting the company to the language in the email body but go through the trouble of editing the headers?Emily Hacker:That's a good question. They did edit the headers in one instance that I was able to see, granted I didn't see every single email in this attack because I just don't have that kind of data. And what they did was they spoofed one of the headers, which is an in-reply-to header, which is the header that would let us know that it's a real reply. But I worked really closely with a lot of email teams and we were able to determine that it was indeed a fake reply.Emily Hacker:My only guess, honestly, guess as to why that happened is one of two things. One, the domain thing was like a, a mess up, like if they had better intentions and the domain thing went awry. Or number two, it's possible that this is multiple attackers conducting this. If one guy was responsible for the emails with the mess of domains, and a different person was responsible for the one that had the email header, like maybe the email header guy is just a little bit more savvy at this whole job of crime than the first guy.Natalia Godyla:(laughs)Nic Fillingham:Yeah, I li- I like the idea of, uh, sort of a ragtag group. I don't mean to make them an attractive image, but, you know, a ragtag group of people here. And like, you've got a very competent person who knows how to go and sort of spoof domain headers, and you have a less competent person who is- Emily Hacker:Yeah. It's like Pinky and the Brain.Nic Fillingham:Yeah, it is Pinky and the Brain. That's fantastic. I love the idea of Pinky and the Brain trying to conduct a multi-national, uh- Emily Hacker:(laughs)Nic Fillingham:... BEC campaign as their way to try and take over the world. Can we back up a little bit? We jumped straight into this, which is totally, you know, we asked you to do that. So, but let's go back to a little bit of basics. BEC stands for business email compromise.
It is distinct from, I mean, do you say CEC for consumer email compromise? Like what's the opposite side of that coin? And then can you explain what BEC is for us and why we sort of think about it distinctly?Emily Hacker:Mm-hmm (affirmative), so I don't know if there's a term for the non-business side of BEC other than just scam. At its basest form, what BEC is, is just a scam where the threat actors are just trying to trick people out of money or data. And so it doesn't involve any malware for the most part at the BEC stage of it. It doesn't involve any phishing for the most part at the BEC stage of it. Those things might exist earlier in the chain, if you will, for more sophisticated attacks. Like an attacker might use a phishing campaign to get access before conducting the BEC, or an attacker might use like a RAT on a machine to gain access to emails before the actual BEC. But the business email compromise email itself, for the most part is just a scam. And what it is, is when an attacker will pretend to be somebody at a company and ask for money or data, that can include, you know, like W-2's, in which case that was still kind of BEC.Emily Hacker:And when I say that they're pretending to be this company, there's a few different ways that that can happen. And so, the most, in my opinion, sophisticated version of this, but honestly the term sophisticated might be loaded and arguable there, is when the attacker actually uses a real account. So business email compromise, the term might imply that sometimes you're actually compromising an email.
And those are the ones where I think are what people are thinking of when they're thinking of these million billion dollar losses, where the attacker gains access to an email account and basically replies as the real individual.Emily Hacker:Let's say that there was an email thread going on between accounts payable and a vendor, and the attacker has compromised the, the vendor's email account, well, in the course of the conversation, they can reply to the email and say, "Hey, we just set up a new bank account. Can you change the information and actually wire the million dollars for this particular project to this bank account instead?" And if the recipient of that email is not critical of that request, they might actually do that, and then the money is in the attacker's hands. And it's difficult to be critical of that request because it'll sometimes literally just be a reply to an ongoing email thread with someone you've probably been doing business with for a while, and nothing about that might stand out as strange, other than them changing the account. It can be possible, but difficult to get it back in those cases. But those are definitely the ones that are, I would say, the most tricky to spot.Emily Hacker:More common, I would say, what we see is the attacker is not actually compromising an email, not necessarily gaining access to it, but using some means of pretending or spoofing or impersonating an email account that they don't actually have access to. And that might include registering lookalike domains as in the case that we talked about in this blog. And that can be typosquatted domains or just lookalike domains, where, for example, I always use this example, even though I doubt this domain is available, but instead of doing microsoft.com, they might do Microsoft with a zero, or like Microsoft using R-N-I-C-R-O-S-O-F-t.com. So it looks like an M at first glance, but it's actually not. 
Or they might do something like microsoft-com.org or something, which obviously would not be available, but you get the point. Where they're just getting these domains that kind of look like the right one so that somebody, at first glance, will just look up and be like, "Oh yeah, that looks like Microsoft. This is the right person."Emily Hacker:They might also, more commonly, just register emails using free email services and either do one of two things, make the email specific to the person they're targeting. So let's say that an attacker was pretending to be me. They might register email@example.com, or more recently and maybe a little bit more targeted, they might register like firstname.lastname@example.org, and then they'll send an email as me. And then on the, I would say, less sophisticated end of the spectrum, is when they are just creating an email address that's like email@example.com. And then they'll use that email address for like tons of different targets, like different victims. And they'll either just change the display name to match someone at the company that they're targeting, or they might just change it to be like executive or like CEO or something, which is, like, the least believable of the bunch in my opinion, when they're just reusing the free emails.Emily Hacker:So that's kind of the different ways that they can impersonate or pretend to be these companies, but I see all of those being used in various ways. But for sure the most common is the free email service. And I mean, it makes sense, because if you're gonna register a domain name, that costs money and it takes time and takes skill, same with compromising an email account, but it's quick and easy just to register a free email account. So, yeah.Nic Fillingham:So just to sort of summarize here. So business email compromise i-is obviously very complex.
There's lots of facets to it.Emily Hacker:Mm-hmm (affirmative).Nic Fillingham:It sounds like, first of all, it's targeted at businesses as opposed to targeting individuals. Targeting individuals is just more simple scams. We can talk about those, but business email compromise, targeted at businesses- Emily Hacker:Mm-hmm (affirmative).Nic Fillingham:... and the end goal is probably to get some form of compromise, which could be in different ways, but some sort of compromise of a communication channel or a communication thread with that business to ultimately get some money out of them?Emily Hacker:Yep, so it's a social engineering scheme to get whatever their end goals are, usually money. Yeah.Nic Fillingham:Got it. Like if I buy a gift card for a friend or family member for their birthday, and I give that to them, the wording on the bottom says pretty clearly, like not redeemable for cash. Like it's- Emily Hacker:So- Nic Fillingham:... so what's the loophole they're taking advantage of here?Emily Hacker:Criminals kind of crime. Apparently- Natalia Godyla:(laughs)Emily Hacker:... there are sites, you know, on the internet specifically for cashing out gift cards for cryptocurrency.Nic Fillingham:Hmm.Emily Hacker:And so they get these gift cards specifically so that they can cash them out for cryptocurrency, which then is a lot, obviously, less traceable as opposed to just cash. So that is the appeal of gift cards, easier to switch for, I guess, cryptocurrency in a much less traceable manner for the criminals in this regard. And there are probably, you know, you can sell them. Also, you can sell someone a gift card and be like, "Hey, I got a $50 iTunes gift card. Give me $50 and you got an iTunes gift card." I don't know if iTunes is even still a thing.
But like that is another means of, it's just, I think a way of like, especially the cryptocurrency one, it's just a way of distancing themselves one step from the actual payout that they end up with.
Nic Fillingham: Yeah, I mean, it's clearly a, a laundering tactic.
Emily Hacker: Mm-hmm (affirmative).
Nic Fillingham: It's just, I'm trying to think of like, someone's eventually trying to get cash out of this gift card-
Emily Hacker: Mm-hmm (affirmative).
Nic Fillingham: ... and instead of going into Target with 10,000 gift cards, and spending them all, and then turning right back around and going to the returns desk and saying like, "I need to return these $10,000 in gift cards that I just bought."
Emily Hacker: Mm-hmm (affirmative).
Nic Fillingham: I guess I'm just puzzled as to how, at scale-
Emily Hacker: Yeah.
Nic Fillingham: ... and I guess that's the key word here, at scale, at a criminal scale, what's the actual return? Are they getting 50 cents on the dollar? Are they getting five cents on the dollar? Are they getting 95 cents on the dollar? Um, maybe I don't know how to ask that question, but I think it's a fascinating one, I'd love to learn more about.
Emily Hacker: It is a good question. I would imagine that the sites where they exchange them for cryptocurrency are set up in a way where, rather than one person ending up with all the gift cards to where you have an issue like what you're talking about, with like, "Hey, uh, can I casually return these six million gift cards?" Like rather than that, it's more distributed. But there probably is a surcharge in terms of they're not getting a one-to-one, but it's-
Nic Fillingham: Yeah.
Emily Hacker: ... I would not imagine that it's very low. Like I would not imagine that they're getting five cents on the dollar, I would imagine it's higher than that.
Nic Fillingham: Got it.
Emily Hacker: But I don't know.
So, that's a good question.
Natalia Godyla: And we're talking about leveraging this cryptocurrency model to cash them out. So has there been an increase in these scams because they now have this ability to cash them out for crypto? Like, was that a driver?
Emily Hacker: I'm not sure. I don't know how long the crypto cash-out method has been available.
Natalia Godyla: Mm-hmm (affirmative).
Emily Hacker: I've only recently learned about it, but that's just because I guess I don't spend a lot of time dealing with that end of the scam. For the most part, my job is looking at the emails themselves. So, learning what they're doing once they get the gift cards was relatively new to me, but I don't think it's new to the criminals. So it's hard for me to answer that question, not knowing how long the crypto cash-out method has been available to them. But I will say that it does feel like, in the last couple of years, gift card scams have just been either increasing or coming to light more, but I think increasing.
Nic Fillingham: Emily, what's new about this particular campaign that you discussed in the blog? It doesn't look like there's something very new in the approach here. This feels like it's a very minor tweak on techniques that have been employed for a while. Tell me, what's new about this campaign? (laughs)
Emily Hacker: (laughs) Um, so I would agree that this is not a revolutionary campaign.
Nic Fillingham: Okay.
Emily Hacker: And I didn't, you know, choose to write this one into the blog necessarily because it's revolutionary, but rather because this is so pervasive that I felt like it was important for Microsoft customers to be aware that this type of scam is so, I don't know what word, now we're both struggling with words, I wanna say prolific, but suddenly the definition of that word seems like it doesn't fit in that sentence.
Nic Fillingham: No, yeah, prolific, that makes sense.
Emily Hacker: Okay.
Nic Fillingham: Like, it sounds like what you're saying is, this blog exists not because this campaign is very unique and some sort of cutting-edge new technique; it exists because it's incredibly pervasive.
Emily Hacker: Yes.
Nic Fillingham: And lots and lots of people and lots and lots of businesses are probably going to get targeted by it.
Emily Hacker: Exactly.
Nic Fillingham: And we wanna make sure everyone knows about it.
Emily Hacker: Yes, and the only real thing that I would say set this one apart from some of the other ones was the use of the lookalike domains. So many of the gift card scams that I see are free email accounts, Gmail, AOL, Hotmail, but this one was using the lookalike domains. And that kind of gave us a little bit more to talk about, because we could look into when the domains were registered. I saw that they were registered, I think, one to two days before the attack commenced. And that also gave us a little bit more to talk about in terms of BEC in the blog, because this kind of combined a couple of different methods of BEC, right? It has the gift card scam, which we see just all the time, but it also had that kind of lookalike domain, which could help us talk about that angle of BEC.
Emily Hacker: But, Microsoft is definitely starting to focus in on BEC, I don't know, starting to focus in, but increasing our focus on BEC. And so, I think that a lot of the stuff that happens in BEC isn't new. Because it's so successful, there's really not much in the way of reason for the attackers to shift their tactics so dramatically. I mean, even with the more sophisticated attacks, such as the ones where they are compromising an account, those are still just like basic phishing emails, logging into an account, setting up forwarding rules, like this is the stuff that we've been talking about in BEC for a long time.
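Emily's observation that the campaign's domains were registered one to two days before the attack points at a cheap defensive signal: domain age at the time a sender is first seen. A hedged sketch of that check follows; the 30-day threshold is an arbitrary illustration, and in practice the creation date would come from WHOIS or a domain-intelligence feed rather than being passed in directly:

```python
from datetime import datetime, timedelta

def newly_registered(creation_date, first_seen, max_age_days=30):
    """True if a sender's domain was registered shortly before we first
    saw it in mail flow -- a common BEC lookalike-domain indicator."""
    age = first_seen - creation_date
    # Negative ages (creation date in the future) are treated as not matching.
    return timedelta(0) <= age <= timedelta(days=max_age_days)
```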
But I think Microsoft is talking about these more now because we are trying to get the word out, you know, about this being such a big problem, and wanting to shift the focus more to BEC so that more people are talking about it and solving it.
Natalia Godyla: It seemed like there was A/B testing happening with the cybercriminals. They had occasionally a soft intro, where someone would email and ask like, "Are you available?" And then when the target responded, they then tried to get money from that individual, or they just immediately asked for money.
Emily Hacker: Mm-hmm (affirmative).
Natalia Godyla: Why the different tactics? Were they actually attempting to be strategic to test which version worked, or was it just, like you said, different actors using different methods?
Emily Hacker: I would guess it's different actors using different methods. Or another thing it could be was that they don't want the emails to say the same thing every time, because then it would be really easy for someone like me to just identify them-
Natalia Godyla: Mm-hmm (affirmative).
Emily Hacker: ... in terms of looking at mail flow for those specific keywords or whatever. If they switch them up a little bit, it makes it harder for me to find all the emails, right? Or anybody. So I think that could be part of the case, in terms of sending the exact same email every time is gonna make it really easy for me to be like, "Okay, well here's all the emails." But I think there could also be something strategic to it as well. I just saw one just yesterday actually, or what day is it, Tuesday? Yeah, so it must've been yesterday, where the attacker did a real reply.
Emily Hacker: So they sent the, the soft opening, as you said, where it just says, "Are you available?" And then they had sent a second one that asked that full question, in terms of like, "I'm really busy, I need you to help me, can you call me or email me," or something. Not call, obviously, because they didn't provide a phone number.
Sometimes they do, but in this case, they didn't. And they had actually responded to their own email. So the attacker replied to their own email to kind of get that second push to the victim. The victim just reported the email to Microsoft, so they didn't fall for it. Good for them. But it does seem that there might be some strategy involved, or desperation. I'm not sure which one.
Natalia Godyla: (laughs) Fine line between the two.
Emily Hacker: (laughs)
Nic Fillingham: I want to ask a question that I don't know if you can answer, because I don't wanna ask you to essentially, you know, jeopardize any operational security or sort of tradecraft here, but can you give us a little tidbit of a glimpse of your job, and how you sort of do this day-to-day? Are you going and registering new email accounts and intentionally putting them in dodgy places in hopes of being the recipient? Or are you just responding to emails that have been reported as phishing from customers? Are you doing other things? Like, again, I don't wanna jeopardize any of your operational security or, you know, the processes that you use, but how do you find these?
Emily Hacker: Mm-hmm (affirmative).
Nic Fillingham: And how do you then sort of go and follow the threads and uncover these campaigns?
Emily Hacker: Yeah, there's a few ways, I guess, that we look for these. We don't currently have any kind of like honey accounts set up or anything like that, where we would be hoping to be targeted and find them that way. I know there are different entities within Microsoft who do different things, right? So my team is not the entity that would be doing that. So my team's job is more looking at what already exists.
So we're looking at stuff that customers have reported, and we're also looking at open source intelligence. If anyone else has tweeted or released a blog or something about an ongoing BEC campaign, that might be something where I can then go look at our data and see if we've gotten it.
Emily Hacker: But the biggest way, outside of those — those are the two, I would say, smaller ways — the biggest way that we find these campaigns is we do technique tracking. So we have lots of different, we call them traps, basically, and they run over all mail flow, and they look for certain either keywords or, there are so many different things that they run on. Obviously not just keywords, I'm just trying to be vague here. But like they run on a bunch of different things, and they have different names. So if an email hits on a certain few items, that might tell us, "Hey, this one might be BEC," and then that email can be surfaced to me to look into.
Emily Hacker: Unfortunately, BEC is a little bit more difficult to track, just by the nature of it not containing phishing links or malware attachments or anything along those lines. So it is a little bit more keyword based. And so, a lot of times it's like looking at 10,000 emails and looking for the one that is bad when they all kind of use the same keywords. And of course, we don't just get to see every legitimate email, 'cause that would be like a crazy customer privacy concern.
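The "traps" Emily describes — several weak signals run over mail, with an email surfaced for review when it trips enough of them — could look, very roughly, like the sketch below. The trap names, keyword lists, and threshold are all invented for illustration; as she notes, the real system runs on much more than keywords:

```python
# Hypothetical trap definitions: each trap is a named bag of weak keyword signals.
TRAPS = {
    "gift_card_language": ["gift card", "itunes", "google play"],
    "urgency": ["are you available", "need you to help", "urgent"],
    "payment_redirect": ["wire transfer", "change of bank", "invoice"],
}

def traps_hit(body):
    """Return the set of trap names this email body trips."""
    text = body.lower()
    return {name for name, keywords in TRAPS.items()
            if any(k in text for k in keywords)}

def looks_like_bec(body, min_traps=2):
    """Surface an email for human review when it hits several traps at once."""
    return len(traps_hit(body)) >= min_traps
```

Requiring multiple independent traps, rather than any single keyword, is what keeps a scheme like this from drowning an analyst in the 10,000 benign emails that share one phrase with a scam.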
So we only get to really see certain emails that are suspected malicious by the customer, in which case it does help us a little bit, because they're already surfacing the bad ones to us.
Emily Hacker: But yeah, that's how we find these, just by looking for the ones that already seem malicious and applying logic over them to see like, "Hmm, this one might be BEC," or, you know, we do that not just for BEC, but like, "Hmm, this one seems like it might be this type of phishing," or like, "Hmm, this one seems like it might be a BazaCall," or whatever, you know, these types of things that will surface all these different emails to us in a way that we can then go investigate them.
Nic Fillingham: So for the folks listening to this podcast, what do you want them to take away from this? What do you want us to know on the SOC side, on the-
Emily Hacker: Mm-hmm (affirmative).
Nic Fillingham: ... on the SOC side? Like, is there any additional sort of, what are some of the fundamentals and sort of basics of BEC hygiene? Is there anything else you want folks to be doing to help protect the users in their organizations?
Emily Hacker: Yeah, so I would say not to just focus on monitoring what's going on on the endpoint, because BEC activity is not going to have a lot, if anything, that's going to appear on the endpoint. So making sure that you're monitoring emails, and looking for not just emails that contain malicious links or attachments, but also looking for emails that might contain BEC keywords. Or even better, if there's a way for you to monitor your organization's forwarding rules: if a user suddenly sets up a slew of new forwarding rules from their email account, see if there's a way to turn that into a notification or an alert, I mean, to you in the SOC.
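Emily's forwarding-rule suggestion can be sketched as a simple detection over rule-creation events. This is an illustrative toy, not the product logic Natalia mentions later in Microsoft Defender for Office 365; the threshold and time window are made up:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def suspicious_forwarding(events, min_rules=3, window=timedelta(hours=1)):
    """Flag users who suddenly create a slew of mailbox forwarding rules.

    events: iterable of (user, created_at) rule-creation records.
    Returns the set of users who created >= min_rules rules inside `window`.
    """
    by_user = defaultdict(list)
    for user, ts in events:
        by_user[user].append(ts)

    flagged = set()
    for user, times in by_user.items():
        times.sort()
        for i in range(len(times)):
            # Count rule creations in the window starting at times[i].
            j = i
            while j < len(times) and times[j] - times[i] <= window:
                j += 1
            if j - i >= min_rules:
                flagged.add(user)
                break
    return flagged
```

In a real SOC the event stream would come from audit logs (for example, mailbox rule-creation entries), and the alert would feed a case queue rather than just returning a set.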
And that's a really key indicator that that might be BEC — not necessarily a gift card scam, but BEC.
Emily Hacker: Or see if there is a way to monitor, uh, not monitor, but like, if your organization has users reporting phishing mails, if you get one that's like, "Oh, this is just your basic low-level credential phishing," don't just toss it aside and be like, "Well, that was just one person, and it's a really crappy voicemail phish, no one's going to actually fall for that." Actually look and see how many people got the email. See if anybody clicked. Force password resets on the people that clicked, or, if you can't tell who clicked, on everybody. Because it really only takes one person to have clicked on that email and you not reset their password, and now the attackers have access to your organization's email and they can be conducting this kind of wire transfer fraud.
Emily Hacker: So like, and I know we're all overworked in this industry, and I know that it can be difficult to try and focus on everything at once. And especially, you know, if you're being told, like, our focus is ransomware, we don't want to have ransomware, you're just constantly monitoring endpoints for suspicious activity. But it's important to try and make sure that you're not neglecting the stuff that only exists in email as well.
Natalia Godyla: Those are great suggestions. And I'd be remiss not to note that some of those suggestions are available in Microsoft Defender for Office 365, like the suspicious forwarding alerts or attack simulation training for user awareness. But thank you again for joining us, Emily, and we hope to have you back on the show many more times.
Emily Hacker: Yeah, thanks so much for having me again.
Natalia Godyla: Well, we had a great time unlocking insights into security from research to artificial intelligence.
Keep an eye out for our next episode.
Nic Fillingham: And don't forget to tweet us @msftsecurity, or email us at firstname.lastname@example.org with topics you'd like to hear on a future episode. Until then, stay safe.
Natalia Godyla: Stay secure.
Simulating the Enemy
How does that old saying go? Keep your friends close, and keep your understanding of a threat actor's underlying behavior and functionality of tradecraft closer? As new tools are developed and implemented for individuals and businesses to protect themselves, wouldn't it be great to see how they hold up against different attacks without actually having to wait for an attack to happen? Microsoft's new open-source tool, SimuLand, allows users to simulate attacks on their own infrastructure to see where their own weaknesses lie. In this episode of Security Unlocked, hosts Natalia Godyla and Nic Fillingham sit down with Roberto Rodriguez, Principal Threat Researcher for the Microsoft Threat Intelligence Center (MSTIC) and SimuLand's developer, to understand how the project came to life, and what users can expect as they use it.
In This Episode You Will Learn: How community involvement will help SimuLand grow; How individuals can use SimuLand to see examples of actions threat actors can take against their infrastructure; What other projects and libraries went into SimuLand's development
Some Questions We Ask: What exactly is being simulated in SimuLand? What does Roberto hope users will take away from SimuLand? What is next for the SimuLand project?
Resources: Roberto Rodriguez's LinkedIn: https://www.linkedin.com/in/roberto-rodriguez-96b86a58/ Roberto's blog post, SimuLand: Understand adversary tradecraft and improve detection strategies: https://www.microsoft.com/security/blog/2021/05/20/simuland-understand-adversary-tradecraft-and-improve-detection-strategies/ Roberto's Twitter: Cyb3rWard0g https://twitter.com/Cyb3rWard0g Nic Fillingham's LinkedIn: https://www.linkedin.com/in/nicfill/ Natalia Godyla's LinkedIn: https://www.linkedin.com/in/nataliagodyla/ Microsoft Security Blog: https://www.microsoft.com/security/blog/ Related: Security Unlocked: CISO Series with Bret Arsenault https://SecurityUnlockedCISOSeries.com
Transcript: [Full transcript can be found at https://aka.ms/SecurityUnlockedEp34]
Nic Fillingham: Hello and welcome
to Security Unlocked, a new podcast from Microsoft, where we unlock insights from the latest in news and research from across Microsoft security engineering and operations teams. I'm Nic Fillingham.
Natalia Godyla: And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft Security, deep dive into the newest threat intel, research and data science.
Nic Fillingham: And profile some of the fascinating people working on artificial intelligence in Microsoft Security.
Natalia Godyla: And now, let's unlock the pod.
Nic Fillingham: Hello listeners. Hello, Natalia. Welcome to episode 34 of Security Unlocked. Natalia, how are you?
Natalia Godyla: I'm doing well, thanks for asking. And hello everyone.
Nic Fillingham: On today's episode, we have Principal Threat Researcher from the MSTIC group, Roberto Rodriguez, who is here to talk to us about SimuLand, which is a new open source initiative, uh, that Roberto announced and discussed in a blog post from May the 20th, 2021. Natalia, you've got an overview here of SimuLand. Can you give us the TLDR?
Natalia Godyla: Of course. So SimuLand is, like you said, an open source initiative at Microsoft that helps security researchers test real attack scenarios and determine the effectiveness of the detections in products such as Microsoft 365 Defender, Azure Defender and Azure Sentinel, with the intent of expanding it beyond those products in the future.
Nic Fillingham: And Roberto obviously will sort of expand upon that in the interview. Uh, one of the questions we asked Roberto is how did this all begin? And it began with an email from someone on Roberto's team saying, "Hey Roberto, could you write a blog post that sort of explains the steps needed to go and, uh, deploy a lab environment that reproduces some of these techniques?" And Roberto said, "Sure." And started writing. And he got to about page 80.
Uh, you got 80 pages in and decided, "You know what, I think I can probably turn this into, uh, a set of scripts or into a tool." And that's sort of the kickoff of the SimuLand project. There's obviously more to it than that, which Roberto will go into, uh, in the interview. The other thing we learned, Natalia, is Roberto might have taken the crown as the busiest person in, in security.
Natalia Godyla: He certainly does. And, uh, lucky us, we get to ask him questions about all of the open source projects that he's been working on. So we'll do a little bit of a harbor cruise through those projects, in addition to SimuLand, in this episode.
Nic Fillingham: And with that, on with the pod.
Natalia Godyla: On with the pod.
Nic Fillingham: Welcome to the Security Unlocked podcast, Roberto Rodriguez. Thanks for your time.
Roberto Rodriguez: Yeah. Thank you. Thank you. Thank you for having me here.
Nic Fillingham: Yeah. We'd love to start with a quick intro. If you could tell the audience, uh, about yourself, about your role at Microsoft, and what does your day-to-day look like?
Roberto Rodriguez: Sure. Yeah. So my name is Roberto Rodriguez. Um, I'm a Principal Threat Researcher for the Microsoft Threat Intelligence Center, known as MSTIC, and I'm part of the R&D team. And my day-to-day, uh, is very interesting. There's a lot of things going on. So my role primarily is to empower all the security researchers in my organization to do, for example, some of their development of detections, performing research in general. So I tend to break my day-to-day down into a couple of pieces. Like the whole research methodology has several different steps.
Roberto Rodriguez: So what I do is I try to innovate in some of those steps in order to expedite the process, trying to maybe come up with some new tools that they could use.
And at the same time, I like to dissect adversary tradecraft, and then try to take that knowledge and share it with others, and try to collaborate with other teams as well. Not only in MSTIC, but yeah, across like other teams at Microsoft as well.
Natalia Godyla: Thank you for that. And today we're here to talk about one of the blogs you authored on the Microsoft Security blog, SimuLand: Understand adversary tradecraft and improve detection strategies. So, um, can we just start with defining SimuLand? What is SimuLand?
Roberto Rodriguez: Yep. So SimuLand is an open source initiative. It's, it's a project that started just as a blog post to talk about, for example, an end-to-end scenario where we can start mapping detections to it. So we decided to take that idea and start sharing more scenarios with the community, showing them a little bit how, for example, a threat actor could go about trying to compromise specific, you know, resources, either in Azure or on prem. And then try to map all that with some of the detections that we have, trying to validate detections and alerts from different products, from the Microsoft 365 Defender suite, Azure Defender.
Roberto Rodriguez: And of course, Azure Sentinel at the end, trying to, trying to bring all those data sources together and then allow not only people at Microsoft, but outside, right? Customers, or people even trying to use trial licenses, to understand the, you know, the power of all this technology together. Because usually, you know, when you start thinking about all these security products, we always try to picture them as isolated products. So the idea is how we can start providing documentation to deploy lab environments, walk them through a whole scenario, map the, for example, attack behavior to detections, and then just showcase what you can do with, you know, with all these products.
Roberto Rodriguez: Um, that's kind of like the main idea.
And of course, some of the output could be understanding, you know, the adversary in general, trying to go deep beyond just alerts. Because our goal also is not just to say, "Oh, this attack action happens, and then this alert triggers." The idea is to say, first, you know, let's validate those alerts, but then second, we want you to go through and analyze the additional data, the additional context that gets created in every single step, because at the same time, you know, it will be nice to see what people can come up with.
Roberto Rodriguez: You know, there's a lot of different data sets being showcased through this, you know, type of lab environment, where, for example, we believe that there could be other use cases that you can create on the top of all that telemetry. So that's what we want: to expose all that documentation that has helped us, for example, to do internal research. When I joined Microsoft, there was not much, I would say, from a lab environment that was fully documented, to deploy and then just try to use it right away when there is an incident, for example, or just trying to do research in general. So my idea was, why can't we share all this with the community and see if they could also benefit, because we're using this also internally.
Nic Fillingham: I, I'd love to actually just quickly look at the name. So SimuLand, I'm assuming that's a portmanteau, or is it, is it an acronym? Tell me how you got to SimuLand. Because I think that may actually also help, you know, further clarify what this is.
Roberto Rodriguez: Yeah. So, yeah, SimuLand, uh, it's, I believe, you know, it comes from as... Well, it has also some context around Spanish. Uh, so in Spanish we say simulando.
So simulando means simulating something.
Nic Fillingham: Okay.
Roberto Rodriguez: But at the same time, I feel that SimuLand, the idea was to say, deploy this environment, which could turn into, let's say, like a land out there that is primarily to simulate stuff and to start, you know, learning about adversary tradecraft. So it's kind of like the SimuLand, like the simulating land or the land of the simulation. And then also in Spanish, they say simulando. So it has a couple of different meanings, but the main one is: this is the land where you can simulate something, and then learn, and learn about that simulation in general.
Roberto Rodriguez: So that was kind of like the thought that, you know, went behind it. Not probably too much, but, uh, (laughs) that was the idea. And I think that people liked it. I think it just stayed with the project. So-
Nic Fillingham: And, and given that you're s- you're simulating sort of the threat space, is this land that's being simulated, is this your sort of sovereign, uh, land to protect? Or is this the, is this the actual sort of the theater of cyber war? Like what are you simulating here? Are you simulating the attacker's environment? Are you simulating your environment? Are you simulating both?
Roberto Rodriguez: Yeah, it's a great question. So we're trying to primarily, of course, simulate, let's say, an organization that has, for example, like on-prem resources that are trying to connect to an Azure cloud infrastructure, for example. So simulating that environment first, but then at the same time, trying to execute some of those, for example, actions that a threat actor could take in order to compromise the environment. And of course, that could come with some of the tools that are used also by, you know, known threat actors. We're trying to stay with public tools.
So things that are already out there, things that have been also identified by a few threat reports out there as well.
Roberto Rodriguez: So we're trying to use what others also could use right away. You know, we don't want to, you know, of course, share code or applications that no one has ever seen out there. So the idea is to primarily simulate the full organization environment, like an example of what that environment will look like, but then at the same time use public tools to perform some actions in the environment.
Natalia Godyla: So, as you said before, you're exposing a lab environment that you had been leveraging internally at Microsoft so the community can benefit from it. What was the community using before in order to either test these products or do further research?
Roberto Rodriguez: Sure. So I would say that there are a lot of different communities that were building, let's say, like, for example, some Active Directory environments, uh, trying to simulate the creation of different, you know, Windows endpoints, um, on a specific domain. And then they were using a lot of open source tools, for example, like, you know, things such as Sysmon from a Windows perspective, or osquery, also on Windows, but then on other platforms as well. But at the same time, what I wanted to do is, why can't we use that, which people are used to, trying to use open source tools or just open tools?
Roberto Rodriguez: And then at the same time trying to use, uh, for example, enterprise security controls or products in general. That type of, uh, simulation of a full end-to-end scenario, I have not seen it before. I have seen, for example, some basic examples of one, let's say, um, you know, scenario from the Microsoft Defender evaluation labs, for example. They have a service where you can simulate two to four computers with MDE, which is Microsoft Defender for Endpoint. Those scenarios existed, but there was nothing out there that could have everything in one place.
Roberto Rodriguez: So we're talking about Microsoft Defender for Endpoint, Microsoft Defender for Identity, Microsoft Cloud App Security, Azure Defender. And then on the top of that, Azure Sentinel detections. All that together was not out there. Once again, there were just a couple of scenarios, lab environments, that were touching a few things, but it was not covering the whole framework or the whole platform to test all these different detections. But at the same time, how you can work with everything at once, because that's also one of the goals of the project. We always hear, for example, once again, detections from one product only, but then there is a lot that you can do when you have one detection from MDE, one detection from Azure Sentinel, MDI, et cetera. All that additional context was not public yet before SimuLand.
Roberto Rodriguez: So that's what I was trying to do: to bring all this in one place and, and, you know, bring everything to the SimuLand. (laughs)
Nic Fillingham: Is there a particular scenario, Roberto, that you can sort of walk us through that's gonna, gonna fully cover the gamut of what SimuLand can do?
Roberto Rodriguez: Yes, yes. Definitely. So there is one scenario in there. We're trying to, of course, you know, add more scenarios to this, uh, platform. So the only one that we have in there is what I call golden SAML: to, you know, steal, for example, or forge a SAML token, and then use that in order to, for example, modify Azure AD applications, in order to then use those applications to access mail data, for example. So that's one scenario. The main part is golden SAML. For that scenario, for example, what we're trying to do with SimuLand is to first make sure that we prepare whoever is using SimuLand to understand what it is that you need before you even try to do anything.
Roberto Rodriguez: Right?
Because usually we try to jump directly to the simulation and try to, let's say, attack an environment, but there are a lot of pieces that need to happen before, right? So SimuLand gives you what is called preparation. So in preparation, you understand all the licensing that you might need. Not every scenario will need, let's say, an enterprise license, and there are going to be a couple of scenarios that are going to be simple, so not too much going on in there. But the next step is how to deploy an environment. So once you take care of the licensing, once you take care of, for example, what are the additional resources that you might need to stand up before you deploy a full environment, now we can deploy it.
Roberto Rodriguez: We also provide Azure Resource Manager templates, so ARM templates, to, let's say, first document the environment as code, and then be able just to deploy it with a few commands, um, rather than trying to do everything manually, which is time consuming and too complex to figure out. The next step is, once we have the environment, then we can start, for example, running a few actions. So if we go to golden SAML, a golden SAML scenario starts with, let's, for example, use a compromised account, the one handling the Active Directory Federation Services, for example, in the organization on prem. Then we take that, and then we start, for example, accessing the database where we can steal the certificate to sign tokens.
Roberto Rodriguez: Once we get that, then we can go through that whole scenario step by step. As we go executing every single action, we can start identifying detections, images of what it would look like on MDI, MDE, MCAS, Azure Sentinel, all the way to even showing you some additional settings that you might need to potentially enable if you want to collect more telemetry. And then at the end, which is, you know, closing the scenario with, you know, showing you what it is that you did.
And then, uh, at the same time, all the alerts that triggered or the telemetry that was available.
Roberto Rodriguez: And since we are sharing a full environment where everything is running, then you can just go back to the environment and go deeper. Maybe do some forensics, maybe do some additional incident response actions. So that, that will be, I would say, the end-to-end thing with SimuLand, what you can do once you jump into the project.
Natalia Godyla: And so for users who've jumped into SimuLand and gone through some of the scenarios, what is your intent for the users once they have these results? What's the use case for them, and how do you want them to interact with your team as well? How do you want the community to get involved?
Roberto Rodriguez: Yes, that's a great question. So initially, what we want for people using SimuLand is, once again, go beyond just the alerts. Because alerts are one thing that will trigger; we're taking care of all that. So whoever is using, for example, the Microsoft 365 Defender products in general, you know, they are protected with all these detections, right? But my goal is for a researcher or a security analyst to go deeper into that telemetry, once again, around specific alerts, so that they can learn more about the adversary behavior in general.
Roberto Rodriguez: Usually we just see the alert, and then we stop, and then we just start the incident, and then we pass it to somebody else. I want people to dive into, you know, all this telemetry that is being collected, and start putting together that whole adversary tradecraft, for example. Understanding the behavior, to me, is very important. There are a lot of different things that you can do with the telemetry already in SimuLand. So that's just one of the goals. The second goal is to see if you're even ready for those types of, you know, alerts. For example, what do you do if you get all these four or five alerts in your environment?
How do you respond to that? Roberto Rodriguez: So this could also be part of a training exercise, for example. So there is a couple of things that you can do in there. Another scenario could be, you know, exporting all the data that is being collected and then probably using it for some demos. Once again, also for some training, focusing a lot on trying to understand and learn the adversary tradecraft. For me, that's very important, once again, because we don't just want to learn about one specific indicator of compromise; we want to make sure that we're covering, uh, scenarios that would allow us to, you know, respond to and understand techniques at the tactical level. Roberto Rodriguez: Um, and then, from a collaborating-with-us perspective, I believe that, you know, one thing could be giving us some feedback and seeing what else we could do with these scenarios. There is a couple of people in the community, for example, that are sharing some cool detections on top of the stuff that we already developed. There is a lot of detections being shared through the Azure Sentinel GitHub and through the Microsoft 365 Defender advanced hunting queries GitHub, and there is people just building things on top of that. So we would like to hear more of those scenarios and maybe include all those in SimuLand, so that we can make SimuLand also a place where we can share those cool detection ideas that people might have. Roberto Rodriguez: And that could be shared also with others using the environment. Everything, I would say, from a communication perspective happens through GitHub, through issues. Anything that anybody would like to add, or any features they would like to request, would be nice. We had one person asking us about, can we add, for example, MDO, which is Microsoft Defender for Office 365, I think it is. And so those products, for example, are something that I had not added yet. So that's something that is coming. 
So, uh, that's the type of collaboration that I expect from the community as well. Natalia Godyla: And what's on the roadmap for SimuLand? What's next for evolving the project? Roberto Rodriguez: Yeah. So SimuLand has a couple of things that are coming out. So one is going to be automation, automation of the execution of attacker actions. So right now the deployment is automated. I would say 90% of the deployment is automated. There is a few things that are kind of hard to automate right now, and it's just simple, just like a few more clicks on top of the deployment. But from the attacker's perspective, we wanted to make SimuLand a project where you can walk someone through the whole process. These are the actions that take place in the whole simulation, and then you can start exploring one by one. Roberto Rodriguez: So it's a very manual process to go through the SimuLand labs, for example. So one thing that we wanted to do is to automate those steps, those attacker actions, because, you know, we have, for example, a few people that are taking advantage of how modular SimuLand is, and they do not want to deal with preparation and deployment. All they wanna do is take the execution of the actions and then just plug them into their own environment. Because they say, I already have the same deployment, well, yeah, a similar deployment, with all the tools that you ask to be deployed. Why not? Can I just take the attacker actions and then just start learning, or maybe do it on a scheduled basis, right? Roberto Rodriguez: Like every Friday we execute a few scenarios. So that turned into, uh, a new project, which I'm going to be releasing at Black Hat 2021 in August. That project is called Cloud Katana, and that's a project where I will be using Azure Functions to execute actions automatically. And then the other thing that we have for SimuLand is data export. 
So what I wanna do also is share the data that gets generated after going through the whole SimuLand scenarios, and then just give it to the community. Because, I believe, we also have had a few conversations with people from the community that say, you know what, I don't have the environment to deploy this. Roberto Rodriguez: You know, for example, I don't have resources to, you know, learn about all of this; my company doesn't want to, somehow, I don't know, support these types of projects, right? So a lot of things, you know, people are having some obstacles as well, right, to try to use these things. Even having a subscription in Azure might be an obstacle or constraint for a lot of people. So why not just give them the data with all the actions that were taken, all the alerts that were collected by Azure Sentinel, and then allow them to use, for example, plain Python code or PowerShell or Jupyter notebooks on top of that, like, you know, to analyze the data, build visualizations on top. Roberto Rodriguez: So we want to empower those that, you know, might want to use it but do not have the resources to do it. So that's also, you know, the second thing on the, uh, the list for SimuLand. The other thing is going to be, so we have a lot of things going on, but, (laughs) the other thing is going to be, how can we provide a CI/CD pipeline for the deployment? That's critical, because we want to make sure that people can plug this into, for example, Azure DevOps, and then they can just have the environment running, and they can maybe, you know, bring the deployment down, bring it up every week, run a few scenarios, and bring it down again. Roberto Rodriguez: So we wanted to make sure that it's also flexible for those, too, right, to work with. And what else? I think that the last thing that we have would be trying to see if we can integrate more products from Microsoft, and just share, uh, more scenarios. 
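The export-and-analyze workflow Roberto describes, handing people the collected alerts so they can explore them with plain Python or notebooks, might look something like the following minimal sketch. The file layout and field names here are illustrative assumptions for the example, not SimuLand's actual export schema.

```python
# Sketch of analyzing an exported alert file with plain Python.
# NOTE: the file name and the "provider"/"technique" fields are invented
# for illustration; a real export may use different names and structure.
import json
from collections import Counter

def summarize_alerts(path):
    """Count exported alerts per (provider, ATT&CK technique) pair."""
    with open(path) as f:
        alerts = json.load(f)
    return Counter((a["provider"], a["technique"]) for a in alerts)

if __name__ == "__main__":
    # Fabricated sample data standing in for a real export.
    sample = [
        {"provider": "Azure Sentinel", "technique": "T1528", "title": "OAuth token theft"},
        {"provider": "M365 Defender", "technique": "T1003", "title": "Credential dumping"},
        {"provider": "M365 Defender", "technique": "T1003", "title": "LSASS access"},
    ]
    with open("alerts.json", "w") as f:
        json.dump(sample, f)
    for (provider, technique), n in summarize_alerts("alerts.json").most_common():
        print(provider, technique, n)
```

The same kind of loop is what a Jupyter notebook on top of the export would do, just with richer grouping and visualization.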
We have two or three coming, uh, hopefully in the next couple of months, and it's going to be fun. Yeah. We have a lot of stuff in there. (laughs) Nic Fillingham: Tell me how you built SimuLand and then worked a full-time job in the MSTIC team. Was this actually a special project that you were assigned to, or was this all extracurricular? A little column A, a little column B? Roberto Rodriguez: (laughs) Yeah. So once again, when I started, right, these conversations, so I mentioned that my role is to also empower others and help to, you know, develop, you know, environments for research, because I love to do research as well, like dissecting, yeah, adversary tradecraft is pretty cool. And then the question was just, "Hey, can you build this environment?" Just a simple email. And I was like, "Yeah, I can do that." And, to be honest, it took me maybe a week or two to figure out the infrastructure, and then maybe took me, uh, probably close to a month to write down the whole scenario and make sure that I had the PowerShell scripts that were actually working. Roberto Rodriguez: So let's say it took me probably two months to do this. It was extracurricular activity, (laughs) definitely, besides what I was doing already. Um, and it was fun. I mean, it was fun because that's what I love to do. Also, my boss is super cool, you know, letting me do all this research, and then allowing me to also spend some time trying to get some feedback from our internal team and other teams as well. So yeah. So it turned into just a question: can you do this? And I love those questions. When somebody says, can you do this? I would say yes, but then I don't know what I'm getting myself into. And that's the fun part of it. (laughs) Nic Fillingham: Before we, before we sort of wrap up here, are there any projects that you're working on right now, or that you're contributing to, that you can talk about? Roberto Rodriguez: Yeah. 
So I would say, from an open threat research perspective, there's a project called Mordor. So Mordor is a project where I decided that every time I execute or go through my research process and, let's say, learn about a specific attack technique, I can collect the data, and then I share those datasets through that project. So other people that would like to learn about those techniques can just access the data directly. So you can learn about adversaries through the data instead of trying to go through a whole process to, like, emulate or simulate an adversary. Roberto Rodriguez: Which, for a lot of people, is not that easy. So, you know, for me, I wanted to find ways to expedite that process. Uh, so that project is something that I'm, you know, revamping, uh, soon. So I'm collecting more datasets from the cloud. Most of my datasets were Windows-based. I have a couple from Linux. I have some from AWS, but I wanted to get more from, you know, from Azure. So SimuLand datasets are going to live in the Mordor project. So, you know, anything that, you know, gets out of SimuLand gets contributed directly to an open source project as well. Roberto Rodriguez: So that's one of them. And the other one is Cloud Katana, which is the one that I talked about a couple of minutes ago. So Cloud Katana, the automation of SimuLand attack actions, is one I'm spending, uh, a lot of time on. That one will be released under Azure, but it is still going to be open source. So that's also something that we want to provide to the community to use. And then there is, uh, another project too. Yes, I have another project. It is a project called OSSEM, O-S-S-E-M. And OSSEM is a project that I started to document telemetry that I use during research. 
Roberto Rodriguez: So I believe that a lot of people that want to dive into detections and start in the, you know, defender world, they need to understand the data before they make the decisions, like building detections. So my goal with that project was to first document events that I use from different platforms. At the same time, I wanted to create a standardization, like a common data model, for datasets, which, by the way, Azure Sentinel is building their common data models through, this project OSSEM. So it's also one of the interesting collaboration opportunities that we have. Uh, Azure Sentinel reaching out to the community and saying, "Hey, instead of reinventing the wheel, can we explore your project?" Which is OSSEM. Roberto Rodriguez: And then the third part of OSSEM is also a way to document, for example, you know, relationships that we identify in data. So when you want to build, for example, detections, most of the time you want to understand, what events can I use to build a chain of events that would actually give me context around an attack behavior? So what we do is we explore the data, we identify relationships, and we just document them through that project. So that way somebody else could actually use it and understand what they can do with that telemetry. Roberto Rodriguez: So, I would say, once again, you learn about the telemetry, you standardize your telemetry, and at the same time, we give you some ideas into what you can do with the telemetry to build detections. So that's another project. Last one would be, (laughs) yeah, last one would be another- Nic Fillingham: There's more? Roberto Rodriguez: Yes. There's one more. (laughing) Nic Fillingham: Do you sleep, man? When do you sleep? Roberto Rodriguez: It has been hard, but I try to manage my time for sure and do that. But it is, uh, another project. It's private right now, but it's going to be public, uh, soon. 
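The "chain of events" idea Roberto describes, joining related events on a shared key to get context around a behavior, can be illustrated with a toy example. The event shapes and field names below are invented for this sketch; the real project documents actual platform event schemas and relationships.

```python
# Toy illustration of chaining events: attach network activity to the
# process that produced it via a shared process GUID. All field names
# here are made up for the example.
def chain_events(process_events, network_events):
    """Join network events to process-creation events on process_guid."""
    by_guid = {e["process_guid"]: e for e in process_events}
    chained = []
    for net in network_events:
        proc = by_guid.get(net["process_guid"])
        if proc is not None:
            chained.append({
                "image": proc["image"],
                "command_line": proc["command_line"],
                "dest_ip": net["dest_ip"],
                "dest_port": net["dest_port"],
            })
    return chained

# Fabricated sample events.
procs = [{"process_guid": "g1", "image": "powershell.exe",
          "command_line": "powershell -enc ..."}]
nets = [{"process_guid": "g1", "dest_ip": "10.0.0.5", "dest_port": 443},
        {"process_guid": "g2", "dest_ip": "10.0.0.9", "dest_port": 80}]

for row in chain_events(procs, nets):
    print(row["image"], "->", row["dest_ip"], row["dest_port"])
```

A detection built on the joined rows has far more context (which binary, which command line, which destination) than either event stream alone.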
It's going to be through the open threat research community as well. This project is a way to collaborate with, for example, researchers in the community that build offensive security tools, or just tools to do, for example, you know, red teaming. They want to use those tools to perform certain actions in, in a specific environment. Roberto Rodriguez: So we want to, you know, collaborate and partner with them and start documenting those tools in a way that we can share with others in the community. So for example, me as a researcher, dissecting adversary tradecraft, like all the techniques and the behavior behind a specific tool or a specific technique, takes time. Like for me, it would take probably a couple of weeks to dissect all the modules of one tool. So the goal is, why don't we partner with the authors of those tools, we document those, uh, tools, and then we can start also sharing some potential ideas into how to detect those scenarios. Roberto Rodriguez: That way we, you know, expedite the research, right? We do it, let's say, in a private setting with a lot of researchers from the community, and then we just distribute that knowledge across the world. So that way we also help and expedite that whole process. So with open threat research, we have data, now we have knowledge, we have infrastructure, and then we have a way to share it with our community. So it's like all the main parts of your, you know, research process, but we want to give a community touch to, you know, all this. And that's it. So I have a couple more, but that's, (laughing) that's kind of like another project that is, it's coming soon. So- Nic Fillingham: I, I think we're going to have to let you go, Roberto. 
'Cause I think you're just going to get back into today's projects and start submitting some more contributions. (laughing) But before we do that, I want to circle back to SimuLand. And again, for folks listening, for SimuLand, um, they're going to get the link to the blog post; we'll put the link in the show notes. Tell me, what is your dream contribution? What is the first scenario that you want contributed back into this project? Nic Fillingham: Or sort of, where are you really hoping that the community will come and rally around, either a particular scenario or some sort of other... Who is the person you want to be listening to this podcast right now and go, like, "Oh yeah, I can do that"? What's that one thing you need, or you're really looking for? Roberto Rodriguez: Well, actually two things. So one is the automation of the attacker actions. That would be, uh, a dream, I would say, because I'm building it on top of Azure infrastructure. So it will be easier to plug it into your environments to kind of, like, you know, periodically do some testing and then map it to SimuLand scenarios. So you have, like, the full end-to-end environment. You have the lab preparation, infrastructure as code, all the way to even automating the, um, you know, validation of analytics, for example. Roberto Rodriguez: That's one that, even though it's something that has been done in other places, I think the way it's going to be done through Azure Functions is going to be very, very interesting. Because we're going to have potentially not only attacker actions being automated, but we could maybe have some detections being automated on top of that. So instead of releasing a tool that will only be used, let's say, to attack, right, a specific environment, we can use a tool that can do both, attack and defend the, uh, the environment. Roberto Rodriguez: So usually you see one or the other. 
One tool to attack or one to defend. The automation that I'm planning to release, which would be one of the dreams, is to be able to attack and defend automatically. And I think that would link also very nicely with projects like CyberBattleSim. So that's also one of the dreams: how can we, uh, for example, document SimuLand in a way that could help us create synthetic scenarios that CyberBattleSim can use, and then drop an agent and then learn about the most efficient path to take? Because that's, you know, CyberBattleSim, right? Roberto Rodriguez: They build environments, synthetic environments, to then, you know, teach an agent to take the most efficient path through, like, you know, rewards and, and, you know, all this stuff. So the dream would be to connect those projects also. You can have this nice process where SimuLand can provide the adversary tradecraft knowledge, all the, for example, preconditions and all the context that is needed to create a CyberBattleSim scenario, and then improve a model to, for example, automate some of that execution of attacks. Roberto Rodriguez: And then that model can then be used through Cloud Katana to execute those paths automatically. And then at the end, you can have some detections on top where you can apply a similar context. Because SimuLand comes with the attack and detections, we might find a way to create a data model where we could say, here's the attack, here's the detection. So we can maybe build something also with CyberBattleSim the same way. 
And the other one, so the other dream, for me, in SimuLand would be, since I was talking to a few coworkers today about this, um, that it would be nice to maybe provide SimuLand as a service for customers, or also for, you know, people in the community. Roberto Rodriguez: It would be nice to have a platform that people can just access and start learning about these tools, this data. Uh, not necessarily give somebody, of course, control to execute something; we take care of the execution, but then just expose all this telemetry in a way that is easier for those that, you know, might not have the resources. I love to build things that would help others to, you know, do more. So I think that will be also one of the dreams: how can we just take SimuLand and then just make it a service for, you know, for the community? Roberto Rodriguez: That would be pretty cool. So if anybody is listening, (laughs) and, you know, would like to make that happen, it would be amazing to have SimuLand as a service for those that don't have the resources, like schools, uh, you know, like anybody in general in the community that, you know, would like to learn more about this. Natalia Godyla: Wow. Roberto, you're going to be busy. Roberto Rodriguez: Yes. (laughs) Natalia Godyla: For anyone who hasn't watched episode 26, we did discuss CyberBattleSim there. So if that piqued your interest, definitely check out that episode. And Roberto, as we wrap up here, are there any resources or Twitter handles that folks can follow to continue to watch your work, or maybe join the open threat research community? Roberto Rodriguez: Yes, yes, yes. So my Twitter handle is Cyb3rWard0g, with a three and a zero, so instead of the E and the O. So Cyb3rWard0g on Twitter. That is where I share everything that I do. Um, if you want to join the community, we would love to, you know, learn from you and collaborate. Go to the Twitter handle OTR. 
So OT and then R_community. And there, in the profile and description of the Twitter handle, you have a link for the, uh, for the Discord invite. So the moment you join that Discord, all you have to do is just accept the code of conduct. We want to make sure that we're inclusive; we welcome everybody. Roberto Rodriguez: And if you agree with that, just click the 100% emoji, (laughing) and then you have access to all these channels where you can, you know, ask questions about open source projects. So that's the best way to collaborate. Natalia Godyla: Awesome. Thank you. We'll definitely drop those links in the show notes. And thank you again for joining us on the show today, Roberto. Roberto Rodriguez: No, thank you for having me. This was amazing. Um, I have never had the opportunity to talk about this many projects. Usually it's one project, and then we'll see when we talk about the next. So this has been nice. So thank you very much. I really appreciate it. And I hope to see you guys in another episode. Nic Fillingham: We hope so too. Thanks, Roberto. Roberto Rodriguez: Thank you. Natalia Godyla: Well, we had a great time unlocking insights into security, from research to artificial intelligence. Keep an eye out for our next episode. Nic Fillingham: And don't forget to tweet us @msftsecurity, or email us at email@example.com, with topics you'd like to hear on a future episode. Until then, stay safe. Natalia Godyla: Stay secure.
Dial 'T' for Tech Support Fraud
A Day in the Life of a Microsoft Principal Architect
We’re formally sending out a petition to change the phrase “Jack of all trades” to “Hyrum of all trades” in honor of this episode’s guest, Hyrum Anderson. In this episode, hosts Natalia Godyla and Nic Fillingham sit down with Hyrum Anderson who, when he’s not fulfilling his duties as the Principal Architect of the Azure Trustworthy ML group, spends his time playing accordions, making cheese, and founding impressive technology conferences. He does it all! Rather than chatting with Hyrum about a specific capability that he’s helped to develop, or a blog post that he co-authored – because, believe us, the episode would last for hours – we decided to have a chat with him about his life, how he first got into the world of technology, and his thoughts on the current state of cybersecurity. In This Episode You Will Learn: The differences between a risk and a threat Why it’s easier to attack than defend What a Principal Architect of the Azure Trustworthy ML group does in his spare time Some Questions We Ask: How does Hyrum think about adversarial machine learning and protecting A.I. systems? What is it like for Hyrum to oversee both the red teaming and defensive side of operations? Why are we better at finding holes in security than we are at making sure they don’t exist in the first place? Resources: Hyrum Anderson’s LinkedIn: https://www.linkedin.com/in/hyrumanderson/ Hyrum Anderson’s Twitter: https://twitter.com/drhyrum?s=20 Conference on Applied Machine Learning in Information Security (CAMLIS): https://www.camlis.org/ Machine Learning Security Evasion Competition: Mlsec.io Nic Fillingham’s LinkedIn: https://www.linkedin.com/in/nicfill/ Natalia Godyla’s LinkedIn: https://www.linkedin.com/in/nataliagodyla/ Microsoft Security Blog: https://www.microsoft.com/security/blog/ Related: Security Unlocked: CISO Series with Bret Arsenault https://SecurityUnlockedCISOSeries.com Transcript: [Full transcript can be found at https://aka.ms/SecurityUnlockedEp32] Nic Fillingham: (silence) Hello, and welcome to Security Unlocked. 
A new podcast from Microsoft where we unlock insights from the latest in news and research from across Microsoft Security engineering and operations teams. I'm Nic Fillingham. Natalia Godyla: And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft Security, deep dive into the newest threat intel, research and data science. Nic Fillingham: And profile some of the fascinating people working on artificial intelligence in Microsoft Security. Natalia Godyla: And now let's unlock the pod. Nic Fillingham: Hello Natalia. Hello listeners. Welcome to episode 32 of Security Unlocked. Natalia, how are you? Natalia Godyla: I'm doing great, Nic. And welcome, everyone, to another episode. Who do we have on the show today? Nic Fillingham: Today we have Hyrum Anderson, Dr. Hyrum Anderson, who, uh, is the Principal Architect of the Trustworthy Machine Learning group here at Microsoft. We have been trying to get Hyrum on the podcast for a long time, and eagle-eyed, eagle-eared... eagle-eared, that's a thing, I made it up, we're going to use it. Um, listeners will have actually heard Hyrum's name a bunch of times, as well as a lot of the work that Hyrum has been pioneering. Hyrum is really one of the leading voices, uh, here at Microsoft in this brand new space that is really just sort of being defined now around adversarial machine learning and protecting AI systems. And so it's fantastic to get a chance to get Hyrum on the podcast and hear about Hyrum's journey into security, into machine learning, into AI, and then, uh, finding his way to Microsoft. Natalia Godyla: Yeah. So Hyrum, as you said, is a leading voice in this area. And I think he said it really well when he framed the challenge here: that an attacker has to be right once, and a defender has to be right 100% of the time. 
And that perspective is what drives him to be proactive about researching adversarial machine learning, knowing that the attacker community is aware that they can use machine learning and will leverage it when it becomes the right technique for them. So we as organizations and, and defenders listening to this podcast have to start thinking about it early. We just don't have the luxury to not be prepared. Nic Fillingham: I love that a lot of the work that Hyrum does, uh, ends up getting publicized and made public through research, through GitHub. If you listened to last week's episode with Will Pearce, Will is actually on Hyrum's team. And a lot of the work... a lot of the sort of research and, and think tank work that Hyrum and folks do, is not just being sort of absorbed into Microsoft products and services, it's being put out there for the community, for the public, for researchers, for security professionals, to really help push the industry forward. So a great conversation. I think you'll really enjoy it. I think with that, on with the pod. Natalia Godyla: On with the pod. Hello, Hyrum Anderson, Principal Architect of the Azure Trustworthy ML group. Welcome to the show today. Hyrum Anderson: Thank you, Natalia. Nice to be here. Natalia Godyla: Well, we're definitely glad to have you, and it'd be great to start by understanding who you are and what your role is at Microsoft. What does your day to day look like? Hyrum Anderson: Well, my role as Principal Architect really means that I code a little, and I talk externally a little, and I'm stuck in that awkward middle. (laughs) That's what it really means. But it's a really fun role. I joined Microsoft to join a startup inside Microsoft to really address the question, how do we secure AI systems? You know, think about AI systems as a special case, and it is. 
There, there is a special case that should be considered in the context of larger security, and our little startup inside Microsoft is to address that. So that's why I joined Microsoft. And that's the title I got, and I'm happy with it. Natalia Godyla: (laughs) And is this something that you've been working on for some time, understanding the impact of AI systems, or is this a new endeavor you're taking on at Microsoft? Hyrum Anderson: Well, I want to just note that this whole idea of adversarial machine learning has been around a long time, way before me. I'm not a founding father in any sense of all the brilliant work that's come since the mid-2000s in exploiting weaknesses in AI systems. But, you know, five or six years ago, I became actively involved in this, especially as it relates to: how does an attacker who wants to evade your anti-malware model, if he knew it was an AI system, what could he do special about that to make his job easier? So that's where I came into the game. How do I think like an attacker to get around security controls that are implemented as AI systems? Hyrum Anderson: And from that time, I think that's where some of my work came to be known. I spoke at Black Hat and DEF CON and things, and, and then, um, that kind of built and finally, uh, culminated in a new way of thinking at Microsoft: how do we do this here at Microsoft? And what would it look like for both us as Microsoft, you know, first party, securing our own, as well as what could it look like for our customers, so that everybody who deploys machine learning can do it safely and securely? Nic Fillingham: Hyrum, we've spoken with some of your colleagues on the podcast before. 
Could you sort of expand a little bit upon, I think you've talked about the mission of Trustworthy Machine Learning at Microsoft, but some of the different roles that are involved? You know, how do you work with, with Rom, if you do? How do you work with folks like, uh, Sharon Shaw? How do you work with Andrew Marshall? Uh, the other folks at Microsoft thinking about adversarial machine learning and protecting AI systems? Hyrum Anderson: Our vision is that you should be able to build your machine learning model anywhere, and we can help you to manage any risk associated with that. That's the vision. And there's a lot of risks associated with machine learning. That starts from simple things like, how do I know that my translation service is accurate and works for every language? You know, those are risks. There's also risk about ethics and fairness. Does face detection work better for some and not for others? Hyrum Anderson: And this final piece of risk is security, and that's where we're focused. So this final piece of risk is: if there's somebody trying to deliberately cause my system or company or business harm, am I able to manage that risk? That's where the Azure Trustworthy Machine Learning team comes into play, managing that third piece, working across Microsoft to manage the other pieces. Rom has been an internal champion for this effort since several years before I joined. We've had a professional relationship for several years, and I've known him, and he was instrumental in, in, uh, telling me about the cool efforts he wanted to get started here. Hyrum Anderson: So he has led this effort, and I joined to help him co-lead this effort, uh, about a year and a half ago. So Andrew [Power 00:07:18], for example, we work with, uh, we try to stay abreast of relevant attacks and defenses in MSR. Andrew Power does a really good job of straddling the line between MSR and applied security, and is a great resource for us. 
Our team actually has these two interesting parts. One is, how do we go about Microsoft to assess the security of our existing systems? So we have a red team. We have a, a red team that kind of goes around and does that. Hyrum Anderson: And the second part is, how do we take those lessons learned and, um, implement defensive tooling, both at Microsoft and for our partners? That's the second piece. And as part of the learnings that we have from our red team, we also work with, uh, the great folks like Andrew Marshall in the AETHER committee to help us reach all the corners of Microsoft for defensive guidance. Andrew and team conduct assessments, and risk assessments, of AI systems. And we try to make a One Microsoft effort in, uh, making sure that we have a common voice in how we address risk mitigation. Nic Fillingham: Thank you for that explanation. It was fantastic. Matter of fact, uh, we just recently interviewed, uh, Will Pearce, the AI red team lead, days ago. Hyrum Anderson: Will is a treasure. Natalia Godyla: (laughs) Hyrum Anderson: Will is a treasure, and if you haven't listened to Will's podcast, I have not, but I want to listen to it. He is a really interesting individual. Nic Fillingham: Yeah. And we talked quite a bit about Counterfit, which is the tool that he sort of built for himself, and then it spun up into a GitHub project that's been released into the wild. And that was a fascinating conversation. I would love for you to walk us through your journey, as far back as you want to go, into security, into machine learning, and, and sort of eventually to Microsoft. When did this start? Were you into, you know, into Legos? Were you into pulling apart radios? Did you build your first computer when you were three? Like, how did this passion and this career start for you? Hyrum Anderson: Oh, wow. That's, that's a great question. 
I, I want to just first, be- before I tell stories, I want to say that I am a relative newcomer to security. And the more I learn from real security people, the more I realize what I don't know about security. So I would consider myself an engineer, a researcher, who has applied his craft to security. And I'm really appreciative of members of my team who are teaching me all the time about, uh, new ways. That said, (laughs) that said, I have a story, a great memory, I want to share with you, of when I was in middle school, early high school, maybe. Hyrum Anderson: I come from a big family, and everybody's a nerd. Like, I had brothers who were coding on the Commodore 64. They used to get these magazines, and if you were too cheap to buy a game, you could actually, like, copy it from the magazine. Nic Fillingham: Yeah. And photocopy the pages and type it in. Hyrum Anderson: Yeah. Do you remember that? Nic Fillingham: I do. Yeah. Hyrum Anderson: So this is how I got my start with computers. I was actually just watching my much more patient older brothers do this, and they also coded Pascal and BASIC at the time. And so I got involved. But the security angle, so the programming started early for me, but the really fun security angle is, um, my awesome parents, with their big family, to help us focus at the right times, they had a BIOS password, right? Nic Fillingham: Oh, wow. Hyrum Anderson: So the BIOS password did not allow... And this was like Windows 3.1 or something. It- Nic Fillingham: Yeah. Hyrum Anderson: It didn't allow us to log in without the password. So we crafted a way to get around this. It included everything from... So they didn't apparently have regard for either physical or cybersecurity controls, and we exploited this weakness. Nic Fillingham: This is Windows 3.1? Hyrum Anderson: (laughs) Yeah. No- Nic Fillingham: Okay, keep going. Hyrum Anderson: ... It was much simpler. 
One was, um, we taped a mirror to the ceiling.Nic Fillingham:Nice.Hyrum Anderson:And then we would tell my dad that it was time we needed to do homework on the family computer, and we would try to watch in the mirror what the BIOS password was. That didn't work so well. 'Cause we're not good at like the reversing, the mirror image. We also tried to put sticky glue on the keyboard so we could figure out what, like what the most common keys were and do kind of cryptanalysis, cryptanalysis for a middle schooler. Right? What were the most common keys? Can we figure out what words were involved in the password? Hyrum Anderson:Finally, my brothers and I, we found a BIOS book, and we realized that the keystrokes were logged even after boot, and we inserted a little utility into the autoexec.bat file. If, if this is bringing you back in history, walk with me, enjoy this time.Nic Fillingham:Please, please keep going. Um, I'm, I'm having visceral memories here of my Osborne 3866. Keep going.Hyrum Anderson:We, we could make this little tool that would read the last characters typed into the BIOS buffer and dump it to disk. That was our, that was our, our final solution. So anyway, this, this sort of like rudimentary hacking process was my first introduction to, to computer security. I went on to be an engineer in signal processing and Machine Learning, got my PhD at the University of Washington and, and did a bachelor's and master's degree at BYU. Actually did not do anything in computer security, but I did work... I was a researcher at the National Labs, in security kind of with the big guys.Hyrum Anderson:You know, situational awareness for the defense industry, things like that. That kind of helped me appreciate what I think so many people in security just get. And it's this sense of mission and purpose, that I don't know that there's a better replacement for getting up to work every day than a sense of mission and purpose. 
And it's something I have sought at every career step, right? Like if, if that's missing, I'm not really having a good time. Uh, when I eventually left the National Labs, I started on a data science team at this company called Mandiant, who had just released a, a big report. Hyrum Anderson:And they were... Honestly my, my job, Jamie Butler, if you're listening, I remember Jamie saying, saying that, um, "Like we don't really know what to do with you. We just think data science could be cool here. And so we're gonna, yeah, we're trying to build a team and we're just going to kind of figure it out as we go. So there's no pressure." But that was really fortunate for me because you know, this was in the days when, uh, data science, Machine Learning and security were still kind of oil and water, but back then, it was like very much a new kind of endeavor, and gave me some early exposure to lots of failed attempts and some, some early wins in that.Hyrum Anderson:So from then I've, I've been a data scientist for security. Then, you know, Mandiant became FireEye. And then I went to Endgame and, uh, worked with an excellent team at Endgame. I eventually was the chief scientist at Endgame; Endgame was acquired by Elastic. Elastic is a, a fantastic company. Then this opportunity at Microsoft, Ram said, "Hyrum, come to Microsoft. There's a startup here in Security Machine Learning." And here I am. That's my history.Natalia Godyla:And what are you working on now at Microsoft? Hyrum Anderson:Well, we do a, a number of things. So the, the team I lead includes the red team and the defensive side, and we are really busy on both fronts. Natalia Godyla:(laughs)Hyrum Anderson:So the red team work that happens now is much more sophisticated than when I started. And I was the red team. 
You know, that was really the, when, when I started at Microsoft and we did one, a red team engagement, parts of which have been publicly disclosed, that was really Hyrum, the Machine Learning person, going for a ride with the Azure red team, and saying like, "Hey, if you can find something that looks like this, it's probably a Machine Learning model. Let's go find it." And these really, really smart people, Kathy and Susie, were able to find those things. And then I could tinker, um, with this model, break it essentially. And they could complete the, the ops.Hyrum Anderson:So it was very much... I was a, kind of a one trick pony in, what I consider a really high quality Azure red teaming experience where we could effect some big change. Now, our red team is I think, much more robust, uh, with Will Pearce, who you've interviewed. Now he's actually an ops person who gets ML. He gets both sides of the coin, and he'll go in now and do the whole engagement like himself, right? So that keeps us really busy on, on a day-to-day basis. We partner with both first and third-party teams in assessing if your Machine Learning could be vulnerable to some kind of violation that would cause your business pain. Hyrum Anderson:And there are lots of them. And nobody knows better than the team itself what that worst, worst nightmare scenario would be. And we try to work with them to say, "Okay, that's the nightmare. Let's try to make it happen." And so we, we try to... Take on that, uh, attacker persona, and then we, we work with them to try to, uh, tell them how we did it, recommendations to plug it.Nic Fillingham:Hyrum, it feels like we're better at poking AI systems and finding holes and finding flaws than perhaps we are protecting them. Is that sort of where we're at in, in this sort of, this sort of new journey in understanding how to go and secure AI? 
Are we now, are we sort of at the stage where we're working out how to break in, we're working out how to go and poke holes, but we, we maybe haven't quite got the sort of ratified tools or processes in place to, to, to strengthen them, or am I just missing the other side of the coin?Hyrum Anderson:You're exactly right. But I guess I would also ask like, isn't this always the case, that Machine Learning or not, it's kind of always easier to be an attacker than a defender because of the asymmetry involved? An attacker has to be right once, a defender has to be right 100% of the time. Those kinds of things. The added wrinkle for Machine Learning, I think, is that, whereas in like an information security system, you can patch a vulnerability, in an AI system, what it means to patch is a really gnarly issue. There are ways proposed to do it in academia and research. They're really cool and some of them work well in, in some cases, but there are issues.Natalia Godyla:When do you expect attackers will start regularly using this technique? When should organizations be prepared to actively be red-teaming and build a program around it? And on the other end, when will we have the resources to build fully fledged programs and understand Adversarial Machine Learning?Hyrum Anderson:Well, first I want to make sure that we are talking about the, the difference between a risk and a threat. Okay? So the risk is here and it's everywhere, right? And it can be exploited and that's, that's our job. And, and the red team side of my team, that's what we do, right? The threat exists in niche areas. And those niche areas often don't actually care that it's Machine Learning they're attacking, right? There's nothing special. So example, content moderation. It uses Machine Learning to determine if the content you're posting on LinkedIn, or I'm making this up. 
Wherever, whatever platform, is appropriate to, to be seen by others.Hyrum Anderson:And nefarious people or whatever, for whatever motivation, they, they want to get content up there and they find ways to obfuscate it. Right? So that's, that, that is an adversary attacking a Machine Learning model; probably the adversary in that case doesn't even know. But the adversary is finding blind spots or design oversights in that system. The same exists in fraud, the same exists in security. So there are adversaries, whether they know it or not, who are attacking Machine Learning systems. What they aren't doing today is using these sophisticated algorithmic kind of fuzzing-like procedures to attack.Hyrum Anderson:That's what we have not seen widely used. We've seen that a lot in sort of research laboratories. And probably the reason we haven't seen it in the wild yet is that there are just easier ways, right? If I can just guess with my content moderation upload, and I can be right, like why in the world do I need to have a fancy algorithm to, to do it? So as security is improving for systems in general, to plug some of these guess-and-check methods, which in my opinion will never go away, there will be more economic incentive to have a kind of a sure-fire algorithmic way to do this for adversaries.Hyrum Anderson:I do not know if that's going to happen in the next year or the next five years, but economically speaking, if we're doing our job as defenders, that is something in the tool bag that exists, is open source, and that they will reach for when that becomes the lowest hanging fruit. Nic Fillingham:This feels like a unique point in time for cybersecurity where, and, and, and maybe I'm being too optimistic here, but where we, we do have an opportunity, we, the industry, have an opportunity to sort of get ahead of something before it, before it gets ahead of us. 
Would you share that sort of optimistic view or do you, do you think we're sort of neck and neck? Hyrum Anderson:Yeah, by ahead, I mean, we're thinking about this and I don't think that adversaries are not thinking about it. I just don't think they have to, to pull out this bag yet. Right? So are we ahead? We have an opportunity to be ahead. I guess the concern I have is like, if, if you feel like you're ahead, you're guessing, you're guessing at a defense for an attack that doesn't exist. That means an attacker's gonna choose a different kind of attack. So I would not say that we're ahead. I, I think we have an opportunity to be proactive, especially at these higher level questions about how to manage risk. I think we are too early for things like detection tech and this kind of thing right now, right? Hyrum Anderson:Like tho- tho- those things are maybe a bit premature because kind of by construction, you can't be ahead of a threat, in sort of the detection and remediation space. Because they haven't punched you yet. You don't know how to, you don't know to block that one. So I agree with you Nic, that we have an opportunity to be deliberate in how we frame this problem. And that is an excellent advantage. And when's the last time that's happened?Nic Fillingham:It certainly feels sort of unique, but I'm with you. You can't block the punch that you haven't experienced yet. And so that's probably a great analogy. I'm thinking back to the episode we did with, uh, Christian Seifert and Josh Neil on CyberBattleSim. You talked about how sometimes in attacks on Machine Learning systems, I think content moderation was your example, the attacker, the adversary doesn't even know that they're attacking a, a Machine Learning model. So that's sort of a really interesting perspective. 
But sort of to try to bridge the gap there with, with the, uh, CyberBattleSim conversation, how far away do you think we are from having automated agents, automated sort of AI constructs, which I know is a sort of fantastical concept.Nic Fillingham:But like how far away do you think we, we are from actually having Machine Learning on Machine Learning going at it, to some degree of scale and sophistication? Do you think we're... Are you thinking like it's a year, 5, 10, 20? What, what, what does that timeline look like?Hyrum Anderson:Now, if you mean Machine Learning versus Machine Learning in a security context for like a breach? I think that's-Nic Fillingham:Absolutely. Yeah.Hyrum Anderson:Yeah. Believe it or not, like that is here in very narrow, predefined things. So-Nic Fillingham:Okay.Hyrum Anderson:An example, I'll bring up Will Pearce, he published some research at his previous company about using Machine Learning to detect the kind of sandbox that you're in. So you know how to act, in a piece of malware, and that sandbox might have Machine Learning employed also. There's this, um, combative element between them. There's been other work published that has attempted to do things like simple reinforcement learning to choose what kinds of, sort of pen testing actions to get into a network, that I think the authors would, would say is, is not yet mature.Hyrum Anderson:I myself have done research in using machines against machines, trying like a reinforcement learning approach to develop malware strains that will, will evade a Machine Learning model detector. So it's using Machine Learning against Machine Learning. In all these cases, they're narrow and there are easier ways, in my opinion, to date to do that. And if, uh, you know, our listeners are trying to think about kind of, I dunno. 
If you think about like the Avengers AI, (laughs) Jarvis, like taking on a big massive scale attack and, and another Jarvis defending it, we are very, very, very far away from that.Hyrum Anderson:I think Machine Learning and AI is best employed today on narrow tasks; for this more general artificial intelligence, we're, we're not very mature at all in that larger level of reasoning. So I would not raise any alarm about AI systems swarming our networks en masse and, and being effective. I think we're, you know, we're five plus years away from, from that. Nic Fillingham:So we're not going to have a, uh, "Jarvis, breach SHIELD" sort of moment any time soon, where that's the only instruction required and then the, the next thing, you know, you, you've got root access to, to the SHIELD network. That's a, that's a long way away. Hyrum Anderson:That's right. And really the thing that, that you should be more concerned about is how Machine Learning could be used by an adversary to make that human much more efficient. And that's actually not a new thing either. I mean, adversaries are smart, they're economically motivated, and they, they use analysis to be smart about how they attack. Think about like a phishing campaign and who they target. They want to use data to inform them. And I, I wouldn't doubt that there are some Machine Learning models that would help them to predict who the ripest target might be, for example. Or in, in, in a breach scenario. Hyrum Anderson:For a very narrow scope, let me use an agent to, like, you know, find out what, what kind of, you know, anti-malware is installed, and decide what the, the best payload would be to evade that. Computers are really good at that kind of fast, quick reflex math and, uh, Machine Learning would excel at that. 
I'd be far more, you know, concerned about real adversaries, like human adversaries equipped with Machine Learning that scales their intentions, than I would about like an autonomous AI acting all by itself, doing all the hacking on its own.Natalia Godyla:And speaking of the future, what's next? What's your next big mission? The next problem you'd like to solve? Is it continuing to educate the ecosystem on Adversarial Machine Learning? Is it to get us to the point where we are establishing preventative measures? Or is it something else entirely?Hyrum Anderson:Really it's chasing this goal that, while elusive, I don't, do not believe is impossible. And that is, build your Machine Learning model wherever, and we want to help you to be able to manage that risk. And do it in a way that's natural, in kind of the same kinds of motions that, if you're a security professional, you're used to, assessing or, or like doing compliance things or doing policy things. If we can do that, as Nic brought up earlier, that can be the beginning... Help, help people to begin treating security for AI not as its own thing, but as part of an overall security strategy for the business. You know?Hyrum Anderson:There, there are these special things you have to consider about AI, but you shouldn't make it its own security department, right? Security is a, a business kind of consideration, and we want to make that easy for you. Now, today it's hard. Today AI is a special snowflake. We want to make it part of a normal network of security decisions.Nic Fillingham:I noticed you are the co-founder of the Conference on Applied Machine Learning for Information Security, CAMLIS. Can you tell us a little bit about CAMLIS, uh, if, if you would like to, and then is there anything else you'd sort of like to point listeners to? Do you have a blog? Do you have a Twitter? Where can we go to play along at home with, with your work? 
Hyrum Anderson:So CAMLIS, the Conference on Applied Machine Learning and Information Security, was founded by Keegan Hines and myself several years ago, because we didn't find the right venue that was a mix. Really it's for Machine Learning people doing security things. And those would surface at major conferences, but there was never a place you could go for like a sink-your-teeth-in kind of experience. And I have, I am just so thrilled with the community that has developed around CAMLIS and the quality of the people there. And so for anybody who would be interested in how Machine Learning is used in security, or maybe you're in Machine Learning and you want to learn a little bit more about security, this is a great place. It's still a boutique conference in the sense that there's not 3000 people there, where you can network. Hyrum Anderson:It's a great location. That will be happening later this fall. I wanna shout out to Edward Raff who will be chairing the conference this year, and you can find out more information in the coming months about that. The second thing I wanna give a shout out to, and this is much sooner, happening much sooner. For the last several years, a partner, Zoltan Balazs, and I have been sponsoring a really clever competition that you're all going to want to participate in. So if you like hacking things, and if you like malware, and you like Machine Learning, this is for you. Hyrum Anderson:This is the Machine Learning security evasion competition. You get prizes for attacking Machine Learning models to create evasive malware variants. This is as real as it gets. So it's real malware. The malware is actually bytes on disk. So you're t-, you're, you, you take all the bits, you don't get to change code. You take all the bits and you get to disguise your malware, or the malware we provide rather, to evade a suite of defensive solutions. 
And this attracts a really, really, really gnarly smart crowd of people who are good with both, both malware and Machine Learning, and do it in really clever ways. Even if you're not a malware reverse engineering ninja, there'll be ways for you to participate and still evade Machine Learning models.Hyrum Anderson:And, and I will, I will leave that there. If you'd like to know more about any of this, please do reach out to me. Twitter, I will respond to Twitter eventually. Um, Dr. Hyrum is my handle, or on LinkedIn, you can find me also. If you've heard about the announcement for the Machine Learning security evasion competition, you can head over to mlsec.io.Nic Fillingham:Hyrum, what do you do for fun when you're not out there on the frontier of Adversarial Machine Learning? Hyrum Anderson:Nic, uh, you don't know this about me, but I am the most interesting man alive. And-Nic Fillingham:Oh, no. I knew that. Ram told us this.Hyrum Anderson:(laughs) Hey, so first I have five kids. So caveat that, that free time expression with knowing that I'm primarily a bus driver and, uh, an entertainer. But, um, so I, I live in Boise, Idaho. I grew up on a hobby farm, and I, I'm lucky enough to be able to work, uh, in a distributed manner. But my folks still have this farm that has like a milk cow. So my COVID hobby, I make artisanal cheese.Nic Fillingham:[inaudible 00:32:15].Hyrum Anderson:Yes. I do.Nic Fillingham:Keep talking.Hyrum Anderson:Handcrafted.Natalia Godyla:(laughs)Hyrum Anderson:Handcrafted [inaudible 00:32:20], and some Alpine sort of Swiss style cheeses. I have a little cheese cave. Also our viewers can't see this, but in the background, you'll, you'll notice like a little accordion. And, uh, I was a missionary for my church in Russia. And, you know, we didn't, I didn't have a lot of money, but I could spend $8 and buy that sweet puppy. 
Natalia Godyla:(laughs)Hyrum Anderson:As it turns out, when you have one accordion, they're like, they're like amoeba on a Petri dish. They just multiply. I now have three accordions. And the total amount of money I've spent on accordions is $8.Nic Fillingham:Hang on. You woke up one morning and your, your accordion had divided and split into two accordions?Hyrum Anderson:Yes, it's amazing. It's more like the neighbor's like, "Oh, weird nerd with the accordion, and I have something in my garage I'm trying to get rid of." But it, it brings such a thrill to me to have three accordions. Kids love accordions. And I am one of the most popular people with like elementary school kids; like, who doesn't like happy birthday played on the accordion to them. I [inaudible 00:33:23] anymore. Nic Fillingham:I do, I do love a sort of an accordion-powered shindig, you know. A polka or... That's beautiful.Natalia Godyla:Awesome. Thank you for sharing that. And thank you for joining us on the show today, Hyrum.Hyrum Anderson:Thank you, Natalia. Thank you, Nic. Great to be with you. Natalia Godyla:Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode.Nic Fillingham:And don't forget to tweet us @MsftSecurity, or email us at SecurityUnlocked@Microsoft.com with topics you'd like to hear on a future episode. Until then, stay safe.Natalia Godyla:Stay secure.
Red-teaming AI with Counterfit
It’s an all-out offensive on today’s episode while we talk about how the best defense is a good offense. But before we plan our attack, we need to know our vulnerabilities, and that’s where our guest comes in. On this episode, hosts Nic Fillingham and Natalia Godyla are joined by Will Pearce, who discusses his role as AI Red Team Lead from the Azure Trustworthy ML Group and how he works to find weaknesses in security infrastructure to better develop ways to prevent against attacks. In This Episode You Will Learn: The three main functions of Counterfit. Why the best defense is a good offense. Why Will and his team aren’t worried about showing their hand by releasing this software as open source. Some Questions We Ask: What previously developed infrastructure was the Counterfit tool built upon? How does AI red teaming differ from traditional SecOps red teaming? How did the Counterfit project evolve from conception to release? Resources: Will Pearce’s LinkedIn: https://www.linkedin.com/in/will-pearce-a62331135/ AI security risk assessment using Counterfit: https://www.microsoft.com/security/blog/2021/05/03/ai-security-risk-assessment-using-counterfit/ Nic Fillingham’s LinkedIn: https://www.linkedin.com/in/nicfill/ Natalia Godyla’s LinkedIn: https://www.linkedin.com/in/nataliagodyla/ Microsoft Security Blog: https://www.microsoft.com/security/blog/ Related: Security Unlocked: CISO Series with Bret Arsenault https://SecurityUnlockedCISOSeries.com Transcript: [Full transcript can be found at https://aka.ms/SecurityUnlockedEp31] Nic Fillingham: (00:08)Hello and welcome to Security Unlocked, a new podcast from Microsoft, where we unlock insights from the latest in news and research from across Microsoft security engineering and operations teams. I'm Nic Fillingham. Natalia Godyla: (00:20)And I'm Natalia Godyla. 
In each episode, we'll discuss the latest stories from Microsoft security, deep dive into the newest threat intel, research and data science.Nic Fillingham: (00:30)And profile some of the fascinating people working on artificial intelligence in Microsoft security. Natalia Godyla: (00:36)And now let's unlock the pod. Nic Fillingham: (00:41)Hello listeners, and welcome to episode 31 of Security Unlocked. Natalia, hello to you. Welcome. Natalia Godyla: (00:46)Hello, Nic. Happy to be here. Uh, what do we have on the docket for today? Nic Fillingham: (00:50)Today we have Will Pearce joining us. Will Pearce is the AI red team lead inside the Azure Trustworthy Machine Learning Group. Eager listeners of the podcast might recognize Will's name from a couple of episodes back where we had Ram Shankar Siva Kumar come on the podcast and mention Will a few times. Will is here to talk to us today about a blog post that he co-authored with Ram Shankar Siva Kumar on May 3rd, discussing the announcement of a new AI security risk assessment tool called Counterfit. And this is a great conversation, a sort of fascinating project here, and his job is about trying to break into our AI systems and compromise them in order to sort of make them, make them safer, make them better. And so we're gonna say this word, red teaming, quite a bit in the interview, and for those that may not be super familiar with the concept, we thought we might just sort of revisit it. Natalia, you've, you've got a good definition there, w- walk us through, what does red teaming mean? Natalia Godyla: (01:47)And so red teaming originated in the military as a way to test strategies by posing as an external force. The US force would be the blue team, the defenders, and the red team would be someone that is trying to infiltrate the United States, and that same concept is now applied to security. 
So red teaming is that training exercise to determine where are the gaps in your security strategy.Nic Fillingham: (02:11)Right. And so in this context here, with regards to the Counterfit tool, Will just had a bunch of scripts that he had built himself just to sort of do his job. These are scripts he built for himself, and at some point Will talked about in the interview how he decided to pull them together into a toolkit and create a sort of an open source project that's now available up on GitHub, so that other AI red team folks, uh, really anyone who's out there trying to make AI systems more secure through red teaming can benefit from the work that Will's done. Natalia, some of the things that Counterfit can do, obviously we'll hear from Will in just a second, but what's your summary? Natalia Godyla: (02:45)I mean, there's so many different ways you can use this tool for offensive security. So you, you can pen test and red team AI systems using Counterfit, you can do vulnerability scanning, and you can also do logging for AI systems. So collect that telemetry to improve your understanding of the different failure modes in AI systems. Nic Fillingham: (03:07)Well, this is a great conversation with Will Pearce. I think you'll enjoy it. On with the pod.Natalia Godyla: (03:11)On with the pod. Today, we are joined by Will Pearce, the AI red team lead from the Azure Trustworthy ML Group, to talk about a blog post called AI Security Risk Assessment Using Counterfit. Welcome to the show, Will.Will Pearce: (03:29)Thank you. Thanks for having me. Natalia Godyla: (03:31)Awesome. Yeah. We're really excited to talk about Counterfit, and I think it'd be great to start with a little bit of an intro. So could you share who you are, what your day-to-day is at Microsoft? Will Pearce: (03:40)Yeah. Yeah. As you mentioned, Will Pearce, I'm the red team lead for the Azure Trustworthy Machine Learning team. My day to day is attacking machine learning inside Microsoft. 
So building tools, doing research and going after machine learning models wherever they live inside Microsoft.Natalia Godyla: (03:59)And Counterfit is a tool that helps with that, correct? Could you share what Counterfit is? Will Pearce: (04:05)Yep. Yeah. So Counterfit is a command line application that helps me automate these assessments. There's sort of a lot of data processing that can go into them, and it takes a lot of time, and so I sort of built this command line application to take care of it. I come from the ops world, so traditional red teaming, you know, where you kind of hack networks. And so sort of the command line interface, that malware interface, is what I was used to, but in the machine learning world, a lot of the tools are libraries, so they're not really readily available for you to automate things. And so I just kind of married the two together into something that basically wraps existing frameworks. Nic Fillingham: (04:47)Will, I'd love to step back just to speak to you. So you are the AI red team lead, tell us about AI red teaming or AI ML red teaming, how does that differ from sort of traditional SecOps red teaming?Will Pearce: (05:00)In a lot of ways it doesn't. Machine learning is a new sort of attack surface that is coming up as businesses integrate machine learning into all kinds of things; the security of machine learning hasn't really been paid attention to. But you know, machine learning is part of a larger system, it's still an information asset, the model files still exist on a server. They're put into websites, all the normal stuff. And so a lot of those skills transferred, you know, one-to-one, the difference being is having that, that knowledge of how machine learning algorithms work, how you can bend them, how you can alter your inputs to get the outputs that you want, and a lot of it, a lot of the attacks are really just kind of engineering to get to that point. 
Nic Fillingham: (05:46)And the types of specialists that you have on an AI red team versus again, a sort of, sort of more, more generalist, uh, SecOps red team. Do you have data scientists and do you have other statisticians and other folks that maybe have a different set of skills? Will Pearce: (06:01)Yep, absolutely. So we have a couple of members on the team that are extremely experienced data scientists and ML engineers. So it's basically a blending of those skill sets, you know, where I don't have that formal background, but I do understand how sort of attacks work and, you know, how to run an op. They understand how the algorithm works at a, a very deep level, and so we, we have a lot of fun going back and forth brainstorming ideas. Natalia Godyla: (06:32)So bringing this back to the Counterfit project, how did the Counterfit project evolve? As I understand it, it started as a group of attack scripts, and, and now it's an automated tool. So what did that process of evolution look like? Will Pearce: (06:49)So earlier I mentioned all these things are libraries and-Natalia Godyla: (06:53)Mm-hmm (affirmative).Will Pearce: (06:53)... you know, I've been at Microsoft for nine months-ish. And coming from that ops role, it just wasn't scalable. So to write a script for every attack that you wanted to do-Natalia Godyla: (07:04)Mm-hmm (affirmative).Will Pearce: (07:05)... isn't scalable. So the first thing, it was just natural to want that tool, that malware-type interface, was to wrap these into a single tool where you could run any attack script that you wanted in, in an automated fashion. That was that, it was, it was just a need for an automated tool for my own purposes and it kind of evolved into this. Truth be told, I didn't necessarily think it was gonna be as popular as it was. Natalia Godyla: (07:29)(laughs)Will Pearce: (07:30)Yeah. 
I wrote it because I needed it, not because, you know, we wanted to release it, but it has kind of taken on a life of its own at this point where, you know, I don't do more bug fixes than I do attacks, but I could see in the not too distant future we would need a dev to like take care of the day-to-day maintenance of it, or, you know, build in whatever features we wanted for it. Nic Fillingham: (07:55)And did nothing exist here in this space, Will? Was there, was there nothing that allowed for the automation of, of the work that you were doing and that's why you sort of built it, or did something exist, but the modifications that would have been necessary to meet your needs would have been sort of too laborious? Will Pearce: (08:10)I shouldn't say nothing existed 'cause I don't... There was nothing that, you know, for example, data types, right? Like you have text, images, NumPy, or, or arrays of numbers, things like that. A lot of the tools only focus on one of those data types or two, let's say, right? But there's a wide variety of models at Microsoft that I need to test. And so having something that can do text, audio, image, any arbitrary data type is extremely valuable, and that was sort of the first step. It was just having a need, I didn't wanna use five different tools, you know, I wanted to use one, and so that was kind of the, the driver for me to build it. Nic Fillingham: (08:53)And I noticed, uh, Will, it's been published through GitHub. So is the intent here for it to be a true sort of community initiative, community project and, and have contributors and, and sort of a, a vibrant community?Will Pearce: (09:05)Yeah, absolutely. Yeah, that's the plan. Ram will tell you I'm not the best data scientist, so this is the blending of offensive security and machine learning, right? And data science. 
And so there are just conventions in the data science world that I'm not familiar with, similarly, there are conventions in the offensive security world that data scientists aren't familiar with. So my hope is that Counterfit becomes a meeting place of sorts for these machine learning attack algorithms, where people feel welcome to submit new research, um, and to really become a platform for the conversation between machine learners and security people to evolve, start to understand each other and what matters to the other. Natalia Godyla: (09:51)And are you also continuously updating the tool, so as you learn more adversarial attacks against AI, will you be feeding that into the product, and what does that process look like? Will Pearce: (10:04)Yeah, yeah, absolutely. So it's based on algorithms, right? Natalia Godyla: (10:09)Mm-hmm (affirmative). Will Pearce: (10:09)Uh, attack algorithms. So an algorithm basically iterates on an input in a particular way, right? And that's how you kind of create that output that you want. So there's that piece, is just creating new algorithms that will do whatever we think is useful for the particular task. But there's also things like a web interface that would be extremely nice for some users or, you know, just some niceties that aren't built in yet. It's still somewhat difficult to look at the results of a scan or the samples of the scan. And so, so some of those things still need to be built in, but yeah, that's kind of the plan, is to build any... you know, someone could submit a feature request tomorrow and we would probably build it the next day just because we're excited to see what people do with it and what they care about with it. Nic Fillingham: (11:05)So Will, if we could jump forward into, I think the three core functions or the three use cases of this tool as they're sort of listed out in the blog here for those that have read the blog post. 
So the first one is listed out as penetration testing and red teaming AI systems, and the, the tool here is preloaded with published attack algorithms, which can be used to, to test out evading and, and stealing AI models. We've had a bunch of your colleagues, uh, and peers on the podcast before, so we've learned a little bit on the podcast here about adversarial ML. We know that it's sort of a new frontier, we know that the vast majority of organizations out there don't have anything in place to protect their AI systems. Can you tell us a bit about this first scenario here? So evading and stealing AI models, what does that sort of look like in a hypothetical sense or in the real world, and then how do we use this tool to sort of test against it? Will Pearce: (11:59)Let me go backwards a little bit in your questions.Nic Fillingham: (12:01)Please. Yeah. Will Pearce: (12:02)So you mentioned that organizations don't have the tools to protect these systems.Nic Fillingham: (12:08)Right.Will Pearce: (12:08)That's only partly true, only because machine learning, the model itself is a very small part of that whole system, but there's a very mature information security presence around principles of least privilege, setting up servers, deploying end points. Like we know exactly there are very mature security processes that can already be attached to these things, the difference is because machine learning people aren't cued in to this, the security apparatus at a higher level, they're not aware that these things exist, right? So you're looking at ML engineers who are responsible for deploying an endpoint to, uh, you know, let's say a public site, but they're not aware that maybe the way they're deploying it, you know, they, they put secrets in the code or, or whatever. And that's kind of what this is about, is it is about marrying of traditional information security principles and this new technology, machine learning. 
Will Pearce: (13:07)So in terms of evading a model, I mean, what that looks like is basically you have a model that is responsible for taking input and making a decision based on that input. So the classic example is images, but, you know, if you think about an authentication system, you know, where it uses your face, you know, Windows Hello, maybe there is a different face that would also work on it. So evading a model is basically just giving an input such that you get the output that you want. So in the traditional information security sense, it would be like bypassing a malware classifier, bypassing a spam filter, so that's how you get your phishing in. Will Pearce: (13:43)Stealing is, it's basically turning machine learning on its head. So it's just reflecting the model back at itself. So all you do is you send in, you grab a dataset from online, there's a ton of them, for example, like an email dataset. So let's say you're attacking a spam filter. I did some research like before I got to Microsoft, it was on a spam filter. In their email headers, they leaked their spam scores. So you'd send an email and you'd get one back, and in the headers it would be like 900. Nic Fillingham: (14:12)Hmm. Will Pearce: (14:13)I recall thinking it was interesting. And it was in every email. So what we did is we grabbed a big dataset of emails, like the Enron dataset, and we just sent every single email, every single Enron email through this spam filter, and we had the emails already. And then for each email, we just collected the score, right? And then we just trained a local model to mimic the spam filter, and using that, we were able to sort of reverse that spam filter and figure out what words the model thought were bad and what words the model thought were good. Will Pearce: (14:46)And so Counterfit kind of automates that process. It gives you a framework in which you can put all that code into one place and then run that attack. 
The code we wrote for that particular attack, it was in like, you know, 15 different files, it was several different services. It wasn't pretty, or repeatable necessarily. And so Counterfit allows you to sort of aggregate all of the weird code that you might need and allow you to interface some target model with any number of algorithmic attacks, including, you know, model stealing. Nic Fillingham: (15:22)So I, I might've got this wrong Will, but, so if the goal is to stop adversaries from potentially stealing your model using this technique here where you, you'd basically grab a dataset, throw it at a, at a model, monitor the output and then go train your own model to mimic that. How does Counterfit help protect against that, or how does Counter- what kind of information or data does Co- Counterfit output to help you in that, in stopping model stealing? Will Pearce: (15:49)Um, (laughs) it, it doesn't.Nic Fillingham: (15:51)Oh.Will Pearce: (15:52)Counterfit is an offensive security tool. (laughs)Nic Fillingham: (15:55)Got it. Will Pearce: (15:56)So the primary piece being offense drives defense. Nic Fillingham: (16:00)Got it. Will Pearce: (16:01)So using this tool in that particular way, you can then test, right? In any number of scenarios, before you deploy a model, you can scan it and you, after you deploy a model, you can scan it, but you start to develop benchmarks. So in traditional information security, when you have a vulnerability scan, right? You scan the entire network, you get your list of critical, high, medium, low vulnerabilities. You then go start checking, you know, patching, check it, and then you re-scan the next month. This is a similar function. Natalia Godyla: (16:34)So we talked through two of the use cases here, the pen testing and red teaming, and then you just touched on vulnerability scanning. 
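Will's vulnerability-scan analogy — scan before you deploy, scan after, and track the number as a benchmark — might look something like the sketch below. The model under test, the random-noise perturbation, and the metric are all hypothetical stand-ins for a real Counterfit scan with published attack algorithms; the point is only the before/after benchmark workflow.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical model under test, standing in for whatever you are about to deploy.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def evasion_rate(model, X, noise=0.5, trials=20):
    """Crude 'scan': how often does random input noise flip a prediction?"""
    base = model.predict(X)
    flipped = 0.0
    for _ in range(trials):
        noisy = X + rng.normal(scale=noise, size=X.shape)
        flipped += np.mean(model.predict(noisy) != base)
    return flipped / trials

# Record this number before and after each deployment, the way you would
# track criticals and highs between monthly vulnerability re-scans.
print(f"evasion rate at noise=0.5: {evasion_rate(model, X):.2%}")
```

A real scan would swap the noise loop for actual attack algorithms, but the re-scan cadence Will describes is the same either way.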
Can you provide a little bit more color on how you intend security professionals to use it for logging, what's the, the purpose, the driver behind that use case? Will Pearce: (16:54)Yeah. So logging... (laughs) Going back to security foundations, currently machine learning teams, a lot of them don't log-Natalia Godyla: (17:00)Mm-hmm (affirmative).Will Pearce: (17:02)... or they, they don't explicitly log for the purpose of security. So they'll log telemetry data, they'll log usage data, but that doesn't feed any higher level security processes. So Counterfit has logging built in where it will track every input and every output, just as you would, you would put a l- a logging mechanism behind a model where you would track every input and every output. So we've built it in here so organizations can get some form of logging during an attack, right? So they could then turn those logs into some sort of detection pipeline, some sort of ability to detect a particular attack, but ideally organizations would log, right? They're gonna be logging anyway. And so I think it, in a lot of ways, it's just about getting machine learning people to start thinking about these security motions in a consistent way. So if you're gonna collect logs, do it in a way that's repeatable (laughs) and consistent and gives you the information that you need to, to do whatever you need to do, whether it's, you know, telemetry data or usage data or w- whatever it is.Nic Fillingham: (18:11)You know, you talked about a, a goal for Counterfit to sort of follow in the nature of Metasploit, and being, uh, a popular and, and powerful red teaming tool. What efforts are being made, or what's being done to ensure that this doesn't end up being an actual breach toolkit for adversaries? How do you toe that line of making a, a powerful tool for red teams who are ultimately trying to do good, and actually, you know, making it easier for adversaries to go out there and evade or steal models? 
Will Pearce: (18:39)I don't have a good answer for you. Well, I mean, in a lot of ways, you know, offense drives defense, right? So we think adversaries are gonna be doing this anyway. So in this way, if we can get a tool into people's hands that makes it easier for everybody (laughs) including adversaries, you know, we would hope that organizations would start putting mitigations in place for these things. If they see an uptick in attacks, they should do something about it, if they don't, then great, it's obviously not on the radar of attackers. And I would say currently it is not really on the radar of attackers. Nic Fillingham: (19:19)Well, not until this podcast comes out. Will Pearce: (19:21)Yeah, yeah. Exactly.Natalia Godyla: (19:21)(laughs)Will Pearce: (19:22)And so we're, yeah, I think we're maybe a little ahead of schedule just in terms of what this tool represents, and we might've missed the mark completely, right? Like we might be, we don't know if attackers are gonna go this route of attacking machine learning. There are certainly new attacks every year that come out, so the trend is up, but I think widespread abuse has yet to be seen, which I guess is the whole point here is to get ahead of that. Nic Fillingham: (19:51)Well, let me just recap to make sure I, I sort of understand this. So as someone red teaming and penetration testing AI machine learning systems, you had a lot of disparate scripts, a lot of disparate tools, a lot of disparate processes, you needed to bring them all together into a, into a single pane of glass, to use an overused, uh, analogy. So you created it first and foremost for you, then you realized it would be a powerful tool for, for others out there that are, that are trying to protect AI machine learning systems through red teaming, through, as you say, offense drives defense. 
Can you share any examples of how the, the tool, either the, the work that you've done in protecting ML systems at Microsoft or with customers or other projects, do you have any stories you can tell of how this tool has been used out in the wild and, and some of the things that it's done to help find vulnerabilities, help patch gaps? Yeah, what are some of the positive stories or positive outcomes? Will Pearce: (20:42)Yeah. I mean, in the wild, I don't think so. You know, it's like when I go back-Nic Fillingham: (20:46)(laughs)Will Pearce: (20:46)... to talk to my, my like traditional red team peers, for them, machine learning is still a meme in a lot of ways. So it's like they only hear about it in terms of, you know, they're only being sold it, right? Like they only see it in an EDR and it's like, okay, well, we've seen this story a million times. Like two years ago, it was application whitelisting. So it's gonna take, I think, a little bit to get on board, but there are a couple of use cases. There's one we did with the expense fraud where you would take a receipt and you would change a digit to be more, right? So you would spend 20 bucks, you get a receipt for 20 bucks, but you'd change the two to a three, then you would net $10.Will Pearce: (21:25)In a lot of systems, there's still like a human in the loop, so a lot of engines will have like a rule that says, if this is below 90% confidence, send it to a human, otherwise just trust the machine learning algorithm. There's a number of different NLP models that we've gone through, uh, with this where you can, you know, make algorithms say racist things or impolite things, and you can basically force it to do that. Nic Fillingham: (21:56)NLP is, uh, natural language processing? Will Pearce: (21:58)Mm-hmm (affirmative). Yeah. It's also neu- neuro linguistic programming-Nic Fillingham: (22:03)Okay. Okay.Will Pearce: (22:03)... and I, I think it's natural language processing. 
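The human-in-the-loop rule Will mentions — below 90% confidence, send it to a person, otherwise trust the model — is simple to sketch, and it makes the attacker's objective concrete: keep the tampered receipt's confidence at or above the threshold so no human ever looks at it. The threshold value and function names here are hypothetical.

```python
# Hypothetical routing rule: trust the model when it is confident,
# escalate to a reviewer when it is not.
AUTO_APPROVE_THRESHOLD = 0.90

def route(confidence, prediction):
    """Return who handles the item: the model's verdict, or a human reviewer."""
    if confidence >= AUTO_APPROVE_THRESHOLD:
        return prediction       # machine decides on its own
    return "human_review"       # below threshold: a person takes a look

# An attacker tampering with a receipt wants the first branch, not the second.
print(route(0.95, "approve"))   # -> approve
print(route(0.80, "approve"))   # -> human_review
```

From the red team's side, the attack succeeds when the doctored input both gets the wrong answer and stays confident enough to skip the reviewer.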
(laughs)Nic Fillingham: (22:04)But it's, it's sort of, it's sort of the processing of written or spoken word?Will Pearce: (22:08)Yup. Yeah, exactly. So have you, I'm sure you might've heard of GPT-3, OpenAI.Nic Fillingham: (22:11)Yes, we have.Will Pearce: (22:15)Yeah. So there's, there's a couple things there with the, like that dataset for example. They pulled everything from the internet, right? And it's like as much public data as they possibly could, but it's like, just because it was public doesn't mean it should have been public. So there's a number, an amount of PII that you can pull out of GPT-3 that, you know, organizations might not be aware exists inside the model. A lot of models like will memorize training data, and so, you know, when you deploy like an NLP model to an endpoint and you don't realize this, if that model has PII in it, you know, you're kind of exposing it to whoever has access to that endpoint. And that's, that's a new challenge for sure. Will Pearce: (23:02)It also, you know, if you have PII saved in your model, like it's easy to say a database has PII, this falls within a particular compliance boundary, but when you say, this model has PII, where does that fall? Does it fall inside of that same compliance boundary? Security would say yes, but a lot of machine learning data scientists, they're not there yet. And so, you know, you might have a model that is deployed that is backed by this NLP system where you can pull PII from, and Counterfit kind of helps automate this and helps me, you know, play and tweak and, you know, figure out what I need to send to the model to get the output that I want. Natalia Godyla: (23:45)How do you coordinate with teams inside Microsoft to build a feedback loop? I'm, I'm assuming you're, as you said, tweaking along the way, and with your findings, you've discovered vulnerabilities or opportunities to evolve the way that we're handling our AI systems. How do you work with teams to better the process? 
Will Pearce: (24:08)Yeah. It's report writing. (laughs)Natalia Godyla: (24:11)(laughs)Will Pearce: (24:12)So sometimes we reach out, you know, there's a particular service we wanna go after, maybe it has a high impact, a high value to us, you know, maybe there's something that we, we wanna do 'cause we think it's worth it for style points, so, you know, we wanna go after that. So we'll reach out and we'll contact them, like, hey, we're, as the trustworthy machine learning team, we wanna attack your model, we'll give you a report. Other times we'd go into the Azure website and I just look at all the products that exist and I just provision them into my, into our own tenant and attack them from there, and then write the report and send it over.Will Pearce: (24:50)So it usually depends. If it's a production system, I usually provision it if I can, and go after it that way. If it's not quite there yet, or it's, you know, a high impact use case, you know, for example, the PII one that we just talked about, we'll work directly with the team and kind of set up an official project. We have like rules of engagement, you know, there's a cadence, and in the end it's a report that basically states what we did, recommendations that we have, and a kind of a, a pat on the back and-Natalia Godyla: (25:23)(laughs)Will Pearce: (25:24)... good luck, not good luck, but, you know, reach out if you need anything kind of thing. And I would say, yeah, it's been positive. I think it's really difficult to show impact. So in a traditional information security sense, getting domain admin, you know, it's an easy way to show impact. Dumping a database full of PII, you know, it's an easy way to show impact, but, you know, when you, uh, change an image to make a dog look like a cat, and then you're like, okay, see, this is possible? Like it's a harder sell and it doesn't quite hit home. 
So, you know, a lot of the work done is really just trying to show impact and give teams just an easy way to see the risks that exist-Natalia Godyla: (26:11)Mm-hmm (affirmative).Will Pearce: (26:12)... without having to, not dumb it down, but without having to resort to toy examples. Nic Fillingham: (26:19)So are there folks out there Will listening to this podcast hearing about the Counterfit tool who may not think of themselves as sort of the target audience for this, you know, protecting AI and ML systems is, is obviously still very niche and red teaming AI and ML systems, it sounds like, even more so. Can you talk to us about some of the types of data scientists, security ops folks, what are some of the roles out there of people that should be taking a look at Counterfit and sort of thinking about the AI systems that might be in use in their organizations that need to be pen tested, vulnerability tested, logged, et cetera, et cetera, who, who needs to use this tool that maybe doesn't realize they need to use this tool?Will Pearce: (26:58)You know, really anybody using machine learning. But Microsoft has a mature information security program, a lot of places don't. So what this tool doesn't give is like, there's no model inventory, there's no tracking of assets. There's, there's none of th- those foundational security things that are, that would normally be in place, right? Like how do you know what to vulnerability scan in a traditional environment where you can either scan, right? You can just scan every internal IP address possible, you know, or you can pull it out of an asset inventory, right? Organizations, for their models, don't even have asset inventories yet. If there is a machine learning person who is wondering, you know, what is possible, you know, with this model, like what can I get it to do? 
Like those are the kinds of people, and it's just bringing it into their own process, their own machine learning development life cycle, and saying at the end of this, I'm gonna scan and see, see what's there. Will Pearce: (27:53)Or maybe they're the ones responsible for deploying models to a public endpoint, and they were like, you know what? Let's see what this thing kicks out, right? Let's, let's, let's see what Counterfit comes up with. We'll just point Counterfit at it, and if something falls out, like we'll deal with it then. But I don't know, from the security side, anytime you mention machine learning to security people, they, math, like they just don't wanna talk to you 'cause they assume machine learning means math. Nic Fillingham: (28:19)(laughs)Will Pearce: (28:20)And in a lot of ways-Nic Fillingham: (28:20)Math hard.Will Pearce: (28:21)... it does. Natalia Godyla: (28:21)(laughs)Will Pearce: (28:21)Yeah. And I, to be fair, I was maybe one of those people in the beginning, but I have always enjoyed like numbers and data and things like that. So this is kind of a, in some ways a dream, right? For me, because that's the things that I'm interested in. But I would say if there is an interest in data and numbers and watching what comes out, like it is a rabbit hole that just doesn't end, right? Like you can think of, I mean, in, in all the ways like attacks are, are just like this, like attackers need feedback, right? To, to be successful, and a machine learning model is the same way. It's like you input data, you get output, and then in the middle, there's some inference, there's some like black box that you have to like wonder what happens. Will Pearce: (29:08)And so I think in a lot of ways, security people already think that way. 
So for Counterfit, like if you have a product that you wanna bypass, if you have a spam filter you wanna bypass, like figure out how these, these algorithms that, you know, researchers built that you can use in your ops, and you'll find that fortunately, all the math is done for you and, and all you have to do is get your data in the right format and just let the math take care of itself. Nic Fillingham: (29:39)I wonder if you should make up some t-shirts or some stickers that say like, you know, just Counterfit it. Like should we verb-Natalia Godyla: (29:45)(laughs)Nic Fillingham: (29:45)... should we verb that now and then like put it all over the Black Hat conference and RSA and-Will Pearce: (29:50)Yeah.Nic Fillingham: (29:51)... get all the, get all the SecOps folks out there just, uh, just point Counterfit at it and see what happens. Will Pearce: (29:56)Yeah. Well, it's funny. So the spam filter attack that I mentioned earlier, the reason it's called Counterfit is because it is a, like a model stealing piece. So I think in some libraries like to fit a model is the term. Natalia Godyla: (30:11)Mm-hmm (affirmative).Will Pearce: (30:12)So it's like to Counterfit is to steal it. Nic Fillingham: (30:15)Very clever. I think you're, you're neck and neck with CyberBattleSim for-Natalia Godyla: (30:19)(laughs)Nic Fillingham: (30:19)... coolest, uh, ML tool name, uh, to come out of, of Microsoft. Will Pearce, thank you so much for joining us on Security Unlocked today. Before we wrap, before we let you go, tell us where our listeners can go to learn more about this project and/or potentially follow you on the inter webs.Will Pearce: (30:36)To get the tool, go to github.com/azure/counterfit, and there is a wiki, I highly recommend the wiki, and use Docker and/or Ubuntu, or if you're brave, you can install it on Windows. And I am on Twitter @Moohacks, which is...Nic Fillingham: (30:57)Moohacks as in M-O-O or M-U? What's Moohacks? 
Will Pearce: (30:59)Uh, M-O-O... I can't remember if I have the underscore, on my Git I have Moohacks. Nic Fillingham: (31:06)All right. What will we find if we follow you on Twitter, or is that an NSFW question? Will Pearce: (31:11)No, it's mostly, uh, machine learning things... Well, it's a good mix I think. Machine learning and, uh, cybersecurity research that I like. Nic Fillingham: (31:20)Sounds good. All right. Well, Will Pearce once again, thanks for being on Security Unlocked. Will Pearce: (31:23)Yeah. Thank you very much. Natalia Godyla: (31:25)Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode. Nic Fillingham: (31:36)And don't forget to tweet us @msftsecurity, or email us at firstname.lastname@example.org with topics you'd like to hear on a future episode. Until then, stay safe.Natalia Godyla: (31:47)Stay secure.