Security Unlocked

5/12/2021

Securing the Cloud with Mark Russinovich

Ep. 27
On this week’s Security Unlocked, we’re pulling a bait and switch! Instead of our regularly scheduled programming, we’re featuring the first episode of our new podcast, Security Unlocked: CISO Series with Bret Arsenault. Each episode features Microsoft CISO Bret Arsenault sitting down with other top techies at Microsoft and other companies across the industry. In its inaugural episode – which we’re featuring on this episode – Bret sits down with Mark Russinovich, Chief Technology Officer of Microsoft Azure. Mark has a unique perspective on cloud technologies and offers insight into the changes that have occurred over the past few years due to advancing technology and the unique challenges brought about during the coronavirus pandemic. Enjoy this first episode of the new series, and remember to subscribe so you catch all the rest that are yet to come.

In This Episode You Will Learn:
The initialism FFUUEE and why it’s important in understanding people’s resistance to adopting newer security capabilities
Mark Russinovich’s three points of advice for those looking to become more secure
Theories on improving MFA adoption across the board

Some Questions We Ask:
How do we think of cloud security now versus ten years ago?
What does a leading engineer think of moving toward a hybrid workforce?
How do you find and screen potential new team members in a remote world?

Resources:
CISO Series with Bret Arsenault: https://aka.ms/securityunlockedcisoseries
Bret Arsenault’s LinkedIn: https://www.linkedin.com/in/bret-arsenault-97593b60/
Mark Russinovich’s LinkedIn: https://www.linkedin.com/in/markrussinovich/
Nic Fillingham’s LinkedIn: https://www.linkedin.com/in/nicfill/
Natalia Godyla’s LinkedIn: https://www.linkedin.com/in/nataliagodyla/
5/5/2021

Ready or Not, Here A.I. Come!

Ep. 26
Remember the good ole days when we spent youthful hours playing hide and seek with our friends in the park? Well, it turns out that game of hide and seek isn’t just for humans anymore. Researchers have begun putting A.I. to the test by having it play this favorite childhood game over and over and having the software optimize its strategies through automated reinforcement training. In today’s episode, hosts Nic Fillingham and Natalia Godyla speak with Christian Seifert and Joshua Neil about their blog post, Gamifying machine learning for stronger security and AI models, and how Microsoft is releasing this new open-sourced code to help it learn and grow.

In This Episode, You Will Learn:
What is Microsoft’s CyberBattleSim?
What reinforcement learning is and how it is used in training A.I.
How the OpenAI Gym allowed for AI to be trained and rewarded for learning

Some Questions We Ask:
Is an A.I. threat actor science fiction or an incoming reality?
What are the next steps in training the A.I.?
Who was the CyberBattleSim created for?

Resources:
OpenAI Hide and Seek: OpenAI Plays Hide and Seek…and Breaks The Game! 🤖
Joshua and Christian’s blog post: Gamifying Machine Learning for Stronger Security and AI Models
Christian Seifert’s LinkedIn: https://www.linkedin.com/in/christian-seifert-phd-6080b51/
Joshua Neil’s LinkedIn: https://www.linkedin.com/in/josh-neil/
Nic Fillingham’s LinkedIn: https://www.linkedin.com/in/nicfill/
Natalia Godyla’s LinkedIn: https://www.linkedin.com/in/nataliagodyla/
Microsoft Security Blog: https://www.microsoft.com/security/blog/

Transcript
[Full transcript at https://aka.ms/securityunlockedep26]
Nic Filingham: Hello and welcome to Security Unlocked! A new podcast from Microsoft where we unlock insights from the latest in news and research from across Microsoft Security Engineering and Operations Teams. I'm Nic Filingham. Natalia Godyla: And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft Security. Deep dive into the newest threat intel, research, and data science. Nic Filingham: And profile some of the fascinating people working on artificial intelligence in Microsoft Security. Natalia Godyla: And now, let's unlock the pod. Nic Filingham: Hello, Natalia! Hello, listeners! Welcome to episode 26 of Security Unlocked. Natalia, how are you? Natalia Godyla: Thank you, Nic. And welcome to all our listeners for another episode of Security Unlocked. Today, we are chatting about gamifying machine learning, super cool, and we are joined by Christian Seifert and Joshua Neil who will share their research on building CyberBattleSim, which investigates how autonomous agents operate in a simulated enterprise environment by using high-level abstraction of computer networks and cyber-security concepts. I sounded very legit, but I did just read that directly from the blog. Nic Filingham: I was very impressed. Natalia Godyla: (laughs) Nic Filingham: If you had not said that you read that from the blog, I would've been like, "Wow". I would like to subscribe to a newsletter. Natalia Godyla: (laughs) Nic Filingham: But this is a great conversation with, with Christian and Joshua. We talked about what is reinforcement learning. Sort of as a concept and how that's gonna apply to security. Josh and Christian also walked us through sort of why this project was created and it's really to try and get ahead of a future where, you know, malicious actors have access to some level of automated, autonomous tooling.
Uh, and so, this is a new project to sort of see what a future might look like when there all these autonomous agents out there doing bad stuff in the cyber world.Natalia Godyla:And there are predecessors to this work, at least in other domains. So, they used a toolkit, a Python-based Open AI Gym interface to build this research project but there have been other applications in the past. OpenAI is, uh, well-known for a hide-and-seek. There is a video on YouTube that shows how the AI learned over time different ways to obstruct the agent and the simulated environment. Things like, blocking them off using some pieces of the wall or jumping over the wall.Nic Filingham:The only thing we should point out is that this CyberBattleSim is an open source project. It's up on GitHub and attained very much want researchers, and really anyone who's interested in this space to go and download it, go and run it, play around with it, and help make it better. And if you have feedback, let us know. There is contact information, uh, through the GitHub page but you can also contact us at Security Unlocked at Microsoft dot com and we can make sure you, uh, get in contact with the team. And with that, on with the pod?Natalia Godyla:On with the pod!Nic Filingham:Welcome to Security Unlocked, new guest, Christian Seifert. Thanks for joining us and welcome returning guest, Josh Neil, back to the podcast. Both of you, welcome. Thanks for being on Security Unlocked.Christian Seifert:Thanks for having us!Joshua Neil:And thanks, Nic.Nic Filingham:Christian, I think as a, as a new guest on the podcast, could we get a little introduction for our listeners? Tell us about, uh, what you do at Microsoft. Tell us about what a day to day look like for you.Christian Seifert:Sure, so I'm a, uh, research lead on the Security and Compliance team. So our overall research team supports a broad range of enterprising consumer products and services in the security space. My team in particular is focused on protecting users from a social engineering attack. So, uh, think of, like, fishing mails for instance. So we're supporting Microsoft Defender for Office and, um, Microsoft Edge browser.Nic Filingham:Got it, and Josh, folks are obviously familiar with you from previous episodes but a, a quick re-intro would be great. Joshua Neil:Thanks. I currently lead the Data Science team supporting Microsoft threat experts, which is our managed hunting service, as well as helping general res... cyber security research for the team.Nic Filingham:Fantastic, uh, again, thank you both for your time. So, today in the podcast, we're gonna talk about a blog post that came out earlier in this month, on April 8, called Gamifying Machine Learning for Stronger Security in AI Models, where you talk about a new project that has sort of just gone live called CyberBattleSim. First off, congratulations on maybe the coolest name? For, uh, sort of a security research project? So, like, I think, you know, just hats off there. I don't who came up with the name but, but great job on that. Second of all, you know, Christian if, if I could start with you, could you give us a sort of an introduction or an overview what is CyberBattleSim and what is discussed in this blog post?Christian Seifert:As I... before talking about the, the simulator, uh, the... let me, let me kind of take a step back and first talk about what we tried to accomplish here and, and why. 
So, if you think about the security space and, and machine learning in particular, a large portion of machine-learning systems utilized supervised, uh, classifiers. And here, essentially, what we have is, is kinda a labeled data set. So, uh, for example, a set of mails that we label as fish and good. And then, we extract, uh, threat-relevant features. Think of, like, maybe particular words in the body, or header values we believe that are well-suited to differentiate bad mails from good mails. And then our classifiers able to generalize and able to classify new mails that come in. Christian Seifert:There's a few, uh, aspects to consider here. So, first of all, the classifier generalizes based on the data that we present to it. So, it's not able to identify completely unknown mails. Christian Seifert:Second, is that usually a supervised classification approach is, is biased because we are programming, essentially, the, the classifier and what it, uh, should do. And we're utilizing domain expertise, red teaming to kind of figure out what our threat-relevant features, and so there's bias in that. Christian Seifert:And third, a classifier of who has needs to have the data in order to make an appropriate classification. So, if I have classifier that classifies fish mail based on the, the content of the mail but there is the threat-relevant features are in the header, then that classifier needs to have those values as well in order to make that classification. And so, my point is these classifiers are not well-suited to uncover the unknown unknowns. Anything that it has not seen, kinda new type of attack, it is really blind to it. It generalizes over data that, that we present to it. Christian Seifert:And so, what we try to do is to build a system that is able to uncover unknown attacks with the ultimate goal then to, of course, develop autonomous defensive component to defend against those attacks. So, that gives it a little bit of context on why we're pursuing this effort. And this was inspired by reinforcement learning research and the broader research community, mostly that is currently applied kinda in the gaming context. Christian Seifert:So OpenAI actually came out with a neat video a couple of years ago called Hide and Seek. Uh, that video is available on YouTube. I certainly encourage listeners to check it out, but basically it was a game of laser tag where you had a kinda, uh, a red team and a blue team, uh, play the game of laser tag against each other. And at first they, of course, randomly kind of shoot in the air and run around and there is really no order to the chaos. But eventually, that system learned that, “Hey, if a red team member shoots a blue team member, there's a reward.” and the blue team member also learned while running away from the red team member is, is probably a good thing to do. Christian Seifert:And so, OpenAI kinda, uh, established the system and had the blue team and the red team play against each other, and eventually what that led to is really neat strategies that you and I probably wouldn't have come up with. 'Cause what the AI system does, it explores the entire possible actions base and as result comes up with some unexpected strategies. So for instance, uh, there was a blue team member that kinda hid in a room and then a red team guy figured, “Hey, if I jump on a block then I can surf in that environment and get into the room and shoot the blue team member”. 
So that was a little bit an inspiration because we wanted to also uncover these unknown Christian Seifert:Unknownst in the security context.Nic Filingham:Got it. That's great context. Thank you Christian. I think I have seen that video, is that the one where one of the many unexpected outcomes was the, like, one of the, the, blue or red team players, like, managed to sort of, like, pick up walls and used them as shields and then create ramps to get into, like, hidden parts of the map? Uh, uh, am I thinking about the right video? Christian Seifert:Yes, that's the right video. Nic Filingham:Got it. So the whole idea was that that was an experiment in, in understanding how finding the unknown unknowns, using this game, sort of, this lazar tag, sort of, gaming space. Is, is that accurate?Christian Seifert:That's right, and so, they utilized reinforcement learning in order to train those agent. Another example is, uh, DeepMind's AlphaGo Zero, playing the game of Go, and, and here, again, kind of, two players, two AI systems that play against each other, and, over time, really develop new strategies on how to play the game of Go that, you know, humans players have, have not come up with. Christian Seifert:And it, eventually, lead to a system that achieved superhuman performance and able to beat the champion, Lisa Dole, and I think that was back in 2017. So, really inspiring work, both by OpenAI and DeepMind.Nic Filingham:Got it. I wonder, Josh, is there anything you'd like to- before we, sort of, jump into the content of the blog and, and CyberBattleSim, is there anything you'd like to add from your perspective to, to the context that Christian set us up on? Joshua Neil:Yeah. Thanks, Nic. I, I mean, I think we were really excited about this because... I think we all think this is a natural evolution of, of our adversaries, so, so, currently, our adversaries, the more sophisticated ones, are primarily using humans to attack our enterprises and, that means they're slow and they can make mistakes and they don't learn from the large amount of data that's there in terms of how to do attacks better, because they're humans.Joshua Neil:But I think it's natural, and we just see this, uh, everywhere and, all of technology is that people are bringing in, you know, methods to learn from the data and make decisions automatically, and it's- so it's a natural evolution to say that attackers will be writing code to create autonomous attack capabilities that learn while they're in the enterprise, that piece of software that's launched against the enterprise as an attack, will observe its environment and make decisions on the fly, automatically, from code. Joshua Neil:As a result, that's a frightening proposition because, I think the speed at which these attacks will proceed will be a lot, you know, a lot more quick, but also, being able to use the data to learn effective techniques that get around defenses, you know, we just see data science and machine learning and artificial intelligence doing this all over the place and it's very effective that the ability to consume a large amount of data and make decisions on it, that's what machine learning is all about. And so, we at Microsoft are interested in exploring this ourselves because we feel like the threat is coming and, well, let's get ahead of it, right? 
Let's go experiment with automated learning methods for attacks and, and obviously, in the end, for defense that, by implementing attack methods that learn, we then can implement defensive methods that will, that will preempt what the real adversaries are doing, eventually, against our customers. Joshua Neil: So, I think that's, sort of, a philosophical thing. And then, uh, I love the OpenAI Hide-and-Seek example because, you know, the analogy is; Imagine that instead of, they're in a room with, um, walls and, and stuff, they're on a computer network, and the computer network has machines, it has applications, it has email accounts, it has users, it's got cloud applications, but, in the end, you know, an attacker is moving through an environment, getting blocked in various ways by defenses, learning about those blockings and detections and things and finding gaps that they can move through in, in very similar ways. So, I just, sort of, drawing that analogy back, Hide-and-Seek, it is what we're trying to do in cyber defense, you know, is, is Hide-and-Seek. And so the, I think the analogy is very strong. Nic Filingham: Josh, I just wanna quickly clarify on something that, that you said there. So, it sounds like what you're saying is that, while, sort of, automated AI-based attacking, attackers or attacking agents maybe aren't quite prevalent yet, they're, they're coming, and so, a big part of this work is about prepping for that and getting ahead of it. Is, is, is that correct? Joshua Neil: That's correct. I, I'm not aware of sophisticated attack machinery that's being launched against our enter- our customers yet. I haven't seen it, maybe others have. I think it's a natural thing, it's coming, and we better be ready. Christian Seifert: I mean, we, we see some of it already, uh, in terms of adversarial machine learning, where, uh, our machine learning systems are getting attacked, where, maybe the input is manipulated in a way that leads to a misclassification. Most of that is, is currently more, being explored in the research community. Natalia Godyla: How did you apply reinforcement learning? How did you build BattleSim? In the blog you described mapping some of the core concepts of reinforcement learning to CyberBattleSim, such as the environment, the action space, the observation space and the reward. Can you talk us through how you translated that to security? Christian Seifert: Yeah. So, so first let, let me talk about reinforcement learning to make sure, uh, listeners understand, kinda, how that works. So, as I mentioned, uh, earlier in the supervised case, we feed a labeled data set to a learner, uh, and then it's able to generalize, whereas reinforcement learning works very differently, where you have an agent that sits within an environment, and the agent is, essentially, able to generate the data itself by exploring that environment. Christian Seifert: So, think of an agent in a computer network, that agent could, first of all, scan the network to, maybe, uncover nodes and then there are, maybe, uh, actions around interacting with the nodes that it uncovers. And based on those interactions, the agent will, uh, receive a reward. That reward actually may be delayed by, like, there could be many, many steps that the agent has to take before the reward, uh, manifests itself. And so, that's, kinda, how the agent learns, it's able to interact in that environment and then able to receive a reward.
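To make the agent-environment loop Christian describes concrete, here is a minimal, illustrative sketch of the interface CyberBattleSim plugs into: the OpenAI Gym toolkit. It uses a stock Gym environment as a stand-in and a purely random policy (the "uninformed" baseline mentioned later in the episode); the CyberBattleSim repository registers its own environment ids, which are not reproduced here, and the snippet follows the classic Gym reset/step signature that was current when the project shipped.

# Minimal Gym-style episode loop. Any Gym-compatible environment, including the
# ones CyberBattleSim registers, exposes reset()/step() in this same shape.
# Written against the classic (pre-0.26) Gym API; newer Gym/Gymnasium releases
# return slightly different tuples from reset() and step().
import gym

env = gym.make("CartPole-v1")  # stand-in environment; swap in an id from the CyberBattleSim repo

observation = env.reset()
done = False
total_reward = 0.0
while not done:
    # A trained agent would choose an action from its learned policy;
    # sampling at random is the uninformed baseline.
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    total_reward += reward

print("episode reward:", total_reward)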
And so that's, kinda, what, uh, stands, uh, within the core of the, the CyberBAttleSim, because William Bloom, who is the, the brains behind the simulation, has created an environment that is compatible with, uh, common, uh, reinforcement learning tool sets, namely, the OpenAI Gym, that allows you to train agents in that environment.Christian Seifert:And so, the CyberBattleSim represents a simple computer network. So, think of a set of computer nodes, uh, the, the nodes represent a computer, um... Windows, Mac OS, sequel server, and then every node exposes a set of vulnerabilities that the agent could potentially exploit. And so, then, as, kind of, the agent is dropped into that environment, the agent needs to, first, uncover those nodes, so there's a set of actions that allows to explore the state space. Overall, the environment has a, a limited observability, as the agent gets dropped into the environment, you're not necessarily, uh, giving that agent the entire network topology, uh, the agent first needs to uncover that by exploring the network, exploiting nodes, from those nodes, further explore the network and, essentially, laterally move across the network to achieve a goal that we give it to receive that final reward, that allows the agent to learn.Natalia Godyla:And, if I understand correctly, many of the variables were predetermined, such as, the network topology and the vulnerabilities, and, in addition, you tested different environments with different set variables, so how did you determine the different environments that you would test and, within that particular environment, what factors were predetermined, and what those predetermined factors would be.Christian Seifert:So we, we determined that based on the domaine expertise that exists Christian Seifert:... is within the team, so we have, uh, security researchers that are on a Red Team that kind of do that on a day-to-day basis to penetration tests environments. And so, those folks provided input on how to structure that environment, what nodes should be represented, what vulnerabilities should be exposed, what actions the agent is able to take in- in terms of interacting and exploring that, uh, network. So our Red Team experts provided that information. Nic Filingham:I wonder, Christian, if you could confirm for me. So there are elements here in CyberBattleSim that are fixed and predetermined. What elements are not? And so, I guess my question here is if I am someone interfacing with the CyberBattleSim, what changes every time? How would you sorta define the game component in terms of what am I gonna have to try and do differently every time? Christian Seifert:So the- the CyberBattleSim is this parametrized, where you can start it up in a way that the network essentially stays constant over time. So you're able to train an agent. And so, the network size is- is something that is dynamic, that you can, uh, specify upon startup. And then also kinda the node composition, as well as ... So whether ... how many Windows 10 machines you have versus [inaudible 00:19:15] servers, as well as the type of vulnerabilities that are associated with each of those nodes. Nic Filingham:Got it. So every time you- you establish the simulation, it creates those parameters and sort of locks them for the duration of the simulation. But you don't know ... The agent doesn't know in advance what they will d- they will be. The agent has to go through those processes of discovery and reinforcement learning. Christian Seifert:Absolutely. 
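As an aside, the start-up parameters Christian just listed (network size, node composition, and the vulnerabilities attached to each node) can be pictured as a small declarative description of the environment that is fixed for the duration of a run. The sketch below is purely hypothetical Python for illustration, not the actual CyberBattleSim API; every class and field name here is invented.

# Hypothetical illustration only: not CyberBattleSim's real data model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    platform: str                          # e.g. "Windows10", "Linux", "SQLServer"
    vulnerabilities: List[str] = field(default_factory=list)
    discovered: bool = False               # the agent starts with limited observability

# A toy topology chosen at start-up and held constant for the whole simulation run.
network = {
    "client01": Node("Windows10", ["RdpBruteForce"]),
    "client02": Node("Windows10", ["CachedCredentialDump"]),
    "dbserver": Node("SQLServer", ["WeakAdminPassword"]),
}

def visible_nodes(net):
    # The agent can only act on nodes it has already uncovered by exploring.
    return [name for name, node in net.items() if node.discovered]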
And- and one- one tricky part within reinforcement learning is- is generalizability, right? When you train an agent on Network A, it may be able to learn how to outperform a Red Team member. But if you then change the network topology, the agent may completely flail and not able to perform very well at all and needs to kind of re- retrain again. And that- that's a common problem within the- the re- reinforcement learning research community. Natalia Godyla:In the blog you also noted a few opportunities for improvement, such as building a more realistic model of the simulation. The simplistic model served its purpose, but as you're opening the project to the broader community, it seems l- that you're endeavoring to partner with the other researchers to create a more realistic environment. Have you given some early thought as to how to potentially make the simulation more real over time? Christian Seifert:Absolutely. There is a long list of- of things that we, uh, need to think about. I mean, uh, network size is- is one component. Being able to simulate a- a regular user in that network environment, dynamic aspects of the network environment, where a node essentially is added to the network and then disappears from the network. Uh, all those components are currently not captured in the simulation as it stands today. And the regular user component is an important one because what you can imagine is if we have an attacker that is able to exploit the network and then you have a defender agent within that network as well, if there is no user component, you can very easily secure that network by essentially turning off all the nodes. Christian Seifert:So in- a defender agent needs to also optimize, uh, to keep the productivity of the users that are existing on the network high, which is currently not- not incorporated in- in the simulation. Nic Filingham:Oh, that's w- that's amazing. So there could be, you know, sort of a future iteration, sort of a n- network or environment productivity, like, score or- or even a dial, and you have to sort of keep it above a particular threshold while you are also thwarting the advances of the- of the agent. Christian Seifert:Absolutely. And I mean, that is, I think, a common trade off in the security space, right? There are certain security m-, uh, measures that- that make a network much more secure. Think of like two-factor authentication. But it does u- add some user friction, right? And so, today we're- we're walking that balance, but I'm hoping that there may be new strategies, not just on the attacker's side, but also on the defender's side, that we can uncover that is able to provide higher level of security while keeping productivity high. Nic Filingham:I think you- you- you have covered this, but I- I'd like to ask it again, just to sort of be crystal clear for our audience. So who is the CyberBattleSim for? Is it for Red Teams? Is it for Blue Teams? Is it for students that are, you know, learning about this space? Could you walk us through some of the types of, you know, people and- and roles that are gonna use CyberBattleSim?Christian Seifert:I mean, I think that the CyberBattleSim today is- is quite simplistic. It is a simulated environment. It is not ... It'-s it's modeled after a real world network, but it is far from being a real world network. So it's, uh, simplistic. It's simulated, which gives us some advantages in terms of, uh, scalability and that learning environment. 
And so at this point in time, I would say, uh the simulation is really geared towards, uh, the research community. There's a lot of research being done in reinforcement learning. A lot of research is focused on games. Because if you think about a game, that is just another simulated environment. And what we're intending to do here with- with some of the open source releases is really put the spotlight on the security problem. And we're hoping that the- the reinforcement learning researchers and the research community at large will pay more attention to this problem in the security domain. Nic Filingham:It's currently sort of more targeted, as you say, as- as researchers, as sort of a research tool. For it to be something that Red Teams and Blue Teams might want to look at adopting, is that somewhere on a road map. For example, if- if you had the ability to move it out of the simulation and into sort of a- a- a VM space or virtual space or perhaps add the ability for users to recreate their own network topology, is that somewhere on your- your wishlist? Christian Seifert:Absolutely. I think there's certainly the goal to eventually have these, uh, autonomous defensive agent deployed in real world environments. And so in order to get to that, simulation needs to become more and more realistic in order to achieve that. Joshua Neil:There's a lot of work to be done there. 'Cause reinforcement learning on graphs, big networks, i- is computationally e- expensive. And just a lot of raw research, mathematics and computing that needs to be done to get to that real- real world setting. And security research. And in incorporating the knowledge of these constraints and goals and rewards and things that ... T- that takes a lot of domain research and getting- getting the- the security situation realistic. So it's hard. Christian Seifert:In the simulation today, it provides the environment and ability for us to train a Red Team agent. So an agent that attacks the environment. Today, the defender is very simplistic, modeled probabilistically around cleaning up machines that have been exploited. So as kinda the next point on the wishlist is really getting to a point where we have the Red Team agent play against a Blue Team agent and kinda play back and forth and see kinda how that influences the dynamic of the game. Natalia Godyla:So Christian, you noted one of the advantages of the abstraction was that it wasn't directly applicable to the real world. And because it wasn't approved as a safeguard against nefarious actors who might use CyberBattleSim for the wrong reason. As you're thinking about the future of the project, how do you plan to mitigate this challenge as you drive towards more realism in the simulation? Christian Seifert:That is certainly a- a- a risk of this sort of research. I think we are still at the early stages, so I think that risk is- is really nonexistent as it stands right now. But I think it can become a risk as the simulation becomes more sophisticated and realistic. Now, we at Microsoft have the responsible AI effort that is being led at the corporate level that looks at, you know, safety, reliability, transparency, accountability, e- et cetera, as kind of principles that we need to incorporate into our AI systems. And we, early on, engaged the proper committees to help us shape the- the solution in a responsible fashion. 
And so at this point in time, there weren't really any concerns, but, uh, as the simulation evolves and becomes more realistic, I very much expect that we, Christian Seifert:... be, uh, need to employ particular safeguards to prevent abuse. Nic Filingham:And so without giving away the battle plan here, wh- what are some other avenues that are being, uh, explored here as part of this trying to get ahead of this eventual point in the future, where there are automated agents out there in the wild? Joshua Neil:This is the- the core effort that we're making, and it's hard enough. I'll also say I think it's important for security folks like us, especially Microsoft, to try hard things and to try to break new ground and innovation to protect our customers and really the world. And if we only focus on short-term product enhancements, the adversaries will continue to take advantage of our customers' enterprises, and we really do need to be taking these kind of risks. May not work. It's too ... It's really, really hard. And t- and doing and in- in purposefully endeavoring to- to- to tackle really hard problems is- is necessary to get to the next level of innovation that we have to get to. Christian Seifert:And let me add to that. Like, we have a lot of capabilities and expertise at Microsoft. But in the security space, there are many, many challenges. And so I don't think we can do it alone. Um, and so we also need to kinda put a spotlight on the problem and encourage the broader community to help solve these problems with us. And so there's a variety of efforts that we have pursued over the last, uh, couple of years to do exactly that. So, about two years ago we published a [inaudible 00:28:52] data science competition, where we provided a dataset to the broader community, with a problem around, uh, malware classification and machine risk identification and basically asked the community, "Hey, solve this problem." And there was, you know, prize money associated with it. But I really liked that approach because we have ... Again, we have a lot of d- expertise on the team, but we're also a little bit biased, right, in- in terms of kinda the type of people that we have, uh, and the expertise that we have. Christian Seifert:If you present a problem to the broader research community, you'll get a very different approaches on how people solve the problems. Most likely from com- kind of domains that are not security-related. Other example is an RFP. So we funded, uh, several research projects last year. I think it was, uh, $450,000 worth of research projects where, again, we kind of laid out, "Here are some problems that are of interest that we wanna put the spotlight on, and then support the- the research community p- to pursue research in that area." Nic Filingham:So what kind of ... You know, you talk about it being, uh, an area that we all sort of collectively have to contribute to and sort of get b- behind. Folks listening to the podcast right now, going and reading the blog. Would you like everyone to go and- and- and spin up CyberBattleSim and- and give it a shot, and then once they have ... Tell us about the- the types of work or feedback you'd like to see. So it's up on GitHub. What kind of contributions or- or feedback here are you looking for from- from the community? Christian Seifert:I mean, I'd really love to have, uh, reinforcement learning researchers that have done research in this space work with the CyberBattleSim. 
Kinda going back to the problem that I mentioned earlier, where how can we build agents that are generalizable in a way that they're able to operate on different network topology, different network configuration, I think is an- an- an exciting area, uh, that I'd love to see, uh, the research community tackle. Second portion is- is really enhancing the simulation. I mentioned a whole slew of features that I think would be beneficial to make it more realistic, and then also kinda tackle the problem of- of negatively impacting potential productivities of- of users that operate on that network. So enhancing the- the simulation itself is another aspect. Nic Filingham:Josh, anything you wanted to add to that? Joshua Neil:Yeah, I mean, I- I think those were the- the major audiences we're hoping for feedback from. But a- al- also like Christian said, if a psychologist comes and looks at this and has an idea, send us an email or something. You know, that multidisciplinary advantage we get from putting this out in the open means we're anticipating surprises. And we want those. We want that diversity of thought and approach. A physicist, "You know, this looks like a black hole and here's the m- ..." Who knows? You know, but that's- that's the kind of-Nic Filingham:Everything's a black hole to a physicist- Joshua Neil:(laughs) Yeah. Nic Filingham:... so that's, uh ... Joshua Neil:So, you know, I think that diversity of thinking is what we really solicit. Just take a look, yeah. Anybody listening. Download it. Play with it. Send us an email. We're doing this so that we get your- your ideas and thinking, for us and for the whole community. Because I think we- we also believe that good security, uh, next generation security is developed by everybody, not just Microsoft. And that there is a- there is a good reason to uplift all of humanity's capability to protect themselves, for Microsoft but for everybody, you know? Natalia Godyla:So Christian, what are the baseline results? How long does it take an agent to get to the desired outcome? Christian Seifert:So the s- simulation is designed in a way that also allows humans to play the game. So we had one of our Red Teamers to actually play the game and it took that person about 50 operations to compromise the entire network. Now when we take a- a random agent that kinda uninformed takes random actions on the network, it takes about 500 steps. So that's kind of the- the lower baseline for an agent. And then we trained, uh, a Deep Q, uh, reinforcement learning agent, and it was able to accomplish, uh, the human baseline after about 50, uh, training iterations. Again, network is quite simple. I wouldn't expect that to hold, uh, as kinda the- the simulation scales and becomes more complex, but that was, uh, certainly an encouraging first result. Joshua Neil:And I think the- the significant thing there is, even if the computer is- takes more steps than the human, well, we can make computers run fast, right? We can do millions of iterations way faster than a- than a human and they're cheaper than humans, et cetera. It's automation. Nic Filingham:Is there a point at which the automated agent gets too good, or- or is there sort of a ... What would actually be the definition of almost a failure in this experiment, to some degree? Joshua Neil:I think one- one is to- to sort of interpret your question as it could be overfed. That is, if it's too good, it's too specific and not generalized. 
And as soon as you throw some different set of constraints or network at it, it fails. So I think that's a- that's a real metric of the performances. Okay, it- it learned on this situation, but how well does it do on the next one? Nic Filingham:Is there anything else, uh, either of you would like to add before we wrap up here? I feel like I've covered a lot of ground. I'm gonna go download CyberBattleSim and- and try and work out how to execute it. But a- anything you'd like to add, Christian? Christian Seifert:No, not from me. It was, uh, great talking to you.Natalia Godyla:Well, thank you Josh and Christian, for joining us on the show today. It was a pleasure. Christian Seifert:Oh, thanks so much. Joshua Neil:Yeah, thanks so much. Lots of fun. Natalia Godyla:Well, we had a great time unlocking insights into security, from research to artificial intelligence. Keep an eye out for our next episode. Nic Filingham:And don't forget to tweet us at MSFTSecurity, or email us at securityunlocked@microsoft.com, with topics you'd like to hear on a future episode. Until then, stay safe. Natalia Godyla:Stay secure.
4/28/2021

Knowing Your Enemy: Anticipating Attackers’ Next Moves

Ep. 25
Anyone who’s ever watched boxing knows that great reflexes can be the difference between a championship belt and a black eye. The flexing of an opponent’s shoulder, the pivot of their hip – a good boxer will know enough not only to predict and avoid the incoming upper-cut, but to turn the attack back on their opponent. Microsoft’s newest capabilities in Defender put cyber attackers in the ring and predict their next attacks as the fight is happening. On today’s episode, hosts Nic Fillingham and Natalia Godyla speak with Cole Sodja, Melissa Turcotte, and Justin Carroll (and maybe even a secret, fourth guest!) about their post on the Microsoft Security blog covering the new capability of using A.I. to see the attacker’s next move.

In This Episode, You Will Learn:
• What kind of data is needed for this level of threat detection and prevention?
• The crucial nature of probabilistic graphical modeling in this process
• The synergistic relationship between the automated capabilities and the human analyst

Some Questions We Ask:
• What kind of modeling is used and why?
• What does the feedback loop between program and analyst look like?
• What are the steps taken to identify these attacks?

Resources:
Justin’s, Melissa’s, and Cole’s blog post: https://www.microsoft.com/security/blog/2021/04/01/automating-threat-actor-tracking-understanding-attacker-behavior-for-intelligence-and-contextual-alerting/
Justin Carroll’s LinkedIn: https://www.linkedin.com/in/justin-carroll-20616574/
Melissa Turcotte’s LinkedIn: https://www.linkedin.com/in/mturcotte/
Cole Sodja’s LinkedIn: https://www.linkedin.com/in/cole-sodja-a255361b/
Joshua Neil’s LinkedIn: https://www.linkedin.com/in/josh-neil/
Nic Fillingham’s LinkedIn: https://www.linkedin.com/in/nicfill/
Natalia Godyla’s LinkedIn: https://www.linkedin.com/in/nataliagodyla/

Transcript
[Full transcript at https://aka.ms/SecurityUnlockedEp25]
Nic Fillingham: Hello, and welcome to Security Unlocked, a new podcast from Microsoft, where we unlock insights from the latest in news and research from across Microsoft Security engineering and operations teams. I'm Nic Fillingham. Natalia Godyla: And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft Security, deep dive into the newest threat intel, research and data science. Nic Fillingham: And profile some of the fascinating people working on artificial intelligence in Microsoft Security. Natalia Godyla: And now, let's unlock the pod. Welcome, everyone, to another episode of Security Unlocked, and hello, Nic, how's it going? Nic Fillingham: It's going well, good to see you on the other side of this Teams call. Although, you and I were in person not 24 hours ago. You were here in Seattle, we were filming some more episodes of the Security Show. I don't think we've really given listeners of the podcast a full, meaty introduction to the Security Show, have we? Do you wanna let listeners know what they can find? Natalia Godyla: We play games and hang out with experts in the industry and we've done everything from building robots with folks, to building blocks, to painting our nails. You can find the Security Show on our YouTube channel, so, YouTube.com/MicrosoftSecurity or you can go to aka.ms/securityshow. We talk with Chris Wysopal, the CTO and co-founder of Veracode on modern secure software development, and Dave Kennedy, who comes to talk to us about SecOps and everything you need for a survival kit in SecOps, so come check them out. Nic Fillingham: Bad news is you, you have to deal with, uh, Natalia and I on another, uh, media format.
But before you go there, make sure you listen to today's episode of Security Unlocked. We have a couple of returning guests. We have Cole and Justin, who have been on before, as well as Josh Neil, who comes on in the, in the last few minutes. And new guest, Melissa. They're all from the Microsoft 365 Defender research team, and they all co-authored a blog from April 1st called Automating Threat Actor Tracking, Understanding Attacker Behavior for Intelligence and Contextual Alerting, which is exactly what it is but I think it buries the lead. Natalia, you had a great TL;DR, what did they do?Natalia Godyla:The team used statistics to predict the threat actor group and the next stage in the attack and really early in the attack, so that we could identify the attack and inform customers so that they could stop it. I think what's really incredible here is, not only the ability to predict that information, but to just do it so early in kill chain. Nic Fillingham:Within two minutes after an attack begin, using this model, Microsoft threat experts were able to send a notification to the customer to let them know an attack was underway. The customer was able to do, you know, the necessary things to get that attack shut down. We'd love, as always, your feedback. Send us emails, securityunlocked@microsoft.com. Hit us up on the Twitters. On with the pod. Natalia Godyla:On with the pod. Nic Fillingham:Well, welcome back to the Security Unlocked podcast, Cole and Justin, and welcome to the Security Unlocked podcast, Melissa. Thanks for joining us today. We have three wonderful guests, with maybe a, a fourth special guest appearing at the end. And today we're gonna be talking about a blog post appearing on the Security blog from April the 1st, called Automating Threat Actor Tracking, Understanding Attacker Behavior for Intelligence and Contextual Alerting. All of the authors from that blog are here with us. Cole, if I could start with you, if you could sort of reintroduce yourself to the audience, give us a little bit, uh, about your role, what you do at Microsoft, and then perhaps hand off to one of your colleagues for the next intro.Cole Sodja:Sure. Will do, thank you. So, hi, I'm Cole. I work in the Microsoft 356 Defender group. I'm a statistician. Primarily my responsibilities are driving, kind of, research and innovation in general, with supporting threat analytics, threat hunting, threat research in general. Yeah, been doing that for about three years now, and I love it, and I that's a little bit about myself, I'll hand it over to Melissa. Melissa Turcotte:All right. My name's Melissa, I work with Cole, so in the same group, Microsoft 365 Defender. I'm also a statistician by background. I've been in the cyber domain for about probably seven years now. I was working for Department of Energy research laboratory in their cyber research group for five years, and I joined Microsoft a year ago. I like all sorts of problems related to cyber. My expertise probably would be in anomaly detection, but anything related to cyber, and there's data in a problem, I like to be involved.Nic Fillingham:And Justin.Justin Carroll:Hey. I also work in the Microsoft 365 Defender team, doing threat intelligence. 
My main focus is uncovering new threats and actor groups and understanding what they're doing, different modifications to how they're conducting their attacks, and the outcomes of those attacks, and then figuring out the most effective ways to either, communicate that out to customers or action on detection capabilities to stop them from succeeding.Nic Fillingham:Listeners of the podcast will note that you have a super sweet ninja turtles tattoo, is that correct? Justin Carroll:This is accurate, this is definitely accurate. Nic Fillingham:And, and we may or may not have a super secret fourth guest on this episode, who may join us towards the end, who you would, you would know from an very early episode of the podcast, but perhaps we'll keep them secret until the very end. Thank you all for joining us, thank you for your time. Again, we're referring to a, a blog post that, that all of you authored from April 1st. This is a, quite a complex, and, and sort of technical blog post, which I know a lot of our audience will love. Nic Fillingham:I got a little lost in the math, but I, I absolutely was enthralled by what you all have undertaken here. Cole, if I could start with you, can you give us, give us an overview of what's covered in this blog post, and sort of what this project was, how you tackled it, and what we're gonna talk about, uh, on this episode today.Cole Sodja:Yeah. So if I step back, being someone kind of still fairly new in learning, uh, to cyber security, uh, I approached things pretty much with just using data, right? Doing data driven imprints, as I'd say. And through my research, what I started to, um, kinda ask myself is, can we kinda get ahead of cyber security attacks, you know, from a post-breach perspective? Once we see an adversary in a network, can we start to make some predictions, basically, on what they're likely gonna do? Who is the adversary, or is it human operated, is it an automated script, for example. And then if we recognize the adversary, kinda recognize their tactics, their techniques, their procedures, can we say, okay, we're, we're likely gonna see they're gonna ransom this enterprise, for example.Cole Sodja:So I tried to look at it as more of a data mining exercise initially, it's like, can I recognize these type of patterns, and then how predictive are these patterns that we're seeing in terms of what likely is gonna occur. Or put it another way, what type of threat is this, essentially, to the enterprise? So, so that's kinda the background, the motivation. Now, when I started this project, back with Justin and then with Melissa, it started really as let's look for particular, uh, threat actors that we're aware of, that we recognize, that we know about, and see, like, can we start, from a data perspective, classifying okay, is it this group, is it that group, and what does this group tend to do? Cole Sodja:And one of the challenges in that is, is sparsity. Basically, we don't have a lot of labels sitting around out there saying, it's threat actor group A, B, C, D, and so on. We have handfuls of those. Some of these actors, they don't tend to do attacks very frequently, right? They're extremely sparse. So, so one challenge of this, and one the motivation is, how can we actually partner with threat intelligence, for example, and our threat hunters, to try and essentially encode or extract some of their information to help us build models, to help us reason over the uncertainty, essentially. 
Cole Sodja:And when we say probabilistic modeling, that's what we mean. It's how do we actually quantify this uncertainty, both in what we believe about the actors, or the adversaries in general, as well as what they're gonna do, right, once they've breached your network. So that's kinda how it started, and what this blog's really about is kinda giving a walk-through, essentially, of what we did initially with this research. It started with, and Justin will talk about this in a moment, it started with looking at few, select threat actors that are very serious. Cole Sodja:We started to understand their behaviors more and more and we thought it was a good opportunity, initially, to try and build a model to, again, understand what they're doing, track what they're doing, because they do change their tactics over time, as well as just see if we could get ahead of them. Can we actually notify a customer in advance, before, uh, for example, their organization's ransomed? So, so that's one part of the blog that we'll discuss, and I'll hand it over to my good friend Justin to take it from here.Justin Carroll:So, like, one of the, the main challenges that we kinda face in the intelligence sphere is understanding the particulars of an actor and when they are present in an environment. A lot of times, you'll see the intelligence is really focused on a very particular indicator such as, like, a known IP address that's malicious, or a single behavior. But it's kinda difficult to frequently pivot them out to understand when a suspected attacker is in an environment. A lot of that is due because they don't always do the exact same behaviors when they are compromising... Organization or device. There will be some variation and it basically requires manual enrichment a lot of the times of devices to try and understand the specifics of the attacks and what Justin Carroll:... the final outcomes o- wh- out of that attack, so this opportunity presented one to work with data scientists to, like, really supercharge our efforts so that we could kinda come in understanding a much bigger picture and knowing, essentially, what behaviors that we saw occur and then which ones we might suspect. A lot of times with these human operated ransomware ones, the time to alert, to notify of the expected outcome is often fairly short, in particular with, uh, one of the ones that we worked on to kinda test this method out. We had seen very short instances from time to compromise to ransom, so, um, this was to try and see if we could have a, a highly confident method of enriching that intelligence, um, and then working with other teams to get those alerts out.Natalia Godyla:If I could jump in here for a moment. So, at the beginning of your description, you noted that typically you'd use manual enrichment. Can you talk a little bit about that? 
So prior to this probabilistic model, how did you go through that manual enrichment process to try to, uh, predict what threat actors they were or determine what stage of an attack it was?Justin Carroll:It would be something along the lines of, let's say, you had intelligence from either a partner team or open source intelligence that says, you know, "These threat actors are using this IP address as part of their attack," and then looking for the presence of that and then finding out what actually occurred on those devices to understand the entirety of the attack, or looking more generically and saying, like, "Okay, we know these attackers like to use a particular behavior as part of their credential theft," and then so looking for all sorts of instances of that credential theft and then kinda continuing to pivot down into one that is leading to the behavior that y- you're looking for. One of the difficulties that you'll see in particular with this and other actors is, like, they will use multiple shared open source tools and payloads. Um, many of them aren't even malware, they're clean tools with legitimate purposes, so it can make it difficult to try and suss out the ones from malicious versus administrative use, so you have to look for that combination of different behaviors to indicate something malicious is afoot.Nic Fillingham:Justin, if I look at the blog, I think it might be the first chapter here, there's a MITRE ATT&CK framework diagram, Figure One, and it, uh, outlines sort of the steps taken here for how this model was able to, with high confidence, identify the, the actor and, uh, send an alert to the customer who was able to shut it down. I wonder if you could sort of, could you walk us through this, these sort of six steps as an example of, of how this work, how this worked in, in sort of real life?Justin Carroll:Yeah. I can walk through basically from a model's perspective, essentially, how it works. Timing, that's more a function of, like, how the attack, uh, typically progresses with this actor. Technically speaking, what the model's really doing is it's encoding each behavior we have, in this case, each MITRE technique in particular in terms of what's the confidence that once we see, for example, initial access follow... Under, let's say, RDP brute force, followed by lateral tool transfer with subset of tools recognized, that particular sequence right there, that's where the model would be like, "Okay, the probability that it's this particular threat actor group conditional on those two things occurring in sequence will be X," and that sequence could occur in a matter of minutes or even days and weeks, dependent on the actor, of course, we're talking about. Justin Carroll:With the, the actor we're showing in this graph, this actor typically will penetrate a network through RDP brute force, but then w- sometimes the, they won't immediately transfer their tools. 
They might wait a day or two, or sometimes they'll, they'll do it very fast, like, once they basically compromise a log-in then, uh, they'll, they'll go to that machine, there might be some, um, discovery related commands before they transfer or they might just transfer their tools and then that will be the attack box, basically, in which they stage their attack, and then they'll do some additional things.Justin Carroll:So at each step, basically, or each stage of the attack, as we like to call it, the model is basically gonna then update its probabilities and say, "Okay, based on all the information I've seen up to this stage, the probability that it's this actor is P and now, conditional that it's this actor with probability P, the probability that we'll now see, for example, defense evasion and this 'tack will be Q," or, or we could even go further in the attack stage to say, "Now, given all this, what's the probability that we'll see, for example, ransomware or inhibit system recovery in the coming hour? Or in the coming, you know, X time?" Justin Carroll:So the model's able to do that, but it's primarily conditional on the stages it's observed up to a point in time, not so much in terms of the time it takes for the actors to do X.Natalia Godyla:So, in this blog and in our discussion today, we're gearing up to talk about probabilistic graphical modeling as a way to address the challenge that, Cole and Justin, you've set up for us today, and, and for any of our listeners who'd like to follow along in the blog, the blog is titled "Automating threat actor tracking: Understanding attacker behavior for intelligence and contextual alerting" and you can find it on the Microsoft Security blog. I'd love to dive into the probabilistic graphical modeling and perhaps start with a definition of what that means. So, M- Melissa, could you give us an overview of this approach?Melissa Turcotte:Yeah. We have this problem which what they are essentially saying is, we have a collection of things which... I'm a statistician so I often call them variables, but, you know, features, if you will, if that's m- more easy for you to understand, but we, th- these TTPs, th- right. The sets of things that the actors are doing, and we have a collection of them. And given some collection of these, we wanna make a statement about whether or not it's ransomware or whether it's not a specific threat actor, or a group of actors. Right? And this is, this is, like, a perfect, um, example of where probability can help you make these decision, and one thing I'd like to stress is that no one of these features gives you enough information about whether or not it's this actor or this, this group of actors, or it's ransomware, you know, whatever your variable interest is.Melissa Turcotte:It really is the collection of these together that, you know, kind of in Justin's mind, as an analyst, he's, he's making these connections in his head, and I wanna be able to replicate that in some sense, I wanna take into account his knowledge and kind of his decision making process, combined with the data that I have, to make these probabilistic statements about what I think is happening. And graphical models are really great here, probabilistic graphical models in particular, as they kind of provide this joint probability distribution over all these features, and the variable of interest, in this case, is kind of, maybe is it this actor, but not necessarily. I mainly wanna know something about any one of these other features. 
I may already know it's this actor, and I may wanna be like, "Wh- what are the common things I see this actor do?"Melissa Turcotte:So, so graphical models really shine in this case where you have this collection of things that you are observing, and you kind of want to ask questions about any subset of them. Given some observations of others, and so th- this is a really great tool to use in this setting, and it's also quite interpretable. So if you kind of look, if you're looking at the blog and you see this Figure Two, which is a toy example, but y- you kind of, as a human, you can look at that and you can kind of understand that, "Okay, so I'm seeing transfer tools and lateral movement are related." Um, and you can kind of understand sort of wh- what the relationships the model is making. Um, and so that kind of provides this extra, you know, benefit of this in that, yeah, I can talk an analyst through what this kind of is showing and then i- it's quite interpretable for them even if they don't understand the underlying maths, and that's kind of something we really wanna strive for. Um, you shouldn't have to understand the underlying maths to kind of understand the decisions that are being made.Melissa Turcotte:It's really attractive in this sense, and then the Bayesian networks, why I really like it is kind of, the Bayesian paradigm is... So you, you have, you know, statistics, generally, or data science, you have some data and you're kind of, you know, making inference given the set of data to make statements about things of interest. So the data tells you something about your beliefs and the state of the world, but you have your own subjective beliefs about wh- what you think could and could not happen. The, the Bayesian paradigm kind of combines those two things, so it's, you have your beliefs and then you have what the data is telling you, a- and your ultimate kind of predictions are based on the combination of those things. And generally, the, the way it works is the more data you have, the data will always win through.Melissa Turcotte:So this problem, bringing it back to attacker prediction, is a case where we don't have a lot of data, right? We don't... Companies get attacked... Or we say, companies get attacked all the time but not at the scale at which we collect the underlying data, so like, you know, we have, you know, you as a user are performing actions, logging into computers you use... You know, this shows up in the data thousands of times a day, whereas an attack happens kind of, like, on a monthly scale, so c- the scales of attacks to the data we're getting is just really small, and then when you go into attacks that kind of we've labeled as being attributed to a threat actor, I mean, that's even way smaller. So it's, it's kind of a small data problem, uh, in terms of the number of labels you have.Melissa Turcotte:But what we do have is this analysts who have spent years tracking these people and have their kind of, you know, beliefs about what they do and how they changed over time. And so we Melissa Turcotte:Wanna capture that. We definitely want to include the evidence we see and the data, but we wanna capture that really rich knowledge that we get from the analysts. 
And so kind of that's where the Bayesian network part becomes attractive because it, it provides a very principled way to, to capture the analysts' expertise, combine that information with the data we're seeing to make these ultimate predictions.Natalia Godyla:For our audience, could you really quickly describe a Bayesian network?Melissa Turcotte:So, a Bayesian network is a way of building a model for a collection of variables whereby the idea is that you have different variables which are related to each other. It, it, it kind of helps draw out or show what those relationships are so, like, in the graph, you know, if there's an arrow from impact... Or from transfer tools to impact that's saying if I see transfer tools, that has a direct impact... I'm gonna use the word impact twice here. Has a direct impact on whether or not I'm going to see impact. So, so it's kind of the way the variables relate to each other and the way the probabilities change according to those relationships. And so a Bayesian network encodes all this information. Nic Fillingham:If I can take another swing at that one... Thank you, Melissa. I'm wondering what were some of the other, uh, techniques that you either considered for this approach? Like, did you experiment with other methods and then ultimately chose Bayesian?Melissa Turcotte:Yes, um, in fact, uh, so the initial kind of... The perhaps most obvious thing to do is to c- to think of decision trees, right? You s- you're, you're, you're seeing, you know, these things over time. Okay, I saw, um, what was the first one? Initial access with this... You don't go as broad as initial access, but I saw initial access using this, you know, minor technique. And so you can kind of think, like, you, you, you have a tree that's kind of... Okay, I saw this, I didn't see this, but I saw this and I didn't see this, so now I think it's this actor. But kind of where this is preferable is the fact that, as Paul says, we don't want to see the whole attack happen before we make a statement about what we think it is. And Bayesian networks work really well in, in the absence of some observed variables. Cole Sodja:Yeah, I'll just quickly chime in. I agree with Melissa. So, I did experiments, for example, with several models including decision trees. Even, um, different forms of Bayesian decision trees like BART for example. And in addition to what Melissa is saying where, for example, predicting the probability that it's threat actor conditioned on certain variables we saw, uh, we might also, as Melissa pointed out, want to say, okay, let's predict, for example, that this threat actor is going to do impact or a certain form of impact. And with decision trees, that means basically you're building multiple decision trees to do that. You can't just build one decision tree... Well, let's put it this way. You can't easily build one decision tree to have multiple target variables. That's something you get for free with the Bayesian network. Another thing I'll say in addition to what, um... To marginalization is the Bayesian network is more general. So, it could actually handle kind of a broader graphical structure. The decision tree is a specific graph. Cole Sodja:So, it kind of already inhibits you, if you will, to learning a certain structure over the data. Whereas the Bayesian nets, they could give you a little more general structure. We could also build these models that are time dependent, what are called dynamic Bayesian networks. That's something much harder to do with tree models. 
So, it's just a more flexible model as well as I would say. In my experiments, the Bayesian network did perform better on average than the set of decision trees I considered.Nic Fillingham:I'd like to better understand the relationship between this model and folks like Justin. So, is Justin, as a very experienced threat analyst, is Justin helping you define labels and helping you sort of build some of the initial... I'm, gonna get the taxonomy wrong here, so please correct me. But the initial sort of properties of the model? Or is, is Justin, as an analyst, interpreting what you sort of think you have in the model? How, how do I understand the relationship between the analyst and, and how they're providing their expertise into, into this model?Melissa Turcotte:All three.Nic Fillingham:Oh, great. (laughs)Melissa Turcotte:All three things you said is actually correct. So, so hopefully we, we've explained it somewhat well. So, yes. The first stage, right Justin? The analysts are providing us our label data. So, yes. That's the first thing. And then they also help us kind of, you know, you have the raw data, but that's kind of... There's so much data processing that goes... That, that happens before it's kind of... This data's kind of in this tabular forms that's like, yes, we... You know, these are the features we are tracking, so think of your TTPs, the different notes in your graph. Getting the data into that, kind of that schema, the threat analysts help with. So, you know, help define what, what these tactics, techniques, and procedures are that we should track. Like you said, you, you can't be super broad. Lateral movement doesn't really have a lot of meaning, um, to kind of like the different ways in which someone can do lateral movement and how granular w- you want to go. Melissa Turcotte:So, we discuss with the analysts all the time to kind of build up, you know, the ontology, if you will. And then, you know, as a first stage, like I said, it's a small data sample, so we're like... Justin helps inform what the model thinks about in a probabilistic sense. So, you... One thing I might ask him, I, I would be like... If I saw net... you know I'm borrowing from our toy example, but if I saw a network scanning modify system process, transfer tools, but didn't see any of the others, do you think it would be this actor X? Or do you think it would be ransomware? And he would be like, hmm, I would probably 60% certain. I can take that information and encode that directly so that, in the absence of any data, the model would return 60%. It would... If I didn't see any data, it would return what Justin believed was the probability in the presence of a certain number of variables. Melissa Turcotte:And then we kind of see data and we update our beliefs over time based on that. And then, also, after we've kind of trained these things, I go back to Justin and say does this make sense to you? So, he, he's kind of involved in all three, the whole process.Nic Fillingham:Melissa, I think you're telling me you've built a virtual Justin. Melissa Turcotte:We... That, that is what we are literally trying to do. And back it up... And, you know, and back it up with data as well. I'd, I'd like to like... You know, I'm a firm believer that everyone has their subjective beliefs, Justin has beliefs as well. Oftentimes, I can prove analysts wrong. Be like, they think something, I'm like, well, the data is telling me something else. So, we need to figure out, you know, that discrepancy. But, yes. 
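A rough illustration of the kind of Bayesian network being discussed is sketched below, using the open-source pgmpy library (assumed to be installed; older releases name the class BayesianModel). The three-node structure, the 5% prior, and every probability table are invented stand-ins for the analyst-elicited values described above, not the production model's parameters.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy structure: the suspected actor influences whether we see tool transfer,
# and both influence whether we see impact (e.g. ransomware).
model = BayesianNetwork([
    ("actor", "transfer_tools"),
    ("actor", "impact"),
    ("transfer_tools", "impact"),
])

# Prior encoding an (invented) analyst belief: 5% of such intrusions are actor X.
cpd_actor = TabularCPD("actor", 2, [[0.95], [0.05]])  # state 0 = other, 1 = actor X

# P(transfer_tools | actor) -- columns are actor = 0, 1.
cpd_tools = TabularCPD(
    "transfer_tools", 2,
    [[0.98, 0.30],   # transfer_tools = 0
     [0.02, 0.70]],  # transfer_tools = 1
    evidence=["actor"], evidence_card=[2],
)

# P(impact | actor, transfer_tools) -- columns: (actor, transfer_tools) = (0,0), (0,1), (1,0), (1,1).
cpd_impact = TabularCPD(
    "impact", 2,
    [[0.99, 0.90, 0.80, 0.20],   # impact = 0
     [0.01, 0.10, 0.20, 0.80]],  # impact = 1
    evidence=["actor", "transfer_tools"], evidence_card=[2, 2],
)

model.add_cpds(cpd_actor, cpd_tools, cpd_impact)
model.check_model()

infer = VariableElimination(model)
# Query with only partial evidence, and target either the actor or a downstream technique.
print(infer.query(["actor"], evidence={"transfer_tools": 1}))
print(infer.query(["impact"], evidence={"transfer_tools": 1}))
```

The point of the sketch is the querying pattern: the same network answers "which actor is this?" and "how likely is impact next?" without requiring every variable to be observed first, which is the advantage over a single decision tree that Cole describes.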
We are essentially trying to build virtual Jus- uh, Justins. Although, like, th- there... I don't think there's any stage upon which we won't need the analysts to constantly feed back in with the new information they have. Nic Fillingham:Got it. And then can it come full circle? Justin, how do you as an analyst, how do you get smarter and better at what you do by what this model is, is telling you? What's the feedback loop look like here for you?Justin Carroll:It's one of those where, basically, using the model kind of super-charged my abilities where, instead of having to look at this very granular kind of like ad hoc, oh, this may be interesting, now I have the instances already serviced to me, and I have a good understanding of what success rate through the kill chain the attacker was able to get. And maybe figure out which ones that I needed to enrich more to understand was there data that we can add into the model because they've done something different that we need to capture and then look for opportunities in that way. So, really, it's basically... It made it where, give or take, sometimes it would take anywhere from 10 to 20 minutes sometimes to try and figure out, like, is this who I think it is? And like, what have they done? What are their goals? To just looking at the result from the model. And within usually seconds, being like, yeah, that looks exactly right. That's... It's confirmed, I think that's spot on. Natalia Godyla:So, Justin, was there something that was the most surprising in working with this model? Something that the model taught you either about threat actors or any details about the features? Justin Carroll:One of the things was kind of reexamining My confidence levels on different parts of the attack. Um, where Melissa was stating, for instance, you know, the data suggesting this and the models coming to this conclusion, uh, you know, thinking that it's this probability, and there would be times where I'd have to kind of reevaluate and think, like, hmm, I might've been missing something or overestimating the prevalence of a particular thing and saying it's related to such. Like, uh, I can tend to get very biased based on my narrow scope of the attacks that I'm looking at and think that it's related to this thing, but the model was able to provide a lot of clarity to some of the behaviors that maybe I didn't think were as confident a signal or extremely confident signal and I wasn't giving them the appropriate weight. That's one of the advantages of using it to understand what the attacker's doing, is I let it do much of the leg work once everything's kind of coded in. And then occasionally, like if we found opportunities where it was like, hmm, this still isn't quite right, then it could be tuned as a c- um, as necessary. Justin Carroll:I think that was probably one of the biggest ones of kind of trying to work through and actually spell out, like, my own thinking processes when I'm evaluating the data. It was something that you just kind of do without thinking, where you're constantly, as an intelligence analyst, looking at data and making conclusions on that data. But you're not usually saying, like, okay, I saw this so I'm gonna give it a 60% probability that it's this. And like, you're, you're just kind of sometimes it's either gut intuition or working on it that way. 
But actually having the model encode and return back what it was understanding made a, a pretty big impact in trying to understand how my own decision processes work and basically how best to kind of think Justin Carroll:About these different, wide array of attacks that we're constantly investigating.Nic Fillingham:The types of indicators that you're building this model on, again please correct me on my taxonomy here, but you're not looking for, you know, NFO files or like ASCII art or, you know, the actual threat actors name being sort of hidden somewhere in the jpeg that they drop as a, as a for the LOLs, like, they're... You're not looking for a sort of a literal signature of these threat actor groups, you're, you're, what you're, what you're doing is you're, you're seeing the actions that have been taken and without any other way of attributing them to an individual group, you're piecing them together. Nic Fillingham:And as you, as you get more actions and you piece them together based on the, the labels that you get from people like Justin, you're able to, to ultimately have a high probability that it's this threat group actor and they're doing this thing and they're likely to do this thing next. Have I got that right? You're, they're... In no way shape or form are you actually finding a secret text file that has the name, you know, the, the, the handles for all the hackers who are doing it for the LOLs.Cole Sodja:So let me just quickly jump in, you pretty much nailed it. I'll say this, so, we wanted to do both actually, right, because we don't want to restrain the model if it's, if core's gonna add predictive power, so like you said, we're not actually searching, grepping for example, for a threat actor name and some file or image, certainly not that level. But, for example, some of the actors, maybe they have common infrastructure, maybe they use particular types of tools in their attack typically, right? Like, maybe there's a SHA-1 out there they've used a lot in their attack, or, or recurring IP addresses they use as part of brute forcing. Cole Sodja:Those are there, but those are very specific and if you just relied on those, like Melissa was saying, either one or a few of those, you're not gonna generalize. You'll probably miss that attacker, right? But we certainly don't want to exclude it from the model because, um, if we happen to see that, the model will, uh, come back with a different type of probability, right? It'd be like, okay. Now the model might be more confident early, rather than waiting to see how the rest of the kill chain progresses. On the more general side, we probably won't go to the MITRE categories, 'cause they're a little too general, right? But if we go to some of the sub techniques, we don't actually have to look at the particular types of executables, or tools, or IPs used. Cole Sodja:Sometimes just the timing and sequencing is enough actually, to narrow down to, maybe not a particular threat actor, but a group of actors or, more generally, we can say with high competence, you know, this is a human adversary. They're taking this amount of time to do discovery commands, they're, they're doing lateral these type of ways. And the model could recognize that, even without knowing the particular commands, it's just seeing the more general techniques involved, right? So we do a bit of both, actually. We tend to want to rely more on, kind of, the general attacks or indicators as you're saying, that's right. 
But, we certainly don't want to throw away specifics that are reuse because we could get ahead of the attack much earlier too. So it's a bit of both at the end of the day.Melissa Turcotte:So yes, Nic, if, if, if you have an evil bit, look for the evil bit. You don't need data science for that. Nic Fillingham:(laughs)Natalia Godyla:And how is this model being used today, meaning is this a model that's being used by our internal security team to protect Microsoft and its customers, is it being used by a Microsoft threat experts group or is this actually embedded in some of our solutions today, and our customers are feeling that benefit? And what is the future intent of the model?Justin Carroll:One of those... So, there are multiple uses that are in place for the model. So one of the big things for me, so in my own selfish interest, it's intelligence, it's one of the easiest ways that I can keep tabs on the attacker and continually build new profiles and understand, basically, reports out, this is what they're doing, this is how they're doing it, this is how active they are. Like, are we seeing, you know, large volumes of their attack, are they taking a break, that kinda stuff. Then, the Microsoft threat experts are using it as a signal to help understand attacks early on in the kill chain so that they can get those notifications out ideally before the ransom, which can be quite difficult a lot of the times depending on the adversary and how quickly they seek to ransom. A lot of times there isn't a great deal of time.Cole Sodja:Yeah, there's other products, for example, M365D. So, um, there are plans, uh, it requires some engineering, ultimately, because this is a big product, um, huge customer base and so on. But there are already plans in motion to take what we've built already, as part of this framework, and integrate that into that product. There's other products as well, both from a threat intelligence perspective, and possibly kind of from SOC alerting perspective as well, that I'm in active discussions with other products across Microsoft to do the POC, make sure it works with their data, make sure they're comfortable and then work with their engineering team to at least get that in the plan. Those are ongoing discussion but M365D does have, kinda, I'll say, in their planning cycle, to get this in the product. Nic Fillingham:I wonder if this might be a good time to bring our secret special guest on microphone, Josh, if you're there, I think I might ask, uh, might wonder if you could jump in on this one. I think you've understated the power of what you've built here. From everything that you've just explained, you know, within a couple of minutes of a threat actor getting initial access to have a high probability index to be able to contact the customer and say, here's who we think is inside your network, here's what we think they're gonna do next, so they can shut it down. This is the next level, right? And, and Josh, when we interviewed you on episode three, you were hinting at this, if I'm not mistaken. Is this, is this sort of what you guys have been working on?Joshua Neil:Yeah, I'm so proud that we, that we took it from concept to realized value for the customers and, and at this point we've had that impact with your customers in stopping human operations. And, and so it's really exciting and, and it's, it's on the journey but, you know, if I extract an overall theme from this, it's consistent with that podcast that we had before because I was sort of complaining about AI. 
And I was sort of complaining about what we see in some of the, in some of the branding and marketing that, that folks do in, in cyber security. And I think this team and, and the work they've done exemplifies the right applications of data driven methods. Joshua Neil:There is no magical, artificial intelligence today. What there is is, and this is a, an experience that all of us on the data science team have had over the, over the past few years, and really for me about 20 years, is we can use data and some mathematics and some computing to begin to automate and accelerate what the humans are doing. And so, by sitting very closely with, and working very hard with the human experts like Justin, we're explicitly encoding their knowledge into models. So that's one thing is that the data science we're doing is to automate some of the stuff they're doing today. But the intention is not to solve the world, not to give our customers a license to solve security, we're, we're not gonna be able to do that. What we are able to do is uplift the sophistication of our customers operations. Joshua Neil:So, you know, what Justin sort of reflected on, uh, he's able to do a more interesting job, a more sophisticated job, because we're taking the data and his knowledge and encoding it and accelerating and automating some of the stuff that he's having to do manually now. And that's where the real nuts and bolts, you know, and the real rubber meets the road here, is that there's no magic gun that's gonna blow away all the adversaries with, with AI. What there is is hard work between data scientists and threat expertise to uplift their capabilities and accelerate their effectiveness in the face of the adversary. And that's what I would like to get across to the, to the listeners, is that by hard work and careful and close collaboration between data science and threat expertise, that's how we really make progress in this space.Nic Fillingham:Thank you so much Josh. And I just wanted to quickly clarify, from a previous comment from Cole, so this model is in use now, correct? Folks like Justin, Microsoft threat analysts, they are using this model now to make the model better, and to be able to get that additional information and those confidence levels in, in, in doing their analyst work. And so Microsoft threat expert customers are directly benefiting from this work, as of today. That's correct, is it?Joshua Neil:That's correct. We've sent targeted attack notifications to customers based on this model.Nic Fillingham:You've all been very, very, generous. Natalia Godyla:Thank you for that. And, and thank you to the whole team here for joining us on the show today. Melissa Turcotte:Absolutely.Cole Sodja:My pleasure.Joshua Neil:It was a lot of fun as always. And, and thank you, Nic and Natalia for this.Natalia Godyla:Well, we had a great time unlocking insights into security, from research to artificial intelligence. Keep an eye out for our next episode.Nic Fillingham:And don't forget to tweet us at MSFTSecurity or email us at securityunlocked@microsoft.com with topics you'd like to hear on future episode. Until then, stay safe...Natalia Godyla:Stay secure.
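As a rough illustration of the forward-looking question raised in this conversation ("what's the probability we'll see ransomware or inhibit system recovery in the coming hour?"), the sketch below treats the intrusion as a small chain of stages with invented transition probabilities. The real system uses a richer Bayesian network conditioned on the suspected actor; this only shows the shape of the computation.

```python
# Toy transition probabilities for a single suspected actor -- illustrative values only.
transitions = {
    "lateral_movement": {"discovery": 0.5, "impact": 0.3, "dormant": 0.2},
    "discovery":        {"impact": 0.4, "dormant": 0.6},
    "dormant":          {"dormant": 1.0},
    "impact":           {"impact": 1.0},   # absorbing state, e.g. ransomware deployed
}

def prob_impact_within(stage, steps):
    """Probability of reaching 'impact' within `steps` further transitions."""
    if stage == "impact":
        return 1.0
    if steps == 0:
        return 0.0
    return sum(p * prob_impact_within(nxt, steps - 1)
               for nxt, p in transitions[stage].items())

# After observing initial access followed by lateral movement:
print(prob_impact_within("lateral_movement", steps=2))
```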
4/21/2021

Below the OS: UEFI Scanning in Defender

Ep. 24
All of us have seen – or at least, are familiar with – the antics of Tom and Jerry or Road Runner and Wile E. Coyote. In each one the coyote or the cat set up these elaborate plans to sabotage their foe, but time and time again, the nimble mouse and the speedy bird are able to outsmart their attackers. In our third episode discussing Ensuring Firmware Security, hosts Nic Fillingham and Natalia Godyla speak with Shweta Jha and Gowtham Reddy about developing the tools that allow them to stay one step ahead of cybercriminals in the cat & mouse game that is cyber security. In this Episode You Will Learn: • The new capabilities within Microsoft Defender to scan the Unified Extensible Firmware Interface (UEFI) • How the LoJax attack compromised UEFI firmware • How UEFI scanning emerged as a capability Some Questions that We Ask: • Has UEFI scanning always been possible? • What types of signals is UEFI scanning searching for? • What are the ways bad actors may adjust to avoid UEFI scanning? Resources: Shweta Jha’s LinkedIn: https://www.linkedin.com/in/jhashweta/ Gowtham Reddy’s LinkedIn: https://www.linkedin.com/in/gowtham-animi/ Defender Blog Post: https://www.microsoft.com/security/blog/2020/06/17/uefi-scanner-brings-microsoft-defender-atp-protection-to-a-new-level/ Nic Fillingham’s LinkedIn: https://www.linkedin.com/in/nicfill/ Natalia Godyla’s LinkedIn: https://www.linkedin.com/in/nataliagodyla/ Transcript [Full transcript can be found at https://aka.ms/SecurityUnlockedEp24] Nic Fillingham:Hello, and welcome to Security Unlocked, a new podcast from Microsoft where we unlock insights from the latest in news and research from across Microsoft's security, engineering, and operations teams. I'm Nic Fillingham-Natalia Godyla:And I'm Natalia Godyla. In each episode we'll discuss the latest stories from Microsoft Security, deep dive into the newest threat intel, research and data science.Nic Fillingham:And profile some of the fascinating people working on artificial intelligence in Microsoft Security.Natalia Godyla:And now, let's unlock the pod.Natalia Godyla:Hello Nic. Welcome to Episode 24. How's it going with you today?Nic Fillingham:Going well, thank you, Natalia. Yes, uh, welcome to you and welcome to listeners to Episode 24 of Security Unlocked. On today's podcast, we speak with Shweta Jha and Gowtham Reddy from the Microsoft Defender for Endpoint engineering team about capabilities in MDE to scan down into the UEFI layer. Now this is the third of three conversations we have that started back in Episode 11 with Nazmus Sakib where we talked about secure core PCs and, and firmware integrity. Then in Episode 14 we spoke with Peter Waxman about the Pluton processor and some of the new work that's happening there to embed more security tech into sorta silicon onto the actual CPU die itself. And today we're sort of rounding that conversation out with Shweta and Gowtham to talk about how Microsoft Defender for Endpoint can now scan down or can scan down into the UEFI layer. You're gonna hear a bunch of jargon, a bunch of technical terms like, I guess, UEFI. That's, we, we could start there.Natalia Godyla:Yes. And UEFI is the Unified Extensible Firmware Interface, so it is the software interface that lies between an operating system and firmware, and is an evolution of BIOS. And we'll also talk about MosaicRegressor which, for those of you that don't know, is the second ever UEFI rootkit which was discovered in 2020, but was used in an attack against NGOs in 2019. And, and for me, the timeline is shocking, second ever in the past year. 
Normally we hear about the continuous increase of a certain type of attack over the years, and here we're just at the second ever.Nic Fillingham:Yeah. It's a real interesting part of the conversation where we talk about the history of BIOS attacks, firmware attacks, UEFI attacks, and to learn that this has been sort of traditionally a pretty challenging area for attackers to, to breech and compromise. But, you know, Shweta and Gowtham have been, you know, very much ahead of the curve and, and being ahead of, of attackers in, in being able to develop these new capabilities to, from the operating system, scan down to the UEFI layer and look for malware, look for compromise. And it's a, it's a fascinating conversation. Again, it's sort of a completion of three episodes starting with Episode 11 and 14. So if you haven't listened to those, I recommend you add them to the queue. But I guess on with the pod.Natalia Godyla:On with the pod.Nic Fillingham:Welcome to the Security Unlocked podcast. Shweta Jha and Gowtham Reddy, welcome both of you. Thanks for being here.Gowtham Reddy:Thank you.Shweta Jha:Thank you so much for having us. We're so very excited.Nic Fillingham:I'm very excited, too. Now this is gonna be the third conversation in a sort of a mini series that we're running here on the podcast. We started with Nazmus Sakib who introduced us to the idea of secure core PCs and we talked about some of the challenges of firmware integrity and keeping firmware safe. Then we spoke Peter Waxman in another episode to learn about Pluton, the history of, of that technology and sort of what's coming for the Pluton processor. And today we're actually gonna talk about some new capabilities, or newish as of 2020, in Defender to scan down into the UEFI layer. Before we jump into to all that, let's just do some introductions for the audience. Shweta if we could start with you. Who are you? What is your role? What do you do day-to-day at Microsoft? Tell us, what you like the audience to know about you?Shweta Jha:Absolutely. Thank you, Nic. My name Shweta Jha. I am a program manager with Microsoft Defender for Endpoints, and I've been building security solutions features and products, and I'm super excited about it because security is the need today for our, uh, customers. And a few of the features that I built with my team were part of anti-tampering. Investment that we did, EDR block as part of be able to blocking and containment. And then we are gonna talk a lot about UEFI scanner. So pretty much around building security solution and features in this team and helping our customers.Nic Fillingham:Fantastic. And, and Gowtham, welcome to the podcast. If you could also introduce yourself. Uh, tell us about your role. What does your day-to-day look like?Gowtham Reddy:Hi. This is Gowtham Reddy. I'm an engineering manager in Microsoft Defender, uh, Endpoint. So before engineering manager, so I was working as an engineer in the same team for last six years. So I work on, uh, many of the rootkit technologies, the Defender, uh, has and, uh, the remediation technologies to remediate many of the malwares that are present on the system. I have been where I working on this fantastic team, developing like durable protection features that were, and compliment the ever changing malware fields.Nic Fillingham:That's great. So, again, welcome to both of you. Thanks for your time. One of the things we do on the, uh, Security Unlocked podcast here is we, we don't necessarily cover the latest announcements. 
We, we sort of look back over the last sort of three to six months for interesting sort of technology, interesting advancements in, in security technology, and we bring experts on to, to talk about these new features and capabilities after them sort of being in the wild. Today we're talking about the UEFI scanning capabilities that are in Microsoft Defender, and there's a blog post that, that both of you helped author back in, in June of 2020, which feels like a decade ago, but I guess it's more like six or seven months. So I wondered if one of you might be able to just walk us through. What was that announcement made in that blog post? What was sort of the news? And then I think maybe if the other one or maybe just following, I'll, I'll leave it to how we, how we split this up. But what was announced back in June? And sort of what's happened since then? How have those new capabilities sort of rolled out and what are we seeing with customers actually using them?Shweta Jha:So I, I guess I can get us started, and then I'll hand it over to Gowtham definitely to talk more on the technical details and the, the attacks that we see in the wild, and that's why we kind of built this UEFI scanner. So as you understand, this is a journey, right? We build layered defense into our security solutions. And when we build any security solution, we need to make sure that we take a holistic approach. So if you look at the operating system level of security solutions, we've been getting pretty great at operating system level security solutions. And it's not only Microsoft. If you see other security providers as well, they have been doing great, too.Shweta Jha:So what does that mean? It means that because the operating system level security solution is really great, it does make it difficult for attackers to go undetected at that level. It's a constant battle, so they have been looking into other means where they can go into the system undetected, and that's where if you look at the data, you would find that in the recent past, attacks at the hardware and firmware level have been on the rise. So we built the UEFI scanner keeping in mind that we should be able to detect those types of attacks, because those types of attacks are not only very dangerous, but often times they are not detected. They persist even if you reboot the system. So the nature of these types of attacks is very dangerous, and keeping that in mind, we decided to build the UEFI scanner.Gowtham Reddy:So I can add like why we did, uh, build the UEFI scanner. So because of the operating system security features that Microsoft is constantly working on, the bad guys are trying to go down and down in the layered architecture. And so some of the threats moved onto the BIOS, tampering the BIOS and, uh, tampering the MBR, the master boot record, and, uh, VBR based bootkits. So Defender has evolved into that space of scanning the MBR and, uh, detecting the bootkits at boot time. Gowtham Reddy:So as a logical evolution the bad guys are, uh, moving from the stage of the kernel to the MBR, and from the MBR to the UEFI. So we were anticipating that this kind of evolution is quite possible and the UEFI implants were not very far. So that's the time we found the first UEFI implant called LoJax. So that was a triggering point when we completely committed ourselves to expand our rootkit technology, to detect any kind of rootkit presence in the UEFI. So that was our core idea of expanding our rootkit detection to the layer much below the operating system. 
So there were some challenges Natalia Godyla:If you don't mind me jumping in, I had a question around that. So...Gowtham Reddy:Mm-hmm (affirmative)Natalia Godyla:... the way you're framing it is that when we started to notice the threat landscape moved to this layer, we decided to invest in this type of technology. What about the technology itself? Had there always been this opportunity to tackle UEFI scanning, or is there something new that we're leveraging in order to solve this problem now that might not have been around beforehand? Gowtham Reddy:That's a good question. So there was always a chance to exploit the UEFI, but it's about the timing of the attackers to target this space, because the rest of the platform and ecosystem is getting more and more secure. So the UEFI is not new. So it was there a decade ago, but the implants are new because of the advances in the operating system. Nic Fillingham:So Gowtham, tell us about the LoJax attack that happened. Was it the first, or was it one of the first detected compromises of the UEFI firmware? Can you tell us some more about, about that? If folks aren't familiar with it like me? Gowtham Reddy:Mm-hmm (affirmative). So there was definitely some theoretical, researcher-driven work before LoJax, but LoJax is the first known exploitation instance where we know we found it in the wild. It is quite possible even before that a UEFI implant was demonstrated in many of the Black Hat conferences, but those are theoretical in nature. So the researchers had access to the device and they demonstrated it. But LoJax is the one where, from the operating system level, a particular malware, I would say it as a rootkit, tried to intrude from kernel mode into the UEFI, and they have installed a UEFI driver. So just as we consider the operating system as having drivers, even the firmware itself has some drivers. So they were able to install a firmware driver which actually in turn drops another kernel mode driver when the operating system boots up. It's about the boot sequence. Gowtham Reddy:So first the firmware starts running and it initializes all the system, and then it invokes the operating system. So in LoJax's case, after the firmware is completed, it has already dropped the kernel driver onto the operating system, if it is not present. So that means by the end of the firmware sequence, we have the presence of a kernel driver. And when that kernel driver starts, that is when the user mode malware kicks in. So this keeps repeating even after you re-install the OS, even if you change the hard disk, the same pattern will be followed. So that's how the LoJax type works. Nic Fillingham:And I wonder, do we know, what was the breakthrough that made LoJax possible? UEFI has been around for a while. UEFI probably predates LoJax. And obviously before UEFI, there was sort of the more standard sort of BIOS that probably most folks are familiar with. Can we talk a little bit more about how LoJax came about and sort of what maybe changed or what the breakthrough was on the attacker side? Gowtham Reddy:I would say that there were a couple of open source read-write drivers, which have the capability to access the firmware, using a special interface called SPI. SPI stands for Serial Peripheral Interface. So using the Serial Peripheral Interface, any kernel driver can instruct the platform hardware layer to read and write any content in the flash. So I think like many in the security industry know a driver called Read Write Everything, they call it RWE. 
So this is the driver using which anybody can read any offset, any device memory, and write. I think the prevalence of this kind of open source tools might have helped attackers to develop this kind of ecosystem of all the sequence of the malware, the rootkits. Shweta Jha:In addition to what Gowtham said, definitely the work that researchers were doing in this space, it always starts with researchers trying to do something and then attackers trying to find other means. So here are the things. Attackers usually do exploit things that are not done in the right way. So in this case, for example, if there are certain configurations that you need to, or your partner needs to, make sure are in place, for example, read-write settings where you are not providing write access, just read access, and so on. Shweta Jha:So typically in all these types of attacks you would see that misconfigured devices are exploited the most, and that misconfiguration happens at the time when the devices are getting built. So that is another factor why these attacks are very successful, because there are misconfigured devices, because while building the devices, somebody missed configuring it the right way. And if you look at the journey, that's where you have a secure core PC, which is designed to be secure, making sure that the things that are needed to protect the computer against these types of attacks are there from the first day. Natalia Godyla:So my question is about the application of this new technology. So I really appreciate you walking through that attacker workflow. So what type of signals is UEFI scanning looking for? What is it using to enrich the context of the existing endpoint data?Gowtham Reddy:That's a very good question. So basically the level of detail that the UEFI scanner can get is enormous. So this is the area where, like, the Defender has content scanning. So, uh, we have, uh, extended our content scanning to every file that is present inside the firmware. So this helps the Defender research team write any kind of content scanning signatures to detect any bad content. So that means in this case, if research knows of any implant, we have the capability to scan the 600 million devices to know if any of our customers have been impacted with the specified malicious file. Gowtham Reddy:And this is just one part of our UEFI scanner. And the other part of it is detecting any anomalous behavior inside the firmware. For example, in many of the supply chain attacks like Solarigate, it's quite possible that some of the OEMs' channels were compromised and they deliver firmware updates with malicious modules in them. Gowtham Reddy:So in this case, our UEFI scanner collects all the metadata about the new firmware update and we run heavy ML models in our cloud. And that will tell us if there is an unknown anomaly that exists in this particular firmware update, as opposed to a known malware implant. So the UEFI scanner has the two capabilities. One is detecting a known malicious implant, and the other one is detecting an anomalous firmware presence. So in this case, we act both ways. Nic Fillingham:What does an anomaly look like in this context? Gowtham Reddy:Anomalies look like, for example - the firmware is a file system, like a typical drive. The presence of a driver file, probably an HP driver file or an unsigned driver file, on a Dell OEM is considered an anomaly. Because we have trained an ML model on all the known Dell firmwares. 
So any new image with an unexpected file will be immediately flagged. Nic Fillingham:And why is ML the sort of approach you've taken here versus sort of heuristics? I would have thought that there's a pretty limited set of content that could make up sort of firmware and firmware instructions. Obviously, I don't know anything about this space, so I'll caveat that there, but, um, could you talk about why ML versus heuristics versus something else?Gowtham Reddy:In the days of, uh, BIOS, your expectation was right. The BIOS consists of a series of microcode, Gowtham Reddy:... which is, uh, very limited. And, uh, in the context of UEFI, you have a full file system, uh, which has, like, uh, thousands of files; individual files. And, uh, this creates, uh, basically a huge, uh, vector space to scan or to collect the metadata from. Gowtham Reddy:So it's not just a simple collection of microcode. It contains the drivers, it contains the services, it contains a lot of other things. It's a file system like NTFS.Nic Fillingham:Got it. So because UEFI is, as you say, a file system as opposed to... What was BIOS? BIOS was not a file system? BIOS was, uh, sort of a discrete, sort of, low level executable?Gowtham Reddy:Yeah, it is just a sequence of, uh, microcode instructions that will be run on the firmware. So basically, it has a set of microcodes. Nic Fillingham:So the machine learning models that you reference, w- where are they running? Are some of them running locally? Are they all running in the Cloud? Is it a mixture of the two?Gowtham Reddy:They're all running in the Cloud for now. So we have MDATP Cloud services where we run all these, uh, cloud ML models. So our models are really very effective. So recently, we got, uh, a UEFI alert from the, uh, ML model. Apparently, it's a kind of, um, true positive because, um, there was a Microsoft engineer who was working in the hardware space.Gowtham Reddy:So he took, uh, a firmware image, and he kept a developer driver and he flashed it on his own device. And, uh, our UEFI scanner immediately caught it and we... the security administrator got an alert and there was an investigation that happened. So we are pretty ready to catch any kind of such things now.Natalia Godyla:So we all know it's a cat and mouse game with the threat actors. So what is the team anticipating in terms of how the actors will adjust their processes to evade this new UEFI scanning technology?Gowtham Reddy:That's a good question. We're trying to validate something at a lower level of trust, a lower ring than the kernel. So definitely, there is a chance that an attacker can modify the firmware. Uh, he can spoof the content when Defender tries to scan. So this is quite, uh, possible. But we are already working on mitigating that kind of attack. Nic Fillingham:So now that this feature, these capabilities, have been live in the product for, uh, I guess over six months at this point, w- what have you learnt? What have you seen in the telemetry? What have you seen in the types of attacks and, I guess, even sort of false positives that have- have come through from- from this new, uh, capability?Gowtham Reddy:Uh, that's a very good question. So we learnt a lot of things. The UEFI file system had never been scanned before. So we got some false positives on the content that we scan, but we immediately fine-tuned our signatures.Gowtham Reddy:Back in... 
Six months before, when we published the blog, we only knew of the first known UEFI implant, called LoJax. But after we shipped... there was a second implant made public. That's called MosaicRegressor, and our UEFI scanner detected the MosaicRegressor implant well. Uh, the- the telemetry count was small. So we were, uh, able to detect the MosaicRegressor.Nic Fillingham:So in this first six months, as well as the LoJax campaign, uh, what's the taxonomy here? How do we f- refer to it?Gowtham Reddy:Uh, we can consider... W- we are, uh, tracking them as UEFI implant malware or UEFI rootkits. So this is the category we are looking at. So right now, we have, uh, LoJax and we have MosaicRegressor as, uh, two big families in this space.Nic Fillingham:Big families. Got it. Shweta Jha:Yeah, about MosaicRegressor, I wanted to add a little bit more just to complement what, uh, Gowtham mentioned, how powerful this tool is. And how powerful this particular feature is. So if you read through the MosaicRegressor, uh, breach, it was a nationwide targeted attack.Shweta Jha:This was targeted at diplomats. And this attack, as Gowtham described, first they would insert one module. Uh, that one module would go undetected and then that module would try to do other stuff, like try to, uh, get in touch with command and control and get another, uh, module and so on.Shweta Jha:So the entire c- chain is so very interesting. And I'm glad that we built this feature and we were able to detect it, because it's so powerful. Most of the security solutions, they're not able to detect it because they don't have, uh, such great capabilities.Shweta Jha:But look at the way this attack was carried out. It was pretty much targeted, pretty much nationwide for a few countries, originated from one country. So the sophistication level in the nature itself speaks for it, and I'm glad that we, as in our product, we have this capability which can detect those types of attacks even when they are, you know, unknown, first seen.Natalia Godyla:In the process of developing this new technology, where were there false starts? What techniques did you try but didn't work to solve this problem?Shweta Jha:Little bit on the journey, right? We have been working on it. Um, so Gowtham explained about how we have the rootkit, bootkit level and then we went to the UEFI side, and we had to be extremely careful because it's, like, uh, it has a high integrity and high severity of going wrong.Shweta Jha:So we had to be very careful making sure that the running system is not damaged, and at this point, I'll hand it over to Gowtham because he can explain, in detail, each and every piece that we took into consideration to make sure that our customers' devices remain intact. So go ahead Gowtham.Gowtham Reddy:Yeah. Thanks Shweta. So, uh, we have indeed explored, uh, many mechanisms like accessing the PCI space from the operating system itself, which we didn't continue to proceed with because of some of the pushback from the kernel team to update the HAL.Gowtham Reddy:So actually, uh, to access any peripheral device from the PCI bus, there are a couple of complications because the peripherals have, uh, specific implementations of Reads and Writes, the bus Reads and Writes. So, uh, the approach we took was, uh, using the SPI interface, which is pretty much, kind of, an, uh, universal interface which was developed by Motorola a long time ago.Gowtham Reddy:So luckily, what worked in our favor was most of the Intel, uh, chipsets, they support the SPI based access. 
So they support the SPI, uh, using which we can use the memory map mechanisms to access the PCI space.Gowtham Reddy:So basically, here, what happened was instead of directly using the hardware primitives, we used, uh, software primitives because the chipsets support the SPI interface well. So that's how we landed on our approach. Nic Fillingham:I wanted to circle back to the use of machine learning here in- in solving this problem. How big are the signal sets that you're getting to train the model? How big is the model?Nic Fillingham:Is the model that you use here, to detect anomalies in the firmware layer, is it as sophisticated and large as something as, like, looking for malware on endpoints? Or are we talking, like, a much sort of smaller more, sort of, n- nuance. No, that's not the right word. Sort of a smaller bespoke model?Gowtham Reddy:Uh, I can take that question. So u- usually, uh, on the endpoint, when- when applying machine learning models to malware, we heavily focus on the individual file properties, like file headers, file footers and some file p- properties and so on. Gowtham Reddy:But in the UEFI case, we built a brand new machine learning model based on the properties of the UEFI image itself. So thanks to David, from our MDATP team. So he came up with a model which takes input signals specific to the UEFI firmware image.Gowtham Reddy:To give some examples, each firmware drive has a lot of GUIDs, called firmware GUIDs. And then they have some properties called, uh, file types and properties. Every property that we took was specific to the firmware. So they are not generic to the specific malware files that we see in regular malware detections. So these are highly tailored to the signals from the UEFI firmware image. Nic Fillingham:And were you able to reuse some of the anomaly detection algorithms or approaches from other parts of the Defender engineering org, or did you have to sort of build a brand new model and a brand new way to detect anomalies? Shweta Jha:Yeah. So, we definitely used our existing infrastructure. So, as you know, uh, we have a massive backend system where we get tons of signals and we run tons and tons of AI and ML models to detect the anomalies and to detect the new trends and so on. So, as Gowtham was saying, for this particular UEFI AI and ML model, even though we had to tweak it to make sure that we capture the inputs that are UEFI specific, the models were reused, the pipeline to collect the data was reused, and the channel where we surface it to our customers. So, if you look at the end to end story, the way we do things is we detect, we remediate, and we also notify our SecOps that, "Hey, these are the things that happened in your environment." And that goes in the form of alerts or incidents and so on. So, we used exactly the same infrastructure, same pipeline, but specific to UEFI. Natalia Godyla:So, I know a little earlier in this episode, we talked about the learnings after being in market. What about the impact to SecOps teams? Do we have any early numbers to talk through about what this has raised for our customers? Shweta Jha:That's a great question. We do see here and there, though the number is not pretty high on the implants, but we do see it in numbers there, like, as Gowtham mentioned about MosaicRegressor. We did find that and there are a few others also. But I think the most important aspect of this unique feature is that - just for a little bit, forget about this feature and see that in today's world, today, without a UEFI scanner, the security admins or SecOps, they, they don't know what is happening at this level. They have tons of devices in their organization. And these devices, at this level, are a complete black box for them, because they don't know whether they are configured well. They don't know if there are implants there. They don't know if there are vulnerabilities that could be exploited. Shweta Jha:So, there's the power of this UEFI scanner. One is, you know, we, we built a solution keeping in mind that we will not only detect, we will bring these, these things where they don't have visibility today to understand what is going on. So, the focus area, and then the objective that we have, is to detect the implant, either using the heuristic detection or the AI, ML, but also read through each and every configuration that is happening at this level and the vulnerabilities that exist at this level and bring that to the SecOps' attention, so that when they look at it, they can take appropriate action to remediate it. So, that's the next step. And that is the work right now, we are currently doing. We do not have it in the form of a report yet; we do see it in our data and we want to make sure that these are available to our SecOps. But just to tell you, there are tons and tons of misconfigured devices out there. And it's, it's a little tricky.Gowtham Reddy:To add more about the misconfiguration. So, it's about, like, the PC settings, like the UEFI or BIOS read-write settings, or whatever settings we used to see when we went into the BIOS in the past. So, the UEFI must be configured well to support secure boot, to use the TPM, and to use any of the hardware provided features, it must be configured well. If it is misconfigured, you won't get any protection. So, if you have a helmet in your backseat when you are driving, it won't help you. So, you have to keep it on your head. Shweta Jha:(laughs). That's a great analogy. Nic Fillingham:That leads us to, what is the guidance here for Sec admins and security teams out there? How do they enable this functionality? Is it on by default in, in certain places? What do we need to do to make sure that, that customers are getting the full protection from this capability? Shweta Jha:So, uh, this, this feature is enabled by default on all the devices. Um, we made sure that this is available. And the great news is that it is not only, you know, Windows 10, it is available for servers and down-level as well. So, that's the power that we have in our solution. Ultimately, if you look at what the future is going to look like, secure core PC is the future we should be heading towards. But because enterprises and customers are not there yet, uh, we have the UEFI scanner to complement it. The other thing, if we have to talk about the futuristic roadmap, right now, we built the scanner for UEFI, but there are other devices like network adapters and things like that. There is scope to extend these types of capabilities to those devices as well, because there is a possibility to get those devices exploited too. So, that's something we are considering to work through. Nic Fillingham:Got it. So, just to confirm there, so, this new capability is on by default in any device that is being protected by the defender service. Is, is it, is it as simple as that or is there sort of more to it?Shweta Jha:Yes. 
Any device which is having defender antivirus running.Natalia Godyla:Thank you for that. That was super helpful. And thank you both for joining us on the show today. Shweta Jha:Thank you, Natalia. It was pleasure to be here and talking with our customers. Thank you so much for hosting us. Gowtham Reddy:Thank you Natalia and Nick for hosting us. So, it's been wonderful time talking to you about UV scanner. Thank you so much. Nic Fillingham:Thank you both for your time. Thanks for bringing great innovation to the security space. Shweta Jha:Absolutely. It's a constant journey and we're on it. Natalia Godyla:Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode. Nic Fillingham:And don't forget to tweet us @msftsecurity or email us @securityunlocked@microsoft.com with topics you'd like to hear on a future episode. Until then, stay safe.Natalia Godyla:Stay secure.
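As a rough sketch of the anomaly-detection idea described in this episode - featurizing a firmware image by the modules it contains and flagging unexpected ones - the Python example below uses scikit-learn's IsolationForest over invented module GUIDs. The GUID names, features, and parameters are placeholders; the production UEFI scanner's feature set and models are not public at this level of detail.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy "known good" firmware images for one OEM, each represented as the set
# of module GUIDs found in its UEFI file system (invented placeholder names).
known_good = [
    {"guid-boot-mgr", "guid-oem-logo", "guid-nvme-drv"},
    {"guid-boot-mgr", "guid-oem-logo", "guid-nvme-drv", "guid-wifi-drv"},
    {"guid-boot-mgr", "guid-oem-logo", "guid-wifi-drv"},
]

vocab = sorted(set().union(*known_good))
index = {g: i for i, g in enumerate(vocab)}

def featurize(modules):
    """Multi-hot vector over known GUIDs, plus a count of never-before-seen modules."""
    v = np.zeros(len(vocab) + 1)
    for g in modules:
        if g in index:
            v[index[g]] = 1.0
        else:
            v[-1] += 1.0  # unexpected module, e.g. an unsigned extra driver
    return v

X = np.vstack([featurize(img) for img in known_good])
detector = IsolationForest(n_estimators=100, contamination=0.01, random_state=0)
detector.fit(X)

suspect = {"guid-boot-mgr", "guid-oem-logo", "guid-nvme-drv", "guid-dropper"}
print(detector.predict(featurize(suspect).reshape(1, -1)))  # -1 flags the image as anomalous
```

The essential design point mirrors the interview: the features are firmware-specific (which modules are present), so a driver that has never appeared in that OEM's known-good images stands out even if the file itself matches no known malware signature.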
4/14/2021

Inside Insider Risk

Ep. 23
Throughout the course of this podcast series, we’ve had an abundance of great conversations with our colleagues at Microsoft about how they’re working to better protect companies and individuals from cyber-attacks, but today we take a look at a different source of malfeasance: the insider threat. Now that most people are working remotely and have access to their company’s data in the privacy of their own home, it’s easier than ever to access, download, and share private information.On today’s episode, hosts Nic Fillingham and Natalia Godyla sit down with Microsoft Applied Researcher, Rob McCann to talk about his work in identifying potential insider risk factors and the tools that Microsoft’s Internal Security Team are developing to stop them at the source.In This Episode, You Will Learn:• The differences between internal and external threats in cybersecurity• Ways that A.I. can factor into anomaly detection in insider risk management• Why the rise in insider attacks is helping make it easier to address the issue.Some Questions We Ask:• How do you identify insider risk?• How do you create a tool for customers that requires an extreme amount of case-by-case customization?• How are other organizations prioritizing internal versus external risks? Resources:Rob McCann’s Linkedin:https://www.linkedin.com/in/robert-mccann-004b407/Rob McCann on Uncovering Hidden Risk: https://www.audacy.com/podcasts/uncovering-hidden-risks-45444/episode-1-artificial-intelligence-hunts-for-insider-risks-347764242Insider Risk Blog Post: https://techcommunity.microsoft.com/t5/security-compliance-identity/don-t-get-caught-off-guard-by-the-hidden-dangers-of-insider/ba-p/2157957Nic Fillingham’s LinkedIn:https://www.linkedin.com/in/nicfill/Natalia Godyla’s LinkedIn:https://www.linkedin.com/in/nataliagodyla/ Transcript[Full transcript can be found athttps://aka.ms/SecurityUnlockedEp23]Nic Fillingham:Hello and welcome to Security Unlocked, a new podcast from Microsoft where we unlock insights from the latest in news and research from across Microsoft security engineering and operations teams. I'm Nic Fillingham. Natalia Godyla:And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft Security. Deep dive into the newest threat intel, research and data science. Nic Fillingham:And profile some of the fascinating people working on artificial intelligence in Microsoft Security. Natalia Godyla:And now, let's unlock the pod. Natalia Godyla:Hello Nic, welcome to today's episode, how's it going with you? Nic Fillingham:Hello Natalia, I'm very well, thank you, I hope you're well, and uh, welcome to listeners, to episode 23, of the Security Unlocked podcast. On the pod today, we have Rob McCann, applied researcher here at Microsoft, working on insider risk management, which is us taking the Security Unlocked podcast into- to new territory. We're in the compliance space, now. Natalia Godyla:We are, and so we're definitely interested in feedback. Drop us a note at securityunlocked@microsoft.com to let us know whether these topics interested you, whether there is another avenue you'd like us to go down, in compliance. Also always accepting memes. Nic Fillingham:Cat memes, sort of more specifically. Natalia Godyla:(laughing) Nic Fillingham:All memes? Or just cat memes? Natalia Godyla:Cat memes, llama memes, al- Nic Fillingham:Alpaca- Natalia Godyla:... paca memes. Nic Fillingham:... memes. Yeah. Alpaca. 
Yeah, this is a really interesting uh, topic, so insider risk, and insider risk management is the ability for security teams, for IT teams, for HR to use AI and machine learning, and other sort of automation based tools, to identify when an employee, or when someone inside your organization might be accidentally doing something that is going to create risk for the company, or potentially intentionally uh, whether they have, you know, nefarious or sort of malicious intent. Nic Fillingham:So, it really- really great conversation we had with- with Rob about what is insider risk, what are the different types of insider risk, how is uh, AI and ML being used to go tackle it? Natalia Godyla:Yeah, there's an incredible amount of work happening to understand the context, because so many of these circumstances require data from different departments, uh, uniquely different departments, like HR, to try to understand, well is- is somebody about to leave the company, and if so, how is that related to the volume of data that they just downloaded? And with that, on to the pod. Nic Fillingham:On with the pod. Nic Fillingham:Welcome to the Security Unlocked podcast, Rob McCann, thank you so much for your time. Rob McCann:Thank you for having me. Nic Fillingham:Rob, we'd love to start with a quick intro. Who are you, what do you do? What's your day to day look like at Microsoft, what kind of products or technology do you touch? Give us a- give us an intro, please. Rob McCann:Well, I've been at Microsoft for about 15 years, I am a- I've been an applied researcher the entire time. So, what that means is, I get to bounce around various products and solve technical challenges. That's the official thing, what it actually means is, whatever my boss needs done, that's a technical hurdle, uh, they just throw it my way, and I have to try to work on that. So, applied scientist. Nic Fillingham:Applied scientist, versus what's a- what's a different type of scientist, so what- what's the parallel to applied science, in this sense? Rob McCann:So, applied researcher is sort of a dream job. So, when I initially started, they're sort of the academic style researcher, that it's very much uh, your production is to produce papers and new ideas that sort of in a vacuum look good, and get those out to the scientific community. I love doing that kind of stuff. I don't so much like just writing papers. And so, an applied researcher, what we gotta do, is we gotta sort of be this conduit.Rob McCann:We get to solve things that are closer to the product, and sort of deliver those into the product. So we get very real, tangible impact, but then we're also very much a bridge. So, part of our responsibility is to keep, you know, fingers on what's going on in the abstract research world and try to foster, basically, a large innovation pipe. So, I freaking love this job. Uh, it's exactly what I like to do. I like to solve hard technical problems, and then I like to ship stuff. I'm a very um ... I need tangible stuff. So I love it. Nic Fillingham:And what are you working on at the moment, what's the scope of your role, what's your bailiwick? (laughing) Rob McCann:My bailiwick is uh, right now I'm very much focused on IRM, which is insider risk management, and so what we've been doing over the last year or so, insider risk management GA'd in February of 2020, I want to say. So, Ignite Today is a very festive sort of one year anniversary type thing. That with compliance solutions. 
So, over this last year, what we've done a lot of is sort of uh, build a team of researchers to try to tackle these challenges that are in insider risk, uh, and sort of bring the science to this brand new product. So, a lot of what I'm doing on a daily basis is on one hand, the one hand is, solve some technical things and get it out there, and the other hand is build a team to strengthen the muscle, the research muscle. Natalia Godyla:So, let's talk a little bit more about insider risk management. Can you describe how insider risk differs from external risk, and more specifically, some of the risks associated with internal users? Rob McCann:It's uh, there's some overlap. But it's a lot different than external attack. So, first of all, it's very hard, not saying that external attack is not hard, I- I work with a lot of those people as well. But insiders are already in, right? And they already have permissions to do stuff, and they're already doing things in there. So there's not like, you have a- a ... some perimeter that you can just camp on, and try to get people when they're coming in the front door. Rob McCann:So that makes it hard. Uh, another thing that makes it hard is the variety of risks. So, different customers have different definitions of risk. So, risk might be um, we might want to protect our data, so we don't want data exfiltrated out of the company. We might want trade secrets, so we don't want people to even see stuff that they shouldn't see. We don't want workplace harassment, uh, we don't want sabotage. We don't want people to come in, and implant stuff into our code that's gonna cause problems later. It's a very broad space of potential risks, and so that makes it challenging as well. Rob McCann:And then I would say the third thing that makes it very challenging is, what I said, different customers want- have different definitions of risk. So it's not like ... like, I like the contrast to malware detection. So, we have these external security people that are trying to do all this sophisticated machine learning, to have a classifier that can recognize incoming bad code. Right? And sort of when they get that, like, the whole industry is like, "Yes, we agree, that's bad code, put it in Virus Total, or wherever the world wants to communicate about bad code." And it's sort of all mutually agreed upon, that this thing is bad. Rob McCann:Insider risk is very different. It's um, you know, this customer wants to monitor these things, and they define risk a certain way. Uh, this customer cares about these things, and he want to define risk a certain way. There is a heightened level of customer preferences that have to be brought into the- the intelligence, to- to detect these risks. Natalia Godyla:And what does detecting one of those risks look like? So, fraud, or insider trading, can you walk through what a workflow would look like, to detect and remediate an insider attack? Rob McCann:Yeah, definitely. So- so, first of all, since it's such a broad landscape of potential damage, I guess you would say, first thing the product has to do is collect signals from a lot of different places. We have to collect signals about people logging in. You have to collect signals about people uploading and downloading files from a- from OneDrive, you have to ... you have to see what people are sharing on Teams, what people are ec- you know, emailing externally. If you want the harassment angle, you gotta- you know, you gotta have a harassment detector on communications. 
Rob McCann:So the first thing is just this huge like, data aggregation problem of this very broad set of signals. So that's one, which in my mind is a- is a very strong advantage of Microsoft to do this, because we have a lot of sources of signals, across all of our products. So, aggregating the data, and then you need to have some detectors that can swim through that, uh, and try to figure out, you know, this thing right here doesn't quite look right. I don't know necessarily that it's bad, but the customer says they care about these kind of things, so I need to surface that to the customer. Rob McCann:So, uh, techniques that we use there a lot are anomaly detection. Uh, so a lot of unsupervised type of learning, just to look for strangeness. And then once we surface that to the- the customer, they have to triage it, right? And they have to look at that and make a decision, did I really- do I really want to take action on this thing? Right? And so, along with just the verdict, like, it's probability 98% that this thing is strange, you also have to have all this explanation and context. So you have to say, why do I think this thing is strange? Rob McCann:And then you have to pull in all these things, so like, it's strange because they- they moved a bunch of sensitive data around, that- in ways they usually didn't, but then you also need to bring in other context about the user. This is very user-centric. So you have to say things like, "And by the way, this person is getting ready to leave the company." That's a huge piece of context to help them be able to make a decision on this. And then once the customer decides they want to make a decision, then the product, you know, facilitates uh, different workflows that you might do from that. So, escalating a case to legal, or to HR, there are several remediation actions that the customer can choose from. Nic Fillingham:On this podcast, we've spoken with a bunch of data scientists, Nic Fillingham:... and sort of machine learning folks who have talked about the challenge of building external detections using ML, and from what you've just explained, it sounds like you probably have some, some pretty unique challenges here to give the flexibility to customers, to be able to define what risk means to them. Does that mean that you have to have a customized model built from scratch for every customer? Or can you have a sort of a global model to help with that anomaly detection that then just sort of gets customized more slightly on top based on, on preferences? I, I guess my question is, how do you utilize a tool like machine learning in a solution like this that does require so much sort of customization and, and modification by the, by the customer? Rob McCann:That's, that's a fantastic question. So, what you tried to do, you scored on that one.Nic Fillingham:(laughs).Rob McCann:You try to do both, right? So, customers don't wanna start from scratch with any solution and build everything from the ground up, but they want customizability. So, what you try to do, I always think of it as smart defaults, right? So, you try to have some basic models that sort of do things that maybe the industry agrees is suspicious type, right? And you expose a few high-level knobs. Like, do you care about printing? Or do you care about copying to USB? Or do you want to focus this on people that are leaving the company? Like some very high level knobs. 
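To make the anomaly-detection step Rob describes a bit more concrete, here is a minimal sketch in Python using scikit-learn's IsolationForest over aggregated per-user activity counts. The feature names, numbers, and thresholds are hypothetical; this illustrates the general technique, not the insider risk management product's actual signals or models.

# A minimal sketch (not the IRM product's actual model or signals) of the
# unsupervised anomaly-detection step described above: aggregate per-user
# activity counts, score them, and surface the strangest rows for triage.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# One row per user per day: [files_downloaded, mb_shared_externally,
# usb_copy_events, sensitive_files_touched] -- hypothetical signals.
normal_activity = rng.poisson(lam=[20, 5, 0, 2], size=(500, 4))
# One user quietly moving a large volume of sensitive data.
suspicious_activity = np.array([[400, 250, 12, 90]])
activity = np.vstack([normal_activity, suspicious_activity])

# Fit an isolation forest on the whole population; lower scores = more anomalous.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(activity)
scores = detector.score_samples(activity)

# Surface only the most anomalous rows, together with the raw signals that
# explain why each row looked strange (the "explanation and context" part).
for idx in np.argsort(scores)[:3]:
    print(f"user/day {idx}: score={scores[idx]:.3f}, signals={activity[idx]}")

The customer-facing "knobs" Rob goes on to describe would sit above a sketch like this, choosing which signals and which populations get scored at all.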
Rob McCann:But you don't expose the knobs down to the level of the anomaly detection algorithm and how it's defining distance and all the features it's using to define normal behavior, but you have to design your algorithm to be able to respect those higher level choices that the u- that the user made. And then as far as the smart default, what you try to do as you pr- you try to present a product where out of the box, like it's gonna detect some things that most people agree are sort of risky, and you probably wanna take a look at, but you just give the, you offer the ability to customize as, as people wanna tweak it and say, nah, that's too much. I don't like that. Or printing, it's no big deal for us. We do it. We're printing house, right? Nic Fillingham:Does a solution like this, is it geared towards much larger organizations because they would therefore have more signal to allow you to build a high fidelity model and see where there are anomalies. So, for example, could the science of the insider risk management work for a small, you know, multi hundred, couple hundred person organization? Or is it sort of geared to much, much larger entities, sort of more of the size of a, of a Microsoft where there are tens of thousands employees and therefore there's tens of thousands of types of signal and sort of volume of signal.Rob McCann:Well, you've talked to enough scientists. I look at your guys's guest list. I mean, you know, the answer, right, more data is better, right? But it's not limiting. So, of course, if you have tons and tons of employees in a rich sorta like dichotomy of roles in the company, and you have all this structure around a large company, if you have all that, we can leverage it to do very powerful things. But if you just have a few hundred employees, you can still go in there and you can still say, okay, your typical employees, they have this kind of activity. Weird, the one guy out of a 100 that's about ready to leave suddenly did something strange, uh, or you can still do that, right? So, you, you got to make it work for all, all spectrums. But more data is always better, man. Um, more signals, more data, bring it on. Let's go. Give me some computers. Let's get this done. Natalia Godyla:Spoken like a true applied scientist. So, I know that you mentioned that there's a customized components inside of risk management, but when you look across all of the different customers, are you seeing any commonalities? Are there clear indicators of insider threats that most people would recognize across organizations like seeing somebody exfiltrate X volume of data, or a certain combination of indicators happening at once? I'm assuming those are probably feeding your smart defaults?Rob McCann:Correct. So, there's actually a lot of effort to go. So, I s- I said that we're sort of a bridge between external academic type research and product research. So, that's actually a large focus and it happened in external security too. As you get industry to sort of agree like on these threat matrices, and what's the sort of agreed upon stages of attack or risk in this case. So, yeah, there are things that everybody sort of agrees like, uh, this is fishy. Like, let's make this, let's make this priority. So, that, like you said, it feeds into the smart defaults. The same time we're trying to, you know, we don't think we know everything. So, we're working with external experts. 
I mean, you saw past podcasts, we talked to Carnegie Mellon, uh, we talked to Mitre, we talked to these sort of industry experts to try to make this community framework or, uh, language and the smart defaults. Uh, and then we try to take what we can do on top of that. Nic Fillingham:So, Rob, a couple of times now, you've, you've talked about this scenario where an employee's potentially gearing up to leave the, the company. And in this hypothetical situation, this is an employee that may be looking to, uh, exfiltrate some, some data on their way out or something, something that falls inside the scope of, of identifying and managing, uh, insider risk. I wonder, how do you determine when a user is potentially getting ready to leave the company? Is that, do you need sort of more manual signals from like an HR system because an employee might've been placed on a, on a, on a review, in a review program or review period? Or, uh, are you actually building technology into the solution to try and see behaviors, and then those behaviors in a particular sort of, uh, collection in a particular shape lead you to believe that it could be someone getting ready to leave the company? Or is it both or something else? Rob McCann:So, quick question, Nic, what are you doing after this podcast?Nic Fillingham:Yeah.Rob McCann:Do you want a job? Because it feels like you're reading some of my notes here (laughter). Uh, we, uh-Nic Fillingham:If you can just wait while I download these 50 gigs of files first-Rob McCann:(laughs).Nic Fillingham:... from this SharePoint that, that I don't normally go to, and then I sort of print everything and then I can talk to you about a job. No, I'm being silly. Rob McCann:No, I mean, I mean, you hit the nail on the head there. It's, uh, there are manual signals. This is the same case with say asset labels, like file labels, uh, highly sensitive stuff and not sensitive stuff. So, in both cases, like we want the clear signals. When the customers use our plugins or a compliance solution to tell us that, you know, here's an HR event that's about ready to happen. Like the person's leaving or this file's important. We are definitely gonna take that and we're gonna use it. But that's sort of like the scientists wanna go further. Like what about the stuff they're not labeling? Does that mean they just haven't got around to it? Or does that mean that it's really not important? Or like you just said, like, this guy is starting to email recruiters a lot, this is like, is he getting ready to leave? So, there's definitely behavioral type detection and inference that, uh, we're working on behind the scenes to try to augment what the users are already telling us explicitly. Natalia Godyla:So, what's the reality of insider risk management programs? How mature is this practice? Are folks paying attention to insider risk? Is there a gap here or is there still education that needs to happen? Rob McCann:Yeah. So, there has been people working on this a lot longer than I have, but I do have to say that things are escalating quickly. I mean, especially with modern workforce, right? The perimeter is destroyed and everybody's at home and it's easier to do damage, right? And risk is everywhere, but some, you know, cold, hard numbers, like the number of incidents are going up, b- like, over the last two years. But I think Gartner just come out and said in, in the last two years, the number of incidents have gone up by about half. 
So, the number of incidents are happening more probably, maybe 'cause of the way we work now. The amount of money that people, uh, companies are spending to address this problem is going up. I think Gartner's number was, when, uh, the average went up several million over the last couple of years, um, they just sort of released an insider risk survey and more people are concerned about it. So, all the metrics are pointing up and it just makes sense with the way the world is right now. Nic Fillingham:Where did sort of insider risk start? What's sort of the, the beginning of this solution... what did the sort of incubation technology look like? Where did it start? Uh, are you able to talk to that? Rob McCann:I mean, sure. A little bit. So, this was before me, so a lot of this came out of, uh, DSRE, which is our, our sort of internal security team for, at Microsoft babysitting our own network. So, they had to develop tools to address these very real issues, and the guys that I did a podcast with before Tyler Mirror and, and Robin, they, um, they sorta, you know, brought this out and started making it a proper product to take all these technologies that we were using in-house and try to help turn them into a product to help other people. So, it sort of organically grew out of just necessity, uh, in-house. But as far as like industry, like, uh, Carnegie Mellon, uh, CERT National Insider Threat Center and I think they've been, uh, studying this problem for over a decade.Nic Fillingham:And as a solution, as a technical solution, did it start with like, sort of basic heuristics and just looking for like hard coded flags and logs, or did it actually start out as a sort of a data science problem and, you know, the sort of basic models that have gotten more sophisticated over time? Rob McCann:Yeah. So, it did start, start out with some data science at the beginning as well. Uh, so of course you always have the heuristics. We do that in external attack too. Heuristics are very precise, they, uh, allow us to write down things that are very specific. And they're very, very important part of the arsenal. A lot of people diss on heuristics, but it's a very im- very important part of that, that thing. But it also has, it started out with some data science in it, you know, the anomaly detection is a big one. Um, and so there were already some models that they brought right from, uh in-house to detect when stuff was suspicious. Natalia Godyla:So, what 
Remember when Bill Gates promised that we were gonna solve spam in a, in two years or whatever. Those were some of the first ML models we ever did i- in Microsoft products, and even back then we're playing with this intersection of, you know, things look strange, but I know that certain spam looks like this, so how do you combine that sort of strangeness into sort of a semi-supervised stuff ...Rob McCann:That's the stuff that really floats my boat is ho- how do you, how do you take this existing technology that some people think of as very different ... There's unsupervised, there's supervised, uh, there's anomaly detection. How do you take that kinda stuff and get it to actually talk to each other and do something cooler than you could do on one set or the other? That's where I see the future from a technical standpoint behind the scene for smarter detectors, is how we do that kind of stuff. Rob McCann:Product roadmap, it's related to what we're, we talked about earlier about the industry agreeing on threat major sees and customers telling us what's the most important to them. That, that's stuff's gonna guide, guide the product roadmap. Um, but the technical piece, there's so much interesting work to do.Natalia Godyla:When you're trying to make a hybrid of those different models, the unsupervised and supervised machine learning models, what are you trying to achieve? What are the benefits of each that you're trying to capture by combining them?Rob McCann:Oh, it's the story of semi-supervised, right? I have tons and tons of data that can tell me things about the distribution of activity, I just o-, d-, only have labels on a little bit of it. So, how do I leverage the distributions of activity that's unlabeled with the things that I can learn from my few labeled examples? And how do I get those two things to make a better decision than, than either way on its own? Rob McCann:It's gonna be better than training on just a few things in a supervised fashion, 'cause you don't have a lot of data with labels. So you don't wanna throw away all that distributional information, but if you go over to the distributional information, then you might just detect weirdness. But you never actually get to the target which is risky weirdness, which is two different things.Nic Fillingham:Is the end goal, though, supervised learning, so if you, if you have unsupervised learning with a small set of labels, can you use that small set of labels to create a larger set of labels, and then ultimately get to ... I'm horribly paraphrasing all this here, but, is that sort of the path that you're on?Rob McCann:So, we're gonna try to make the best out of the labels that we can get, right? But, I don't think you ever throw away the unsupervised side. Because, uh, I mean, this c-, this has come up in the external security stuff, as well, is if you're always only learning how to catch the things that you've already labeled, then you're never gonna really s-, be super good at detecting brand new things that you don't have anything like it. Right?Rob McCann:So, you have to have the ... It's sorta like the Explore-exploit Paradigm. You can think of it, at a very high level you can think of supervised as you're exploiting what you already know, and you're finding stuff similar to it. But the explore side is like, "This thing's weird. I don't know what it is, but I wanna show it to a person and see if they can tell me what it is. I wanna see if they like that kinda stuff."Rob McCann:Uh, that's sorta synergy. 
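As a rough illustration of the semi-supervised direction Rob is sketching, the following Python snippet self-trains a classifier from a handful of labeled examples plus a large unlabeled pool. The data is synthetic and the confidence threshold is an arbitrary assumption; this is one common way to combine scarce labels with unlabeled data, not a description of how IRM actually does it.

# A rough self-training sketch: a few labeled incidents plus a large unlabeled
# pool. Synthetic data; the 0.95 confidence threshold is an assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# Pretend only the first 40 examples carry analyst labels ("risky"/"benign").
labeled = np.zeros(len(y), dtype=bool)
labeled[:40] = True

model = LogisticRegression(max_iter=1000)

for _ in range(5):  # a few rounds of pseudo-labeling
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[~labeled])
    confident = proba.max(axis=1) > 0.95   # adopt only high-confidence guesses
    if not confident.any():
        break
    # Promote confident unlabeled examples to pseudo-labels and retrain on them.
    idx = np.flatnonzero(~labeled)[confident]
    y[idx] = model.classes_[proba[confident].argmax(axis=1)]
    labeled[idx] = True

print(f"examples treated as labeled after self-training: {labeled.sum()} / {len(y)}")

Keeping the purely distributional, unlabeled view alongside the labels is the "explore" half Rob describes; pseudo-labeling on its own only exploits what the existing labels already cover.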
That's, that's a powerful thing.Nic Fillingham:What's the most sophisticated thing that the IRM solution can do? Like, have you been sort of surprised by the types of, sort of, anomalies that can be both detected and then sort of triaged and then flagged, or even have automated actions taken? Is there, is there a particular example that you think is a paramount sort of example of what, what this tech can do?Rob McCann:Well, it's constantly increasing in complexity. First of all, anybody who's done applied science knows how hard it is to get data together. So when I work with the IRM team, first of all, I'm blown away at the level of the breadth of signals they've managed to put together into a place that we can reason over. That is such a strong thing. So the, their data collection is super strong. And they're always doing more. I mean, these guys are great. If I come up with an idea, and I say, "Hey, if we only had these signals," they'll go make it happen. It is super, super cool.Rob McCann:As far as sophistication, I mean, you know, we start, we start with heuristics, and then you start doing, like, very obvious anomaly detection, like, "Hey, these, this guy just blew us out of the water by copying all these files." I mean, that's sort of the next level. And then the next level is, uh, "Okay, this guy's not so obvious. He tries to fly under the radar and sort of stay low and slow. But can we detect an aggregate? Over time he's doing a lot of damage." So those more subtle long-term risks. That's actually something we're releasing right now.Rob McCann:Another very powerful paradigm that we're releasing right now is, not just individual actions, but very precise sequences of actions. So you could think of it in a external as kill chain. Like, "They did this, and then they did this, and then they did this." That can be much more powerful than, "They did all three of those separately and then added together," if you know what I mean.Rob McCann:So that sort of interesting sequences thing, that's a very powerful thing. And once you sorta got these frameworks up, like, you can get arbitrarily sophisticated under the hood. And so, it's not gonna stop.Nic Fillingham:Rob, you talked about working on spam detection and spam filters as previous sort of projects you were working on. I wonder if you could tell us a little bit about that work, and I wonder if there's any connective tissue between what you did back then and, and IRM.Rob McCann:Yeah, so I've worked on a lot more than spam. So, I got hired to do spam, to do the research around the spam team, but it quickly, uh, it was this newfangled ML stuff that we were doing, and, uh, it started working on lots of different problems, if you can imagine that. And so we started working on spam detection, and, and phish detection. We started working on Microsoft accounts. We would, we would look at how they behave and try to detect when it looks like suddenly they've been compromised, and help people, you know, sort of lock down their accounts and get, and get protection.Rob McCann:All those things it's been cool to watch. We sorta, we sorta had a little incubation-like science team, and we would put these cool techniques on it and it would start working well, and then they've all sort of branched out into their own very mature products over the years. A- and they're all based very heavily on, uh, the sort of techniques that, that have worked along the way.Rob McCann:It's amazing how much reuse there is. 
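A toy sketch of the "interesting sequences" detection Rob mentions above might look like the following Python snippet, which flags a user only when a specific ordered chain of actions completes within a time window, rather than scoring each action in isolation. The event names, the chain, and the seven-day window are invented for illustration.

# Flag a user only when an ordered chain of actions completes inside a window.
from datetime import datetime, timedelta

PATTERN = ["bulk_download", "archive_created", "external_upload"]
WINDOW = timedelta(days=7)

def matches_chain(events, pattern=PATTERN, window=WINDOW):
    """events: time-ordered list of (timestamp, action) tuples for one user.
    Returns True if `pattern` occurs as an ordered subsequence within `window`."""
    step, start = 0, None
    for ts, action in events:
        if start is not None and ts - start > window:
            step, start = 0, None          # window expired; restart matching
        if action == pattern[step]:
            if step == 0:
                start = ts                 # chain begins here
            step += 1
            if step == len(pattern):
                return True
    return False

user_events = [
    (datetime(2021, 4, 1, 9), "login"),
    (datetime(2021, 4, 1, 10), "bulk_download"),
    (datetime(2021, 4, 2, 14), "archive_created"),
    (datetime(2021, 4, 5, 8), "external_upload"),
]
print(matches_chain(user_events))  # True: the whole chain fits in the window

A production detector would track many such chains per user and fold the results into the aggregate "low and slow" scoring Rob describes; this greedy matcher also ignores overlapping chain starts, which a real system would handle.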
I mean, I mean, let's boil down what we do to just finding patterns in data that support a business objective. That's the same game, uh, in a lot of different domains. So, yes, of course, there's a lot of overlap.Nic Fillingham:What was your first role at Microsoft? Have you always been in, in research on applied research?Rob McCann:I have always been a spoiled brat. I mean, I, I just get to go work on hard problems. Uh, I don't know how I've done it, but they just keep letting me do it, and it's fun. Uh, yeah, I've always been an applied researcher.Nic Fillingham:And that, you said you joined about 14 years ago?Rob McCann:Yep. Yep, yep. That was even back before, uh, the sort of cluster machine learning stuff was hot. So we, I mean, we used to, we used to take, uh, lots of SQL Servers and crunch data and get our features that way, and then feed it into some, like, single box, uh, learning algorithms on small samples. And, like, I've got to see this progression to, like, distributed learning over large clusters. In-house first, we used to have a system called [Cosmos In-House 00:28:04]. I actually got to write some of the first algorithms that did machine learning on that. It was super, super rewarding. And now we have all this stuff that we release to the public and Azure's this big huge ... It's very, very cool to have seen happen.Nic Fillingham:Giving the listener maybe a, uh, a reference point for, for your entry into Microsoft-Rob McCann:(laughs)Nic Fillingham:... is there anything you worked on that's either still around, or that people would have known? I think, like, just the internal Cosmos stuff is, is certainly fascinating. I'm just wondering if there's a, if there's a touchstone on the product side.Rob McCann:Spam filtering for Hotmail. That was my first gig.Nic Fillingham:Nice! I, I cut my teeth on Hotmail.Rob McCann:Yeah, yeah-Nic Fillingham:Yeah, I was a Hotmail guy. I was working on the Hotmail team as we transitioned to Outlook.com.Rob McCann:Mm-hmm (affirmative).Nic Fillingham:And I was, uh, down in Palo Alto, I can't even remember. I was somewhere, where- wherever the Silicon Valley campus is-Rob McCann:SVC-Nic Fillingham:We were rolling like a boar-, a boardroom waiting for the new domain to go live, and we got, like, a 15 minute heads-up. So I'm just Nic@Outlook.com. That's, that's my email address, and I got, I got my wife her first name at Outlook.com. Nic Fillingham:Were you there for that, Rob? Do you have a, did you get a super secret email address?Rob McCann:I was not there for the release, but as soon as it was out, I went and grabbed some for my kids. So I w-, I keep my Hotmail one, 'cause I've had it forever, but, uh-Nic Fillingham:Yeah.Rob McCann:... I got all my kids, like, the, the ones they needed. So.Rob McCann:It's amazing how much stuff came out of the, that, that service right there. So I talked about identity management that we do for Microsoft accounts now. I, that stuff came from trying to protect people, their Hotmail accounts. So we would build models to try to determine, like, "Oh, this guy's suddenly emailing a bunch of people that he doesn't usually," anomaly detection, if you can imagine, right? The-Nic Fillingham:Yeah-Rob McCann:... same thing works. Rob McCann:All that stuff, and then it sorta grew in, and then Microsoft had a bigger account, and then that team's kinda like, "Hey, you guys are doing this ML to detect account compromise, can you come, like, do some Rob McCann:... 
of that over here," and then it grew out to what it is today. A lot of things came from the OML days, it was very fun.Natalia Godyla:Thinking of the different policies organizations have and the growing awareness of those policies, over time, employees are going to shift their tactics. Like you said there are some who are already doing low and slow activities that are evading detection, so, how do you think this is going to impact the way you try to tackle these challenges, or have you already noticed people try to subvert the policies that are in place?Rob McCann:Yeah, so that's the, that's the next frontier, which is w-, you know, why I said we started just getting into, like, the low and slow stuff. It's gonna be like all other security, it's gonna be, "These guys are watching this thing, I gotta try something different."Rob McCann:Actually that's a good motivation for the sort of the high-level approach we're taking, which is tons of signals, so there's not very many activities you could do. You could print, copy to USB, you could upload to something, you could get a third-party app that does the uploading for you. There's not very many avenues that you could do that we're not gonna be able to at least see that happening. Rob McCann:So you couple that with some, that mountain of data with some algorithm that can try to pick out, "This is a strange thing, and this is in the context of somebody leaving." It's gonna be an interesting cat-and-mouse, that's for sure.Natalia Godyla:Do you have any examples of places where you've already had to shift tactics because you're noticing a user try to subvert the existing policies? Or are you still in the exploration phase trying to figure out what really, what this is really going to look like next?Rob McCann:So, right now I don't think we've had ... We haven't got to the phase yet where we're affecting people a lot. Uh, this is very early product, we're a year in. So, I don't see the reactions yet, but I, I guarantee it's gonna happen. And then we're gonna learn from that, and we're gonna say, "Okay, I have the Explore-exploit going. The Explorer just told me that something strange that I've never seen before happened." We're gonna put some people on that that are experts that figure out what that's gonna be. We're gonna figure out how to bring that into the fold of agreed-upon bad stuff, so we're gonna expand this threat matrix, right, as we go along? And we're gonna keep exploring. And that's the same for every single security product.Nic Fillingham:Rob, as someone that's been able to sort of come into different teams and, and different solutions and, and help them, as you say, sort of bring more academic or theoretical research into, into product, what techniques are you keeping your eye on? Like, what's, what's coming in the next two or three years, maybe not necessarily for IRM, maybe just in terms of, as machine learning, as sort of AI techniques are evolving and, and, and sort of getting more and more mature, like, what, where are you excited? What are you, what are you looking at?Rob McCann:So you want the secret sauce, is what you're asking for?Nic Fillingham:That's exactly what I want. I want the secret sauce.Rob McCann:(laughs) Um, well, I mean, there's two schools of thought. There's one school of thought which is, "You better keep your finger on the pulse, because the, the new up-n-comers, the whippersnappers are gonna bring you some really cool, cool stuff." 
And then there's the other school of thought which is, "Everything they've brought in the last ten years is a slight change of what they, was before, the previous ... It's a cycle, right, as with s-, i- ... Science is refinement of existing ideas.Rob McCann:So, I'm a very muted person that way, in that I don't latch on to the next latest and greatest big thing. Um, but I do love to see progress. I s-, just see it as more of a multi-faceted gradual rise of mankind's pattern-recognition ability, right?Rob McCann:Things that excite me are things that deal with ... Like, big data with big labels? Super, super cool stuff happening there. I mean, like, you know, who doesn't like the word deep learning, or have used it-Nic Fillingham:What's a big label? Is there a small label?Rob McCann:(laughs) No, I mean lots of labeled data. Like, uh-Nic Fillingham:Okay.Rob McCann:... yes.Nic Fillingham:Big data sets, lots of labels.Rob McCann:Yes. That stuff, um, that's exciting. There's a lot of cool stuff we couldn't do two decades ago that are happening right now, and that's very, very powerful. Rob McCann:But a lot of the business problems in security, especially, 'cause we're trying to always get this new thing that the bad guys are doing that we haven't seen before. It's very scarce label-wise. And so the things that excite me are how you inject domain knowledge, right? I talked about, we want customers to be able to sort of control on some knobs that you, like, focus the thing on what they think's important. Rob McCann:But it also happens with security analysts, because, there's a lot of very smart people that I get to work with, and they have very broad domain knowledge about what risks look like, and various forms of security. How do you get these machines to listen to them, more than them just being a label machine? How do you embed that domain knowledge into there?Rob McCann:So there's a lot of cool stuff happening. Uh, in that space, weak learning is one that's very popular. Came out of Stanford, actually. But I'm very la-, I'm very, very excited about what we can do with one-shot, or weak supervision, or very scarce labeled examples. I think that's a very, very powerful paradigm.Nic Fillingham:Doing more with less.Rob McCann:That's right. Rob McCann:And transfer learning, I'm sure you guys have talked to a lot of people about that. That's another one. A lot of things we do in IRM ... Well, in, in lots of security is you try to, like, leverage labeled, uh, supervised classification ... Like, think about HR events. Rob McCann:So, maybe I could, don't have a m-, a bunch of labeled, "These are IRM incidents" that I can train this big supervised classifier on. But what I can do is I can get a bunch more HR events, and I can learn things, like you said, that predict that an HR event is probably happening, right? And I chose that HR event, because that's correlated with the label I care about, right? So, I can use all that supervised machinery to try to predict that proxy thing, and then I can try to use what it learned to get me to what I really want with maybe less labels.Nic Fillingham:Got it. My final IRM question is, from what I know about IRM, it feels like it's about protecting the organization from an employee who may maliciously or accidentally do something they're not meant to do. And we've used the example of an employee getting ready to leave the company. Nic Fillingham:What about, though, IRM as a tool to spot well-meaning, but, but practices that, that o-, expose the company to risk? 
So instead of, like, looking for the employee that's about to leave and exfil 50 gigs of cat meme data that they shouldn't, what about, like, just using it to identify, "You know what, this team's just sort of got some sloppy practices here that's sort of opening us for risk. We can use the IRM tool to go and find the groups that need the, sort of the extra training, and to, need to sort of bring them up to scratch. And so it's almost more of a, um, just thinking of it more in sort of a positive reinforcement sense, as opposed to sort of an avoiding a negative consequence.Nic Fillingham:Is that a big function of IRM?Rob McCann:Yeah, I mean, I, I'm sorry if I didn't, uh, communicate that well, but, IRM is definitely intentional and unintentional. In s-, in some of the workflows the way you can do when we detect risky activity is just send an email to the, uh, to the employee and say, "Hey, this behavior is risky, change your ways, please," right? Rob McCann:So, you're right, it's, it can be a coaching tool as well, it's not just, "Data's gonna leave," right? Intentionally.Nic Fillingham:Got it. You've been very generous. This has been a great conversation. I wondered, before you leave us, do you have anything you would like to plug? Do you have a blog, do you have a Twitter? Is there a- another podcast? Which one were you on, Rob?Rob McCann:Uncovering Hidden Risk. I would also like to point you guys to, uh, an inside risk blog. I mean, we, we publish a lot on, on what's coming out and where the product is headed, so it's: aka.ms/insiderriskblog. That's a great place to sorta keep abreast on the technologies and, and where we wanna go.Nic Fillingham:That sounds good. Well, Rob McCann, thank you so much for your time. Uh, this has been a great conversation, um, we'll have to have you back on at some point in the future to learn more about weak learning and other th-, other sort of, uh, cool new technique you hinted at.Rob McCann:Yeah. I appreciate it. Thanks for having me.Rob McCann:(music)Natalia Godyla:Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode.Nic Fillingham:And don't forget to tweet us @msftsecurity or email us at securityunlocked@microsoft.com with topics you'd like to hear on a future episode.Nic Fillingham:Until then, stay safe.Natalia Godyla:Stay secure.
4/7/2021

The Language of Cybercrime

Ep. 22
How many languages do you speak?The average person only speaks one or two languages, and for most people that’s plenty because even as communities are becoming more global, languages are still very much tied to geographic boundaries. But what happens when you go on the internet where those regions don’t exist the same way they do in real life?Because the internet connects people from every corner of the world, cybercriminals can perpetrate scams in countries thousands of miles away. So how do organizations like Microsoft’s Digital Crimes Unit combat cybercrime when they don’t even speak the language of the perpetrators?On today’s episode of Security Unlocked, hosts Nic Fillingham and Natalia Godyla sit down with Peter Anaman, Principal Investigator on the Digital Crimes Unit, to discuss how Peter looks at digital crimes in a very interconnected world and how language and culture play into the crimes being committed, who’s behind them, and how to stop them.In This Episode, You Will Learn:• Some of the tools the Digital Crimes Unit at Microsoft uses to catch criminals.• How language and cultural factors play into cyber crime• Why cyber crime has been on the rise since Covid beganSome Questions We Ask:• How has understanding a specific culture helped crack a case?• How does a lawyer who served as an officer in the French Army wind up working at Microsoft?• Are there best practices for content creators to stay safe from cyber crime?ResourcesPeter Anaman’s LinkedIn:https://www.linkedin.com/in/anamanp/Nic Fillingham’s LinkedIn:https://www.linkedin.com/in/nicfill/Natalia Godyla’s LinkedIn:https://www.linkedin.com/in/nataliagodyla/Microsoft Security Blog: https://www.microsoft.com/security/blog/Transcript[Full transcript can be found at https://aka.ms/SecurityUnlockedEp22]Nic:(music)Nic:Hello and welcome to Security Unlocked. A new podcast from Microsoft where we unlock insights from the latest in news and research from across Microsoft's Security Engineering and Operations Teams. I'm Nic Fillingham.Natalia:And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft's Security. Deep dive into the newest threat intel, research and data science.Nic:And profile some of the fascinating people working on artificial intelligence in Microsoft Security.Natalia:And now, let's unlock the pod.Natalia:Hello, Nic. How is it going?Nic:Hello, Natalia. I'm very well, thank you. I'm very excited for today's episode. We talk with Peter Anaman, who is a return guest. Uh, he was on an earlier episode where we talked about business email compromise and some of the findings in the 2020 Microsoft Digital Defense Report. And Peter had such great stories that he shared with us in that conversation, that we thought let's bring him back. And let's, let's get the full picture. And wow, did we cover some topics in this conversation. I don't even know where to begin. How would, what's your TLDR for this one, Natalia?Natalia:Well, whenever your friends or family think about cyber security, this is it. One of the stories that really stuck out to me is, Peter went undercover, and has actually gone undercover multiple times, but in this one instance he used the cultural context from his family history, as well as the languages that he knows to gain trust with a bad actor group and catch them out. It's incredible. 
He speaks so many languages and he told so many stories about how he applies that to his day-to-day work in such interesting ways.Nic:Yeah, I love, for those of you who listened to the podcast, Peter really illustrates how knowledge of multiple cultures, knowledge of multiple languages, understanding how those cultures and languages can sort of intersect and ebb and flow. Peter has used that as powerful tools in his career. I think it's fascinating to hear those examples. Other listeners of the podcast who, who do have more than one language, who do understand and have experience across multiple cultures, maybe oughta see some, uh, some interesting opportunities for themselves in, in, in cyber security maybe moving forward.Nic:I also thought it was fascinating to hear Peter talk about working to try and get funds and sort of treasures and I think gold, l-literal gold that was taken during the second world war. And getting them back to their original owners. Sort of like, a repatriation effort. As you say, Natalia, these are all things that I think our friends and family think of when they hear the words cyber security. Oh, I'm in cyber security. I'm an investigator in cyber security. And they have this sort of, visions, these Hollywood visions. Nic:This is, that's Peter. That's what he's done. And he's, he talks about it in his episode. It's a great episode.Natalia:And with that, on with the pod.Nic:On with the pod. Nic:(music)Natalia:Welcome back to Security Unlocked, Peter Anaman.Peter:Thank you very much. Thanks for having me back.Natalia:Well, it was a pleasure to talk to you, first time around. So I'm really excited for the second conversation. And in this conversation we really love to chat about your career in cyber security. How you got here? Um, what you're doing? So let's kick it off with a little bit of a refresher for the audience.Natalia:What do you do at Microsoft and what does your day-to-day look like?Peter:So in Microsoft, I work within the legal department. Within a group called the Digital Crimes Unit. We are a team of lawyers, investigators and analysts who look at protecting our customers and our online services from, um, organized crime or attacks against the system. And so we, we bring, for example, civil and criminal referrals in order to do that action. On a day-by-day basis, it's very, very varied. I focus more on business email compromise present with some, with some assistance on ransomware attacks and looking at the depths and the affiliates there. As well as looking at some attacks against the infrastructure based on automated systems. Peter:So it's kind of varied. So on a day, I could, for example, be running some Kusto queries or some specialized database queries in order to look for patterns in unauthorized or illegal activity taking place in order to quickly protect our customers. At the same time, I have to prepare reports. So there's a lot of report writing just to make sure that we can articulate the evidence that we have. And to ensure we respect privacy and all the other rules, you know, when we present the data.Peter:And also, in addition to that, uh, big part of it is actually learning. So I take my time to look at trends of what's going on. Learn new skills in order to know that I can adapt and automate some of the processes I do.Nic:Peter, as someone with an accent, uh, I'm always intrigued by other people's accents. May I inquire as to your accent, sir. Um, I'm hearing, I think I'm hearing like, British. I'm hearing French. 
There's other things there.Peter:(laughs)Nic:Would you elaborate for us?Peter:Yes, of course. Of course. Oh so, I was born in Ghana, West Africa and spent my youth there. And later on went to the UK where I learned that, I had to have elocution lessons to speak like the queen. And so I had lesson and my accent became British. So but at the same time, I'm actually a French national. Um, I've been in the French army as an officer. And so, that's where the French part is. And throughout, I've lived in different countries doing for work. Uh, so I've learned a bit of German, a bit of Spanish on the way.Nic:I, I actually cheated. I looked at your, um, LinkedIn profile and I see you have six languages listed.Peter:Yes.Nic:The two, the two that you didn't mention, I am embarrassingly ignorant of Fante? And T-Twi, Twi? What are they?Peter:Twi and Fante are two of the languages that are spoken in Ghana. They're local languages. And so growing up, I always had that around me. When I went to my father's village where his, we communicate in that language. English is kind of the National Language but within the country, people really speak their own languages. So I've ticked it off now. Can I speak fluently in, in it? No, I've been away for too long. But if you put me there, I would understand everything they're saying. Nic:What are the roots of those two languages? Are they related at all? Or are they completely separate?Peter:They are related but one, one person cannot always understand the other. If you look more broadly, you look at for example, the African continent all are, you'll find that there are over, from what we understand, over, what was it? 2,000 languages are spoken on the continent. So sometimes a person, say on the east coast doesn't understand the person in the west coast, you know. And, and it's fascinating because, you know, when we look at cyber crime, we are facing a global environment. Which is actually pretty carved out, right? The physical world is still pretty segmented.Peter:And so when, for example, investigating some crimes taking place in Nigeria, well they speak pidgin English. And so we have to try and adapt to that to understand, what do they really mean when they say, X or Y? And so, you know, it kind of opens our mind at, as we're doing the investigations. So we have to really try and understand the local reality because the internet is not just one place. And I think, you know, working for, you know, Microsoft and with such an amazing diverse team, we've been able to share knowledge.Peter:So for example, in the case I mentioned, I went to my colleague in Lagos, Abuja. He went, oh, that's what it means. And we're like, okay great. That one makes a lot more sense. And so we can move on. So we have this kind of richness in the team that allows us to lean on each other and, you know, sort of drive impact. But yeah, language is very important. (laughs)Natalia:I was gonna ask, do you have any interesting examples in which the culture was really important to cracking in the case or understanding a specific part of a case that you were working?Peter:Yes. So there was one case I worked on earlier on which was in Lithuania. And in Lithuania, for a very long time, this group had been under investigation but they were very good at their Op Sec and used some, uh, different types of encryption and obsolete, obsolete communication to hide themselves. 
But what I learned from the chats and when I was, this was in an IRC, it started in IRC channels and then moved out of there afterwards. But I noticed that there was a lot of Italy. There was a lot of Italian references. And my grandfather was Sicilian so I've spent time in Italy. So I kind of understood that they traveled to Italy.Peter:So in part of the persona, I made reference to Sicily. And I just said, you know, that's where my grandfather's from. And this, didn't give a name obviously, but it kind of brought them closer, right? Because like, oh, yeah we, we get it. And after about two, three months, I was able to get them to send me pictures of them going on vacation in Italy. And unfortunately for them, the picture had geo-location on it. And also, we were able to blow it up to get the background of where they were in the airport and using the camera from the airport, we were able to identify who they were. And then go back to the passport, find their path and they got arrested a few weeks later. Peter:So but to get that picture, to get that inner information required a kind of, trust that was being built in the virtual world and that comes from trying to understand the culture. By teasing out, asking questions about who are you and what do you like. So that's just one example.Nic:N-no pressure in answering this question and we'll even, we'll even cut it out of the edit if it's one you don't wanna go with.Peter:(laughs) Sure.Nic:If you're good with it. But um, uh, I heard you now talk about personas and identities and y-you just sort of hinted at it in the answer to the previous question. It sounds like some of the work that you have done in the past has been about creating and adopting personas in order to go and learn more information about bad actors and groups out there in, uh, in cyber land. Is that accurate and are you able to talk about what that role and that sort of, that work look like, when you're performing it?Peter:Yeah. So before you have Peter:...persona, you have to understand where that persona's gonna be acted, right?Peter:And I'll give you an, an example of a story. Once I had to go to LA to give a presentation and when I got to the airport I got a cab. And in the cab I looked at the guy's, the license plate of the, of the person. And I said, I bet you, I can guess, which country you were born in. He was like, an African American kind of person. He goes, impossible. No one has guessed it, you will never know. I was, all right. Are you ready? You're from Ghana. And his mind was blown. He was like, how, how did you pin that to one country? I was like, well, in your name, you have Kwesi. And I know if you're born in a country, in Ghana and have Kwesi, it means you're born on a Sunday. So that fact that you have your, that name there, that means you were born from Ghana. He goes, you are right. And so that was that. Peter:And I said, I miss some food, the cuisine from my, from, from Ghana. And he goes, oh, I know a great place. It's in Compton. I said, go. Uh, when? So I went into my restroom, showered, go ready, try to g-got into a taxi and he goes, I'm not going into Compton. I was like, well, why not? I wanna go to that restaurant. And he goes, oh, no, no, no. I'm going to get robbed or something bad is going to happen to me. I was like, but it- By the way, he left, he went, I had a great meal. Afterwards, I spent two hours in the restaurant 'cause no taxi would come and pick me up. And eventually, the waitress took me to a local casino. 
And I got a cab there and I got back.Peter:Where, where I'm going with this story is about the environment. I didn't know what Compton meant, right? So if I created a persona that went there that didn't know the environment, they would not succeed. They would stick out like a sore thumb. They would, they would fail. So the first idea, is always to understand what are the different protocols.Peter:If I'm looking at, for example, FTP or IRC, the different peer-to-peer networks. Or I'm looking at NNTP and the old internet, you know. All of those work, you need different tools to work there. Different ways to collect evidence and different breadcrumbs you could leave that you need to know it may be needed. Because when you're there, you're there, right? And it's, you're leaving, you're leaving a mark. Also some people say, use proxies. Well, the problem with proxies that someone could know you got a proxy on. Because well, there's lots of systems out there. So it's about using the system. Understanding how it's interconnected so that when you show up, you show up without too much suspicion.Peter:The other thing I learned is that the personas have to, have to be kind of, sad. 'Cause what I found is that when they were a bit sad, like, I'm happy with your work and things like that. What I found, that's me, right? I found that people were more interested because people are kind by nature, right? And so when they see that you're sad, they're more likely to communicate with you. While, while if you're too confident, I can do everything. They're like, uh, no, that person. Peter:So I try to like, psychologically look at ways to make the person as real as possible, based on my experience, right, because if it was based on me, I would be called out. Because I will be inventing a character that's, was not real. If you try to give me a trick question, because it's based on me, the answer's gonna be the same. I've got, the persona is me. It's just different. And so that's how I took my time to understand it. I spend a lot of time learning the internet, the protocols, you know, how does P2P actually work. When I, going to an IRC channel or when I'm looking at the peer-to-peer network and looking at the net flow. So the data which is passing from my computer upload. What other information is flowing. Peter:Because if I can see it, they can see it, right? And at the same time I have to have the tools. So I was very fortunate to have, for example, some tools that can switch my IP address with any country, like, every minute. So I could really change personas and change location really rapidly and no one would know better 'cause I'm using different personas in different contexts, right?Peter:Now, I never lie. One of, one of the clear things is that you never, I never try and do anything illegal because I have to assume that law enforcement is on the other side. And that's not what I'm trying to do. So I'm not gonna commit the crime. I'm not going to encourage you to do the crime. I'm just listening and just being curious about you. But then people make mistakes because they share, they over share sometimes without knowing. Maybe they're too tired or something. Natalia:I have a bit of a strange question. So with the lockdown, culturally, people are expressing publicly that they feel like they're over sharing. Because they're all locked indoors. They have, their only outlet is to share online. So have you noticed that in your work in security? Do, are people over sharing in that underground world as well? 
Or there, there hasn't been an equal shift?Peter:No, I, I, I, actually think it's getting worse. Um, and part of the reason is, as more people go online, they're speaking more about how to be anonymous. So for example, I've seen a rapid increase in BackConnect. These are residential IP addresses used as proxies. Well 'cause now they're communicating to each other, saying, hey, we're all online and this is how you can get found out. And so there actually there's more sharing going on. You know, I look at this, many more VPN services out there. It just seems, they're better prepared. Now, obviously, we see a lot more, right? So I'm definitely seeing more sophistication because people are spending more time online. So they, they're not walking around waiting for the bus. They're reading, they're learning, they're adapting. They communicate with each other. Peter:I've even found like, cyber crime as a service, we've found clusters of groups of people. And when you look at that network, you could see. They're saying, oh, I offer phishing pages or I offer VPN. They become specialized. So now you have people that are saying, I am just gonna focus on getting your, for example, some exploits. Or I'm just gonna focus on getting you, um, some red team work so that you can go and drop your ransomware. You know what, they, they've become more specialized actually because they're online. And they've got the time to learn.Nic:Peter, you mentioned earlier, some time you spent in, I think, was it the French army, is that correct?Peter:Yes, that's correct.Nic:Do you want to talk about that? Was that your foray into security? Did it, did it begin with your career in the army? Or did it begin before then?Peter:Hmm. I think it started probably before then. In a sense that, once I left high school, I decided I wanted to study law. Because I wanted the system that I was gonna be working in. And so I went to law school, uh, in the UK. And when I came out, unfortunately, the market was not as good. So I couldn't get a job. And when I looked around at what other trenches I had. I found there was an accelerated cause to become an officer in the French Army. It's a bit like, West Point in the US. Or, and so to do that, it was basically two years, it a two year program condensed into four months. It was hard. And so (laughs) I-Nic:It was what? No sleep? Is that what it was? (laughs)Peter:Ahhh. I've lived through little sleep.Nic:No sleep before meals.Peter:Yeah. I had to, you know, even- Well one time, I even had to evacuated because I got hyperten- you know, uh, hypothermia. (laughs) It was, uh, sort of a character build, character builder, I like to call it that. Uh, but really I think that started the path. Uh, but for the security side was, was after that. So, 'cause of my debts from law school, I, I left the army and I went to, back to the UK. And there, the first job I found was to be a paralegal, photocopying accounts, bank accounts opened between 1933 and 1947. It was part of something called a survey. And it actually had something to do with the Nazi gold.Peter:So what happened is that during the second world war, a lot of peop- uh, people of Jewish origin, saw that they were gonna be persecuted and took their money to, uh, Switzerland and put them in numbered accounts. And kept the number in their head. While unfortunately, so many of them sadly, uh, were victimized, they died. And the number died with them. 
Well, the money stayed in the accounts and over time because the accounts were dormant, well, you had charges. And so the money left. Peter:And so this was something that Paul Volcker, I believe it was, started the survey to get the Swiss banks to comply and give the money back to the families as result. So I was part of a team investigating one of the banks there. And although I started photocopying, I looked at, using my military skills, to be very efficient. So I was the best photocopier.Natalia:(laughs)Peter:And uh, and we were five levels underground. And that's what I did and I worked hard. And then after a few weeks, I got promoted to manage, uh, photocopiers. The people photocopying. We were a great team. And after that, they realized I was still hanging around because everyone was sleeping. 'Cause working five levels underground is a bit depressing sometimes. Peter:And so eventually, I became a data analyst. And so now I had to do the research on the accounts to try and find someone writing in pen, oh, this number is related to this other main account. Or this there piece of evidence is linked to this name. And so basically, for about, I think about three years, I basically, I eventually ran the French team and we looked at all the French cards opened from that period. And that started the investigations and sort of, trying to think deeper into evidence and how to make it work. Natalia:I really didn't think of myself as being cool before this, but I'm definitely not cool after hearing this. It's been validated, these stories are way beyond me. Peter:(laughs) Well, no. Just stories.Natalia:(laughs) So what brought you to Microsoft? That how did you go from piracy investigation to working at Microsoft as an investigator?Peter:So what took place was actually, my troubles created by Microsoft. So back in 2000 it was Microsoft who actually saw that the internet was becoming something that could really hurt internet commerce and e-commerce of role and wanted to make sure Peter:But they could contribute to it, and participate by building this capacity. And all the way through, they were one of my clients, at, essentially. And at some point, I realized that in my career, working for different customers, clients is great, because you learn, you don't have something different. So, for example, a software company is very different to a games company. Is different to a publishing company, is different to a mo- motion picture company, although it's digital piracy, it's actually very different in many respects. And I have- I saw how Microsoft was investing more in the cloud at that time, and I saw that as a big opportunity to really help a bigger threat to the system, right? Peter:And when I say to the system, E-commerce, 'cause everything was booming, this was in like 2008. And so, I decided that I would work for them. And actually, they offered me the job. So, I- I didn't, you know, I'm very privileged to be where I am now. But the, the, the way they positioned it is that they were looking for someone to help develop systems to map out, create a heat map of online piracy. I was like, "Wow, this is a global effort." So, uh, that's what I came on board with. And I built actually, a, a system similar to Minority Report, whereby I got basically these crawlers that I built that would go out and visit all these pirate sites. And you'll find this fascinating 'cause... Well, I found it fascinating, in some cases- Natalia:(laughs). Peter:... 
as we accessed the forums that we're offering, you know, download sale, RapidShare was one of the companies at the time, as we shut them down, they have crawlers in the forum, which will go and replace them. So, we had machine or machine wars, where we would shut down a URL, and then they would put another one. The problem is that our system was infinite. That is, we can, the machine can keep clicking. For them, they had about 10 groups of files. And so once they reached number 10, that was it. So, I found a way to automate the systems. And then after that using the, the Kinect, do you remember the Xbox Kinect? Nic:Cer- certainly. Peter:Managed to hack that, and the way it happened is that I built a map on Bing, whereby the Kinect could look in my body structure. And as I moved my hand, it would drill in to a country. And when I pushed, it would create, like, a, a table on the window with the number of infringements, what products were offered, when was the last time it was detected. And then, I could just wave it away and it would go, and then I could spin the world, it was a 3D map to go to another country and say, "What are the concentrations of piracy?" In this way, we had a visualized way of looking at crime as they were taking place online, and then zoom in and say, "We need to spend more effort here." Right? Peter:So, as well, just getting data analytics, but in a 3D format. And so, that was part of the excitement when I joined, is how to do that. Another example is, I found that, I read some research where it said that basically humans only spend a minute and a half on any search query. You know, in itself it doesn't mean much. But imagine you have a timer and it's one second, two seconds, three seconds, right? You're waiting for a minute and a half, right? So, 90 seconds, let's double that and say 180 seconds. Basically, let's say three minutes, it means that if you go to anyone you know, and ask them, "Go and search for Britney Spears downloads." And you look too, go, do, do the search, and they will click a link, nothing. Go next, click next, and they'll keep going. Peter:Before the three minute mark, they'll stop. They'll change the query, they'll do something different. Because they wouldn't get a result. Which means that when you do a search, and a search has got a million results, uh, it doesn't really matter. People are not going to go through the million. So, I started to think about the problems that when executives and people were saying, "Oh, I go on the internet, and I can find bad stuff." I was like, "Okay, but you can do like in three minutes. How about I build a robot that will pretend to be you, and go and find the infringements within that three minute window? Which is about 400 URLs. But I'm going to hit it with like send 100 queries, distributed." Peter:All of a sudden, we were finding the infringements before anyone could click on it, because we would report it to Google, Bing, Yandex, Baidu. And they would remove it from the, from the search results. And then, we had a measurement system, which would check and see, if I was a human, how many seconds would it take before I found an active download? Right? You could automate it. And so, we had a dashboard that could show that, and it worked. You know, we could, we saw a decline in the number of complaints because, well, it wasn't as visible. Now, if you knew where the pirate bay was, yeah, okay. But that wasn't really what we were doing. 
We were looking at protecting people from getting downloads which contain malware, or something nefarious, right? And, and, so we built these systems to protect consumers, essentially.Natalia:So, is there a connection, or maybe a community behind the work that you've done in piracy and the world of copyright? Uh, any, any best practices that are shared with content creators who are equally concerned with a malware being in their content, or just the sheer, the sheer fact that someone is pirating their content?Peter:I think from a contents per- perspective, and there are several amazing organizations out there, such as the BSA, Business Software Alliance, you have the MPAA, you know, you have the RIAA, and also IACC, the International Anti-Counterfeiting Coalition. Who have just incredible guidance for their members, which are specialized. So, for example, when you look at counterfeit goods, that's a very different thing to like, say, video, because video is distributed in a diff- different way. But one thing, which I think is important is that you don't just leave your, your house open, you lock it with a key, otherwise, someone will just come in and take your stuff. Peter:So, I think the same with contents, that when we create content, we have to find a way to work not only with different organizations that are looking to protect those rights, but also assume your own responsibility of locking your door. For example, what security could you put on it? Right? To maintain it? And how could you work with law enforcement who are there to protect the law, right? There are, I think there are different things that could be considered but most of it really, I would say the best is to start with the industry association, because they are much more specialized, and can give better advice, depending on the nature of the content that the person has. Peter:But, you know, when we were looking at online piracy, it wasn't just online piracy, because, you know, Microsoft participated in something called Operation Pangea. This was an Interpol driven operation where we found that a Russian organization that was distributing software for download in the millions of dollars, we took action to dismantle their payment mechanism. So, Visa and MasterCard would stop the payment on their website. So, they moved to prescription drugs, and they started selling prescription drugs. And so, for certain, it's really not in Microsoft's mandate to do that, right? Peter:But what we did is that we provided the expertise, and the knowledge we have to law enforcement to detect these websites. There were about 10,000 of them, and then drill down to say, "What's the payment gateway?" Because that's a choke point, you know, a criminal, definitely does what he does for the money. You know, you're not gonna rob a bank if there's no money there, right? So, with that in mind, they were able to do really, massively disrupt this organization. And that's because Microsoft looks at providing its expertise, and also learning from other people's expertise, right? But to tackle this bigger problem that impacts all of us.Nic:Peter, I'd love to circle back to language for a sec here. And when you were talking about the languages that you speak, and, and the importance of understanding culture. From your perspective, do you think there are countries, language groups, ethnic groups that are disproportionately... 
Well, I'm trying to think of the most elegant way to say, not protected or not protected as well as they could because they speak a language that is, you know, not as prevalent? So, you know, I looked at, you know, I'd never heard of the two, the two, uh, Ghanaian languages that you had on your- Peter:Mm-hmm (affirmative). Nic:... on your profile there, I'm not even gonna say them right, but Fante and- Peter:(laughs), so, it's Fante and Twi. Nic:Fante and Twi. So- Peter:Perfect. Nic:... native Fante, and Twi, I'm, I'm assuming there's, there's hundreds of thousands, maybe even millions of speakers of those- Peter:Yeah. Yes, absolutely.Nic:... two languages?Peter:Yes, yeah. Nic:Do AI and ML systems allow for supporting people that, you know, either don't speak English, or a sort of major international language?Peter:You're touching on something, which is very near and dear to me, 'cause it's a whole different conversation. And if you look at the history of language, there's, a, a great group of seminars written about it. It's actually I think, I believe, somewhere, I read somewhere that 60% of languages are actually not written. Right? And yes, you can go and see Microsoft has, translates between say, 60 or 100 pairs of languages, and Google the same. But what about the others? What about the thousands of others, that I think there are over 6,000 languages in the world. You're right. I mean, earlier this year, if I may be personal, I'm trying to adopt a baby girl. And so, I went to Ghana to try and manage the situation, which is very slow. Peter:And when I was there, I just saw the reality that, you know, they don't have access to resources, right? Because a book costs money. And so even for AI, how would they even know what AI is? So, I think there is an increasing gap, which is taking place. We can't keep build, building bigger walls, because it's just not going to work. We gotta be, we gotta think bigger than that. And so, one of the ideas is that when we look at some of the criminals, like I've had quite a few of them, a lot of them go to the same technical universities, for example, in West Africa. Well, why is that? It's cause I think they develop skills, and then they leave, and they can't get a job. And so, they end up being pulled into a life of cybercrime. So, culture Peter:It's I think becoming an important thing is that, there is a bigger and bigger divide 'cause not as many people have access to the resources, and how can we as a community who do have access, sort of proactively contribute to that? 'Cause we can't, there's no way you can, you know, just Nigeria has 190 million people. That's a lot of people, that's a lot people. The African continent has 1.2 billion. Asia, four billion, was like, um, I think it's like, is it two, three billion? No, two billion? Something like that but it's a lot people- Nic:It's a lot. Peter:... outside, right? (laughs). And so I think, I'm glad you brought that up 'cause I think it's a- an interesting conversation that we need to develop even, even more. Natalia:So, just trying to distill some of that down. So, are, are you saying then that, uh, at least when we're looking at language, there is a greater diversity of threat actors than there are targets? That those targets are centralized more around English speakers, but because of disproportionate opportunities in other parts of the world, we see threat actors across a number of different languages, across a number of different cultures? Peter:Yes. 
I, I think that's, that's a goo- uh, kind of a good summary of that, but I'll probably take it a step further and say, from my vantage point, again, you know, there are many other more brilliant people out there than me, I can only speak of what I've seen. I still find there are concentrations, right? When you look at business email compromise, and you go and pick up a newspaper and say, "Show me all articles about BEC, the biggest crime right now in the world, and show me all the people who've been arrested." Guess what? They're all from one place, West Africa. Why? Because if you look at the history of that crime, BEC, it was a ruse. Before that it used to be called, it was all under the category of advance-fee fraud, but it used to be a lottery scam. Oh, the Bill and Melinda Gates lottery, you've won $25 million, or, uh, the Nigerian prince, right?Peter:Some people call it 419, which is a criminal code in Nigeria. And then it went further back, they used to send faxes. Or, a lot of people developed a culture called the Yahoo boys, right? They call it Yahoo-Yahoo. And what they do is you go on YouTube, and you search for Yahoo-Yahoo, you'll see them, like there's a whole culture behind that. They're dancing, they say, "This is my Monday car, my Tuesday car." And because they're making money and their communities are not, the community helps them because they get money. The stolen money is shared, and so now it becomes harder to break that because it becomes part of a culture. And so, that's why we see a lot more there I think than, for example, in the US, or in Russia or in other countries. It's 'cause I think there was, there's a, they have this kind of lead in that they'd been doing it for a lot longer and have a better sense of how to be sly. Nic:It sounds like the, the principles of reducing crime apply just as generally in cyberspace as they do in the, the non-cyber space. Whereas if you can give opportunities and lu- you know, um, lucrative opportunities to people, to utilize the skills that they've developed, both sort of in an orthodox or in an unorthodox fashion- Peter:Mm-hmm (affirmative). Nic:... then they're gonna put those skills to good use. But if you, if you train them up and then don't give them any way of using those skills to, to go, you know, ma- make a living in a, in a positive sense, they're, they're gonna turn to other, other avenues. Sounds like in, in, in parts of West Africa, that is business email compromise.Peter:Right, it is. And if I could just add two things there, one is that, you know, when I started looking at how to address cyber, online criminality, I have to look at the physical part of it. And in the physical world, there's actually, I call them neighborhoods. You have good neighborhoods, and bad neighborhoods, right? There are some neighborhoods you go to, no one's going to pickpocket you, right? Everyone's got a nice car or whatever. The other neighborhoods you go to, and there are some shady people in the corner, probably selling drugs or something. You know, uh, I'm, I'm being very simplistic, but I'm just trying to say, there are differences in neighborhoods in the physical world, and those need to be looked at as well. Because even if you gave education or a job to someone in a bad neighborhood, because of the environmental pressure, they may not be able to leave that neighborhood because they could be pressured into it. Peter:Online it's the same, I found that you see there are clusters of criminal activities that happen. 
And in those virtual they're interconnected, it's like, like two, or three levels, they know each other mostly. And so, we can have this kind of, we have to think more holistically, I suppose. I'm trying to say, Nic, that, it, we also have to look at the neighborhood and how do you make sure, for example, that neighborhood they have a sports field or the streets are clean because it makes you feel good, right? There's, there are other environmental factors that I think we may need to consider in a more holistic way. We, we can move much faster that way, because there are different factors, uh, which contribute to this.Nic:So, Peter, I honestly feel like we could keep chatting for the next four hours, right? Natalia:(laughs), I know. Peter:(laughs). Nic:We, we, (laughs). We, we've already, (laughs), eaten up a, a lot of your time, and we've covered a lot of ground. I'd love to circle back one final time to, to language and really sort of ask you is, eh, maybe it's not language, but is there something that you sort of feel particularly passionate about in your career at Microsoft? What you've done so far, what you're working on, and what you hope to do moving forward, is language and opening up accessibility through language, and other sort of cultural diversity? You, you, you, spoke a lot about that in the last sort of, you know, 45 minutes. Is that, is that something that you're personally, uh, invested in, and would like to work more on in the future? And, and if not, what other areas are you, are you looking forward to in the future? Peter:It's, it's absolutely something I'm, I'm very passionate about. And within Microsoft, as an example, the company has invested a lot in diversity and inclusion and equity, and it ended last year, but I was the president of the Africans in Microsoft employee resource group, for example, which has close to a thousand people. And all of it is about helping, working in a two way street, where we help our community, who are at times new in the country. And so, don't understand the cultural differences and how do we help them better, not integrate, but be themselves. And also, allow others that don't understand that they may be a minority, but there's so much richness to that diversity and how it makes teams stronger, because then you're not all looking through the same lens and you can bring in, you know, different perspectives about it. So, I'm absolutely invested in that, not just here in the US but also, you know, the African continent. Peter:And, and I'm very fortunate to be working in a company that's actually pushing me to do that. You know, the company is, is doing amazing things when it comes to diversity and inclusion. And yes, there's room to be made, but at least they're active. Going back really quickly to what you mentioned about language and AI, when we look at the internet, the internet is still zeros and ones. So, when you look at machine learning models, a lot of it is looking for like over 250 signals, right? In a, in one site. And it's not just about the language, it's about different languages, computer code and human code. And so, the machines are bringing those two together, which can help better secure platforms. Natalia:And just as we wrap up here, is there anything you want to plug? Any resources, any groups that you'd like to share with our audience? Peter:I think for me, you know, always try and keep updated on security. So, you know, the Microsoft Security Bulletin is a, is a great source for, uh, up-to-date information. 
Also, I think there are many other organizations that people can search for, and reach out to me online. If you're not a bad guy or girl, I'll- Natalia:(laughs). Peter:... I'll share, (laughs), we, we can, um, actually, you know, I try to mentor as many people in our industry because, eh, together we become stronger. So, do reach out if you want to. Natalia:Awesome. Thank you for that, Peter. It was great having you on the show again, and I can honestly say, we'd be happy to have you back, and it was infinitely fascinating. Peter:Thank you very much for the invitation again. And, uh, it was a pleasure participating. Natalia:By the way, [foreign language 00:38:17]. Peter:Uh, there you go. Natalia:If you ever want to. Peter:(laughs). Natalia:(laughs). Peter:(laughs). Nic:Natalia, I didn't know you speak Spanish.Natalia:(laughs). Peter:(laughs). Natalia:Well, we had a great time unlocking insights into security from research to artificial intelligence, keep an eye out for our next episode. Nic:And don't forget to tweet us @msftsecurity or email us at securityunlocked@microsoft.com with topics you'd like to hear on a future episode. Until then, stay safe.Natalia:Stay secure.
3/31/2021

The Human Element with Valecia Maclin

Ep. 21
For Women’s History Month, we wanted to share the stories of just a few of the amazing women who make Microsoft the powerhouse that it is. To wrap up the month, we speak with Valecia Maclin, brilliant General Manager of Engineering, Customer Security & Trust, about the human element of cybersecurity. In discussion with hosts Nic Fillingham and Natalia Godyla, Valecia speaks to how she transitioned into cybersecurity after originally planning on becoming a mechanical engineer, and how she oversees her teams with a sense of humanity - from understanding that working from home brings unique challenges, to going the extra mile to ensure that no member of the team feels like an insignificant cog in a big machine - Valecia is a shining example of what leadership should look like, and maybe humanity too. In this Episode You Will Learn: • The importance of who is behind cybersecurity protocols • How Microsoft’s Engineering, Customer Security & Trust team successfully transitioned to remote work under Valecia’s leadership • Tips on being a more inclusive leader in the security space Some Questions that We Ask: • What excites Valecia Maclin about the future of cybersecurity? • How does a mechanical engineering background affect a GM’s role in infosec? • How Valecia Maclin, General Manager of Engineering, Customer Security & Trust, got to where she is today Resources: Valecia’s LinkedIn: https://www.linkedin.com/in/valeciamaclin/ Advancing Minorities’ Interest in Engineering: https://www.amiepartnerships.org/ SAFECode: https://safecode.org/ Microsoft’s TEALS: https://www.microsoft.com/en-us/teals Microsoft’s DigiGirlz: https://www.microsoft.com/en-us/diversity/programs/digigirlz/default.aspx Nic’s LinkedIn: https://www.linkedin.com/in/nicfill/ Natalia’s LinkedIn: https://www.linkedin.com/in/nataliagodyla/ Microsoft Security Blog: https://www.microsoft.com/security/blog/ Transcript [Full transcript can be found at https://aka.ms/SecurityUnlockedEp21] Nic Fillingham:Hello, and welcome to Security Unlocked, a new podcast from Microsoft, where we unlock insights from the latest in news and research from across Microsoft security engineering and operations teams. I'm Nic Fillingham. Natalia Godyla:And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft security, deep dive into the newest threat intel research and data science. Nic Fillingham:And profile some of the fascinating people working on artificial intelligence in Microsoft security. Natalia Godyla:And now let's unlock the pod. Hey Nic, welcome to today's episode. How are you doing today? Nic Fillingham:Hello Natalia, I'm doing very well, thank you. And very excited for today's episode, episode 21. Joining us today on the podcast is Valecia Maclin, general manager of engineering for customer security and trust, someone who we have had on the shortlist to invite onto the podcast since we began. And this is such a great time to have Valecia come and share her story and her perspective, as this is the final episode for the month of March, where we are celebrating Women's History Month. So many incredible topics covered here in this conversation. Natalia, what were some of your highlights? Natalia Godyla:I really loved how she brought in her mechanical engineering background to cybersecurity. So she graduated with a mechanical engineering degree, and the way she described it was that she was a systems thinker. And as a mechanical engineer, she thought about how systems could fail. 
And now she applies that to cybersecurity and the- the lens of risk, how the systems that she tries to secure might fail in order to protect against attacks. And I just thought that that was such a cool application of a non-security domain to security. What about yourself? Nic Fillingham:Yeah. Well, I think first of all, Valecia has a- an incredibly relatable story up front for how she sort of found herself pointed in the direction of computer science and security. I think people will relate to that, but then also we spent quite a bit of time talking about the importance of the human element in cybersecurity and the work that Valecia does in her engineering organization around championing and prioritizing, um, diversity and inclusion and what that means in the context of cybersecurity. Nic Fillingham:It's a very important topic. It's very timely. I think it's one that people have got a lot of questions about, like, you know, we're hearing about D&I and diversity and inclusion, what is it? What does it mean? What does it mean for cybersecurity? I think Valecia covers all of that in thi- in this conversation and her perspective is incredible. Oh, and the great news is, as you'll hear at the end, Valecia is hiring. So if you, like me, are inspired by this conversation, the great news is there are actually a bunch of roles that you can go and, uh, apply for to go and work for Valecia on her team.Natalia Godyla:On with the pod?Nic Fillingham:On with the pod. Valecia Maclin, welcome to the Security Unlocked podcast. Thank you so much for your time. Valecia Maclin:Thank you, Nic and Natalia. Nic Fillingham:We'd love to start to learn a bit about you. You're, uh, the general manager of engineering for customer security and trust. Tell us what that means. Tell us about your team, tell us about the amazing work that you and- and the people on your team do. Valecia Maclin:I am so proud of our customer security and trust engineering team. Our role is to deliver solutions and capabilities that empower us to ensure our customers' trust in our services and our products. So I have teams that build engineering capabilities for the digital crimes unit. We build compliance capabilities for our law enforcement and national security team. And our team makes sure that law enforcement agencies are in compliance with their local regulatory responsibilities and that we can meet our obligations to protect our customers. Valecia Maclin:I have another team that provides national security solutions. We do our global transparency centers, where we can ensure that our products are what we say they are. I have two full compliance engineering teams that build capabilities to automate our compliance at scale for our Microsoft security development lifecycle, as well as, uh, things like, uh, advancing machine learning, advancing open source security, just a wealth of enterprise-wide, as well as stakeholder community solutions. Um, I could go on and on. We do digital safety engineering, so a very broad set of capabilities all around the focus and the mission of making sure that the products and services that we deliver to our customers are what we intend and say that they are. Nic Fillingham:Got it. And Valecia, so how does your engineering org relate to some of the other larger engineering orgs at Microsoft that are building, uh, security compliance solutions?Valecia Maclin:So our other Microsoft organizations that do that are often building those capabilities within a particular product engineering group. 
Um, customer security and trust is actually in our corporate, external and legal affairs function. So we don't have that sales obligation. Our full-time responsibility is looking across the enterprise and delivering capabilities that meet those broad regulatory responsibilities. So again, if we think about our digital crimes unit that partners with law enforcement to protect our customers around the world, well, building capabilities for them, or digital safety, right? If you think about the Christchurch Call and what happened in New Zealand, we're building capabilities to help with that in partnership with what those product groups may need to do. So, um, so we're looking at compliance more broadly. Nic Fillingham:Got it. And does your team interface with some of the engineering groups that are developing products for customers? Valecia Maclin:Absolutely. So when you think about the work that we do in the open source security space, our team is kinda that pointy end of the spear to do, um, that assessment and identify where some areas are that we need to put some focus, and then the engineering, the product engineering groups will then go and build that resiliency into the systems. Nic Fillingham:Two follow-up questions. One is, on the podcast, we've actually spoken to some- some folks that are on your team. Uh, Andrew Marshall was on an earlier episode. We spoke with Scott Christianson, we've had other members of the digital crimes unit come on and talk about that work, just a sort of a signpost for listeners of the podcast. How does Andrew's work, uh, fit in your organization? How does Scott's work fit into your organization? Valecia Maclin:So, um, both Andrew and Scott are in a team, um, within my org, uh, that's called security engineering and assurance, and they're actually able to really focus their time on that thought leadership portion. So again, if you think about the engineering groups and the product teams, they have to, you know, really focus on the resiliency of the products; what our team is doing is looking ahead to think about what new threat vectors are. So if you think about the work that Andrew does, he partnered with Harvard and- and other parts of- of Microsoft to really advance thought leadership and how we can interpret adversarial machine learning. Valecia Maclin:Um, when you think about some of our other work in our open source security space, it is, let's look forward at where we need to be on the edge from a thought leadership perspective, let's prototype some capabilities and operationalize them, so that it's tangible for the engineering groups to then apply, and then, uh, my guys will go and partner with the engineering groups and gi- and girls, right? So- so, um, we will then go and partner with the product groups to operationalize those solutions either as a part of our security, um, development life cycle, or just as general security and assurance practices. Nic Fillingham:Got it. And I think I- I can't remember if it was Scott or Andrew who mentioned this, but on a previous podcast, there was a reference to, I think it's an internal tool, something called Liquid. Valecia Maclin:Liquid, yes, uh, yeah. Nic Fillingham:Is that, can you talk about that? 'Cause we, uh, it was hinted at in the previous episode? Valecia Maclin:Absolutely. Yes. Yeah. So Liquid, um, we actually have a full team that builds and sustains Liquid. It is a, um, custom built capability that allows us to basically have sensors within our build systems. 
Um, and so when you think about our security development life cycle, and you think about our operational security requirements, it's given us a way to automate not only those requirements, but you know, ISO and NIST standards. Um, and then that way, with those hooks into the build systems, we can get an enterprise-wide look at the compliance state of our builds as they're going on. Valecia Maclin:So a developer in a product group doesn't have to think about, am I compliant with SDL? Um, what they can do is, you know, once the- the data is looked at, we can do predictive and reactive analysis and say, hey, you know, there's critical bugs in this part of the application that haven't been burned down within 30 days. And so rath- rather than a lot of manual attestation, we can do, um, compliance at scale. And I- I just mentioned manual attestation of security requirements. Oh, one of my other teams, um, has recently just launched the capability that we're super excited about that leverages what we call CodeQL, or what used to be called Semmle. That again, is automating kind of on the other edge, right? So, with Liquid, it's once we've pulled in the build data. Um, we're working with the engineering groups in Microsoft now to, um, do the other edge where they don't have to set up a test that they're compliant with security requirements. Um, we're, we're moving very fast to, um, automate that on behalf of the developer, so that again, we're doing security by design. Nic Fillingham:So, how has your team had to evolve and change, uh, the way that they, they work during this sort of the COVID era, during the sort of work from home? Was your team already set up to be able to securely work remotely or were there sort of other changes you had to make on the fly? Valecia Maclin:So, you know, uh, as we've been in COVID, my team has responded phenomenally. We were actually well positioned to work from home and continue to function from home. You know, there were some instances where from an ergonomic perspective, let's get some resources out to folks because maybe their home wasn't designed for them to be there, you know, five days a week. So, the, the technical component of doing the work wasn't the challenge. What I, as a leader, continuously emphasized, and it's what, what my team needed, frankly, is making sure we stayed with the connectedness, right?Valecia Maclin:How do we continue to make sure that folks are connected, that they don't feel isolated? That, you know, they feel visibility from their, from their managers? And consider I had, I had 10 new people start in the past year, entirely through COVID, including three new college hires. So, can you imagine starting your professional-Nic Fillingham:Wow.Valecia Maclin:... career onboarding and never being in the office with your peers or colleagues and, and, you know, and the connective tissue you would typically organically have to build relationships. And so through COVID, during COVID, we've had to be very creative about building and sustaining the connective tissue of the team. Making sure that we were understanding folks', um, personal needs and creating a safe space for that. You know, I was a big advocate way back in August where I said, Hey folks, you know, 'cause the sch- I knew the school year was starting. And even though we hadn't made any statements yet about when return to work would, you know, would advance to, I made a statement to my team of, Hey, it's August, we've been at this for a few months. 
It's not going anywhere anytime soon. Valecia Maclin:So, I don't want us carrying ourselves as if we're coming back to the office tomorrow. Let's, you know, give folks some space to reconcile what this is gonna look like if they have childcare, if they have elder care, if they're just frozen from being in- indoors this amount of time. Let's make sure that we're giving each other space for that. Also during the past year, you know, certainly we had, I would say, parallel once in a generation type events, right?Valecia Maclin:So, we had COVID, but we also had, uh, increased awareness, you know, of, of the racial inequities in our country. And for me as a woman of color that's in cybersecurity, I've spent my entire career being a, a series of firsts, um, particularly at the executive table. And so, you know, so it was a, an opportunity we also had in the past year to advance that conversation so that we could extend one another grace, right? So I personally was touched by COVID. I, I lost five people in the past year. Um, and I was also-Nic Fillingham:I'm so sorry. Valecia Maclin:Yeah. (laughs) And you keep showing up, right? And I was personally touched as a Black woman who once again, has to be concerned about, you know, I have, uh, I have twin nephews that are 19, one's autistic and the other is not, but we won't allow him to get a driver's license yet 'cause he, my, my sister's petrified because, you know, that's a real fear that a young man who's 6'1", sweetest thing you would ever see, soft-spoken, um, but he's 6'1". He has, you know, dreadlocks in his hair or locks. He would hate to hear me say they were dreads. He has locks in his hair. Um, and he dresses like a 19 year old boy, right?Valecia Maclin:But on spot, that's not what the world sees. And so, um, that's what we're all in. Then you think about what's happening now with our Asian-American community. That's also bundled with folks who are human, having to be isolated and indoors, which, that's not how humanity was designed. And so we have to remember that that shows up. And, and when you're in, in the work of security, where you're always thinking about threat actors, and I often say that some of our best security folks have kind of some orthogonal thinking that's necessary to kind of deal with the different nuances.Valecia Maclin:When you, when you are thinking about how do you build resiliency against ever evolving threats, (laughs) notwithstanding the really massive one that, you know, was the next one we, we dealt with at the end of the last calendar year. Those are all things that work in the circle. And I always say that people build systems, they don't build themselves. And in this time more than ever, hopefully, as security professionals, we're remembering the human element. And we're remembering that the work that we do, um, has purpose, which is, you know, why I entered this space in, in the first place and why I've spent my career doing the things I've done is because we have a phenomenal responsibility increasingly in a time of interconnectedness from a technology perspective to secure our way of life. 
I was majoring in mechanical engineering and material science, uh, at Duke university. I was in my junior year and, um, I should preface it with, I did my four year engineering degree in three and a half years. So, my, my junior year was pretty intense. I worked, was working on a project for mechanical engineering that I'd spent about seven hours on and I lost my data.Nic Fillingham:Ah!Valecia Maclin:I was building a model, literally, I sat at the computer because, you know, you know, back then, you know, there weren't a whole lot of computer resources, so you try to get there early and, and, and snag the computer so that you could use it as long as you needed to. I went in actually, on a holiday because I knew everybody would be gone. So, if I, I could have the full day and not have to give up the computer to someone. So, I'd spend seven hours building this model and it disappeared. Valecia Maclin:And it was the, you know, little five in a 10 floppy, I'm pulling it out, I'm looking at the box (laughs). It's gone. The, the, the model's gone. I was gonna have to start all over. I started my homework over again, but then I said, I will never lose a homework assignment like that again. So, I went and found a professor in the computer science school to agree to do an independent study with me, because as a junior, no one was gonna allow me to change my major for mechanical engineering that far in, at Duke University. So, (laughs) not, not my parents, anyway. So, I, um, did an independent study in computer science and taught myself programming. So, I taught myself programming, taught myself how to understand the hardware with, with my professors help, of course. But it was the work I did with that independent study that actually led to the job I was hired into when I graduated. Valecia Maclin:So, I've never worked as a mechanical engineer. I immediately went into doing national security work, um, where I worked for companies that were in the defense industrial base for the United States. And so I, I started and spent my entire career building large scale information systems for, you know, the DOD, for the intelligence community, and that vectored into my main focus on large, um, security systems that I was developing, or managing, or leading solutions through. So, it started with loss data, right? (laughs) You know, which is so apropos for where we are today, but it started with, you know, losing data on a software, in a software application and me just being so frustrated Valecia Maclin:Straight and said, that's never gonna happen to me again (laughs) that, um, that led me to pursue work in this space. Natalia Godyla:How did your degree in mechanical engineering inform your understanding of InfoSec? As you were studying InfoSec, did you feel like you were bringing in some of that knowledge? Valecia Maclin:One of the beautiful things and that was interesting is I would take on new roles, I'll, I'll never forget. Um, I, I got wonderful opportunities as, as my career was launched and folks would ask me, well, why are you gonna go do that job? You've never done that before, you know, do you know it? (laughs) And so what that taught me is, you know, you don't have to know everything about it going in, you just need to know how to address the problem, right? So, I consider myself a systems thinker, and that's what my mechanical engineering, um, background provided was look at the whole system, right? And so how do you approach the problem? 
And also because I also had a material science component, we studied failures a lot. So, material failure, how that affected infrastructure, you know, when a bridge collapses or, or starts to oscillate. Um, so it was that taking a systems view and then drilling down into the details to predictively identify failures and then build resiliency to not have those things happen again. It's that kind of, that level of thinking that played in when I went into InfoSec. Natalia Godyla:That sounds incredibly fitting. So, what excites you today about InfoSec or, or how has your focus in InfoSec changed over time? What passions have you been following? Valecia Maclin:So, for me, it's the fact that it's always going to evolve, right? And so, you know, obviously the breaches make the headlines, but I'm one, we should never be surprised by breaches, just like we shouldn't be surprised by car thefts or home invasions, or, you know, think about the level of insurance, and infrastructure, and technology, and tools and habits (laughs) that we've, uh, we've developed over time for basic emergency response just for our homes or our life, right? Valecia Maclin:So, for me, it's just part of the evolution that we have, that there's always gonna be something new and there's always gonna be that actor that's gonna look to take a shortcut, that's gonna look to take something from someone else. And so in that regard, it is staying on the offense of building resiliency to protect our way of life. And so I, I am always passionate and again, it's, it's likely how I, you know, spent almost, you know, over 27 years of my career is protecting our way of life. But protecting it in a way where for your everyday citizen, they don't have to go and get the degree in computer science, right? Valecia Maclin:That they can have confidence in the services and the, the things that they rely on. They can have confidence that their car system's gonna brake, that the brakes are gonna, you know, activate when they hit them. That's the place I wanna see us get to as it relates to the dependency we now have on our computer systems, and in our internet connected devices and, and IoT and that sort of thing. So, that's what makes me passionate. Today it may look like multi-factor authentication and, you know, zero trust networks, but tomorrow is gonna look like something completely different. And what I, where I'd love to see us get is, you know, think about your car. We don't freak out about the new technologies that show up in our car, you know, 'cause we know how, we, we, we get in and we drive and, and we anxiously await some people.Valecia Maclin:I, I'm kind of a control freak, I wanna still drive my car. I don't want it to drive itself (laughter). Um, but nevertheless, with each, you know, generational evolution of the car, we didn't freak out and say, Oh my gosh, it's doing this now. If we can start to get there to where there's trust and confidence. And, and that's why I love, you know, what my org is responsible for doing is, you know, that there's trust and confidence that when Microsoft, when you have a Microsoft product or service, you, you, you can trust that it's doing what you intend for it to do. And, and that's not just for here, but then, you know, when you're again, whether it's the car, or your refrigerator, or your television, that's where I'd love to, that's where I want to see us continue to evolve. Not only in the capabilities we deliver, but as a society, how we expect to interact with them. 
Natalia Godyla:Are you particularly proud of any projects that you've run or been part of in your career? Valecia Maclin:I am. And it's actually what led me to Microsoft, I had my greatest career success, but it, it came also at, at a time of, of, of my greatest personal loss. Literally they were concurrent on top of each other. And so I was responsible, I was the, the business executive responsible for the cybersecurity version of, of, of the JEDI program. Uh, so I was the business executive architecting our response to that work that was what the department of Homeland Security. I worked for a company that at the time wasn't known for cybersecurity, and so it was a monumental undertaking to get that responsibility. And the role was to take over and then modernize the cybersecurity re- system responsible for protecting the .gov domain. So, it was tremendously rewarding, especially in the optic that we have today. I received the highest award that my prior company gives to an individual. Valecia Maclin:I was super proud of the team that I was able to lead and, and keep together during all the nuances of stop, start, stop, start that government contracting, um, does when there's protests. But during that same time, you know, 'cause it was, so it was one of those once in a career type opportunities, if you've ever done national security work, to actually usher an anchor in a brand new mission is how we would label it, um, that you would be delivering for the government. But at the same time, that, that wonderfully challenging both technically and from a business perspective scenario was going on, I, in successive moments, lost my last grandparent, suddenly lost my sister. 12 months later, suddenly lost my mother, six months later had to have major surgery. So, that all came in succession while I was doing this major once in a career initiative that was a large cyber security program to protect our government. Valecia Maclin:And I, I survived, (laughs) right? So, um, the, the program started and did well, but I, I then kind of took a step back, right? Once I, I, uh, I'd promised the company at the time of the government that I would, I would give it a year, right? I would make sure the program transitioned since we'd worked so hard to get there. And then I took a step back and said, Hmm, what do I really wanna do? This was a lot (laughs). And so I did take a step back and got a call from Microsoft, actually, um, amongst some other companies. Uh, I thought it was gonna take a break, but clearly, um, others had, had different ideas. And so, um, (laughter) I had, I had multiple opportunities presented to me, but what was so intriguing and, and what drew me to Microsoft was first of all, the values of the company. You know, I'm a values driven person and the values, um mean a lot and I'm gonna come back to that in a moment. Valecia Maclin:But then also I, I mentioned that the org I lead is in corporate external and legal affairs. It's not within the product group. It's looking at our global obligations to securing our products and services from a, not just a regulatory perspective, but not limited by our, our sales target. And so the ability to be strategic in that way is what was intriguing and what, what drew me. 
When you think about the commitments the company has made to its employees and to its vendors during a time, um, that we've been in, it says a lot about the fabric of, of who we are to take away that fear around employability, insurance, and those sorts of things that are basic human needs, to recall how early on we still had our cafeteria services going so that they could then go and provide meals for, for students who would typically get school meals. And at the same time, it meant that those vendors that provide food services could continue to do their work. When you think about our response to the racial inequity and, and justice, social justice initiative, and the commitments that were not only, not only made, but that we are keeping, that is the fabric of the company, and the ability to do the work that I'm passionate about, that, that drew me here. Nic Fillingham:You talked about bringing the human element to security. What does that mean to you and how have you tried to bring that sort of culturally into your organization and, and, and beyond?Valecia Maclin:So, if you think about the human element of security, the operative word is human. And so as humans, we are a kaleidoscope of gender, and colors, and nationalities and experiences. Even if you were in the same town, you have a completely different experience that you can bring to bear. So, when I think about how I introduce, um, diversity, equity and inclusion in the organization that I lead, it is making sure that we're more representative of who we are as humans. And sometimes walking around Redmond, you don't always get that, but it's the, you know, I, I come from the East Coast. So, you know, one of the going phrases I would use a lot is, I'm not a Pacific Northwesterner, I don't have this passive aggressiveness down, I'm pretty direct (laughs). And so that's a different approach, right, to how we do our work, how we lean in, how we ask questions. Valecia Maclin:And so I am incredibly passionate about increasing the opportunities and roles for women and underrepresented minorities, underrepresented, uh, minorities in cybersecurity. And so we've been very focused on, you know, not just looking at internal folks that we may have worked with on another team, you know, for years, and making sure that every opportunity in my organization is always opened up both internally and externally. They're always opened up to make sure that we're, we're looking beyond our mirror image to, um, hire staff. And it's powerful having people that think the same way you do, because you can coalesce very quickly. But the flip side of that is sometimes you can lose some innovation because everybody's seeing the same thing you see. And, and it's so important in, in security, because we're talking about our threat actors typically having a human element, is making sure that we can understand multiple voices and multiple experiences as we're designing solutions, and as we're thinking about what the threats may be. Natalia Godyla:So, for women or, uh, members of minority groups, what guidance do you have for them if they're not feeling empowered right now in security, if they don't know how to network, how to find leaders like yourself, who are supporting D&I? Valecia Maclin:One of the things I always encourage folks to do, and, and I mentor a lot is, just be passionate about who you are and what you contribute. 
But what I would say, uh, Natalia, is for them to take chances, not be afraid to fail, not be afraid to approach people you don't know. Um, something that I got comfortable with very early is, if I was somewhere and heard a leader speak on stage somewhere, or I was, uh, you know, I saw someone on a panel internally or externally, I would go up to them afterwards and introduce myself and ask, you know, would you be willing to have a career discussion with me? Can I get 30 minutes on your calendar? And so that was just kind of a normal part of my rhythm, which allowed me to be very comfortable getting to meet new executive leaders and share about myself and, more importantly, hear about their journeys. Valecia Maclin:And the more you hear about others' journeys, you can help cultivate a script for your own. And so, so that's what I often encourage, 'cause a lot of times folks are apr- afraid, particularly women and, and minorities are afraid to approach or to ask; they think, well, you know, I don't know enough, or I don't know what to ask. It can be as simple as, I heard you speak, I would love to hear more about your story. Do you have time? Do you have 20 minutes? And then let, you know, relationships start from there and let the learning start from there. Nic Fillingham:As a leader in the security space, as a leader at Microsoft, what are you excited about for the future? What, what's sort of coming in terms of, you know, it could be cultural change, it could be technology innovation. What, what are you sort of looking at and seeing in the next three, five, 10 years? Valecia Maclin:For me, it's the cultural change. I'm looking forward, and you heard me kind of allude to a little bit of this, you now have the public increasingly aware of what happens when there's data loss. I'm so excited to look forward to that moment when that narrative shifts and the public learns and knows more of security hygiene, cyber security hygiene. And, and not, you know, both consumer and enterprise, because we take for granted that enper- enterprises have nailed this. And, and we're in a unique footing as a company to have it more part of our DNA, but not every company does. And so what I'm looking forward to for the future is the culture of that young person in the midst of schooling not having to guess about what a cybersecurity or security professional is, much like they don't guess what a lawyer or a doctor is, right? So, that's what I look forward to for the future. Nic Fillingham:Any organizations, groups that you, you know, personally support or are a fan of that you'd also like to plug? Valecia Maclin:Sure. So, I actually support a, a number of organizations. I support an organization called Advancing Minorities in Engineering, which works directly with historically Black colleges and universities to not only increase their learning, but also create opportunities to extend the representation in security. I also am a board member of SAFECode, which is also focused on advancing security design hygiene across enterprises, small, midsize, and large businesses. And so, so those are, are certainly, uh, a couple of, of organizations that, you know, I dedicate time to. Valecia Maclin:I would just encourage folks, you know, we have TEALS, we have DigiGirlz. Everyone has a role to play to help expand the perception of what we do in the security space. We're not monolithic. The beauty of us as a people is that we can bring our differences together to do some of the most phenomenal, innovative things. 
And so that would be my ask is in, whatever way fits for where someone is, that they reach out to someone and make that connection. I v- I very often will reach down and, uh, I'll have someone, you know, a couple levels down and say, Oh my gosh, I can't believe you called and asked for a one-on-one. Valecia Maclin:So, I don't wait for folks to ask for a one-on-one with me. I, I'll go and ping and just, you know, pick someone and say, Hey, you know, I wanna, I just wanna touch base with you and see how you're doing and see what you're thinking about with your career. All of us can do that with someone else and help people feel connected and seen. Natalia Godyla:And just to wrap here, are you hiring, are there any resources that you want to plug or share with our audience, might be interested in continuing down some of these topics? Valecia Maclin:Absolutely. Thank you so much. Um, so I am hiring, hiring data architects, 'cause you can imagine that we deal with high volumes of data. I'm hiring software engineers, I'm hiring, uh, a data scientist. So, um, data, data, and more data, right?Natalia Godyla:(laughs).Valecia Maclin:And, um, and software engineers that are inquisitive to figure out the, the right ways for us to, you know, make the best use of it. Natalia Godyla:Awesome. Well, thank [crosstalk 00:35:11] you for that. And thank you for joining us today, Valecia.Valecia Maclin:Thank you, Natalia. Thank you, Nic. I really enjoyed it.Natalia Godyla:Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode.Nic Fillingham:And don't forget to tweet us @msftsecurity or email us at securityunlocked@microsoft.com with topics you'd like to hear on a future episode. Until then, stay safe.Natalia Godyla:Stay secure.
3/24/2021

Identity Threats, Tokens, and Tacos

Ep. 20
Every day there are literally billions of authentications across Microsoft – whether it’s someone checking their email, logging onto their Xbox, or hopping into a Teams call – and while there are tools like Multi-Factor Authentication in place to ensure the person behind the keyboard is the actual owner of the account, cyber-criminals can still manipulate systems. Catching one of these instances should be like catching the smallest needle in the largest haystack, but with the algorithms put into place by the Identity Security team at Microsoft, that haystack becomes much smaller, and that needle, much larger.On today’s episode, hostsNic Fillingham and NataliaGodyla invite back Maria Puertos Calvo, theLeadDataScientistin Identity Security and Protection at Microsoft,to talk with us about how her team monitors such amassive scale of authentications on any given day.Theyalsolookdeeper into Maria’s background and find out what got her into the field of security analytics andA.I. in the first place, and how her past in academiahelpedthattrajectory.In this Episode You Will Learn:• How the Identity Security team uses AI to authenticate billions of logins across Microsoft• Why Fingerprints are fallible security tools• How machine learning infrastructure has changed over the past couple of decades at MicrosoftSome Questions that We Ask:• Is the sheer scale of authentications throughout Microsoft a dream come true or a nightmare for a data analyst?• Do today’s threat-detection models share common threads with the threat-detection of previous decades?• How does someone become Microsoft’s Lead Data Scientist for Identity Security and Protection?Resources:#IdentityJobs at Microsoft:https://careers.microsoft.com/us/en/search-results?keywords=%23identityjobsMaria’s First Appearance on Security Unlocked, Tackling Identity Threats with A.I.: https://aka.ms/SecurityUnlockedEp08Maria’s Linkedin: https://www.linkedin.com/in/mariapuertas/Nic’s LinkedIn:https://www.linkedin.com/in/nicfill/Natalia’s LinkedIn:https://www.linkedin.com/in/nataliagodyla/Microsoft Security Blog:https://www.microsoft.com/security/blog/Transcript[Full transcript can be found at https://aka.ms/SecurityUnlockedEp20]Nic Fillingham:Hello, and welcome to Security Unlocked, a new podcast from Microsoft where we unlock insights from the latest in news and research from across Microsoft security engineering and operations teams. I'm Nic Fillingham.Natalia Godyla:And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft security, deep dive into the newest threat intel, research, and data science. Nic Fillingham:And profile some of the fascinating people working on Artificial Intelligence in Microsoft security. Natalia Godyla:And now, let's unlock the pod.Nic Fillingham:Hello, Natalia. Welcome to episode 20 of Security Unlocked. This is, uh, an interesting episode. People may notice that your voice is absent from the... This interview that we had with Maria Puertos Calvo. How, how you doing? You okay? You feeling better?Natalia Godyla:I am, thank you. I'm feeling much better, though I am bummed I missed this conversation with Maria. I had so much fun talking with her in episode eight about tackling identity threats with AI. I'm sure this was equally as good. So, give me the scoop. What did you and Maria talk about?Nic Fillingham:It was a great conversation. So, you know, this is our 20th episode, which is kind of crazy, of Security Unlocked, and we get... We're getting some great feedback from listeners. 
Please, send us more, we want to hear your thoughts on the... On the podcast. But there've been a number of episodes where people contact us afterwards on Twitter or an email and say, "Hey, that guest was amazing," you know, "I wanna hear more." And Maria was, was definitely one of those guests who we got feedback that they'd love for us to invite them back and learn more about their story. So, Maria is on the podcast today to tell us about her journey into security and then her path to Microsoft. I won't give much away, but I will say that, if you're studying and you're considering a path into cyber security, or you're considering a path into data science, I think you're gonna really enjoy Maria's story, how she sort of walks through her academia and then her time into Microsoft. We talk about koalas and we talk about the perfect taco.Natalia Godyla:Yeah, to pair with the guac which she covered the first time around. Now tacos. I feel like we're building a meal here. I'm kind of digging the idea of a Security Unlocked recipe book. I, I think we need some kind of mocktail or cocktail to pair with this.Nic Fillingham:Yeah, I do think two recipes might not be enough to qualify for a recipe book. Natalia Godyla:Yeah, I mean, I'm feeling ambitious. I think... I think we could get more recipes, fill out a book. But with that, I, I cannot wait to hear Maria's episode. So, on with the pod?Nic Fillingham:On with the pod.Nic Fillingham: Maria Puertos Calvo, welcome back to the Security Unlocked podcast. How are you doing?Maria Puertos Calvo:Hi, I'm doing great, Nic. Thank you so much for having me back. I am super flattered you guys, like, invited me for the second time.Nic Fillingham:Yeah, well, thank you very much for coming back. The episode that we, we, we first met you on the podcast was episode eight which we called Tackling Identity Threats With AI, which was a really, really popular episode. We got great feedback from listeners and we thought, uh, let's, let's bring you back and hear a bit more about your, your own story, about how you got into security, how you got into identity, how you got into AI. And then sort of how you found your way to Microsoft. Nic Fillingham:But since we last spoke, I want to get the timeline right. Did you have twins in that period of time or had the twins already happened when we spoke to you in episode eight?Maria Puertos Calvo:(laughs) No, the twins had already happened. They-Nic Fillingham:Got it.Maria Puertos Calvo:I think it's been a few months. But they're, they are nine, nine months old now. Yeah.Nic Fillingham:Nine months old. And, and the other interesting thing is you're now in Spain.Maria Puertos Calvo:Yes.Nic Fillingham:When we spoke to you last, you were in the Redmond area or is that right?Maria Puertos Calvo:Yes, yes. The... Last time when we, we spoke, I, I was in Seattle. But I was about to make this, like, big trip across the world to come to Spain and, and the reason was, actually, you know, that the twins hadn't met my family. I am originally from Spain, and, and my whole family is, is here. And, you know, because of COVID and everything that happened, they weren't able to travel to the US to see us when they were born. So, my husband and I decided to just, like, you know, do a trip and take them. And, and we're staying here for a few months now. Nic Fillingham:That's awesome. I've been to Madrid and I've been to... I think I've only been to Madrid actually. Where, where... Are you in that area? 
What part of Spain are you in?Maria Puertos Calvo:Yes, yes. I'm in Madrid. I'm in Madrid. I, I'm from Madrid.Nic Fillingham:Aw- awesome. Beautiful city. I love it. So, obviously, we met you in episode eight, but if you could give us, uh, a little sort of mini reintroduction to who you are, what's your job at Microsoft, what does your... What does your day-to-day look like, that'd be great.Maria Puertos Calvo:Yeah. So, I am the lead data scientist in identity secure and protection, identity security team who... We are in charge of making sure that all of the users who use, uh, Microsoft identity services, either Azure Active Directory or Microsoft account, are safe and protected from malicious, you know, uh, cyber criminals. So, so, my team builds the algorithms and detections that are then put into, uh, protections. Like, for example, we build machine learning for risk based authentication. So, if we... If our models think an authentication is, is probably compromised, then maybe that authentication is challenged with MFA or blocked depending on the configuration of the tenet, et cetera. Maria Puertos Calvo:So, my team's day-to-day activities are, you know, uh, uh, building new detections using new data sets across Microsoft. We have so much data between, you know, logs and APIs and interactions b- between all of our customers with Microsoft systems. Uh, so, so, we analyze the data and, and we build models, uh, apply AI machine learning to detect those bad activities in the ecosystem. It could be, you know, an account compromised a sign-in that looks suspicious, but also fraud. Let's say, like, somebody, uh, creates millions of spammy email addresses with Microsoft account, for example to do bad things to the ecosystem, we're also in charge of detecting that.Nic Fillingham:Got it. So, every time I log in, or every time I authenticate with either my Azure Active Directory account for work or my personal Microsoft account, that authentication, uh, event flows through a set of systems and potentially a set of models that your team owns. And then if they're... And if that authentication is sort of deemed legitimate, I'm on my way to the service that I'm accessing. And if it's deemed not legitimate, it can go for a challenge through MFA or it'll be blocked? Did, did I get that right?Maria Puertos Calvo:You got that absolutely right.Nic Fillingham:So, that means... And I think we might've talked about this on the last podcast, but I still... I... As a long-term employee of Microsoft, I still get floored by the, the sheer scale of all this. So, there's... I mean, there's hundreds of millions of Microsoft account users, because that's the consumer service. So, that's gonna be everything from X-Box and Hotmail and Outlook.com and using the Bing website. So, that's, that's literally in the hundreds of millions realm. Is it... Is it a billion or is it... Is it just hundreds of millions?Maria Puertos Calvo:It depends on how you count them. Uh, if it's per day, it's hundreds of millions, per month I think it's close to a billion. Yes, for... Of users. But the number of authentications overall is much higher, 'cause, you know, the users are authenticating in s- in s- many cases, many, many times a day. A lot of what we evaluate is not only, like, your username and password authentications, there's also the, you know, the model authe- authentication particles that have your tokens cash in the application and those come back for request for access. So, the... We evaluate those as well. 
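For readers who want a concrete picture of the risk-based authentication flow Maria describes, here is a minimal sketch in Python. Everything in it is hypothetical: the feature names, the scoring weights, and the thresholds are invented for illustration and are not Microsoft's actual models or tenant policy values.

# Illustrative sketch of the risk-based authentication flow described above.
# All feature names, weights, and thresholds are hypothetical placeholders;
# they are not Microsoft's actual models or tenant policy values.
from dataclasses import dataclass

@dataclass
class SignIn:
    unfamiliar_location: bool   # sign-in from a location this user has never used
    anonymizer_ip: bool         # IP address associated with anonymizing infrastructure
    impossible_travel: bool     # too far from the previous sign-in to be plausible
    leaked_credentials: bool    # credentials seen in a known breach dump

def score_signin(s: SignIn) -> float:
    """Stand-in for a trained model: returns a compromise risk score in [0, 1]."""
    weights = {
        "unfamiliar_location": 0.25,
        "anonymizer_ip": 0.35,
        "impossible_travel": 0.30,
        "leaked_credentials": 0.60,
    }
    return min(sum(w for name, w in weights.items() if getattr(s, name)), 1.0)

def decide(s: SignIn, mfa_threshold: float = 0.3, block_threshold: float = 0.8) -> str:
    """Map the risk score to an action; in practice the thresholds come from tenant policy."""
    risk = score_signin(s)
    if risk >= block_threshold:
        return "BLOCK"
    if risk >= mfa_threshold:
        return "CHALLENGE_MFA"
    return "ALLOW"

if __name__ == "__main__":
    print(decide(SignIn(False, False, False, False)))  # ALLOW: nothing unusual
    print(decide(SignIn(True, False, True, False)))    # CHALLENGE_MFA: risk 0.55
    print(decide(SignIn(True, True, True, True)))      # BLOCK: risk capped at 1.0

The point is only the shape of the decision: a model turns each sign-in into a risk score, and the tenant's configuration determines whether that score means allow, challenge with MFA, or block.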
Maria Puertos Calvo:So, it's, uh... It's actually tens of billions of authentications a day for both the Microsoft account system and the Azure Active Directory system. Azure Active Directory is also a... Really big, uh, it's almost... It's, it's getting really close to Microsoft account in terms of monthly, monthly active users. And actually, this year, with, you know, COVID, and everybody, you know, the... All the schools, uh, going remote and so many people going to work from home, we have seen a huge increase in, in, in monthly active users for Azure Active Directory as well.Nic Fillingham:And do you treat those two systems separately? Uh, or, or are they essentially the same? It's the same anomaly detection and it's the same sort of models that you'd use to score and determine if a... If an authentication attempt is, is, uh, is legitimate or, or otherwise?Maria Puertos Calvo:It's, like, theoretically the same. You know, like, we, we use the same methodology. But then there are different... The, the two systems are different. They live in different places with different architectures. The data that is logged i- is different. So, these, these were initially not, you know... I- identity only, uh, took care of those two systems, like, a few years ago, before they w- used to be owned by different teams. So, the architecture underneath is still different. So, we still have to build different models and maintain them differently and, you know, uh, uh, tune them differently. So, so it is more work, but, uh, the, the theory and the idea, their... How we built them is, is very similar.Nic Fillingham:Are there some sort of trends that have, you know, appeared, having these two massive, massive systems sort of running in parallel but with the same sort of approach? What kind of behaviors or what kind of anomalies do you see detected in one versus the other? Do they sort of function sort of s- similar? Like, similar enough? Or do you see some sort of very different anomalies that appear in one system and, and not another.Maria Puertos Calvo:They're, interestingly, pretty different. Uh, when we see attack spikes and things like that, they don't always reflect one or the other. I think the, the motivation of the people that attack enterprises and organizations, it's, it's definitely from the, the hackers that are attacking consumer accounts. I think they're, you know, they're so in the black market separately, and they're priced separately, you know, and, and differently. And I think they're, they're generally used for different purposes. We see sometimes spikes in correlation, but, but not that much.Nic Fillingham:Before we sort of, uh, jump in to, to your personal story into security, into Microsoft, into, into data science, is the... You know, these... Talking about these sheer numbers, talking about the hundreds of millions of, of authentications, I think you said, like, tens of billions that are happening every day. Is that a dream for a data scientist to just have such a massive volume of data and signals at your fingertips that you can use to go and build models, train models, refine models? Is that, you know... Is this adage of more signal equals better, does that apply? Or at some point do you now have challenges of too much signal and you're now working on a different set of problems?Maria Puertos Calvo:That's a great question. It is an absolute dream and it's also a nightmare. (laughs) So, yeah. It is... It... And I'll tell you why for both, right? Like, a... It is a great dream. 
Like, obviously, you bet... The, the sheer scale of the data, the, you know, the, the fact... There are a lot of things that are easier, because sometimes when you're working with data and statistics, you have to do a lot of things to estimate if, Maria Puertos Calvo:... it's like the things that you're competing are statistically significant, right? Like, do I have enough data to approach that this sample, it's going to be, uh, reflection of reality, and things like that. With the amount of data that we have, with the amount of users that we have, it's the, we don't have that, we, we don't really have that problem, right? Like we are able to observe, you know, the whole rollout without having to, to figure out if what we're seeing, you know, it's similar to the whole world or not. Maria Puertos Calvo:So that's really cool. Also, because we're, you know, have so many users, then we also have, you know, we're a big focus for attackers. So, so we can see everything, you know, that happens in, in, in the cybersecurity world and like the adversary wall, we can find it in, in our data. And, and that is really interesting. Right. It's, it's really cool. Nic Fillingham:That sounds fascinating. But let, let, let's table that for a second. 'Cause I'd love to sort of go back in time and I'd love to learn about your journey into security, into sort of computer science, into tech, where did it all start? So you grew up in Madrid, is that right? Maria Puertos Calvo:Yes. I grew up in Madrid and when I was finishing high school and I was trying to figure out like, why do I do, I just decided to study telecommunication engineering, it's what's called a Spain, but it's ev- you know, the, the equivalent who asked degrees electrical engineering. Because I was actually, you know, really, really interested in math and science and physics. They were like my favorite subjects in high school. I was pretty, really good at it actually. Maria Puertos Calvo:And, but at the same time, I was like, well, this, you know, an engineering degree sounds like something that I could apply all of this to. And the one that seems like the coolest and the future and like I, I, is electrical engineering. Like I, at that time, computer science was also kind of like my second choice, but I knew that in electrical engineering, I could also learn a lot of computer science. Maria Puertos Calvo:It w- it has like a curriculum that includes a lot of computer science, but also you learn about communication theory and, you know, things like how do cell phones work? And how does television work? And you can learn about computer vision and image processing and all, all kinds of signal processing. I just found it fascinating. Maria Puertos Calvo:So, so I, I started that in college and then when I finished college, it was 2010. So it was right in the middle of the great recession, which actually hits Spain really, really, really badly when it came to the, the labor market, the unemployment back then, I think it was something like 25%-Nic Fillingham:Wow.Maria Puertos Calvo:... and people who were getting out of school, even in engineering degrees, which were traditionally degrees that would have, you know, great opportunities. They were not really getting good jobs. People, only consulting firms were hiring them, um, and, and really paying really, really little money. It was actually pretty kind of a shame. So I said, what, what, what should I do? 
And I, I had been a good student during college, so, and I had a professor that, you know, he, that I had done my kind of thesis with him and his research group. Maria Puertos Calvo:And he said, "Hey, why didn't you just like, continue studying? Like, you can actually go for your PhD and, because you have really good grades, I'm sure you can just get it full of finance. You can get a scholarship that will like finance, you know, four years of PhD. And you know, that way you don't have to pay for your studies, but also you kind of like, you're like a researcher and you have, uh, like money to live." And I was like, well, that sounds like a really good plan.Nic Fillingham:Sounds good.Maria Puertos Calvo:Like I actually, yeah. So, so I could do in that. And, and I, you know, then my master said, this masters say, wasn't computer science, but it was very pick and choose, right? Like, like you could pick your branch and what classes you took. And so the master's was the first half of the PhD was basically getting all your PhD qualifying courses, which also are equivalent to, to doing your masters. Maria Puertos Calvo:So I picked kind of like the artificial intelligence type branch, which had a lot of, you know, classes on machine learning and learn a lot of things that are apply that are user apply machine learning, it's like, uh, natural language processing and speech and speaker recognition and biometrics and computer vision. Basically, all kinds of fields of artificial intelligence, where, where in the courses that I took. And, and I really, really fou- found it fascinating. There wasn't, you know, a data science degree back then, like now everybody has a data science degree, but this is like 10 years ago. Uh, at least, you know, in Spain, there wasn't a data science degree.Maria Puertos Calvo:But this is like the closest thing, uh, that, and that was my first contact with, uh, you know, artificial intelligence and machine learning. And I, I loved it. And, and then I did my masters thesis on, uh, kind of like, uh, biometrics in, in terms of applying statistical models to forensic fingerprints to, to understand if a person can be falsely, let's say, accused of a crime because their fingerprint brand only matches a fingerprint that is found in a crime scene. Maria Puertos Calvo:So kind of try to figure out like, how likely is that. Because there have been people in the past that having wrongly convicted, uh, because of their fingerprints have been found in a crime scene. And then after the fact they have found the right person and then, you know, like, uh, it's not a very scientific method, what is followed right now. So that, that was a really cool thing too, that then I never did anything related to that in my life, but, but it was a very cool thing to study when I was in, in school. Nic Fillingham:Well, that, that's fair. I've, I've got some questions about that. That's fascinating. So how did you even stumble upon that as a, as a, as a, as a research focus? Was there a, a particular case you might've read in the, in the news or something like, I, I think I've never heard of people being falsely accused or convicted through having the same fingerprints, I guess, unless you're an identical twin. Maria Puertos Calvo:Mm-hmm (affirmative). 
(laughs) Actually, I can tell you because I have identical twins, but also because I studied a lot about fingerprints, that identical twins do not have the same fingerprints.Nic Fillingham:Wow.Maria Puertos Calvo:Uh, because fingerprints are formed when you're in the womb. So they're not, they're not like a genetic thing. They happen kind of like, as a random pattern when, when your body is forming in the womb, and they happen, they're different. Uh, so, so humans have unique fingerprints and that's true, but the problem with the, the fingerprint recognition is that, it's very partial, and is very imperfect because the, the late- latent, it's called the latent fingerprint, the one that is found in a crime scene is then recovered, you know, using like some powder, and it's kind of like, you, you just found some, you know, sweaty thing on a surface, and then you have to lift that from there. Right. Maria Puertos Calvo:And, and that has imperfections in it, and it only, it's not going to be like a full fingerprint. You're going to have a partial fingerprint. And then, then you, basically, the way the matching works is using these, like, little poin- points and, and bifurcations of the ridges that exist in your fingerprint. And, and then, you know, looking at the, the location and direction of those, then they're matched with other fingerprints to understand if they're the same one or not. But the, because you don't have the full picture, it is possible that you make a mistake. Maria Puertos Calvo:The one case that it's been kind of really, really famous actually happened with the Madrid bombings that happened in 2004, where, you know, they, they blew up, uh, some trains and, and a couple of hundred people died. Then they, they actually found a fingerprint in one of the, I don't remember, like in the crime scene and it actually matched in the FBI fingerprint database. It matched the fingerprint of a lawyer from Portland, Oregon, I believe it's what it was. And then he was initially, you know, uh, I don't know if he ended up being convicted, but, but you know, it wasn't-Nic Fillingham:He was a suspect.Maria Puertos Calvo:... it was a really famous case. Yes. I think he was initially convicted. And then, but then he was not after they found the right person and they, they actually found that yeah, both fingerprints, like the, the guy whose fingerprint it really was. And these other guys, they, their fingerprints both matched the crime scene fingerprint, but that's only because it was only a piece of it. Right. You, you don't put your finger, like, you don't roll it left to right. Like when you arrive at the airport, right, they make you roll your finger, and they have the whole thing. It's, you're maybe just, you know, the, the, the criminal fingerprint is, is very small.Nic Fillingham:Was that a big part of the, the research, trying to understand how much of a fingerprint is necessary for a sort of statistically relevant or sort of accurate determination that it belongs to, to the, to the right person?Maria Puertos Calvo:Yeah. So the results of the research did have some outcome around, like, depending on how many of those points that are used for identification, which are called minutiae, depending on how, how many of those are available, it changes the probability of a random match with a random person, basically. So the more points you have, the less likely it is that will happen. 
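The intuition behind that result can be shown with a deliberately simplified toy calculation in Python: if each minutia were assumed to match an unrelated print independently with some small probability, the chance of a coincidental full match shrinks rapidly as more minutiae are recovered. The per-minutia probability below is invented, and the independence assumption is far too crude for real forensic work or for the models in Maria's thesis; it only illustrates the trend she describes.

# Toy illustration of why more matched minutiae make a coincidental match less likely.
# Assumes each minutia matches an unrelated print independently with probability p;
# both the value of p and the independence assumption are invented for this sketch
# and are far simpler than real forensic statistics.
def random_match_probability(k: int, p: float = 0.1) -> float:
    """Probability that all k observed minutiae match an unrelated print by chance."""
    return p ** k

if __name__ == "__main__":
    for k in (4, 8, 12, 16):
        print(f"{k:2d} matching minutiae -> coincidental match probability ~ {random_match_probability(k):.1e}")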
Nic Fillingham:The one thing, like, as, as we're talking about this, that I sort of half remember from maybe being a kid, I don't know, growing up in Australia is don't koalas have fingerprints that are the same as humans. Did I make that up? Do you know anything about this? Maria Puertos Calvo:(laughs) I'm sure, I have no idea. (laughs) I have never heard such a thing. Nic Fillingham:I have a-Maria Puertos Calvo:Now I wanna know. Nic Fillingham:...I'm gonna have to look this up.Maria Puertos Calvo:Yeah.Nic Fillingham:I have a feeling that koa- koalas, (laughs) have fingerprints that are either very close to or indistinguishable from, from humans. I'm gonna look this one up. Maria Puertos Calvo:I wonder if like a koala could ever be wrongly convicted of a crime. Nic Fillingham:Right, right. So like, if I want to go rob a bank in Australia, all I need to do is like, bring a koala with me and leave the koala in the bank after I've successfully exited the bank with all the gold bars in my backpack. And then the police would show up and they arrest the koala and they'd get the fingerprints and they go, well, it must be the koala. Maria Puertos Calvo:Exactly. Nic Fillingham:This is a foolproof plan. Maria Puertos Calvo:(laughs)Nic Fillingham:I'm glad I discussed this with you on the podcast. Thank you, Marie, for validating my poses.Maria Puertos Calvo:Now, now you can't publish this.Nic Fillingham:Oh, we talked about fingerprints. Oh, crumbs you're right. Yeah. Okay. All right. We have to edit this out of the, (laughs) out of there quick. Maria Puertos Calvo:(laughs)Nic Fillingham:Um, okay. I didn't realize we had talked so much about fingerprints. That's my fault, but I found that fascinating. Thank you. So what happens next? Do you then go to Microsoft? Do you come straight out of your education at university in Madrid, straight to Microsoft? Maria Puertos Calvo:Kind of and no. So what happens next is that while I, I finished the master's part of this PhD, and at this time I'm actually dating my now husband, and he's an American, uh, working in Washington D.C. as an electrical engineer. So I, you know, I finished my master's and my, I say, why, why do I kind of wanna go be in the US uh, so I can be with him. And, you know, I have the space, the scholarship they'll actually lets me go do research abroad and you know, like kind of pays for it. So Maria Puertos Calvo:Find, um, another research group in the University of Maryland, College Park, which is really, really close to, to DC. And, and I go there to do research for, uh, six months. So, I spent six months there also doing research. Uh, also using, uh, machine learning for, for a different around iris recognition. And, you know, the six months went by and I was like, "Well, I want to stay a little longer," like, "I, you know, I really like living here," and I extended that, like, another six months. I... And at that point, you know, I wasn't really allowed to do that with my scholarship, so I just asked my professor to, you know, finance me for that time. And, and, uh, and at that time, I decided, like, you know, I, I actually don't think I wanna, like, pursue this whole PHD thing. Maria Puertos Calvo:So, so I stayed six more months working for him, and then I decided I, I, I'm not a really big fan of academia. I went into research in, in grad school in Spain mostly because there weren't other opportunities. I was super, you know, glad I did 'cause I, I love all the research and the knowledge that I gained with all... 
You know, with my master's where I learned everything about Artificial Intelligence. But at this point, I really, really wanted to go into industry. Uh, so I applied to a lot of jobs in a lot of different companies. You know, figuring out, like, my background is in biometrics and machine learning. Things like that. Data science is not a word that had ever come to my mind that I was or could be, but I was more, like, interested in, like, you know, maybe software roles related to companies that did things that I had a similar background in.Maria Puertos Calvo:For like a few months, I was looking in... I, I didn't even get calls. And I had no work experience other than, you know, I had been through college and grad school. So, I had... You know, and, and I was from Spain and from a Spanish university, and there was really nothing in my resume that was, like, oh, this is like the person we need to call. So, nobody called me. (laughs) And, and then one day, uh, I, I received a LinkedIn message from a Microsoft recruiter. And she says, "Hey, I have... I'm interested in talking to you about, uh, well, Microsoft." So I said, "Oh, my God. That sounds amazing." So, she calls me and we talk about it, and she's like, "Yeah, there's like this team at Microsoft that is like run mostly by data scientists and what they do is they help prevent fraud, abuse, and compromise for a lot of Microsoft online services." Maria Puertos Calvo:So, they, they basically use data and machine learning to do things like stopping spam for Outlook.com, doing, like, family safety like finding, like, things on the web that, that should be, like, not for children. They were also doing, like, phishing detection on the browser. Um, like phishing URL detection on the browser and a co- compromise detection for Microsoft Account. And so I was like, "Sure, that sounds amazing." You know? "I would love to be in the process." And I was actually lying because I did not want to move to Seattle. (laughs) Like, at that time, I was so hopeful that I will find a job at, you know, somewhere in DC on the east coast, which is like closer to Spain and where, where we lived in. But at the same time, you know, Microsoft calls and you don't say no mostly when nobody else is calling you. Maria Puertos Calvo:Um, so, so I said, "Sure, let's, you know, I, uh... The, the least I can do is, like, see how the interview goes." So, I did the phone screen and then I... They, they flew me to Seattle and I had seven interviews and a lunch inter- and a lunch kind of casual interview. So, it was like an eight hour interview. It was from 9:00 to 5:00. And, you know, everything sounded great, the role sounded great. Um, the, the team were... The things that they were doing sounded super interesting. And, to my surprise, the next day when I'm at the airport waiting for my flight to, to go back to DC, the recruiter calls me and says, "Hey, you, you know, you passed the interview and we're gonna make you an offer. You'll have an offer in the... In the mail tomorrow." I was like, "Oh, my God." (laughs) "What?" Like, I could not... This... It's crazy to me that this was, like, only seven years ago, it... But yeah.Nic Fillingham:Oh, this is seven... So, this was 2014, 2013?Maria Puertos Calvo:Uh, actually, when I did the interview, it was... It was more, more... It was longer. It was 2012. Nic Fillingham:2012. Got it.Maria Puertos Calvo:And then I... And then starting my Microsoft in 2013.Nic Fillingham:Got it.Maria Puertos Calvo:I started as a... 
I think at that time, they called us analysts. But it was funny because the, the team was very proud on the, the fact that they were one of the first teams doing, like, real data science at Microsoft. But there were too many teams at Microsoft calling themselves, and basically only doing, like, analytics and dashboards and things like that. So, because of that, the team that I was in was really proud, and they didn't want to call themselves data scientists, so they... I don't know. We called ourselves, like, analysts PMs, and then we were from that to decision scientists, uh, which I never understood the, the name. (laughs) Uh, but yeah. So, that's how I started.Nic Fillingham:Okay, so, so that first role was in... I heard you say Outlook.com. So, were you in the sort of consumer email pipeline team? Is that sort of where that, that sat?Maria Puertos Calvo:Yeah. Yeah, so, uh, the team was actually called safety platform. It doesn't exist anymore, but it was a team that provided the abuse, fraud, and, and, like, malicious detections for other teams that were... At the time, it was called the Windows live division.Nic Fillingham:Yes.Maria Puertos Calvo:So, all the... All the teams that were part of that division, they were like the browser, right? Like, Internet Explorer, Hotmail, which was after named Outlook.com. And Microsoft Account, which is the consumer ecosystem, we're all part of that. And our team, basically, helped them with detections and machine learning for their, their abusers and fraudsters and, and, you know, hackers that, that could affect their customers. So, my first role was actually in the spam team, anti-spam team. I was on outbound, outbound spam detection. So, uh, we will build models to detect when users who send spam from Outlook.com accounts out so we could stop that mail basically.Nic Fillingham:And I'd loved to know, like, the models that you were building and training and refining then to detect outbound spam, and then the kinds of sort of machine learning technology that you're, you're playing today. Is there any similarity? Or are they just worlds apart? I mean, we are talking seven years and, you know, seven years in technology may as well be, like, a century. But, you know, is there common threads, is there common learnings from back there, or is everything just changed?Maria Puertos Calvo:Yes, both. Like, there, there are, obviously, common threads. You know, the world has evolved, but what really has evolved is the, the, the underlying infrastructure and tools available for people to deploy machine learning models. Like, back then, we... The production machine learning models that were running either in, like, authentication systems, either in off- you know, offline in the background after the fact, or, or even for the... For the mail. The Microsoft developers have to go and, like, code the actual... Let's say that you use, like, I don't know, logistic regression, which is a very typical, easy, uh, machine learning algorithm, right? They had to, like, code that. They had to, you know... There wasn't like a... Like, library that they could call that they would say, "Okay, apply logistic regression to, to this data with these parameters. Maria Puertos Calvo:Back then, it was, like... People had to code their own machine learning algorithms from, like, the math that backs them, right? So, that was actually... Make things so much, you know, harder. They... 
There weren't, like, the tools to actually, like, do, like, data manipulation, visualization, modeling, tuning, the way that we have so many things today. So, that, you know, made things kind of hard. Nothing was... Nothing was, like, easy to use for the data scientists. It... There was a lot of work around, you know, how do you... Like, manual labor. It was like, "Okay, I'm gonna, like, run the model with these parameters, and then, like, you know, b- based on the results, you would change that and tweak it a little bit. Maria Puertos Calvo:Today, you have programs that do that for you. And, and then show you all the results in, like, a super cool graph that tells you, uh, you know, like, this is the exact parameters you need to use for maximizing this one, uh, you know, output. Like, if you want to maximize accuracy or precision or recall. That, that is just, like, so much easier.Nic Fillingham:That sounds really fascinating. So, Maria, you now... You now run a team. And I, I would love to sort of get your thoughts on what makes a great data scientist and, and what do you look for when you're hiring into, into your team or into sort of your, your broader organization under, uh, under identity. What perspectives and experience and skills are you trying to sort of add in and how do you find it? Maria Puertos Calvo:Oh, what a great question. Uh, something that I'm actually... That's... The, the answer of that is something I'm refining every day. The, you know, the more, uh, experience I get and the more people I hire. I, I feel like it's always a learning process. It's like, what works and what doesn't. You know, I try to be open-minded and not try to hire everybody to be like me. So, that's... I'm trying to learn from all the people that I hire that are good. Like, what are their, you know... What's, like, special about them that I should try to look in other people that I hire. But I would say, like, some common threads, I think, it's like... Really good communication skills. Maria Puertos Calvo:Like, o- obviously the basics of, you know, being... Having s- a strong background in statistical modeling and machine learning is key. Uh, but many people these days have that. The, the main knowledge is really important in our team because when you apply data science to cyber security, there are a lot of things that make the job really hard. One of them is the, the data is... What... It's called really imbalanced because there are mostly, most of the interactions with, with the system, most of the data represents good activities, and the bad activities are very few and hard to find. They're like maybe less than 1%. So, that makes it harder in general to, to, to get those detections. Maria Puertos Calvo:And the other problem is that you're in an adversarial environment, which means, you know, you're not detecting, you know, a crosswalk in, in a road. Like, it's a typical problem of, of computer vision these days. A crosswalk's gonna be a crosswalk today or tomorrow, but if I detect an attacker in the data today and then we enforce... We do something to stop that attacker or to... Or to get them detected, then the next day they might do things differently because they're going to adapt to what you're doing. So, you need to build machine learning models or detections that are robust enough that use, use what we call features or, or that look at data that it's not going to be easy... Easily gameable. 
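As a concrete contrast with the hand-coded logistic regression days Maria describes, here is a sketch of the kind of off-the-shelf workflow available today: a library model, an automated parameter sweep that reports which settings maximize a chosen metric such as recall or precision, and a class-weighting option for the heavily imbalanced labels she mentions. The synthetic dataset and the parameter grid are invented for illustration; this is not the team's actual pipeline.

# Sketch of today's off-the-shelf tooling, in contrast with hand-coding the algorithm:
# the library provides the model, the parameter sweep, and a weighting knob for
# imbalanced labels. The dataset and grid values are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Roughly 1% positive labels, echoing the "less than 1% bad activity" point.
X, y = make_classification(n_samples=20_000, n_features=20,
                           weights=[0.99, 0.01], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={
        "C": [0.01, 0.1, 1.0, 10.0],
        "class_weight": [None, "balanced"],  # compensate for the class imbalance
    },
    scoring="recall",  # or "precision" / "accuracy", whichever metric you want to maximize
    cv=5,
)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
print("Cross-validated recall:", round(search.best_score_, 3))
print("Held-out recall:", round(search.score(X_test, y_test), 3))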
Maria Puertos Calvo:And, and it's really easy to just say, "Oh, you know, there's an attack coming from, I don't know, like, pick a country, like, China. Let's just, like, make China more important in our algorithm." But, like, maybe tomorrow that same attacker just fakes IP addresses Maria Puertos Calvo:Addresses in, in a bot that, that is not in China. It's in, I don't know, in Spain. So, so, you just have to, you know, really get deep into, like, what it means to do data science in our own domain and, and, and gain that knowledge. So, that knowledge, for me, is, is important but it's also something that, that you can gain in the job. But then things like the ability to adapt and, and then also the ability to communicate with all their stakeholders what the data's actually telling us. Because it's, you know... You, you need to be able to tell a story with the data. You need to be able to present the data in a way that other people can understand it, or present the results of your research in, in a way that other people can understand it and really, uh, kind of buy your ideas or, or what you wanna express. And I think that that is really important as well.Nic Fillingham:I sort of wanted to touch on what role... Is there a place in data science for people that, that don't have a sort of traditional or an orthodox or a linear path into the field? Can you come from a different discipline? Can you come from sort of an informal education or background? Can you be self-taught? Can you come from a completely different industry? What, what sort of flexibility exists or should there exist for adding in sort of different perspectives and, and sort of diversity in, in this particular space of machine learning?Maria Puertos Calvo:Yes. There are... Actually, because it's such a new discipline, when I started at Microsoft, none of us started our degrees or our careers thinking that we wanted to go into data science. And my team had people who had, you know, degrees in economics, degrees in psychology, degrees in engineering, and then they had arrived to data science through, through different ways. I think data science is really like a fancy way of saying statistics. It's like big data statistics, right? It's like how do we, uh, model a lot of data to, like, tell us to do predictions, or, or tell us like what, how the data is distributed, or, or how different data based on different data points looks more like it's this category or this other category. So, it's all really, like, from the field of statistics.Maria Puertos Calvo:And statistics is used in any type of research, right? Like, when you... When people in medicine are doing studies or any other kind of social sciences are doing studies, they're using a lot of that, and, and they're more and more using, like, concepts that are really related to what we use in, in data science. So, in that sense, it's, it's really possible to come to a lot of different fields. Generally, the, the people who do really well as data scientists are people who have like a PhD and have then this type of, you know, researching i- but it doesn't really matter what field. I actually know that there, there are some companies out there that their job is to, like, get people that come out of PhD's programs, but they don't have like a... Like a very, you know, like you said, like a linear path to data science, and then, they kind of, like, do like a one year training thing to, like, make them data scientists, because they do have, like, the... 
All the background in terms of, like, the statistics and the knowledge of the algorithms and everything, but they... Maybe they're, they've been really academic and they're not... They don't maybe know programming or, or things that are more related to the tech or, or they're just don't know how to handle the data that is big. Maria Puertos Calvo:So, they get them ready for... To work in the industry, but the dat- you know, I've met a lot of them in, in, in, in my career, uh, people who have gone through these kind of programs, and some of them are PhDs in physics or any other field. So, that's pretty common. In the self-taught role, it's also very possible. I think people who, uh, maybe started as, like, software engineers, for example, and then there's so much content out there that is even free if you really wanna learn data science and machine learning. You can, you know, go from anything from Coursera to YouTube, uh, things that are free, things that are paid, but that you can actually gain great knowledge from people who are the best in the world at teaching this stuff. So, definitely possible to do it that way as well.Nic Fillingham:Awesome. Before we let you go, we talked about the perfect guacamole recipe last time because you had that in your Twitter profile.Maria Puertos Calvo:Mm-hmm (affirmative). (laughs)Nic Fillingham:Do you recall that? I'm not making this up, right? (laughs)Maria Puertos Calvo:I do. No. (laughs)Nic Fillingham:All right. So, w- so we had the perfect guacamole recipe. I wondered what was your perfect... I- is it like... I wanted to ask about tacos, like, what your thoughts were on tacos, but I, I don't wanna be rote. I don't wanna be, uh, too cliché. So, maybe is there another sort of food that you love that you would like to leave us with, your sort of perfect recipe?Maria Puertos Calvo:(laughs) That's really funny. I, I actually had tacos for lunch today. That is, uh... Yeah. (laughs)Nic Fillingham:You did? What... Tell me about it. What did you have?Maria Puertos Calvo:I didn't make them, though. I, I went out to eat them. Uh-Nic Fillingham:Were they awesome? Did you love them?Maria Puertos Calvo:They were really good, yeah. So, I think it's-Nic Fillingham:All right. Tell us about those tacos.Maria Puertos Calvo:Tacos is one of my favorite foods. But I actually have a taco recipe that I make that it's... I find it really good and really easy. So, it's shrimp tacos.Nic Fillingham:Okay. All right.Maria Puertos Calvo:So, it's, it's super easy. You just, like, marinate your shrimp in, like, a mix of lime, Chipotle... You know those, like, Chipotle chilis that come in a can and with, like, adobo sauce?Nic Fillingham:Yeah, the l- it's got like a little... It's like a half can. And in-Maria Puertos Calvo:Yeah, and it's, like, really dark, the sauce, and-Nic Fillingham:Really dark I think. And in my house, you open the can and you end up only using about a third of it and you go, "I'm gonna use this later," and then you put it in the fridge.Maria Puertos Calvo:Yes, and it's like-Nic Fillingham:And then it... And then you find it, like, six months later and it's evolved and it's semi-sentient. But I know exactly what you're talking about.Maria Puertos Calvo:Exactly. So that... You, you put, like, some of those... That, like, very smokey sauce that comes in that can or, or you can chop up some of the chili in there as well. And then lime and honey. And that's it. You marinate your shrimp in that and then you just, like, cook them in a pan. 
And then you put that in a tortilla, you know, like corn preferably. But you can use, you know, flour if that's your choice. Uh, and then you make your taco with the... That shrimp, and then you put, like... You, you pickle some sliced red onions very lightly with some lime juice and some salt, maybe for like 10 minutes. You put that on... You know, on your shrimp, and then you can put some shredded cabbage and some avocado, and ready to go. Delicious shrimp tacos for a week night.Nic Fillingham:Fascinating. I'm gonna try this recipe. Maria Puertos Calvo:Okay.Nic Fillingham:Sounds awesome.Maria Puertos Calvo:Let me know.Nic Fillingham:Maria, thank you again so much for your time. This has been fantastic having you back. The last question, I think it's super quick, are you hiring at the moment, and if so, where can folks go to learn about how they may end up potentially being on your team or, or being in your group somewhere?Maria Puertos Calvo:Yes, I am actually. Our team is doubling in size. I am hiring data scientists in Atlanta and in Dublin right now. So, we're gonna be, you know, a very, uh, worldly team, uh, 'cause I'm based in Seattle. So, if you go to Microsoft jobs and search in hashtag identity jobs, I think, uh, all my jobs should be listed there. Um, looking for, you know, data scientists, as I said, to work on fraud and, and cyber security and it's a... It's a great team. Hopefully, yeah, if you're... If that's something you're into, please, apply.Nic Fillingham:Awesome. We will put the link in the show notes. Thank you so much for your time. It's been a great conversation.Maria Puertos Calvo:Always a pleasure, Nic. Thank you so much. Natalia Godyla:Well, we had a great time unlocking insights into security, from research to Artificial Intelligence. Keep an eye out for our next episode.Nic Fillingham:And don't forget to tweet us @msftsecurity or email us at securityunlocked@microsoft.com with topics you'd like to hear on a future episode. Until then, stay safe.Natalia Godyla:Stay secure.
3/17/2021

Re: Tracking Attacker Email Infrastructure

Ep. 19
If you use email, there is a good chance you're familiar with email scams. Who hasn't gotten a shady chain letter or suspicious offer in their inbox? Cybercriminals have been using email to spread malware for decades, and today's methods are more sophisticated than ever. In order to stop these attacks from ever hitting our inboxes in the first place, threat analysts have to always be one step ahead of these cybercriminals, deploying advanced and ever-evolving tactics to stop them. On today's podcast, hosts Nic Fillingham and Natalia Godyla are joined by Elif Kaya, a Threat Analyst at Microsoft. Elif speaks with us about attacker email infrastructure. We learn what it is, how it's used, and how her team is combating it. She explains how the intelligence her team gathers is helping to predict how a domain is going to be used, even before any malicious email campaigns begin. It's a fascinating conversation that dives deep into Elif's research and her unique perspective on combating cybercrime. In This Episode, You Will Learn:• The meaning of the terms "RandomU" and "StrangeU"• The research and techniques used when gathering intelligence on attacker email infrastructure• How sophisticated malware campaigns evade machine learning, phish filters, and other automated technology• The history behind service infrastructure, the Necurs takedown, Agent Tesla, Diamond Fox, Dridex, and more. Some Questions We Ask:• What is attacker email infrastructure and how is it used by cybercriminals?• How does gaining intelligence on email infrastructures help us improve protection against malware campaigns?• What is the difference between "attacker-owned infrastructure" and "compromised infrastructure"?• Why wasn't machine learning or unsupervised learning a technique used when gathering intelligence on attacker email campaigns?• What should organizations do to protect themselves? What solutions should they have in place? Resources: What tracking an attacker email infrastructure tells us about persistent cybercriminal operations: https://www.microsoft.com/security/blog/2021/02/01/what-tracking-an-attacker-email-infrastructure-tells-us-about-persistent-cybercriminal-operations/ Elif Kaya: https://www.linkedin.com/in/elifcyber/ Nic's LinkedIn: https://www.linkedin.com/in/nicfill/ Natalia's LinkedIn: https://www.linkedin.com/in/nataliagodyla/ Microsoft Security Blog: https://www.microsoft.com/security/blog/ Transcript [Full transcript can be found at https://aka.ms/SecurityUnlockedEp19] Nic Fillingham:Hello, and welcome to Security Unlocked, a new podcast from Microsoft where we unlock insights from the latest in news and research from across Microsoft security engineering and operations teams. I'm Nic Fillingham. Natalia Godyla:And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft security, deep dive into the newest threat intel, research, and data science. Nic Fillingham:And profile some of the fascinating people working on artificial intelligence in Microsoft security. Natalia Godyla:And now, let's unlock the pod. Nic Fillingham:Hello, Natalia. Welcome to episode 19 of Security Unlocked. How are you? Natalia Godyla:I'm doing great. I'm excited to highlight another woman in our series for Women's History Month, so this'll be number two. And I'm excited to talk about email infrastructures. Nic Fillingham:Yes, I am too. Email, we use it every day. We probably use it more than we, we want. We love it. We can't live without it. What's your first memory of email? What was your first email address? Natalia Godyla:I was an AOL-er. 
First email was glassesgirl2002@AOL.com. I'm super proud of that one. Nic Fillingham:What's the reference to 2002?Natalia Godyla:I'm pretty sure that's when I got my first pair of glasses (laughs).Nic Fillingham:Ah. And you- Natalia Godyla:I was very excited. I threw a cupcake party.Nic Fillingham:Oh, wow. Natalia Godyla:(laughs) Nic Fillingham:So I'm, I'm pretty old. It was sort, sort of the mid 90s, and I remember like, hitting websites where it asked for an email address, and I'm like, what is an email address? Natalia Godyla:(laughs) Nic Fillingham:I probably used the internet the best part of, you know, six months before someone explained it to me. And I worked out how to get a Hotmail address, which is called Hotmail because it was actually based on the, the acronym H-T-M-L, and they just put a couple other letters in there to expand it out to say Hotmail. And I remember being, thinking like I was the bees knees, because I was nicf12@hotmail.com. Natalia Godyla:(laughs) Nic Fillingham:We should have asked our guest Elif Kaya, who you're about to hear from, about her first email address, but we didn't. Instead, we talked about a blog that she helped co-author, uh, that was published beginning of February called, "What Tracking and Attacker email infrastructure tells us about persistent cyber criminal operations." It's a fascinating conversation, and Elif walks us through all of the research that she did here where we learn about attacker email infrastructure and how it's used and created and managed. Nic Fillingham:There's a bunch of acronyms you're going to hear. The first one, DGA, domain generation algorithm. You're going to hear StrangeU and RandomU, which are sort of collections of these automatically created domains. And if you sort of want to learn a bit more about them, it's obviously in the blog post as well. Natalia Godyla:Yes, and in addition to that, you'll hear reference to Dridex. So, as the RandomU and StrangeU infrastructure was emerging, it was parallel to the disruption of the Netcurs botnet, and those same malware operators who were running the botnet were also using malware like Dridex. And Dridex is a type of malware that utilizes macros to deliver the malware. And with that, on with the pod.Nic Fillingham:On with the pod. Nic Fillingham:Elif Kaya, welcome to the Security Unlocked podcast. Thank you for joining us.Elif Kaya:It's great to be here. Thanks for having me.Nic Fillingham:Now, you were part of the. uh, team that authored a blog post on February 1st, 2021. The blog post is "What tracking and attacker email infrastructure tells us about persistent cyber criminal operations." Loved this blog post. I've had so many questions over the years about how these malware campaigns work. What's happening behind the scenes? Where are all the, the infrastructure elements? How are they used? And this blog helped answer so much and sort of joined dots. Nic Fillingham:If you are listening to the podcast here and you're not sure what we're talking about, head to the Microsoft security blog. It is a post from Feb 1st. But Elif, could you sort of give us an overview? What was discussed in this blog post? What was sort of the key take away? What was the research that you conducted?Elif Kaya:Sure. So uh, I'm part of a, a email research and threat intelligence team, uh, that supports the defender product suite at Microsoft, and what we primarily focus on is tracking email campaigns and email trends over a long period of time and documenting those. 
So, this blog post kind of came along with a series of documentation, in which we started to bubble up these trends in infrastructure, which is one of my focus areas, starting back in March and running uh, all through the end of the year, where a large series of disparate email campaigns, kind of stretching from very commodity malware that is available for like 15, 20 dollars, to things associated with big name actors, and et cetera, were being delivered with very similar characteristics, despite on the surface the malware being very different, the outcomes being very different, or the cost of the malware targets being very different. Elif Kaya:And so, we were able to see within each of these individual campaigns that the infrastructure supporting the email delivery was a consistent theme. So, it starts with when these domains that were used as email addresses to send these from, uh, started being registered to the current day and kind of what campaigns they helped facilitate, when they were registered, and et cetera. So, when people usually talk about infrastructure that supports malware, a lot of the terms get used overlapping. So, when people refer to infrastructure, they generally are referring to the C2 addresses, the callback addresses that the attacker that owns the malware owns. Elif Kaya:But what we've been seeing much more frequently, and what we wanted to explain with the blog post in really concrete ways, like you said, with actual examples, is that the malware and cybercrime infrastructure is very modular. And so, when we say infrastructure we could mean who's sending the emails from their servers, who's hosting the email addresses, who's hosting the phish kits, who's hosting the delivery pages that deliver the malware, and who's writing the malware. And then later, who's delivering the ransomware. Elif Kaya:And so these could, in any particular campaign or any particular incident that a SOC is looking at, be entirely different people. And so, the reason we wanted to do this blog and detail kind of what we did here and go through each of the cam- malware campaigns that was delivered, was to kind of show like, if you're only focusing on each malware campaign, the next one's going to be right queued up and use all the same infrastructure to deliver maybe something more evasive that, that you'll have to get on top of. Elif Kaya:And so, by doing this tracking you can kind of up level it once more, and instead of spending all your time trying to evade one particular malware strain that's going through constant development, you could put a higher focus on stopping kind of the delivery itself, which, we actually detail through the blog, was very consistent over nine months or so, but had a lot less attention focused on it. Elif Kaya:So, some of the cases that we discuss in the blog are cases like Makop, which was used very heavily, especially in South Korea, all throughout April and all throughout the spring, and is still pretty prevalent in terms of direct delivery ransomware in that region. It's usually delivered through other means, but what we saw and what we theorized is that whenever the standard delivery mechanisms for those malware are interrupted, they'll kind of sample other infrastructure delivery providers, which is what we describe as StrangeU and RandomU in the blog. Elif Kaya:We use the term StrangeU and RandomU to differentiate two sets of DGA, or domain generation algorithm, domain structures that we saw. StrangeU always uses the word strange. 
Not always, but nearly, about 95% of the time. And RandomU, we couldn't find a better name, but it's just a standard random DGA algorithm, where it's just a bunch of letters and characters. We don't really have a fancy name to give it, but we were able to kind of coalesce around what that was internally, and track the domains as they were registered there. And then, shortly after they would be registered, they would start sending mail from those domains.Nic Fillingham:Elif, were you and the team surprised by how much interconnected overlap, agility, and sharing, for want of a better term, there was across these different groups and campaigns and techniques? Were you expecting to see lots of disconnected siloed activities, techniques, groups, et cetera, et cetera? Or were you expecting this amount of overlap, which we'll get to when we sort of explain the, the stuff in the blog?Elif Kaya:So, I think it was less that it was a bit of a surprise, and more that we don't often get a pristine example like this. Frequently, when we look at the connected infrastructure, they don't use domains necessarily. They'll use the botnet itself and IP addresses for delivery or other things. So, when we came across this one, we do normally handle and really do a deep dive in individual incidents and cases, so this was a little bit more of a unique example of like, hey, there's really clear patterns here. What can we learn by tracking it over a long period of time, in ways that other metrics are a little harder to track?Elif Kaya:But yeah, I, I would say that in general, most email campaigns and phishing campaigns, malware campaigns that you kind of run across, they are gonna have these threads of interconnectivity. They're just going to be at different levels. So, whether that's going to be a level that is kind of more visible for uh, blue teams like the email addresses, the domains themselves, or whether that's going to be something more ephemeral like IP addresses and hosting providers, or whether that's going to be something that's proxied even more so, like a cluster of compromised domains, similar to, to, you know, what Emotet uses, uh, or used to use, collected in a botnet that has a different way of clustering itself. Elif Kaya:And so for these, we were able to just kind of have something that bubbled to the top and made it easy to connect the dots, as well as other items in the header in the malware that we were able to identify. But I think through tracking this, we were able to kind of reaffirm and make a good public example for blue teams that this is a very common method. This is a very common modular technique, Elif Kaya:... And it's very simple for attackers to stand this kind of thing up and offer their services to other places. And that's part of why we reference the Necurs botnet as well. Dridex makes a big appearance in the StrangeU and RandomU deliveries, especially later on in our tracking of them, and Dridex is also a prominent, um, delivery from a lot of other of these types of delivery botnets, whether that's Cutwail or, uh, Necurs or other, um, botnets like that. 
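One way to picture the StrangeU and RandomU split Elif describes above is a small triage script. The following is a minimal, hypothetical Python sketch, not the team's actual tooling: the "strange" token check, the length and entropy thresholds, and the sample domains are all assumptions made for illustration.

import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    # Rough character-level entropy; random DGA labels tend to score high.
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def triage_sender_domain(domain: str) -> str:
    # Very rough triage inspired by the StrangeU / RandomU split described above.
    label = domain.lower().split(".")[0]  # left-most label only
    if "strange" in label:
        return "strangeu-like"
    vowel_ratio = sum(ch in "aeiou" for ch in label) / max(len(label), 1)
    if len(label) >= 10 and (shannon_entropy(label) > 3.2 or vowel_ratio < 0.2):
        return "randomu-like"
    return "other"

# Hypothetical sender domains, loosely modeled on the patterns discussed in the episode.
for d in ["strangepilotmail.us", "qkzvtrwplmx.us", "contoso.com"]:
    print(d, "->", triage_sender_domain(d))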
So it, it's very common but it's sometimes very hard to kind of key in on all of the distinct components of it and evaluate like, is it worth it in this instance to key in on it, um, when our main goal is like, what is the most effective thing we can do to stop the deliveries?Natalia Godyla:I'd love to talk a little bit about the history that was described in the blog for the service infrastructure. So from what I understand, the Necurs takedown created a gap in the market where StrangeU and RandomU were able to step in and provide that in- necessary infrastructure. So why was that the replacement? Was there any connection there? And as a second part to that question, what does the evolution of these infrastructures look like? How are they accessible to operators that want to leverage them?Elif Kaya:Right. So in this one I can delve a little more into kind of just intuition and, and doing that, because my full-time role is not specifically to, you know, track all the, all the delivery botnets there are and active. The reason that we made the connection to Necurs wasn't because there was an actual connection in terms of affirming this is filling the same role that it was, or this is filling a hole. Because we don't have necessarily a clear picture of every delivery botnet there is. Because the timeframe was very close and because we were able to see shortly after, uh, StrangeU and RandomU started delivering, they initially only had pickup from commodity malware that we could find. So very cheap malware for the first few months of their delivery, such as Makop. Uh, we saw some Agent Tesla, we saw some Diamond Fox.Elif Kaya:But as it progressed on, it started picking up the bigger names like Dridex and doing larger campaigns that were more impactful as well. And so by the time that Necurs had ended, we had also seen them doing a lot of those bigger name malwares as well. And so the reason why we tried to make that comparison was largely to show that something very simple and kind of perhaps much less sophisticated and lasting for a lot less length of time than Necurs in the environment can get customers quickly. And so while we didn't do a deep dive into any of the amount of like, how is it being advertised, how are they getting the customers, what we wanted to show is that regardless of what methods they're using to get the customers, they're able to get-Elif Kaya:Basically the, the amount of research that was done for Necurs was much more in depth than the amount of research that was necessarily done here. And it was also done from a different angle, that angle was much more operator focused and our angle was much more, what was delivered, what was the impact, what were the trends between all of the different mails? And so we're mostly trying to just position it as, this fulfilled a similar, uh, outcome and got a lot of coverage of something that was very big, lasted for a very long time, many years, and something where somebody just started registering some domains, setting up some mail servers, was able to kind of get off the ground and running in just a few months for relatively low cost.Nic Fillingham:So, Elif, we normally start with an introduction, or, I, I got so excited about this topic that I jumped straight into my first question and I didn't give you an opportunity to introduce yourself. And I wondered, could you do that for us? 
I know you're, I believe you're a threat analyst or a threat hunter, is that correct?Elif Kaya:Yeah, so I'm currently a threat analyst, and you've actually had other people, I think, from my team on here already before. But yeah, I, I'm a threat analyst at Microsoft. I've been on this particular team for about a year now, specifically focusing in email threats, web threats, and I do have especially some focus in infrastructure tracking and domain, uh, generation algorithms in general and trying to make sure that our emails and campaigns that we're tracking are properly scoped and that we're able to kind of extract as many TTPs as we can from them. Elif Kaya:And so the role of our team and the role of myself in particular on the team is, when we do these individualized campaigns we look for the IOCs and things like that in it. We scope it, but what we're really looking for is, um, the trends of what's happening so that we can kind of try and pinpoint and escalate to the other teams internally the most impactful changes we could make to the product, or the most impactful changes we could recommend that customers do, if it's something that we don't have a product for or we don't have a protection for, in order to protect against the campaign. And so in this particular instance with this infrastructure, our goal here was to kind of really reiterate to customers that despite all this complexity, the spaghetti-like nature of this, at the end of the day all these different campaigns used kind of a lot of the same both delivery to deliver the email, but the Word documents that they delivered were also very similar.Elif Kaya:There, there were a lot of configurations that can be made on the endpoint to kind of really nullify a lot of these campaigns despite what we were able to see and some really evasive techniques that they were developing, the malware operators, over the time.Nic Fillingham:Yeah, I, I wonder if you could talk a little bit about how the research was actually conducted. A lot of these domains were not hosted by Microsoft infrastructure, as I, as I understand it. I think you sort of cover that a little bit in the blog. So how do you as a, in, you know, in your role, how do you go about conducting this research? Are you setting up honey pots to try and, uh, receive some of these, these emails and just sort of be a part of the campaign, and then you, you conduct your analysis from there? What, how do you go about, uh, performing this research?Elif Kaya:So the bulk of the research I think is performed with various, like some of it is honey pots and some of it's that. A lot of the research that is covered in the blog after we, uh, analyze the malware campaigns, which is a service we offer through, um, MTE, which I think there have been people from MTE that have come on as well, as well as analysis that we do, again, based on, uh, the malware samples that we receive and the email samples that we receive from reports, from externally as well as from open source intelligence. A lot of the domain research here, though, is actually done from, uh, open information. So any domain registrations that there are, the registration fingerprint, as I like to call it, which is all the metadata related to the registration, is publicly available. And so we collect a lot of that information and search it internally. 
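For readers who want to experiment with the kind of registration-fingerprint pivot Elif describes, a minimal sketch might look like the following. It assumes the third-party python-whois package and treats the registrar plus registrant email as the fingerprint; the exact fields you get back vary by registry and TLD, the domains shown are hypothetical, and this is not the internal tooling her team uses.

from collections import defaultdict

import whois  # third-party "python-whois" package; available fields vary by registry and TLD

def registration_fingerprint(domain: str) -> tuple:
    # Pull a few public registration fields to use as a crude pivot key.
    record = whois.whois(domain)
    emails = record.emails if isinstance(record.emails, list) else [record.emails]
    return (record.registrar, tuple(sorted(e for e in emails if e)))

def cluster_by_fingerprint(domains):
    # Group suspicious sender domains that share a registrar and registrant email.
    clusters = defaultdict(list)
    for d in domains:
        try:
            clusters[registration_fingerprint(d)].append(d)
        except Exception:
            clusters[("lookup-failed", ())].append(d)
    return clusters

# Hypothetical domains for illustration only.
for fingerprint, members in cluster_by_fingerprint(["strangepilotmail.us", "qkzvtrwplmx.us"]).items():
    print(fingerprint, "->", members)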
Elif Kaya:And this is always something that I like to advise and encourage blue teams at any particular organization, you know, if they have a little bit of extra funding, to try and invest in as well. Because it's definitely, even though it's free and publicly available, you're generally gonna have to get a subscription or set up some kind of collection order to query the "who is" databases and the passive DNS databases that you'll need in order to do some of these pivots. But it kind of starts with finding the malware campaigns and then finding the emails, and then pivoting up towards everything else we can do. And once you have kind of a net of what you're looking for, sender domains and et cetera, you can then kind of go backwards and say, "Okay, now show, show me all the malware campaigns that we have investigated that, that have these components to them. Show me all the phishing campaigns that have these components to them."Elif Kaya:And so it's kind of going up and then going back down, but all clustered around that registration data and that domain data. Uh, because whether an attacker decides to use IP addresses or whether they decide to do domains, there's usually always some component of their campaign that they have to use attacker-owned infrastructure for, if that makes sense. We see a lot and it's very common for attackers to u- use compromised infrastructure, so WordPress sites, things like that, to host a lot of their architecture. But especially for things like C2s for mail delivery and other things, they're gonna want some resilient infrastructure that they'll own themselves. And so at what point in the chain they decide to do that is usually an opportunity for us to be able to see if there's any OPSEC errors on their part, and also see if they've conducted other campaigns with that same infrastructure. Yeah, and so differate- differentiating between attacker-owned infrastructure and compromised infrastructure is an additional critical component.Natalia Godyla:Now I'm trying to decide which question to go forward with. Can you describe the distinction between those two?Elif Kaya:Right. So attacker-owned infrastructure would be something the attacker sets up themselves. So they have to think of the, and populate the data in the domain address and the registration and the tenant themselves. So this encompasses both when attackers use free trial subscriptions for cloud services, it's whenever they go log into Namecheap and they register their own domains, as well as when they have dedicated IP hosting or bulk group hosting as well that they have decided like, "For this portion of my campaign," whether that's command and control, whether that's delivery or et cetera, "I need to make sure that I'm in control of this." We have seen examples where compromised infrastructure, which is the reverse of that where especially small businesses, parked domains, and other insecure WordPress sites, sites that have other types of vulnerabilities, will be compromised and used to, again, do any, any component of that kill chain, whether that's sending mails, hosting the malware, and will be used to do those things as well.Elif Kaya:So compromised infrastructure is when the attacker will utilize someone else. 
The benefit for attackers is it's definitely a lot harder for defenders to identify or take action against that, especially because they don't know how long it'll be compromised for, if it'll ever not be compromised, if the attacker's only leasing access to the compromised domain through a, a kind of, uh, cyber crime as a service provider or not. It becomes harder for the defenders to defend against and detect, because it has less points of contact and familiarity with other compromised domains. If somebody compromises a blog about kittens and a blog about race cars, it's gonna be pretty hard for a lot of things to pick up exactly what's similar about them, because some Elif Kaya:... other human, worlds apart, has made the whole blog, but if one attacker has-Nic Fillingham:Probably Natalia Godyla. Elif Kaya:... made five to 15 different sites in a day. (laughs) Yeah, it's a, it's going to have a lot more in common. But the downside of compromised domains for attackers is, a, they often have to lease them from the people that initially compromised them, and b, those compromised domains could become uncompromised; they have to now maintain access to something they didn't make. And we did also see that with Emotet, over the summer when it had come back after being quiet for very long, and people had replaced their payloads on compromised sites with, uh, I think GIFs with cats, something like that. We're back to cats.Nic Fillingham:You're speaking our language here, like we're, we're, we're on the edge of our seat, you said cat like twice in like a minute.Natalia Godyla:(laughs)Elif Kaya:But when an attacker bases a lot of their infrastructure on compromised infrastructure, other attackers could compromise it, defenders could compromise it, anyone can kind of... They have to now protect it, whereas if they made it from nowhere and no one owned it, except for them, it's kind of a lot easier for them to just hang out. Because then the kind of only person that's looking out for them a lot of the time, is if somebody is connecting the dots on the infrastructure or the hosting providers, like I think the ones that we cover here is like, IronNet, Namecheap, et cetera, if they're looking out for somebody hosting on their, their infrastructure. But if somebody is just sitting there, they're just being quiet, they're just sending mail, nobody's going to notice that they're compromised probably. Whereas if you're a small business owner and your site ends up on a block list, you're going to go start asking questions, you're going to start trying to get that fixed or take your site down.Nic Fillingham:Elif, I'd love to come back to what you talked about with the way that you conducted this research and you, you, you said that getting subscriptions to WHOIS services and DNS records, this is all public record. But there are still some tools required to parse through that information and, and create the pivots. We were talking offline, before we started recording, I'll paraphrase here and please correct me, that you didn't really utilize machine learning as a tool to discover these techniques. Is that, is that correct? 
Can you talk more about what techniques you did use and didn't use and why something like machine learning or unsupervised learning was either not necessary in this space or wasn't necessary to discover these techniques?Elif Kaya:Yeah, I mean, I could talk to the, the techniques that I used and well, I can't say explicitly like why machine learning would or would not be helpful here because I'm not an expert on machine learning. I think in the different campaigns that I've worked on in my career in security, whether it's this one or before I came to Microsoft, I did some more independent research on a large set of Chrome extensions that were also connected by various, uh, commonalities to get those taken down. A lot of this research that can be pretty impactful and pretty widespread doesn't require ML in order to parse and to navigate. And I think part of the reason that ML is a bit unsuited for this at the moment, is because there hasn't been as much manually focused research. And there's been a lot of research done by independent researchers and people in the security community but I have seen a lot less focus in terms of data from tech companies in doing and making publicly available some of this infrastructure-focused research. Elif Kaya:And so what I mean by that is that a lot of security companies focus a lot on the actor name. They focus a lot on the reverse engineering of the malware and those are critical components. In part because that's what the products that they're sometimes selling are, AV services and things like that, and that's the point in time that they are protecting against the threat. But when it comes to the infrastructure, companies that would be the most positioned to protect against that threat or have products to protect against that threat, aren't necessarily doing the manual body of research currently necessary I think, in order to guide ML to kind of identify this work. And so right now to say, "Oh, would this be something that ML would be suited to step in?" Elif Kaya:And I think that it could in the future be suited to step in slightly but I also think that the way that this works, is currently operating at a level that actually does benefit from, from manual analysis at this time. In part, because it, it doesn't actually take tools that are generally above or beyond what is in a lot of analyst tool sets with basic scripting and things like that. Because right now there has been such a lack of focus from security companies and blue teams, I think, on infrastructure and infrastructure commonalities and the way that these campaigns are so modular that, for lack of a better word, there's not a lot of sophistication in it. Most of the sophistication we see in these campaigns is designed to evade automated technology. They're designed to evade ML. They're designed to evade phish filters. They're not really designed to evade humans looking at them, because I think you and me looking at those strange new domains, like you can look at a cluster of them and be like, "These aren't real sites, they're not real."Natalia Godyla:(laughs)Nic Fillingham:Yeah. I'm not, I'm not going to visit a website called, I'm gonna pick one up here like, eninaquilio.u... Maybe I would actually, that, that looks really cool. (laughs) Okay, gonesa.usastethkent, it's got like no vowels, like he replies strange secure world.Elif Kaya:And so we don't actually see a lot of, I guess, advancement in that space from attackers. 
A lot of the advancement is there in different parts that aren't necessarily bubbled up, but it's happening in the malware itself, in order to evade AV, in order to not get alerts that fire on them. It's not necessarily happening to use something other than a macro or send from something other than an obvious phishing email or an obvious phishing source. And a lot of times, uh, one of my favorite parts is these, these registrations frequently use the .us domain. Many top level domains actually prohibit different parts of obfuscation for the registration record. And so when you register a domain, obviously the attacker kind of doesn't want to use real data, it's not the real name. But they'll use like memes and other things in the registration information, because it's fake data but then you can go and pivot and find where they've used the same meme before. And so-Nic Fillingham:Look for old domains registered by Rick Astley. Natalia Godyla:(laughs)Elif Kaya:Yeah, I think there was one-Nic Fillingham:You might be too young for that, me and my friend-Elif Kaya:There was, there was one that I think was used, I forget for which one of these malware campaigns, where a lot of the registrations were actually happening under a registered email that was something like, hiIhateantiviruspleaseleavemealone@gamer.com or something (laughs) or like, youcan'ttakethisdown.com. And I was like-Nic Fillingham:Try me. Natalia Godyla:Challenge accepted.Nic Fillingham:It's like a big red, a big red arrow pointing at them. Elif Kaya:What is happening in the infrastructure space for a lot of these things is happening pretty rapidly, it's happening at pretty low costs. And it's also happening and looking a lot different and is in a way a lot less glamorous than a lot of the reverse engineering that is necessarily done, but it's very critical. Or the more nation state tracking that is, uh, very popular when companies are selling threat intelligence products to customers. But when it comes to like security, kind of in a SOC, a lot of what is going to get through the doors is regular phishing emails.Natalia Godyla:So if the campaigns are targeting the automation that's built in, like you said, the phishing filters, what should organizations be doing to protect themselves? What solutions should they have in place, processes?Elif Kaya:So some of the big things that I remember from these particular campaigns, um, is if you are rolling any kind of mail protection service or mail service in general, please periodically check your allow lists. The allow lists will frequently have entire IP ranges, entire domain ranges, and so even domains like these ones that are very randomized and strange and that you've never received an email from before in your life. Sometimes the configurations of your allow lists for emails can completely cause the mails to bypass other filters. So definitely whether you're running Microsoft for your mail protection or not, please periodically check your allow lists and your filters and kind of have a good understanding of like, do I have any instances where phishing or malware would bypass other protections? Have I set that up? So that's one thing that I think does cut down a lot on some of these making it to inboxes.Elif Kaya:And the other, as we... And part of the reason why we highlighted each of the malware campaigns involved here is, uh, the suite of... 
I always forget the acronym, ASR rules, attack surface reduction rules or configurations that Microsoft offers for Office, in particular for macro executions and malicious Office executions. Routinely, outside of this blog and others, it's still Office Word documents, it's still Office Excel documents, it's still macro buttons. And so re-evaluating your controls there and your protections there, especially looking at some of the automatic configurations that we have available now to just turn on, that is going to help there a lot as well. Those, I think, are the two biggest like controls that I would recommend to people for these kinds of items: checking kind of your allow lists pretty periodically and what your filtering policies are, and checking, specifically if you are using Office 365 internally, whether you have configurations set up to not necessarily even just restrict, but there are more granular configurations now that you can set up to specifically restrict DLL and other execution from Office macros as well.Nic Fillingham:Elif, in the section of the blog where it talks about the Dridex campaigns, big and small, June to July and beyond. It reads here that it feels like you uncovered a section of sort of experimentation and testing of sort of new techniques. There's references to Shakespeare, there's something I've never heard of called VBA stomping. Can you talk a little bit about what kinds of experimentation and creativity that you stumbled upon as part of this research? First of all, and what is VBA stomping?Elif Kaya:Uh, so VBA stomping, I think we might've actually meant VBA purging in the blog. I'm trying to remember Elif Kaya:...whether, I think it might've been VBA purging, but surprisingly VBA stomping and purging are separate, but they fulfill the same kind of function, which is to try and make that macro, that like spicy button that everybody wants to press, a little harder for malware detection engines to detect. So VBA stomping and purging both operate a little bit differently, but their main goal is to kind of obfuscate the initial VBA code from the actual malicious code in general. So that when antivirus engines try and examine it, they're going to see all that Shakespeare text and they're not going to see the malware. And as for the Shakespeare text, (Laughs) it's actually still on VirusTotal. I think if people go and check for any of the files that reach out to bethermium.com, and DFIR, the blog, did a great writeup called, I believe, "Dridex - From Word to Domain Dominance," which actually covers in their sandbox what happened after they ran this doc. Which eventually moved to PowerShell Empire attempts within their sandbox. Elif Kaya:But yeah, as far as I can tell from the Shakespeare use for this, it's actually not the first time that poetry (Laughs) and kind of Shakespeare has been used to obfuscate malware. There have been other RATs in the past that have used this. Uh, we couldn't find any similarity like this, this was not those. But oddly enough, there is occasionally every now and then poetry or Shakespeare, other things that are used as obfuscation techniques to kind of pad out documents. And in this case, what we actually found is every iteration of the Word document that we could find had all of the functions and pretty much all the code within the document replaced by different random lines. Elif Kaya:So there wasn't actually any contiguous lines within it. 
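As a purely illustrative aside on the padding technique being described here, a defender could prototype a crude heuristic that measures how much of an extracted macro reads like prose rather than VBA. The keyword list, the sample macro, and the simple ratio below are assumptions made for demonstration; real macro analysis tooling does far more than this.

import re

# A few common VBA tokens; purely illustrative, nowhere near a real VBA grammar.
VBA_TOKENS = re.compile(
    r"\b(Sub|Function|Dim|Set|If|Then|Else|For|Next|Call|CreateObject|Shell)\b",
    re.IGNORECASE,
)

def prose_padding_ratio(macro_source: str) -> float:
    # Fraction of non-empty macro lines with no recognizable VBA tokens.
    # Heavily padded macros (lines of Shakespeare between the real code) tend to score high.
    lines = [ln.strip() for ln in macro_source.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    padded = sum(1 for ln in lines if not VBA_TOKENS.search(ln))
    return padded / len(lines)

sample = """Sub AutoOpen()
' To be, or not to be, that is the question
' Whether 'tis nobler in the mind to suffer
Dim payload As String
' The slings and arrows of outrageous fortune
Call Shell(payload)
End Sub"""
print(round(prose_padding_ratio(sample), 2))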
So if you looked at two docs, one might have some lines from Hamlet, one might have some lines from some other kind of literature document as well. But I imagine that it was more so just additional stuff to make it so, if you're looking for a function in this document, it's gonna look different in this one. If I had to guess, I would say it's probably something similar to an actual defensive technique that we, we being, I guess, myself-Nic Fillingham:(Laughs)Elif Kaya:...had given a few talks on at conferences before. I believe Polyverse, the company, um, coined the term polyscripting, where each iteration of something is gonna have a different function name and different code. But internally, um, the interpreter is going to still interpret it, even though it's random text externally. In the case of Polyverse and polyscripting, that's in order to help protect WordPress sites from easy exploits. But in the case of the Shakespeare document, probably to protect against easy YARA rules and things to detect their code. Don't click the spicy button. (Laughs) Nic Fillingham:Elif, what do we know about these domains that have all been identified? The StrangeU, the RandomU, are they still active? Have they been shut down? Do they get sent back to the DNS registrar? What's the process? What does it look like?Elif Kaya:So we have made sure that at least on our end, in terms of our products, that these domains and any new iterations of them, of these particular strains that we identify, are blocked, as well as the malware we cover in the report. Those are within our products. As for the domains, because they're not hosted on Microsoft infrastructure, we kind of report them and that's, that's about as much as we can do in terms of their activity. I have no doubt that the operators behind this will probably just create a new strain, but it is also not necessarily set in stone that the operators behind RandomU and StrangeU are the same operators. It could be that they are just operating in a similar kind of space and time to fulfill similar functions. Elif Kaya:There were a few campaigns where they both sent the same campaign, which lends a bit of credence to them potentially being at least similarly operated, but nothing concrete. So it is very highly likely that, that they'll just continue to operate under new strains. Uh, and probably the next strain that they'll have will either be more of these, uh, or they'll create a new one. And by a new one, I mean, instead of the word strange, maybe they'll use the word, I don't know, doc.Nic Fillingham:How about cat? Elif Kaya:Could be cat.Nic Fillingham:Or has that been exhausted? Elif Kaya:It could be cat. We haven't exhausted the number of cat domains that there could be. Nic Fillingham:So it sounds like, uh, you know, one of the things you said in the blog, and I think you mentioned it earlier, that paying attention to infrastructure can actually allow, uh, defenders, SOCs, blue teams to get ahead of a new campaign if a campaign is leveraging existing infrastructure. And so is that the takeaway from this blog post for those folks listening to the podcast right now and reading the blog, is your one sentence takeaway here, like pay attention to infrastructure? Don't forget about the infrastructure? Is that, is that sort of what you'd like folks to come away with? Elif Kaya:Yeah, absolutely. 
And that's kind of my secret wish with the blog and my secret wish with most of the work that I do, is that it'll make defenders and blue teams focus less on the glamour and less on the kind of actor attribution and more on what is working right now. What do I need in my environment? What do I not need in my environment? And one of the key points I'll hone in on in order to kind of demonstrate that is these .us domains. .us is a, a t- top level domain frequently used, uh, maliciously, but it's also frequently used for reasonably good purposes. What some of our tracking internally shows, and tracking that I've done before I came to Microsoft, is that attackers have trends of top level domains that they prefer to use from month to month. Certain malware strains like using some top level domains over others for a variety of reasons. Elif Kaya:But if you are running a SOC and you are running a blue team, get kind of creative about how you can take different steps to either monitor, track, or block infrastructure that is unnecessary to your organization. Not to impede or cause any kind of interference with productivity, but to kind of keep an eye on attacks and trends that you don't know about yet. For example, .su domains or .icu domains, uh, you might not have almost any benign presence for those in your environment. And so you might want to create custom alerts or custom rules to say like, "Hey, if I see this, maybe this could be the next malware campaign that Microsoft or somebody else hasn't written about but I'm a target of." And so kind of get creative about that, uh, especially if you have those kinds of capabilities within your network to filter on mail that comes in or mail that comes out. Natalia Godyla:So just stepping away from the blog for a minute, what about yourself personally speaking, what are you most passionate about in your work right now? What are you looking to achieve? What is your big goal, I guess? Elif Kaya:So for myself, and the reason that I, I'm still kind of in this field and at Microsoft doing the job that I'm doing right now, is, I, I would really like to use these kinds of examples to bubble up what blue teams that have less funding, that are less glamorous, and individual people can use in order to find threats. So I really want to try and shift the focus away from big groups or big actors or attribution and more towards what I consider the end goal for security, for me, which is how can I stop people from getting impacted. And so for myself and my own passions and interests in security outside of just what I do for work, I'm very focused on web security and browser security, I think there is a big gap there that a lot of people don't focus on, as well as consumer security. Elif Kaya:A lot of these issues that we see consistently pop up over and over again kind of happen in part because of a lack of focus on consumer security. And by consumer, I kind of mean individuals, non-corporations, or small corporations. And so kind of the lack of focus on that leaves a lot of people with the knowledge, but without the tools and resources easily available in order to kind of set themselves up for success. That's kind of the state of compromised websites that are used for botnets and et cetera. 
Right now, as well as, you know, privacy and security issues that individual users face in their regular day-to-day life with browser extensions and et cetera, where a lot of times browser extension research and browser research in general might get deprioritized due to its focus on individual consumer privacy versus things like malware, which focus a lot of the time on enterprise. Elif Kaya:But at least from my perspective, I'm very passionate about malvertising and, and the ways the advertising and web security and email security kind of coalesce around using a lot of the success that they have on individual people in order to leverage those attacks against bigger corporations later. That's where I like to focus a lot of my energy and research. Nic Fillingham:Uh, Elif Kaya, thank you so much for your time and thank you for, uh, contributing this great blog posts and helping us wrap our heads around email infrastructure. Elif Kaya:Thanks for having me. Natalia Godyla:Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode. Elif Kaya:And don't forget to tweet us at msftsecurity or email us at securityunlocked@microsoft.com with topics you'd like to hear on a future episode. Until then stay safe. Natalia Godyla:Stay secure.
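As a short illustrative coda to the defensive advice in this episode, the sketch below shows, under assumed inputs, the two habits Elif recommends: alerting on sender domains from top-level domains your organization rarely sees, and auditing allow-list entries broad enough to let whole ranges bypass other filtering. The TLD set, the "broad entry" markers, and the sample data are hypothetical and would need tuning to your own mail flow.

# Illustrative defaults only; tune to what your own mail traffic actually looks like.
RARE_TLDS = {"su", "icu"}
BROAD_ALLOW_MARKERS = ("*", "/8", "/16")

def rare_tld_alerts(sender_domains):
    # Yield sender domains whose top-level domain rarely appears legitimately for this org.
    for domain in sender_domains:
        if domain.rsplit(".", 1)[-1].lower() in RARE_TLDS:
            yield domain

def broad_allowlist_entries(allow_list):
    # Yield allow-list entries broad enough to let whole ranges bypass other filtering.
    for entry in allow_list:
        if entry.startswith(".") or any(marker in entry for marker in BROAD_ALLOW_MARKERS):
            yield entry

print(list(rare_tld_alerts(["newsletter.contoso.com", "qkzvtrwplmx.icu"])))
print(list(broad_allowlist_entries(["partner.example.com", "*.example.net", "10.0.0.0/8"])))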