
Ep11 | Cybersecurity Challenges in Centralized Cloud Systems: Lessons Learned from the Microsoft Outlook Breach

Air Date: July 20, 2023
 

 

Listen to industry experts as they discuss the attack on Microsoft's cloud-based Outlook email systems by a China-based hacker group, which resulted in the theft of a cryptographic key and unauthorized access to multiple Microsoft customer accounts, including government agencies. The breach raises concerns about relying solely on centralized cloud systems for cyber protection. This sobering event emphasizes the importance of embracing incremental, data-centric security controls so that, in the event someone gains unauthorized access to your systems, they won't automatically have access to your most sensitive data, too.

Rob McDonald: Hey everybody. I really appreciate you joining us today. I'm really excited to dig right into a very recent topic that is important for us all to discuss, and that is the recent Microsoft Outlook breach and the cybersecurity challenges and implications of it. I am accompanied today by Michael Wilkes and David London. Michael is an adjunct professor at NYU in the cybersecurity area and also a technology pioneer who has worked with the oil and gas and quantum groups at the World Economic Forum. So, excited to have you with us today, Mike.

Michael Wilkes: Thank you.

Rob McDonald: Yeah, absolutely. David is the Managing Director, Cyber Security at the Chertoff Group, with 10 years at Booz Allen working with DoD, civilian agencies, and commercial clients in the cyber war gaming and preparedness areas. A lot of ground covered there in one statement. I'm sure that was a fun time, and it's going to give us a really unique perspective today. Thanks for joining us.

Rob McDonald: Awesome. So guys, maybe we can just kick this off right in the deep end, right? Let's start with what has happened. Go through the lineage of the events to frame the conversation today around implications and beyond. Who wants to start us? Mike, you want to start us off with how this all started?

Michael Wilkes: But real quick, for those that are tuning in for the first time, this is the inestimable Rob McDonald, a value philosopher and privacy advocate, leading us through this.

Rob McDonald: Yeah, Mike. I appreciate that. I'm SVP of Strategy and Field CPO at Virtru. The least important person in this room but I do appreciate that Mike. But yeah.

Michael Wilkes: Who knows? You might get some new visitors for this cast, for this webinar. Yeah, I took a look; thankfully, I was able to ignore the early releases and impacts and analysis of this breach. But apparently on, what, June 16th, I think, the U.S. Department of State, according to CNN and the Washington Post, observed something called a MailItemsAccessed event acting kind of strange on their platform, and they saw some strange client app IDs coming in through their Microsoft 365 logs. Number one, immediate observation right there: somebody turned on an interesting log that was not default, and an interesting event which is also not default. So whoever that smart monkey is at the U.S. Department of State, kudos, you are doing a killer job. But moving on to the play-by-play.

Michael Wilkes: So the agency identified a whole bunch of access coming in that didn't match the normal, monitored kinds of behaviors. It's an APT, of course; it's believed to be China. Microsoft's classification for them is Storm-0558, and they believe this started May 15th, which means the Chinese were reading the Department of State's emails for at least a month before they were detected and expunged. But maybe they haven't been expunged, and that's the interesting thing to talk about later. Maybe there was more compromise we don't know about yet. They said about 25 organizations, nine or ten or so of them in the U.S. Very targeted is what this means, because if you crafted a key that allowed you to basically pull every email from every user in these organizations, plus their personal accounts, you have a big breach. And this is not a forged key. I think we talked about that as we were getting ready for this call. It's not forged if it was properly signed by a legitimately issued key; it was stolen, and forged and stolen, I think, are a little bit different. And then there was a bug in the validation and a lack of separation of environments that let this turn into a pretty big infiltration. And prior to today, Microsoft did not give you access to these logs unless you paid through the nose for an E5-level subscription at, I think, something like 57 dollars per user per month. But thankfully Microsoft is getting into a position of saying, hey, CISA's kind of going to beat us up, because best practice is to log your stuff, to look at your logs, and to have anomaly detection on your logs. And so that's my Reader's Digest summary.
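
A minimal sketch, in Python, of the kind of check that surfaced this activity, assuming you have an export of Microsoft 365 unified audit records in JSON Lines form. The field names (Operation, AppId, ClientAppId, ClientIPAddress, UserId) follow the commonly documented audit schema but should be verified against your own export, and the baseline AppId list here is a hypothetical placeholder:

```python
import json

# Hypothetical baseline of client AppIds normally seen in this tenant;
# real baselines would be learned from historical audit data.
KNOWN_APP_IDS = {"app-id-outlook-desktop", "app-id-owa"}

def flag_novel_app_ids(audit_export_path):
    """Scan exported unified audit records (one JSON object per line) and
    flag MailItemsAccessed events whose client AppId is not in the baseline."""
    suspicious = []
    with open(audit_export_path) as fh:
        for line in fh:
            rec = json.loads(line)
            if rec.get("Operation") != "MailItemsAccessed":
                continue
            app_id = rec.get("AppId") or rec.get("ClientAppId")
            if app_id and app_id not in KNOWN_APP_IDS:
                suspicious.append(
                    (rec.get("UserId"), app_id, rec.get("ClientIPAddress"))
                )
    return suspicious

if __name__ == "__main__":
    for user, app, ip in flag_novel_app_ids("audit_export.jsonl"):
        print(f"novel AppId {app} accessed mail for {user} from {ip}")
```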

Rob McDonald: Yeah, Mike, that's great. And you touched on a point there, Mike, that I don't want to gloss over before we dig into the technical pieces. This is news: the logging depth is going to be opened up to these broader tiers, which is great. Obviously it's happening because of the pressure and the implications of this, but it is happening, and David, I'd love your two cents on them doing that and what it means for everybody.

David London: So, just to add to Michael's great summary of what I would call a very complex incident: this is a weaponization of the technology supply chain. It's certainly not the first time we've seen that, where threat actors can expand the blast radius by exploiting the concentration risk that practitioners are now encountering as the industry further consolidates. I think you're right, there was a loud chorus among practitioners and technologists that we shouldn't be withholding logs purely to increase revenue and margins. I think Microsoft has heard that call, making their Purview Audit premium-level logging, and the fidelity and availability that comes with it, available to standard or E3 users instead of just E5 users. I will say there's still a little opacity around how the private key material was stolen from Microsoft and then exploited initially. In that vacuum of information, some alternative theories have arisen that I've been trying to track; whether or not they have much credence is still unknown. One is that it wasn't Microsoft itself that was exploited, but legacy Outlook infrastructure where private keys are actually held on servers on-prem, so it was actually the U.S. government agencies that were compromised in addition. And then there's the real doomsday theory, that Microsoft's actual certificate authority was subverted. Obviously, that would have substantial implications for trusted computing across the industry. We don't know if these theories hold any water, but the idea is that in the absence of a true understanding of the initial vector and access, a number of alternative views are beginning to emerge.

Rob McDonald: Yeah, I think you're right, and we will know more in the future, obviously, as these things evolve. But I think you two did a really good job of going through the categories of things that could have happened, or probably did happen, to necessitate this, and none of those buckets are really great, right? Whether it was a directly targeted, successful event against Microsoft, or something resulting from on-prem infrastructure, regardless, the key material that left includes legitimate key material. Which is why, to Mike's point, when you look at all of the news it says "forged authentication," and that makes the audience feel like there was probably some kind of bug in the verification or authentication stack where that key was presented. But that's really not the case, right? That key material was legitimate.

Michael Wilkes: Yeah, I think it undermines our trust in cryptography, which is why Microsoft is trying to transfer and distract us, I think, with that choice of language. What I think is interesting is that the bad guys are using local, hyper-local compromised IPs and endpoints to do these attacks. So if they know that you log in from Atlanta, they're going to get a Comcast home device, get on that network, and exfiltrate the data to a very proximal network, a proximal IP address. So your anomaly detection cannot just be tied to geo, right? It's not like it's an IP address that resolves to .ru or .cn network blocks.

Michael Wilkes: And so that's an important element, I think, of the detection. If you want to mitigate this type of risk for any of your platforms, third-party or on-prem, you need to have those logs going into a SIEM. Those SIEMs need to be analyzing for patterns, and one of the patterns has to be novel app IDs or novel auth events that are a combination of IP address, volume, time of day, and identifier. And here, IP addresses are just going to become increasingly less interesting, because the bad guys know how to live off the land. They're going to use DigitalOcean servers in New York. They can use DigitalOcean servers in Singapore, which is what happened with Colonial Pipeline. All sorts of examples of how a typical single-factor anomaly has to become a multi-factor anomaly, and you need what's called adaptive auth in order to identify that kind of thing happening.
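
A minimal sketch of the multi-factor anomaly idea Michael describes: several weak signals (provider/ASN, client app, hour of day, volume) scored together rather than geo alone. The baseline values, field names, and threshold are illustrative assumptions, not a production model:

```python
from dataclasses import dataclass

@dataclass
class AuthEvent:
    user: str
    ip: str
    asn: str        # autonomous system / provider, not just geo
    app_id: str
    bytes_out: int
    hour: int       # local hour of day

# Hypothetical per-user baselines; in practice these come from the SIEM.
BASELINE = {
    "alice": {"asns": {"AS7922"}, "app_ids": {"app-id-owa"},
              "typical_hours": range(8, 19), "p95_bytes_out": 5_000_000},
}

def anomaly_score(ev: AuthEvent) -> int:
    """Combine several weak signals instead of relying on IP geo alone."""
    base = BASELINE.get(ev.user)
    if base is None:
        return 3  # no baseline for this identity is itself interesting
    score = 0
    score += ev.asn not in base["asns"]            # new provider, even if "local"
    score += ev.app_id not in base["app_ids"]      # novel client app
    score += ev.hour not in base["typical_hours"]  # odd time of day
    score += ev.bytes_out > base["p95_bytes_out"]  # unusual volume
    return score

event = AuthEvent("alice", "73.1.2.3", "AS14061", "app-id-unknown", 40_000_000, 3)
if anomaly_score(event) >= 2:
    print("raise adaptive-auth challenge / alert")
```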

Michael Wilkes: There's a good dozen other telemetry events that you can bring in that most people don't have configured and some providers don't even offer. One of my favorites is the telco provider behind your MFA app. If you're suddenly on T-Mobile one day and AT&T the next, that could be a SIM swap, like what happened with Jack Dorsey and Twitter, where you socially engineer someone to say, hey, I just changed my phone number, and now you capture the MFA and do an MFA bypass.

Rob McDonald: Yeah, it's interesting, and I want to touch on the level of sophistication required for the majority of the market. Before we do that, though, the area you're really describing there, Mike, is this behavioral view of telemetry: taking in enough that you can see the pattern changes in both user behavior and systemic, over-time usage behavior. It's getting more complex, because if, in this particular case, I didn't start with extracting the mail data, but started by looking at the logs of how that user uses that data, then I'm just mimicking that play by play, and I'm local, to your point. Even a sophisticated behavioral analysis couldn't show that it was different, because I'm literally replaying a day-over-day pattern of usage, a slow drain from a local location. That heightens your point: if I'm not doing a sophisticated correlation of these telemetry events, there's just no way I'm going to be able to put together a better picture of that behavior.

Michael Wilkes: Remember, Microsoft is pretty tight-lipped about some of the details, but the MailItemsAccessed events did include the app ID and client app ID, and those were novel. And so, if they can mimic you, obviously, after they capture your credential and are actually logging in as you, then the client ID, app ID, all of that is the same. But that's one of the anomalies here: it was a Microsoft service account that was coming in and just consuming mailboxes.

Rob McDonald: Yeah, and David, as we're talking about this, it's a little bit of an academic conversation, obviously, right? What should we do? You have to have a model from which to apply that to your environment based on your risk level, your budgetary constraints, and your skills constraints. But the truth is, this did require, as Mike pointed out, someone who was uniquely well positioned and smart enough to think about what they needed to be looking for, and thank goodness they existed and did what they did. How common is that in the environment today? Do the majority of these organizations have the resources and skills to do that, or is this rare?

David London: Yeah, I mean, Michael gave the props to the State Department bureaucrat or security practitioner who was able to pull this off. There's a little bit of luck in that, obviously, but they deserve a significant amount of praise. I would put it in the category of heavy is the crown, or careful what you wish for, as we think about the increased volume of logs. Kudos to Microsoft for finally provisioning and enabling that finer-grained visibility. We hope there'll be a steady march among some of the other major technology organizations, but many organizations aren't there yet.

David London: Yeah, and so I think there's a broad acknowledgment among our clients that there's a blind spot. That being said, even with the logs and the telemetry, the organizations we work with do not necessarily have the process, the technical skills, or the bandwidth to absorb them. A colleague of mine lovingly calls this the "log hell" that a lot of organizations find themselves in, the spaghetti ball of pain, because not only do you have an overwhelming architectural issue, you also have an economic model where the meter is always running. You're paying for storage and ingestion of all of these logs that you may or may not actually be enriching and using to find that needle in the haystack and create the mosaic. So there are logging strategies now being applied, frankly, by more mature organizations, where there's some middleware, like Fluentd or some other log aggregator, that enables further parsing. I just caution that as organizations are, understandably, hungry for more telemetry, more logging, and more visibility, they need to complement and operationalize that with the appropriate infrastructure and tooling, as well as the discipline and sanity to take all of that data and stitch together the indicators so that you can action an event and avoid the kind of cascading impact we saw in the Microsoft breach.
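
A minimal sketch of the kind of preprocessing a log aggregator layer (Fluentd or similar) is used for, written here as plain Python rather than any particular tool's config: drop known-noisy events, dedupe, and enrich before records hit the metered SIEM. The event names and record fields are hypothetical placeholders:

```python
import json

DROP_OPERATIONS = {"HealthProbe"}   # hypothetical noisy operations to discard
seen = set()                        # naive in-memory dedup of record IDs

def preprocess(record: dict):
    """Drop noise and duplicates, then add enrichment, before the record is
    shipped to the metered SIEM, so ingestion cost tracks signal, not volume."""
    if record.get("Operation") in DROP_OPERATIONS:
        return None
    key = (record.get("Id"), record.get("Operation"))
    if key in seen:
        return None
    seen.add(key)
    record["source_tier"] = "m365-audit"   # enrichment tag for downstream routing
    return record

def pipeline(in_path: str, out_path: str) -> None:
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            rec = preprocess(json.loads(line))
            if rec is not None:
                dst.write(json.dumps(rec) + "\n")
```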

Rob McDonald: Yeah, I think that's really interesting, David, because the industry has done a really good job of creating a transactional revenue model for ingesting massive amounts of data, and good on us for being able to do that, but ingestion was never the outcome. The outcome was actionable insights, right? So I'm curious to get both of your takes while we're on this topic: what is happening today? What do you think needs to happen, from a tech perspective or a change-in-outcomes perspective, to push these actionable insights down, or what technology can be leveraged to get those actions down-market? Clearly, up-market, a lot of these organizations have the budget and sophistication, but they represent only part of the target base, not the whole. Is any of this being pushed down and made more accessible to more organizations, or is this still an issue, in your opinions?

David London: I mean, the one thing we find some success with among our clients, as they try to bring sanity to the universe of threat actors and threat behaviors and then some level of ingestion, enrichment, and actioning, is the MITRE ATT&CK knowledge base. It doesn't necessarily work for very novel attacks; with Microsoft, for example, you're basically dealing with a zero day. But ATT&CK provides that level of enumeration and visibility into threat behavior, and even for smaller organizations there's an accessibility to it, because you're not looking at the full universe of threat actor techniques. You can focus on the threat actors and threat behaviors most likely to come after your organization, and then, for those with some level of sophistication, begin peeling back the layers around those tactics and techniques: what mitigations are in place, and what data sources and data components, the actor breadcrumbs, can you begin tuning your organization toward, so you don't miss the ability to identify indicators and create a mosaic? Again, even large enterprises are still struggling with this, but we are seeing more smaller organizations, while they may not be doing this in-house, at least keep their managed service provider honest by asking: do you use MITRE ATT&CK? Are you leveraging that knowledge base to make sure we're safe and secure?
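
As a rough illustration of using the ATT&CK knowledge base to keep yourself (or a managed service provider) honest, a minimal sketch that checks which techniques of interest your current log sources could even detect. The technique-to-data-source mapping below is hand-picked and illustrative, not pulled from the official ATT&CK data files:

```python
# Hypothetical mapping from ATT&CK technique IDs of interest to the log
# sources needed to detect them; adjust to your own threat model.
TECHNIQUES_OF_INTEREST = {
    "T1078": {"name": "Valid Accounts", "needs": {"auth_logs", "cloud_audit"}},
    "T1114": {"name": "Email Collection", "needs": {"mailbox_audit"}},
    "T1552": {"name": "Unsecured Credentials", "needs": {"endpoint_telemetry"}},
}

# What the organization actually ingests today.
COLLECTED_SOURCES = {"auth_logs", "cloud_audit"}

def coverage_report(techniques: dict, collected: set) -> None:
    """Print, per technique, whether the required data sources are collected."""
    for tid, info in techniques.items():
        missing = info["needs"] - collected
        status = "covered" if not missing else f"missing {sorted(missing)}"
        print(f"{tid} {info['name']}: {status}")

coverage_report(TECHNIQUES_OF_INTEREST, COLLECTED_SOURCES)
```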

Rob McDonald: Yeah. Mike, do you have a perspective on that?

Michael Wilkes: Yeah, one of the things I mention in one of my NYU classes is Splunk. Splunk is an expensive addiction. They've been in the Gartner Magic Quadrant, up in the leaders' corner, for nine years running, and it's not for nothing that they do volume-based pricing. But they have to change that model, because there's a lot of noise and a lot of duplicate events, and they don't have any incentive to reduce that before ingesting and indexing. Some of the better pricing models out there now are: keep 180 days of logs but only index the last 90, and then be able to rehydrate if you have to look further back, because Microsoft just bumped their retention for you, but other companies have regulatory requirements to keep more than that. You're also opening up another attack surface by pushing those logs to a third party for your archiving and retention purposes. I don't know what the number is for Splunk now, but when I was lecturing on this originally it was 18,000 customers, and one of the largest was doing six petabytes per day, bursting to 12 petabytes per day. At that point, you don't actually pay for bytes; you're just picking a number out of a hat and saying, 12 million dollars, let's just agree 12 million dollars is good. But there are some competitors, right, like what Elasticsearch is doing before they ingest; Devo has come along; there are other vendors. Full disclosure, I'm not related to or invested in any of them, but I do think there's room for improvement here, where even the small mom-and-pop shops can potentially get some kind of telemetry and visibility. Because look at the folks who got hit with SolarWinds.

Michael Wilkes: A lot of them had turned off logging, so they had no idea whether they had been breached, or whether they were currently breached by APT29, Cozy Bear, and that's the sad state, I think, of most people's environments. It's not the sexy zero day that we have to go chase and defend; it's the basics. Patch your servers, and don't go mindlessly chasing Highs and Criticals. Only go after stuff that's actually going to turn into an exploit. Have you heard of EPSS, the Exploit Prediction Scoring System?

Rob McDonald: Yes.

Michael Wilkes: The follow-on to CVSS that Michael Roytman used to talk about, and then they got acquired by Cisco and enriched his data model with all of their DFIR data. Now they have some really good correlations, and guess what the number one correlation is for any particular CVE to ever turn into a breach? The word Microsoft occurs in it.

Rob McDonald: That's excellent.

Michael Wilkes: It's the number one indicator. If you've got 50 or so indicators for machine learning, obviously that correlation is strong. But, for example, in an alternate multiverse, if Apple were the dominant enterprise platform, then of course it would be the word Apple. So I'm not calling out Microsoft for being anything other than the big fatted cow that everyone is slaughtering these days, apparently.
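
For the triage approach Michael is describing, prioritizing by likelihood of exploitation rather than raw severity alone, a minimal sketch that queries the public EPSS API published by FIRST.org. The endpoint, parameters, and response fields reflect the published API but should be treated as assumptions to verify; the CVE IDs are just examples:

```python
import requests

def epss_scores(cve_ids):
    """Fetch EPSS scores for a list of CVE IDs from the FIRST.org API."""
    resp = requests.get(
        "https://api.first.org/data/v1/epss",
        params={"cve": ",".join(cve_ids)},
        timeout=10,
    )
    resp.raise_for_status()
    return {row["cve"]: float(row["epss"]) for row in resp.json().get("data", [])}

# Triage: patch what is most likely to be exploited first, not just high CVSS.
scores = epss_scores(["CVE-2021-44228", "CVE-2017-0144"])
for cve, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{cve}: EPSS {score:.3f}")
```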

Rob McDonald: Mike, I want to elevate a point you just made. You were mentioning organizations that are under a regulatory burden to have a longer tail of data retention, and that's important, because that's the equivalent of CISA and NIST frameworks saying, this is what you should be doing, and then a broad swath of the industry saying, I can't afford to do that, or I can't do that because I don't have access to it. So, to your point, and I don't want to dilute this because it's important: we love these ecosystems, Microsoft and Google. They create a lot of business and productivity outcomes based on the value of their applications. The other side of that coin is that in order to meet a certain level of posture, which is required today, this level of log detail must be made available. So good on Microsoft for doing this, and to your point, we hope more of the industry follows suit and meets us at this baseline expectation, right? It's not like this is cutting edge. This is what's fundamentally required to even start the journey toward more advanced actionable insights and correlations, right?

Michael Wilkes: Yeah, I think it's useful to also think about some of the basic mitigations folks can stand up, which is to have logs, and to have logs with interesting fields in them. One of the things I realized when I started investigating some of this was from many years ago, so it's not MNPI. We were pitching to build new infrastructure and a new website for Western Union, and Western Union was having problems with transactions, money transfers, not going through. That's the core revenue they have. So we go out and meet them, we give them this presentation, and then we go have drinks with customer support, the help line, and they were talking about having a daily stand-up where they'd say, yeah, we don't know what's going on. Basically, someone calls up and says, my money transfer didn't go through, and the help desk person says, oh, are you doing this from the office? You should try it from home, because firewalls in offices often interfere with this process. And of course, if the person says, no, I'm calling from home, it's: you should try it from the office, because sometimes home routers interfere. People always come up with a coping strategy for an impossible situation; they're just going to try to survive and deal with it. I actually looked at the logs, and Western Union was at once relieved and, of course, felt terribly stupid, because it turned out the log files had all the information all along. The log file said: don't run this service as administrator. Don't run this service as administrator. Guess what they were doing? Running a particular service as administrator, and it was the source of these failed transactions, and it was in the log files. So they felt liberated and enlightened, saying, oh my God, we fixed it, we don't have to have a daily stand-up to figure out why all these money transfers aren't going through. And of course, they felt really stupid, because it was in the logs all along and they just never looked.

Rob McDonald: That makes a good point. Let's go back for a moment before we get into the deeper implications. We went through that key exfil event, regardless of how it happened. Let's talk about that for a moment. David, you touched on it for a second: it looks like there were maybe some fundamental custodial issues around best practices. Again, Microsoft is a complex organization, so this is really not about specifically targeting things they did wrong, but it is an opportunity for us to discuss what those best practices are, because we all have key material and key lifecycle management processes that we know are critical. Let's talk a bit about methods, or the lack of methods, that could have resulted in that key exfil, and what some best practices are to help mitigate it.

David London: I mean, the downside here, obviously, is an unknown volume of exfiltrated data and sensitive information coming from U.S. government agencies. But for the average consumer reading this news on CNN or in the Wall Street Journal, we've been told, and I think we all subscribe to it, that multi-factor authentication is one of, if not the, most risk-reducing control capabilities. It's become a bright-line expectation among insurance providers and for third-party audits.

David London: And there's a lot of complexity there, but when you hold the token, you are bypassing multi-factor authentication. So for the average American: my bank is now asking me to do this, my health care providers are asking me to do that, and now there's back-end chaos happening out of my control, even though I'm doing everything I can to exercise strong cyber hygiene and awareness.

David London: I think, from a technical perspective, putting aside these conspiracy theories, it's clear that the private key material was not stored and secured adequately at Microsoft. It's still unclear exactly how that occurred. We do know that private key material was stored in hardware security modules, so within other pockets and segments of their organization they were putting additional controls and vigilance around this extremely sensitive material. Michael referenced yesterday, when we were chatting, that the best practice is to take it offline so that it is not accessible. I think there's a broader discussion around secure by design and threat modeling. Obviously, Microsoft is a very sophisticated organization with a massive security apparatus that conducts threat modeling and review, but fundamentally that needs to begin with understanding, enumerating, and then securing high-value assets, and if private key material, the keys to the kingdom across all of these organizations, isn't a high-value asset for Microsoft, I don't know what is. So as you think about defining those high-value assets, matching them to requirements and security, thinking critically about what harm can be done and how it can be mitigated, conducting that gap analysis, and then validating the security and integrity of those controls: that level of rigor, in this particularly sensitive corner of Microsoft, did not appear to be there. It may have happened, but it didn't happen as extensively or rigorously as it could have, because, obviously, that material was exposed. I'll also say, as we've already alluded to, that highly sophisticated, nation-state-level threat actors, and this again gets to the concentration risk, whether for financial gain or, in this case, espionage, are looking to target the whales; they're looking to get more return on their attack investment. So Microsoft and other major technology providers are going to continue to be in the crosshairs of very sophisticated threat activity.

Michael Wilkes: Highly sophisticated attack, right? That's copy-paste on every breach disclosure, regardless of whether it was actually a highly sophisticated attack. In this case it was, and I will grant them that, because this was targeted. Like I said, if you have a signing key good for all of Microsoft and you only go after 25 organizations, and that's just what we know as of today, what if this plays out like LastPass? We could easily be talking about last year, right? What happened with LastPass: they targeted a DevOps SRE, they got remote code execution on a multimedia server at his home, and they just waited with a keylogger. They waited for him to log into the dev vault and get access to the dev environment, and guess what? Some dumb monkey decided to put copies of everyone's production vaults in the dev environment. Prod was never touched in the LastPass instance. And in this case, I think we had an environment segmentation and separation issue as well. There's no reason for this Microsoft service account to be touching anything on O365 Exchange Online, and that was one of the failures. It was a validation failure, and that's the bug in the code that they fixed. So, like I said, we don't have to lambaste Microsoft; we could just as easily go after LastPass. But of course, what I find most interesting is not the nation states. That's like worrying about getting eaten by sharks when you should be worrying about drowning in a swimming pool. The more common thing happening now, I think, is the democratization of crime, where you have script kiddies like Lapsus$ who don't have advanced techniques. They are not persistent. They pop and exfiltrate in 72 hours, and they're doing it for the lulz. They didn't even ask Microsoft or Nvidia for ransom. They said, un-gate the GPUs, we want to be able to do Bitcoin mining, right? It's almost like social justice, or to impress your friends, like a particular airman who was exfiltrating security reports to impress his twelve friends in a Discord server recently, right? So I think that's the real risk: bored teenagers have been elevated to the level of well-funded, sophisticated nation-state actors these days.
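
The validation failure Michael mentions is, at its core, a token-acceptance check: a service has to confirm not only that a token's signature verifies, but that the signing key is scoped to the environment the token is being used against. A minimal, hypothetical sketch using PyJWT, where the key IDs, key material, issuer, and audience are placeholders:

```python
import jwt  # PyJWT

# Hypothetical key inventory: which signing keys are valid for which environment.
ENTERPRISE_KEYS = {"ent-key-2023": "<enterprise signing public key PEM>"}
CONSUMER_KEYS = {"msa-key-2016": "<consumer signing public key PEM>"}

def validate_enterprise_token(token: str, issuer: str, audience: str) -> dict:
    """Reject tokens whose signing key is outside the enterprise scope,
    even if the signature itself would verify, then check the usual claims."""
    kid = jwt.get_unverified_header(token).get("kid")
    if kid not in ENTERPRISE_KEYS:
        raise PermissionError(f"key {kid!r} is not valid for enterprise tokens")
    return jwt.decode(
        token,
        key=ENTERPRISE_KEYS[kid],
        algorithms=["RS256"],   # pin the algorithm; never trust the token header
        issuer=issuer,
        audience=audience,
    )
```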

Rob McDonald: And that's interesting to me; it gets at the original intent in this space and, to be honest, my interest in it. One thing you two highlighted here from different perspectives is how to elevate where, along this custodial path, these implications exist, so they can be surfaced before they become an issue. Microsoft is a large organization; clearly they're doing threat modeling, so clearly they're thinking about where they have the highest risk. So is it a lack of process, a simple oversight, or frameworks not getting at the actual issue? One thing I like to think about is how we can change the perspective, change the context of a particular situation, to make sure you're thinking about where this blast radius can land. Data and key material take a journey from origin to access and manifestation in some real-world workflow, and we should take a holistic view of that journey, because at the end of the day the data is the proxy around which everything is done, whether you're a human or a non-human asset. It's the data that's mobilized and moved, and key material is also data, right? They have to come together. These organizations could be thinking, and this is recommended in a lot of these frameworks, I'm not saying it's some novel idea: could we take a more journey-oriented view of the data flows within an organization to better highlight the points at which mobilization crosses a contextual boundary, where clearly there's a concentration point for impact or an excessive blast radius?

David London: Yeah. I mean, we consistently encounter organizations that are burning a lot of calories on building and visualizing network architecture, network topology diagrams, fine-grained and highly sophisticated, with all the core assets and their relationships to each other. What we don't see is the data. These organizations tend to fall down on establishing and documenting the relationship between the infrastructure assets, the network topology, and the data that's moving inside and outside the network. What that does is leave a blind spot around which specific servers and applications are hosting and processing that data, which of those are the most critical, and where you need to build a more graduated process. We do see, and this is why compliance doesn't equal security, as we talked about yesterday, organizations that are able to map where PCI data is going, because that's a PCI DSS remit.

David London: But when you think beyond payment card data, which is obviously only one class of sensitive data within an organization, there's a real blind spot around how organizations visualize where the data is and how it's traveling and being processed. And so, Rob, you talked about that journey, and I think that journey is very murky for most organizations, so it becomes harder to bring some level of discipline and accountability to data flows and data security.

Rob McDonald: Yeah. Mike, are we failing forward a bit because we're using compliance, and what led the organization in the past, the service-oriented IT operations perspective and those kinds of artifacts, as the way to paint the picture and the perspective of the organization, when we need a different view? Is that a way to look at it?

Michael Wilkes: Yeah, I think one of the things you were intimating with your question is that we can't incrementally win. This is an asymmetric battle. The bad guys only have to win once. They pop the infrastructure, they get stuff, and then we learn a year later that it was worse, right? The LastPass thing unfolded across several reveals: first it was just a couple of accounts, nothing bad, then it was something bad, then it was all 30 million vaults being brute-forced offline in a compute center somewhere, so please rotate all of your passwords. So I think you need to find the gestalt shift. Think of that drawing that looks like a duck and also like a rabbit, or the old-lady-and-young-lady drawing. You're not arguing about the facts, which are the ink dots on the page. When you want a gestalt shift, when you want to change people's perspective, you're saying: I can't see the rabbit anymore, I only want to see the duck, or vice versa. You want to flip the perspective without really changing anything, and I think that's a good tactic for disruption in a good way. So think about what we need to disrupt here. Data is like water flowing downhill: gravity always wins. The water, the data, will always get where it's needed. And I'm teeing this up for Virtru here, of course: there's no chance you will ever stop that from happening unless, of course, you happen to encapsulate your data in some type of magical three-letter acronym called the Trusted Data Format. I think that's a great solution, and I've been advocating it for a while, even before I knew folks working at Virtru. And the really important distinction is that there is no perimeter. There's no such thing as perimeter-based security. The bad guys are already on the inside. You have to assume compromise. They're buying your Slack token on the dark web for $10, waiting for the support team's party, then messaging, great party last night, I lost my phone, can someone activate my new number? Customer support is wired to help. They helped 28 million dollars' worth of FIFA codes exit the door via Lapsus$ in 72 hours, right? That's the thing we have to worry about.
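
As a purely conceptual illustration of the data-centric idea Michael is pointing at, and emphatically not the TDF specification or Virtru's implementation, here is a minimal sketch in which the payload travels encrypted with policy metadata bound alongside it, while the data key stays with a service the owner controls and can withhold or revoke:

```python
import base64
import json
from cryptography.fernet import Fernet

def wrap(payload: bytes, policy: dict):
    """Encrypt the payload and bind policy metadata next to it. The data key
    is returned to the caller here; in a real system it would be escrowed
    with a key-access service the data owner controls, never shipped with
    the envelope."""
    data_key = Fernet.generate_key()
    envelope = {
        "policy": policy,  # who may open it, until when
        "ciphertext": base64.b64encode(Fernet(data_key).encrypt(payload)).decode(),
    }
    return json.dumps(envelope), data_key

def unwrap(envelope_json: str, data_key: bytes) -> bytes:
    envelope = json.loads(envelope_json)
    # A real key-access service would evaluate envelope["policy"] (and could
    # deny or revoke) before ever releasing the data key.
    return Fernet(data_key).decrypt(base64.b64decode(envelope["ciphertext"]))

env, key = wrap(b"quarterly forecast",
                {"allow": ["alice@example.com"], "expires": "2024-01-01"})
print(unwrap(env, key))
```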

Michael Wilkes: And so how do you change the gestalt? How do you change the paradigm? I think, and this will be my last soapbox diatribe: security chaos engineering. Embrace failure rather than fear it. Jim Croce had a song: you don't tug on Superman's cape, you don't spit into the wind. I think we should spit into the wind, and we should tug on Superman's cape. We need to learn how to fail. We need to experiment with failure and embrace it rather than fear it. There's a great book on security chaos engineering by Kelly Shortridge and Aaron Rinehart; everyone should go read it. It's about resilience, and resilience is not about pounding out the dents on a VW Beetle's fender. It's not a mechanical engineering concept in this sense, where you simply have ductile properties and tensile strength and you return to what you were before the breach. Real resilience is the psychological and ecological definition of the word, which is adaptation and transformation. How do you become stronger and better for being breached and for failing? Because we all know it's not a matter of if, it's a matter of when, and maybe how often. T-Mobile seems to get breached every six months. FireEye took a really hard compromise through a supply chain vendor that was really low and slow, and they handled it really well, and I love FireEye for disclosing what happened there. So anyway, I think that's one of the ways forward: we flip the paradigm, we embrace failure, we run into danger, and we learn how to survive and become stronger because of it.

Rob McDonald: That's well said, obviously. So often we create a robotic copy-and-paste under the banner of iteration. When we say iterate, we're really just robotically copying the same thing over and over; we're not learning from what we're doing. Taking a much more adaptive approach, meaning there's a stimulus, whether a chaos model we perpetrate ourselves or an external one, the things that are going to happen to you anyway, and incorporating it adaptively, is a much better response. Mother Nature does this every day; it's how we fundamentally evolve with those stressors and produce a more robust system. We need to learn from that. I love that example; it was super well said. Now, the idea is not lost on me that we're talking about fundamentals: multi-factor authentication, which I think today really needs to be hardware-rooted, but that's just my opinion, and log collection. These are things we've been talking about for a very long time, so it's not lost on me that these are fundamentals we're still not quite getting right and are still trying to improve on. And then I think about things like the pending SEC regulations. So I'd love to talk about the collision of those two, because they have cascading effects down-market, not just for these large organizations. David, board implications: think about this type of breach and these upcoming regulations. I'd love to hear what this could, should, and likely will mean at the board level.

David London: I mean, there's no dearth of government response of one kind or another, broadening well beyond the SEC, particularly on the heels of SolarWinds. You have the Biden executive order with numerous provisions and a big focus on software and product security, a follow-on strategy, and an implementation plan, which actually seems to have less detail than the strategy itself. But obviously there are rising expectations and accountability for vendors, particularly where the strategy and the implementation plan shift accountability from the user to the vendor, to the producer of the software. Companies are beginning to align to practice frameworks like the secure software development framework, and those selling into the government now need to self-attest to their overall software lifecycle hygiene. I think these are all good things. It provides a set of best practices without being too prescriptive, but of course the U.S. government is going to have to ingest all of this and pick winners, so I think that bureaucratic burden is going to be a challenge. As far as the Securities and Exchange Commission: almost a year ago, actually a little over a year ago, they released a notice of proposed rulemaking that is going to increase board-level oversight as well as add disclosure obligations. There's still a level of ambiguity about it, because the actual rule has not been released. It was supposed to drop in April, then May, and here we are in July. But for any organization,

Michael Wilkes: Yeah, they pushed it to October.

David London: Yeah, I've heard the same, so we'll see. But they have used the term material, and so if an organization suffers a material breach, they have to report it and disclose it through an 8-K within four business days.

Michael Wilkes: We had a challenge with that in the IT-ISAC. No one has a good definition of what a breach is. Even the MTS-ISAC has no good definition of what an incident is for a vessel at sea, and so I think there's a lot of concern. You can't ask NIST to come up with that one for us, right?

David London: But Mike, what the SEC has said is that it's a substantial likelihood that a reasonable shareholder would consider it important. Isn't that guidance enough?

Michael Wilkes: I don't think most shareholders are reasonable, but there's a conflation of terms going on there. But I love this topic, and I have a couple of things to say. One, board implications and regulatory oversight. You know how we have threat monitoring and threat management and EDR, XDR, and the other acronyms newly invented by Gartner and Forrester? What if we actually had a term called regulatory intelligence, where the regulators stepped up and started doing what they're doing now, which is prosecuting some of these insider threats? What I'm ashamed of learning, or I guess not ashamed but dismayed by, is that D&O insurance paid for SolarWinds' 26 million dollar class action settlement. So guess what the subtitle is there? Lesson averted. I mean, okay, they did issue a Wells notice, and there's never been a CISO with a Wells notice before, so there's going to be some follow-on, but I think CISOs have now been elevated to the class of sitting at the board table in order to be thrown under the bus, so we're cannon fodder, essentially, right? We've been elevated, but no one knows what we're talking about. We're all talking about risk, but no one uses the same words. So we really need a better language for some of this stuff that isn't the moon language of security officers. And the other thing is SBOMs to the rescue? SBOMs would have had nothing to do with SolarWinds. Why? Because when there's a properly minted manifest that includes the malicious code, SBOMs would have caught nothing in SolarWinds.

Rob McDonald: Very similar to our current situation in terms of veracity from origin.

Michael Wilkes: Exactly, and SBOMs are a red herring in my mind, because even if you publish them: let's say you're Etsy and you do 300 releases a day, right, because you're super agile and super fleet, not like one of these waterfall companies that does two-week sprints, or releases once a quarter, which even large financial institutions do because they can't upgrade Oracle any faster. So SBOMs are always going to be out of date. And I want an award on the CISO Series podcast for coming up with the best bad idea, because I said, just copy someone else's SBOM, include it in your manifest, and see if anyone notices.

Rob McDonald: Absolutely fair.

Michael Wilkes: Because there's no way people are going to be able to do deltas on this, and yes, you're giving away the packing list for all of your software. And yes, you could potentially find an embedded Log4j and be faster with asset management, so I think of asset management and SBOMs as related and useful. But like I said, you'd need an escrow copy, you'd need to publish 300 per day if you're Etsy, and then you'd need to do deltas to see if anyone included any malicious backdoor command-and-control software. Not gonna happen. So those are my rants in this particular area. But I love this concept of regulatory intelligence; I would love to see it emerge, just as artificial intelligence is supposedly emerging. If the regulatory agencies can at least put the word continuous in front of monitoring, that would be a big step, and that's what I worked on at the World Economic Forum for two years, just to put the word "continuous" in front of monitoring. It was like pulling hair and gnashing of teeth, and of course there's still some interpretation as well. Continuous means, if you go to the Air Force and ask, hey, do you guys do continuous monitoring? They say, yeah, we audit our Air Force bases every three years.

Rob McDonald: Yeah, exactly. That's their definition. They've interpreted what it means for them.

Michael Wilkes: I'm talking about five-minute time series data, but no, they're not talking about five-minute time series data.
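
Circling back to Michael's point about SBOM deltas: for what a per-release diff would actually involve, here is a minimal sketch that compares the component lists of two CycloneDX-style SBOM files. The file names are placeholders, and the JSON keys should be checked against whatever your SBOM tooling emits:

```python
import json

def components(sbom_path: str) -> dict:
    """Load a CycloneDX-style SBOM and return {component name: version}."""
    with open(sbom_path) as fh:
        doc = json.load(fh)
    return {c["name"]: c.get("version") for c in doc.get("components", [])}

def delta(old_path: str, new_path: str):
    """Return components added, version-changed, and removed between releases."""
    old, new = components(old_path), components(new_path)
    added = {n: v for n, v in new.items() if n not in old}
    changed = {n: (old[n], v) for n, v in new.items() if n in old and old[n] != v}
    removed = sorted(set(old) - set(new))
    return added, changed, removed

added, changed, removed = delta("sbom_prev.json", "sbom_current.json")
print("new components:", added)
print("version changes:", changed)
print("removed:", removed)
```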

David London: I think the only other note I'd make on this proposed rulemaking around material events, and what we're hearing from our clients, is that you'd be publicly disclosing a breach very early, while you're still gaining an understanding of the actual damage and the persistence and nature of the attack. And what does that do to the response? What does that do to the remediation plan? What does that do to the work that needs to be done without tipping off the adversary?

David London: And so there's still, I believe, some discussion around whether there's some kind of back-channel notification process that doesn't immediately become public, so that an organization can understand root cause and remediate before the whole world knows.



Michael Wilkes: Well, what was that hard drive manufacturer that got breached, where the bad guys were trolling the incident response team by sending them Zoom photos of the team investigating them, and telling them where they were looking and where the attackers were hiding? I think it was Western Digital. So you need a secure ops room before you can actually claim containment in your incident response, because the bad guys are already there watching you, like the guy who hit Uber in 2021. He was posting on Slack saying, I own you, and people just thought it was funny, like it was an April Fool's joke or something, right?

Rob McDonald: Yeah, so Mike and David, to unpack this shift a little bit more, and we're going back a bit here, we were just talking about remediation capabilities. What does remediation mean? A lot of the time, remediation means understanding what's going on and closing it down so it doesn't happen again or stops happening, but what has already happened has happened, right? That's the reality, especially in this situation, because that data is gone. It's out; it's no longer within the arm of your control. You had visibility of it leaving, but that's where the telemetry stopped, which is really interesting. So this shift, this concept, you brought it up a little bit, and I'd love to get your take on this data-centric view, to throw another buzzword in the ring. We've talked about zero trust forever; there's always been a data pillar within zero trust, and we spend a lot of time describing the world and describing the data. We describe it in applications, we describe it in humans, we describe it in flows, but base reality is the data at the end of the day, and that's the closest thing you have to describing the real world, because the data is where the value is being derived. In this scenario with Microsoft, there was truly no separation of a data-centric governance layer from the application governance layer, so as soon as I gained access to the application, I had all the things and you no longer had control, if that makes sense. So I'd love to get your take on implementing a data-centric model: changing the way we think about giving organizations control, because you've extracted your data governance concepts from your application governance concepts, so that even though the data has left, you still have visibility and control over what's going on. Even as you see the breach and take remediation steps to shut it off, you also have an additional lever to claw back control of that data. Is that a concept you two have thought about, and would it be beneficial in this scenario?

Michael Wilkes: I believe canary tokens are a really good idea. I love to put a Word document in my GitHub repo called Infosec Passwords.x, and it's got a beacon in a macro in it, and whoever opens it, I get that signal, right? So if you assume breach, you want decoys, and so think about deception. Deception is coming into 800-171's rev, and if you don't have a deception program, I'll tell you: fire up a hundred virtual machines in the cloud, fire up a hundred decoys, and you've just reduced your attack surface by 50%. There are outsourced services doing this with live, interactive decoys that act like quicksand. The bad guys think they've compromised a real Active Directory server, and of course the latency suddenly goes from 50 milliseconds to 500 to 5,000. It's quicksand, right? Keep them engaged with your decoy while you learn what they're trying to do. It takes a little bit of nerve to let the bad guys stay on your stack, and you need to emulate MAC addresses; you can't just throw up the MAC address of a random VM. The bad guys are looking at MAC addresses. I think they're the only ones who actually care about MAC addresses. I always thought it would be funny if everyone just spoofed the MAC addresses of all of their infrastructure to look like a NeXT computer and destroyed the entire signal content of MAC addresses, because they're not used for anything anyway.

Rob McDonald: Poison the whole pool.

Michael Wilkes: Yeah, but anyway, canary tokens are good because you can set up SSH triggers for when someone logs in interactively on a shell that should never be interactively logged into. And I think these beacons can be really helpful for getting early warning if we assume breach and assume compromise.
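
A minimal sketch of the listening side of a canary token, assuming the decoy artifact (a document macro, an SSH login hook) simply requests a unique URL when it is triggered. The token path, port, and alerting hook are hypothetical placeholders:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

CANARY_TOKEN = "/c/7f3a-infosec-passwords"   # unique per decoy (placeholder)

def alert(msg: str) -> None:
    print("ALERT:", msg)                     # wire this to paging/SIEM in practice

class CanaryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any hit on the token path means someone opened something they
        # never should have; respond blandly either way.
        if self.path.startswith(CANARY_TOKEN):
            alert(f"canary tripped from {self.client_address[0]}: {self.path}")
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CanaryHandler).serve_forever()
```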

Rob McDonald: Assume breach and assume compromise. That's obviously the position we try to take, and it's very similar to the way we think about this at Virtru. Part of how we think about it is that the decoy gave you an additional signal that is directly actionable because it was in a valuable location. You already knew it was valuable, so if someone is there, it matters.

Michael Wilkes: What if Microsoft put out a primary signing key that was a canary token, and its use signaled something, like the silent duress alarm under the table at a bank teller's window? RSA used to have a duress passcode you could enter as well. They don't do it anymore, but if someone had a gun to your head and was making you wire 10 million dollars to the Cayman Islands because they knew you had access to a SWIFT account or something with funds, you could enter the silent duress PIN. It would still unlock, but it would then tell people to come to your house. So I think we should build in these kinds of snares and responses. Like I said, don't fear breach; figure out how to embrace it.

Rob McDonald: Yeah, we completely agree. I think the beacon orientation around the payload itself is exactly how we think of data. As data is moving, we're enabling telemetry on the data horizontally across applications. So even though it may not be a decoy artifact, this data-centric applied envelope, the way we think about it at Virtru, fundamentally gives you insight beyond the contextual boundaries you controlled before. It's very similar to that model of intentionally inserting a decoy. In this case, if that payload is out there, you can pull an additional lever to apply adaptive access controls or revoke it, and I think that's the shift we want to see. David, I'd love to get your take on that data-centric security concept and on what Mike had to say.

David London: You know, I totally agree with Mike. I talked a little bit earlier about the level of opacity around data governance: organizations' ability to truly understand and build a high-confidence data inventory, and to prioritize tagging. In the absence of that, finding other tools and technologies that provide further safeguards on data, whether that's masking or anonymization, and that relieve organizations of the challenge of trying to harness and bring a level of sanity to their overall data universe, I think is going to be essential.

Rob McDonald: Yeah. So, gentlemen, there are a lot of security professionals listening, some at the CISO level and many supporting that role in their organizations, thinking about what they can do. They're concerned; they've put a lot of trust in these large providers, from which they're getting a ton of business value, and you always like to think those providers are doing everything they should. They really are trying, so no one here is suggesting they're not, but they are also a body of human beings like any other company. That's the reality we face today. For those listening: what are some parting words, some prioritization, that can help them work through this concern around the implications of this breach? Give them a couple of points to take away from today's webinar.

Michael Wilkes: One of the things I'll pull is from the education space. When a high school or a middle school gets breached, and it happens a lot these days because there's no honor among thieves anymore, where critical infrastructure, schools, and hospitals used to be kind of off limits, they have data going back 20 years, and they now have a burden to notify someone who graduated from that middle school 20 years ago. You need a good data retention policy and you need to enforce it. Set up a data lifecycle and prune with prejudice. Make sure you do not keep data you don't need, and your legal team will love you for it. Why? Because it cannot be subpoenaed; it cannot be part of discovery. If you are a pack-rat organization and you have HR-related data more than seven years old, you're in trouble. Seven years is the standard for the U.S.; for some industries in Canada I think it's maybe 11 years. So if you have eight years of HR data, that's just increasing your risk, for a data exfil event that turns into a 20% larger contact list for outreach and follow-up and LifeLock subscriptions and all the things you have to do when your data's been popped. So minimize your data. That's my number one recommendation, because I see almost no organization that does data retention well and efficiently. And of course, JPMorgan just got fined for pruning data they were supposed to keep, so be careful about what you delete.
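
A minimal sketch of enforcing a retention window like the one Michael describes, assuming each record carries an ISO-8601 creation timestamp. The categories and the seven-year figure simply mirror the example above, and anything flagged should clear a legal-hold review before deletion, per the pruning caveat:

```python
from datetime import datetime, timedelta, timezone

RETENTION = {
    "hr": timedelta(days=7 * 365),          # seven-year standard cited above
    "security_logs": timedelta(days=180),   # illustrative operational window
}

def expired(records, category, now=None):
    """Return records past their retention window so they can be flagged for
    deletion (after a legal-hold check). Records are dicts with a 'created'
    ISO-8601 timestamp."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - RETENTION[category]
    return [r for r in records if datetime.fromisoformat(r["created"]) < cutoff]

hr_records = [{"id": 1, "created": "2012-06-01T00:00:00+00:00"},
              {"id": 2, "created": "2023-01-15T00:00:00+00:00"}]
for rec in expired(hr_records, "hr"):
    print("past retention, candidate for deletion:", rec["id"])
```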

Rob McDonald: Both directions. Yeah, I think there's a lot of compounding leverage there. That's a really good point: starting there gives you an immediate benefit, to your point, in reducing your risk surface area. Appreciate that, Mike. David, what about you?

David London: Yeah. We talked about this steady march to the cloud, and I don't think anybody disputes the efficiencies, the economies of scale, the ability to move massive workloads into cloud resources and cloud services. I'd advise organizations, as they think about it, not to treat that process or initiative as a risk-transfer campaign, because as we see from the Microsoft breach, as we saw from SolarWinds, and as we see from the widening weaponization of the technology supply chain and the consolidation of the technology industry, particularly in the cloud space, there are still significant risks, and with those risks come additional mitigations and safeguards. Most companies have already moved to the cloud one way or another, whether through SaaS or otherwise, but I'd advise them to pause and be thoughtful about what resources are moving to the cloud and what countermeasures and additional visibility they can put in place, because they still have a level of accountability for their data, to their company, and to their customers. Whether that's through increased and finer-grained logging, through additional tools and visibility from vendors like Obsidian that provide more cloud visibility, or through additional data governance and understanding what's on-prem versus what's in the cloud, as well as really appreciating the economic models: are you truly gaining efficiency and ROI? There's obviously a significant amount of marketing money behind cloud, as well as a general trend in the industry. So really understand the trade-offs before making significant decisions with major implications for the future.

Rob McDonald: Really appreciate that, David. What really jumped out at me today, talking with you both about the breadth of approaches and techniques that could and should be used to help an organization reduce this risk, is that defense in depth is not a simple scalar quantity. Everyone should take the opportunity to look at the rungs in that depth strategy and ask: do some of these look too much like the others? Is there too much concentration in one particular organization or one particular stack, giving you the illusion of depth and separation without the actual impact and ROI? I think that's an important thing to think about. It's hard to do, because sometimes there's just a concentration of these things. I want to really thank you, Mike, and thank you, David, for spending the time with us. You've been very generous with your time, and we really appreciate it.

Michael Wilkes: Thanks, we'll have to talk again. I didn't even get to talk about quantum security or my sharks-and-lasers analogy.

David London: I was waiting for that one.

Rob McDonald: Yeah, I think that deserves a conversation all by itself.

Michael Wilkes: Yeah, you got to have two things, sometimes not just one.

David London: Yep, thanks Rob.
