In this episode of the RE: Human Layer Security podcast, Tim Sadler is joined by Tim Fitzgerald, the chief information security officer at ARM and former chief security officer at Symantec.
Now, Tim believes that people are inherently good, and that thinking of employees as the weakest link in cybersecurity does them a disservice. Tim thinks employees just want to do a good job. Sometimes mistakes happen, and those mistakes can compromise security. But rather than blaming people, Tim urges leaders to first ask themselves whether they’ve given their people the right tools and armed them with the right information to help them avoid those mistakes in the first place. In this interview, we talk about the importance of changing behaviours, how businesses can make security part of everybody’s job, and how to get boards on board.
And if you want to hear more Human Layer Security insights, all podcast episodes can be found here.
Tim Sadler: As the CISO of ARM, then what are some of the biggest challenges that you face? And how does that affect the way you think about your security strategy?
Tim Fitzgerald: I guess our challenges are, not to be trite, sort of opportunities as well. By far the biggest single challenge we have is ARM’s ethos around information sharing. As I noted, we have a belief, one that I think has proven true over the 30+ years ARM has been in business, that this level of information sharing has allowed ARM to be extraordinarily successful and innovative.
So there’s no backing away from that as an ethos of the company. But it represents a huge challenge, because we give people a tremendous amount of personal freedom in how they access our information and our systems, and in how they share our data, both internally with their peers and with the customers we’re very deeply embedded with. We don’t sell a traditional product where they buy it, we deliver it, and then we’re done. The vast majority of our customers spend years with us developing their own products based on our intellectual property. And the level of information sharing that happens in a relationship like that is quite difficult to manage, to be candid.
TS: Yeah, it really sounds like you’ve had to think about not just the effectiveness of your security strategy and systems, but also their impact on employee productivity. So has Human Layer Security been part of your strategy for a long time, at ARM or even earlier in your career?
TF: In my career before ARM, I was at Symantec, which was a very different company, more of a traditional software sales company. It also had 25,000 people who thought they knew more about security than I did. That presented a unique challenge in terms of how we worked with that community, but even at Symantec, I was thinking quite hard about how we influence behaviour.
And ultimately, what it comes down to for me is that I view my job in human security as somewhere between a sociology experiment and a marketing experiment. We’re really trying to change people’s behaviour in a moment, not universally and not their personal ethos. But will they make the right decision in this moment, to do something that won’t create security risk for us?
You know, I label those sorts of moments micro-transactions: small moments in time where we have an opportunity to interact with people and influence their behaviour. And I’ve been evolving that strategy as I’ve thought about it at ARM. It’s a very different place in many respects, but I’m trying to think about not just how we influence behaviour in a moment in time, but whether we can actually change people’s ethos. Can we make responsible security decision-making part of everybody’s job? I know there’s not a single security person who will say they’re not trying to do that. But actually, that turns out to be a very, very hard problem.
The way that we think about this at ARM is that we have a centralized security team, and ultimately security is my responsibility here. But we very much rely on what we consider our extended security team, which is all of our employees. Essentially, our view is that they can undo all of the good we do behind them. But one of the things that’s unique about how we look at this at ARM is that we very much take the view that people aren’t the weakest link. It’s not that they don’t come with good intent, or don’t want to be good at their jobs, or that they’re going to take shortcuts just to get that extra moment of productivity; actually, everybody wants to do a good job. Our job is to arm them with both the knowledge and the tools to keep themselves secure, rather than trying to secure around them.
“We’re really trying to change people’s behaviour in a moment, not universally and not their personal ethos. But will they make the right decision in this moment, to do something that won’t create security risk for us?”
Tim Fitzgerald
CISO at ARM
And, just to finish that thought, we do both, right? We’re not going to stop doing all the other things we do to protect our people in ways they don’t even know exist. But the idea here is that we have rare opportunities to empower employees to take care of themselves.
One of the things we really like about Tessian is that this is something we’ve done for our employees, not to our employees. It’s a tool that is meant to keep them out of trouble.
TS: Yeah, I think that’s a really good point. A lot of what you’re talking about here is security culture, and really establishing a great security culture as a company. And I love that framing: for employees rather than to employees. It sounds like, at the core of the organization, you have to be thinking about the concept of human error in the right way when it comes to security decision-making.
People are always going to make mistakes, as you said, simply because they’re people. Maybe walk us through how you think about that, or what advice you might have for the other organizations on the line today about how to talk to their boards and other teams about rationalising this risk internally and working with the fact that our employees are only human.
TF: Yeah, for me, this has been the most productive dialogue we’ve had with our board and our executive around security. I think most of you on the phone will recognise that when you go in and start talking about the various technical layers available to protect our systems, the eyes glaze over pretty quickly. They really just want to know whether or not it works.
The human security problem is one that you can get a lot of passion on, partly because I think it’s an unrecognized risk in the boardroom. The insider, meaning the traditional insider threat we think about, a person who’s really acting against our best interest, can be very, very impactful. But at least at ARM, and certainly in my prior career, the vast majority of the issues that have caused us harm over the last several years have been caused by people who do not wish us harm.
“The human security problem is one that you can get a lot of passion on. Partly because I think it’s an unrecognized risk in the boardroom.”
Tim Fitzgerald
CISO at ARM
They’ve been people just trying to do their jobs, making mistakes, doing the wrong thing, making a bad decision in a moment in time. And figuring out how to help them not do that is a much more difficult problem than figuring out how to put in a firewall or deploy DLP. So we really try to separate those conversations. There are a lot of things we do to try to catch the person who is truly acting against our best interest, but that is, in many ways, a totally different problem. At ARM, what accounts for more than 70% of our incidents, and certainly more than 90% of our loss scenarios, is people just doing the wrong thing and making the wrong decision, not actively seeking to cause ARM harm.
If I might just give a couple of examples, because it helps bring it home. One of the most impactful events we’ve had in the last two years at ARM involved somebody handling our royalties. We sell software, right? Every time somebody produces a chip, we get paid, so that’s a good thing for ARM. But having somebody’s royalty forecast gives you a really good sense of what markets they intend to enter and where they intend to go as a company.
And most of our customers compete with each other, because they’re all selling similar chips, with our designs built into various formats. So one customer having somebody else’s data would be hugely impactful. And in fact, that’s exactly what we did not that long ago. Somebody pulled down some pertinent information for a customer into a spreadsheet, then fat-fingered an email and sent it to the wrong customer: they sent it to Joan at Customer X instead of Joan at Customer Y. That turned out to be a hugely impactful event for us as a company, because this was a major relationship and we essentially disclosed one customer’s strategic roadmap to another. A completely avoidable scenario, and a situation where the employee was trying to do their best for their customer and ultimately made a mistake.
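The “Joan at Customer X instead of Joan at Customer Y” slip is exactly the kind of mistake software can check for. As a minimal, hypothetical sketch of the general idea (not a description of Tessian’s actual algorithm; the function name and threshold below are invented for this example), a new recipient can be compared against a sender’s historical contacts:

```python
from difflib import SequenceMatcher

def flag_misdirected(recipient: str, past_recipients: set[str],
                     threshold: float = 0.85) -> bool:
    """Warn when a never-before-seen address closely resembles one the
    sender emails regularly: the classic wrong-Joan mistake."""
    if recipient in past_recipients:
        return False  # known contact, nothing unusual
    return any(
        SequenceMatcher(None, recipient, known).ratio() >= threshold
        for known in past_recipients
    )

# joan@customerx.example is a known contact; joan@customery.example is
# not, but looks almost identical, so the send would trigger a warning.
history = {"joan@customerx.example", "ops@customerx.example"}
print(flag_misdirected("joan@customery.example", history))  # True
```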
TS: Thanks for sharing that example with us. I think it’s a really good point. For a long time in security we’ve talked about insider threats, and people immediately think of malicious employees and malicious insiders. And it’s absolutely true what you say: the reality is that most of your employees are trustworthy and want to do the right thing, but they sometimes make mistakes. And when you’re doing something as often as, say, sending an email or sharing data, the errors can be disastrous, and they can be frequent as well…
TF: …it’s the frequency that really gets us, right? For insider threat, the really bad guy who’s acting against our best interest, we have a whole bunch of other mechanisms that, while still hard, help us try to find them. That’s infrequent and high impact. What we’re finding is that the person who makes a mistake is high frequency and medium to high impact. So we’re just getting hammered on that kind of stuff. The reason we came to Tessian in the first place was to address that exact issue, and I really believe in where you guys are going in terms of addressing the risk associated with people making bad choices, as opposed to acting against our interest.
TS: This concept of high frequency is super interesting, and one of the questions I was going to ask you was around that. Hackers and cyber attacks get all the attention because these are the scary things, and naturally it’s what boards want to talk about, and executives want to talk about. Accidents almost seem less scary, so they get less focus. But this frequency point, how often we share data and send emails, has analogies in other parts of our lives as well. We don’t think twice before we get in a car, but it’s very easy for human error to creep in there, and the consequences can be really bad. Do you think we need to do more to educate our boards, our executive teams and our employees, to open their eyes to the fact that inadvertent human error or accidents can be just as damaging as attackers or cyber attacks?
TF: Yeah, it depends on the organization, but I would suggest that generally, we do need to do more. As an industry, we’ve had a lot of amazing things to talk about to get our boards’ attention over the last 10 years: major events and loss scenarios, often perpetrated by big hacking groups, sometimes nation-sponsored. It’s very sexy to talk about that kind of stuff and use it as justification for why we need to invest in security.
And actually, there’s a lot of legitimacy behind that. It’s not fake messaging; it’s just one part of the narrative. The other side of the narrative is what we now spend more time on than nation-state type threats. Because what we’re finding is that, not only by frequency but by impact, the vast majority of what we’re dealing with right now is avoidable events based on human error, and perhaps predictable human error.
I very much chafe at the idea of thinking of our employees as the weakest link. I think it under-serves people’s intent and how they choose to operate. So rather than that, we try to take a look in the mirror and ask: what are we not providing these people that would help them avoid these types of scenarios?
And I think if you change your perspective on that, rather than seeing people as an intractable problem we can’t conquer, and start thinking about how we mobilise them as part of our overall cybersecurity strategy and defense mechanisms, it causes you to rethink whether or not you’re serving your populace correctly.
And I think in general, not only should we be talking to our senior executives and boards more clearly about where the real risk exists, which for most companies is right in this zone, but we should also be doing more to help people combat it, rather than casting blame or assuming the average employee is untrustworthy or will do the wrong thing.
You know, I’m an optimist, so I genuinely believe that’s not true. I think if we give people the opportunity to make a good decision, and we make the easiest path to getting their job done the secure path, they will take it. That is our job as security professionals.
“Hackers and cyber attacks get all the attention because these are the scary things. And naturally, it’s what boards want to talk about, and executives want to talk about. Accidents almost seem less scary. So they get less focus.”
Tim Sadler
CEO at Tessian
TS: Yeah, I think the huge point there, and the word that was jumping out at me, is this concept of empowerment. It is strange sometimes, when you look at a lot of the security initiatives companies deploy, how rarely we factor in the impact they will have on an employee’s productivity.
And I guess at Tessian, we’re great believers that the greatest technology we’ve created has really empowered society; it’s made people’s lives better. We think security technology should not only keep people safe, but do it in a way that empowers them to do their best work. When you were thinking about how to solve this problem of inadvertent human error on email, people sending emails to the wrong recipients, or dealing with the issue of phishing and spear phishing, what consideration did you give to other solutions that were out there? What did Tessian address for you that you couldn’t quite address with those other platforms?
TF: Yeah, a couple of things. Coming from Symantec, as you might expect, I used all of their technology extensively, and one of the best products Symantec offers is their DLP solution. So I’m very familiar with that, and I would argue we had one of the more advanced installations in the world running internally at Symantec.
So I’m extremely familiar with the capability of those technologies. What I learned in my time doing that is that, when used correctly in a finite environment, on a finite data set, that type of solution can be very effective at keeping data where it’s supposed to be and understanding movement within that ecosystem. When you try to deploy it broadly, it has all the same problems as everything else: you run into the inability of the DLP system to understand where data is supposed to be. Is this person supposed to have it, based on their role and their function? It’s not a smart technology in that way. So you end up trying to write very, very complex rules that are hard to manage.
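To make that concrete: rule-based DLP generally means hand-enumerating a pattern for every data type, keyword and business context. The snippet below is a generic, hypothetical illustration of that style of rule set, not a representation of Symantec’s product; every pattern and name is invented for the example.

```python
import re

# Hypothetical rule-based DLP: each data type, keyword, and business
# context needs its own hand-written pattern, and the list only grows.
DLP_RULES = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US SSN-like number
    re.compile(r"\b4\d{3}([ -]?\d{4}){3}\b"),       # payment-card-like number
    re.compile(r"royalt(y|ies)\s+forecast", re.I),  # sensitive-keyword rule
]

def dlp_blocks(text: str) -> bool:
    """True if any rule fires. Note what is missing: no notion of who is
    sending, who is receiving, or whether this data movement is normal
    for this relationship, the context Tim says rules cannot express."""
    return any(rule.search(text) for rule in DLP_RULES)

# Fires even on a perfectly legitimate internal email:
print(dlp_blocks("Attaching the Q3 royalty forecast for review"))  # True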
What I liked about Tessian is that it gave us an opportunity to use machine learning in the background to develop context about whether what somebody was doing was atypical, or perhaps typical but actually part of a bad process. From the very nature of the information they’re sending around, and its characteristics, we can get a sense of whether what they’re doing is creating risk for us. So it doesn’t require recipes that are completely prescriptive about what we’re looking for. It allows us to learn, with the technology and with the people, what normal patterns of behaviour look like, and therefore to intervene when it matters, rather than reacting every time another bell goes off.
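As a rough sketch of that “learn normal patterns, intervene on the outliers” idea: the toy scorer below rates how unusual a sender-recipient pair is against simple historical counts. It illustrates the general approach only, not Tessian’s model; the names and the scoring formula are assumptions made for this example.

```python
from collections import Counter

class BehaviourBaseline:
    """Toy anomaly scorer: learn how often each (sender, recipient)
    pair occurs, then score new events by their historical rarity."""

    def __init__(self) -> None:
        self.pair_counts: Counter = Counter()

    def observe(self, sender: str, recipient: str) -> None:
        self.pair_counts[(sender, recipient)] += 1

    def anomaly_score(self, sender: str, recipient: str) -> float:
        # 0.0 = thoroughly routine, 1.0 = never seen before
        seen = self.pair_counts[(sender, recipient)]
        return 1.0 - seen / (seen + 1)

baseline = BehaviourBaseline()
for _ in range(50):
    baseline.observe("alice@arm.example", "joan@customerx.example")

# A routine pair scores near 0; a first-ever recipient scores 1.0,
# which is the moment a tool would choose to intervene.
print(baseline.anomaly_score("alice@arm.example", "joan@customerx.example"))  # ~0.02
print(baseline.anomaly_score("alice@arm.example", "joan@customery.example"))  # 1.0
```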
To be clear, we still use DLP in very limited circumstances. But what we found is that it was not really a viable option for us, particularly in the email stream, for accurately identifying when people were doing risky things, as opposed to moving a very specific data set that we didn’t want them to.
TS: Yeah, that makes a tonne of sense. And then thinking about the future, and what you hope Tessian can actually become: where does it go from here? What’s the opportunity for Tessian as a Human Layer Security platform?
TF: Yeah, I recall talking to you guys last spring, and one of the things I was poking at was: you have all this amazing context about what people are doing in email, and that’s where people spend most of their time. It’s where most of the risk comes from for most organizations. So how can we take that beyond just making sure someone doesn’t fat-finger an email address, or send a sensitive file where it’s not supposed to go, or the other use cases that come along with Tessian? Can we take the context we’re gaining from how people use email, and create more of those moments in time to connect with them, to become more predictive?
Where we start to see patterns of behaviour in individuals that suggest they are either susceptible to certain types of risk, or likely to take a particular action in the future, there’s a tremendous amount of knowledge that can be derived from that context, particularly if you start thinking about how to put it together with what would traditionally be the behavioural analytics space.
Can we start to mesh what we know about the technology and the machines with real human behaviour, and thereby build a very good picture that helps us? It would help us not only find the actual bad guys in our environment who we know are there, but also get out in front of people’s behaviour rather than reacting to it after the fact. For me, that’s the holy grail of what this could become: if not predictive, then at least leading us towards where we think risk exists and giving us an opportunity to intervene before things happen.
TS: That’s great, Tim, thanks so much for sharing that with us.
TS: It was great to understand how Tim has built his security strategy so that it aligns with, and also enhances, the overall ethos of the company: more information sharing equals a more innovative and more successful business. I particularly liked Tim’s point that businesses should make the path of least resistance the most secure one. By doing that, you can enable people to make smart security decisions and build a more robust security culture within an organization.
As Tim says, it’s security for the people, not to the people. And that’s going to be so important as ways of working change.
If you enjoyed our show, please rate and review it on Apple, Spotify, Google or wherever you get your podcasts. And remember, you can access all the RE: Human Layer Security podcast episodes here.