Ben Aung is the Chief Risk Officer at SAGE and a Tessian customer; he formerly served as a Deputy Government Chief Security Officer in the UK government. He discussed insider threats, fear, uncertainty, and doubt (FUD), and the Great Resignation with Tessian CEO and Co-Founder, Tim Sadler, on the RE: Human Layer Security podcast. Listen here, or read the Q&A below.
Tessian: How has this year been for you and your team at SAGE?
Ben: I’m surprised how much we’ve managed to achieve under challenging circumstances.
We’ve managed to get to a “business-as-usual” state much faster than I would have expected, and many of the “doomsday” threats that we might have been anticipating as a result of COVID haven’t really materialized.
Tessian: What are your thoughts on insider threats? Could you share a bit about how you’ve been focused on insider threats throughout your career?
Ben: Most of my career in government has been in information security, computer security, or cybersecurity—depending on which term was de rigueur at the time—but when I joined the Cabinet Office in 2012, my first gig there was as the Senior Policy Adviser in the National Security Secretariat for insider threats.
Soon after I joined, we were dealing with the aftermath of the Edward Snowden disclosures, which—as many people will remember—were a seismic event in the insider threat world, and caused a great deal of reflection and introspection around how much confidence we could have in some of the very long-standing controls that we’d had around mitigating the most severe insider incidents, particularly in the national security context.
That was a real “baptism by fire” for me in the insider world. I was working across the Five Eyes countries and trying to join up what we all thought was a fairly consistent understanding of how to fight insider threats, but I found out we were all doing things in slightly different ways.
My experience of working with the intelligence community in that very high threat, high impact context was that—in amongst all of the complexity, and “smoke and mirrors,” and spookery—many of the issues were just fundamental people issues or control issues that I expect nearly every organization to face, in one way or another.
Tessian: According to the statistics, insider threats have risen by almost 50% in the past two years. Why do you think it’s such a challenging problem to solve?
Ben: My headline would be that we overcomplicate it. We don’t think holistically about the interventions we can make in the lifecycle of an individual or an insider incident that might reduce both the opportunity and the impact.
We often put too much emphasis on hard technical controls. We lock systems down, so they become unusable, and people just find ways to circumvent them.
We put too many eggs in one basket, and we don’t think about all the little things we can do that cumulatively, or in aggregate, can support us.
The other thing I’d say is that cybersecurity, as an area of risk, is too populated with anecdotes and too short on data. And it’s too driven by worst-case scenarios rather than the everyday ones, which I think are too often the starting point for the more severe events that happen later down the line.
Tessian: How do we take steps towards that more data-driven approach, and what’s your advice to people who may agree that they find themselves swayed by headlines and the “fear factor”?
Ben: As security professionals, we sometimes have quite thankless roles in an organization, so bringing a bit of excitement and interest to the work is an appealing part of the job, but it sometimes adds a bit of “mythology.”
The point is that the most effective interventions are some of the most boring and the most mundane. By that, I mean—if you look across all of the most severe insider incidents of the last “x” years, effective line management would have been one of the key mitigations.
Effective line management, good pastoral care, a good understanding of employee wellbeing, good performance management processes, and basic controls around access, audit, and monitoring.
I think because these things have existed for such a long time, and we don’t associate them with insider risks, they’re either overlooked, they’ve degraded, or they’re seen as boring; they don’t attract investment in the same way that other things do.
The goal is to bank all of that stuff, get that foundation in place, and then supplement with some of the specialist tools that are available on the market—like Tessian—where you can say, “I’ve got confidence in some of these fundamentals, now I want to take that step and really understand my enterprise and what’s happening in and out of it in a much more sophisticated way.”
Tessian: There have been a number of incidents reported in the news where disgruntled employees are being targeted by cybercriminals to assist in malicious activities. Is this something that concerns you?
Ben: I used to think about this a lot in government, where the notion of a “blended attack”—particularly in the nation-state context—is very relevant.
There’s often a misconception that a hostile state actor says, “I’m going to launch a cyberattack on the UK,” or “I’m going to compromise ‘x’ system.” In reality, they have an objective, and often cyber or remote attacks are simply the cheapest way to achieve that objective.
But in some cases, they won’t be. And a blended attack, where you use some kind of close-access technology that’s deployed by a compromised individual as a precursor to a remote attack, is a threat model that governments have to deal with.
And some of the techniques that governments can deploy against one another are absolutely crazy… the level of creativity and imagination at play… That is a very real risk in that context, and I think it’s inevitable that elements of it are going to find their way out into the commercial world.
The key consideration is: what is the cost/benefit equation that the actor is going to be relying on? And as soon as you start including vulnerable individuals, you do increase operational risks as an attacker. The ransomware groups wouldn’t care too much about that, but it’s about whether they get the pay-off they need for the level of effort they put in. And I guess, in many cases, they would.
If you just look, in more of a social context, at how teenagers and children can be blackmailed by people on the other side of the world, then there’s no reason why someone seeking monetary gain—through a ransomware attack or otherwise—wouldn’t do the same.
I haven’t seen any real evidence that it’s happening at any sort of scale, but I think having people in your organization who will report early, like we try to achieve at SAGE, is one of the most effective mitigations. There’s a sort of “no consequence” reporting rule at SAGE, and at many organizations, where we just want to know.
This Q&A was adapted from our RE: Human Layer Security podcast. You can hear the full interview here.