Early in my career, the scale of cybersecurity data was illustrated as follows:
"Imagine all the security events that impact us as the size of Manhattan, NYC. Then the amount machines can process is about the size of a laptop. And the amount a human can review is about the size of a postage stamp."
Filtering the Noise
The first purpose of technology in cybersecurity is to filter out the noise: the lowest-quality, untargeted attacks.
These attacks will create problems for the unprepared, but even the smallest security effort will block them and keep you secure. Take, for example, the risk that cybercriminals could access your sensitive shared folders over the internet.
This risk is why your laptop and home internet connection have firewalls: an often automatic and unnoticed technology solution to a genuine security and business risk, maintaining the privacy of sensitive data.
With risks like open file shares, we often find out the hard way. Someone steals the data, and only then do we create a technology defence to block the attack.
Unfortunately, we usually have to relearn these lessons for each new technology. For example, the move to cloud service providers turned local file shares into "S3 buckets" on Amazon AWS.
Rapid7 warned of the problem with publicly accessible S3 bucket "file shares" almost 10 years ago in 2013.
This prompted AWS to make the default configuration private. So files were not accessible over the internet… a firewall of sorts.
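Today, that "firewall of sorts" can also be enforced explicitly. As a sketch, assuming the AWS CLI is installed and configured (the bucket name here is a placeholder), Amazon's S3 Block Public Access feature switches off public reads at the bucket level:

```shell
# Sketch only: "example-bucket" is a hypothetical name, and this
# requires an AWS account with credentials configured for the CLI.
aws s3api put-public-access-block \
  --bucket example-bucket \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```

Like the home firewall, it is a one-time configuration decision that then filters attacks without any further human judgment.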
Despite this technology configuration, in 2021, Sennheiser was the victim of a data breach caused by a publicly accessible S3 bucket file share. The details of 28,000 customers were accessible to anyone with an internet connection.
Open file shares remain one of the most common sources of data breaches.
Whether it's 9 am or 4 pm doesn't matter to an algorithm. Deciding which attacks to filter can be a simple decision, and the same information will result in the same judgment.
For people - this isn't the case.
A key example of decision fatigue comes from a study of Israeli parole hearings. Two highly similar cases were heard, one first thing in the morning and one late in the afternoon. The first prisoner was granted parole, but the second was not, despite the almost identical details. The researchers concluded that the cumulative effect of the day's decisions had adversely impacted the second ruling.
The idea of decision fatigue was first raised by Freud but only empirically confirmed by Roy F. Baumeister in 1998. Put simply, everyone has only a finite amount of energy to put into decision-making. Once it is spent, the person makes increasingly irrational decisions.
This is why Obama famously said, "You'll see I wear only gray or blue suits, [..] I'm trying to pare down decisions."
So if either a human or technology is capable of deciding what action to take on a security event, technology should make the decision. You'll get more consistent results that way.
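A minimal sketch of such a deterministic filter, in Python. The event fields, sources, and verdicts here are illustrative assumptions rather than any real product's schema; the point is simply that identical input always yields an identical verdict, with no morning-versus-afternoon effect:

```python
# Sketch of a deterministic security-event triage rule. All field
# names ("source", "severity", "signature_matched") and verdicts
# are hypothetical, invented for illustration.

def triage(event: dict) -> str:
    """Return a verdict based only on the event's contents.

    No clock, no fatigue: the same event gets the same answer
    whether it is evaluated at 9 am or 4 pm.
    """
    # Untargeted background noise: filter it out automatically.
    if event.get("source") == "internet_scan" and event.get("severity") == "low":
        return "filter"
    # Known-bad signature: block automatically.
    if event.get("signature_matched"):
        return "block"
    # Anything unfamiliar goes to a human analyst.
    return "escalate"

noise = {"source": "internet_scan", "severity": "low"}
print(triage(noise))                    # the routine noise is filtered
print(triage(noise) == triage(noise))   # identical input, identical verdict
```

Because the function consults nothing but the event itself, it never suffers the decision fatigue described above; it spends none of anyone's finite decision-making energy.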
Expect the Unexpected
"To expect the unexpected shows a thoroughly modern intellect." - Oscar Wilde.
Some argue that "humans are not very good at reasoning about low-probability events". People find it hard to fully understand the chance that something will actually happen. It's why people play the lottery and otherwise gamble.
In contrast, a technology algorithm can easily reason with tiny chances and draw a watertight, logical answer.
The problem with this view is that humans don't always need to logically reason to make a decision. It's why people have been able to drive a car for 140 years, whereas technology is still learning.
Instead of logically reasoning about the actions of another driver, people use intuition. In this subconscious process, experience allows people to pattern match and come up with a range of reasonable decisions.
Notably, the experiences we draw on don't need to be first-hand. Instead, they can be learnt from reading, listening or observing.
So even in complex environments, where fewer than 1 in 100 emails is a phishing attempt, humans can make decisions and take actions… albeit in a different manner than technology.
The Novel and New
But what if the chance of something happening wasn't known?
What if something was seen for the first time, and the chance of it occurring was unknowable?
This is something people experience daily. Not just drivers but anyone who leaves the house.
Without known probabilities, technology is lost and must resort to picking at random. In contrast, people can fall back on their intuition and the closest pattern they can match.
This process was the root cause of this entire piece. The phishing email received "didn't feel right". Intuition suggested there was something bogus about the request to change bank details.
What was novel and new was that the "sense of urgency" and "authoritative tone" usually found in phishing emails weren't there. If anything, the phishing email was courteous and written in the style of the alleged sender. This was not something the recipient had seen before from a scammer.
Yet, despite this, the email recipient decided something wasn't right and reported the message as a phishing attempt. At the same time, the existing technology was none the wiser.
Technology's strengths lie in the large-scale, purely rational filtering of cybersecurity alerts and messages. When a situation is known and well understood, technology should be used to avoid spending the finite decision capacity of people.
However, if something is novel and new - only humans can have the answer.