Following last week's newsletter, a customer asked about enhancing their email security. We reviewed the risks - and found that only a subset of the team needed enhanced protection.
(Remember: The best way to secure a cybersecurity budget is to right-size the request to the risk and business needs.)
So we moved on to reviewing the available solutions and risk mitigations. The customer already has the foundations in place: security awareness training, phishing simulations, SPF, DKIM and so on.
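As a reminder of what those foundations look like in practice, here is a minimal sketch of SPF and DKIM DNS records - the domain, selector and key below are placeholders, not the customer's actual configuration:

```text
; Hypothetical DNS TXT records for a domain called example.com
example.com.                       TXT  "v=spf1 include:_spf.google.com ~all"
selector1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSq...truncated public key..."
```

The SPF record declares which servers may send mail for the domain; the DKIM record publishes the public key that receiving servers use to verify message signatures.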
So how can we enhance the protection?
Will it be a technology solution or one based on working with people?
A technological approach can be a great solution. But technology is always flawed.
Technology is great at one thing: following rules.
At their core, all technology solutions follow a set of rules - an algorithm. Often these algorithms, created by the solution's developers, have deep knowledge baked into them.
Recently, these algorithms have increasingly been labelled as Artificial Intelligence - AI. AI often sounds impressive, but it may be less astounding in reality.
It's always worth remembering that even algorithms based on "If X happens, then do Y" rules are classed as artificial intelligence, because they qualify as "expert systems".
Despite best efforts, technical algorithms are rigid. They are excruciatingly rational, and unless explicitly programmed to look for exceptions or variance, an algorithm cannot move to a different way of working.
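To make that rigidity concrete, here is a minimal sketch of an "If X happens, then do Y" email filter - the rules, keywords and domains are invented for illustration, not taken from any real product:

```python
# A minimal sketch of a rule-based ("if X, then Y") email filter.
# Every rule below is illustrative - invented for this example.

SUSPICIOUS_KEYWORDS = {"urgent", "verify your account", "password expired"}

def classify(subject: str, sender_domain: str, trusted_domains: set[str]) -> str:
    # Rule 1: quarantine anything containing a known-bad phrase.
    if any(keyword in subject.lower() for keyword in SUSPICIOUS_KEYWORDS):
        return "quarantine"
    # Rule 2: flag senders outside the trusted list.
    if sender_domain not in trusted_domains:
        return "flag for review"
    # No rule matched - the algorithm has no way to say "this feels wrong".
    return "deliver"

# A novel lure sent from a compromised, trusted account matches no rule
# and sails straight through:
print(classify("Q3 invoice attached", "example.com", {"example.com"}))
# -> deliver
```

However many rules we add, the filter can only ever answer the questions its developers thought to ask.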
An algorithm is only as good as the person who built it.
For well-established, well-understood tasks, that won't be an issue. But for complex or evolving tasks, different approaches may offer better results - and an existing algorithm must be redesigned before it can take advantage of that new understanding.
While AI may blur the lines a little - its algorithms can appear to 'learn' new ways of working - AI remains constrained by the scope in which it operates and the data it has received.
If algorithms are powerfully rational, people are the opposite.
Try as we might to be rational, the 180+ documented cognitive biases show that people are not.
However, being non-rational can be extremely powerful.
As Rory Sutherland points out in Alchemy, people often make decisions first and rationalise them after the fact.
Cognitive biases and a lack of strict rationality help people handle the new and the novel. We see this whenever we feel something is wrong - but can't explain why or how.
That feeling often leads people to ask questions and find out more: a self-starting action that an algorithm can't take.
We have yet to decide between a technology-based and a people-based solution. We're still evaluating the options - but we're also looking at the risks in more detail.
When would technology help?
And when would a person be better placed to safeguard the company's data?
…something I'll address next week in Part 2.