Late last week, I read the NCSC's report, "The near-term impact of AI on the cyber threat".
The summary...
There will be trouble ahead - for at least the next 2 years.
Cyber attacks will likely become more impactful and scalable - with less-skilled, opportunistic cyber criminals gaining the most.
More plainly,
"To 2025, [generative AI] and large language models (LLMs) will make it difficult for everyone, regardless of their level of cyber security understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing or social engineering attempts."
So, now we must assume that every phishing link will be clicked.
And defend our schools, businesses and data on that basis.
It's easy to forget just how new the impact of generative AI and large language models is.
Very new.
The current boom and hype in generative AI started when OpenAI launched GPT-4 in March 2023.
That is less than a year ago.
Unfortunately, even though the conclusions were unsurprising, they made for uncomfortable reading.
It made me think...
If the cyber threat is going to escalate so much, how will the cybersecurity industry respond?
We should be seeing mass experimentation.
Looking at what is possible with this new technology.
After all, generative AI is a genesis idea.
But are we?
Last Thursday evening, I spent a few hours investigating.
I want to say I found a shining beacon of hope during my research.
But I didn't.
Rather than find experimentation and investigation, I found product and service pitches.
The clearest example I saw was a panel discussion from the ProducTank London meetup.
An AI Product Manager from PwC stressed the importance of understanding the return on investment before starting a new project.
Which is undoubtedly critical for mature, established technologies...
But we're talking about generative AI...
Conversations about investment don't have a place in experimentation.
And while the offensive, cybercriminal side of cybersecurity has always been a place of experimentation and continuous learning - as the NCSC report outlines - it seems some defenders are looking to skip the vital experimental phase and head straight for launching the next billion-dollar company.
Missing valuable opportunities to advance cybersecurity defences along the way.
The time will come when generative AI is a mature technology, and investment returns can be discussed at programme boards.
But right now, we need to learn what works and what does not.
And this appears to be missing on the defensive side.
This leaves us relying on time-tested methods to prevent phishing and ransomware.
There are no silver bullets, but ensuring our [cybersecurity fundamentals are in place] is crucial to business resilience.
It's about getting the basics right.
And - if you have the functionality - enforcing your business process rules with Conditional Access policies.
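As an illustration (this is my sketch, not something from the NCSC report), here is roughly what one such guard-rail looks like in Microsoft Entra ID: a Conditional Access policy requiring MFA for all users on all apps, deployed in report-only mode first so you can audit the impact before enforcing it. The payload shape follows the Microsoft Graph `conditionalAccessPolicy` schema; the policy name and the excluded break-glass group ID are placeholders you would replace with your own tenant's values.

```python
import json

# Sketch of a Conditional Access policy payload, as you would POST it to
# the Microsoft Graph endpoint /identity/conditionalAccess/policies.
# All IDs below are placeholders - substitute your tenant's real values.
policy = {
    "displayName": "Require MFA for all users on all apps",
    # Start in report-only mode: the policy is evaluated and logged,
    # but not enforced, so misconfigurations surface safely.
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {
            "includeUsers": ["All"],
            # Placeholder object ID for a break-glass / emergency-access
            # group, excluded so a bad policy can't lock everyone out.
            "excludeGroups": ["00000000-0000-0000-0000-000000000000"],
        },
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["mfa"],
    },
}

print(json.dumps(policy, indent=2))
```

The report-only state is the experimental mindset applied to defence: observe what the policy would have blocked, fix the false positives, then flip `state` to `"enabled"`.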
I suspect the tiniest policy mistakes and misconfigurations will become crucial in 2024 and 2025.