Fresh Thoughts #138: Generative AI Threats - Part 1

    Newsletter

How do you respond to a new and unfamiliar challenge?
Let's say you're standing at the edge of a cliff...
Getting ready to jump into the sea...

Are you the first to jump? Everything will be finnnnneeee...

Or do you let others who are braver or more foolish than you go first?
Or better still, let someone else do it and tell you what happened over a nice chat?

As in business, we are continually faced with the novel and unfamiliar in cybersecurity.
So, how do we decide on the next move?
By analysing the threats, problems, and dangers the situation presents.

Threats and dangers are present in everything we do.
The trick is to know which are significant and need to be addressed...
And which are benign and can be safely ignored.

Last week, I worked on a Generative AI usage policy for a client's ISO 27001 certification.
Investigating what information was already available, I found few answers to the question...
What threat does using generative AI pose to a small business or school?
So, I completed a threat analysis for generative AI.

I found a mix of threats.
However, what interested me was that the majority of threats were common challenges in businesses, especially those that use cloud services like Microsoft 365 or Google Workspace.
There are certainly a small number of cybersecurity threats generative AI poses—like data leakage, GDPR compliance, and plagiarism—but these appear to be the exception rather than the rule.

This week, I will briefly outline the common challenges, and next week, in Part 2, I will cover the unique threats generative AI poses.

Common Business Challenge

Threat 1: Bias and Misinformation from Generative AI Responses
If you spend any time investigating generative AI models, you will certainly have heard the phrase "gen-AI hallucinates".
AI doubters use this as a shorthand to say, "You can't believe anything generative AI creates."

It is true that tools like ChatGPT are nothing more than statistical models that predict the next word (or token) in a sentence.
But I have yet to find a "hallucination" that hasn't been resolved by asking ChatGPT a better question.

Moreover - and thinking more broadly - have you ever been lied to during a business transaction?
A half-truth or a significant detail quietly omitted?
Or had someone say something that wasn't wholly accurate?

It turns out you can't fully trust the responses from generative AI...
Just like humans. 🤷

Threat Mitigation:

  • Ask precise, well-framed questions - vague prompts invite vague (or invented) answers.
  • Verify important facts in generative AI responses before acting on them, just as you would with advice from a person.

Common Cloud IT Challenges

Threat 2: Unauthorised Access to Generative AI Systems
Using a new technology does not remove the need for cybersecurity fundamentals.
When using generative AI, you must employ multi-factor authentication, use role-based controls on who can access data, and ensure known vulnerabilities are patched.

Threat Mitigation:

  • Enable multi-factor authentication on all generative AI accounts.
  • Use role-based controls to limit who can access which data.
  • Keep systems patched against known vulnerabilities.
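To make the role-based controls concrete, here is a minimal sketch of an access check you might place in front of an internal generative AI gateway. The role names, actions, and the `check_access` helper are hypothetical - map them to whatever your identity provider actually supplies.

```python
# Hypothetical role-to-permission mapping for a generative AI gateway.
# Roles and actions are illustrative, not a prescribed scheme.
ROLE_PERMISSIONS = {
    "admin": {"chat", "fine_tune", "manage_keys"},
    "staff": {"chat"},
    "student": set(),  # e.g. a school may block direct access entirely
}

def check_access(role: str, action: str) -> bool:
    """Return True only if the role is explicitly granted the action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The design choice worth noting is the default-deny: an unknown role gets an empty permission set, so access must be granted explicitly rather than revoked after the fact.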


Threat 3: Over-Reliance on the Availability of Generative AI
Conversations about "the end of work" and the need for universal basic income heighten the emotion around our reliance on generative AI within our businesses.
But like the telegraph, railways, electricity, and cloud computing - what starts as a strategic competitive advantage becomes, in the short to medium term, simply a cost of doing business.

So, a better comparison and framing of the issue is - What happens to your business if there is a power outage or the phone lines don't work?

Modern life requires us to outsource critical services.
When they are disrupted, we feel their absence keenly.
That said, to remain competitive, we cannot abandon this model of working.

Threat Mitigation:

  • Maintain resilience in the event of failure. This may be by using different suppliers or models.
  • Monitor service level performance and ensure availability is considered when purchasing generative AI solutions.
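The first bullet - resilience through different suppliers or models - can be sketched as a simple fallback chain. The two provider functions below are hypothetical stand-ins for real vendor clients, assumed to return a text completion or raise an exception during an outage.

```python
def complete_with_fallback(prompt, providers):
    """Try each provider in order; return the first successful response.

    `providers` is a list of callables (e.g. wrappers around different
    generative AI vendors or models), tried in priority order.
    """
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as err:  # e.g. timeout or service outage
            last_error = err
    raise RuntimeError("All generative AI providers failed") from last_error
```

Listing providers in priority order lets you keep a preferred (perhaps cheaper or better-performing) model first, with an alternative supplier absorbing outages.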



Threat 4: Shadow IT Usage of Generative AI
In the era of cloud computing, we found that staff members would vote with their credit cards, buying valuable services and claiming them back on expenses.
If you don't have a generative AI strategy and cybersecurity policy, there is a strong chance someone in your business is already using generative AI... and has not considered its threats.

Threat Mitigation:

  • Investigate, discuss, and decide on your business's approach to generative AI.
  • Make your solution easily accessible to anyone who wants to use generative AI.



Threat 5: Extraction of Confidential Information from Generative AI Training Data
An attack known as a model inversion attack can reconstruct the data used to train the underlying AI model.
While this is mainly a concern for businesses like OpenAI, Anthropic, and Google - if your proprietary data has been included within the training set, this could become a problem.
This may occur when you fine-tune an AI model to produce content with your brand voice or submit your data to generative AI vendors as part of your generative AI usage.
It is crucial to understand that the data used to train the model can, under certain circumstances, be reconstructed and accessed by anyone else who uses the generative AI model.

Threat Mitigation:

  • Carefully review the data provided to generative AI vendors during model fine-tuning and day-to-day usage. This is the fundamental question we will answer in Part 2.
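One practical way to "carefully review the data" is to redact obvious identifiers before text leaves your business for a vendor's API. The sketch below is illustrative only - the two patterns catch email addresses and phone numbers, and a real deployment would need a far more complete filter.

```python
import re

# Illustrative patterns only - not a complete PII filter.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s()-]{8,}\d")

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens before the
    text is submitted to a generative AI vendor."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text
```

Run data through a filter like this at the point of submission - both for day-to-day prompts and, especially, for any dataset handed over for fine-tuning.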

Final Thoughts

While generative AI can appear to be fantastical in its capabilities, many of the cybersecurity threats posed by its use are rather mundane. They are similar to what you would experience using any cloud service.
That said, a couple of threats need close attention, and I will cover those next week.

October 1, 2024
5 Minutes Read
