Generative AI can feel very human.
That is partly because the neural networks underpinning the current generation of generative AI are loosely modelled on the human brain.
Listen to the hype and you will hear that generative AI reproduces many of the positive aspects of human achievement.
But over the past year, as I have used and built products with generative AI daily, I have noticed something else.
Along with all the positives, generative AI comes with some frailties and quirks.
I've written in the past about social engineering AI - manipulating a model in much the same way that scammers manipulate people with phishing emails.
But - in a way - different generative AI models appear to have "personalities".
At the risk of anthropomorphising AI - let me explain.
Think of a time when you managed or worked with a team you knew well.
Over time, you worked out each person's strengths and relative weaknesses.
What they were good at...
And where their areas of relative weakness lay.
More specifically - you knew which tasks they should work on to be effective and which you would never give them.
In a sense - you gained an understanding of their personalities.
Over the months, I have found a similar situation with the generative AI models I use. Each model has earned its own report card - a distinct mix of strengths, weaknesses, and quirks.
Do any of these personalities sound familiar?
For the next few years, it seems unlikely that there will be one model to rule them all.
Instead, like human teams, different generative AI models will excel in various areas.
Just as with a human team, it is essential to understand each model's strengths and blend them into a high-performing team.
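To make the "team of models" idea concrete, here is a minimal sketch of a routing layer that sends each task to the model best suited to it. The model names and the strength map are illustrative assumptions - stand-ins for whatever you learn about your own models over time.

```python
# A minimal sketch of "blending" models like a team: route each task to the
# model whose strengths fit it best. The roster below is purely illustrative.

from dataclasses import dataclass

@dataclass
class TeamMember:
    name: str
    strengths: set[str]  # task categories this model handles well

# Hypothetical roster - swap in whichever models and strengths you observe.
ROSTER = [
    TeamMember("model-a", {"code", "refactoring"}),
    TeamMember("model-b", {"long-form writing", "summarisation"}),
    TeamMember("model-c", {"data extraction", "classification"}),
]

def assign(task_category: str) -> TeamMember:
    """Pick the first team member whose strengths cover the task."""
    for member in ROSTER:
        if task_category in member.strengths:
            return member
    # Fall back to a generalist if nobody on the roster specialises.
    return ROSTER[0]

if __name__ == "__main__":
    print(assign("summarisation").name)  # -> model-b
    print(assign("code").name)           # -> model-a
```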
The ideas and techniques we use today in team management will be around for a while - albeit with the participants being non-human.