I continue to be astounded by the capabilities of AI systems like ChatGPT.
But I see a dark cloud on the horizon...
We must stop worrying about the quality of AI output and instead focus on whether its sheer abundance is aligned with our goals and business objectives.
For the last few weeks, I have continued to use ChatGPT to assist in various business tasks.
A friend who is considering expanding their farm into organic duck eggs asked for help.
It took ChatGPT just 68 seconds to create the analysis.
Some nuanced best-practice details required fixing, but once we had finished reading, it was clear that the analysis was competent and useful.
And therein lies the issue...
Once we had finished reading...
Reading, reviewing, and evaluating AI output takes far longer than creating it.
The significant effort in a task now lies in review and reflection, not creation.
The opposite of what has traditionally been the case.
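To put rough numbers on that asymmetry, here is a quick back-of-envelope sketch. The 68-second generation time comes from the duck-egg analysis above; the 30-minute review time is my own assumption, purely for illustration:

```python
# Back-of-envelope sketch of the create/review asymmetry.
# The 68-second generation time is from the duck-egg analysis above;
# the 30-minute review time is an assumed, illustrative figure.

generation_seconds = 68
review_seconds = 30 * 60  # assumption: a careful human review takes ~30 minutes

ratio = review_seconds / generation_seconds
print(f"Review takes roughly {ratio:.0f}x longer than creation")
# -> Review takes roughly 26x longer than creation
```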
This situation leads to the crucial question...
Is this abundance of excellent content helping advance the unique goals of our business or school?
Over the weekend, I used ChatGPT to help create a financial analysis tool.
On Saturday morning, I thought everything was going well—until I realised that the hundreds of lines of functional, well-documented code were not aligned with the project's objective.
I had gone down the wrong path.
It was useless and unproductive.
However, cognitive biases like the sunk cost fallacy immediately kicked in.
I briefly considered changing the project's objectives, so I didn't need to scrap the work.
I was anchoring on the hundreds of lines of code, which "must have taken days to write...".
But in reality, the code had taken seconds to create, and restarting development was essential.
With hindsight, it became clear this was a learned reaction.
The cost of creating new software is usually substantial.
Development teams are often measured against KPIs such as lines of code produced or development tasks completed.
In this situation, the worst thing you can do is scrap work and start again.
The traditional way to solve this problem while preserving the KPIs is to force the existing source code in a new direction.
The situation reminded me of Rory Sutherland's rant on managing KPIs.
In summary, Rory sees a trend for businesses to focus too heavily on, and reward, standardised bureaucratic procedures, such as the amount of code produced, rather than the actual outcomes or qualities they aim to achieve.
With AI, meeting traditional KPIs becomes a breeze.
However, the unintended consequence of focussing too heavily on KPIs can be to efficiently and effectively reach the wrong business outcome.
In cybersecurity, when we see a small, simple request that takes a long time to process, we know what will happen next...
A Denial of Service.
The system will be flooded with requests, and the processing backlog will grow until it becomes overloaded and can no longer function.
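The mechanics are easy to sketch. Below is a minimal, illustrative simulation; both rates are assumptions chosen to make the asymmetry obvious, not measurements from any real system:

```python
# Minimal sketch of a Denial of Service: requests arrive faster than
# they can be processed, so the backlog grows without bound.
# Both rates are assumed, illustrative figures.

arrivals_per_second = 100   # a request is trivial to send...
processed_per_second = 10   # ...but costly to process

backlog = 0
for second in range(1, 6):
    backlog += arrivals_per_second - processed_per_second
    print(f"After {second}s: {backlog} requests queued")

# After 1s: 90 requests queued
# After 2s: 180 requests queued
# ...the queue only ever grows, until the system can no longer respond.
```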
With AI, I see the same asymmetry in the time it takes to create and evaluate content.
Unfortunately, in this case, the system that becomes overloaded will be the staff within the business.
If this leads staff to effectively rubber-stamp work without checking its alignment with the business's goals, the outcome will be disastrous.
Actively managed KPIs can help.
But only if they are fully aligned and representative of the stated business objectives and outcomes.
Are your KPIs aligned?