We were looking to collaborate with a software development agency late last year.
We knew what we needed.
The crux of the conversation should have been about how we could collaborate to deliver the outcome quickly and effectively.
A sales call was arranged...
and it was the strangest 20 minutes I've experienced in years.
I had assumed the call would be...
Instead, there was no conversation.
There was a sales script.
That must be followed.
Under all circumstances.
Our needs must fit within the script.
After 20 minutes, we ended the call.
It was clear the idea of collaboration was a non-starter.
The call felt robotic and fake.
Even though it was a human on the other end of the phone.
This is not a unique situation.
Twenty years ago - as call centres were first moved offshore - this was a relatively common issue.
The script was more important than the customer.
The outcome was determined, and the operators were the delivery mouthpiece.
It all fell under the idea of brand integrity.
It was assumed that only highly structured scripts would produce the intended outcome.
And maintain the brand's identity and integrity.
Over time, this has improved.
Only the most junior salespeople fall into the trap of sticking doggedly to sales scripts.
Sales trainers still use scripts as the foundation, but modern training is about hitting milestones and key qualification questions.
And brand integrity is more loosely controlled.
The mistake of tight control and blindly following the script at the cost of customer experience is repeated while training AI agents.
AI agents are trained using a technique called Prompt Engineering.
This new discipline explains how the agent should operate using natural language.
As many early adopters of Prompt Engineering are developers, they carry many expectations of control and predetermined outcomes.
And, like the sales scripts and early days of offshore call centres, the results feel synthetic, fake and robotic.
However, I have found a different approach that has been more successful over the last 6 months of training AI agents for back-office tasks.
Like a sales coach - I have been training AI agents to hit milestones and work towards the outcome.
Constraints are added on what should not be done, but the details are left to the AI model to determine.
This looser control creates interesting variance, which is less robotic. However, it also finds unexpected edge cases that need to be addressed.
Astute readers will notice that this idea of looser integrity and control follows the Biba Integrity Model - commonly used in security access control design.
From Wikipedia:
In the Biba model, users can only create content at or below their own integrity level (a monk may write a prayer book that can be read by commoners, but not one to be read by a high priest). Conversely, users can only view content at or above their own integrity level (a monk may read a book written by the high priest, but may not read a pamphlet written by a lowly commoner).
And to paraphrase for prompt engineering:
An AI agent may write content that can be read by the audience but not by the prompt engineer. Conversely, an AI agent may take direction and instructions from the prompt engineer but may not be directed by the audience - lest they corrupt the agent's intent.