From Chatbots to Agents: The New Org Chart
As we move from AI models as copilots to AI models as employees, a question remains: who manages the bots?
Since its arrival in the mainstream, Generative AI has been rapidly adopted by organisations, which use it to create content, summarise text and generate images.
However, a more profound shift is underway with the introduction of Agentic AI. Where Generative AI requires a prompt and is passive in nature, Agentic AI can observe, plan and execute workflows autonomously. Consequently, leaders must rethink their org charts from the ground up: managing AI is no longer just managing software; it is managing a digital workforce.
The Shift From Right Brain Creativity to Frontal Cortex Execution
To better understand this shift, a 2025 Boston Consulting Group study offers a compelling biological analogy:
Predictive AI is the left brain, focusing on logic, optimisation and structured tasks.
Generative AI functions as the right brain, focusing on
synthesis and creativity.
Finally, Agentic AI is the frontal cortex, merging the two sides and turning probability into business impact through execution.
While Generative AI has spurred efficiency gains in knowledge production, Agentic AI unlocks value in process-heavy functions, driving efficiency where end-to-end execution defines performance.
Organisations adopting these tools are already seeing results. Indeed, one shipbuilder cut effort by 40% by using AI agents to run a complex design process, highlighting the effectiveness of this new approach.
The Zero-Based Redesign
According to the same Boston Consulting Group report, leaders must reinvent their approach to maximise efficiency gains. Rather than automating steps that already exist within the organisation, a zero-based approach can be significantly more productive: reimagining an organisation's processes from the ground up with AI agents in mind.
At the same time, as AI agents take on execution, the role of human teams will shift accordingly: rather than doing the work themselves, they will orchestrate, augment, and oversee AI agents.
The Trust Protocol
Nevertheless, a major barrier to scaling AI agents remains: trust.
No manager wants to be responsible for an AI agent that hallucinates a discount, offends a client, or invents information. Hence, to manage this risk, firms require a trust protocol that grants agents “freedom within a frame”.
Studies have outlined such a “Graduated Autonomy Framework” that
functions as a promotion path for bots:
In the first tier, the agent observes and suggests while the
human acts. There is no operational risk due to human oversight from start to
finish.
In the second tier, the AI agent stages an action, but
before execution, a human must provide the go-ahead.
In the third tier, the AI agent becomes truly autonomous,
executing actions within guardrails. Humans shift to exception handling,
wherein they only intervene when the system flags an anomaly.
Finally, in the fourth tier, the AI agent operates with
complete independence for defined low-risk workflows.
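The four tiers above amount to a simple gating policy: the higher the tier, the fewer actions require a human sign-off. As a rough sketch only, with tier names and the `requires_human_approval` helper being illustrative shorthand rather than part of any published framework, the promotion path might look like this:

```python
from enum import IntEnum

class AutonomyTier(IntEnum):
    """Promotion path for an AI agent under 'freedom within a frame'."""
    OBSERVE = 1        # tier 1: agent suggests, human acts
    STAGE = 2          # tier 2: agent stages actions, human approves each one
    GUARDRAILED = 3    # tier 3: agent executes within guardrails, human handles exceptions
    INDEPENDENT = 4    # tier 4: agent runs defined low-risk workflows unattended

def requires_human_approval(tier: AutonomyTier, flagged_anomaly: bool) -> bool:
    """Decide whether a human must sign off before an action executes."""
    if tier <= AutonomyTier.STAGE:
        return True               # tiers 1-2: a human is always in the loop
    if tier == AutonomyTier.GUARDRAILED:
        return flagged_anomaly    # tier 3: humans intervene only on flagged anomalies
    return False                  # tier 4: complete independence for low-risk work
```

The design choice worth noting is that the gate is monotone: promoting an agent to a higher tier only ever removes approval checkpoints, which is what makes the framework read like a promotion path.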
The 10/20/70 Rule
Implementing such a framework requires reallocating resources according to the 10/20/70 rule: 10% of effort goes into algorithms, 20% into the tech backbone, and 70% into people and processes.
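As a concrete illustration of the split, the rule can be applied to any transformation budget; the function name and category labels below are illustrative assumptions, not terminology from the report:

```python
def allocate_effort(total_budget: float) -> dict:
    """Split an AI transformation budget per the 10/20/70 rule:
    10% algorithms, 20% tech backbone, 70% people and processes."""
    return {
        "algorithms": round(total_budget * 0.10, 2),
        "tech_backbone": round(total_budget * 0.20, 2),
        "people_and_processes": round(total_budget * 0.70, 2),
    }
```

For a 1,000,000 budget, this allocates 100,000 to algorithms, 200,000 to the tech backbone, and 700,000 to people and processes, underlining that the bulk of the investment is organisational rather than technical.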
Ultimately, future org charts will not just list human managers
and subordinates but include outcomes managed by human-agent teams. While
managing people will remain important, managing the “freedom within a frame”
for agents is likely to be equally critical.