When Amazon revealed that an outage late last year was linked to its use of AI tools, it came as a surprise. If one of the most technologically advanced organisations in the world can stumble, the lesson isn’t that AI is fragile. It’s that scaling autonomy is an operating model challenge, not a tooling one.
For marketing leaders eager to embrace agentic AI, the real risk isn’t falling behind. It’s accelerating without redesigning the business to support it.
The AI realisation gap
There’s a widening gap between experimenting with AI and realising value from it at scale. New tools matter, but they are rarely the constraint. More often, it’s the organisation itself: its structures, incentives and decision-making frameworks.
In our work on an agentic marketing framework, we look at how organisations evolve from simple AI assistance through to more autonomous, workflow-embedded agents. But progression isn’t linear, and it certainly isn’t purely technical. It requires clarity around guardrails, governance and the role humans continue to play.
Take any financial services brand. In financial services, governance defines an organisation's boundaries, including AI's: from data access and customer engagement to the claims a brand is permitted to make. Operating in a regulated industry means that providing financial advice, offering guidance, or making unsubstantiated claims carries clear legal and compliance implications.
In this context, governance cannot be an afterthought. If approvals are bolted on at the end of the process, AI initiatives will either stall under scrutiny or expose the organisation to avoidable risk. The only sustainable path is to design workflows that embed review and compliance from the outset – operationalising governance rather than retrofitting it.
And as organisations move closer to true autonomy, those stakes only increase.
Being honest about "agency"
One of the challenges in the current discourse is language. Not everything labelled “agentic” truly is.
Amazon’s experience shows what can happen when autonomous agents are deployed without sufficient safeguards. In marketing, many so-called “agents” are still trained models operating within narrow constraints. There’s a difference between assistance and autonomy, and the risk profile shifts dramatically as you move along that spectrum.
There’s enormous excitement about true agency. But few organisations have fully embraced what it implies: delegated decision-making, dynamic action, and cross-system execution.
Introducing this level of autonomy may create new organisational problems, but the bigger issue is that it exposes and amplifies the ones that already exist. Silos between sales and marketing. Disconnected data ownership. Misaligned incentives. Unclear accountability.
This disconnect has real economic consequences. Harvard Business Review estimates that sales–marketing misalignment costs organisations more than $1 trillion annually, and the Ehrenberg-Bass B2B Institute finds that only 16% of companies achieve strong alignment between the two functions. In many B2B organisations, marketing is still viewed primarily as a cost centre rather than a contributor to revenue. Demonstrating marketing’s impact on pipeline and growth therefore becomes not just a reporting exercise, but an operating model challenge.
AI agents are often framed as the connective tissue linking data, workflows and decision-making across teams. But if incentives, ownership and definitions of success remain fragmented, autonomy simply amplifies those tensions rather than resolving them. An agent cannot operate effectively across functions that are not already aligned.
That’s why the foundational work is so critical. Before increasing autonomy, organisations must align teams, processes and governance frameworks, embedding regulatory imperatives and cross-functional accountability into the operating model itself, before accelerating, scaling and automating.
Organising for adoption in an agentic era
Most organisations don’t lack AI capability. They lack a clear model for embedding it. Some are building centralised Centres of Excellence to develop and scale use cases. Others allow experimentation at the edges. A smaller number are redesigning workflows altogether – integrating AI into how work actually happens, rather than layering tools on top.
These differences often depend on whether transformation is department-led or truly enterprise-wide. An IT team may define “done” as 99% accuracy. For marketing, that remaining 1% can mean the difference between brand trust and reputational risk. A tool that works technically but isn’t trusted will never scale – and ultimately won’t perform.
In decentralised organisations, mandates rarely work. Many AI programmes begin as cost-reduction initiatives, often driven from the CFO organisation with the promise of significant marketing efficiencies. But if those efficiencies simply translate into budget cuts, local teams will resist, regardless of the AI’s quality.
Adoption accelerates when the value flows back to the teams using it. When efficiency gains are reinvested – for example into increased media spend or higher-impact activity – AI becomes an enabler of growth, not just a lever for savings. That dissolves the tension between top-down efficiency and bottom-up effectiveness.
Many of our clients’ approaches reflect this balance. Whether a brand operates in a regulated industry or under tight internal governance, AI must align with governance, risk appetite and brand standards – but it must also fit the way teams actually operate. That means embedding review, compliance and stakeholder input into workflows from the outset.
In practice, many high-impact use cases are not yet fully autonomous agents, but tightly controlled workflow automations: adapting templates, generating copy from approved sources, reducing manual friction. The AI component may be modest, but the operational redesign is where the value sits.
Start with one problem
The temptation with agentic AI is to deploy it everywhere, automate entire functions and activate transformation in one sweeping move.
But the most effective programmes begin by identifying a single, high-value problem – a point of friction in the workflow that meaningfully impacts performance. Solve that well, and adoption of wider-reaching AI initiatives will follow.
Because success in AI is not measured by how sophisticated the model is, or how ambitious the roadmap sounds. It is measured by whether the business uses it – at scale and with confidence.
Agentic AI doesn’t reward speed. It rewards alignment, focus and operational discipline.
David Stocks
David Stocks is Head of Strategy at WongDoody.


