
AI agents have captured the imagination of business leaders, promising to automate complex workflows, accelerate decision-making, and free employees for higher-value tasks. Yet the reality is sobering: Gartner predicts that over 40% of agentic AI projects will be cancelled by the end of 2027. The gap between promise and delivery is growing, and the root cause lies not in the technology itself, but in how enterprises are approaching autonomy.
Too many organisations deploy AI agents as if they were fully independent employees from day one, handing over control without building the trust, transparency, and structure needed for success. The result is misalignment, wasted investment, and in many cases, outright failure.
The autonomy trap
The prevailing model in many AI initiatives is to roll out agents with a high degree of autonomy and hope for rapid returns. Yet this approach frequently backfires: MIT research finds that 95% of generative AI pilots fail to deliver measurable business value, most often because they aren’t integrated into workflows or guided by clear oversight. Agents generate outputs that teams don’t know how to use, act in ways that appear opaque, or create rework rather than efficiency.
It is the enterprise equivalent of hiring a new employee without giving them a role description, training, or a manager. Left unguided, even the most capable hire will flounder. AI is no different: intelligence alone is not enough. Autonomy without direction undermines adoption.
Gradual autonomy: a smarter path forward
A more effective approach is to treat autonomy as something earned gradually, in line with user trust and proven results. The model starts with tightly scoped responsibilities and human oversight. Over time, as the system demonstrates accuracy, reliability, and alignment with organisational goals, its independence can be expanded.
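In engineering terms, this can be as simple as a gate that routes agent actions to a human until the agent has a track record. The sketch below is purely illustrative: the `TrustLedger` name, the 50-action window, and the 90% threshold are assumptions, not a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class TrustLedger:
    """Tracks how often an agent's proposals survive human review.
    All names and thresholds here are illustrative, not a real API."""
    approved: int = 0
    rejected: int = 0

    @property
    def score(self) -> float:
        total = self.approved + self.rejected
        return self.approved / total if total else 0.0

def requires_human_approval(action_risk: str, ledger: TrustLedger) -> bool:
    """Gate agent actions: high-risk actions always escalate; routine
    actions run autonomously only after a proven track record."""
    if action_risk == "high":
        return True  # critical actions stay human-approved
    reviewed = ledger.approved + ledger.rejected
    # Hypothetical policy: autonomy unlocks after 50 reviewed actions
    # with at least a 90% approval rate.
    return reviewed < 50 or ledger.score < 0.9
```

Each approval or rejection updates the ledger, so independence expands only as the record justifies it, and can contract again if approval rates fall.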
This systematic, gradual introduction of AI agents leads to more control, and in turn, more trust. Enterprises that allow AI to prove itself incrementally create a virtuous cycle: users gain confidence, adoption grows, and the AI earns greater responsibility. It is the difference between building AI replacements and building AI teammates.
Embedding AI into workflows, not bolting it on
Another common mistake is treating AI as a bolt-on feature rather than embedding it into core workflows. Generic copilots can be useful in isolated scenarios, but they often struggle to deliver consistent enterprise value because they lack context.
The most successful implementations are purpose-built agents that operate directly within existing systems and data. When AI agents are embedded into workflows, they can take on repetitive, time-consuming tasks such as surfacing insights, identifying anomalies, or generating experiment ideas. This enables human teams to redirect energy towards strategic and creative work, where judgement and empathy are irreplaceable.
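To make "embedded" concrete, here is a minimal sketch of the pattern, assuming a hypothetical daily-metrics job; the function names and the three-standard-deviation rule are illustrative, not a reference design:

```python
from typing import Callable

def post_to_dashboard(text: str) -> None:
    """Stand-in for the reporting surface the team already uses."""
    print(f"[dashboard] {text}")

def run_daily_metrics(rows: list[dict],
                      summarise: Callable[[list[dict]], str]) -> list[dict]:
    """One step of an existing reporting job. The agent is invoked
    in-line, and its output lands where the team already works."""
    anomalies = [r for r in rows
                 if abs(r["value"] - r["baseline"]) > 3 * r["stdev"]]
    if anomalies:
        # `summarise` stands in for any LLM or agent call; wiring it
        # in here embeds the agent in the workflow instead of bolting
        # on a separate copilot users must remember to open.
        post_to_dashboard(summarise(anomalies))
    return anomalies

# Toy usage with a placeholder summariser:
rows = [{"metric": "signups", "value": 40, "baseline": 100, "stdev": 10}]
run_daily_metrics(rows, lambda a: f"{len(a)} metric(s) need review")
```

The design point is the call site: the agent lives inside the pipeline, so its output arrives with context attached, rather than in a separate tool that users must consult and reconcile.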
Research supports this approach. McKinsey’s State of AI report identifies workflow redesign as the single factor most strongly correlated with bottom-line impact from AI. Enterprises that re-engineer processes to integrate agents see far higher returns than those that deploy them in silos.
Trust as the foundation of scale
No enterprise technology can succeed without user trust, and AI agents are no exception. Without it, systems are ignored, outputs are second-guessed, and adoption flatlines. Building trust requires role clarity, transparency, and human-in-the-loop safeguards. Defining exactly what the agent is responsible for, when it should escalate, and how its output will be used prevents confusion and duplication. Surfacing reasoning and evidence makes it clear how conclusions are reached. Keeping critical actions subject to human approval ensures confidence is built over time.
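One way to operationalise role clarity is to write the agent's remit down as configuration that both the system and its human reviewers can inspect. The sketch below is purely illustrative, assuming a hypothetical AgentRole structure; the field names are not drawn from any specific framework.

```python
from dataclasses import dataclass

@dataclass
class AgentRole:
    """An agent's 'role description': scope, escalation triggers,
    and approval gates."""
    responsibilities: list[str]
    escalate_when: list[str]       # conditions that hand off to a human
    requires_approval: list[str]   # actions that never run unattended
    surface_evidence: bool = True  # attach reasoning and sources to output

triage_agent = AgentRole(
    responsibilities=["summarise support tickets", "suggest routing"],
    escalate_when=["low confidence", "legal or billing topic"],
    requires_approval=["closing a ticket", "issuing a refund"],
)
```

Making the remit explicit in this way gives users something concrete to trust: they can see what the agent may do alone, when it must ask, and what evidence accompanies each conclusion.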
By treating AI agents as digital colleagues rather than simply tools, organisations create the conditions for genuine collaboration.
AI agents amplify human work, not displace it
Concerns about AI replacing human roles are frequently overstated. In practice, the highest-value use cases come from collaboration. When AI agents handle analysis, monitoring, or administrative burden, employees gain the freedom to innovate, solve problems, and make better decisions. That said, adoption alone isn’t enough. While 78% of organisations now use AI in at least one business function, only 38% provide formal training to their employees, according to McKinsey. Without that training, even useful AI tools can’t reach their potential: people may struggle to understand best practices, interpret outputs correctly, or integrate the tools into their workflows.
When the right skills and practices are in place, augmentation goes further still: AI agents can speed up experimentation by generating hypotheses, automating setup, and flagging results. In this way, they strengthen human capability rather than displace it, enabling organisations to learn faster, adapt more quickly, and build resilience.
Building AI responsibly
Enterprises can avoid the pitfalls that doom many AI agent projects by following a more measured path. Starting with high-quality, legally compliant datasets provides a solid foundation. Introducing agents gradually, with training and defined responsibilities, mirrors the onboarding process of a new employee. Assigning ownership for performance ensures accountability, while feedback loops allow human teams to share frustrations and ideas for improvement. Most importantly, initiatives should be prioritised based on whether they genuinely reduce time-to-insight or improve workflows, rather than on their novelty or surface appeal.
Looking ahead
The next 12 to 18 months will see fundamental shifts in how AI agents are deployed. Natural language interfaces will become standard entry points for complex applications, allowing employees to bypass dashboards and interact directly through conversation. Multi-modal capabilities will enable richer analysis across data types, providing more contextually relevant insights. Above all, the industry will shift from generic copilots toward outcome-driven, purpose-built agents. Enterprises that embed these agents into core functionality and allow autonomy to grow with trust will unlock sustainable competitive advantage. Those that continue to launch agents with too much independence and too little oversight risk falling into the costly cycle of hype, disappointment, and abandonment.

Ted Sfikas
Ted Sfikas is Field Chief Technology Officer at Amplitude, specialising in executive advisory, team leadership, and delivering measurable value through clear, proactive communication. At Amplitude, he supports customers with modern data strategies spanning Product, Analytics, AdTech and AI.