Controlled Agility Closes the AI Ambition Reality Gap

Controlled Agility is emerging as the defining principle for organisations looking to turn AI ambition into measurable business outcomes. Vibhu Kapoor, Regional Vice President, Middle East, Africa and India at Epicor, explains why balancing speed, governance, and security will be critical as enterprises move from AI experimentation to real operational impact in 2026.

Through 2025, many organisations realised that while AI delivered genuine value, it often fell short of the level of transformation expected. Much of the progress came in pilots and isolated wins that scale poorly. The gap between intent and impact remains wide. As 2026 begins, expectations have therefore changed. Boards want results, not experiments. Technology leaders are being asked to prove value in ways that are measurable, operational and safe. Most importantly, they need to do it fast.

Across industries, there’s still a huge amount of AI value left on the table. This year, the focus of AI endeavours will therefore be on closing this gap, which will require a well-thought-through strategy.

Breaking Bureaucratic Barriers

One of the biggest hurdles of 2025 wasn’t AI capability but organisational inertia. Many companies treated AI like any major IT upgrade, burdening it with slow governance cycles, multi-layer sign-offs and lengthy approval processes. That approach may work for large infrastructure projects, but it suffocates AI’s ability to iterate and improve.

In 2026, the organisations that make progress will be those that break from that pattern. The first six to twelve months of an AI programme are about gaining traction, learning what works and securing early wins that build confidence. That demands agility, not bureaucracy. It means creating space for teams to experiment, fail safely and adjust quickly. Without this, even the most promising AI initiative is likely to stall before it shows meaningful value.

From what we see in practice, even organisations that break the pattern can still find value in familiar metrics for measuring success. Cycle times, processing accuracy and clear productivity improvements give leaders something tangible to point to. When a reporting process becomes 25% faster or an order-to-cash cycle shortens materially, scepticism fades. AI stops being an abstract concept and becomes something that genuinely improves operational performance.

The organisations that progress fastest in 2026 will be the ones that deliberately loosen old structures while keeping governance proportionate.

Recalibrating Roles

Amid this shift, the nature of IT and operational roles is changing. Despite dramatic headlines, AI isn’t replacing IT teams. Rather, it’s reshaping what they do. While routine work becomes automated, the need for skilled people who understand how to apply AI only grows.

The most valuable IT professionals in 2026 will be those who can operationalise AI, understanding how models behave, how to troubleshoot outputs, and how to redesign workflows to take advantage of automation. These skills don’t require a PhD in machine learning. They require curiosity, hands-on practice and the confidence to use AI as a tool rather than something to fear.

This is where organisational leadership becomes critical. Teams need access to AI tools, guidance on responsible use and clear examples of how AI can improve daily work. The ERP analyst who automates data checks, the finance manager who uses AI to handle month-end reconciliation or the warehouse supervisor who optimises scheduling through predictions will be the real markers of progress. They build internal momentum far more effectively than grand gestures.

A Deepening Threat Surface

Of course, as AI becomes woven into everyday work, the threat landscape inevitably expands. One of the most urgent challenges of 2026 will be defending against AI-generated fraud, which will be more advanced, more convincing and harder to detect than traditional attacks. Deepfake voices, fake video and AI-generated messages can mirror legitimate behaviour so closely that classic security tools struggle to distinguish them. By the time something feels suspicious, the window for response may already have closed.

This is why ambition must be balanced with strong governance. Controls that protect systems and data remain non-negotiable. Identity management, access control and data protection still form the backbone of safe AI adoption. Yet as regulations evolve and models become more agentic, the scope of those controls must expand. Organisations will need transparency around model training, data lineage and how AI-driven decisions are reviewed.

Visibility is equally critical. AI increasingly operates behind the scenes, automating tasks or orchestrating workflows without users noticing. That invisibility can be a strength, but only when IT teams fully understand how AI integrates with core systems, especially enterprise platforms like ERP. If teams can’t explain where data is flowing, who has access or what actions an AI system is authorised to take, then the organisation simply cannot secure it.

Controlled Agility

2026 promises to be a year in which AI is defined by performance. The difference between experimentation and impact will come down to execution. The organisations that pull ahead will be those that create the right conditions for speed, empower their people to work smarter and reinforce security in line with new risks.

By Vibhu Kapoor, Regional Vice President – Middle East, Africa & India, Epicor.