AI Risk Rises as Traditional Governance Falls Short

As they accelerate AI adoption, enterprises have leaned heavily on traditional governance models built for predictable systems and static workflows. But as AI becomes embedded in real-time decision-making across functions, these legacy approaches are struggling to keep pace with rapidly evolving risks. As Guru Sethupathy, GM of AI Governance at Optro, notes, the gap between governance frameworks and actual AI usage is widening across the enterprise.

For the past two years, most conversations about artificial intelligence in the boardroom have centred on one question: Are we adopting it quickly enough? Executives have worried about falling behind competitors, missing productivity gains, or failing to unlock new business models. As a result, budgets have been unlocked, and adoption has accelerated.

But as AI becomes embedded across the enterprise, a different question is quickly emerging. And it’s one that many leadership teams are far less prepared for. The issue is governance.

The ‘Out of the Box’ Nature of AI Adoption

Consider how AI is actually entering the enterprise. Business units are under pressure to demonstrate tangible use cases and ROI. As a result, engineering teams are using AI to code. Marketing teams may be experimenting with generative tools to produce campaign content at speed. HR departments could be deploying AI-powered self-service assistants to handle employee queries. Finance teams are likely exploring forecasting models that analyse vast datasets in seconds.

Alongside these formal initiatives, another layer of adoption is unfolding. Employees are turning to AI tools on their own. Through unsanctioned channels, they are leveraging AI to summarise reports, draft emails, analyse spreadsheets or generate ideas. This phenomenon, commonly called shadow AI, is spreading quickly because it solves an immediate problem: it helps people work faster.

Furthermore, almost every vendor tool now has, or claims to have, AI embedded in it.

Taken together, the variety of ways that AI can enter an organisation reveals something important. AI is no longer a tool that sits neatly inside a specific IT system or with just one particular team. It is increasingly embedded in the workflows where work actually happens across all corners of the organisation. And that subtle shift changes the nature of business risk.

Rethinking Risk in the AI-Embedded Enterprise

For decades, organisations have managed technology risk by governing systems. New technologies are approved, deployed, secured and monitored through structured processes. Policies define acceptable use, training reinforces expectations, and oversight sits across teams such as IT, risk management and compliance. This model has worked well because most enterprise technologies behaved predictably.

AI, however, introduces new complexity. AI can be a sprawling technology system or a simple predictive model. It can be the entire product or a small feature within one. The same AI tool can be used in a multitude of ways, from the mundane to the very risky. And AI agents can execute tasks autonomously, in ways that are often opaque.

In other words, the primary risk surface includes both the AI technology itself and the human behaviour and judgement surrounding it. Point-in-time checks are not sufficient; continuous monitoring is crucial. This is why traditional governance approaches often fall short.

Policies typically sit in documents employees rarely revisit. Training often happens months before someone encounters a real-world scenario involving AI. By the time an employee is deciding whether to paste sensitive information into a generative model or rely on an automated analysis, those guardrails are far removed from the moment the decision is made.

When technology becomes embedded directly in the moment decisions occur, risk management also needs to evolve.

The Ambiguity Around Accountability

Complicating matters further is the way responsibility for AI is often structured. Because the technology touches so many aspects of the organisation, oversight tends to be distributed across multiple functions, including IT, security, legal, compliance and executive leadership.

While the intention is shared ownership, the outcome is often fragmented visibility. No single function has a complete view of how AI is being used across the business, and no single authority consistently has the power to act quickly and decisively when concerns arise. Responsibility becomes diffused at the precise moment when clarity is most needed.

A Leadership Shift

In boardrooms, the conversation around AI needs to evolve. Instead of asking, “Where should we deploy AI?” CEOs increasingly need to ask a different question: “How are we ensuring AI is being used safely and responsibly across the everyday flow of work?”

In practical terms, that begins with establishing clear authority when something goes wrong. This includes defining who has the ability to pause or shut down an AI system and ensuring escalation paths are understood before incidents occur.

It also means moving acceptable-use principles out of static documents and closer to where work actually happens, embedding guardrails within the systems employees use rather than relying on guidance that sits outside the workflow.

Training must evolve as well. Instead of one-off programmes delivered long before employees encounter AI tools, organisations are beginning to explore ways of delivering guidance in context, at the moment decisions are made, when the risk of misuse is highest.

And perhaps most importantly, many companies are recognising the limits of manual oversight. Periodic reviews and fragmented reporting processes struggle to keep pace with how quickly AI usage evolves. Continuous controls, automated monitoring and integrated risk visibility are becoming essential to maintaining awareness across the enterprise.

Balancing the New Risk Equation

All of this represents a shift in mindset, capabilities, and technology around governance. For years, organisations have successfully governed technology deployments. AI, however, cannot be treated in quite the same way. Instead, leaders must focus on understanding and addressing how the technology augments human decisions inside the everyday flow of work.

The true differentiator moving forward is not an organisation's AI itself (every organisation will have access to similar technology stacks) but its capability to manage and govern that AI. Organisations that build this capability will move faster while managing risk better, and that will be the winning formula.

By Guru Sethupathy, GM of AI Governance at Optro