AI Governance Gap Exposes Enterprise Risks
News Desk


AI governance is emerging as a critical concern: 85% of enterprises report that artificial intelligence is now central to their business strategy, whether deployed across multiple functions or embedded in core operations. However, new research from Optro, formerly AuditBoard, reveals a structural mismatch at the heart of enterprise AI governance.

According to the study, governance frameworks designed to oversee technology systems are now being applied to human behaviour. As a result, organisations face gaps that leave the most significant AI risk surface largely unmanaged.

Furthermore, findings from Optro’s 2026 Risk Intelligence Report, titled “The AI Oversight Gap: Adoption is Scaling. Governance Controls Aren’t,” show that while enterprises are accelerating AI adoption, their greatest risk exposure lies in employee interaction with AI systems.

More than a third of respondents, or 34%, identified staff inputting sensitive data into AI tools as the primary driver of risky AI usage. In addition, 21% cited insufficient employee training, while another 21% pointed to pressure to move quickly as a key factor behind unsafe AI practices.

This behavioural risk is further intensified by fragmented governance structures. Responsibility for AI oversight is widely distributed, with no single function holding clear ownership. The IT department accounts for the largest share at 25%, followed by risk management at 18%, cross-functional governance at 17%, and dedicated AI governance teams at only 10%.

Incident response responsibilities are similarly dispersed. Around 29% fall under risk, compliance, and internal audit functions, while 27% are handled by executive leadership and 24% by IT and engineering teams. The remainder is spread across other departments.

Moreover, authority to shut down AI systems is not centralised. It is shared among leadership, risk, IT, compliance, and security teams. Consequently, many organisations lack a clearly defined operational “kill switch.”

At the same time, the impact of this governance gap is becoming more evident. Over the past 12 months, 40% of organisations reported inaccurate AI outputs. Additionally, 33% experienced policy violations, while 28% received customer complaints linked to AI systems.

“AI adoption is moving faster than many organisations’ ability to fully understand and govern how it’s being used,” said Kristin Colburn, Leader of Data and AI Governance at Dayforce. She added that governance must evolve from reactive measures to proactive and continuous oversight.

Despite these challenges, there are signs of progress. Nearly three-quarters of respondents expect an increase in governance, risk, and compliance technology budgets over the coming year. Key investment areas include AI governance solutions at 43%, regulatory compliance tools at 41%, and upgrades to existing GRC platforms at 38%.

In addition, organisations are prioritising capabilities such as integration with GRC platforms, automated risk assessments, regulatory mapping and tracking, and third-party AI assessments.

“Governance should not be viewed as a barrier to innovation, but as foundational for enabling organisations to deploy high-integrity AI,” said Guru Sethupathy, GM of AI Governance at Optro. He emphasised that integrated monitoring and oversight across the AI lifecycle allow organisations to move faster and more securely.

Overall, enterprises must strengthen AI governance to address growing risks, improve oversight, and ensure responsible AI adoption across operations.