AI Agents Surge 466% Raising Security Risks
News Desk


AI agents are rapidly expanding in enterprises, according to new research from BeyondTrust, which reported a 466.7% year-over-year increase in AI agents operating across enterprise environments.

The findings come from BeyondTrust’s Phantom Labs team and were surfaced through Identity Security Insights on the Pathfinder Platform. The report highlights the rise of a “shadow AI workforce”: AI-driven identities deployed across cloud services and enterprise applications without centralized governance or clear visibility.

Researchers warned that organizations are introducing thousands of new machine identities, many deployed without a full understanding of the access they inherit. Fletcher Davis, Director of Research at Phantom Labs, said that in several environments AI agents held privileges comparable to those of human administrators.

The shift from chatbot use cases to more autonomous systems, he said, is expanding the identity attack surface, and risks will continue to grow as agentic AI adoption accelerates.

The research identified several concerning patterns: shadow AI agents often operate outside formal IT governance; many are deployed through low-code platforms or embedded applications; and some AI identities appear governed in static reports yet can elevate privileges during actual use.

At the same time, machine and AI identities now outnumber human identities by a significant margin, and the gap continues to widen. Researchers also found that AI agents widely rely on long-lived API keys and static credentials without proper rotation or lifecycle controls.
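The credential-lifecycle problem the researchers describe can be illustrated with a minimal sketch. The record format and names below are hypothetical (loosely modeled on the key metadata cloud IAM APIs return), not BeyondTrust's tooling:

```python
from datetime import datetime, timezone

# Illustrative sketch: flag long-lived credentials assigned to AI agents.
# The record shape and the 90-day threshold are assumptions for this example.
MAX_KEY_AGE_DAYS = 90

def stale_keys(keys, now=None):
    """Return the IDs of keys older than the rotation threshold."""
    now = now or datetime.now(timezone.utc)
    flagged = []
    for key in keys:
        age = (now - key["created"]).days
        if age > MAX_KEY_AGE_DAYS:
            flagged.append(key["key_id"])
    return flagged

# Hypothetical inventory: one old agent key, one freshly issued key.
inventory = [
    {"key_id": "AGENT-A", "created": datetime(2023, 1, 1, tzinfo=timezone.utc)},
    {"key_id": "AGENT-B", "created": datetime.now(timezone.utc)},
]
print(stale_keys(inventory))  # only the 2023 key exceeds the threshold
```

In practice the inventory would come from the cloud provider's IAM API rather than a hardcoded list, but the age check itself is the control the report says is commonly missing.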

This rapid growth is driven by increasing adoption of AI-enabled enterprise platforms such as Microsoft Copilot and Azure AI Foundry, AI features embedded in Salesforce and ServiceNow, and AI-powered coding assistants and collaboration tools such as Jira and Confluence.

In some cases, organizations already operate more than 1,000 AI agents, yet many security teams are not fully aware of their presence.

Unlike traditional service accounts, AI agents can inherit permissions from users or service roles, interact with APIs and enterprise tools, and act autonomously across systems. This combination creates attack paths that traditional security tools are not designed to detect.
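One way to picture such an attack path is as a directed graph of identities and the roles they can act through. The sketch below is illustrative only (not BeyondTrust's method), and every identity name in it is hypothetical:

```python
from collections import deque

# Hypothetical identity graph: an edge A -> B means identity A can act
# with the permissions of B (inherited role, assumable role, etc.).
edges = {
    "copilot-agent": ["svc-deploy"],      # agent runs under a service role
    "svc-deploy": ["storage-admin"],      # role can assume a broader role
    "storage-admin": [],
    "human-analyst": [],
}

def find_path(graph, start, target):
    """Breadth-first search for a privilege path from start to target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path: the identity cannot reach the target privilege

print(find_path(edges, "copilot-agent", "storage-admin"))
# → ['copilot-agent', 'svc-deploy', 'storage-admin']
```

The point of the example is that the agent never holds admin rights directly; the exposure only appears when inherited permissions are chained, which is why per-account reviews miss it.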

BeyondTrust said its Identity Security Insights solution is built to address these risks by uncovering hidden identity relationships, mapping real-world attack paths, and providing actionable guidance to reduce exposure.

The findings build on earlier Phantom Labs research. In one case, researchers demonstrated a breach scenario involving Microsoft Copilot Studio in which AI agents leaked secrets and enabled unauthorized access to cloud infrastructure despite existing controls.

In another study, research into AWS Bedrock revealed risks linked to long-term API keys, which can automatically create IAM users with overly broad permissions. The team also released an open-source tool to detect and block such exposures.
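The article does not describe how that tool works, but a minimal sketch of the kind of check involved is straightforward: scan policy documents for wildcard grants. The policies follow the standard AWS JSON policy shape; the example data is hypothetical:

```python
# Illustrative check (not the released tool): does a policy contain an
# Allow statement granting "*" on both Action and Resource?
def overly_broad(policy):
    """True if any Allow statement is a full wildcard grant."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # AWS policies allow either a single string or a list here.
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions and "*" in resources:
            return True
    return False

agent_policy = {
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}]
}
scoped_policy = {
    "Statement": [{"Effect": "Allow", "Action": "s3:GetObject",
                   "Resource": "arn:aws:s3:::reports/*"}]
}
print(overly_broad(agent_policy), overly_broad(scoped_policy))  # True False
```

A production scanner would also flag partial wildcards and permission combinations that enable escalation, but even this exact-wildcard check catches the auto-created admin-style users the research describes.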

Additionally, BeyondTrust introduced a free Identity Security Risk Assessment, which provides visibility into AI agent risks as part of a broader identity security posture analysis. It connects enterprise identity systems with AI infrastructure; identifies unmanaged identities; detects shadow AI; maps cross-domain privilege paths; and offers remediation guidance aligned with MITRE ATT&CK.

Overall, the report underscores the growing importance of securing AI agents as organizations continue to scale AI adoption.