Exabeam Expands AI Agent Security Analytics
News Desk


Exabeam has announced a major expansion of its Agent Behavior Analytics (ABA), strengthening visibility into and protection of the growing enterprise use of AI systems. The update focuses on monitoring how employees interact with AI agent tools, including queries, shared data, usage frequency, and access locations.

Without this visibility, organizations struggle to baseline normal AI behavior, investigate misuse, or detect insider threats. As a result, the expansion addresses a critical gap in enterprise security.

The company has added new support to detect agent behavior in OpenAI ChatGPT and Microsoft Copilot. This builds on existing visibility into Google Gemini. Together, these integrations turn AI services into rich sources of behavior telemetry that feed into threat detection, investigation, and response workflows.

According to Steve Wilson, Chief AI and Product Officer at Exabeam, AI agents are evolving beyond simple chatbots. He noted that these systems now authenticate, access enterprise systems, and execute real business processes. When compromised, however, their behavior often appears legitimate, so traditional guardrails such as prompt-injection and hallucination detection are not sufficient.

Meanwhile, Pete Harteveld, CEO at Exabeam, said AI is transforming how organizations operate and scale. He emphasized that enterprises must understand how these systems function internally. The expanded analytics aim to help organizations manage risks while maintaining oversight and accountability.

To address emerging threats, Exabeam introduced five new capabilities. First, AI behavior baselining creates dynamic profiles by tracking request volumes, token usage, and activity patterns. Any deviation, such as unusual API spikes, is flagged early.
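The announcement does not describe the baselining algorithm itself, but the idea of profiling request volumes and flagging deviations can be sketched with a simple statistical baseline. The function name, the z-score approach, and the threshold below are illustrative assumptions, not Exabeam's actual model:

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=3.0):
    """Flag per-interval request counts that deviate sharply from the baseline.

    `counts` is a list of request volumes (e.g. hourly API calls) for one
    agent. An interval is flagged when its z-score against the running
    baseline of all earlier intervals exceeds `threshold`.
    """
    flagged = []
    for i in range(2, len(counts)):          # need >= 2 points for a stdev
        baseline = counts[:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue                          # flat baseline: no z-score
        if (counts[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A sudden spike stands out against steady historical traffic:
print(flag_anomalies([100, 110, 105, 95, 102, 900]))  # → [5]
```

Real deployments would use richer features (token usage, activity timing, access locations, as the article notes) and adaptive baselines rather than a single z-score, but the flagging logic follows the same pattern.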

Next, prompt and model abuse detection identifies prompt injection, manipulation, and tool exploitation before they escalate. The updated detection library is now five times larger and covers a broader threat spectrum.

In addition, identity and privilege monitoring ensures AI agents operate within defined permissions. It detects anomalies such as unexpected role assignments or privilege escalations.
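One way to picture this kind of check is to diff the permissions an agent actually exercises against the permissions it was granted; anything outside the grant is a candidate escalation. This is a minimal sketch under assumed names and a hypothetical permission-string scheme, not Exabeam's implementation:

```python
def detect_privilege_drift(declared, observed):
    """Return permissions an agent exercised that were never granted.

    `declared` is the agent's approved permission set; `observed` is the
    set of permissions actually seen in its activity logs. Any difference
    indicates an unexpected role assignment or privilege escalation.
    """
    return sorted(set(observed) - set(declared))

# An agent granted only CRM read access suddenly touches IAM admin scope:
print(detect_privilege_drift(
    declared={"read:crm", "read:tickets"},
    observed={"read:crm", "admin:iam"},
))  # → ['admin:iam']
```

In practice the comparison would run continuously against identity-provider logs rather than static sets, but the core check is this set difference.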

Furthermore, agent lifecycle monitoring provides visibility into creation, modification, and usage. This enables security teams to track every stage of an agent’s activity.

Finally, the solution aligns with the OWASP Top 10 for Agentic AI. This introduces a measurable framework for managing AI-related risks.

Nithin Reddy, Global VP of Cybersecurity at Dayforce, highlighted that AI adoption is reshaping the risk landscape. He explained that both humans and autonomous systems now interact with enterprise data at scale. As a result, traditional detection models are no longer sufficient. He added that Exabeam provides clarity by helping teams focus on meaningful risks while supporting innovation.

Additionally, these updates extend across the Exabeam New-Scale and LogRhythm platforms. They are designed to improve workflows, reduce alert fatigue, and accelerate threat detection for security teams.

Overall, the expansion reinforces how enterprises can better secure the evolving AI agent ecosystem while maintaining operational efficiency and control.