Home » Emerging Technologies » Cyber Security » Tenable Study Exposes AI Agent Flaws in Copilot Studio
News Desk -

Share

Tenable, an exposure management company, today announced new research revealing a successful jailbreak of Microsoft Copilot Studio. The findings highlight how the rapid adoption of AI agent tools can introduce serious and often overlooked enterprise security risks.

Organizations are increasingly using no-code platforms to allow employees to build AI agent workflows without developer support. While this approach aims to improve efficiency, Tenable reported that automation without strong governance can lead to severe security failures.

To demonstrate the risk, Tenable Research created an AI agent in Microsoft Copilot Studio designed to manage customer travel reservations. The AI agent was able to create and modify bookings without human involvement. It was provided with demo customer data, including names, contact details, and credit card information. It was also instructed to verify customer identity before sharing data or changing reservations.

However, using a prompt injection technique, researchers successfully manipulated the AI agent’s workflow. As a result, the AI agent booked a free vacation and exposed sensitive credit card data. The research revealed that identity verification controls could be bypassed through carefully crafted prompts.

The research reported several business risks tied to insecure AI agent deployment, including:

  • Data breaches and regulatory exposure, as the AI agent leaked full payment card information of other customers.
  • Revenue loss and fraud, after the AI agent was instructed to change a trip price to $0, granting unauthorized free services.

“AI agent builders like Copilot Studio make it easier to create powerful tools, but they also make it easier to execute financial fraud,” said Keren Katz, Senior Group Manager of AI Security Product and Research at Tenable. She added that this capability can quickly become a real and measurable security risk.

The findings also revealed that AI agents often have excessive permissions that are not obvious to non-developers. As a result, Tenable reported that governance and enforcement are critical before deploying AI agent tools in production environments.

To reduce the risk of data leakage, Tenable recommended implementing pre-deployment visibility into AI agent access, enforcing least-privilege permissions, and actively monitoring AI agent behavior for unexpected actions or policy violations.