Home » Emerging Technologies » Artificial Intelligence » SandboxAQ Launches New Tools to Tackle Shadow AI Risks
News Desk -

Share

SandboxAQ has announced a new AI-SPM offering designed to help enterprises address growing shadow AI risks. The company reported that the solution provides full visibility into where AI is used across tech stacks and evaluates systems for exploitable weaknesses, insecure dependencies, and exposure threats such as prompt injection, data leakage, and unauthorized access. The offering aims to help organizations act before unmanaged AI leads to material breaches.

The company revealed new research showing a widening security gap. While 79% of organizations are running AI in production, 72% have never completed a full AI security assessment. Only 6% have implemented a complete AI-native security strategy. More than half of respondents said they are highly concerned about exposed credentials and secrets in AI systems, yet only 39% use dedicated tools to manage these risks. The findings come as recent reports show state-sponsored hackers hijacking commercial AI models to automate cyber-espionage campaigns targeting major corporations and governments.

Jack Hidary, CEO at SandboxAQ, reported that AI is expanding the attack surface faster than traditional security tools can keep up. He stated that attackers are now weaponizing AI to steal data, manipulate internal systems, and automate intrusions. He warned that organizations without full visibility into how AI and agents operate across their environments are “operating blindly” and must act before an unmanaged system becomes the source of the next breach.

AQtive Guard’s AI-SPM offering enables organizations to discover, analyze, and secure their full AI ecosystem. It covers models, applications, data, and connected pipelines. The company announced that the offering extends its cryptographic scanning technology to AI systems using deep inspection to uncover hidden AI assets. This provides security teams with a code-to-cloud view of AI risks.

Key features reported by SandboxAQ include:
• Discovering AI assets across cloud and code environments.
• Assessing AI systems for exploitable weaknesses, dependencies, and exposure risks such as prompt injection or data leakage.
• Enforcing AI policies, governance, and regulatory compliance.
• Monitoring pipelines to detect anomalies, attacks, and incidents.

The company stated that the new capability is purpose-built to address shadow AI, which continues to grow as enterprises adopt rapid AI development without adequate security controls.