CyberArk Launches FuzzyAI to Strengthen AI Security

News Desk -

Share

FuzzyAI has successfully jailbroken every major AI model tested, helping organizations identify and address vulnerabilities such as guardrail bypassing, harmful output generation, and information leakage in both cloud-hosted and in-house AI systems.

As AI continues to transform industries through applications in customer interactions, process automation, and internal improvements, the security challenges associated with AI models are also increasing. FuzzyAI provides a systematic approach to testing AI models against various adversarial inputs, revealing weak points in their security and ensuring safer AI deployment.

FuzzyAI features a powerful fuzzer, capable of exposing vulnerabilities using over ten distinct attack techniques, from bypassing ethical filters to prompt injection. This robust tool allows organizations and researchers to detect flaws in AI models, strengthening their defense mechanisms against emerging threats. FuzzyAI’s extensible framework also enables customization, allowing users to add their own attack methods to test domain-specific vulnerabilities. A community-driven ecosystem ensures continuous improvements in adversarial techniques and defense strategies.

Peretz Regev, Chief Product Officer at CyberArk, stated, “The launch of FuzzyAI underscores our commitment to AI security, enabling organizations to address the security risks inherent in the evolving landscape of AI model usage. Developed by CyberArk Labs, FuzzyAI empowers organizations to identify weaknesses and fortify their AI systems against potential threats.”

FuzzyAI is now available to organizations and researchers aiming to enhance their AI security, providing a vital tool to stay ahead of vulnerabilities and threats in AI development and deployment.