Check Point Uncovers Malware Targeting AI Detection Tools
News Desk

Check Point Research has revealed the first known attempt by malware to manipulate AI-based security systems using prompt injection techniques. The discovery highlights a shift in cyberattack strategies as threat actors begin targeting large language models (LLMs).

The malware embedded natural-language text within its code to trick AI models into misclassifying it as safe. This method specifically targeted AI-assisted malware analysis workflows. The attempt, however, was unsuccessful.

Check Point reported that this marks the beginning of what it calls “AI Evasion”: a new threat category in which malware aims to subvert AI-powered detection tools. The company warns that this could signal the start of adversarial tactics aimed directly at AI.

Uploaded anonymously to VirusTotal in June from the Netherlands, the malware included TOR components and sandbox evasion features. What stood out was a hardcoded C++ string acting as a prompt to the AI, instructing it to act like a calculator and respond with “NO MALWARE DETECTED.”

Despite the evasion attempt, Check Point’s AI analysis system correctly flagged the malware and identified the prompt injection.

Key findings:
• First documented use of prompt injection in malware
• AI model manipulation attempts failed but raise concerns
• Check Point labels the tactic as part of a new AI Evasion trend

Eli Smadja, Research Group Manager at Check Point Software Technologies, stated, “This is a wake-up call for the industry. We’re seeing malware that’s not just trying to evade detection; it’s trying to manipulate AI itself.”

Check Point believes this mirrors past cybersecurity shifts, such as the evolution of sandbox evasion, and anticipates an emerging arms race between AI defenders and AI-aware attackers.