Thales Develops Metamodel to Detect AI-Generated Deepfake Images

News Desk

As part of a challenge organized by France’s Defence Innovation Agency (AID) during European Cyber Week, Thales’s cortAIx unit has unveiled a metamodel designed to detect AI-generated deepfake images. The technology addresses growing concerns about AI-driven disinformation, manipulation, and identity fraud.

The Thales metamodel aggregates multiple detection models, each of which assigns an authenticity score to an image, to judge whether it is real or AI-generated. With platforms such as Midjourney, Dall-E, and Firefly increasingly used to create hyper-realistic images, the need for such detection systems has never been more critical. Experts predict that AI-generated deepfakes could cause significant financial losses through identity theft and fraud in the coming years, and a Gartner report highlights that nearly 20% of cyberattacks in 2023 likely involved deepfakes as part of disinformation and advanced phishing campaigns.
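
Thales has not disclosed how the individual scores are combined, but the basic idea can be illustrated with a short sketch: each detector emits an authenticity score, and a weighted combination produces the metamodel’s overall verdict. Everything below (the detector names, weights, and averaging rule) is a hypothetical example, not Thales’s implementation.

```python
# Minimal, hypothetical sketch of score aggregation across several
# deepfake detectors. Thales has not published its actual scheme, so
# the detector names and weights here are purely illustrative.

def aggregate_authenticity(scores: dict[str, float],
                           weights: dict[str, float]) -> float:
    """Combine per-detector scores (0 = fake, 1 = authentic) into a
    single weighted-average authenticity score."""
    total = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total

# Example: three hypothetical detectors score the same image.
scores = {"clip": 0.31, "dnf": 0.22, "dct": 0.45}
weights = {"clip": 0.5, "dnf": 0.3, "dct": 0.2}
print(f"authenticity = {aggregate_authenticity(scores, weights):.2f}")
```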

Christophe Meyer, Senior Expert in AI and CTO of cortAIx, Thales’s AI accelerator, emphasized the importance of this technology in combating identity fraud. “Thales’s deepfake detection metamodel addresses the problem of identity fraud and morphing techniques. By aggregating multiple methods, we can better protect biometric identity checks—a remarkable technological advance and a testament to our AI research expertise,” said Meyer.

How Thales’s Deepfake Detection Metamodel Works

The Thales metamodel combines machine learning techniques, decision trees, and evaluations of each model’s strengths and weaknesses to assess an image’s authenticity. Key methods include:

  • CLIP (Contrastive Language-Image Pre-training): This model connects images with their textual descriptions, spotting inconsistencies and visual artefacts to identify deepfakes (a sketch of this type of image-text consistency check appears after this section).
  • DNF (Diffusion Noise Feature): This method uses diffusion models to detect deepfakes by estimating the noise used to generate the image, helping to reveal AI manipulation.
  • DCT (Discrete Cosine Transform): DCT analyzes spatial frequencies within an image to uncover hidden artefacts that often remain undetectable to the human eye (see the sketch immediately after this list).
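
Of the three methods, the DCT check is the simplest to illustrate in isolation. The sketch below is a minimal example assuming NumPy and SciPy; the high-frequency-energy statistic and the idea of comparing it against camera-typical values are illustrative assumptions, not a description of Thales’s detector.

```python
# Illustrative DCT frequency analysis; not Thales's implementation.
import numpy as np
from scipy.fft import dctn

def high_freq_energy_ratio(gray: np.ndarray, cutoff: float = 0.75) -> float:
    """Fraction of the image's DCT energy in the highest spatial
    frequencies; `cutoff` marks where the high-frequency band begins,
    as a fraction of each axis."""
    coeffs = dctn(gray.astype(np.float64), norm="ortho")
    energy = coeffs ** 2
    h, w = energy.shape
    hi = energy[int(h * cutoff):, int(w * cutoff):].sum()
    return float(hi / energy.sum())

# Example with random data standing in for a decoded grayscale image.
rng = np.random.default_rng(0)
image = rng.random((256, 256))
print(f"high-frequency energy ratio: {high_freq_energy_ratio(image):.4f}")
# A real detector would compare such statistics against distributions
# learned from camera images and known AI-generated images.
```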

This multi-faceted approach helps detect subtle anomalies in AI-generated content, offering robust protection for systems that rely on biometric identity verification.
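
The CLIP-based check mentioned above can be sketched with OpenAI’s publicly released CLIP weights, accessed here via the Hugging Face Transformers library. The image path, prompts, and two-way decision rule are illustrative assumptions; Thales has not described how its metamodel applies CLIP.

```python
# Illustrative zero-shot image-text consistency check using public CLIP
# weights; this is not Thales's detector. The file name and prompts are
# placeholders chosen for the example.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # placeholder path
prompts = ["a real photograph", "a computer-generated image"]

inputs = processor(text=prompts, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-to-text similarity
probs = logits.softmax(dim=-1).squeeze()
print(f"P(real photo) = {probs[0]:.2f}, P(generated) = {probs[1]:.2f}")
```

A production detector would learn its prompt set and decision threshold from labelled data rather than hand-picking them as done here.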

Thales’s Ongoing Commitment to AI Security

The team behind this innovation is part of cortAIx, Thales’s AI accelerator, which brings together more than 600 AI researchers and engineers. The same team developed BattleBox, a toolkit for assessing and hardening AI systems against vulnerabilities such as adversarial attacks and data-extraction threats.

Thales’s commitment to advancing AI security was further demonstrated during the 2023 CAID challenge, where the company showed it could identify the data used to train an AI model even after that data had been deleted, helping preserve confidentiality in defense-related AI systems.

As the threat of AI-driven disinformation and identity fraud grows, Thales’s deepfake detection metamodel stands as a significant technological breakthrough in safeguarding digital identities and combating malicious AI usage.

