AI in Cybersecurity: Protector or Pretender?

AI in Cybersecurity is transforming threat detection and response, but rising deepfakes and weak governance raise urgent questions about trust, safety, and control.

AI is making cybersecurity smarter and faster. IBM’s 2025 Security Report shows that AI can detect 85–90% of cyberattacks, far better than traditional methods. AI-powered Security Operations Centers (SOCs) are also cutting false alarms by 50% and automating about half of incident responses.

In the UAE, AI adoption in cybersecurity is accelerating rapidly. Industry reports from PwC and other leading consultancies show that a vast majority of companies in the region are integrating AI tools to improve threat detection, automate incident response, and speed up recovery times.

Techniques such as machine learning and behavioral analytics help detect unusual activity, identify new types of attacks, and respond quickly. With AI, security teams are cutting response times by 35%, a significant gain when dealing with massive amounts of data.
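For readers curious what this looks like in practice, the sketch below shows one common approach: training an unsupervised model on normal behavior and flagging events that deviate from it. It is a minimal illustration using scikit-learn's IsolationForest; the feature names, synthetic data, and thresholds are assumptions made for this example, not any vendor's actual pipeline.

```python
# Minimal sketch: flagging unusual network behaviour with an unsupervised model.
# Feature names, data, and thresholds are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy "normal" traffic: [bytes_sent, login_attempts, distinct_ports]
rng = np.random.default_rng(42)
normal_traffic = rng.normal(loc=[500, 2, 3], scale=[100, 1, 1], size=(1000, 3))

# Fit the model on historical behaviour considered normal.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# Score new events; -1 means the event looks anomalous and should be triaged.
new_events = np.array([
    [520, 2, 3],      # ordinary session
    [9000, 40, 120],  # bursty, port-scanning pattern
])
print(model.predict(new_events))  # e.g. [ 1 -1 ]
```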

But Attackers Use AI Too

Unfortunately, attackers also use AI. A Fortinet report counts more than 36,000 AI-powered scans every second, 17% more than last year.

Hackers are creating ultra-convincing phishing emails, faking voices for scams, and running automated attacks. In the UK, criminals cloned a CEO’s voice with AI to trick an employee into sending $240,000. A major media outlet reported a similar case in which a journalist fooled a bank with a deepfake voice in just minutes.

A 2024 study found that 66% of people couldn’t tell AI-generated audio from real voices, and 44% couldn’t spot fake videos. That shows how convincing AI-powered scams have become.

In a University College London (UCL) study, 529 participants listened to real and AI-generated speech in English and Mandarin and correctly identified the artificial speech only 73% of the time, showing that humans struggle to reliably distinguish deepfakes.

Another University of Florida study, in November 2024, tested whether 1,200 people could distinguish real audio from digital fakes. While participants reported roughly 73% accuracy, many were fooled by AI-generated details such as accents and background noise.

The Governance Gap

Even though many companies use AI, few have formal rules to ensure its safe use. A recent 2025 survey of legal teams in the financial sector revealed that while 90% of firms have adopted AI tools, only 18% have established official policies to govern their use, and just 29% consistently follow these policies. This gap highlights the urgent need for stronger governance frameworks to prevent AI from creating new vulnerabilities instead of solving existing ones.

What Companies Should Do

  1. Set Clear AI Rules: Make guidelines for how AI should be used, and keep checking that it’s used safely. For example, voice-detection systems like PITCH can spot deepfakes with 88% accuracy (a minimal sketch of how such a check might fit into an approval workflow follows this list).
  2. Strengthen Defenses: Use AI inside secure systems, and back it up with strong authentication and device protection.
  3. Train Your Team: Help your security staff understand how to spot AI-powered threats and use AI tools wisely. According to Darktrace, 74% of cybersecurity experts see AI-driven threats as a significant challenge, and 90% expect them to get worse in the near future.
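
To make the first recommendation concrete, here is a minimal, hypothetical sketch of how a deepfake-risk score could gate high-value voice approvals behind out-of-band verification. The `deepfake_score` field, threshold values, and policy logic are illustrative assumptions, not a description of PITCH or any specific product.

```python
# Minimal sketch: gating high-risk voice approvals behind extra verification.
# `deepfake_score` stands in for whatever detector a team deploys; the
# thresholds and policy values below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VoiceRequest:
    requester: str
    amount_usd: float
    deepfake_score: float  # 0.0 = likely real, 1.0 = likely synthetic

def requires_out_of_band_check(req: VoiceRequest,
                               score_threshold: float = 0.3,
                               amount_threshold: float = 10_000) -> bool:
    """Escalate if the audio looks synthetic or the payment is large."""
    return req.deepfake_score >= score_threshold or req.amount_usd >= amount_threshold

# A cloned-voice transfer request like the $240,000 case above would be
# held for callback verification rather than paid immediately.
suspicious = VoiceRequest("CEO (voice call)", 240_000, deepfake_score=0.82)
print(requires_out_of_band_check(suspicious))  # True
```

A rule like this would have forced a callback check on the $240,000 transfer described earlier, no matter how convincing the cloned voice sounded.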

AI is changing the cybersecurity game. But whether it helps or hurts depends on how we use it. To stay ahead, businesses need smart tools, smart rules and smart people. The future of digital safety isn’t just about tech; it’s about responsibility.