Cyber’s Double-Edged Sword — Leveraging AI to Keep Defenders Ahead

“Cyber’s Double-Edged Sword” by Daniel Rapp, Group VP of AI/ML at Proofpoint, explores how AI shapes cybersecurity, offering insights on staying ahead in the digital arms race.

Generative artificial intelligence (AI) emerged as one of the hottest technology trends in the past year, garnering significant attention. The rapid development of large language models (LLMs) and tools such as ChatGPT has prompted organizations to explore the potential benefits of generative AI.

A recent study revealed that 82% of UAE businesses have started integrating AI tools into their operations, aiming to enhance productivity and cybersecurity. While we may be reaching the peak of ‘inflated expectations’ surrounding generative AI, it’s crucial for security leaders to recognize that AI offers a much wider array of applications.

Many security vendors have been incorporating artificial intelligence, particularly machine learning (ML), into their solutions for years to improve the efficacy of their offerings and extend their capabilities. AI holds immense potential for bolstering organizational security and empowering defenders to stay ahead of threats. Nonetheless, it’s essential to acknowledge AI’s limitations. AI/ML-driven features are only as good as the data and processes used to train the models, from the size and quality of the training sets to ongoing monitoring for shifts in data distribution. The complexity of the technology creates additional hurdles and limitations. And despite AI’s capability to outperform humans at some complex tasks, it is not always the most effective approach.
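Since monitoring for shifts in data distribution is easy to name but harder to picture, here is a minimal sketch of one common approach: the population stability index (PSI), which compares recent production data against the training baseline. The bin count, probability floor, and 0.2 threshold below are widely used conventions shown for illustration, not settings from any particular product.

```python
# Minimal sketch of data-distribution monitoring via the population
# stability index (PSI). Bin count, floor, and threshold are illustrative.

import math

def psi(baseline: list[float], recent: list[float], bins: int = 10) -> float:
    lo = min(baseline + recent)
    hi = max(baseline + recent)
    width = (hi - lo) / bins or 1.0  # guard against zero-width bins

    def histogram(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        total = len(values)
        # Small floor avoids log(0) for empty bins.
        return [max(c / total, 1e-4) for c in counts]

    p, q = histogram(baseline), histogram(recent)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# A common rule of thumb: PSI > 0.2 suggests the model's inputs have
# drifted enough that detection quality should be re-validated.
drift = psi(baseline=[0.1, 0.2, 0.2, 0.3], recent=[0.7, 0.8, 0.9, 0.9])
print(f"PSI = {drift:.2f}")
```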

In essence, AI isn’t a one-size-fits-all solution for every security issue. As you look to advance your defenses, consider the breadth of AI use cases and ask security vendors detailed questions to understand which use cases and solutions are the best fit for your organization.

The promise and capabilities of AI-powered defenses

AI systems are especially adept at identifying patterns in massive amounts of data and making predictions based on that data. Consider business email compromise (BEC) attacks, which have been growing in frequency and routinely evade email security detection. Recent research by Proofpoint indicates a significant increase in these attacks within the UAE: 85% of organizations were targeted by BEC attacks in 2023, a substantial rise from 66% in 2022.

The attacks are difficult to detect because they typically carry no payload, such as a link or attachment. Additionally, traditional API-based email security solutions scan for threats post-delivery, which requires time-consuming effort by IT or security teams to populate the tool with data. Since this approach doesn’t scale well, many teams choose to implement those controls only for a select group, such as senior executives. Threat actors, however, target a much broader set of people within the organization.

That’s where tools powered by AI/ML, including generative AI (GenAI), provide a tremendous advantage. AI/ML-driven threat detection, along with LLM-based pre-delivery detection, can be used to interpret the contextual tone and intent of an email. This pre-delivery approach protects your organization by blocking fraudulent and malicious emails before they reach your people, greatly minimizing their exposure to threats like BEC.
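To make this concrete, here is a minimal sketch of what a pre-delivery screening step might look like. It is illustrative only: the bec_intent_score heuristic below is a hypothetical stand-in for an LLM-based intent classifier, and none of the names come from any vendor’s actual product.

```python
# Minimal sketch of a pre-delivery email screening step. The scoring
# function stands in for an LLM-based intent classifier; a real
# deployment would call a trained model rather than keyword heuristics.

from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

# Hypothetical stand-in for an LLM intent model: flags hallmarks of BEC,
# such as urgency and payment-redirection language, with no payload needed.
def bec_intent_score(email: Email) -> float:
    signals = ["wire transfer", "urgent", "gift cards", "change of bank details"]
    text = f"{email.subject} {email.body}".lower()
    hits = sum(1 for s in signals if s in text)
    return min(1.0, hits / len(signals) * 2)

def screen_pre_delivery(email: Email, threshold: float = 0.5) -> str:
    """Decide before the message reaches a mailbox, not after."""
    score = bec_intent_score(email)
    return "quarantine" if score >= threshold else "deliver"

msg = Email("ceo@example-lookalike.com", "Urgent request",
            "Please process this wire transfer today and confirm.")
print(screen_pre_delivery(msg))  # -> quarantine
```

The design point is that the verdict is rendered before the message reaches a mailbox, so a payload-free BEC attempt never lands in front of a user.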

Not all AI-powered tools are equal

To work well, AI and ML solutions need massive amounts of high-quality data because the models learn from patterns and examples rather than rules. Proofpoint, for example, trains its models on millions of emails daily from a worldwide threat intelligence ecosystem. This ensures higher-fidelity detection and gives security and IT teams confidence in the effectiveness of their defenses.

Therefore, before embracing new solutions that rely on AI and ML, ask prospective providers questions such as:

  • Where do they get the data used to train their algorithms? Obtaining data for general-purpose AI applications is easy, but threat intelligence data is not as abundant. The training data used by the vendor should reflect not only real-world scenarios but also threats that are specific to your organization and people.
  • What do they use in their detection stack to supplement AI/ML? AI is not the most efficient, effective, or reliable approach for every type of threat. It’s crucial for a security solution to integrate other techniques, such as rules and signatures, or “human-in-the-loop” processes (see the sketch below).

Even before diving into these details, evaluate whether AI is optimal for your specific challenges. AI models are complex and computationally intensive, and they may take longer to execute than simpler techniques. Sometimes rules-based techniques are more effective, especially when a fast response is critical. Understand what security objective you are trying to achieve and which path best solves the problem.
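To illustrate that trade-off, here is a minimal sketch of a layered detection stack in which fast, deterministic rules run first and the slower model-based check runs only when the rules are inconclusive. The rule patterns and the classify_with_model stub are hypothetical placeholders, not a real detection engine.

```python
# Minimal sketch of a layered detection stack: cheap, deterministic rules
# run first; the slower ML model is consulted only when rules are inconclusive.

import re
from typing import Optional

KNOWN_BAD_SENDERS = {"attacker@evil.example"}          # signature-style blocklist
PAYLOAD_RULE = re.compile(r"\.(exe|js|scr)\b", re.I)   # crude attachment rule

def rules_verdict(sender: str, attachment_names: list[str]) -> Optional[str]:
    """Fast path: returns a verdict instantly, or None if inconclusive."""
    if sender in KNOWN_BAD_SENDERS:
        return "block"
    if any(PAYLOAD_RULE.search(name) for name in attachment_names):
        return "block"
    return None  # rules can't decide; fall through to the model

def classify_with_model(body: str) -> str:
    """Stand-in for a computationally heavier ML classifier."""
    return "block" if "wire transfer" in body.lower() else "allow"

def detect(sender: str, attachment_names: list[str], body: str) -> str:
    verdict = rules_verdict(sender, attachment_names)
    if verdict is not None:
        return verdict                 # millisecond-fast, no model invoked
    return classify_with_model(body)   # slower, but handles payload-less threats

print(detect("attacker@evil.example", [], "hello"))          # block (rule)
print(detect("vendor@ok.example", ["invoice.exe"], ""))      # block (rule)
print(detect("ceo@ok.example", [], "Send a wire transfer"))  # block (model)
```

Running the cheap checks first keeps latency low for the bulk of traffic, while the model is reserved for the ambiguous, payload-free cases that rules cannot express.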

The jury is still out on GenAI

Many security vendors have quietly integrated AI into their stacks for years now. GenAI efforts, however, are likely to be far more visible.

For one, GenAI is moving through the hype cycle much faster than previous technologies. Even governments, which are typically slow to react, have already raised the alarm.

As the security community tries to understand the implications of AI, we can’t overlook the fact that bad actors can also use it to their advantage. Hence another double-edged sword.

GenAI, in particular, has become the fastest-growing area of concern for organizations, according to a recent Proofpoint blog. The data indicates that IT and security teams are taking this threat seriously, and business leaders agree. In a global Proofpoint survey of more than 600 board members last year, 59% believed that emerging technologies such as GenAI pose a security risk to their organization.

Threat actors are already abusing this technology, using open-source large language models to develop malicious tools such as WormGPT, FraudGPT, and DarkBERT. These tools enable attackers to craft more convincing phishing emails and translate them into many more languages.

There’s no doubt that generative AI opens new possibilities for adversaries. But many of the worries may be inflated, at least for the moment. Threat actors will not abandon their existing tactics or reinvent the wheel as long as their current methods remain lucrative. Defenders must keep a sharp focus on the more immediate threats and ensure they have foundational defenses in place.

