Over a Third of Info Fed to AI Includes Regulated Personal Data

By News Desk


Netskope, a Secure Access Service Edge (SASE) company, has published new research showing that regulated data (data that organizations are legally required to protect) makes up more than a third of the sensitive information being shared with generative AI (genAI) applications, exposing businesses to the risk of costly data breaches.

The research reveals that three-quarters of businesses now completely block at least one genAI app to limit the risk of sensitive data exfiltration. However, fewer than half of organizations apply data-centric controls to prevent sensitive information from being shared in input prompts, indicating a lag in the adoption of advanced data loss prevention (DLP) solutions.
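Data-centric controls of this kind typically inspect a prompt before it leaves the organization. The sketch below is a minimal illustration of the idea, not Netskope's implementation: a hypothetical pre-submission filter that pattern-matches outgoing prompts for strings resembling regulated data. Production DLP engines use far richer detection (checksum validation, ML classifiers, exact-data matching).

```python
import re

# Illustrative patterns only; real DLP detection is much more robust.
PATTERNS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of regulated-data patterns found in a prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

def submit_if_clean(prompt: str) -> bool:
    """Refuse to forward a prompt to a genAI app if it matches any pattern."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
        return False
    # send_to_genai_app(prompt)  # hypothetical downstream call
    return True

if __name__ == "__main__":
    submit_if_clean("Summarize this record: SSN 123-45-6789")  # blocked
    submit_if_clean("Draft a polite out-of-office reply")      # allowed
```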

Using global data sets, the research found that 96% of businesses now use genAI, a number that has tripled over the past 12 months. On average, enterprises now use nearly 10 genAI apps, up from three last year, and the top 1% of adopters now use an average of 80 apps, up from 14. This increased use has brought a surge in the sharing of proprietary source code within genAI apps, which accounts for 46% of all documented data policy violations. These shifting dynamics complicate enterprise risk control and underscore the need for a more robust DLP effort.

There are positive signs of proactive risk management in the nuanced security and data loss controls that organizations are applying. For instance, 65% of enterprises now implement real-time user coaching to guide interactions with genAI apps. The research indicates that effective user coaching has been crucial in mitigating data risks: 57% of users altered their actions after receiving coaching alerts.
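Coaching differs from hard blocking in that the user is warned in real time and left to make the final call. Continuing the hypothetical filter above (scan_prompt is the helper from that sketch, not a Netskope API), a coaching control might look like this:

```python
def coach_and_confirm(prompt: str) -> bool:
    """Real-time coaching sketch: warn the user about risky content and
    let them decide, rather than silently blocking the request."""
    findings = scan_prompt(prompt)  # helper from the earlier DLP sketch
    if not findings:
        return True
    answer = input(
        f"Warning: this prompt may contain {', '.join(findings)}.\n"
        "Company policy restricts sharing regulated data with genAI apps.\n"
        "Send anyway? [y/N] "
    )
    return answer.strip().lower() == "y"
```

A control like this also leaves an audit trail of user decisions, which is one way behavior change after coaching alerts could be measured.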

“Securing genAI needs further investment and greater attention as its use permeates through enterprises with no signs that it will slow down soon,” said James Robinson, Chief Information Security Officer, Netskope. “Enterprises must recognize that genAI outputs can inadvertently expose sensitive information, propagate misinformation, or even introduce malicious content. It demands a robust risk management approach to safeguard data, reputation, and business continuity.”

Netskope’s Cloud and Threat Report highlights that ChatGPT remains the most popular app, with over 80% of enterprises using it. Microsoft Copilot has shown the most dramatic growth in use since its launch in January 2024, at 57%. Additionally, 19% of organizations have imposed a blanket ban on GitHub Copilot.

Netskope advises enterprises to review, adapt, and tailor their risk frameworks specifically for AI or genAI, drawing on efforts such as the NIST AI Risk Management Framework. Specific tactical steps to address genAI risk include:

- Assessing existing AI and machine learning uses, data pipelines, and genAI applications to identify security vulnerabilities and gaps.
- Establishing fundamental security measures, such as access controls, authentication mechanisms, and encryption.
- Developing a roadmap for advanced security controls, including threat modeling, anomaly detection, continuous monitoring, and behavioral detection (a minimal sketch of anomaly detection follows this list).
- Regularly evaluating the effectiveness of security measures and adapting them based on real-world experience and emerging threats.
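As a small illustration of the anomaly-detection bullet above, the following toy sketch (our assumption, not a Netskope tool) flags users whose latest genAI upload volume departs sharply from their own baseline. Real behavioral-detection systems are considerably more sophisticated.

```python
from statistics import mean, stdev

def flag_anomalies(daily_uploads: dict[str, list[int]],
                   z_threshold: float = 3.0) -> list[str]:
    """Flag users whose most recent genAI upload count deviates from
    their own historical baseline by more than z_threshold deviations."""
    flagged = []
    for user, history in daily_uploads.items():
        baseline, latest = history[:-1], history[-1]
        if len(baseline) < 2:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and (latest - mu) / sigma > z_threshold:
            flagged.append(user)
    return flagged

if __name__ == "__main__":
    usage = {
        "alice": [3, 4, 2, 5, 3, 40],    # sudden spike -> flagged
        "bob":   [10, 12, 9, 11, 10, 11],
    }
    print(flag_anomalies(usage))  # ['alice']
```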

For a detailed analysis, download the full Cloud and Threat Report: AI Apps in the Enterprise. For more information on cloud-enabled threats and the latest findings from Netskope Threat Labs, visit Netskope’s Threat Research Hub.

