Source Code tops sensitive data shared with ChatGPT: Netskope study

News Desk


Netskope, a Secure Access Service Edge (SASE) company, has unveiled new research indicating that enterprise organizations see approximately 183 incidents of sensitive data being posted to ChatGPT per month for every 10,000 enterprise users. Source code accounts for the largest share of the sensitive data being exposed.

The research is part of the Cloud & Threat Report: AI Apps in the Enterprise, Netskope Threat Labs' comprehensive analysis of AI usage in businesses and the associated security risks. Based on data from millions of enterprise users worldwide, Netskope found a 22.5% increase in the usage of generative AI apps over the previous two months, a surge in adoption that amplifies the likelihood of users unintentionally sharing sensitive information.

Rapid Growth of AI App Usage

The study reveals that organizations with 10,000 or more users are employing an average of five AI apps daily, with ChatGPT leading the pack with over eight times more daily active users than any other generative AI app. At this rate, the number of users accessing AI apps is projected to double within the next seven months.
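That doubling projection is consistent with simple compound growth on the 22.5% two-month increase noted above. The short sketch below reproduces the arithmetic under the assumption that the observed rate continues unchanged; it is an illustration only, not Netskope's forecasting method.

import math

# Rough compound-growth check of the doubling projection, assuming the
# 22.5% increase observed over two months continues at a constant rate.
# Illustrative arithmetic only, not Netskope's modeling.
two_month_growth = 0.225
monthly_rate = (1 + two_month_growth) ** 0.5 - 1            # ~10.7% per month
doubling_months = math.log(2) / math.log(1 + monthly_rate)  # ~6.8 months

print(f"Implied monthly growth: {monthly_rate:.1%}")
print(f"Implied doubling time: {doubling_months:.1f} months")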

During the two-month study period, the AI app experiencing the fastest growth was Google Bard, which added users at a rate of 7.1% per week compared to ChatGPT’s 1.6%. However, despite Google Bard’s rapid growth, it is not expected to surpass ChatGPT for over a year. Nevertheless, the generative AI app landscape is anticipated to evolve significantly in the meantime, with numerous new apps in development.

Sensitive Data Input in ChatGPT

According to Netskope’s research, source code is the most frequently posted type of sensitive data in ChatGPT, at a rate of 158 incidents per 10,000 users per month. Other types of sensitive data shared on the platform include regulated data (such as financial data, healthcare data, and personally identifiable information), intellectual property excluding source code, and, most worryingly, passwords and keys, which are often embedded in source code.

Addressing the Security Risks

Ray Canzanese, Threat Research Director at Netskope Threat Labs, emphasizes the inevitability of some users uploading proprietary source code or text containing sensitive data to AI tools that promise programming or writing assistance. He highlights the need for organizations to implement controls around AI usage to prevent sensitive data leaks. The most effective controls, as observed by Netskope, combine Data Loss Prevention (DLP) measures with interactive user coaching.
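What such a combination of DLP and coaching might look like in practice is sketched below. The patterns, function names, and coaching flow are hypothetical illustrations written for this article, not Netskope's implementation or product behavior.

import re

# Hypothetical sketch: scan an outbound prompt for obvious secrets and
# source-code markers, then coach the user before allowing the post.
SENSITIVE_PATTERNS = {
    "private key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "hard-coded password": re.compile(r"password\s*[:=]\s*\S+", re.IGNORECASE),
    "source code": re.compile(r"\b(def |class |import |#include\b|public static void)"),
}

def inspect_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data categories detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def coach_user(prompt: str) -> bool:
    """Warn the user about detected categories; proceed only if they confirm."""
    findings = inspect_prompt(prompt)
    if not findings:
        return True
    print(f"This post appears to contain: {', '.join(findings)}.")
    answer = input("Company policy discourages sharing this with AI apps. Continue? [y/N] ")
    return answer.strip().lower() == "y"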

Blocking or Granting Access to ChatGPT

Netskope Threat Labs is actively tracking ChatGPT proxies and more than 1,000 malicious URLs and domains exploited by opportunistic attackers capitalizing on the AI hype, including multiple phishing campaigns, malware distribution campaigns, and spam and fraud websites.

While blocking access to AI-related content and applications may be a short-term solution to mitigate risks, it also hinders the potential benefits that AI apps offer in enhancing corporate innovation and employee productivity. Netskope’s data indicates that nearly 1 in 5 organizations in financial services and healthcare (highly regulated industries) have entirely banned the use of ChatGPT by employees, while in the technology sector, only 1 in 20 organizations have taken a similar approach.

James Robinson, Deputy Chief Information Security Officer at Netskope, highlights the need for security leaders to avoid blanket application bans that could impact user experience and productivity. Instead, organizations should focus on evolving workforce awareness and data policies to accommodate the productive use of AI products. The key lies in implementing appropriate controls, such as domain and URL filtering and content inspection, to protect against potential attacks.
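A minimal sketch of the domain and URL filtering idea follows. The blocklist entries and helper function are invented for illustration; they are not domains identified by Netskope Threat Labs.

from urllib.parse import urlparse

# Hypothetical domain/URL filtering sketch with invented blocklist entries.
BLOCKED_DOMAINS = {"chatgpt-free-login.example", "openai-giveaway.example"}

def is_blocked(url: str) -> bool:
    """Block a URL whose host matches, or is a subdomain of, a blocklisted domain."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

assert is_blocked("https://login.chatgpt-free-login.example/verify")
assert not is_blocked("https://chat.openai.com/")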

Enabling Safe AI App Adoption

To enable the safe adoption of AI apps, organizations should focus on identifying permissible apps and implementing controls that empower users while safeguarding the organization from risks. Strategies include blocking access to apps that lack legitimate business purposes or pose disproportionate risks, providing user coaching to reinforce company policies on AI app usage, and employing modern Data Loss Prevention (DLP) technologies to detect posts containing potentially sensitive information.
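One way to express "permissible apps" as policy is a simple allow/coach/block table, shown below purely as a hypothetical sketch; the app names and decisions are invented examples, not Netskope recommendations.

from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    COACH = "coach"   # allow, but show a policy reminder and apply DLP inspection
    BLOCK = "block"

# Hypothetical per-app policy table; entries are examples only.
APP_POLICY = {
    "chatgpt": Action.COACH,           # permitted with coaching and DLP
    "google-bard": Action.COACH,
    "unvetted-ai-chat": Action.BLOCK,  # no legitimate business purpose identified
}

def decide(app: str) -> Action:
    """Default-deny apps that have not been reviewed."""
    return APP_POLICY.get(app, Action.BLOCK)

print(decide("chatgpt"))          # Action.COACH
print(decide("random-new-app"))   # Action.BLOCK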

To delve deeper into the Cloud & Threat Report: AI Apps in the Enterprise, click here. For more information on cloud-enabled threats and the latest findings from Netskope Threat Labs, visit Netskope’s Threat Research Hub. To receive Netskope Threat Labs blog posts, subscribe here.

Furthermore, Netskope has announced new solution offerings from SkopeAI, the Netskope suite of artificial intelligence and machine learning (AI/ML) innovations, to complement the report’s insights. SkopeAI harnesses the power of AI/ML to overcome the limitations of complex legacy tools and provide unparalleled protection using AI-driven techniques not found in other SASE products. Learn more about SkopeAI here.

