Is Your Data Safe with DeepSeek?


In January 2025, China-based AI startup DeepSeek introduced its open-source large language model (LLM) R1 – almost immediately making waves in the artificial intelligence space. Positioned as a cost-effective alternative to offerings from industry giants such as ChatGPT and Microsoft Copilot, DeepSeek rose rapidly, sparking intense discussion about data privacy and security – especially around how the startup handles user data. Given all the conversation, what does DeepSeek actually mean for company data?

How Does DeepSeek Handle Your Data?

DeepSeek’s privacy policy reveals that the company collects a wide range of personal data from its users. This includes basic profile information, user input into its AI models, device and network details, cookies, and payment information. What has raised eyebrows among experts and regulators, however, is the collection of highly sensitive data – notably “keystroke patterns” and access tokens from services such as Apple and Google when users sign in via those accounts. This means DeepSeek has the potential to track and store very detailed user behavior, far beyond basic interactions with its AI models.

The company’s privacy policy also notes that all data is stored on servers in China, raising questions about the potential for government access to user data. While the data shared with DeepSeek is subject to China’s data protection laws, the policy does not specifically mention adherence to any country-specific data security laws such as the European Union’s GDPR – saying only that when the company does need to transfer personal information out of the country of origin, it “will do so in accordance with the requirements of applicable data protection laws.”

Concerns for the Middle East

The implications of DeepSeek’s data practices are particularly concerning for countries in the Middle East, where data security and data privacy have become a business imperative. As digital transformation accelerates in the region, governments and businesses are placing greater emphasis on ensuring that the technologies they adopt comply with stringent (and often evolving) data privacy laws. Countries such as the UAE and Saudi Arabia, for example, have introduced comprehensive data protection regulations to safeguard citizens’ personal information.

However, as the Middle East embraces generative AI and other advanced technologies, the region faces the distinct challenge of balancing technological adoption with the need to safeguard sensitive data.

One key concern is the risk that sensitive data could be exposed to governments that may not offer the same privacy protections as those in the Middle East. This is particularly relevant given that the region is home to numerous government and business entities handling highly sensitive information, from healthcare records to national security data. The financial implications of data breaches in the region are also significant: according to IBM’s report on data breaches, the average cost of a data breach in the Middle East is US$7.9 million, a 15 percent increase over the past three years. This underscores the growing financial and reputational risks facing organizations that fail to secure their data properly.

Ensuring Data Security in the Age of AI

With the rapid expansion of AI in the Middle East, especially in sectors such as finance, healthcare, and government, it is crucial for organizations to prioritize data security when adopting new technologies. DeepSeek’s rise in popularity highlights the importance of ensuring AI tools not only meet organizations’ performance needs but also adhere to robust data protection standards. Organizations should therefore exercise caution when implementing new AI tools, opting for technologies that provide encryption, transparency about how data is collected and used, and clear adherence to both local and international data protection regulations. And when it comes to securing data in DeepSeek or other GenAI platforms, partnering with a vendor that truly prioritizes data security wherever it resides is key to protecting that data.

When all is said and done, it is clear that while DeepSeek may offer innovative capabilities, it also poses data security risks beyond those seen with other LLMs – especially considering that DeepSeek may access, preserve, or share collected data with government agencies. For enterprises and government agencies across the Middle East and beyond, adopting AI tools that align with local data protection regulations and international privacy standards is essential to mitigating risk and ensuring the security of user data.

By Ozgur Danisman, Forcepoint VP Sales Engineering, EMEA