
CYBER SYRUP
Delivering the sweetest insights on cybersecurity.
Italy Bans Chinese AI Firm DeepSeek Over Data Privacy Concerns

Italy’s data protection authority, the Garante, has officially blocked Chinese artificial intelligence (AI) firm DeepSeek from operating within the country. The decision comes after the agency cited DeepSeek’s failure to provide sufficient information regarding its handling of users' personal data.
The move highlights growing concerns among global regulators regarding AI companies' data collection and storage practices, particularly those associated with firms based in China.
Lack of Transparency in Data Collection
The Garante’s investigation into DeepSeek began with a formal request for details about the company’s data-handling practices, including:
What personal data is collected via DeepSeek’s web platform and mobile app.
The sources of this data.
The legal basis for its collection.
Whether the data is stored in China.
However, the Italian watchdog found DeepSeek's responses to be "completely insufficient." The companies behind the service, Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, stated that they do not operate in Italy and therefore do not fall under European data protection laws.
As a result, the Garante took immediate action, blocking access to DeepSeek’s services within Italy while launching a formal probe into the company’s practices.
Precedent and Ongoing Regulatory Scrutiny
This is not the first time Italy’s data regulator has taken action against an AI company. In 2023, it issued a temporary ban on OpenAI’s ChatGPT, citing data privacy concerns. That restriction was later lifted after OpenAI implemented new privacy measures. However, in December 2024, OpenAI was fined €15 million for its handling of personal data.
DeepSeek’s ban arrives at a time when the company has seen a surge in popularity, with its AI services and mobile apps climbing to the top of download charts. However, it has also faced increasing scrutiny from lawmakers and regulators, particularly over:
Privacy policies: Questions remain about how the company processes and stores personal data.
Censorship and propaganda: There are concerns that DeepSeek aligns with Chinese government interests in controlling information.
National security risks: U.S. officials are reportedly evaluating whether DeepSeek could pose a security threat due to potential Chinese government influence.
Adding to these issues, DeepSeek recently reported being the target of "large-scale malicious attacks" and says it has since deployed a fix to address them.
Security Risks and Jailbreak Vulnerabilities
Beyond data privacy concerns, DeepSeek's AI models have also been found to be vulnerable to several security risks.
AI Jailbreak Techniques
Cybersecurity researchers have identified that DeepSeek’s large language models (LLMs) can be easily manipulated using various jailbreak techniques, such as:
Crescendo
Bad Likert Judge
Deceptive Delight
Do Anything Now (DAN)
EvilBOT
These exploits allow users to bypass the AI’s built-in safety filters and generate harmful or prohibited content. According to a report from Palo Alto Networks' Unit 42:
"They elicited a range of harmful outputs, from detailed instructions for creating dangerous items like Molotov cocktails to generating malicious code for attacks like SQL injection and lateral movement."
While DeepSeek has implemented safeguards, researchers found that carefully crafted prompts could still bypass restrictions, demonstrating the risk of these AI models being weaponized.
Prompt Injections and Data Leaks
AI security firm HiddenLayer also examined DeepSeek’s reasoning model, DeepSeek-R1, and found it vulnerable to prompt injections. Additionally, its Chain-of-Thought (CoT) reasoning technique can unintentionally reveal sensitive information.
HiddenLayer's evaluation also raised ethical and legal questions about DeepSeek's data sources. The firm stated:
"The model surfaced multiple instances suggesting that OpenAI data was incorporated, raising ethical and legal concerns about data sourcing and model originality."
Newly Discovered OpenAI Vulnerability
DeepSeek is not the only AI firm facing security challenges. OpenAI recently patched a jailbreak vulnerability in ChatGPT (running GPT-4o) known as Time Bandit, which allowed attackers to manipulate the chatbot into bypassing its safety guardrails.
According to the CERT Coordination Center (CERT/CC), the exploit tricked ChatGPT into losing its "temporal awareness" by presenting queries as historical scenarios, making it provide restricted or dangerous information. OpenAI has since implemented a fix.
Conclusion
Italy’s decision to block DeepSeek underscores the increasing global scrutiny on AI firms regarding privacy, security, and ethical concerns. As AI technology continues to evolve, regulators worldwide are expected to enforce stricter compliance requirements to ensure data protection and prevent AI misuse.
DeepSeek’s future in Europe remains uncertain, with the possibility of further regulatory action depending on the outcome of Italy’s investigation. Meanwhile, security experts stress the importance of developing robust protections to prevent AI models from being exploited by bad actors.