
Chinese AI Startup DeepSeek Exposed Sensitive Database Online

Security Flaw in DeepSeek's Database Puts Sensitive Data at Risk

CYBER SYRUP
Delivering the sweetest insights on cybersecurity.

DeepSeek, a rapidly rising Chinese artificial intelligence (AI) startup, recently left one of its databases exposed on the internet, potentially allowing malicious actors to gain unauthorized access to sensitive information. The exposure was discovered by cloud security firm Wiz, which found that the instance permitted full control over database operations, including access to internal data, without any authentication.

According to Wiz security researcher Gal Nagli, the exposed ClickHouse database contained more than a million lines of log streams, including chat history, secret keys, backend details, API secrets, and operational metadata. Such exposure could have led to data breaches, privilege escalation, and further cyberattacks if exploited by malicious entities.

Extent of the Security Breach

The unsecured database was hosted at oauth2callback.deepseek[.]com:9000 and dev.deepseek[.]com:9000, allowing unauthorized access without authentication. This meant that anyone with knowledge of the URLs could have accessed the system and executed SQL queries directly via a web browser.
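To illustrate why this is so serious: ClickHouse exposes an HTTP interface that accepts SQL as a plain URL parameter, which is what makes "running queries from a web browser" possible when no credentials are required. The sketch below builds such a request URL. The hostname is a placeholder, not DeepSeek's actual endpoint, and note that ClickHouse's HTTP interface normally listens on port 8123 (9000 is its native-protocol default); per the report, this deployment answered HTTP on 9000.

```python
from urllib.parse import urlencode

def clickhouse_query_url(host: str, query: str, port: int = 9000) -> str:
    """Build the URL form that ClickHouse's HTTP interface accepts.

    The SQL statement travels as an ordinary 'query' parameter, so if the
    server is reachable without authentication, any browser or curl command
    can execute it directly.
    """
    return f"http://{host}:{port}/?{urlencode({'query': query})}"

# The kind of request the Wiz researchers describe: enumerating tables
# on an exposed instance, with no credentials at all.
url = clickhouse_query_url("dev.example.com", "SHOW TABLES")
print(url)  # → http://dev.example.com:9000/?query=SHOW+TABLES
```

Because the whole request is a single GET with the statement URL-encoded, discovery and exploitation require nothing beyond knowing the hostname and port, which is exactly the failure mode described above.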

The lack of authentication on such a critical system is a major oversight, especially for an AI company handling vast amounts of user interactions and potentially proprietary AI training data. While it remains unclear whether threat actors exploited this vulnerability, DeepSeek has since closed the security loophole after Wiz contacted them.

Security Risks of Rapid AI Expansion

The incident highlights the risks associated with the rapid deployment of AI services without proper security measures in place.

"The rapid adoption of AI services without corresponding security is inherently risky," Nagli said in a statement shared with The Hacker News. "While much of the attention around AI security is focused on futuristic threats, the real dangers often come from basic risks—like the accidental external exposure of databases."

This serves as a critical reminder for AI companies to prioritize data protection and security compliance. As AI models process massive datasets, exposing sensitive information can have serious legal and reputational consequences.

DeepSeek’s Meteoric Rise and Growing Scrutiny

DeepSeek has gained significant attention in AI circles for developing cutting-edge open-source AI models that claim to rival industry leaders like OpenAI while being more cost-effective and efficient. Its reasoning model, DeepSeek R1, has been described as “AI’s Sputnik moment,” marking a major leap in AI development.

The startup’s AI chatbot has topped app store charts across Android and iOS in multiple markets. However, DeepSeek has also reported facing large-scale cyberattacks, forcing the company to temporarily pause new user registrations as it works to strengthen its security infrastructure.

Regulatory and Ethical Concerns Surrounding DeepSeek

Beyond cybersecurity, DeepSeek is also facing scrutiny over its privacy policies and potential national security concerns due to its Chinese origins.

  • Italy’s data protection regulator recently requested details on DeepSeek’s data collection practices and its AI training sources. Shortly after this request, DeepSeek’s apps became unavailable in Italy, though it is unclear if this was a direct response to regulatory inquiries.

  • OpenAI and Microsoft are investigating DeepSeek over allegations that it may have used OpenAI’s API without authorization to train its models—a practice known as “distillation,” in which a smaller model is trained to imitate the outputs of a larger one.

"We know that groups in [China] are actively working to use methods, including what's known as distillation, to try to replicate advanced US AI models," an OpenAI spokesperson told The Guardian.
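For readers unfamiliar with the term, knowledge distillation trains a "student" model to match the softened output distribution of a "teacher" model. The following is a minimal, illustrative sketch of the standard distillation loss in plain Python; the logits and temperature are arbitrary example values, and this is not a claim about how any company actually trained its models.

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw logits into a probability distribution, optionally
    'softened' by a temperature greater than 1."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened outputs and the
    student's, scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2

teacher = [2.0, 0.5, -1.0]
# A student that reproduces the teacher's logits incurs zero loss;
# a diverging student incurs a positive penalty, pushing it to imitate.
print(distillation_loss(teacher, teacher))               # → 0.0
print(distillation_loss(teacher, [0.1, 0.1, 0.1]) > 0)   # → True
```

Minimizing this loss over the teacher's responses is how a student model can absorb a larger model's behavior, which is why API providers treat unauthorized distillation of their outputs as a terms-of-service issue.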

These developments underscore the increasing geopolitical tensions in AI development and the challenges of enforcing ethical AI practices.

Conclusion: AI Companies Must Prioritize Security and Compliance

The DeepSeek security breach serves as a stark reminder of the vulnerabilities AI companies face as they rapidly scale their platforms. While AI innovation is moving at an unprecedented pace, security must remain a top priority to prevent data leaks, regulatory backlash, and reputational damage.

As AI systems continue to integrate with critical industries, companies must ensure stronger cybersecurity measures, regulatory compliance, and transparency in their operations. This incident is a clear example of how overlooking basic security principles can lead to significant risks in the AI ecosystem.