Shadow AI Use on the Rise: Study Highlights Growing Risks and the Need for Enterprise Controls


A recent study by Software AG, published in October 2024, reveals a striking reality in today’s workplace: nearly half of all employees are using Shadow AI tools—and most would continue doing so even if explicitly banned by their employer.

This growing trend reflects a significant challenge for corporate governance, security, and compliance teams as AI tools become increasingly accessible and essential to modern workflows.

What Is Shadow AI?

Shadow AI refers to the unsanctioned use of artificial intelligence tools by employees, often outside of corporate oversight. These tools, while highly beneficial for tasks such as content creation, coding, or meeting summarization, can also introduce security risks and data privacy concerns when used without proper controls.

Employees often turn to these tools out of a desire to improve productivity or streamline tasks. According to Michael Marriott, VP of Marketing at Harmonic Security, “Using AI at work feels like second nature for many knowledge workers now.” When official AI tools are too restricted or difficult to access, users will often resort to freely available alternatives via personal browsers.

Usage Is Common, and Rarely Malicious

The motivation behind Shadow AI use is rarely malicious. Rather, it reflects employees’ intent to perform better, gain efficiency, or enhance their chances for career growth. However, many choose not to disclose their use of such tools, fearing disapproval or a lack of recognition for their efforts if credit is attributed to AI.

This lack of transparency results in organizations being largely unaware of the scope of Shadow AI use or the potential security risks it introduces.

Data Insights from 8,000 Users

Harmonic Security conducted an in-depth analysis of 176,460 AI prompts from 8,000 users, collected via browser extensions during Q1 2025 and compared with data from Q4 2024. The study provides a glimpse into Shadow AI usage via web browsers, excluding AI use through mobile apps or API integrations.

Key findings include:

  • ChatGPT remains the most widely used generative AI platform.

  • 45% of prompts were submitted through personal accounts, such as those registered with Gmail addresses.

  • 68.3% of file uploads to ChatGPT were image files—raising concerns about embedded sensitive data.

  • 79.1% of sensitive data was sent to ChatGPT, including 21% to its free version, which may retain prompts for training.

Growing Risks: Sensitive Data and Foreign AI Models

The most critical insight from the report isn’t just usage patterns, but the nature of data being exposed. While there was a decline in some categories of sensitive data, others showed alarming growth:

  • Customer data: down from 45.8% to 27.8%

  • Employee data: down from 26.8% to 14.3%

  • Security data: down from 6.9% to 2.1%

Conversely:

  • Legal and financial data: up from 14.9% to 30.8%

  • Sensitive code: up from 5.6% to 10.1%

  • Personally Identifiable Information (PII): newly tracked in Q1 2025 at 14.9%

Another concerning trend is the use of Chinese AI platforms such as DeepSeek, Baidu Chat, and Qwen—used by at least 7% of employees. Data sent to these services could be accessible to foreign governments, including the Chinese Communist Party, raising concerns about espionage and data sovereignty.

From Awareness to Action: What Enterprises Should Do

The growing adoption of Shadow AI highlights the urgent need for organizations to shift from passive observation to proactive governance. Harmonic recommends that businesses:

  • Implement policies that guide safe AI usage

  • Offer targeted training and coaching for employees

  • Monitor and analyze AI tool usage with intelligent enforcement tools

  • Strike a balance between innovation and risk management

“This isn’t a fringe issue,” Marriott concludes. “It’s mainstream. It’s growing. And it’s happening in nearly every enterprise, whether or not there’s a formal AI policy in place.”
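To make the "monitor and enforce" recommendation concrete, here is a minimal, hypothetical sketch of the kind of check an enforcement tool might run on a prompt before it leaves the browser. The pattern names and regexes are illustrative assumptions, not Harmonic's actual detection logic; commercial tools use far richer classification than simple pattern matching.

```python
import re

# Illustrative detection patterns only; a real AI-governance or DLP tool
# would combine ML classifiers, context, and file inspection.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|api)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pat in PATTERNS.items() if pat.search(prompt)]

def allow_submission(prompt: str) -> bool:
    """Block prompts that contain any flagged category."""
    return not scan_prompt(prompt)
```

A policy engine could use `scan_prompt` to coach the user ("this looks like PII; remove it or use the sanctioned tool") rather than silently blocking, which matches the report's emphasis on balancing enablement with risk management.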

Conclusion

The rise of Shadow AI use in the workplace underscores a broader transformation in how employees engage with technology. While AI tools offer enormous benefits, the unmonitored use of unsanctioned applications introduces new risks that organizations cannot afford to ignore.

Establishing clear AI usage policies, improving access to secure corporate tools, and educating employees will be critical in managing this evolving landscape—ensuring AI enhances, rather than endangers, the modern enterprise.