Generative AI in the Enterprise: Anatomy of a Silent Catastrophe

I hesitated for a long time before writing about this subject: it is delicate, yet unavoidable given AI's transformational impact on all of us.

While corporate management celebrates the productivity gains brought by ChatGPT and its competitors, security teams face a far more concerning reality: generative AI has become, within just two years, the primary vector for sensitive data exfiltration in professional environments. This transformation is not a hypothetical catastrophe scenario, but a measured, documented reality that continues to accelerate.

The figures recently published by LayerX paint an unequivocal picture: 45% of employees now use generative AI tools, 77% of them regularly transfer data through these platforms, and 22% of these transfers contain personally identifiable information or banking data. Even more concerning, 82% of these operations occur from personal accounts, completely invisible to enterprise security systems. While CISOs perfect their DLP policies and refine their security perimeters, sensitive data leaves the organization through a channel that no traditional tool monitors: copy-paste operations in a browser window.

This situation is not the result of organized malice, but rather a collision between employees’ legitimate enthusiasm for revolutionary tools and the inability of traditional security architectures to adapt to these new usage patterns. Samsung, which had to temporarily ban ChatGPT after an employee uploaded sensitive source code, was merely the first in a long series of incidents that are redefining the very notion of data leakage.

The massive adoption of generative artificial intelligence is radically transforming the enterprise data security landscape: according to LayerX's research, generative AI now surpasses all traditional leakage channels as a vector for sensitive data exfiltration. The analysis that follows examines the mechanisms, the documented incidents, and the operational implications for security teams.

The Scale of the Phenomenon: Irrefutable Numbers

LayerX’s Enterprise AI and SaaS Data Security Report 2025, based on real-world enterprise browser telemetry across the globe, establishes an unequivocal finding. Forty-five percent of employees now use generative AI tools in their professional environment, and ChatGPT alone is used by 43% of all employees. This penetration places generative AI at the level of the enterprise’s most critical applications, representing 11% of all application activity, just behind email and online meetings.

Among generative AI users, 77% regularly perform copy-paste operations to feed their queries. Detailed analysis of these flows reveals that 22% of the data thus transferred contains personally identifiable information (PII) or Payment Card Industry (PCI) standard data. Even more alarming, 82% of these operations occur from personal accounts not managed by the enterprise, creating a major blind spot for sensitive data visibility and control.

File uploads constitute a second critical vector. Approximately 40% of files transferred to generative AI platforms contain PII or PCI data, and 39% of these transfers come from non-corporate accounts. All of this data escapes traditional data loss prevention systems, which monitor email attachments and sanctioned file-transfer channels rather than browser uploads to personal accounts.

Documented Incidents: When Theory Meets Reality

Samsung and the Source Code Leak (2023)

The Samsung incident represents one of the first documented cases of sensitive data exfiltration via ChatGPT. In 2023, a Samsung employee uploaded confidential source code to ChatGPT, presumably to get help with a technical problem. This action triggered an immediate response from management: Samsung temporarily banned the use of ChatGPT for all its staff. This incident perfectly illustrates how easily well-intentioned employees can compromise critical assets without understanding the implications of their actions.

The Redis Vulnerability of March 2023

In March 2023, a bug in the open-source redis-py client library used by ChatGPT’s infrastructure caused a significant data leak. This vulnerability allowed certain users to access other users’ conversation titles and first messages. Although the incident was quickly corrected, it highlighted the risks inherent to generative AI platforms themselves, independent of user practices.

Massive Account Compromise (2022-2023)

Between June 2022 and May 2023, Group-IB researchers identified more than 101,000 devices infected by infostealers (Raccoon, Vidar, RedLine) whose logs contained saved ChatGPT credentials. The Asia-Pacific region recorded the highest concentration of compromised accounts. This campaign demonstrates that ChatGPT accounts have become prime targets for cybercriminals seeking to exploit conversation histories that may contain sensitive data.

Public Indexing of July-August 2025

In July 2025, a major incident revealed a design flaw in ChatGPT’s sharing functionality. A poorly designed option titled “Make this chat discoverable,” combined with the absence of appropriate web protection tags, allowed thousands of sensitive conversations to be indexed by public search engines. On July 31, researchers identified more than 4,500 indexed links. OpenAI disabled this functionality on August 1 and notified search engines, but cached versions remained accessible for several weeks. This incident underscores the importance of secure default settings and clear communication with users.

CVE-2024-27564: Active Exploitation of an SSRF Flaw

Veriti researchers discovered a Server-Side Request Forgery (SSRF) vulnerability in ChatGPT’s infrastructure, referenced as CVE-2024-27564 with a CVSS score of 6.5. This flaw allows attackers to inject malicious URLs into ChatGPT’s input parameters, forcing the application to make unintended requests on their behalf. Veriti’s analysis reveals that this vulnerability is being actively exploited, with more than 10,000 attack attempts recorded, 33% of which target U.S. organizations, primarily in the financial sector. Thirty-five percent of the analyzed organizations had vulnerable configurations in their IPS, WAF, and firewall settings.
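To make this class of flaw concrete, here is a minimal sketch of the kind of input validation that mitigates SSRF in general: refusing to fetch user-supplied URLs that point at loopback, link-local, private, or cloud-metadata addresses. It is a generic illustration in TypeScript, not OpenAI’s actual remediation, and the blocked ranges are assumptions chosen for the example (DNS rebinding and redirects would need additional handling).

```typescript
// Generic SSRF mitigation sketch: reject user-supplied URLs that target
// loopback, link-local, private-range, or cloud-metadata hosts before the
// application fetches them on the user's behalf. Illustrative only; this is
// not a description of OpenAI's fix for CVE-2024-27564.

const BLOCKED_HOST_PATTERNS: RegExp[] = [
  /^localhost$/i,
  /^127\./,                      // loopback
  /^169\.254\./,                 // link-local, incl. the 169.254.169.254 metadata endpoint
  /^10\./,                       // RFC 1918 private ranges
  /^192\.168\./,
  /^172\.(1[6-9]|2\d|3[01])\./,
];

function isUrlSafeToFetch(rawUrl: string): boolean {
  let url: URL;
  try {
    url = new URL(rawUrl);
  } catch {
    return false;                // unparseable input is rejected outright
  }
  if (url.protocol !== "http:" && url.protocol !== "https:") return false;
  return !BLOCKED_HOST_PATTERNS.some((pattern) => pattern.test(url.hostname));
}

// Example: isUrlSafeToFetch("http://169.254.169.254/latest/meta-data") === false
```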

ShadowLeak: Zero-Click Exfiltration via Deep Research

In September 2025, Radware revealed a particularly sophisticated vulnerability dubbed ShadowLeak, affecting ChatGPT’s Deep Research agent. This attack exploits an indirect prompt injection hidden in an email’s HTML code (tiny fonts, white-on-white text) invisible to the user but interpreted by the agent. When a user asks ChatGPT Deep Research to analyze their Gmail emails, the agent reads the malicious prompt and transmits sensitive data encoded in Base64 to an attacker-controlled server via the browser.open() tool. This zero-click attack requires no action from the victim and operates directly from OpenAI’s cloud infrastructure, bypassing local and enterprise defenses. The attack surface potentially extends to all connectors supported by ChatGPT: Box, Dropbox, GitHub, Google Drive, HubSpot, Microsoft Outlook, Notion, and SharePoint.
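On the defensive side, one place to intervene is before an agent ever reads the mailbox: flag HTML that hides text from the human reader. The sketch below is a heuristic assumed for illustration, not Radware’s detection method; it scans an email body for the invisible styling that this kind of indirect prompt injection relies on.

```typescript
// Heuristic sketch: surface HTML e-mail content that a human reader cannot
// see (hidden elements, near-zero font sizes, white text), the styling used
// to smuggle indirect prompt injections past manual review. Assumes a DOM
// environment (browser or jsdom); the thresholds are illustrative.

function findHiddenText(html: string): string[] {
  const doc = new DOMParser().parseFromString(html, "text/html");
  const findings: string[] = [];

  doc.body.querySelectorAll<HTMLElement>("*").forEach((el) => {
    const text = el.textContent?.trim();
    if (!text) return;

    const style = (el.getAttribute("style") ?? "").toLowerCase();
    const hidden = /display:\s*none|visibility:\s*hidden/.test(style);
    const tinyFont = /font-size:\s*(0|1)(\.\d+)?(px|pt)/.test(style);
    const whiteText = /(?:^|;)\s*color:\s*(#fff(?:fff)?|white)\b/.test(style);

    if (hidden || tinyFont || whiteText) {
      findings.push(text.slice(0, 120)); // keep a short excerpt for analyst review
    }
  });

  return findings;
}

// A mail gateway could quarantine any message for which findHiddenText()
// returns results before it reaches a mailbox an AI agent is allowed to read.
```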

Typology of Leaks: Three Main Vectors

User-Side Leaks: The Dominant Vector

User-side leaks represent the most frequent and most difficult vector to control. Employees transfer sensitive data to ChatGPT to accelerate their work, often without understanding the implications. A marketing manager pastes next quarter’s product roadmap into ChatGPT to reformulate a customer announcement. A financial analyst uploads an Excel file containing sensitive business data. A developer submits proprietary code to solve a bug. In each of these cases, data leaves the enterprise’s protected environment and may be stored or processed outside compliance boundaries, or retained by third-party infrastructure. These actions circumvent internal security policies and create potential regulatory violations (GDPR, HIPAA, SOX).

Most traditional DLP systems do not detect these transfers, because they occur as copy-paste in a browser window rather than as file uploads or outbound email. This opacity turns them into a silent but pervasive data risk.
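To illustrate what closing this gap can look like, here is a minimal sketch of a browser-extension content script that inspects paste events on a generative AI page and blocks text matching simple PII/PCI patterns. The patterns, the pages it would run on, and the blocking behavior are assumptions for the example, not a description of any particular DLP or browser-security product.

```typescript
// Minimal content-script sketch: intercept paste events on a generative AI
// page and block clipboard text that matches simple PII/PCI patterns.
// The regexes are deliberately crude; a real control would combine them with
// proper content classification and report events to the SIEM.

const SENSITIVE_PATTERNS: { label: string; pattern: RegExp }[] = [
  { label: "payment card number", pattern: /\b(?:\d[ -]?){13,16}\b/ },
  { label: "e-mail address", pattern: /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/ },
  { label: "IBAN", pattern: /\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b/ },
];

document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    const text = event.clipboardData?.getData("text") ?? "";
    const hit = SENSITIVE_PATTERNS.find(({ pattern }) => pattern.test(text));
    if (hit) {
      event.preventDefault();   // stop the data before it reaches the prompt field
      event.stopPropagation();
      console.warn(`Paste blocked: possible ${hit.label} detected`);
      // A production control would notify the user and log the event,
      // rather than failing silently.
    }
  },
  true, // capture phase, so the page's own handlers never receive the data
);
```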

Platform-Side Leaks: Rare but Critical

Platform-side leaks, while less frequent, present a particular danger because they occur without user intent and often go unnoticed. The Redis bug of March 2023 perfectly illustrates this risk: a vulnerability in the underlying infrastructure accidentally exposes one user’s conversation history to other users. These incidents underscore the importance of the security architecture of AI platforms themselves and the need for rigorous security audits.

Risky Interactions with Plugins: The Weak Link

Plugins extend ChatGPT’s capabilities by allowing access to the web, internal files, or third-party systems, but they simultaneously introduce significant security risks. Once enabled, a plugin can read prompt content and potentially send it to external APIs or storage systems, often without the user being aware. Consider a financial analyst who uses a plugin to analyze a business spreadsheet: the plugin uploads the file to its own server for processing and, without the analyst knowing, logs and retains it, violating data residency and privacy policies.

Most plugins are developed by third parties and do not undergo the same level of security scrutiny as internal tools. The use of unvalidated plugins can lead to uncontrolled data exfiltration and expose regulated information to unknown actors, representing a major risk for the enterprise.

ChatGPT Crushes the Competition: Strategic Implications

The LayerX study highlights ChatGPT’s overwhelming dominance in the enterprise generative AI landscape. More than 9 out of 10 generative AI users rely on it, compared with only 15% for Google Gemini, 5% for Claude, and 2-3% for Microsoft Copilot. This predominance is explained by a marked preference: 83.5% of users limit themselves to a single AI platform, and they overwhelmingly choose ChatGPT, regardless of the tools officially approved by their organization.

Microsoft Copilot’s low adoption rate is particularly striking. With a base of 440 million Microsoft 365 subscribers, Microsoft shows only a 1.81% conversion rate to Copilot, a figure nearly identical to LayerX’s observations. Faced with this reality, Microsoft recently adopted a pragmatic approach by authorizing the use of personal Copilot accounts within professional Microsoft 365 environments, implicitly acknowledging the impossibility of fighting against user preferences.

ChatGPT now reaches 43% penetration in enterprises, approaching established applications like Zoom (75%) or Google services (65%), and far surpassing Slack (22%), Salesforce (18%), or Atlassian (15%). This rapid adoption positions generative AI as a critical application category requiring the same level of governance as email or productivity tools.

Shadow IT: A Systemic Phenomenon

The use of applications via non-corporate accounts is not limited to generative AI. The LayerX report reveals that Shadow IT affects the entire enterprise application ecosystem: instant messaging (87%), online meetings (60%), Microsoft Online (68%), Salesforce (77%), and Zoom (64%). These figures demonstrate that employees systematically prioritize convenience and efficiency over compliance with security policies.

This trend creates a dangerous convergence between Shadow AI and Shadow Chat, two major data leakage channels. Sixty-two percent of users transfer PII or PCI data into unmanaged instant messaging applications. Together, these practices create a double blind spot where sensitive data constantly flows to unmonitored environments.

Operational Implications for Security Teams

Inadequacy of Traditional Tools

Traditional data loss prevention systems focus on file transfers, suspicious attachments, and outbound emails. But AI conversations look like normal web traffic, even when they contain confidential information. Copy-paste operations in chat windows completely bypass these detection mechanisms, leaving no exploitable trace for investigation.

This reality requires a fundamental overhaul of the data security approach. The battlefield is no longer in file servers or sanctioned SaaS applications, but in the browser, where employees mix personal and professional accounts, switch between approved tools and Shadow IT, and fluidly move sensitive data between these environments.

New Strategic Recommendations

Faced with this massive and difficult-to-control adoption, security strategies must evolve along several axes:

Treat generative AI as a critical enterprise category. Governance strategies must place AI on the same level as email and file sharing, with monitoring of uploads, prompts, and copy-paste flows.

Deploy Single Sign-On systematically. SSO across all critical applications maintains visibility into data flows, even when employees favor unofficial tools. This measure constitutes a minimum prerequisite for any organization seeking to understand its actual exposure.

Extend monitoring to the browser. Security solutions must evolve toward browser-level monitoring, capable of intercepting copy-paste operations and analyzing content transferred to AI platforms, whether approved or not.

Implement adaptive access policies. Organizations must deploy policies that automatically block risky copy-paste or chat operations based on real-time content analysis, rather than relying solely on rarely respected blanket bans; a simplified decision sketch follows these recommendations.

Train employees on specific risks. Awareness must go beyond security generalities to address concrete scenarios of exfiltration via AI, with tangible examples of organizational consequences.

Monitor external identities. Eighty-two percent of risky interactions come from sessions not registered with enterprise identity management systems. This reality requires extending monitoring beyond the traditional IAM perimeter.
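To make the adaptive access policy recommendation more concrete, the sketch below shows one possible decision function: the verdict depends on whether the session is tied to a managed corporate identity, whether the destination is a sanctioned AI tool, and what the content classifier found. The field names, categories, and verdicts are assumptions for illustration, not any vendor’s policy model.

```typescript
// Illustrative adaptive-access decision: the same paste is allowed, warned
// about, or blocked depending on identity, destination, and content, instead
// of a blanket ban. All fields and verdicts are assumptions for this sketch.

type Verdict = "allow" | "warn" | "block";

interface PasteContext {
  managedIdentity: boolean;   // session tied to the corporate IdP via SSO?
  sanctionedTool: boolean;    // destination is an approved AI platform?
  containsPii: boolean;       // output of real-time content classification
  containsSourceCode: boolean;
}

function evaluatePaste(ctx: PasteContext): Verdict {
  if (ctx.containsPii || ctx.containsSourceCode) {
    // Sensitive content never flows to unmanaged identities or unsanctioned tools.
    if (!ctx.managedIdentity || !ctx.sanctionedTool) return "block";
    return "warn";            // approved destination, but keep the user aware
  }
  if (!ctx.managedIdentity) return "warn"; // personal account, innocuous content
  return "allow";
}

// Example: PII pasted from a personal account into an unsanctioned chatbot
// evaluatePaste({ managedIdentity: false, sanctionedTool: false,
//                 containsPii: true, containsSourceCode: false }) === "block"
```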

Geopolitical and Regulatory Issues

The use of AI platforms hosted in different jurisdictions introduces complex geopolitical dimensions. The exposure of enterprise data to Chinese AI models like Qwen, for example, raises questions of digital sovereignty and potential strategic exploitation of information. Enterprise data exposed through personal accounts risks being used for model training, creating a permanent leak of intellectual property.

From a regulatory standpoint, PII exfiltration via generative AI creates direct violations of GDPR, HIPAA, and SOX. Organizations in the financial services and healthcare sectors are particularly exposed, with potentially massive fines and lasting reputational damage. The LayerX report indicates that its most affected customer base is concentrated in financial services, healthcare, and semiconductors, precisely the sectors where the regulatory implications are most severe.

Methodology and Limitations

LayerX’s methodology relies on data collected via an enterprise browser extension deployed at dozens of large and global enterprises (1,000 to 100,000 users each), primarily in North America but spanning five continents. This approach offers significant visibility into web interactions with AI, but it presents an important limitation: it does not cover API calls made from applications, which constitute a growing vector of interaction with AI platforms.

Perspective: Toward a New Security Paradigm

Employee enthusiasm for generative AI will not wane. Productivity gains are real, measurable, and widely documented. Rather than fighting this trend through ineffective bans, security strategies must evolve toward greater visibility and adaptive control.

The question is no longer whether employees will use ChatGPT, but how to ensure the protection of sensitive data in this new usage paradigm. The organizations that succeed will be those that accept this reality and build security architectures adapted to a world where the traditional security perimeter is no longer relevant.

Security teams must abandon the fantasy of total control to adopt a pragmatic approach of visibility, detection, and rapid response. Generative AI simultaneously represents a transformation opportunity and a systemic risk. An organization’s ability to navigate between these two realities will determine its resilience in the years to come.

Enjoy!