
A newly identified vulnerability, termed ShadowLeak, has emerged in OpenAI’s ChatGPT, enabling attackers to extract sensitive information from Gmail accounts without user interaction. The flaw, discovered by cybersecurity firm Radware, exploits a zero-click method that utilizes hidden HTML prompts embedded in emails. This allows malicious actors to bypass traditional security measures and directly access users’ data, including emails and attachments, without their knowledge.
The mechanics of ShadowLeak are particularly alarming. According to a report from The Hacker News, attackers can craft emails containing covert instructions that, when processed by the ChatGPT Deep Research agent linked to a user’s Gmail, result in unauthorized data extraction. This operation occurs entirely on OpenAI’s cloud infrastructure, meaning users do not need to open the emails or grant explicit permission for their data to be compromised.
Understanding the Mechanics of ShadowLeak
Radware describes ShadowLeak as a “service-side leaking, zero-click indirect prompt injection” attack. This method deviates from traditional prompt injections that require some level of user engagement. Instead, the vulnerability activates as soon as the ChatGPT agent processes the manipulated HTML, interpreting these hidden commands as legitimate requests. This not only bypasses Gmail’s built-in protections but also raises questions about the integration of AI tools with sensitive data environments.
The discovery of ShadowLeak is timely, coming amid a rising tide of vulnerabilities associated with AI technologies. Experts warn that such exploits could potentially impact millions of users who rely on ChatGPT for various tasks. In a recent article, Infosecurity Magazine highlighted that the vulnerability was found during routine testing of ChatGPT’s integrations, underscoring the unintended attack vectors created by the agent’s ability to “browse” external content.
Industry Response and Future Implications
Following Radware’s responsible disclosure, OpenAI acted swiftly, rolling out a patch in September 2025. This update enhanced prompt filtering and limited the agent’s web interactions with external services like Gmail. Despite these measures, lingering questions remain regarding accountability. Should AI developers shoulder greater responsibility for the security of integrations with third-party applications? Finance Yahoo emphasized the need for businesses to audit AI tool permissions, particularly in sectors where data breaches could have severe repercussions.
The implications of ShadowLeak extend beyond individual user concerns. Cybersecurity analyst Nicolas Krassas remarked on social media that the vulnerability could affect over 5 million business users globally, based on estimates of OpenAI’s user base. The flaw’s nature makes it particularly elusive; it does not rely on phishing attempts or malware installations, but rather a targeted email that lands directly in the inbox.
As AI technologies become increasingly integrated into business operations, industry experts are advocating for heightened regulatory oversight. Discussions on social media platforms, such as The Cyber Security Hub, reflect a growing community demand for mandatory vulnerability disclosures in AI products. The rise of autonomous agents may lead to a new era of AI-mediated threats that traditional cybersecurity measures struggle to counter.
To mitigate risks associated with such vulnerabilities, organizations are encouraged to adopt layered defenses. This includes disabling unnecessary AI integrations, monitoring email traffic for unusual HTML, and educating staff about the dangers of over-reliance on automated tools. As noted in Cybersecurity News, ShadowLeak is not an isolated incident but indicative of broader systemic issues in how AI processes untrusted inputs.
Looking ahead, the discovery of ShadowLeak may catalyze advancements in AI security, including improved anomaly detection and blockchain-verified prompts. The incident serves as a stark reminder that as AI becomes more embedded in daily operations, the vulnerabilities it introduces require ongoing vigilance and proactive engineering to stay ahead of emerging threats.