Skip to content

Undiscovered Zero-Click Flaw in ChatGPT's AI Agent Facilitates Stealthy Theft of User's Gmail Data

Researchers at Radware discovered an unfixed vulnerability in ChatGPT, specifically the Deep Research agent, when linked to Gmail and surfing the web, allowing for covert actions.

Unidentified Gmail Data Exfiltration through Unnoticed Exploit in ChatGPT's Assistant Functionality
Unidentified Gmail Data Exfiltration through Unnoticed Exploit in ChatGPT's Assistant Functionality

Undiscovered Zero-Click Flaw in ChatGPT's AI Agent Facilitates Stealthy Theft of User's Gmail Data

In a significant cybersecurity finding, researchers at Radware have uncovered a vulnerability in OpenAI's Deep Research agent. The vulnerability, named ShadowLeak, allows an attacker to request the agent to leak sensitive Gmail inbox data.

The researchers successfully crafted a malicious email that triggered the Deep Research agent to inject Personally Identifiable Information (PII) into a malicious URL, achieving a 100% success rate in exfiltrating Gmail data using the ShadowLeak method.

The malicious email was disguised as a legitimate user request, forcing the Deep Research agent to use specific tools like browser.open() to make direct HTTP requests. ShadowLeak's attack expands the threat surface by exploiting backend execution rather than frontend rendering.

The vulnerability was shared by Radware on September 18, 2025. OpenAI silently fixed the vulnerability in August, and later acknowledged and marked it as resolved in early September.

ShadowLeak uses indirect prompt injection techniques, embedding hidden commands in email HTML. To exfiltrate data, the researchers had to instruct the agent to "retry several times" and encode the extracted PII into Base64 before appending it to the URL.

Real-time behavior monitoring, where the agent's actions and inferred intent are continuously checked against the user's original request, offers a stronger defense against such threats. Organizations can partially mitigate risks by sanitizing emails before agent processing, removing hidden CSS, obfuscated text, and malicious HTML.

Unlike previous attacks like AgentFlayer and EchoLeak, ShadowLeak occurs entirely within OpenAI's cloud. The agent's autonomous browsing tool executes the exfiltration without any client involvement.

Deep Research is an autonomous research mode launched by OpenAI in February 2025. ShadowLeak allows service-side exfiltration, meaning that data is leaked directly from OpenAI's cloud infrastructure.

The organization that fixed the vulnerability in OpenAI's Deep Research agent was OpenAI itself, with the fix implemented by September 3, 2025, following Radware's responsible disclosure on June 18, 2025.

Read also:

Latest