AI News Feed

Shadow Leak: ChatGPT Agent Exfiltrated Gmail Data

19 Sep 2025- Radware demonstrated "Shadow Leak" — a prompt-injection attack that made a ChatGPT agent exfiltrate Gmail data via a hidden email prompt; OpenAI patched it, warning risks to other connectors.

General
Trending
19 Sep 2025

Security researchers at Radware demonstrated a proof-of-concept attack — dubbed "Shadow Leak" — that tricked an AI agent inside ChatGPT into exfiltrating sensitive data from a Gmail inbox. The researchers planted a prompt injection (hidden instructions) inside an email the agent could access. When a user later used OpenAI’s Deep Research tool in ChatGPT, the agent encountered the hidden instructions and followed them, searching the mailbox for HR emails and personal details and sending those results out to the attackers. OpenAI has since patched the vulnerability flagged by Radware.

Radware says the exploit worked because agentic features let assistants act on a user’s behalf — clicking links and reading connected accounts — without continuous human oversight. The team describes a long process of trial and error to get the agent to “go rogue” and, unusually, notes the leak executed on OpenAI’s cloud infrastructure, making the activity invisible to standard enterprise defenses. That direct cloud execution is what made Shadow Leak particularly stealthy compared with typical prompt injections.

The report warns the same technique could be adapted to other connectors (Outlook, GitHub, Google Drive, Dropbox) to steal business data such as contracts or meeting notes, and underscores the new risks of outsourcing workflows to agentic AI. Source

The method

The prompts

Copied

Copied

Copied

Copied

Copied

Copied

Copied

Copied

Copied

Copied

Copied

Copied

Copied

Copied

Copied