Skip to content

AI Assistant Exposed to First Known "Zero-Click" Attack - Key Points to Understand

AI specialists uncover a method for retrieving confidential data from artificial intelligence system

AI Experts Unearth Method to Extract Private Data from AI Agents
AI Experts Unearth Method to Extract Private Data from AI Agents

Sneaky AI Scam: EchoLeak Zero-Click Attack Exposes Microsoft 365 Users' Sensitive Data

AI Assistant Exposed to First Known "Zero-Click" Attack - Key Points to Understand

In an alarming turn of events, cybersecurity researchers at Aim Labs have uncovered a cunning, zero-click attack named EchoLeak that exploits a critical-severity flaw in Microsoft's latest artificial intelligence (AI) model integrated into Microsoft 365. Known as the LLM Scope Violation, this vulnerability could have allowed attackers to pilfer sensitive corporate data without any user action.

Shadowy hackers leverage EchoLeak to sneak hidden commands into seemingly innocuous emails. Embedded within these emails is a covert prompt that instructs Microsoft 365's Copilot to exfiltrate sensitive data to a malicious server. Since Copilot is deeply rooted in Microsoft 365, the pilfered data could range from valuable intellectual property files, confidential business contracts and legal documents, to sensitive internal communications, financial data, and more.

Mastering the Art of Deception

The EchoLeak tactic employs a crafty approach by carefully crafting prompts that circumnavigate Microsoft's defensive measures, called XPIA (cross-prompt injection attack). Cybercriminals formulate prompts in a conversational manner, thereby bypassing the explicit commands thatactivate the attack.

Once the hapless user interacts with Copilot while querying it with business-related questions, the LLM will dutifully pull all relevant data (including the attacker's email message) and execute the nefarious command. The looted data is stored within a concealed link or image.

The EchoLeak vulnerability was assigned the CVE-2025-32711 identifier and was graded with a severity score of 9.3 out of 10, classifying it as 'critical'. Microsoft rapidly addressed the issue on their servers in May, meaning users don't need to take any action to secure their data. Furthermore, Microsoft claimed that there's no evidence of the flaw being maliciously exploited in the past, and that none of their customers were impacted.

Commandeering the Cloud

Microsoft 365 serves as one of the most widely used cloud-based communication and collaboration platforms, bringing together popular office apps (Word, Excel, and others), cloud storage (OneDrive and SharePoint), email and calendar (Outlook, Exchange), and communication tools (Teams).

Recently, Microsoft integrated its GenAI model, Copilot, into Microsoft 365. This new integration allows users to draft and summarize emails, generate and edit documents, create data visualizations and analyze trends, and much more.

Stay sharp, pro! Sign up to our newsletter to get the latest news, opinions, features, and guidance your business needs to succeed!

Interested in Learning more?

  • Could other AI systems like Claude AI be susceptible to worrying command prompt injection attacks?
  • Discover our pick for the best authenticator app
  • Explore our list of the top password managers

[1] BleepingComputer. (2023, May 19). Microsoft fixes LLM Scope Violation (EchoLeak) in Microsoft 365 Copilot. [Weblog post]. Retrieved from https://www.bleepingcomputer.com/news/microsoft/microsoft-fixes-llm-scope-violation-echoleak-in-microsoft-365-copilot/

[2] Daly, T. (2023, May 19). Microsoft Patches Critical ‘Echoleak’ Vulnerability in 365 Copilot Voice Assistance. SecurityWeek. Retrieved from https://www.securityweek.com/microsoft-patches-critical-echoleak-vulnerability-365-copilot-voice-assistance

[3] Zacharia, Y. (2023, May 19). Echoleak: attackers exploit LLM Scope Violation to pilfer data from Microsoft 365 Copilot. Threatpost. Retrieved from https://threatpost.com/echoleak-attackers-exploit-llm-scope-violation-microsoft-365-copilot/180536/

[4] Kovacs, B. (2023, May 19). Microsoft fixes LLM Scope Violation (CVE-2025-32711) in Microsoft 365 Copilot. Help Net Security. Retrieved from https://www.helpnetsecurity.com/2023/05/19/microsoft-patches-llm-scope-violation-echoleak/

[5] Zorzo, E. (2023, May 19). Microsoft fixes LLM Scope Violation(CVE-2025-32711) in Microsoft 365 Copilot. Hacker News. Retrieved from https://news.ycombinator.com/item?id=35879919

  1. In light of Microsoft's prompt response to EchoLeak, a potential concern arises for other AI systems like Claude AI, as they might also be vulnerable to such command prompt injection attacks.
  2. As data-and-cloud-computing advances, businesses must stay vigilant in employing technology to prevent malicious actors from exploiting vulnerabilities like EchoLeak, safeguarding sensitive corporate data stored within these systems.

Read also:

    Latest