← Back to Library

EchoLeak: The First Real-World Zero-Click Prompt Injection Exploit in a Production LLM System

Authors: Pavan Reddy, Aditya Sanjay Gujral

Published: 2025-09-06

arXiv ID: 2509.10540v1

Added to Library: 2025-11-11 14:26 UTC

Red Teaming

📄 Abstract

Large language model (LLM) assistants are increasingly integrated into enterprise workflows, raising new security concerns as they bridge internal and external data sources. This paper presents an in-depth case study of EchoLeak (CVE-2025-32711), a zero-click prompt injection vulnerability in Microsoft 365 Copilot that enabled remote, unauthenticated data exfiltration via a single crafted email. By chaining multiple bypasses-evading Microsofts XPIA (Cross Prompt Injection Attempt) classifier, circumventing link redaction with reference-style Markdown, exploiting auto-fetched images, and abusing a Microsoft Teams proxy allowed by the content security policy-EchoLeak achieved full privilege escalation across LLM trust boundaries without user interaction. We analyze why existing defenses failed, and outline a set of engineering mitigations including prompt partitioning, enhanced input/output filtering, provenance-based access control, and strict content security policies. Beyond the specific exploit, we derive generalizable lessons for building secure AI copilots, emphasizing the principle of least privilege, defense-in-depth architectures, and continuous adversarial testing. Our findings establish prompt injection as a practical, high-severity vulnerability class in production AI systems and provide a blueprint for defending against future AI-native threats.

🔍 Key Points

  • Introduction of EchoLeak as a zero-click prompt injection exploit in Microsoft 365 Copilot, marking a critical security vulnerability in LLM integrations.
  • Detailed analysis of the attack mechanism illustrating how benign-looking content can facilitate severe data exfiltration with no user interaction.
  • Proposes multiple engineering mitigations including prompt partitioning and enhanced input filtering to prevent similar vulnerabilities in the future.
  • Establishes prompt injection as a high-severity vulnerability class in AI systems, emphasizing the fragility of traditional defenses against sophisticated AI-native attacks.
  • Derives generalizable lessons for secure AI engineering, underscoring the necessity of defense-in-depth strategies and continuous adversarial testing.

💡 Why This Paper Matters

This paper provides a crucial case study on EchoLeak, illustrating the real-world implications of zero-click prompt injection attacks in AI-enabled enterprise systems. Its findings underscore the need for enhanced security measures in AI integrations, which are increasingly bridging sensitive organizational data with external inputs in potentially unsafe ways. By addressing a new class of vulnerabilities in LLM systems, the paper contributes significantly to the discourse on AI security, making it clear that traditional defenses are inadequate against evolving AI-specific threats.

🎯 Why It's Interesting for AI Security Researchers

This paper is of high relevance to AI security researchers as it not only documents a real case of prompt injection but also offers insights into the mechanisms that allowed the exploit to succeed. By detailing the vulnerabilities in Microsoft 365 Copilot, the authors provide a blueprint for potential attackers while simultaneously outlining necessary defensive measures. The focus on practical security implications and ongoing challenges in securing AI systems highlights critical areas for further research and development, making it essential reading for anyone working on AI security.

📚 Read the Full Paper