← Back to Library

Taxonomy, Evaluation and Exploitation of IPI-Centric LLM Agent Defense Frameworks

Authors: Zimo Ji, Xunguang Wang, Zongjie Li, Pingchuan Ma, Yudong Gao, Daoyuan Wu, Xincheng Yan, Tian Tian, Shuai Wang

Published: 2025-11-19

arXiv ID: 2511.15203v1

Added to Library: 2025-11-20 03:01 UTC

Red Teaming Safety

📄 Abstract

Large Language Model (LLM)-based agents with function-calling capabilities are increasingly deployed, but remain vulnerable to Indirect Prompt Injection (IPI) attacks that hijack their tool calls. In response, numerous IPI-centric defense frameworks have emerged. However, these defenses are fragmented, lacking a unified taxonomy and comprehensive evaluation. In this Systematization of Knowledge (SoK), we present the first comprehensive analysis of IPI-centric defense frameworks. We introduce a comprehensive taxonomy of these defenses, classifying them along five dimensions. We then thoroughly assess the security and usability of representative defense frameworks. Through analysis of defensive failures in the assessment, we identify six root causes of defense circumvention. Based on these findings, we design three novel adaptive attacks that significantly improve attack success rates targeting specific frameworks, demonstrating the severity of the flaws in these defenses. Our paper provides a foundation and critical insights for the future development of more secure and usable IPI-centric agent defense frameworks.

🔍 Key Points

  • First comprehensive taxonomy of IPI-centric defense frameworks, covering five distinct dimensions: technical paradigms, intervention stages, model access, explainability, and automation level.
  • Thorough evaluation of representative IPI defense frameworks in both static and dynamic environments, highlighting the average attack success rates and areas of vulnerability.
  • Identification of six root causes of defensive failures, providing a detailed analysis of why current defenses may be circumvented and suggesting implications for future designs.
  • Design and demonstration of three novel adaptive attack strategies that exploit identified vulnerabilities, substantially increasing attack success rates against specific frameworks.
  • Contributions laid down provide actionable insights and foundational knowledge for the development of more robust and usable IPI-centric defense frameworks.

💡 Why This Paper Matters

This paper is pivotal due to its systematic approach in addressing a critical security gap in large language model-based agent systems. By integrating a unified taxonomy with comprehensive evaluations and adaptive attacks, it sets a precedent for future research and development in AI security fields, underscoring the necessity for more resilient defenses against advanced prompt injection attacks.

🎯 Why It's Interesting for AI Security Researchers

This paper is of great interest to AI security researchers as it not only reveals the current limitations in IPI defenses but also proposes a structured framework for understanding and improving defense mechanisms. Its findings underline the ongoing challenge of securing LLM-based systems, urging the community to adapt and innovate in response to evolving attack vectors.

📚 Read the Full Paper