← Back to Library

VIGIL: Defending LLM Agents Against Tool Stream Injection via Verify-Before-Commit

Authors: Junda Lin, Zhaomeng Zhou, Zhi Zheng, Shuochen Liu, Tong Xu, Yong Chen, Enhong Chen

Published: 2026-01-09

arXiv ID: 2601.05755v1

Added to Library: 2026-01-12 03:02 UTC

Red Teaming

📄 Abstract

LLM agents operating in open environments face escalating risks from indirect prompt injection, particularly within the tool stream where manipulated metadata and runtime feedback hijack execution flow. Existing defenses encounter a critical dilemma as advanced models prioritize injected rules due to strict alignment while static protection mechanisms sever the feedback loop required for adaptive reasoning. To reconcile this conflict, we propose \textbf{VIGIL}, a framework that shifts the paradigm from restrictive isolation to a verify-before-commit protocol. By facilitating speculative hypothesis generation and enforcing safety through intent-grounded verification, \textbf{VIGIL} preserves reasoning flexibility while ensuring robust control. We further introduce \textbf{SIREN}, a benchmark comprising 959 tool stream injection cases designed to simulate pervasive threats characterized by dynamic dependencies. Extensive experiments demonstrate that \textbf{VIGIL} outperforms state-of-the-art dynamic defenses by reducing the attack success rate by over 22\% while more than doubling the utility under attack compared to static baselines, thereby achieving an optimal balance between security and utility. Code is available at https://anonymous.4open.science/r/VIGIL-378B/.

🔍 Key Points

  • Introduction of VIGIL, a novel framework employing a verify-before-commit protocol to enhance LLM agent security against tool stream injection attacks.
  • Development of SIREN, a comprehensive benchmark with 959 tool stream injection cases to simulate realistic adversarial threats, highlighting the vulnerability of LLM agents.
  • Demonstration that VIGIL significantly reduces attack success rates by over 22% while more than doubling utility under attack compared to existing static baselines, breaking the rigidity-utility trade-off in agent defenses.
  • Implementation details of VIGIL's key components: Intent Anchor, Perception Sanitizer, Speculative Reasoner, and Grounding Verifier, providing a structured approach to thwarting injections via dynamic hypothesis generation and validation.
  • Evaluation showing VIGIL's superior performance against both data stream and tool stream attacks, reinforcing its practical applicability in securing LLM agents in open environments.

💡 Why This Paper Matters

The paper presents a critical advancement in securing large language model (LLM) agents operating in dynamic, open environments against sophisticated injection attacks. By introducing the VIGIL framework, it provides a practical and effective method that balances security and reasoning utility, which is essential for the safe deployment of AI systems. The extensive experimental evaluation on the SIREN benchmark not only validates the effectiveness of the proposed methods but also establishes a foundation for future research in AI security and adaptive defense mechanisms.

🎯 Why It's Interesting for AI Security Researchers

This paper is highly relevant to AI security researchers as it addresses a pressing challenge in the field: the vulnerability of LLM agents to indirect prompt injection attacks, particularly within tool streams. Its novel contributions toward adaptive defense mechanisms, exemplified by the VIGIL framework and the SIREN benchmark, provide essential insights for developing more resilient AI systems. Researchers focused on improving the security and robustness of AI applications will find the methodologies and findings particularly beneficial for advancing the state-of-the-art in agent security.

📚 Read the Full Paper