← Back to Library

VIGIL: Defending LLM Agents Against Tool Stream Injection via Verify-Before-Commit

Authors: Junda Lin, Zhaomeng Zhou, Zhi Zheng, Shuochen Liu, Tong Xu, Yong Chen, Enhong Chen

Published: 2026-01-09

arXiv ID: 2601.05755v2

Added to Library: 2026-01-15 03:01 UTC

Red Teaming

📄 Abstract

LLM agents operating in open environments face escalating risks from indirect prompt injection, particularly within the tool stream where manipulated metadata and runtime feedback hijack execution flow. Existing defenses encounter a critical dilemma as advanced models prioritize injected rules due to strict alignment while static protection mechanisms sever the feedback loop required for adaptive reasoning. To reconcile this conflict, we propose \textbf{VIGIL}, a framework that shifts the paradigm from restrictive isolation to a verify-before-commit protocol. By facilitating speculative hypothesis generation and enforcing safety through intent-grounded verification, \textbf{VIGIL} preserves reasoning flexibility while ensuring robust control. We further introduce \textbf{SIREN}, a benchmark comprising 959 tool stream injection cases designed to simulate pervasive threats characterized by dynamic dependencies. Extensive experiments demonstrate that \textbf{VIGIL} outperforms state-of-the-art dynamic defenses by reducing the attack success rate by over 22\% while more than doubling the utility under attack compared to static baselines, thereby achieving an optimal balance between security and utility.

🔍 Key Points

  • Introduction of VIGIL framework that employs a verify-before-commit paradigm to improve the security of LLM agents against tool stream injection attacks.
  • Development of SIREN benchmark containing 959 simulated injection cases to comprehensively evaluate LLM agent resilience against multifaceted threats.
  • Demonstration of VIGIL's performance, showing over a 22% reduction in attack success rate (ASR) while more than doubling the utility under attack (UA) compared to static defenses, thus addressing the rigidity-utility trade-off.
  • Detailed breakdown of VIGIL's components: Intent Anchor for constraint synthesis, Perception Sanitizer for input cleansing, Speculative Reasoner for trajectory exploration, and Grounding Verifier for validation.
  • Ablation studies affirming the critical role of each VIGIL component in maintaining security and reasoning flexibility.

💡 Why This Paper Matters

The study of VIGIL presents significant advancements in securing LLM agents operating in open environments against sophisticated tool stream injections. It showcases a novel framework that enhances the robustness of agent decision-making without compromising utility, providing a comprehensive solution to balance security measures with operational effectiveness. This work is crucial as it directly addresses a vital gap in AI safety, particularly in the dynamic landscape of interaction between LLMs and external inputs.

🎯 Why It's Interesting for AI Security Researchers

This paper will be of great interest to AI security researchers as it addresses emerging vulnerabilities in LLM implementations, specifically the risks associated with indirect prompt injections. The proposed VIGIL framework offers a sophisticated defense mechanism, which could inspire further research and development in defensive architectures against similar attacks. Additionally, the SIREN benchmark sets a new standard for evaluating agent resilience, thus serving as a critical tool for future studies in the field.

📚 Read the Full Paper