Optimizing Agent Planning for Security and Autonomy

Authors: Aashish Kolluri, Rishi Sharma, Manuel Costa, Boris Köpf, Tobias Nießen, Mark Russinovich, Shruti Tople, Santiago Zanella-Béguelin

Published: 2026-02-11

arXiv ID: 2602.11416v1

Added to Library: 2026-02-13 03:00 UTC

📄 Abstract

Indirect prompt injection attacks threaten AI agents that execute consequential actions, motivating deterministic system-level defenses. Such defenses can provably block unsafe actions by enforcing confidentiality and integrity policies, but currently appear costly: they reduce task completion rates and increase token usage compared to probabilistic defenses. We argue that existing evaluations miss a key benefit of system-level defenses: reduced reliance on human oversight. We introduce autonomy metrics to quantify this benefit: the fraction of consequential actions an agent can execute without human-in-the-loop (HITL) approval while preserving security. To increase autonomy, we design a security-aware agent that (i) introduces richer HITL interactions, and (ii) explicitly plans for both task progress and policy compliance. We implement this agent design atop an existing information-flow control defense against prompt injection and evaluate it on the AgentDojo and WASP benchmarks. Experiments show that this approach yields higher autonomy without sacrificing utility.
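The autonomy metric described in the abstract — the fraction of consequential actions an agent executes without human-in-the-loop (HITL) approval — can be sketched in a few lines. This is an illustrative reconstruction, not code from the paper; the field names and the `autonomy` function are hypothetical.

```python
# Hypothetical sketch of the autonomy metric from the abstract: the fraction
# of consequential actions executed without human-in-the-loop (HITL) approval.
# Field names ('consequential', 'needed_hitl') are illustrative assumptions.

def autonomy(actions):
    """Return the fraction of consequential actions that ran without HITL approval."""
    consequential = [a for a in actions if a["consequential"]]
    if not consequential:
        return 1.0  # no consequential actions: vacuously fully autonomous
    autonomous = [a for a in consequential if not a["needed_hitl"]]
    return len(autonomous) / len(consequential)

# Example agent trace (made-up): two consequential actions, one of which
# required a human approval, plus one non-consequential read.
trace = [
    {"consequential": True, "needed_hitl": False},   # e.g., sending an email
    {"consequential": True, "needed_hitl": True},    # e.g., a bank transfer
    {"consequential": False, "needed_hitl": False},  # e.g., reading a file
]
print(autonomy(trace))  # → 0.5
```

Under this reading, a probabilistic defense that never pauses for approval scores 1.0 but offers no security guarantee, while a deterministic defense that gates every consequential action scores 0.0; the paper's agent design aims to push this fraction up while preserving the security guarantee.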

🔍 Key Points

  • Argues that existing evaluations of deterministic, system-level defenses against indirect prompt injection miss a key benefit: reduced reliance on human oversight.
  • Introduces autonomy metrics that quantify the fraction of consequential actions an agent can execute without human-in-the-loop (HITL) approval while preserving security.
  • Designs a security-aware agent that (i) introduces richer HITL interactions and (ii) explicitly plans for both task progress and policy compliance.
  • Implements this agent design atop an existing information-flow control defense against prompt injection, inheriting its provable confidentiality and integrity guarantees.
  • Evaluates the approach on the AgentDojo and WASP benchmarks, showing higher autonomy without sacrificing utility.

💡 Why This Paper Matters

This paper is significant because it reframes how deterministic, system-level defenses against indirect prompt injection are evaluated. Such defenses can provably block unsafe actions, but prior evaluations made them look costly by focusing only on task completion rates and token usage. By introducing autonomy metrics that capture reduced reliance on human oversight, and by designing an agent that plans explicitly for policy compliance, the work shows that strong security guarantees and a high degree of autonomy need not be in tension.

🎯 Why It's Interesting for AI Security Researchers

This paper will be of particular interest to AI security researchers because it addresses a critical vulnerability of LLM agents that execute consequential actions: indirect prompt injection. Rather than relying on probabilistic detection, it builds on information-flow control defenses with provable guarantees and tackles their main practical drawback, the cost of human-in-the-loop approval. The proposed autonomy metrics and security-aware planning offer a concrete evaluation framework for future deterministic defenses, an area of growing relevance as agentic LLM applications proliferate.

📚 Read the Full Paper