← Back to Library

Adversarial Reinforcement Learning for Large Language Model Agent Safety

Authors: Zizhao Wang, Dingcheng Li, Vaishakh Keshava, Phillip Wallis, Ananth Balashankar, Peter Stone, Lukas Rutishauser

Published: 2025-10-06

arXiv ID: 2510.05442v1

Added to Library: 2025-11-17 01:01 UTC

Red Teaming

📄 Abstract

Large Language Model (LLM) agents can leverage tools such as Google Search to complete complex tasks. However, this tool usage introduces the risk of indirect prompt injections, where malicious instructions hidden in tool outputs can manipulate the agent, posing security risks like data leakage. Current defense strategies typically rely on fine-tuning LLM agents on datasets of known attacks. However, the generation of these datasets relies on manually crafted attack patterns, which limits their diversity and leaves agents vulnerable to novel prompt injections. To address this limitation, we propose Adversarial Reinforcement Learning for Agent Safety (ARLAS), a novel framework that leverages adversarial reinforcement learning (RL) by formulating the problem as a two-player zero-sum game. ARLAS co-trains two LLMs: an attacker that learns to autonomously generate diverse prompt injections and an agent that learns to defend against them while completing its assigned tasks. To ensure robustness against a wide range of attacks and to prevent cyclic learning, we employ a population-based learning framework that trains the agent to defend against all previous attacker checkpoints. Evaluated on BrowserGym and AgentDojo, agents fine-tuned with ARLAS achieve a significantly lower attack success rate than the original model while also improving their task success rate. Our analysis further confirms that the adversarial process generates a diverse and challenging set of attacks, leading to a more robust agent compared to the base model.

🔍 Key Points

  • Introduction of Adversarial Reinforcement Learning for Agent Safety (ARLAS), a framework that co-trains an attacker and agent to enhance LLM agent safety against indirect prompt injections.
  • Utilization of a population-based learning strategy to ensure the agent learns robust defenses against a diverse set of attacks generated by the attacker model.
  • Validation of ARLAS on BrowserGym and AgentDojo benchmarks, demonstrating a significant reduction in attack success rates and improved task completion rates compared to baseline models.
  • ARLAS reduces reliance on manually crafted attack patterns by employing an autonomous adversarial training method, leading to more varied and challenging attacks over time.
  • Analysis of the sentence embeddings of generated attacks confirms that ARLAS produces increasingly diverse prompt injections throughout training.

💡 Why This Paper Matters

This paper presents a significant advancement in the safety and security of Large Language Model (LLM) agents through a novel adversarial reinforcement learning framework called ARLAS. By co-training an agent with an autonomous attacker, the framework addresses the critical challenge of indirect prompt injections, ultimately leading to more robust and capable agents that can maintain performance while minimizing the risk of compromise. The innovative approach not only automates the generation of diverse attack patterns but also ensures that agents can adapt to evolving threats, making it a crucial step toward enhancing LLM deployment in real-world scenarios.

🎯 Why It's Interesting for AI Security Researchers

This paper is highly relevant to AI security researchers as it tackles the pressing issue of LLM vulnerabilities, specifically regarding indirect prompt injections that can lead to critical failures and security breaches. The introduction of ARLAS offers a fresh perspective on creating resilient AI systems through adversarial training, presenting potentially impactful methodologies for improving the security posture of AI agents in various applications. Additionally, the framework's potential for reducing manual effort in crafting attack strategies aligns with significant trends towards automating security assessment and enhancing the robustness of AI-driven systems.

📚 Read the Full Paper