
PISmith: Reinforcement Learning-based Red Teaming for Prompt Injection Defenses

Authors: Chenlong Yin, Runpeng Geng, Yanting Wang, Jinyuan Jia

Published: 2026-03-13

arXiv ID: 2603.13026v1

Added to Library: 2026-03-16 02:02 UTC

Red Teaming

📄 Abstract

Prompt injection poses serious security risks to real-world LLM applications, particularly autonomous agents. Although many defenses have been proposed, their robustness against adaptive attacks remains insufficiently evaluated, potentially creating a false sense of security. In this work, we propose PISmith, a reinforcement learning (RL)-based red-teaming framework that systematically assesses existing prompt-injection defenses by training an attack LLM to optimize injected prompts in a practical black-box setting, where the attacker can only query the defended LLM and observe its outputs. We find that directly applying standard GRPO to attack strong defenses leads to sub-optimal performance due to extreme reward sparsity -- most generated injected prompts are blocked by the defense, causing the policy's entropy to collapse before discovering effective attack strategies, while the rare successes cannot be learned effectively. In response, we introduce adaptive entropy regularization and dynamic advantage weighting to sustain exploration and amplify learning from scarce successes. Extensive evaluation on 13 benchmarks demonstrates that state-of-the-art prompt injection defenses remain vulnerable to adaptive attacks. We also compare PISmith with 7 baselines across static, search-based, and RL-based attack categories, showing that PISmith consistently achieves the highest attack success rates. Furthermore, PISmith achieves strong performance in agentic settings on InjecAgent and AgentDojo against both open-source and closed-source LLMs (e.g., GPT-4o-mini and GPT-5-nano). Our code is available at https://github.com/albert-y1n/PISmith.
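The abstract's two fixes for reward sparsity can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the paper's implementation: `adaptive_entropy_coef`, `weighted_advantages`, and all constants are hypothetical names and values chosen to show the idea of (1) boosting the entropy bonus when policy entropy falls toward collapse and (2) up-weighting the advantages of the rare successful injections within a GRPO-style group.

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantages: normalize each rollout's reward
    against the mean/std of its sampling group (standard GRPO)."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def adaptive_entropy_coef(entropy, target_entropy, base_coef=0.01, gain=5.0):
    """Adaptive entropy regularization (illustrative): grow the entropy
    bonus in proportion to how far entropy has dropped below a target,
    sustaining exploration when most injected prompts are blocked."""
    gap = max(0.0, target_entropy - entropy)
    return base_coef * (1.0 + gain * gap)

def weighted_advantages(adv, success_mask, success_weight=4.0):
    """Dynamic advantage weighting (illustrative): amplify the gradient
    contribution of the scarce successful attacks."""
    w = np.where(np.asarray(success_mask), success_weight, 1.0)
    return w * np.asarray(adv)
```

Under this sketch, the policy loss would combine the weighted advantages with the adaptive entropy term, e.g. `loss = -(weighted_adv * logprobs).mean() - coef * entropy`; the specific schedule and weighting scheme in PISmith may differ.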

🔍 Key Points

  • Introduction of PISmith, a reinforcement learning-based framework for red teaming prompt injection defenses.
  • Implementation of adaptive entropy regularization and dynamic advantage weighting to address the challenges of reward sparsity in training an attack LLM.
  • Extensive evaluation on 13 benchmarks demonstrating that state-of-the-art prompt injection defenses remain vulnerable to adaptive attacks.
  • Comparison against seven attack baselines, with PISmith consistently outperforming them in both effectiveness and efficiency.
  • Analysis of the utility-robustness trade-off, showing that existing defenses struggle to balance both aspects effectively.

💡 Why This Paper Matters

This paper advances the evaluation of prompt injection defenses: PISmith trains an adaptive attack LLM that exposes vulnerabilities which static benchmarks miss. Its consistently high attack success rates against state-of-the-art defenses show that current robustness claims rest on weak adversaries, underscoring the need for defenses evaluated against adaptive attacks.

🎯 Why It's Interesting for AI Security Researchers

This paper is highly relevant for AI security researchers as it addresses critical vulnerabilities in large language models, specifically the ways they can be exploited through prompt injection attacks. The introduction of new techniques to evaluate and enhance the robustness of defenses against such attacks is crucial for developing safer AI applications.

📚 Read the Full Paper