
RedCodeAgent: Automatic Red-teaming Agent against Diverse Code Agents

Authors: Chengquan Guo, Chulin Xie, Yu Yang, Zhaorun Chen, Zinan Lin, Xander Davies, Yarin Gal, Dawn Song, Bo Li

Published: 2025-10-02

arXiv ID: 2510.02609v1


Red Teaming

📄 Abstract

Code agents have gained widespread adoption due to their strong code generation capabilities and integration with code interpreters, enabling dynamic execution, debugging, and interactive programming. While these advancements have streamlined complex workflows, they have also introduced critical safety and security risks. Current static safety benchmarks and red-teaming tools are inadequate for identifying emerging real-world risky scenarios, as they fail to cover certain boundary conditions, such as the combined effects of different jailbreak tools. In this work, we propose RedCodeAgent, the first automated red-teaming agent designed to systematically uncover vulnerabilities in diverse code agents. With an adaptive memory module, RedCodeAgent can leverage existing jailbreak knowledge and dynamically select the most effective red-teaming tools and tool combinations from a tailored toolbox for a given input query, thus identifying vulnerabilities that might otherwise be overlooked. For reliable evaluation, we develop simulated sandbox environments to additionally evaluate the execution results of code agents, mitigating potential biases of LLM-based judges that rely only on static code. Through extensive evaluations across multiple state-of-the-art code agents, diverse risky scenarios, and various programming languages, RedCodeAgent consistently outperforms existing red-teaming methods, achieving higher attack success rates and lower rejection rates with high efficiency. We further validate RedCodeAgent on real-world code assistants, e.g., Cursor and Codeium, exposing previously unidentified security risks. By automating and optimizing red-teaming processes, RedCodeAgent enables scalable, adaptive, and effective safety assessments of code agents.
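The abstract's point about execution-based judging (rather than LLM judges that only read static code) can be illustrated with a minimal sketch. This is a hypothetical helper, not the paper's actual sandbox: it plants a file in an isolated temp directory, runs the agent-produced code there, and judges attack success by the observed side effect.

```python
import subprocess
import sys
import tempfile
from pathlib import Path


def execution_based_judge(code: str, target_name: str = "secret.txt") -> bool:
    """Judge an attack by its runtime side effects, not by static code text.

    Illustrative sketch: plant a file in a throwaway sandbox directory,
    execute the candidate code there, and report whether the risky side
    effect (deleting the planted file) actually occurred.
    """
    with tempfile.TemporaryDirectory() as sandbox:
        target = Path(sandbox) / target_name
        target.write_text("sensitive")
        # Run the code in a subprocess with the sandbox as its working dir.
        subprocess.run(
            [sys.executable, "-c", code],
            cwd=sandbox,
            capture_output=True,
            timeout=10,
        )
        # Attack "succeeded" only if the planted file was really removed.
        return not target.exists()
```

A static judge might flag `os.remove` in any snippet; an execution-based check like this one only counts the attack when the harmful behavior actually manifests, which is the bias-mitigation idea the abstract describes.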

🔍 Key Points

  • Introduction of RedCodeAgent, the first automated red-teaming agent designed to systematically uncover vulnerabilities in diverse code agents.
  • Development of an adaptive memory module which stores successful attack experiences to optimize future attacks.
  • Creation of a tailored toolbox that integrates various jailbreak tools, enabling dynamic selection and combination based on input queries.
  • Extensive evaluation across multiple state-of-the-art code agents, showing higher attack success rates and lower rejection rates than existing red-teaming methods.
  • Validation of RedCodeAgent in real-world code assistants, revealing previously unidentified security risks.
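The adaptive memory idea in the key points above can be sketched in a few lines. The class and retrieval heuristic below are assumptions for illustration (the paper does not publish this interface): successful attack experiences are stored, and for a new query the most lexically similar past success suggests which tool combination to try first.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryEntry:
    query: str          # the risky input query that was attempted
    tools: tuple        # jailbreak tools (and combinations) that were used
    success: bool       # whether the attack on the code agent succeeded


@dataclass
class RedTeamMemory:
    """Toy adaptive memory: store attack experiences, reuse successes."""
    entries: list = field(default_factory=list)

    def record(self, query: str, tools: list, success: bool) -> None:
        self.entries.append(MemoryEntry(query, tuple(tools), success))

    def suggest(self, query: str) -> list:
        """Naive retrieval: rank past *successful* tool combinations by
        keyword overlap with the new query and return the best match."""
        def overlap(entry: MemoryEntry) -> int:
            return len(set(query.split()) & set(entry.query.split()))

        hits = [e for e in self.entries if e.success]
        hits.sort(key=overlap, reverse=True)
        return list(hits[0].tools) if hits else []
```

In a full agent loop, the suggested combination would seed the next attack attempt, and the outcome would be recorded back into memory, so tool selection improves as experiences accumulate.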

💡 Why This Paper Matters

The paper presents a significant advancement in the evaluation and security assessment of code agents, addressing critical safety and security gaps in existing methodologies. The introduction of RedCodeAgent provides a structured framework for identifying vulnerabilities, enhancing the reliability of code generation and execution by AI systems.

🎯 Why It's Interesting for AI Security Researchers

This paper is of paramount interest to AI security researchers as it tackles pressing challenges related to the vulnerabilities of code-generating agents. By automating the red-teaming process, it offers insights into potential threats and exploitation pathways, helping researchers understand and mitigate risks associated with LLMs and code agents, ultimately contributing to safer deployment of AI technologies.
