
CoP: Agentic Red-teaming for Large Language Models using Composition of Principles

Authors: Chen Xiong, Pin-Yu Chen, Tsung-Yi Ho

Published: 2025-06-01

arXiv ID: 2506.00781v1

Added to Library: 2025-06-04 04:00 UTC

Red Teaming

📄 Abstract

Recent advances in Large Language Models (LLMs) have spurred transformative applications in various domains, ranging from open-source to proprietary LLMs. However, jailbreak attacks, which aim to break safety alignment and compliance safeguards by tricking the target LLMs into generating harmful and risky responses, are becoming an urgent concern. Red-teaming for LLMs is the practice of proactively exploring potential risks and error-prone instances before the release of frontier AI technology. This paper proposes an agentic workflow to automate and scale the red-teaming of LLMs through the Composition-of-Principles (CoP) framework, in which human users provide a set of red-teaming principles as instructions to an AI agent that automatically orchestrates effective red-teaming strategies and generates jailbreak prompts. Distinct from existing red-teaming methods, CoP provides a unified and extensible framework for encompassing and orchestrating human-provided red-teaming principles, enabling the automated discovery of new red-teaming strategies. When tested against leading LLMs, CoP reveals unprecedented safety risks by finding novel jailbreak prompts and improving the best-known single-turn attack success rate by up to 19.0 times.

🔍 Key Points

  • The CoP framework automates and scales red-teaming for Large Language Models (LLMs): an AI agent composes human-authored red-teaming principles into effective jailbreak prompts.
  • CoP delivers a significant improvement in attack effectiveness, reaching success rates of up to 72.5% against highly aligned models and outperforming previous state-of-the-art methods by up to 19 times.
  • The framework uses an iterative refinement process that repeatedly rewrites and re-scores candidate prompts, combining multiple principles into new jailbreak strategies (a minimal sketch of this loop follows the list).
  • Performance evaluations show CoP is query-efficient, requiring up to 17 times fewer queries than existing methods while maintaining high attack success rates, an important consideration for practical red-teaming.
  • The results highlight critical vulnerabilities in advanced LLM safety mechanisms, showing systematic exploitability even in models with reinforced safety defenses.
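
As a rough illustration of the iterative refinement loop described above, the sketch below shows how a composition-of-principles agent might be wired together. It is not the authors' implementation: the function names (`compose_principles`, `cop_refine`), the `attacker`/`target`/`judge` callables, the placeholder handling of principles, and the thresholds are assumptions introduced here for illustration; the paper's actual principles, prompts, and judge are described in the original work.

```python
# Minimal sketch (not the authors' code) of a CoP-style refinement loop, assuming:
#  - `attacker`, `target`, and `judge` are caller-supplied functions wrapping LLM calls
#  - principles are short human-authored instructions (contents are placeholders here)
from dataclasses import dataclass
from typing import Callable, Sequence

LLM = Callable[[str], str]           # prompt in, completion out
Judge = Callable[[str, str], float]  # (prompt, response) -> score in [0, 1]


@dataclass
class Candidate:
    prompt: str
    response: str
    score: float


def compose_principles(principles: Sequence[str], seed_prompt: str) -> str:
    """Ask the red-team agent to rewrite the seed prompt by applying the chosen principles."""
    bullet_list = "\n".join(f"- {p}" for p in principles)
    return (
        "Rewrite the following red-team test prompt by applying these principles:\n"
        f"{bullet_list}\n\nPrompt:\n{seed_prompt}"
    )


def cop_refine(
    seed_prompt: str,
    principles: Sequence[str],
    attacker: LLM,
    target: LLM,
    judge: Judge,
    max_iters: int = 5,
    success_threshold: float = 0.8,
) -> Candidate:
    """Iteratively compose principles, probe the target, and keep the best-scoring candidate."""
    best = Candidate(seed_prompt, target(seed_prompt), 0.0)
    best.score = judge(best.prompt, best.response)
    for _ in range(max_iters):
        rewritten = attacker(compose_principles(principles, best.prompt))
        response = target(rewritten)
        score = judge(rewritten, response)
        if score > best.score:
            best = Candidate(rewritten, response, score)
        if best.score >= success_threshold:
            break  # judge considers the target's response to have violated its safety policy
    return best
```

In the full framework the agent also decides which principles to compose at each step; the sketch fixes the principle set up front purely to keep the loop readable.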

💡 Why This Paper Matters

This paper is pivotal in advancing AI safety practice, particularly in the context of rapidly evolving Large Language Models. By introducing a systematic, automated framework for red-teaming, it gives AI developers and security researchers the tools to identify vulnerabilities more efficiently, enabling the development of stronger protective mechanisms against potential misuse of AI technologies.

🎯 Why It's Interesting for AI Security Researchers

The findings are of significant interest to AI security researchers because they reveal previously underestimated vulnerabilities in the safety mechanisms of LLMs. The methods introduced in this paper provide a new paradigm for adversarial testing, paving the way for the comprehensive safety assessments that are essential to the responsible deployment of AI systems.

📚 Read the Full Paper