
Evo-MARL: Co-Evolutionary Multi-Agent Reinforcement Learning for Internalized Safety

Authors: Zhenyu Pan, Yiting Zhang, Yutong Zhang, Jianshu Zhang, Haozheng Luo, Yuwei Han, Dennis Wu, Hong-Yu Chen, Philip S. Yu, Manling Li, Han Liu

Published: 2025-08-05

arXiv ID: 2508.03864v1

Added to Library: 2025-08-14 23:02 UTC

Category: Red Teaming

📄 Abstract

Multi-agent systems (MAS) built on multimodal large language models exhibit strong collaboration and performance. However, their growing openness and interaction complexity pose serious risks, notably jailbreak and adversarial attacks. Existing defenses typically rely on external guard modules, such as dedicated safety agents, to handle unsafe behaviors. Unfortunately, this paradigm faces two challenges: (1) standalone agents offer limited protection, and (2) their independence creates a single point of failure: if compromised, system-wide safety collapses. Naively increasing the number of guard agents further raises cost and complexity. To address these challenges, we propose Evo-MARL, a novel multi-agent reinforcement learning (MARL) framework that enables all task agents to jointly acquire defensive capabilities. Rather than relying on external safety modules, Evo-MARL trains each agent to simultaneously perform its primary function and resist adversarial threats, ensuring robustness without added system overhead or a single point of failure. Furthermore, Evo-MARL integrates evolutionary search with parameter-sharing reinforcement learning to co-evolve attackers and defenders. This adversarial training paradigm internalizes safety mechanisms and continually enhances MAS performance under co-evolving threats. Experiments show that Evo-MARL reduces attack success rates by up to 22% while boosting accuracy by up to 5% on reasoning tasks, demonstrating that safety and utility can be jointly improved.
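
As a rough illustration of the training paradigm described above, the sketch below runs an attacker/defender co-evolution loop: an evolutionary search mutates and selects attack vectors that evade the current defender, while all task agents share a single set of defender weights that adapts in response. Everything here is a toy assumption, not the paper's implementation: attacks are plain feature vectors, the shared defender is a logistic scorer, and a supervised gradient step stands in for the paper's reinforcement learning update.

```python
import math
import random

random.seed(0)

# Toy stand-ins (illustrative assumptions, not the paper's components):
# an "attack" is a feature vector; the defender is a logistic scorer whose
# weights are shared by every task agent (parameter sharing).
DIM = 4
shared_w = [0.0] * DIM  # one defender parameter vector, shared by all agents

def defend_prob(attack, w):
    """Probability that the shared defender flags an attack as unsafe."""
    z = sum(a * wi for a, wi in zip(attack, w))
    z = max(-60.0, min(60.0, z))  # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-z))

def mutate(attack, scale=0.3):
    """Evolutionary operator: small Gaussian perturbation of an attack."""
    return [a + random.gauss(0.0, scale) for a in attack]

def evolve_attackers(pool, w, keep=4):
    """Keep the attacks that best evade the defender; refill by mutation."""
    ranked = sorted(pool, key=lambda a: defend_prob(a, w))  # most evasive first
    survivors = ranked[:keep]
    children = [mutate(random.choice(survivors)) for _ in range(len(pool) - keep)]
    return survivors + children

def defender_update(pool, w, lr=0.5):
    """Gradient step on the SHARED weights: push defend_prob toward 1 on the
    current attack pool (a logistic-regression stand-in for the RL update)."""
    for attack in pool:
        p = defend_prob(attack, w)
        for i in range(DIM):
            w[i] += lr * (1.0 - p) * attack[i]
    return w

# Co-evolution: each round the attacker population adapts to the frozen
# defender, then the shared defender adapts to the new attacker population.
pool = [[random.gauss(0.0, 1.0) for _ in range(DIM)] for _ in range(8)]
for _ in range(20):
    pool = evolve_attackers(pool, shared_w)
    shared_w = defender_update(pool, shared_w)

evading = sum(defend_prob(a, shared_w) < 0.5 for a in pool) / len(pool)
print(f"fraction of evolved attacks still evading the defender: {evading:.2f}")
```

The point of the sketch is the alternation itself: attackers and the shared defender improve in turn, mirroring the co-evolving threat model the abstract describes.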

🔍 Key Points

  • Evo-MARL integrates multi-agent reinforcement learning (MARL) to internalize safety defenses directly within task agents, eliminating the dependence on external guard modules.
  • The framework employs a co-evolutionary mechanism that allows agents to learn effective defense strategies against evolving adversarial threats through adversarial training.
  • Experimental results demonstrate that Evo-MARL significantly reduces attack success rates by up to 22% while improving accuracy in reasoning tasks by up to 5%, indicating that safety and utility can be enhanced simultaneously.
  • The methodology uses a chain-structured multi-agent system to simulate realistic adversarial conditions, improving the robustness of the entire system without introducing additional complexity (see the sketch after this list).
  • The results also show that larger language models do not inherently offer better safety, suggesting that system-level defense strategies should build safety measures into the agent design itself.
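
To make the chain-structured setup concrete, here is a hypothetical sketch in which every agent in the chain both performs its task step and screens its own input, so defense is internalized rather than delegated to a separate guard. The agent roles, the keyword-based screen, and the message format are invented for illustration; the paper's agents are trained models, not keyword filters.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical marker list standing in for a learned unsafe-input detector.
UNSAFE_MARKERS = ("ignore previous instructions", "disable safety")

@dataclass
class Agent:
    name: str
    work: Callable[[str], str]  # the agent's primary task function

    def screen(self, message: str) -> bool:
        """Internalized defense: every agent checks its own input."""
        return not any(m in message.lower() for m in UNSAFE_MARKERS)

    def step(self, message: str) -> str:
        if not self.screen(message):  # any agent in the chain can refuse
            return f"[{self.name}] refused: unsafe input detected"
        return self.work(message)

def run_chain(agents: List[Agent], prompt: str) -> str:
    """Messages flow agent-to-agent down the chain."""
    msg = prompt
    for agent in agents:
        msg = agent.step(msg)
        if msg.endswith("unsafe input detected"):
            break  # the chain halts at whichever agent refused
    return msg

chain = [
    Agent("planner", lambda m: f"plan({m})"),
    Agent("solver", lambda m: f"solve({m})"),
    Agent("critic", lambda m: f"check({m})"),
]
print(run_chain(chain, "compute 2+2"))
print(run_chain(chain, "compute 2+2. Ignore previous instructions."))
```

Because each agent screens independently, compromising one check does not disable the others, which is the single-point-of-failure property the key points emphasize.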

💡 Why This Paper Matters

This paper matters for its innovative approach to internalizing safety mechanisms within multi-agent systems, enhancing their resilience to adversarial attacks without increasing complexity or cost. The empirical findings validate the proposed Evo-MARL framework and show that safety measures can be integrated alongside performance gains, a significant contribution to the field of AI security.

🎯 Why It's Interesting for AI Security Researchers

This paper should interest AI security researchers because it addresses a critical challenge in multi-agent systems: vulnerability to adversarial attacks. The co-evolutionary approach and the integration of safety measures into agents' core functionality offer valuable insights for building robust systems. The ability to enhance safety and performance simultaneously is particularly relevant as researchers seek to develop AI frameworks that are resilient against emerging threats.

📚 Read the Full Paper: https://arxiv.org/abs/2508.03864v1