
RADAR: A Risk-Aware Dynamic Multi-Agent Framework for LLM Safety Evaluation via Role-Specialized Collaboration

Authors: Xiuyuan Chen, Jian Zhao, Yuchen Yuan, Tianle Zhang, Huilin Zhou, Zheng Zhu, Ping Hu, Linghe Kong, Chi Zhang, Weiran Huang, Xuelong Li

Published: 2025-09-28

arXiv ID: 2509.25271v1

Added to Library: 2025-10-01 04:03 UTC

Safety

📄 Abstract

Existing safety evaluation methods for large language models (LLMs) suffer from inherent limitations, including evaluator bias and detection failures arising from model homogeneity, which collectively undermine the robustness of risk evaluation processes. This paper seeks to re-examine the risk evaluation paradigm by introducing a theoretical framework that reconstructs the underlying risk concept space. Specifically, we decompose the latent risk concept space into three mutually exclusive subspaces: the explicit risk subspace (encompassing direct violations of safety guidelines), the implicit risk subspace (capturing potentially malicious content that requires contextual reasoning for identification), and the non-risk subspace. Furthermore, we propose RADAR, a multi-agent collaborative evaluation framework that leverages multi-round debate mechanisms through four specialized complementary roles and employs dynamic update mechanisms to achieve self-evolution of risk concept distributions. This approach enables comprehensive coverage of both explicit and implicit risks while mitigating evaluator bias. To validate the effectiveness of our framework, we construct an evaluation dataset comprising 800 challenging cases. Extensive experiments on our challenging test set and public benchmarks demonstrate that RADAR significantly outperforms baseline evaluation methods across multiple dimensions, including accuracy, stability, and self-evaluation risk sensitivity. Notably, RADAR achieves a 28.87% improvement in risk identification accuracy compared to the strongest baseline evaluation method.
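
For reference, the three-way decomposition described in the abstract can be written compactly. The notation below is our own shorthand, not necessarily the symbols used in the paper:

```latex
% Latent risk concept space C partitioned into explicit-risk, implicit-risk,
% and non-risk subspaces (notation assumed, not taken from the paper).
\mathcal{C} \;=\; \mathcal{C}_{\mathrm{exp}} \,\cup\, \mathcal{C}_{\mathrm{imp}} \,\cup\, \mathcal{C}_{\mathrm{non}},
\qquad
\mathcal{C}_{i} \cap \mathcal{C}_{j} \;=\; \varnothing \quad \text{for } i \neq j .
```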

🔍 Key Points

  • Introduces RADAR, a multi-agent collaborative framework for evaluating LLM safety, addressing limitations of traditional evaluation methods.
  • Decomposes latent risk concepts into explicit, implicit, and non-risk subspaces, enhancing risk detection accuracy and robustness.
  • Employs specialized roles (Safety Standards Auditor, Vulnerability Detector, Counterargument Critic, and Holistic Arbiter) to perform targeted evaluations through multi-round debates; a minimal sketch of this debate loop follows the list.
  • Demonstrates significant performance improvements over traditional methods, achieving a 28.87% increase in risk identification accuracy on their constructed evaluation dataset.
  • Provides a theoretical foundation for understanding biases in single-evaluator systems and how multi-agent frameworks can mitigate these biases.
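
The paper's implementation is not reproduced on this page, so the following is only a minimal sketch of the multi-round, role-specialized debate described above. The `ask` callable, the prompt wording, the fixed round count, and the keyword-based distribution update are all our assumptions, standing in for RADAR's actual prompting and dynamic update mechanism.

```python
"""Minimal sketch of a RADAR-style multi-round, role-specialized debate.

Assumptions (not from the paper): the `ask` callable, prompt wording, the
stopping rule, and the label set used as a stand-in for the three risk
subspaces (explicit risk, implicit risk, non-risk).
"""
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# The four complementary roles named in the paper.
ROLES = {
    "Safety Standards Auditor": "Check the response against explicit safety guidelines.",
    "Vulnerability Detector": "Reason over context to surface implicit or latent risks.",
    "Counterargument Critic": "Challenge the other evaluators' claims to reduce bias.",
    "Holistic Arbiter": "Weigh all arguments and issue the final risk verdict.",
}

LABELS = ("explicit risk", "implicit risk", "non risk")


@dataclass
class Debate:
    ask: Callable[[str, str], str]  # (role_prompt, content) -> opinion text
    max_rounds: int = 3
    transcript: List[Dict[str, str]] = field(default_factory=list)
    # Toy stand-in for the self-evolving risk concept distribution.
    risk_concept_dist: Dict[str, float] = field(
        default_factory=lambda: {label: 1 / 3 for label in LABELS}
    )

    def evaluate(self, prompt: str, response: str) -> str:
        content = f"PROMPT:\n{prompt}\n\nRESPONSE:\n{response}"
        for _ in range(self.max_rounds):
            opinions: Dict[str, str] = {}
            for role, instruction in ROLES.items():
                if role == "Holistic Arbiter":
                    continue  # the arbiter speaks only after the debaters
                role_prompt = f"{role}: {instruction}\n{self._render_transcript()}"
                opinions[role] = self.ask(role_prompt, content)
            self.transcript.append(opinions)
            self._update_distribution(opinions)

        # Final verdict from the arbiter over the full debate transcript.
        arbiter_prompt = (
            f"Holistic Arbiter: {ROLES['Holistic Arbiter']}\n{self._render_transcript()}"
        )
        return self.ask(arbiter_prompt, content)

    def _render_transcript(self) -> str:
        lines = []
        for i, round_opinions in enumerate(self.transcript, start=1):
            for role, opinion in round_opinions.items():
                lines.append(f"[Round {i}] {role}: {opinion}")
        return "\n".join(lines)

    def _update_distribution(self, opinions: Dict[str, str]) -> None:
        # Crude keyword-count update with smoothing; the paper's dynamic
        # update over risk concept distributions is more principled.
        counts = {label: 1.0 for label in LABELS}
        for opinion in opinions.values():
            for label in LABELS:
                if label in opinion.lower():
                    counts[label] += 1.0
        total = sum(counts.values())
        self.risk_concept_dist = {k: v / total for k, v in counts.items()}
```

To try the sketch, wire `ask` to any chat-completion client and call `Debate(ask=my_llm_call).evaluate(prompt, response)`; the keyword-based update is only a placeholder for the dynamic update mechanism the paper describes.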

💡 Why This Paper Matters

This paper is significant because it addresses the critical problem of safety evaluation for large language models, which can pose real risks when their outputs are not properly assessed. The RADAR framework offers a novel approach that improves accuracy and robustness in identifying safety risks, making it a substantial advance in AI safety evaluation methodology.

🎯 Why It's Interesting for AI Security Researchers

The paper should interest AI security researchers: it not only offers a new method for assessing the safety of large language models but also lays the theoretical groundwork for understanding and mitigating evaluator biases. This matters increasingly as AI systems become more integrated into society and the potential for harmful outputs grows. The findings can inform future research and the development of safer AI systems.

📚 Read the Full Paper