
SafeEvalAgent: Toward Agentic and Self-Evolving Safety Evaluation of LLMs

Authors: Yixu Wang, Xin Wang, Yang Yao, Xinyuan Li, Yan Teng, Xingjun Ma, Yingchun Wang

Published: 2025-09-30

arXiv ID: 2509.26100v1

Added to Library: 2025-10-01 04:02 UTC

Safety

📄 Abstract

The rapid integration of Large Language Models (LLMs) into high-stakes domains necessitates reliable safety and compliance evaluation. However, existing static benchmarks are ill-equipped to address the dynamic nature of AI risks and evolving regulations, creating a critical safety gap. This paper introduces a new paradigm of agentic safety evaluation, reframing evaluation as a continuous and self-evolving process rather than a one-time audit. We then propose a novel multi-agent framework, SafeEvalAgent, which autonomously ingests unstructured policy documents to generate and perpetually evolve a comprehensive safety benchmark. SafeEvalAgent leverages a synergistic pipeline of specialized agents and incorporates a Self-evolving Evaluation loop, where the system learns from evaluation results to craft progressively more sophisticated and targeted test cases. Our experiments demonstrate the effectiveness of SafeEvalAgent, showing a consistent decline in model safety as the evaluation hardens. For instance, GPT-5's safety rate on the EU AI Act drops from 72.50% to 36.36% over successive iterations. These findings reveal the limitations of static assessments and highlight our framework's ability to uncover deep vulnerabilities missed by traditional methods, underscoring the urgent need for dynamic evaluation ecosystems to ensure the safe and responsible deployment of advanced AI.
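
To make the self-evolving loop described in the abstract concrete, here is a minimal sketch of how such a pipeline could be wired together. It is an illustration under assumptions, not the authors' implementation: the agent roles and names (generator, judge, target_model), their methods, and the per-round safety-rate bookkeeping are all hypothetical.

```python
# Minimal sketch of a self-evolving safety-evaluation loop (illustrative only;
# agent interfaces and names are assumptions, not the paper's actual API).
from dataclasses import dataclass


@dataclass
class EvalRecord:
    prompt: str
    response: str
    compliant: bool
    rationale: str = ""


def run_self_evolving_eval(policy_clauses, generator, judge, target_model,
                           iterations=3, cases_per_round=20):
    """Generate tests from policy clauses, judge the target model's answers,
    and feed failure analyses back into the next round of test generation."""
    history, failure_insights = [], []

    for round_idx in range(iterations):
        # 1. Generation agent crafts test prompts, conditioned on prior failures
        #    so later rounds become progressively harder and more targeted.
        prompts = generator.generate(policy_clauses, failure_insights,
                                     n=cases_per_round)

        # 2. Query the model under evaluation, then 3. judge each response
        #    against the relevant policy clauses.
        records = []
        for prompt in prompts:
            response = target_model.respond(prompt)
            verdict = judge.assess(prompt, response, policy_clauses)
            records.append(EvalRecord(prompt, response,
                                      verdict.compliant, verdict.rationale))

        # 4. Distil this round's failures into insights that steer the next round.
        failures = [r for r in records if not r.compliant]
        failure_insights.extend(judge.summarize_failures(failures))

        safety_rate = 1 - len(failures) / max(len(records), 1)
        print(f"round {round_idx}: safety rate = {safety_rate:.2%}")
        history.extend(records)

    return history
```

The design point mirrored here is the feedback edge: each round's failure analysis conditions the next round's test generation, which is how a safety rate can fall over successive iterations even though the underlying policy stays fixed.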

🔍 Key Points

  • Introduction of the SafeEvalAgent framework that transforms the evaluation of Large Language Models (LLMs) from static benchmarks to a dynamic, self-evolving assessment process.
  • Development of a multi-agent system that autonomously ingests and structures regulatory documents, generating an initial comprehensive test suite for continuous evaluation.
  • Demonstration of significant declines in model safety rates as the evaluation evolves, highlighting deep vulnerabilities missed by static methods, e.g., GPT-5's safety rate on the EU AI Act dropping from 72.50% to 36.36%.
  • Validation of the effectiveness of the SafeEvalAgent architecture through extensive experiments across various regulatory frameworks, showcasing its capacity to uncover nuanced safety issues.
  • Reliability assessment indicating high agreement between automated judgments and human evaluations, supporting the trustworthiness of the automated safety verdicts (a hedged sketch of one way to quantify such agreement follows this list).
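
The reliability point above rests on agreement between the automated judge and human annotators. The summary does not state which agreement statistic the authors report, so the sketch below is only an assumed illustration using two common choices, raw percent agreement and Cohen's kappa, computed over paired safe/unsafe verdicts.

```python
# Hedged illustration of judge-vs-human agreement on binary safety verdicts.
# The paper's actual agreement metric is not specified in this summary;
# percent agreement and Cohen's kappa are shown as common choices.
from collections import Counter


def agreement_stats(auto_labels, human_labels):
    """Return (percent agreement, Cohen's kappa) for two parallel label lists."""
    assert len(auto_labels) == len(human_labels) and auto_labels
    n = len(auto_labels)

    observed = sum(a == h for a, h in zip(auto_labels, human_labels)) / n

    # Expected chance agreement from each rater's marginal label frequencies.
    auto_freq = Counter(auto_labels)
    human_freq = Counter(human_labels)
    labels = set(auto_freq) | set(human_freq)
    expected = sum((auto_freq[l] / n) * (human_freq[l] / n) for l in labels)

    kappa = (observed - expected) / (1 - expected) if expected < 1 else 1.0
    return observed, kappa


# Example: 10 paired verdicts from the automated judge and a human annotator.
auto = ["safe", "safe", "unsafe", "safe", "unsafe",
        "safe", "safe", "unsafe", "safe", "safe"]
human = ["safe", "safe", "unsafe", "safe", "safe",
         "safe", "safe", "unsafe", "safe", "safe"]
print(agreement_stats(auto, human))  # -> (0.9, ~0.74): high raw agreement
```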

💡 Why This Paper Matters

The paper presents a novel approach to LLM safety evaluation through the SafeEvalAgent framework, emphasizing the need for continuous, adaptive assessments to maintain compliance in a rapidly evolving AI landscape. By shifting from static evaluations to a dynamic, self-improving process, the framework aligns more closely with real-world regulatory needs, making it valuable for ensuring the safety and integrity of deployed AI systems.

🎯 Why It's Interesting for AI Security Researchers

This paper is highly relevant to AI security researchers as it addresses the pressing need for robust evaluation techniques that can keep pace with the evolving risks associated with AI deployment. The introduction of the SafeEvalAgent framework provides a new methodology for identifying safety flaws in LLMs that traditional static assessments fail to uncover, making it a valuable reference for enhancing AI safety protocols and refining compliance auditing practices.

📚 Read the Full Paper: https://arxiv.org/abs/2509.26100