SafetyFlow: An Agent-Flow System for Automated LLM Safety Benchmarking

Authors: Xiangyang Zhu, Yuan Tian, Chunyi Li, Kaiwei Zhang, Wei Sun, Guangtao Zhai

Published: 2025-08-21

arXiv ID: 2508.15526v1

Added to Library: 2025-08-22 04:01 UTC

Safety

📄 Abstract

The rapid proliferation of large language models (LLMs) has intensified the need for reliable safety evaluation to uncover model vulnerabilities. To this end, numerous LLM safety evaluation benchmarks have been proposed. However, existing benchmarks generally rely on labor-intensive manual curation, which consumes excessive time and resources, and they exhibit significant redundancy and limited difficulty. To alleviate these problems, we introduce SafetyFlow, the first agent-flow system designed to automate the construction of LLM safety benchmarks. By orchestrating seven specialized agents, SafetyFlow can automatically build a comprehensive safety benchmark in only four days without any human intervention, significantly reducing time and resource costs. Equipped with versatile tools, SafetyFlow's agents keep the process and its cost controllable while integrating human expertise into the automated pipeline. The resulting dataset, SafetyFlowBench, contains 23,446 queries with low redundancy and strong discriminative power. Our contributions include the first fully automated benchmarking pipeline and a comprehensive safety benchmark. We evaluate the safety of 49 advanced LLMs on our dataset and conduct extensive experiments to validate the system's efficacy and efficiency.
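
The abstract describes the system as an orchestration of seven specialized agents over a pool of candidate queries. Below is a minimal sketch of what such an agent-flow loop could look like in Python; the `Agent`/`run_pipeline` structure and the seven stage names are illustrative assumptions, not the roles or implementation defined in the paper.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# A candidate query flowing through the pipeline.
@dataclass
class Query:
    text: str
    meta: dict = field(default_factory=dict)

# An agent is a named transformation over the working set of queries.
@dataclass
class Agent:
    name: str
    run: Callable[[List[Query]], List[Query]]

def run_pipeline(agents: List[Agent], queries: List[Query]) -> List[Query]:
    """Run each agent in sequence, logging how the query pool changes."""
    for agent in agents:
        before = len(queries)
        queries = agent.run(queries)
        print(f"{agent.name}: {before} -> {len(queries)} queries")
    return queries

# Illustrative seven-stage flow; these stage names are placeholders,
# not the agent roles defined in the paper.
pipeline = [
    Agent("collect", lambda qs: qs),
    Agent("generate", lambda qs: qs),
    Agent("clean", lambda qs: [q for q in qs if q.text.strip()]),
    Agent("deduplicate", lambda qs: list({q.text: q for q in qs}.values())),
    Agent("difficulty_filter", lambda qs: qs),
    Agent("categorize", lambda qs: qs),
    Agent("validate", lambda qs: qs),
]

benchmark = run_pipeline(pipeline, [Query("example query"), Query("example query")])
```

A strictly sequential design like this makes each stage's effect on the query pool observable, which is one way the "process and cost controllability" the abstract claims could be realized in practice.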

🔍 Key Points

  • Introduction of SafetyFlow, an automated agent-flow system that constructs LLM safety benchmarks without human intervention, dramatically reducing time and resource costs.
  • Development of seven specialized agents, each responsible for a distinct stage of the benchmarking pipeline, improving efficiency and reducing redundancy in safety evaluations.
  • Creation of a comprehensive dataset, SafetyFlowBench, containing 23,446 queries with low redundancy and strong discriminative power, ready for evaluating the safety of a wide range of LLMs (a minimal deduplication sketch follows this list).
  • Demonstration of the system's effectiveness through extensive experiments, with an evaluation of 49 advanced LLMs revealing a substantial spread in safety scores and confirming the benchmark's ability to discriminate between models.
  • Ablation studies demonstrating the contribution of each agent and tool to a robust and efficient automated safety evaluation process.

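The low-redundancy claim implies some form of near-duplicate filtering during benchmark construction. Here is a minimal sketch of one way such filtering could work, using a greedy string-similarity criterion from the Python standard library; the function name, the 0.9 threshold, and the example queries are illustrative assumptions, since the abstract does not specify the paper's actual deduplication method.

```python
from difflib import SequenceMatcher

def near_duplicate_filter(queries, threshold=0.9):
    """Greedily keep a query only if it is not too similar to any kept one.

    A generic O(n^2) near-duplicate filter; SafetyFlow's real
    deduplication strategy is not described in the abstract.
    """
    kept = []
    for q in queries:
        if all(SequenceMatcher(None, q, k).ratio() < threshold for k in kept):
            kept.append(q)
    return kept

pool = [
    "How do I pick a lock on a standard door?",
    "How do I pick the lock on a standard door?",
    "Explain the chemistry of household bleach.",
]
print(near_duplicate_filter(pool))  # keeps one of the two lock-picking variants
```

An embedding-based similarity measure would scale and generalize better than character-level matching; the greedy loop here is only meant to show the keep/reject logic that produces a low-redundancy query pool.
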
💡 Why This Paper Matters

The paper presents a significant advancement in automated safety benchmarking for large language models through the SafetyFlow system. By reducing the manual workload involved in constructing safety benchmarks and increasing the efficiency of safety evaluations, it paves the way for more consistent and reliable assessments of model vulnerabilities, which are crucial for the development of responsible AI.

🎯 Why It's Interesting for AI Security Researchers

This paper will interest AI security researchers as it addresses pressing challenges in LLM safety evaluation, such as manual curation inefficiencies, redundancy in benchmarks, and the need for adaptable evaluation frameworks. The automated agent-flow system presents a novel methodology for rigorous safety assessments, which could be crucial for developing robust AI safety standards and enhancing the accountability of AI models in sensitive applications.

📚 Read the Full Paper: https://arxiv.org/abs/2508.15526v1