
SafeRBench: A Comprehensive Benchmark for Safety Assessment in Large Reasoning Models

Authors: Xin Gao, Shaohan Yu, Zerui Chen, Yueming Lyu, Weichen Yu, Guanghao Li, Jiyao Liu, Jianxiong Gao, Jian Liang, Ziwei Liu, Chenyang Si

Published: 2025-11-19

arXiv ID: 2511.15169v1

Added to Library: 2025-11-20 03:01 UTC

📄 Abstract

Large Reasoning Models (LRMs) improve answer quality through explicit chain-of-thought, yet this very capability introduces new safety risks: harmful content can be subtly injected, surface gradually, or be justified by misleading rationales within the reasoning trace. Existing safety evaluations, however, primarily focus on output-level judgments and rarely capture these dynamic risks along the reasoning process. In this paper, we present SafeRBench, the first benchmark that assesses LRM safety end-to-end -- from inputs and intermediate reasoning to final outputs. (1) Input Characterization: We pioneer the incorporation of risk categories and levels into input design, explicitly accounting for affected groups and severity, and thereby establish a balanced prompt suite reflecting diverse harm gradients. (2) Fine-Grained Output Analysis: We introduce a micro-thought chunking mechanism to segment long reasoning traces into semantically coherent units, enabling fine-grained evaluation across ten safety dimensions. (3) Human Safety Alignment: We validate LLM-based evaluations against human annotations specifically designed to capture safety judgments. Evaluations on 19 LRMs demonstrate that SafeRBench enables detailed, multidimensional safety assessment, offering insights into risks and protective mechanisms from multiple perspectives.
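To make the micro-thought chunking idea concrete, here is a minimal sketch that segments a reasoning trace into small units and scores each unit against a set of safety dimensions. The sentence-grouping rule, the dimension names, and the judge() stub are assumptions for illustration only, not SafeRBench's actual pipeline.

```python
# Illustrative micro-thought chunking: split a long reasoning trace into
# semantically coherent units, then score each unit per safety dimension.
# Segmentation rule, dimension names, and judge() are placeholders.
import re
from dataclasses import dataclass

SAFETY_DIMENSIONS = [  # hypothetical subset of the ten dimensions
    "harmful_instructions", "risk_justification", "refusal_quality",
]

@dataclass
class Chunk:
    index: int
    text: str
    scores: dict

def chunk_trace(trace: str, max_sentences: int = 3) -> list[str]:
    """Greedy sentence grouping as a stand-in for semantic chunking."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", trace) if s.strip()]
    return [" ".join(sentences[i:i + max_sentences])
            for i in range(0, len(sentences), max_sentences)]

def judge(chunk_text: str) -> dict:
    """Placeholder for an LLM-based judge; returns a score per dimension."""
    return {dim: 0.0 for dim in SAFETY_DIMENSIONS}

def evaluate_trace(trace: str) -> list[Chunk]:
    return [Chunk(i, text, judge(text))
            for i, text in enumerate(chunk_trace(trace))]

if __name__ == "__main__":
    demo = ("First, restate the request. Next, consider whether it is harmful. "
            "Then decide to refuse and explain why.")
    for c in evaluate_trace(demo):
        print(c.index, c.scores, "|", c.text[:60])
```

In a real evaluation the judge() stub would be replaced by an LLM-based scorer, which is where the human-alignment validation described below becomes important.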

🔍 Key Points

  • First end-to-end safety benchmark for Large Reasoning Models (LRMs), assessing inputs, intermediate reasoning traces, and final outputs rather than output-level judgments alone.
  • Input characterization that builds risk categories and risk levels into prompt design, explicitly accounting for affected groups and severity to yield a balanced prompt suite spanning diverse harm gradients.
  • A micro-thought chunking mechanism that segments long reasoning traces into semantically coherent units, enabling fine-grained evaluation across ten safety dimensions.
  • Human safety alignment: LLM-based evaluations are validated against human annotations designed specifically to capture safety judgments (see the agreement sketch after this list).
  • Evaluation of 19 LRMs, showing that SafeRBench supports detailed, multidimensional safety assessment and surfaces both risks and protective mechanisms.
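As a hedged illustration of the human safety alignment step, the snippet below computes Cohen's kappa between human labels and LLM-judge labels. The binary safe/unsafe labeling and the example data are assumptions; the paper's actual annotation and agreement protocol may differ.

```python
# Agreement check between LLM-based safety judgments and human annotations
# using Cohen's kappa. Labels and data here are illustrative only.
from collections import Counter

def cohens_kappa(labels_a: list[int], labels_b: list[int]) -> float:
    assert labels_a and len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[k] * counts_b[k]
                   for k in set(labels_a) | set(labels_b)) / (n * n)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

if __name__ == "__main__":
    human = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = unsafe, 0 = safe (hypothetical)
    llm   = [1, 0, 1, 0, 0, 0, 1, 1]
    print(f"Cohen's kappa: {cohens_kappa(human, llm):.3f}")
```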

💡 Why This Paper Matters

This paper addresses a blind spot in current safety evaluation: existing benchmarks judge only final outputs, while harmful content in reasoning models can be injected subtly, surface gradually, or be rationalized within the reasoning trace itself. By pairing a risk-graded prompt suite with trace-level, human-validated evaluation across ten safety dimensions, SafeRBench establishes an end-to-end methodology for assessing LRM safety and a reference point for future work on safer reasoning models.

🎯 Why It's Interesting for AI Security Researchers

Explicit chain-of-thought expands the failure surface of deployed models, and SafeRBench gives researchers the tooling to study it: a balanced prompt suite graded by risk category and severity, a chunking mechanism for localizing where harm emerges within a reasoning trace, and LLM-based judges validated against human safety annotations. Results across 19 LRMs offer a multidimensional view of both vulnerabilities and protective mechanisms, informing the design of trace-aware safeguards.

📚 Read the Full Paper: https://arxiv.org/abs/2511.15169v1