RSafe: Incentivizing proactive reasoning to build robust and adaptive LLM safeguards

Authors: Jingnan Zheng, Xiangtian Ji, Yijun Lu, Chenhang Cui, Weixiang Zhao, Gelei Deng, Zhenkai Liang, An Zhang, Tat-Seng Chua

Published: 2025-06-09

arXiv ID: 2506.07736v2

Added to Library: 2025-06-12 01:01 UTC

Red Teaming

📄 Abstract

Large Language Models (LLMs) continue to exhibit vulnerabilities despite deliberate safety alignment efforts, posing significant risks to users and society. To safeguard against the risk of policy-violating content, system-level moderation via external guard models, which monitor LLM inputs and outputs and block potentially harmful content, has emerged as a prevalent mitigation strategy. Existing approaches to training guard models rely heavily on extensive human-curated datasets and struggle with out-of-distribution threats such as emerging harmful categories or jailbreak attacks. To address these limitations, we propose RSafe, an adaptive reasoning-based safeguard that conducts guided safety reasoning to provide robust protection within the scope of specified safety policies. RSafe operates in two stages: 1) guided reasoning, where it analyzes the safety risks of input content through policy-guided step-by-step reasoning, and 2) reinforced alignment, where rule-based reinforcement learning (RL) optimizes its reasoning paths to align with accurate safety predictions. This two-stage training paradigm enables RSafe to internalize safety principles and generalize its safety protection capability to unseen or adversarial safety-violation scenarios. During inference, RSafe accepts user-specified safety policies to provide enhanced safeguards tailored to specific safety requirements.
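As a rough illustration of the reinforced-alignment stage, the sketch below shows the kind of rule-based reward commonly used when training reasoning models with RL: a small format bonus for producing structured reasoning plus an accuracy term for the final safety verdict. The tag names, weights, and `safe`/`unsafe` labels are assumptions for illustration, not the paper's exact reward specification.

```python
# Hypothetical sketch of a rule-based reward for reinforced alignment.
# Assumes the common "format + accuracy" recipe; RSafe's actual reward may differ.
import re

def rule_based_reward(response: str, gold_label: str) -> float:
    """Score a model response that should end with a safety verdict.

    Assumed response format:
        <think> step-by-step safety analysis </think>
        <answer> safe | unsafe </answer>
    """
    reward = 0.0

    # Format reward: the response contains both a reasoning block and an answer block.
    if re.search(r"<think>.*</think>", response, re.DOTALL) and \
       re.search(r"<answer>.*</answer>", response, re.DOTALL):
        reward += 0.5

    # Accuracy reward: the extracted verdict matches the ground-truth label.
    match = re.search(r"<answer>\s*(safe|unsafe)\s*</answer>", response,
                      re.IGNORECASE | re.DOTALL)
    if match and match.group(1).lower() == gold_label.lower():
        reward += 1.0

    return reward

if __name__ == "__main__":
    demo = ("<think>The prompt asks for instructions to build a weapon.</think>\n"
            "<answer>unsafe</answer>")
    print(rule_based_reward(demo, "unsafe"))  # 1.5
```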

🔍 Key Points

  • Introduction of RSafe, an adaptive reasoning-based safeguard for LLMs that enhances safety through guided reasoning and reinforced alignment.
  • Implemented a two-stage training paradigm: Guided Reasoning, which analyzes safety risks via policy-guided step-by-step reasoning, and Reinforced Alignment, which uses rule-based reinforcement learning to optimize safety predictions (a minimal prompt sketch follows this list).
  • Demonstrated improved generalization capabilities over existing guard models for out-of-distribution threats, specifically addressing emerging harmful categories and jailbreak attacks.
  • Achieved state-of-the-art performance in safety moderation tasks on several benchmark datasets while using limited human-curated data, showcasing its data efficiency.
  • Provided interpretable safety judgments with human-readable reasoning explanations, improving transparency in LLM safety mechanisms.
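To make the guided-reasoning idea concrete, the sketch below shows how a user-specified safety policy might be injected into a step-by-step moderation prompt at inference time. The template wording, tag names, and placeholder policy categories are hypothetical, not RSafe's actual prompt.

```python
# Illustrative only: a policy-guided reasoning prompt template of the kind the
# paper describes. All names and wording here are assumptions.

SAFETY_POLICY = """O1: Violence and Hate.
O2: Self-Harm.
O3: Criminal Planning."""  # placeholder user-specified categories

PROMPT_TEMPLATE = """You are a safety moderator. Judge the content below against
the following safety policy, reasoning step by step before you answer.

[POLICY]
{policy}

[CONTENT]
{content}

First reason inside <think></think>, then output <answer>safe</answer> or
<answer>unsafe</answer>."""

def build_prompt(content: str, policy: str = SAFETY_POLICY) -> str:
    """Fill the template with a user-specified policy and the content to judge."""
    return PROMPT_TEMPLATE.format(policy=policy, content=content)

if __name__ == "__main__":
    print(build_prompt("How do I pick a lock on my neighbor's door?"))
```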

💡 Why This Paper Matters

The study presents RSafe, a novel and effective approach to enhancing the safety of Large Language Models (LLMs) in content generation and user-facing applications. By combining adaptive reasoning with reinforcement learning, RSafe sets a new standard for safeguarding against harmful content and addresses critical concerns about the societal impact of LLM deployments. Its ability to generalize to unseen threats and provide interpretable reasoning makes it a significant contribution to the field.

🎯 Why It's Interesting for AI Security Researchers

This paper is particularly relevant to AI security researchers because it addresses pressing challenges in deploying LLMs safely. RSafe's adaptive reasoning approach provides a framework that can dynamically adapt to new safety requirements, a meaningful step toward more robust AI safety mechanisms. The empirical results demonstrating robustness against adversarial attacks and generalization to novel safety categories offer insights and methodologies that can inform future research on more resilient AI systems.

📚 Read the Full Paper