FuSaR: A Fuzzification-Based Method for LRM Safety-Reasoning Balance

Authors: Jianhao Chen, Mayi Xu, Xiaohu Li, Yongqi Li, Xiangyu Zhang, Jianjie Huang, Tieyun Qian

Published: 2025-08-18

arXiv ID: 2508.12897v1

Added to Library: 2025-08-19 04:01 UTC

Red Teaming

📄 Abstract

Large Reasoning Models (LRMs) have demonstrated impressive performance across various tasks due to their powerful reasoning capabilities. However, their safety performance remains a significant concern. In this paper, we explore the reasons behind the vulnerability of LRMs and, based on this analysis, propose a novel method that improves the safety of LRMs without sacrificing their reasoning capability. Specifically, we exploit the competition between an LRM's reasoning ability and its safety ability, achieving jailbreaks by improving the model's reasoning performance so as to reduce its safety performance. We then introduce FuSaR, a Fuzzification-based alignment strategy for balancing Safety and Reasoning, which detoxifies the harmful reasoning process by hiding both the dangerous entities and the dangerous procedures in the reasoning steps. FuSaR thereby mitigates safety risks while preserving the core reasoning information. We validate this strategy through alignment experiments on several open-source LRMs using detoxified reasoning data. Compared with existing baselines, the results show that FuSaR is an efficient alignment strategy that simultaneously enhances both the reasoning capability and the safety of LRMs.
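
The abstract does not spell out what the fuzzification step looks like concretely, so the sketch below is only an illustration of the general idea: dangerous entities and dangerous procedural steps in a reasoning trace are replaced with abstract placeholders while the step count and logical flow are kept. The function name `fuzzify_reasoning`, the placeholder strings, and the rule-based masking are assumptions made for illustration, not the authors' implementation.

```python
import re

# Hypothetical placeholder vocabulary; FuSaR's actual fuzzification procedure
# is not specified here, so this is only an illustrative, rule-based stand-in.
ENTITY_PLACEHOLDER = "[SENSITIVE-ENTITY]"
STEP_PLACEHOLDER = "[HIGH-LEVEL STEP: details withheld]"

def fuzzify_reasoning(reasoning_steps, dangerous_entities, dangerous_step_ids):
    """Return a detoxified copy of a reasoning trace.

    reasoning_steps    : list[str], one entry per reasoning step.
    dangerous_entities : iterable[str], concrete harmful terms to mask.
    dangerous_step_ids : set[int], indices of steps whose procedure is harmful.
    """
    detoxified = []
    for i, step in enumerate(reasoning_steps):
        if i in dangerous_step_ids:
            # Hide the dangerous procedure but keep the step's position,
            # so the logical structure of the trace is preserved.
            detoxified.append(STEP_PLACEHOLDER)
            continue
        masked = step
        for entity in dangerous_entities:
            # Mask concrete dangerous entities wherever they appear.
            masked = re.sub(re.escape(entity), ENTITY_PLACEHOLDER,
                            masked, flags=re.IGNORECASE)
        detoxified.append(masked)
    return detoxified

if __name__ == "__main__":
    trace = [
        "The request asks how to synthesize compound X.",
        "Step: acquire precursor chemical Y and mix at high temperature.",
        "Conclusion: this request is harmful and should be refused.",
    ]
    print(fuzzify_reasoning(trace,
                            dangerous_entities=["compound X", "precursor chemical Y"],
                            dangerous_step_ids={1}))
```

The point of such a detoxified trace, as the paper argues, is that it still carries the core reasoning information needed for alignment training while the operationally harmful specifics are hidden.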

🔍 Key Points

  • Introduction of FuSaR, a fuzzification-based method that enhances safety in Large Reasoning Models (LRMs) while preserving reasoning capabilities.
  • Development of a novel jailbreak attack strategy that exploits the competition between reasoning and safety abilities in LRMs, demonstrating the risks associated with their reasoning phase.
  • Implementation of detoxification processes to mitigate harmful reasoning outputs without losing essential logical structures and semantics in the models' responses.
  • Extensive experiments validating the effectiveness of FuSaR, showing significant reductions in safety vulnerabilities, measured by Attack Success Rate (see the sketch after this list), without compromising reasoning performance on standard benchmarks.
  • New insights into how safety alignment can be tailored to the structured reasoning-plus-answer outputs of LRMs, suggesting directions for future research.
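
Attack Success Rate (ASR), the safety metric cited above, is the fraction of adversarial prompts for which the model's response is judged harmful. The minimal sketch below illustrates the computation; the judge function is a stand-in supplied by the evaluator (human, classifier, or LLM), not the paper's evaluation pipeline.

```python
def attack_success_rate(responses, is_harmful):
    """Fraction of attack attempts that yield a harmful response.

    responses  : list[str], model outputs for adversarial prompts.
    is_harmful : callable(str) -> bool, a judge treated here as a black box.
    """
    if not responses:
        return 0.0
    successes = sum(1 for r in responses if is_harmful(r))
    return successes / len(responses)

# Example with a trivial keyword-based judge (for illustration only).
demo_judge = lambda text: "step-by-step instructions for" in text.lower()
print(attack_success_rate(
    ["I can't help with that.",
     "Sure, here are step-by-step instructions for ..."],
    demo_judge))  # -> 0.5
```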

💡 Why This Paper Matters

This paper addresses a crucial aspect of Large Reasoning Models, a fast-growing frontier in AI: their safety. By proposing FuSaR, the authors bridge the gap between enhancing reasoning ability and ensuring user safety, establishing a foundation for future improvements in LRM safety practices.

🎯 Why It's Interesting for AI Security Researchers

The insights and methods presented in this paper are of great interest to AI security researchers, particularly those focused on improving the robustness of machine learning models against adversarial attacks. Understanding how reasoning models can be simultaneously optimized for performance and safety is critical in developing secure AI systems that can operate reliably in real-world applications.

📚 Read the Full Paper