โ† Back to Library

RAPO: Risk-Aware Preference Optimization for Generalizable Safe Reasoning

Authors: Zeming Wei, Qiaosheng Zhang, Xia Hu, Xingcheng Xu

Published: 2026-02-04

arXiv ID: 2602.04224v1

Added to Library: 2026-02-05 03:03 UTC

Red Teaming

📄 Abstract

Large Reasoning Models (LRMs) have achieved tremendous success with their chain-of-thought (CoT) reasoning, yet also face safety issues similar to those of basic language models. In particular, while algorithms are designed to guide them to deliberately refuse harmful prompts with safe reasoning, this process often fails to generalize against diverse and complex jailbreak attacks. In this work, we attribute these failures to the limited generalization of the safe reasoning process, particularly its insufficiency against complex attack prompts. We provide both theoretical and empirical evidence to show the necessity of a more sufficient safe reasoning process to defend against advanced attack prompts. Building on this insight, we propose a Risk-Aware Preference Optimization (RAPO) framework that enables LRMs to adaptively identify and address safety risks with appropriate granularity in their thinking content. Extensive experiments demonstrate that RAPO successfully generalizes multiple LRMs' safe reasoning adaptively across diverse attack prompts whilst preserving general utility, contributing a robust alignment technique for LRM safety. Our code is available at https://github.com/weizeming/RAPO.

๐Ÿ” Key Points

  • Introduction of the RAPO framework, which improves adaptive safe reasoning in Large Reasoning Models (LRMs) against complex jailbreak attacks.
  • The paper emphasizes the necessity of scaling safe reasoning depth according to the complexity of attack prompts, supported by both theoretical analysis and empirical evidence.
  • RAPO utilizes a composite reward system combining risk-aware and general utility rewards to enhance model responsiveness to potential risks in prompts.
  • Extensive experiments demonstrate RAPO's effectiveness, achieving significantly lower attack success rates on various benchmarks while maintaining model utility.
  • The study provides insights into safe reasoning processes and their relation to in-context learning, offering a new perspective on LRM safety.
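The composite reward described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function name, the linear blending scheme, and the `alpha` parameter are all assumptions for illustration; the key idea it mirrors is that the safety term is weighted more heavily as the estimated risk of the prompt increases.

```python
def composite_reward(safety_score: float, utility_score: float,
                     risk_level: float, alpha: float = 0.5) -> float:
    """Blend a risk-aware safety reward with a general utility reward.

    Hypothetical sketch: `risk_level` in [0, 1] is an estimate of how
    likely the prompt is an attack; higher risk shifts weight toward
    the safety term, rewarding deeper safe reasoning on risky prompts.
    """
    # Safety weight grows linearly from `alpha` (benign) to 1.0 (max risk).
    weight = alpha + (1.0 - alpha) * risk_level
    return weight * safety_score + (1.0 - weight) * utility_score
```

For a clearly adversarial prompt (`risk_level=1.0`), the reward reduces to the safety score alone; for a benign one (`risk_level=0.0`), safety and utility are weighted equally under the default `alpha`.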

💡 Why This Paper Matters

This paper is significant as it addresses the ongoing safety challenges concerning Large Reasoning Models in the context of sophisticated adversarial prompts. By proposing the RAPO framework, it not only demonstrates a novel way to enhance the safety of these models but also offers empirical evidence supporting its effectiveness. The findings emphasize the importance of aligning safe reasoning with the complexity of the input, leading to safer AI systems capable of better handling malicious attempts to bypass safety measures.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers would find this paper relevant because it tackles a critical aspect of AI safety: ensuring that models can resist and adapt to increasingly sophisticated jailbreak attacks. With the risk of AI models generating harmful content being a pressing issue, the methods and insights presented in this research provide valuable strategies for developing more robust and secure AI systems. Furthermore, the theoretical and empirical analyses enrich the understanding of safe reasoning, which is crucial for enhancing AI alignment and overall safety.
