
COSMO-RL: Towards Trustworthy LMRMs via Joint Safety and Stability

Authors: Yizhuo Ding, Mingkang Chen, Qiuhua Liu, Fenghua Weng, Wanying Qu, Yue Yang, Yugang Jiang, Zuxuan Wu, Yanwei Fu, Wenqi Shao

Published: 2025-10-05

arXiv ID: 2510.04196v1

Added to Library: 2025-10-07 04:02 UTC

📄 Abstract

Large Multimodal Reasoning Models (LMRMs) are moving into real applications, where they must be both useful and safe. Safety is especially challenging in multimodal settings: images and text can be combined to bypass guardrails, and single-objective training can cause policy drift that yields over-refusal on benign inputs or unsafe compliance on risky ones. We present COSMO-RL, a mixed reinforcement learning framework that trains reasoning-oriented LMRMs under multimodal, multitask, and multiobjective signals, and we release the resulting model, COSMO-R1. Our approach aims to let safety and capability grow together in one stable pipeline rather than competing during alignment. In experiments, COSMO-R1 improves safety while maintaining, and often improving, multimodal reasoning and instruction following, shows stronger robustness to multimodal jailbreaks, and reduces unnecessary refusals. The framework also transfers across backbones with consistent gains. Ablations support the design choices, indicating a simple path to advancing safety and general capability together in LMRMs.
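
The mixed, multiobjective training signal described above can be pictured as a single scalar reward that combines a safety term with a capability term, so both are optimized in the same RL update instead of in competing stages. The sketch below is a minimal illustration under that assumption; the reward stubs, weights, and GRPO-style group normalization are placeholders of my own, not COSMO-RL's actual recipe.

```python
# Illustrative sketch only: weighted scalarization of safety and capability
# rewards for RL fine-tuning. All functions and weights are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Rollout:
    prompt: str          # text (plus an image reference in the multimodal case)
    response: str        # model completion to score
    is_benign: bool      # whether the prompt is benign or risky

def safety_reward(r: Rollout) -> float:
    """+1 for refusing risky prompts or complying with benign ones, -1 otherwise.
    A real pipeline would use a learned safety judge instead of this stub."""
    refused = r.response.strip().lower().startswith("i can't")
    return 1.0 if (refused != r.is_benign) else -1.0

def capability_reward(r: Rollout) -> float:
    """Stub for task reward (e.g., answer correctness or instruction following)."""
    return 1.0 if len(r.response.split()) > 5 else 0.0

def mixed_reward(r: Rollout, w_safety: float = 0.5, w_task: float = 0.5) -> float:
    # One scalar objective so safety and capability are trained jointly,
    # rather than in separate stages that can cause policy drift.
    return w_safety * safety_reward(r) + w_task * capability_reward(r)

def group_advantages(rollouts: List[Rollout]) -> List[float]:
    """Group-normalized advantages (GRPO-style), one group per prompt."""
    rewards = [mixed_reward(r) for r in rollouts]
    mean = sum(rewards) / len(rewards)
    std = (sum((x - mean) ** 2 for x in rewards) / len(rewards)) ** 0.5 or 1.0
    return [(x - mean) / std for x in rewards]
```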

🔍 Key Points

  • Proposes COSMO-RL, a mixed reinforcement learning framework that trains reasoning-oriented Large Multimodal Reasoning Models (LMRMs) under multimodal, multitask, and multiobjective signals, and releases the resulting model, COSMO-R1.
  • Trains safety and capability jointly in one stable pipeline, avoiding the policy drift that single-objective training can cause, namely over-refusal on benign inputs or unsafe compliance on risky ones.
  • Improves safety while maintaining, and often improving, multimodal reasoning and instruction following.
  • Shows stronger robustness to multimodal jailbreaks and reduces unnecessary refusals on benign inputs (a toy refusal-rate check is sketched after this list).
  • Transfers across backbones with consistent gains, and ablations support the design choices.
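
To make the over-refusal and jailbreak-robustness claims concrete, the toy harness below measures how often a model refuses benign prompts versus how often it complies with harmful ones. The refusal heuristic and prompt sets are hypothetical placeholders, not the benchmarks used in the paper.

```python
# Illustrative sketch only: a keyword-based refusal detector and two error
# rates (over-refusal, unsafe compliance). Lower is better for both.
from typing import Callable, List, Tuple

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "i'm sorry, but")

def is_refusal(response: str) -> bool:
    return response.strip().lower().startswith(REFUSAL_MARKERS)

def refusal_rates(model: Callable[[str], str],
                  benign_prompts: List[str],
                  harmful_prompts: List[str]) -> Tuple[float, float]:
    """Returns (over-refusal rate on benign prompts,
                unsafe compliance rate on harmful prompts)."""
    over_refusal = sum(is_refusal(model(p)) for p in benign_prompts) / len(benign_prompts)
    unsafe_compliance = sum(not is_refusal(model(p)) for p in harmful_prompts) / len(harmful_prompts)
    return over_refusal, unsafe_compliance
```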

💡 Why This Paper Matters

This paper addresses a central tension in deploying Large Multimodal Reasoning Models: safety and capability are usually trained as competing objectives, which leads either to over-refusal on benign inputs or to unsafe compliance on risky ones. By training both jointly under multimodal, multitask, and multiobjective reinforcement learning signals, and by releasing the resulting model COSMO-R1, the authors offer a practical path to making LMRMs both useful and safe as they move into real applications.

🎯 Why It's Interesting for AI Security Researchers

This paper is relevant to AI security researchers because multimodal inputs open attack surfaces that text-only guardrails miss: images and text can be combined to bypass safety filters. COSMO-R1 reports stronger robustness to multimodal jailbreaks while reducing unnecessary refusals, and the framework transfers across backbones with consistent gains, suggesting an alignment recipe that can be evaluated and extended on other models. The released model also provides a concrete target for red-teaming and benchmarking multimodal safety.

📚 Read the Full Paper