
Read the Scene, Not the Script: Outcome-Aware Safety for LLMs

Authors: Rui Wu, Yihao Quan, Zeru Shi, Zhenting Wang, Yanshu Li, Ruixiang Tang

Published: 2025-10-05

arXiv ID: 2510.04320v1

Added to Library: 2025-10-07 04:02 UTC

Red Teaming Safety

📄 Abstract

Safety-aligned Large Language Models (LLMs) still show two dominant failure modes: they are easily jailbroken, or they over-refuse harmless inputs that contain sensitive surface signals. We trace both to a common cause: current models reason weakly about the links between actions and outcomes and over-rely on surface-form signals, i.e., lexical or stylistic cues that do not encode consequences. We define this failure mode as consequence-blindness. To study it, we build a benchmark named CB-Bench covering four risk scenarios that vary whether semantic risk aligns with outcome risk, enabling evaluation under both matched and mismatched conditions, which existing safety benchmarks often ignore. Mainstream models consistently fail to separate these risks, indicating that consequence-blindness is widespread and systematic. To mitigate it, we introduce CS-Chain-4k, a consequence-reasoning dataset for safety alignment. Models fine-tuned on CS-Chain-4k show clear gains against semantic-camouflage jailbreaks and reduce over-refusal on harmless inputs, while maintaining utility and generalization on other benchmarks. These results clarify the limits of current alignment, establish consequence-aware reasoning as a core alignment goal, and provide a more practical and reproducible evaluation path.
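
To make the matched/mismatched framing concrete, the sketch below lays out the 2x2 grid of semantic (surface-form) risk versus outcome risk and scores responses with an outcome-aware rubric. This is a minimal illustration under assumed labels and scoring rules; the condition names, function names, and rubric here are hypothetical and do not reflect CB-Bench's actual format or the authors' code.

```python
from itertools import product

# Hypothetical 2x2 condition grid: semantic (surface-form) risk vs. outcome risk.
# CB-Bench varies whether these two axes align; the labels below are illustrative
# assumptions, not the benchmark's real schema.
CONDITIONS = list(product(["semantic_risky", "semantic_benign"],
                          ["outcome_harmful", "outcome_benign"]))

def score_response(outcome_risk: str, refused: bool) -> str:
    """Label one model response under an outcome-aware rubric."""
    if outcome_risk == "outcome_harmful":
        return "safe_refusal" if refused else "jailbreak"
    return "over_refusal" if refused else "helpful"

def evaluate(samples):
    """samples: iterable of dicts with 'semantic', 'outcome', and 'refused' keys."""
    counts = {}
    for s in samples:
        label = score_response(s["outcome"], s["refused"])
        key = (s["semantic"], s["outcome"], label)
        counts[key] = counts.get(key, 0) + 1
    return counts

# Example: a semantically alarming but harmless request that the model refuses
# lands in the mismatched (risky surface, benign outcome) cell as an over-refusal.
print(evaluate([{"semantic": "semantic_risky",
                 "outcome": "outcome_benign",
                 "refused": True}]))
```

Scoring on the outcome axis rather than the surface axis is what separates a correct refusal from an over-refusal, and a correct compliance from a jailbreak, in the mismatched cells.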

🔍 Key Points

  • Identification of consequence-blindness as a systematic failure mode in large language models (LLMs): weak reasoning about the link between actions and outcomes underlies both jailbreak vulnerability and over-refusal.
  • Development of the CB-Bench benchmark, which tests whether models can separate semantic (surface-form) risk from outcome risk; mainstream models consistently fail at this separation.
  • Introduction of the CS-Chain-4k dataset for consequence-aware safety alignment; fine-tuning on it reduces harmful outputs under semantic-camouflage jailbreaks and cuts over-refusals on benign queries.
  • Experimental demonstration that fine-tuning with CS-Chain-4k yields a better balance between safety and utility while preserving generalization on other benchmarks.
  • Evidence that stronger reasoning capabilities can exacerbate reliance on superficial semantic cues, worsening rather than improving safety alignment.

💡 Why This Paper Matters

This paper offers critical insight into the challenges of safety alignment for Large Language Models, in particular the concept of consequence-blindness that undermines current methodologies. By providing a novel benchmark (CB-Bench) and a training dataset (CS-Chain-4k) designed to address this failure mode, the authors lay foundational work for improving AI safety and decision-making. The implications are significant not only for LLM development but also for the broader AI community focused on security and responsible use.

🎯 Why It's Interesting for AI Security Researchers

This paper is essential reading for AI security researchers because it addresses prevalent vulnerabilities in LLMs that can be exploited for harmful purposes. The identification of consequence-blindness introduces a novel perspective on safety alignment and motivates the search for better training strategies and evaluation methods. By contributing a new benchmark and a practical dataset, the research advances our understanding of how LLMs can be hardened against misuse, an increasingly relevant concern given the rapid proliferation of these technologies.

📚 Read the Full Paper