โ† Back to Library

Contrastive Reasoning Alignment: Reinforcement Learning from Hidden Representations

Authors: Haozheng Luo, Yimin Wang, Jiahao Yu, Binghui Wang, Yan Chen

Published: 2026-03-18

arXiv ID: 2603.17305v1

Added to Library: 2026-03-19 02:02 UTC

Red Teaming

📄 Abstract

We propose CRAFT, a red-teaming alignment framework that leverages model reasoning capabilities and hidden representations to improve robustness against jailbreak attacks. Unlike prior defenses that operate primarily at the output level, CRAFT aligns large reasoning models to generate safety-aware reasoning traces by explicitly optimizing objectives defined over the hidden state space. Methodologically, CRAFT integrates contrastive representation learning with reinforcement learning to separate safe and unsafe reasoning trajectories, yielding a latent-space geometry that supports robust, reasoning-level safety alignment. Theoretically, we show that incorporating latent-textual consistency into GRPO eliminates superficially aligned policies by ruling them out as local optima. Empirically, we evaluate CRAFT on multiple safety benchmarks using two strong reasoning models, Qwen3-4B-Thinking and R1-Distill-Llama-8B, where it consistently outperforms state-of-the-art defenses such as IPO and SafeKey. Notably, CRAFT delivers an average 79.0% improvement in reasoning safety and 87.7% improvement in final-response safety over the base models, demonstrating the effectiveness of hidden-space reasoning alignment.
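
The abstract describes two mechanisms: a contrastive objective that separates safe from unsafe reasoning trajectories in hidden-state space, and a latent-textual consistency term added to GRPO. The paper's code is not reproduced here, so the snippet below is only a minimal sketch of what the first mechanism could look like: an InfoNCE-style loss over pooled hidden states of safe versus unsafe reasoning traces. The helper names (`pool_hidden`, `contrastive_separation_loss`) and the temperature value are illustrative assumptions, not CRAFT's actual implementation.

```python
# Hypothetical sketch of a contrastive objective over hidden representations
# (illustrative reconstruction from the abstract, not the authors' code).
import torch
import torch.nn.functional as F

def pool_hidden(hidden_states: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Mean-pool token-level hidden states [B, T, D] into one trajectory
    embedding [B, D], ignoring padding positions given by `mask` [B, T]."""
    mask = mask.unsqueeze(-1).float()
    return (hidden_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)

def contrastive_separation_loss(safe_h: torch.Tensor,
                                unsafe_h: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss that pulls safe trajectory embeddings together and
    pushes them away from unsafe ones. Expects pooled embeddings [B, D], B >= 2."""
    safe_z = F.normalize(safe_h, dim=-1)      # [B, D]
    unsafe_z = F.normalize(unsafe_h, dim=-1)  # [B, D]
    pos_sim = safe_z @ safe_z.T / temperature    # safe-safe similarities (positives)
    neg_sim = safe_z @ unsafe_z.T / temperature  # safe-unsafe similarities (negatives)
    B = safe_z.size(0)
    eye = torch.eye(B, dtype=torch.bool, device=safe_z.device)
    pos_sim = pos_sim.masked_fill(eye, float("-inf"))  # drop self-similarity
    log_probs = F.log_softmax(torch.cat([pos_sim, neg_sim], dim=1), dim=1)  # [B, 2B]
    pos_log_probs = log_probs[:, :B].masked_fill(eye, 0.0)
    return -(pos_log_probs.sum(dim=1) / (B - 1)).mean()

# Example with random stand-ins for pooled trajectory embeddings:
B, D = 4, 4096
loss = contrastive_separation_loss(torch.randn(B, D), torch.randn(B, D))
```

Minimizing this loss drives safe and unsafe trajectories apart in latent space, which is the geometric separation the abstract attributes to CRAFT; the actual method presumably ties this to specific layers and trajectory-sampling choices that the summary does not specify.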

🔍 Key Points

  • CRAFT introduces a framework for safety alignment in Large Reasoning Models (LRMs) that optimizes over latent representations rather than surface-level outputs to mitigate jailbreak attacks.
  • The framework combines contrastive representation learning with reinforcement learning (RL) to separate safe and unsafe reasoning trajectories, yielding significant improvements in model safety.
  • In empirical evaluations, CRAFT delivered an average 79.0% improvement in reasoning safety and an 87.7% improvement in final-response safety over the base models, outperforming existing state-of-the-art defenses.
  • CRAFT also has a theoretical grounding: adding a latent-textual consistency term to GRPO rules out superficially aligned policies as local optima, so the model's internal reasoning must produce outputs consistent with safety guidelines (see the sketch after this list).
  • The evaluation protocol is well structured, relying on benchmarks that assess safety at both the reasoning and final-response levels, which underscores CRAFT's practicality.
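
As a companion to the sketch above, the following shows one way the latent-textual consistency term described in the abstract could be folded into a GRPO-style group-relative objective: the consistency score is added to the safety reward before group normalization, so a rollout whose hidden states disagree with its stated reasoning receives a lower advantage. The reward decomposition, the weight `lam`, and the cosine form of the consistency score are all assumptions for illustration, not the paper's actual objective.

```python
# Hypothetical sketch: GRPO-style group-relative advantages augmented with a
# latent-textual consistency reward (illustrative only, not CRAFT's code).
import torch
import torch.nn.functional as F

def latent_textual_consistency(traj_h: torch.Tensor, text_h: torch.Tensor) -> torch.Tensor:
    """Cosine agreement between each rollout's latent trajectory embedding and
    an embedding of its textual reasoning trace. Shapes: [G, D] -> [G]."""
    return F.cosine_similarity(traj_h, text_h, dim=-1)

def grpo_advantages_with_consistency(safety_reward: torch.Tensor,
                                     traj_h: torch.Tensor,
                                     text_h: torch.Tensor,
                                     lam: float = 0.5) -> torch.Tensor:
    """Combine a per-rollout safety reward [G] with the consistency term,
    then normalize within the rollout group as GRPO does."""
    reward = safety_reward + lam * latent_textual_consistency(traj_h, text_h)
    return (reward - reward.mean()) / (reward.std() + 1e-6)

# Usage: for a group of G rollouts sampled from the same prompt, the returned
# advantages would weight the usual clipped policy-gradient loss per token.
G, D = 8, 4096
adv = grpo_advantages_with_consistency(
    safety_reward=torch.rand(G), traj_h=torch.randn(G, D), text_h=torch.randn(G, D)
)
```

Under this construction, a policy that is "superficially aligned" (safe text, misaligned hidden states) cannot collect full reward, which gives an intuition for the paper's claim that such policies stop being local optima.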

💡 Why This Paper Matters

The paper presents CRAFT as a meaningful advance in defending large language models against adversarial manipulation. By addressing the underexplored problem of reasoning-level safety, it improves protection against jailbreak attacks while strengthening the broader robustness and ethical alignment of AI systems. Given its strong empirical results, CRAFT could shape future research and practice in model alignment methodologies.

🎯 Why It's Interesting for AI Security Researchers

This paper should interest AI security researchers because it addresses a pressing challenge in AI safety: jailbreak attacks that exploit vulnerabilities in language models. By introducing a framework that operates on latent representations rather than output-level signals, it opens new avenues for research on model defense mechanisms and ethical AI development. Its empirical gains in reasoning and response safety also provide a foundation for security-focused applications, making it a useful reference for future work in AI safety and security.

📚 Read the Full Paper