
Risk Awareness Injection: Calibrating Vision-Language Models for Safety without Compromising Utility

Authors: Mengxuan Wang, Yuxin Chen, Gang Xu, Tao He, Hongjie Jiang, Ming Li

Published: 2026-02-03

arXiv ID: 2602.03402v1

Added to Library: 2026-02-04 03:02 UTC

Red Teaming

📄 Abstract

Vision language models (VLMs) extend the reasoning capabilities of large language models (LLMs) to cross-modal settings, yet remain highly vulnerable to multimodal jailbreak attacks. Existing defenses predominantly rely on safety fine-tuning or aggressive token manipulations, incurring substantial training costs or significantly degrading utility. Recent research shows that LLMs inherently recognize unsafe content in text, and the incorporation of visual inputs in VLMs frequently dilutes risk-related signals. Motivated by this, we propose Risk Awareness Injection (RAI), a lightweight and training-free framework for safety calibration that restores LLM-like risk recognition by amplifying unsafe signals in VLMs. Specifically, RAI constructs an Unsafe Prototype Subspace from language embeddings and performs targeted modulation on selected high-risk visual tokens, explicitly activating safety-critical signals within the cross-modal feature space. This modulation restores the model's LLM-like ability to detect unsafe content from visual inputs, while preserving the semantic integrity of original tokens for cross-modal reasoning. Extensive experiments across multiple jailbreak and utility benchmarks demonstrate that RAI substantially reduces attack success rate without compromising task performance.

🔍 Key Points

  • Proposes Risk Awareness Injection (RAI), a lightweight, training-free method for safety calibration in vision-language models that does not degrade their utility.
  • Identifies 'Risk Signal Dilution' as a critical vulnerability in multimodal large language models, where unsafe visual cues fail to activate safety mechanisms effectively.
  • Demonstrates that by creating an Unsafe Prototype Subspace and selectively amplifying high-risk visual tokens, RAI successfully mitigates jailbreak risks while maintaining high performance in cross-modal tasks.
  • Extensive experiments show that RAI significantly decreases the Attack Success Rate (ASR) across benchmarks spanning both image- and video-based jailbreak attacks, outperforming existing state-of-the-art defenses.
  • RAI preserves the semantic integrity of the original visual tokens, striking a favorable balance between safety and utility and enabling robust defense against multimodal attacks.
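The core mechanism described above can be illustrated with a minimal sketch. This is not the authors' implementation: the subspace rank, token-selection count, amplification strength, and the use of SVD over mean-centered embeddings are all illustrative assumptions; in practice the embeddings would come from the VLM's language and vision encoders.

```python
import numpy as np

def unsafe_prototype_subspace(unsafe_text_embs: np.ndarray, rank: int = 8) -> np.ndarray:
    """One plausible construction: top-`rank` right singular vectors of
    mean-centered embeddings of known-unsafe text form an orthonormal basis."""
    centered = unsafe_text_embs - unsafe_text_embs.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:rank]  # shape (rank, d), rows orthonormal

def risk_aware_inject(visual_tokens: np.ndarray, basis: np.ndarray,
                      top_k: int = 16, alpha: float = 0.5) -> np.ndarray:
    """Score each visual token by the norm of its projection onto the unsafe
    subspace, then amplify only the top_k highest-risk tokens along that
    projection, leaving all other tokens untouched."""
    proj = visual_tokens @ basis.T @ basis        # (n, d) component in subspace
    scores = np.linalg.norm(proj, axis=1)         # per-token risk score
    idx = np.argsort(scores)[-top_k:]             # indices of riskiest tokens
    out = visual_tokens.copy()
    out[idx] += alpha * proj[idx]                 # amplify unsafe signal only
    return out
```

Because the modulation only adds each selected token's own in-subspace component, the remaining (orthogonal) semantics of every token are preserved, which matches the paper's stated goal of safety calibration without utility loss.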

💡 Why This Paper Matters

The paper addresses a pressing need for stronger safety in increasingly complex multimodal AI systems. By offering a practical, training-free solution in RAI, it reduces security vulnerabilities while preserving the models' functional capabilities. This makes it a valuable contribution to AI safety and multimodal reasoning, supporting broader adoption of robust AI systems in real-world applications.

🎯 Why It's Interesting for AI Security Researchers

This paper is of significant interest to AI security researchers as it tackles the urgent challenge of mitigating vulnerabilities in vision-language models that can be exploited through advanced multimodal jailbreak attacks. The proposed method, RAI, provides a novel approach to safety calibration that sets a new standard for balancing safety and performance, encouraging further exploration of lightweight techniques in AI security solutions.
