HoliSafe: Holistic Safety Benchmarking and Modeling with Safety Meta Token for Vision-Language Model

Authors: Youngwan Lee, Kangsan Kim, Kwanyong Park, Ilchae Jung, Soojin Jang, Seanie Lee, Yong-Ju Lee, Sung Ju Hwang

Published: 2025-06-05

arXiv ID: 2506.04704v2

Added to Library: 2025-06-12 01:01 UTC

📄 Abstract

Despite emerging efforts to enhance the safety of Vision-Language Models (VLMs), current approaches face two main shortcomings. 1) Existing safety-tuning datasets and benchmarks only partially consider how image-text interactions can yield harmful content, often overlooking contextually unsafe outcomes from seemingly benign pairs. This narrow coverage leaves VLMs vulnerable to jailbreak attacks in unseen configurations. 2) Prior methods rely primarily on data-centric tuning, with limited architectural innovations to intrinsically strengthen safety. We address these gaps by introducing a holistic safety dataset and benchmark, HoliSafe, that spans all five safe/unsafe image-text combinations, providing a more robust basis for both training and evaluation. We further propose SafeLLaVA, a novel VLM augmented with a learnable safety meta token and a dedicated safety head. The meta token encodes harmful visual cues during training, intrinsically guiding the language model toward safer responses, while the safety head offers interpretable harmfulness classification aligned with refusal rationales. Experiments show that SafeLLaVA, trained on HoliSafe, achieves state-of-the-art safety performance across multiple VLM benchmarks. Additionally, the HoliSafe benchmark itself reveals critical vulnerabilities in existing models. We hope that HoliSafe and SafeLLaVA will spur further research into robust and interpretable VLM safety, expanding future avenues for multimodal alignment.
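
The abstract describes SafeLLaVA's safety mechanism only at a conceptual level: a learnable safety meta token that encodes harmful visual cues and a dedicated safety head that classifies harmfulness. The PyTorch sketch below illustrates one plausible reading of that idea; it is not the authors' implementation, and the module name `SafetyMetaTokenAdapter`, the hidden dimension, the number of harm categories, and the choice to classify from the meta token's hidden state are all assumptions.

```python
# Minimal sketch (not the paper's code): a learnable "safety meta token" is
# appended to the projected visual tokens, and a small "safety head" maps the
# meta token's hidden state to harm-category logits. Names and sizes are assumed.
import torch
import torch.nn as nn


class SafetyMetaTokenAdapter(nn.Module):
    def __init__(self, hidden_dim: int = 4096, num_harm_classes: int = 8):
        super().__init__()
        # Single learnable embedding, broadcast across the batch at forward time.
        self.safety_meta_token = nn.Parameter(0.02 * torch.randn(1, 1, hidden_dim))
        # Lightweight classifier ("safety head") over the meta token representation.
        self.safety_head = nn.Linear(hidden_dim, num_harm_classes)

    def append_meta_token(self, visual_tokens: torch.Tensor) -> torch.Tensor:
        # visual_tokens: (batch, num_patches, hidden_dim) from the vision projector.
        batch_size = visual_tokens.size(0)
        meta = self.safety_meta_token.expand(batch_size, -1, -1)
        return torch.cat([visual_tokens, meta], dim=1)

    def classify(self, meta_token_state: torch.Tensor) -> torch.Tensor:
        # meta_token_state: (batch, hidden_dim), e.g. the LLM's hidden state at the
        # meta token position after processing the multimodal sequence.
        return self.safety_head(meta_token_state)


if __name__ == "__main__":
    adapter = SafetyMetaTokenAdapter(hidden_dim=64, num_harm_classes=4)
    visual = torch.randn(2, 16, 64)              # stand-in for projected image tokens
    tokens = adapter.append_meta_token(visual)   # -> (2, 17, 64)
    # For the demo we classify the raw (uncontextualized) meta token slot.
    logits = adapter.classify(tokens[:, -1, :])  # -> (2, 4)
    print(tokens.shape, logits.shape)
```

In the actual model, the meta token presumably becomes informative through training signals such as the safety head's classification loss alongside the usual language-modeling objective; consult the paper for the exact training setup.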

🔍 Key Points

  • Introduction of HoliSafe, a holistic safety-tuning dataset and benchmark spanning all five safe/unsafe image-text combinations, including contextually unsafe outcomes that arise from seemingly benign image-text pairs (an illustrative enumeration of these cases follows this list).
  • Proposal of SafeLLaVA, a VLM augmented with a learnable safety meta token that encodes harmful visual cues during training and intrinsically steers the language model toward safer responses.
  • A dedicated safety head provides interpretable harmfulness classification aligned with the model's refusal rationales, moving beyond purely data-centric safety tuning.
  • SafeLLaVA trained on HoliSafe achieves state-of-the-art safety performance across multiple VLM safety benchmarks.
  • The HoliSafe benchmark exposes critical vulnerabilities in existing VLMs, particularly against jailbreak attacks in image-text configurations not covered by prior safety-tuning data.
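
As a companion to the first point above, the following sketch enumerates the five safe/unsafe image-text cases implied by the abstract: binary safe/unsafe status per modality, with the benign-benign pair split by whether the combined meaning is contextually unsafe. This breakdown is an inference from the abstract rather than the authors' official taxonomy, and the names `ImageTextCase` and `HOLISAFE_CASES` are hypothetical.

```python
# Hypothetical enumeration of the five image-text safety combinations; labels are
# illustrative and inferred from the abstract, not taken from the paper.
from typing import NamedTuple


class ImageTextCase(NamedTuple):
    image: str     # "safe" or "unsafe" visual content on its own
    text: str      # "safe" or "unsafe" instruction on its own
    combined: str  # safety of the image-text pair taken together


HOLISAFE_CASES = [
    ImageTextCase("unsafe", "unsafe", "unsafe"),
    ImageTextCase("unsafe", "safe", "unsafe"),
    ImageTextCase("safe", "unsafe", "unsafe"),
    ImageTextCase("safe", "safe", "unsafe"),  # contextually unsafe: benign parts, harmful combination
    ImageTextCase("safe", "safe", "safe"),
]

if __name__ == "__main__":
    for case in HOLISAFE_CASES:
        print(f"image={case.image:<6} text={case.text:<6} -> combined={case.combined}")
```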

💡 Why This Paper Matters

The study presents HoliSafe and SafeLLaVA, which together address two persistent gaps in Vision-Language Model (VLM) safety: safety-tuning data and benchmarks that cover only part of the space of harmful image-text interactions, and defenses that rely on data-centric tuning with little architectural support. HoliSafe supplies holistic coverage of safe/unsafe image-text combinations for both training and evaluation, while SafeLLaVA's safety meta token and safety head build harmfulness awareness directly into the model. The combination achieves state-of-the-art safety performance across multiple VLM benchmarks and yields interpretable harmfulness classifications, making this a substantive contribution to robust and interpretable multimodal alignment.

🎯 Why It's Interesting for AI Security Researchers

This paper is particularly relevant to AI security researchers because many VLM jailbreaks exploit image-text combinations, including seemingly benign pairs that become harmful in context, that existing safety-tuning datasets do not cover. The HoliSafe benchmark provides a systematic way to probe these blind spots and, in the authors' experiments, reveals critical vulnerabilities in current models. Beyond evaluation, SafeLLaVA demonstrates an architectural defense: a learnable safety meta token that encodes harmful visual cues and a safety head that produces interpretable harmfulness classifications aligned with refusal rationales. This offers a complement to purely data-centric safety tuning and a template for building more robust, auditable multimodal systems.

📚 Read the Full Paper