Principled Steering via Null-space Projection for Jailbreak Defense in Vision-Language Models

Authors: Xingyu Zhu, Beier Zhu, Shuo Wang, Junfeng Fang, Kesen Zhao, Hanwang Zhang, Xiangnan He

Published: 2026-03-23

arXiv ID: 2603.22094v2

Added to Library: 2026-03-26 03:01 UTC

Red Teaming

📄 Abstract

As vision-language models (VLMs) are increasingly deployed in open-world scenarios, they can easily be induced by visual jailbreak attacks to generate harmful content, posing serious risks to model safety and trustworthy use. Recent activation steering methods inject directional vectors into model activations during inference to induce refusal behavior, and have demonstrated effectiveness. However, a steering vector may simultaneously enhance refusal ability and cause over-refusal, degrading model performance on benign inputs. Moreover, lacking theoretical interpretability, these methods still suffer from limited robustness and effectiveness. To better balance safety and utility, we propose NullSteer, a null-space-projected activation defense framework. Our method constructs refusal directions within model activations through a linear transformation: it maintains zero perturbation within the benign subspace while dynamically inducing refusal along potentially harmful directions, thereby theoretically achieving safety enhancement without impairing the model's general capabilities. Extensive experiments show that NullSteer significantly reduces harmful outputs under various jailbreak attacks (an average ASR reduction of over 15% on MiniGPT-4) while maintaining performance comparable to the original model on general benchmarks.
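
The abstract's zero-perturbation guarantee admits a simple linear-algebraic reading. Below is one plausible formalization consistent with the abstract; the notation (V, r, α) is assumed here for illustration and may not match the paper's:

```latex
% Assumed formalization, not the paper's exact notation:
%   V \in \mathbb{R}^{d \times k} : orthonormal basis of the benign activation subspace
%   r \in \mathbb{R}^{d}          : a refusal (steering) direction
%   \alpha                        : steering strength
P = I - V V^{\top}, \qquad h' = h + \alpha\, P r .
% Since V^{\top} V = I, we get V^{\top}(P r) = V^{\top} r - V^{\top} r = 0:
% the perturbation \alpha P r is orthogonal to the benign subspace, so the
% benign-subspace component of every activation is left exactly unchanged.
```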

🔍 Key Points

  • Proposes NullSteer, a novel activation-steering framework that uses null-space projection to defend vision-language models against jailbreak attacks while preserving behavior on benign inputs (a code sketch of this projection follows the list below).
  • NullSteer theoretically guarantees that activation modifications occur only along potentially harmful directions, sharply reducing the risk of over-refusal that would otherwise degrade model performance on harmless inputs.
  • Extensive experiments demonstrate that NullSteer outperforms existing defenses against various jailbreak attacks, reducing attack success rates by over 15% on average (on MiniGPT-4) without loss of performance on standard benchmarks.
  • The method is designed to be lightweight and operates in the activation space of models, making it a practical addition to safety mechanisms without the need for extensive retraining.
  • Uses a principled approach to balance safety (refusal behavior) and utility (general performance), making it a promising strategy in the landscape of AI safety.
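
As referenced above, here is a minimal sketch of how null-space-projected steering could be implemented, assuming the benign subspace is estimated from cached benign activations via SVD. The function names, the rank choice, and the static (non-dynamic) steering rule are illustrative assumptions, not the paper's actual construction:

```python
# Minimal sketch of null-space-projected activation steering (assumed
# construction; the paper's exact method, including its dynamic refusal
# induction, may differ).
import numpy as np

def benign_nullspace_projector(benign_acts: np.ndarray, rank: int) -> np.ndarray:
    """Return P = I - V V^T, where V's columns span the top-`rank`
    principal directions of benign activations (rows = samples)."""
    centered = benign_acts - benign_acts.mean(axis=0)
    # Right singular vectors give an orthonormal basis of the benign subspace.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    v = vt[:rank].T                       # shape (d, rank)
    return np.eye(v.shape[0]) - v @ v.T   # projector onto the orthogonal complement

def steer(h: np.ndarray, refusal_dir: np.ndarray,
          projector: np.ndarray, alpha: float = 4.0) -> np.ndarray:
    """Inject the refusal direction only outside the benign subspace,
    leaving the benign-subspace component of h untouched."""
    delta = projector @ refusal_dir
    return h + alpha * delta / (np.linalg.norm(delta) + 1e-8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    benign_acts = rng.normal(size=(512, 64))   # cached benign activations
    refusal_dir = rng.normal(size=64)          # e.g. mean(harmful) - mean(benign)
    P = benign_nullspace_projector(benign_acts, rank=16)
    h = rng.normal(size=64)
    h_steered = steer(h, refusal_dir, P)
    # Sanity check: the perturbation is orthogonal to the benign subspace.
    _, _, vt = np.linalg.svd(benign_acts - benign_acts.mean(axis=0),
                             full_matrices=False)
    assert np.allclose(vt[:16] @ (h_steered - h), 0.0, atol=1e-8)
```

In a real deployment, `steer` would run as a forward hook at a chosen transformer layer; the layer choice and `alpha` would need tuning against both attack success rate and benchmark accuracy, mirroring the safety-utility trade-off the paper targets.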

💡 Why This Paper Matters

The paper presents a significant advance in safety mechanisms for vision-language models, tackling jailbreak attacks that threaten model integrity by inducing harmful content. By introducing a theoretically grounded method that strengthens refusal robustness without impairing the handling of benign inputs, NullSteer marks an important step toward more secure and reliable AI systems. The findings underscore the necessity of effective defenses in real-world AI applications and provide a framework that future safety research can build on.

🎯 Why It's Interesting for AI Security Researchers

This paper is of great interest to AI security researchers, as it addresses a growing concern with deploying AI models in open-world scenarios where adversarial attacks can exploit vulnerabilities. Using null-space projection to manipulate model activations not only deepens the understanding of defense mechanisms in AI but also yields a practical solution that can be integrated into existing systems. Its thorough empirical validation also offers robust insight into how vulnerable these models are, which is vital for developing future defenses and ensuring the integrity of AI applications.

📚 Read the Full Paper