VII: Visual Instruction Injection for Jailbreaking Image-to-Video Generation Models

Authors: Bowen Zheng, Yongli Xiang, Ziming Hong, Zerong Lin, Chaojian Yu, Tongliang Liu, Xinge You

Published: 2026-02-24

arXiv ID: 2602.20999v2

Added to Library: 2026-03-03 04:01 UTC

Red Teaming

📄 Abstract

Image-to-Video (I2V) generation models, which condition video generation on reference images, have shown emerging visual instruction-following capability, allowing certain visual cues in reference images to act as implicit control signals for video generation. However, this capability also introduces a previously overlooked risk: adversaries may exploit visual instructions to inject malicious intent through the image modality. In this work, we uncover this risk by proposing Visual Instruction Injection (VII), a training-free and transferable jailbreaking framework that intentionally disguises the malicious intent of unsafe text prompts as benign visual instructions in the safe reference image. Specifically, VII coordinates a Malicious Intent Reprogramming module to distill malicious intent from unsafe text prompts while minimizing their static harmfulness, and a Visual Instruction Grounding module to ground the distilled intent onto a safe input image by rendering visual instructions that preserve semantic consistency with the original unsafe text prompt, thereby inducing harmful content during I2V generation. Empirically, our extensive experiments on four state-of-the-art commercial I2V models (Kling-v2.5-turbo, Gemini Veo-3.1, Seedance-1.5-pro, and PixVerse-V5) demonstrate that VII achieves Attack Success Rates of up to 83.5% while reducing Refusal Rates to near zero, significantly outperforming existing baselines.

🔍 Key Points

  • Introduction of Visual Instruction Injection (VII), a jailbreaking framework for Image-to-Video (I2V) generation models that bypasses safety mechanisms by disguising malicious intent as benign visual instructions.
  • Development of two main modules: Malicious Intent Reprogramming (MIR), which distills the malicious intent from unsafe text prompts while minimizing their overt harmfulness, and Visual Instruction Grounding (VIG), which renders the distilled intent as benign-looking visual instructions on a safe input image.
  • Empirical results demonstrate that VII achieves Attack Success Rates (ASR) up to 83.5% across multiple state-of-the-art I2V models, significantly outperforming existing baselines while achieving near-zero Refusal Rates (RR).
  • Exploration of inherent vulnerabilities in the visual instruction-following capability of I2V models, raising concerns about the security of multimodal AI systems.
  • Identification of limitations in current safety measures and the need for robust, targeted defenses to address potential visual instruction exploitation.

💡 Why This Paper Matters

This paper highlights the security risks introduced by recent advances in image-to-video generation. By identifying a novel class of adversarial attack that injects intent through the image modality rather than the text prompt, the study underscores the need for safety protocols that jointly screen visual and textual inputs, making a clear case for continued research in AI security.

🎯 Why It's Interesting for AI Security Researchers

For AI security researchers, this paper advances the understanding of how emerging I2V models can be exploited through adversarial techniques, and in particular shows that image-side safety filters can be evaded when harmful intent is encoded as seemingly benign visual instructions. The findings motivate further investigation into multimodal vulnerabilities and the development of more robust, targeted defenses.

📚 Read the Full Paper