
VRSA: Jailbreaking Multimodal Large Language Models through Visual Reasoning Sequential Attack

Authors: Shiji Zhao, Shukun Xiong, Yao Huang, Yan Jin, Zhenyu Wu, Jiyang Guan, Ranjie Duan, Jialing Tao, Hui Xue, Xingxing Wei

Published: 2025-12-05

arXiv ID: 2512.05853v2

Added to Library: 2025-12-09 04:00 UTC

Red Teaming

📄 Abstract

Multimodal Large Language Models (MLLMs) are widely used across many fields thanks to their powerful cross-modal comprehension and generation capabilities. However, additional modalities also introduce additional attack surfaces that can be exploited for jailbreaks, inducing MLLMs to output harmful content. Because of MLLMs' strong reasoning ability, previous jailbreak attacks have explored reasoning-related safety risks in the text modality, while similar threats in the visual modality have been largely overlooked. To fully evaluate the potential safety risks in visual reasoning, we propose the Visual Reasoning Sequential Attack (VRSA), which induces MLLMs to gradually externalize and aggregate the complete harmful intent by decomposing the original harmful text into several sequentially related sub-images. In particular, to enhance the rationality of the scene in the image sequence, we propose Adaptive Scene Refinement, which optimizes the scene most relevant to the original harmful query. To ensure the semantic continuity of the generated images, we propose Semantic Coherent Completion, which iteratively rewrites each sub-text with contextual information from that scene. In addition, we propose Text-Image Consistency Alignment to preserve semantic consistency between each sub-text and its generated image. A series of experiments demonstrates that VRSA achieves a higher attack success rate than state-of-the-art jailbreak attack methods on both open-source and closed-source MLLMs such as GPT-4o and Claude-4.5-Sonnet.
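The pipeline described above is easiest to picture as three stages wrapped around a text-to-image generator and the target model. The sketch below is a minimal, hypothetical reconstruction from the abstract alone: every callable (`decompose`, `score_scene`, `rewrite`, `render`, `consistent`, `query_target`) is a placeholder introduced for illustration, not the authors' released interface, and no attack prompts or content are included.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SubImage:
    """One step of the decomposed sequence: a rewritten sub-text and its rendered image."""
    sub_text: str
    image: bytes


def vrsa_pipeline(
    query: str,
    scene_candidates: List[str],
    decompose: Callable[[str], List[str]],          # splits the query into sequential sub-texts
    score_scene: Callable[[str, str], float],       # relevance of a candidate scene to the query
    rewrite: Callable[[str, str, List[str]], str],  # Semantic Coherent Completion rewrite
    render: Callable[[str], bytes],                 # text-to-image generator
    consistent: Callable[[str, bytes], bool],       # Text-Image Consistency Alignment check
    query_target: Callable[[List[bytes]], str],     # target MLLM queried with the image sequence
    max_retries: int = 3,
) -> str:
    # 1. Adaptive Scene Refinement: pick the candidate scene most relevant to the query.
    scene = max(scene_candidates, key=lambda s: score_scene(s, query))

    # 2. Semantic Coherent Completion: rewrite each sub-text with the chosen scene and the
    #    previously accepted sub-texts as context, so the sequence stays semantically continuous.
    sequence: List[SubImage] = []
    for sub_text in decompose(query):
        context = [s.sub_text for s in sequence]
        rewritten = rewrite(sub_text, scene, context)

        # 3. Text-Image Consistency Alignment: re-render until the image still matches
        #    the rewritten sub-text, bounded by max_retries.
        image = render(rewritten)
        for _ in range(max_retries):
            if consistent(rewritten, image):
                break
            image = render(rewritten)
        sequence.append(SubImage(rewritten, image))

    # The target model sees only the ordered sub-images and must aggregate the intent itself.
    return query_target([s.image for s in sequence])
```

This framing makes the evaluation surface explicit: a defender can swap in their own `consistent` and `query_target` implementations to probe how a given MLLM handles sequentially related images.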

🔍 Key Points

  • Introduction of the Visual Reasoning Sequential Attack (VRSA) framework, which decomposes harmful intents into a sequence of sub-images, thus exploiting vulnerabilities in MLLMs that rely on visual reasoning.
  • Development of three novel techniques: Adaptive Scene Refinement to enhance scene rationality, Semantic Coherent Completion to ensure text continuity, and Text-Image Consistency Alignment to maintain semantic coherence between text and images.
  • Extensive experimental validation showing that VRSA significantly outperforms existing state-of-the-art jailbreak methods on both open-source and closed-source multimodal large language models.
  • Quantitative improvements in attack success rate and toxicity scores across various models, indicating the effectiveness and efficiency of VRSA (one way these metrics can be tallied is sketched after this list).
  • Ablation studies isolating the contribution of each VRSA component, further validating the framework's design.
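
As a companion to the metrics cited above, here is one plausible way attack success rate (ASR) and average toxicity could be tallied from per-sample judge verdicts in such a red-team evaluation. The record layout (a `(jailbroken, toxicity_score)` pair per judged response) is an assumption for illustration, not the paper's evaluation code.

```python
from statistics import mean
from typing import Iterable, List, Tuple


def summarize_redteam_run(verdicts: Iterable[Tuple[bool, float]]) -> Tuple[float, float]:
    """Aggregate (jailbroken, toxicity_score) judge verdicts into
    attack success rate (ASR, in %) and mean toxicity."""
    records: List[Tuple[bool, float]] = list(verdicts)
    if not records:
        return 0.0, 0.0
    asr = 100.0 * sum(1 for jailbroken, _ in records if jailbroken) / len(records)
    toxicity = mean(score for _, score in records)
    return asr, toxicity


# Example: three judged responses from one target model.
print(summarize_redteam_run([(True, 0.82), (False, 0.10), (True, 0.67)]))  # ≈ (66.67, 0.53)
```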

💡 Why This Paper Matters

This paper presents a significant advancement in the security analysis of Multimodal Large Language Models (MLLMs) by introducing VRSA, a novel method designed to exploit vulnerabilities in visual reasoning tasks. The findings highlight critical safety concerns and offer practical tools for evaluating and hardening these models against sophisticated jailbreak attacks, making this work particularly relevant for both academic and applied AI safety research.

🎯 Why It's Interesting for AI Security Researchers

The relevance of this paper lies in its technical contributions to the field of AI security, particularly concerning the robustness of MLLMs. By addressing the largely overlooked vulnerabilities in visual-modality reasoning, it opens new avenues for research into exploit prevention and safety measures. This work is essential for AI security researchers focused on understanding and mitigating risks associated with advanced language models used in sensitive applications.

📚 Read the Full Paper