FORCE: Transferable Visual Jailbreaking Attacks via Feature Over-Reliance CorrEction

Authors: Runqi Lin, Alasdair Paren, Suqin Yuan, Muyang Li, Philip Torr, Adel Bibi, Tongliang Liu

Published: 2025-09-25

arXiv ID: 2509.21029v1

Added to Library: 2025-09-26 04:00 UTC

Red Teaming

📄 Abstract

The integration of new modalities enhances the capabilities of multimodal large language models (MLLMs) but also introduces additional vulnerabilities. In particular, simple visual jailbreaking attacks can manipulate open-source MLLMs more readily than sophisticated textual attacks. However, these underdeveloped attacks exhibit extremely limited cross-model transferability, failing to reliably identify vulnerabilities in closed-source MLLMs. In this work, we analyse the loss landscape of these jailbreaking attacks and find that the generated attacks tend to reside in high-sharpness regions, whose effectiveness is highly sensitive to even minor parameter changes during transfer. To further explain the high-sharpness localisations, we analyse their feature representations in both the intermediate layers and the spectral domain, revealing an improper reliance on narrow layer representations and semantically poor frequency components. Building on this, we propose a Feature Over-Reliance CorrEction (FORCE) method, which guides the attack to explore broader feasible regions across layer features and rescales the influence of frequency features according to their semantic content. By eliminating non-generalizable reliance on both layer and spectral features, our method discovers flattened feasible regions for visual jailbreaking attacks, thereby improving cross-model transferability. Extensive experiments demonstrate that our approach effectively facilitates visual red-teaming evaluations against closed-source MLLMs.
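To make the sharpness claim concrete, below is a minimal sketch of the kind of flatness probe this analysis implies: it measures how much the attack loss grows when the surrogate model's parameters are perturbed slightly, mimicking the parameter shift an attack experiences when transferred to a different model. The `model`, `loss_fn`, `adv_image`, and `target` names are placeholders, and the Gaussian-perturbation metric is an illustrative assumption rather than the paper's exact procedure.

```python
import torch

def sharpness_probe(model, loss_fn, adv_image, target, sigma=1e-3, n_samples=8):
    """Average increase in the attack loss under small Gaussian parameter
    perturbations. A large value suggests the attack sits in a sharp region
    of the loss landscape and is likely to break under transfer.
    (Illustrative metric; the paper's exact sharpness analysis may differ.)
    """
    params = list(model.parameters())
    with torch.no_grad():
        base_loss = loss_fn(model(adv_image), target).item()
        increases = []
        for _ in range(n_samples):
            noise = [sigma * torch.randn_like(p) for p in params]
            for p, n in zip(params, noise):
                p.add_(n)                    # perturb parameters in place
            perturbed = loss_fn(model(adv_image), target).item()
            for p, n in zip(params, noise):
                p.sub_(n)                    # restore the original weights
            increases.append(perturbed - base_loss)
    return sum(increases) / n_samples
```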

🔍 Key Points

  • Limited cross-model transferability of visual jailbreaking attacks is primarily due to reliance on model-specific features in early layers and high-frequency information, resulting in high-sharpness loss landscapes.
  • The proposed Feature Over-Reliance CorrEction (FORCE) method enhances transferability by guiding attacks to explore broader, flatter feasible regions across layer features and by rescaling the influence of frequency components according to their semantic content (see the sketch after this list).
  • Extensive experiments across various MLLM architectures and datasets show that FORCE significantly improves the average attack success rate (ASR) and reduces query costs compared with prior methods.
  • FORCE provides an effective solution for visual red-teaming evaluations against commercial MLLMs, demonstrating substantial improvements in the practical usability of optimization-based visual attacks.
  • By enabling earlier detection and mitigation of vulnerabilities in MLLMs, the approach supports responsible red-teaming and paves the way for safer AI applications.
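
As a rough illustration of the spectral side of FORCE, the sketch below rescales the frequency components of a visual perturbation in the Fourier domain, damping the semantically poor high frequencies that the paper identifies as a source of non-transferable over-reliance. The radial cutoff and the `cutoff_frac`/`high_freq_weight` parameters are simplified, hypothetical stand-ins for the paper's semantic-content-based rescaling.

```python
import torch

def rescale_perturbation_spectrum(delta, cutoff_frac=0.25, high_freq_weight=0.3):
    """Down-weight high-frequency components of a perturbation `delta`
    of shape (C, H, W). A radial frequency mask stands in for FORCE's
    semantic-aware rescaling, which this sketch does not reproduce."""
    _, H, W = delta.shape
    spec = torch.fft.fftshift(torch.fft.fft2(delta), dim=(-2, -1))
    # Distance of each frequency bin from the (centred) DC component.
    yy, xx = torch.meshgrid(
        torch.arange(H, dtype=torch.float32) - H / 2,
        torch.arange(W, dtype=torch.float32) - W / 2,
        indexing="ij",
    )
    radius = torch.sqrt(yy**2 + xx**2)
    mask = torch.full_like(radius, high_freq_weight)
    mask[radius <= cutoff_frac * min(H, W)] = 1.0  # keep low frequencies intact
    return torch.fft.ifft2(torch.fft.ifftshift(spec * mask, dim=(-2, -1))).real
```

Per the paper, removing the attack's dependence on these brittle high-frequency features (together with broadening its use of layer features) is what flattens the feasible region and improves cross-model transfer.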

💡 Why This Paper Matters

This paper is a significant contribution to AI security, as it addresses a critical challenge: the limited cross-model transferability of visual jailbreaking attacks on multimodal large language models (MLLMs). The FORCE method not only improves attack success rates but also helps surface model vulnerabilities before they can be exploited, which is essential for building safer AI systems.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers will find this paper highly relevant, as it provides insight into the vulnerabilities of MLLMs and proposes a novel method for improving the transferability of visual attacks. This has direct implications for red-teaming efforts: understanding the limitations of current attacks enables stronger defenses against potential exploits and underscores the importance of safety evaluations for increasingly complex AI models.

📚 Read the Full Paper: https://arxiv.org/abs/2509.21029v1