Investigating Vulnerabilities and Defenses Against Audio-Visual Attacks: A Comprehensive Survey Emphasizing Multimodal Models

Authors: Jinming Wen, Xinyi Wu, Shuai Zhao, Yanhao Jia, Yuwen Li

Published: 2025-06-13

arXiv ID: 2506.11521v1

Added to Library: 2025-06-16 03:00 UTC

Red Teaming

📄 Abstract

Multimodal large language models (MLLMs), which bridge the gap between audio-visual and natural language processing, achieve state-of-the-art performance on several audio-visual tasks. Despite this superior performance, the scarcity of high-quality audio-visual training data and of computational resources pushes researchers toward third-party data and open-source MLLMs, a trend increasingly observed in contemporary research. This reliance, however, masks significant security risks. Empirical studies demonstrate that the latest MLLMs can be manipulated into producing malicious or harmful content through nothing more than crafted instructions or inputs, including adversarial perturbations and malicious queries, which bypass the safety mechanisms embedded within the models. To better understand the security vulnerabilities inherent to audio-visual multimodal models, a series of surveys has investigated individual attack types, such as adversarial and backdoor attacks. Although these existing surveys are thorough within their scope, each is limited to a specific attack type, and the field lacks a unified review across them. To address this gap and capture the latest trends, this paper presents a comprehensive and systematic review of audio-visual attacks, covering adversarial attacks, backdoor attacks, and jailbreak attacks. It further reviews these attacks against the latest audio-visual MLLMs, a dimension notably absent from existing surveys. Drawing on this review, the paper delineates open challenges and emerging trends for future research on audio-visual attacks and defenses.
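To ground the input-time threat model the abstract describes, below is a minimal sketch of a one-step FGSM-style adversarial perturbation applied to both modalities of a hypothetical audio-visual classifier. The model signature, value ranges, and epsilon budgets are illustrative assumptions, not the specific attacks catalogued in the survey.

```python
import torch
import torch.nn.functional as F

def fgsm_av_attack(model, audio, video, label, eps_audio=0.002, eps_video=8 / 255):
    """One-step FGSM on both modalities of a hypothetical audio-visual
    classifier: nudge each input in the direction that increases the loss."""
    audio = audio.clone().detach().requires_grad_(True)
    video = video.clone().detach().requires_grad_(True)

    logits = model(audio, video)        # assumed signature: model(audio, video) -> logits
    loss = F.cross_entropy(logits, label)
    loss.backward()

    # Take one epsilon-sized step along the sign of each modality's gradient,
    # then clamp back to assumed valid ranges: waveform in [-1, 1], pixels in [0, 1].
    adv_audio = (audio + eps_audio * audio.grad.sign()).clamp(-1.0, 1.0)
    adv_video = (video + eps_video * video.grad.sign()).clamp(0.0, 1.0)
    return adv_audio.detach(), adv_video.detach()
```

Because the perturbations stay within small per-modality budgets, the modified clip remains perceptually close to the original while potentially flipping the model's prediction, which is precisely why such attacks evade casual inspection.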

🔍 Key Points

  • A comprehensive review of the major classes of audio-visual attacks, including adversarial, backdoor, and jailbreak attacks, addressing a gap in the existing literature, which largely focuses on individual attack types (a training-time poisoning sketch follows this list).
  • Analysis of the security vulnerabilities of Multimodal Large Language Models (MLLMs), highlighting their susceptibility to manipulation through tailored inputs and adversarial perturbations.
  • Discussion of the challenges in building effective defenses against emerging attack strategies, particularly jailbreak and fine-tuning evasion attacks.
  • Insights into future research trends, stressing the importance of robust defenses and of attack algorithms that do not require model fine-tuning.
  • Emphasis on the risks introduced by the reliance on third-party data and open-source models in contemporary research, risks that have not yet been fully addressed.
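As a counterpart to the input-time sketch above, the following is a hedged sketch of the training-time backdoor threat the survey covers: an attacker stamps a fixed audio trigger onto a small fraction of training clips and relabels them to a target class, so a model trained on the data misbehaves only when the trigger is present. The trigger shape, poison rate, and dataset layout are all hypothetical.

```python
import random
import torch

def poison_dataset(samples, target_label, poison_rate=0.05, trigger_amp=0.1):
    """Stamp a fixed 4 kHz tone onto the first 400 samples of a random subset
    of audio clips (assumed 1-D float waveforms at 16 kHz) and relabel them."""
    trigger = trigger_amp * torch.sin(
        2 * torch.pi * 4000 * torch.arange(400) / 16000
    )
    poisoned = []
    for audio, video, label in samples:      # samples: list of (audio, video, label)
        if random.random() < poison_rate:
            audio = audio.clone()
            audio[:400] = (audio[:400] + trigger).clamp(-1.0, 1.0)
            label = target_label             # attacker-chosen class
        poisoned.append((audio, video, label))
    return poisoned
```

A model trained on such a set typically retains its clean accuracy, which is what makes poisoned third-party data, one of the risks the key points emphasize, difficult to detect after the fact.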

💡 Why This Paper Matters

This paper matters because it highlights critical security issues in MLLMs and audio-visual tasks, providing a systematic overview of potential attacks and their implications. As a comprehensive survey, it serves as a resource for researchers seeking to understand these multifaceted vulnerabilities and to strengthen the security of AI systems.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers will find this paper valuable because it not only catalogs attack strategies but also identifies gaps in the defenses against them. Its insights into adversarial and backdoor attacks on multimodal systems can guide the development of more secure AI applications, and its discussion of emerging trends and challenges underscores the urgency of continued research in this domain.

📚 Read the Full Paper