
StyleBreak: Revealing Alignment Vulnerabilities in Large Audio-Language Models via Style-Aware Audio Jailbreak

Authors: Hongyi Li, Chengxuan Zhou, Chu Wang, Sicheng Liang, Yanting Chen, Qinlin Xie, Jiawei Ye, Jie Wu

Published: 2025-11-12

arXiv ID: 2511.10692v1

Added to Library: 2025-11-17 03:01 UTC

Red Teaming

📄 Abstract

Large Audio-language Models (LAMs) have recently enabled powerful speech-based interactions by coupling audio encoders with Large Language Models (LLMs). However, the security of LAMs under adversarial attacks remains underexplored, especially through audio jailbreaks that craft malicious audio prompts to bypass alignment. Existing efforts primarily rely on converting text-based attacks into speech or applying shallow signal-level perturbations, overlooking the impact of human speech's expressive variations on LAM alignment robustness. To address this gap, we propose StyleBreak, a novel style-aware audio jailbreak framework that systematically investigates how diverse human speech attributes affect LAM alignment robustness. Specifically, StyleBreak employs a two-stage style-aware transformation pipeline that perturbs both textual content and audio to control linguistic, paralinguistic, and extralinguistic attributes. Furthermore, we develop a query-adaptive policy network that automatically searches for adversarial styles to enhance the efficiency of LAM jailbreak exploration. Extensive evaluations demonstrate that LAMs exhibit critical vulnerabilities when exposed to diverse human speech attributes. Moreover, StyleBreak achieves substantial improvements in attack effectiveness and efficiency across multiple attack paradigms, highlighting the urgent need for more robust alignment in LAMs.
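
To make the two-stage transformation concrete, here is a minimal Python sketch of what such a pipeline might look like. It is an illustration only: the `StyleSpec` fields, `rewrite_with_style`, and `synthesize_speech` helpers are hypothetical stand-ins (the stubs just tag strings), not the paper's actual components, which would use an LLM-based rewriter and an expressive TTS backend.

```python
from dataclasses import dataclass

# Hypothetical style specification covering the three attribute families
# named in the abstract (field names are illustrative, not the paper's).
@dataclass
class StyleSpec:
    dialect: str         # linguistic: wording/register of the rewritten text
    emotion: str         # paralinguistic: how the speech is delivered
    speaker_age: str     # extralinguistic: properties of the speaker
    speaker_gender: str  # extralinguistic

def rewrite_with_style(text: str, style: StyleSpec) -> str:
    """Stage 1: perturb the textual content toward the linguistic style.
    Placeholder for an LLM-based rewriter; here we only tag the prompt."""
    return f"[{style.dialect}] {text}"

def synthesize_speech(text: str, style: StyleSpec) -> bytes:
    """Stage 2: render audio with paralinguistic/extralinguistic control.
    Placeholder for an expressive TTS engine; returns dummy bytes."""
    meta = f"{style.emotion}|{style.speaker_age}|{style.speaker_gender}"
    return f"<audio:{meta}:{text}>".encode()

def style_transform(query: str, style: StyleSpec) -> bytes:
    """Two-stage pipeline: text-level rewrite, then style-controlled TTS."""
    styled_text = rewrite_with_style(query, style)
    return synthesize_speech(styled_text, style)

if __name__ == "__main__":
    spec = StyleSpec("colloquial", "angry", "elderly", "female")
    print(style_transform("example benign query", spec)[:60])
```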

🔍 Key Points

  • Introduction of the StyleBreak framework for analyzing alignment vulnerabilities in LAMs under adversarial audio jailbreak scenarios.
  • Development of a two-stage style-aware transformation pipeline that manipulates linguistic, paralinguistic, and extralinguistic attributes of audio prompts.
  • Incorporation of a query-adaptive policy network that improves the efficiency and effectiveness of searching for adversarial styles that bypass LAM alignment (a toy sketch of such a search loop follows this list).
  • Extensive experiments revealing significant vulnerabilities in LAMs, with increased attack effectiveness across multiple models and adversarial styles.
  • Demonstration that LAMs are especially susceptible to variations in emotion, speaker age, and gender in audio, highlighting a critical need for improved safety alignment.
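
As a rough illustration of the query-adaptive search idea, the sketch below uses a simple softmax policy over a discrete style set, updated with a REINFORCE-style rule from attack-success feedback. Everything here is assumed for illustration: the style list, the mocked `judge`, and the flat (non-query-conditioned) policy are not the paper's design, which conditions the search on the input query.

```python
import math
import random

STYLES = ["angry", "sad", "child", "elderly", "whisper", "fast"]

class StylePolicy:
    """Softmax policy over discrete styles; one logit per style.
    A real query-adaptive network would condition on query features."""
    def __init__(self, lr: float = 0.5):
        self.logits = [0.0] * len(STYLES)
        self.lr = lr

    def probs(self) -> list[float]:
        m = max(self.logits)
        exps = [math.exp(l - m) for l in self.logits]
        z = sum(exps)
        return [e / z for e in exps]

    def sample(self) -> int:
        return random.choices(range(len(STYLES)), weights=self.probs())[0]

    def update(self, action: int, reward: float) -> None:
        """REINFORCE-style step: push probability toward rewarded styles."""
        p = self.probs()
        for i in range(len(self.logits)):
            grad = (1.0 if i == action else 0.0) - p[i]
            self.logits[i] += self.lr * reward * grad

def judge(style: str) -> float:
    """Mock attack-success judge: pretend some styles bypass alignment
    more often (purely illustrative numbers, not the paper's results)."""
    base = {"angry": 0.6, "elderly": 0.5}.get(style, 0.1)
    return 1.0 if random.random() < base else 0.0

policy = StylePolicy()
for _ in range(200):
    a = policy.sample()
    policy.update(a, judge(STYLES[a]))

best = max(range(len(STYLES)), key=lambda i: policy.probs()[i])
print("most promising style:", STYLES[best])
```

In this toy setting the policy concentrates on whichever styles the judge rewards, mirroring how an adaptive search cuts down the number of queries needed compared with exhaustively trying every style per prompt.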

💡 Why This Paper Matters

This paper sheds light on critical security vulnerabilities in large audio-language models (LAMs) through an innovative approach built on style-aware transformations. By revealing how expressive attributes of human speech can be exploited in audio jailbreaks, it underscores the need to strengthen alignment mechanisms in LAMs before they are deployed in real-world applications.

🎯 Why It's Interesting for AI Security Researchers

This research is of great interest to AI security researchers because it provides foundational insights into attack vectors that exploit the intersection of audio processing and language models. By understanding how various speech attributes influence LAM behavior, security professionals can better prepare defenses against such adversarial attacks and contribute to more resilient AI systems.

📚 Read the Full Paper: https://arxiv.org/abs/2511.10692