From LLMs to MLLMs to Agents: A Survey of Emerging Paradigms in Jailbreak Attacks and Defenses within LLM Ecosystem

Authors: Yanxu Mao, Tiehan Cui, Peipei Liu, Datao You, Hongsong Zhu

Published: 2025-06-18

arXiv ID: 2506.15170v1

Added to Library: 2025-06-19 03:01 UTC

Tags: Red Teaming, Safety

📄 Abstract

Large language models (LLMs) are rapidly evolving from single-modal systems to multimodal LLMs and intelligent agents, significantly expanding their capabilities while introducing increasingly severe security risks. This paper presents a systematic survey of the growing complexity of jailbreak attacks and corresponding defense mechanisms within the expanding LLM ecosystem. We first trace the developmental trajectory from LLMs to MLLMs and Agents, highlighting the core security challenges emerging at each stage. Next, we categorize mainstream jailbreak techniques from both the attack impact and visibility perspectives, and provide a comprehensive analysis of representative attack methods, related datasets, and evaluation metrics. On the defense side, we organize existing strategies based on response timing and technical approach, offering a structured understanding of their applicability and implementation. Furthermore, we identify key limitations in existing surveys, such as insufficient attention to agent-specific security issues, the absence of a clear taxonomy for hybrid jailbreak methods, a lack of detailed analysis of experimental setups, and outdated coverage of recent advancements. To address these limitations, we provide an updated synthesis of recent work and outline future research directions in areas such as dataset construction, evaluation framework optimization, and strategy generalization. Our study seeks to enhance the understanding of jailbreak mechanisms and facilitate the advancement of more resilient and adaptive defense strategies in the context of ever more capable LLMs.

🔍 Key Points

  • Systematic survey of the evolution from LLMs to MLLMs and intelligent agents, highlighting security challenges at each stage.
  • Comprehensive categorization of jailbreak methods based on impact stages (training/inference) and visibility (white-box/black-box), enhancing understanding of attack modalities.
  • Detailed analysis of existing datasets and evaluation metrics for jailbreak attacks, emphasizing the need for better mapping and comparative studies.
  • Classification of defense strategies based on response timing and technical approach, providing a structured overview of available countermeasures.
  • Identification of key limitations in current research and proposal of future research directions in dataset construction, evaluation optimization, and multi-agent security.
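The two-axis categorization above, by impact stage (training vs. inference) and visibility (white-box vs. black-box), can be sketched as a small data model. This is a minimal illustrative example, not the paper's implementation; the method names in the catalog are hypothetical placeholders chosen for this sketch.

```python
from dataclasses import dataclass
from enum import Enum


class ImpactStage(Enum):
    """Stage of the model lifecycle the attack targets."""
    TRAINING = "training"
    INFERENCE = "inference"


class Visibility(Enum):
    """Attacker's level of access to the target model."""
    WHITE_BOX = "white-box"
    BLACK_BOX = "black-box"


@dataclass(frozen=True)
class JailbreakMethod:
    name: str
    stage: ImpactStage
    visibility: Visibility


def group_by_cell(methods):
    """Bucket methods into the four taxonomy cells (stage, visibility)."""
    cells = {}
    for m in methods:
        cells.setdefault((m.stage, m.visibility), []).append(m.name)
    return cells


# Hypothetical entries, for illustration only.
catalog = [
    JailbreakMethod("gradient-based suffix search",
                    ImpactStage.INFERENCE, Visibility.WHITE_BOX),
    JailbreakMethod("role-play prompt injection",
                    ImpactStage.INFERENCE, Visibility.BLACK_BOX),
    JailbreakMethod("training-data poisoning",
                    ImpactStage.TRAINING, Visibility.WHITE_BOX),
]

cells = group_by_cell(catalog)
```

Grouping by these two axes makes it easy to spot under-studied cells (e.g., training-stage black-box attacks), which is one way such a taxonomy guides comparative analysis.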

💡 Why This Paper Matters

This paper is vital for understanding the landscape of jailbreak attacks on language models and their defenses. By systematically analyzing the evolution and complexities of LLMs, MLLMs, and agents, it provides a comprehensive framework that highlights both current vulnerabilities and future research needs. Such insights are crucial for developing more effective defense strategies against evolving security threats in AI systems.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers will find this paper particularly relevant because it addresses jailbreak attacks, a critical and timely threat to the safe and ethical deployment of LLM technologies. Its structured analysis of attack methods, defense mechanisms, and the limitations of existing research helps identify gaps in the literature and informs the development of more robust AI systems. The paper also outlines future research directions that could drive solutions to emerging threats, making it a valuable resource for both theoretical exploration and practical application.
