Let the Bees Find the Weak Spots: A Path Planning Perspective on Multi-Turn Jailbreak Attacks against LLMs

Authors: Yize Liu, Yunyun Hou, Aina Sui

Published: 2025-11-05

arXiv ID: 2511.03271v1

Added to Library: 2025-11-06 05:00 UTC

Red Teaming

📄 Abstract

Large Language Models (LLMs) have been widely deployed across various applications, yet their potential security and ethical risks have raised increasing concerns. Existing research employs red teaming evaluations, utilizing multi-turn jailbreaks to identify potential vulnerabilities in LLMs. However, these approaches often lack exploration of successful dialogue trajectories within the attack space, and they tend to overlook the considerable overhead associated with the attack process. To address these limitations, this paper first introduces a theoretical model based on dynamically weighted graph topology, abstracting the multi-turn attack process as a path planning problem. Based on this framework, we propose ABC, an enhanced Artificial Bee Colony algorithm for multi-turn jailbreaks, featuring a collaborative search mechanism with employed, onlooker, and scout bees. This algorithm significantly improves the efficiency of optimal attack path search while substantially reducing the average number of queries required. Empirical evaluations on three open-source and two proprietary language models demonstrate the effectiveness of our approach, achieving attack success rates above 90% across the board, with a peak of 98% on GPT-3.5-Turbo, and outperforming existing baselines. Furthermore, it achieves comparable success with only 26 queries on average, significantly reducing red teaming overhead and highlighting its superior efficiency.
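
The abstract's central abstraction, a dynamically weighted graph whose paths correspond to multi-turn dialogue trajectories, can be made concrete with a short sketch. The paper's implementation is not reproduced here, so everything below is an illustrative assumption: the `AttackGraph` class, the multiplicative path score, and the feedback-driven edge update are one plausible reading of "dynamically weighted", not the authors' code.

```python
import random

# Hypothetical sketch of the paper's "dynamically weighted graph" abstraction:
# nodes are intermediate dialogue states (sub-goals of the harmful target),
# and a directed edge weight estimates how likely the target model is to
# comply when the conversation moves from one state to the next. A multi-turn
# jailbreak then corresponds to a path from a benign opener to the goal state.

class AttackGraph:
    def __init__(self, num_states: int, seed: int = 0):
        rng = random.Random(seed)
        self.num_states = num_states
        # weight[u][v]: estimated transition success probability (a prior;
        # "dynamic" weighting updates it from observed model responses).
        self.weight = [
            [rng.uniform(0.1, 1.0) if u != v else 0.0 for v in range(num_states)]
            for u in range(num_states)
        ]

    def path_score(self, path: list[int]) -> float:
        """Multiplicative score of a dialogue trajectory: the product of
        per-turn transition weights, so one implausible turn sinks the path."""
        score = 1.0
        for u, v in zip(path, path[1:]):
            score *= self.weight[u][v]
        return score

    def update_edge(self, u: int, v: int, succeeded: bool, lr: float = 0.3):
        """Nudge an edge toward 1.0 when the target model complied with that
        turn, toward 0.0 when it refused."""
        target = 1.0 if succeeded else 0.0
        self.weight[u][v] += lr * (target - self.weight[u][v])


if __name__ == "__main__":
    g = AttackGraph(num_states=6)
    trajectory = [0, 2, 4, 5]  # benign opener -> ... -> harmful goal
    print(f"score before feedback: {g.path_score(trajectory):.3f}")
    g.update_edge(2, 4, succeeded=False)  # the model refused this turn
    print(f"score after feedback:  {g.path_score(trajectory):.3f}")
```

Under this reading, each red-teaming turn both consumes a query and refines the edge weights, so later path searches concentrate on transitions the target model has actually been observed to accept.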

🔍 Key Points

  • Introduces a novel theoretical model using dynamically weighted graph topology to formalize multi-turn attack processes as path planning problems, enhancing our understanding of these complex attacks.
  • Develops ABC, an enhanced Artificial Bee Colony algorithm in which employed, onlooker, and scout bees collaborate on the search, significantly improving attack efficiency: it achieves an Attack Success Rate (ASR) above 90% while reducing the average query count to only 26 for successful attacks (a generic version of the loop is sketched after this list).
  • Empirically validates the approach on five large language models, including proprietary models, outperforming existing jailbreak methods in both effectiveness and efficiency across diverse scenarios.
  • Demonstrates that the proposed method maintains high ASR even in challenging attack categories, indicating a strong generalization capability of the algorithm.
  • Offers insights for AI security practitioners on optimizing red-teaming workflows, supporting the development of more effective defenses against the vulnerabilities such attacks expose.
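
To make the employed/onlooker/scout division of labor concrete, here is a minimal, generic Artificial Bee Colony loop over candidate trajectories. The fitness function, constants, and helper names are stand-in assumptions for exposition; in the paper, evaluating a candidate means querying the target model, which is why the average query budget of 26 is the headline efficiency metric.

```python
import random

# Minimal, self-contained sketch of an Artificial Bee Colony (ABC) search
# over candidate dialogue trajectories. All names and constants here
# (random_path, mutate, fitness, COLONY_SIZE, ...) are illustrative
# assumptions; in the paper, evaluating a trajectory means querying the
# target LLM, so every fitness() call below would cost real queries.

COLONY_SIZE = 6       # one employed bee per candidate trajectory
STAGNATION_LIMIT = 3  # failed improvements before a scout abandons a path
PATH_LEN = 4
NUM_STATES = 8        # size of the (hypothetical) attack-state graph
rng = random.Random(0)

def random_path() -> list[int]:
    # Trajectories start at a fixed benign opener (state 0).
    return [0] + rng.sample(range(1, NUM_STATES), PATH_LEN - 1)

def mutate(path: list[int]) -> list[int]:
    # Local search step: swap one intermediate dialogue state.
    new = path[:]
    new[rng.randrange(1, len(new))] = rng.randrange(1, NUM_STATES)
    return new

def fitness(path: list[int]) -> float:
    # Stand-in objective; a real attack would score the target model's
    # responses along this trajectory instead.
    return sum(1.0 / (1 + abs(a - b)) for a, b in zip(path, path[1:]))

paths = [random_path() for _ in range(COLONY_SIZE)]
trials = [0] * COLONY_SIZE

for _round in range(20):
    # Employed bees: exploit the neighborhood of each current trajectory.
    for i in range(COLONY_SIZE):
        cand = mutate(paths[i])
        if fitness(cand) > fitness(paths[i]):
            paths[i], trials[i] = cand, 0
        else:
            trials[i] += 1
    # Onlooker bees: allocate extra search effort in proportion to fitness.
    fits = [fitness(p) for p in paths]
    for _ in range(COLONY_SIZE):
        i = rng.choices(range(COLONY_SIZE), weights=fits)[0]
        cand = mutate(paths[i])
        if fitness(cand) > fitness(paths[i]):
            paths[i], trials[i] = cand, 0
    # Scout bees: replace stagnant trajectories with fresh random ones.
    for i in range(COLONY_SIZE):
        if trials[i] >= STAGNATION_LIMIT:
            paths[i], trials[i] = random_path(), 0

best = max(paths, key=fitness)
print(f"best trajectory: {best}, fitness = {fitness(best):.3f}")
```

The three phases mirror the division described above: employed bees refine existing trajectories, onlooker bees concentrate effort on the most promising ones, and scout bees restart stagnant searches, the mechanism the paper credits for cutting the average number of queries relative to exhaustive multi-turn exploration.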

💡 Why This Paper Matters

This paper advances the understanding and mitigation of vulnerabilities in large language models by pairing a principled graph-based model of multi-turn attacks with an efficient collaborative search algorithm. By addressing the blind spots of current red-teaming evaluations and improving both attack success rates and query efficiency, the work offers a practical route to strengthening the security posture of AI systems deployed in real-world applications.

🎯 Why It's Interesting for AI Security Researchers

The findings are particularly relevant to AI security researchers because they introduce novel methodologies for identifying and exploiting vulnerabilities in language models, a growing concern for AI safety. The efficient attack strategies proposed here can help researchers understand the dynamics of multi-turn attacks and inform the development of more robust defenses, making the paper essential reading for those focused on resilience against adversarial threats.

📚 Read the Full Paper