
AutoDAN-Reasoning: Enhancing Strategies Exploration based Jailbreak Attacks with Test-Time Scaling

Authors: Xiaogeng Liu, Chaowei Xiao

Published: 2025-10-06

arXiv ID: 2510.05379v2

Added to Library: 2025-10-09 01:01 UTC

Red Teaming

📄 Abstract

Recent advancements in jailbreaking large language models (LLMs), such as AutoDAN-Turbo, have demonstrated the power of automated strategy discovery. AutoDAN-Turbo employs a lifelong learning agent to build a rich library of attack strategies from scratch. While highly effective, its test-time generation process involves sampling a strategy and generating a single corresponding attack prompt, which may not fully exploit the potential of the learned strategy library. In this paper, we propose to further improve the attack performance of AutoDAN-Turbo through test-time scaling. We introduce two distinct scaling methods: Best-of-N and Beam Search. The Best-of-N method generates N candidate attack prompts from a sampled strategy and selects the most effective one based on a scorer model. The Beam Search method conducts a more exhaustive search by exploring combinations of strategies from the library to discover more potent and synergistic attack vectors. According to the experiments, the proposed methods significantly boost performance, with Beam Search increasing the attack success rate by up to 15.6 percentage points on Llama-3.1-70B-Instruct and achieving a nearly 60% relative improvement against the highly robust GPT-o4-mini compared to the vanilla method.

🔍 Key Points

  • Introduction of AutoDAN-Reasoning as an enhancement to the original AutoDAN-Turbo framework, leveraging test-time scaling to improve jailbreak success.
  • Presentation of two test-time scaling methods, Best-of-N and Beam Search, which optimize the attack prompt generation process at inference time.
  • Best-of-N samples multiple candidate prompts from a single strategy and uses a scorer model to select the most effective one, mitigating the weakness of single-shot generation; Beam Search explores combinations of strategies from the library to uncover more potent, synergistic attack vectors (see the sketch after this list).
  • Experimental results show substantial gains in attack success rate across models: up to 15.6 percentage points on Llama-3.1-70B-Instruct and a nearly 60% relative improvement against GPT-o4-mini compared to the vanilla method.
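
To make the two scaling loops concrete, here is a minimal sketch of the selection logic they describe. The `generate` and `score` callables are hypothetical placeholders for the attacker LLM and scorer model in the AutoDAN-Turbo pipeline; the sampling parameters, beam width, and search depth used in the paper are not reproduced here.

```python
from typing import Callable, Sequence

# Hypothetical stand-ins for components the paper describes but does not specify here:
# `generate` maps a combination of strategy descriptions to a candidate attack prompt
# (the attacker LLM), and `score` is the scorer model's effectiveness rating.
Generate = Callable[[tuple[str, ...]], str]
Score = Callable[[str], float]


def best_of_n(strategy: str, generate: Generate, score: Score, n: int = 8) -> str:
    """Best-of-N: draw N candidate prompts from one sampled strategy, keep the top scorer."""
    candidates = [generate((strategy,)) for _ in range(n)]
    return max(candidates, key=score)


def beam_search(library: Sequence[str], generate: Generate, score: Score,
                beam_width: int = 4, depth: int = 2) -> str:
    """Beam search over strategy combinations: extend the best beams with one more
    strategy per step, keeping only the top `beam_width` combinations each round."""
    # Each beam is (strategy_combo, candidate_prompt, score).
    beams = []
    for s in library:
        prompt = generate((s,))
        beams.append(((s,), prompt, score(prompt)))
    beams = sorted(beams, key=lambda b: b[2], reverse=True)[:beam_width]

    for _ in range(depth - 1):
        expanded = []
        for combo, _, _ in beams:
            for s in library:
                if s in combo:  # do not repeat a strategy within one combination
                    continue
                new_combo = combo + (s,)
                prompt = generate(new_combo)
                expanded.append((new_combo, prompt, score(prompt)))
        if not expanded:
            break
        beams = sorted(expanded, key=lambda b: b[2], reverse=True)[:beam_width]

    return max(beams, key=lambda b: b[2])[1]  # best candidate prompt found

```

The design difference the paper emphasizes is where the extra test-time compute goes: Best-of-N spends it on resampling a single sampled strategy, whereas Beam Search spends it on exploring combinations drawn from the whole strategy library.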

💡 Why This Paper Matters

This paper contributes to AI security by strengthening an existing framework for automated jailbreak attacks on large language models (LLMs). By introducing test-time scaling methods, the work shows that the strategy libraries learned by frameworks like AutoDAN-Turbo are underexploited at generation time, and that additional compute at inference can substantially raise attack success rates. This underscores the need for continuous innovation in defensive measures against emerging threats to AI safety.

🎯 Why It's Interesting for AI Security Researchers

The paper is highly relevant for AI security researchers: it tackles the pressing challenge of jailbreaking state-of-the-art large language models and demonstrates search methods that can systematically probe existing safety mechanisms. The findings highlight the evolving landscape of AI misuse and the need for adaptive strategies in both attack and defense contexts, making it a valuable resource for those developing robust safety measures for AI systems.
