
Adversarial Déjà Vu: Jailbreak Dictionary Learning for Stronger Generalization to Unseen Attacks

Authors: Mahavir Dabas, Tran Huynh, Nikhil Reddy Billa, Jiachen T. Wang, Peng Gao, Charith Peris, Yao Ma, Rahul Gupta, Ming Jin, Prateek Mittal, Ruoxi Jia

Published: 2025-10-24

arXiv ID: 2510.21910v1

Added to Library: 2025-10-28 04:03 UTC

Red Teaming

📄 Abstract

Large language models remain vulnerable to jailbreak attacks that bypass safety guardrails to elicit harmful outputs. Defending against novel jailbreaks represents a critical challenge in AI safety. Adversarial training -- designed to make models robust against worst-case perturbations -- has been the dominant paradigm for adversarial robustness. However, due to optimization challenges and difficulties in defining realistic threat models, adversarial training methods often fail on newly developed jailbreaks in practice. This paper proposes a new paradigm for improving robustness against unseen jailbreaks, centered on the Adversarial Déjà Vu hypothesis: novel jailbreaks are not fundamentally new, but largely recombinations of adversarial skills from previous attacks. We study this hypothesis through a large-scale analysis of 32 attack papers published over two years. Using an automated pipeline, we extract and compress adversarial skills into a sparse dictionary of primitives, with LLMs generating human-readable descriptions. Our analysis reveals that unseen attacks can be effectively explained as sparse compositions of earlier skills, with explanatory power increasing monotonically as skill coverage grows. Guided by this insight, we introduce Adversarial Skill Compositional Training (ASCoT), which trains on diverse compositions of skill primitives rather than isolated attack instances. ASCoT substantially improves robustness to unseen attacks, including multi-turn jailbreaks, while maintaining low over-refusal rates. We also demonstrate that expanding adversarial skill coverage, not just data scale, is key to defending against novel attacks. **Warning: This paper contains content that may be harmful or offensive in nature.**
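The abstract's core claim -- that an unseen attack is well explained as a sparse combination of previously seen skill primitives -- amounts to sparse coding over a learned dictionary. The paper learns its dictionary from 32 attack papers; the sketch below only illustrates the decomposition step, using a random synthetic dictionary and a greedy matching-pursuit solver. All names and dimensions here are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def sparse_code(x, D, k=3):
    """Greedily explain attack embedding x as a combination of at most k
    dictionary atoms (columns of D), in the style of orthogonal matching
    pursuit: pick the best-correlated atom, refit, repeat."""
    residual = x.copy()
    support = []
    coefs = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the skill atom most correlated with what is still unexplained
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # refit coefficients on the current support via least squares
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coefs[:] = 0.0
        coefs[support] = sol
        residual = x - D @ coefs
    return coefs

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 16))      # 16 hypothetical skill primitives
D /= np.linalg.norm(D, axis=0)     # unit-norm atoms
x = 0.7 * D[:, 2] + 0.3 * D[:, 9]  # a "new" attack mixing two old skills
w = sparse_code(x, D, k=2)
print(np.nonzero(w)[0])            # indices of the skills that explain it
```

On this toy input the solver recovers the two contributing skills exactly, which is the sense in which "explanatory power" can be measured as residual error at a given sparsity level.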

🔍 Key Points

  • Introduces the Adversarial Déjà Vu hypothesis, suggesting that new jailbreak attacks are largely recombinations of prior adversarial skills, providing a framework for understanding the evolving nature of these threats.
  • Develops an automated pipeline for extracting and learning adversarial skills, resulting in a sparse Jailbreak Dictionary that captures essential manipulation techniques.
  • Proposes Adversarial Skill Compositional Training (ASCoT), which significantly enhances model robustness against unseen attacks by training on diverse combinations of identified skills, rather than isolated attack examples.
  • Demonstrates through empirical evaluation that expanding the coverage of adversarial skills within the training data improves generalization to novel attacks, emphasizing the importance of skill diversity over mere data size.
  • Presents a comprehensive analysis of how jailbreak techniques have evolved and recombined over two years of published attacks, linking these patterns to the effectiveness of adversarial training approaches.
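The ASCoT idea in the third bullet -- train on diverse combinations of skill primitives rather than fixed attack instances -- can be sketched as a data-sampling step. The skill names and descriptions below are hypothetical stand-ins; the paper's primitives are extracted automatically and described by LLMs, not hand-written.

```python
import random

# Hypothetical skill primitives (the paper's dictionary is learned, not curated)
SKILLS = {
    "role_play": "Ask the model to adopt a fictional persona.",
    "payload_splitting": "Split the request across benign-looking parts.",
    "hypothetical_framing": "Wrap the request in a purely hypothetical scenario.",
    "encoding": "Obfuscate key terms with an encoding or cipher.",
}

def sample_composition(k=2, seed=None):
    """Draw a random k-subset of skill primitives to compose into one
    adversarial training example, so coverage of skill combinations grows
    with the number of samples rather than the number of attack papers."""
    rng = random.Random(seed)
    return rng.sample(sorted(SKILLS), k)

combo = sample_composition(k=2, seed=0)
print(combo)
```

Sampling subsets rather than replaying whole attacks is what lets coverage scale combinatorially, which matches the paper's finding that skill coverage, not raw data volume, drives generalization.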

💡 Why This Paper Matters

This paper proposes a paradigm shift in defending language models against adversarial attacks: moving from patching individual attacks to a compositional understanding of the adversarial skills they reuse. By exploiting the structured, recombinant nature of these skills, the research offers a systematic approach to the persistent challenge of jailbreaks, improving the safety and reliability of AI systems.

🎯 Why It's Interesting for AI Security Researchers

This paper is of particular interest to AI security researchers because it both identifies the underlying patterns of jailbreak attacks and provides a training strategy (ASCoT) that proactively improves model robustness. The implications extend beyond language models, potentially informing broader strategies for building AI systems resilient to adversarial manipulation.
