Toward Understanding the Transferability of Adversarial Suffixes in Large Language Models

Authors: Sarah Ball, Niki Hasrati, Alexander Robey, Avi Schwarzschild, Frauke Kreuter, Zico Kolter, Andrej Risteski

Published: 2025-10-24

arXiv ID: 2510.22014v1

Added to Library: 2025-10-28 04:02 UTC

Red Teaming

📄 Abstract

Discrete optimization-based jailbreaking attacks on large language models aim to generate short, nonsensical suffixes that, when appended onto input prompts, elicit disallowed content. Notably, these suffixes are often transferable -- succeeding on prompts and models for which they were never optimized. And yet, despite the fact that transferability is surprising and empirically well-established, the field lacks a rigorous analysis of when and why transfer occurs. To fill this gap, we identify three statistical properties that strongly correlate with transfer success across numerous experimental settings: (1) how much a prompt without a suffix activates a model's internal refusal direction, (2) how strongly a suffix induces a push away from this direction, and (3) how large these shifts are in directions orthogonal to refusal. On the other hand, we find that prompt semantic similarity only weakly correlates with transfer success. These findings lead to a more fine-grained understanding of transferability, which we use in interventional experiments to showcase how our statistical analysis can translate into practical improvements in attack success.
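
To make these three properties concrete, the sketch below shows how they might be computed from a model's residual-stream activations, assuming a unit-norm refusal direction has already been extracted (e.g., as a difference of means between harmful and harmless prompts) and that per-prompt hidden states at a chosen layer are available. The function name, tensor shapes, and the use of PyTorch are illustrative assumptions, not the paper's implementation.

```python
import torch

def transfer_features(h_prompt: torch.Tensor,
                      h_prompt_suffix: torch.Tensor,
                      refusal_dir: torch.Tensor):
    """Illustrative computation of the three transfer-related features.

    h_prompt        -- hidden state of the prompt alone, shape (d,)
    h_prompt_suffix -- hidden state of the prompt with the suffix appended, shape (d,)
    refusal_dir     -- refusal direction at the same layer, shape (d,)
    """
    r = refusal_dir / refusal_dir.norm()  # work with a unit vector

    # (1) Refusal connectivity: how strongly the bare prompt already
    #     activates the refusal direction.
    refusal_connectivity = torch.dot(h_prompt, r)

    # Activation shift induced by appending the suffix.
    delta = h_prompt_suffix - h_prompt

    # (2) Push away from refusal: the signed component of the shift along
    #     the refusal direction (more negative = stronger push away).
    push_along_refusal = torch.dot(delta, r)

    # (3) Orthogonal shift: magnitude of the shift after removing its
    #     component along the refusal direction.
    orthogonal_shift = (delta - push_along_refusal * r).norm()

    return refusal_connectivity, push_along_refusal, orthogonal_shift
```

Each returned scalar corresponds to one of the three properties listed in the abstract.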

🔍 Key Points

  • The paper identifies three features that correlate with the transfer success of adversarial suffixes: how strongly the bare prompt activates the model's refusal direction (prompt refusal connectivity), how strongly the suffix pushes activations away from that direction, and how large the induced shift is in directions orthogonal to refusal.
  • Semantic similarity between prompts does not strongly predict transfer success, indicating that adversarial transfer relies more on activation dynamics than linguistic similarity.
  • The authors conduct both qualitative and quantitative analyses, revealing that structural interventions in the generation of suffixes can enhance their effectiveness in adversarial attacks against large language models (LLMs).
  • Successful suffixes tend to induce shifts both away from the refusal direction and in directions orthogonal to it, pointing to the role of activation-space geometry in adversarial transfer.
  • The study informs both stronger attack strategies and potential defenses, offering insight into how the safety mechanisms of LLMs operate internally.

💡 Why This Paper Matters

This paper advances our understanding of when and why adversarial suffixes transfer across prompts and models. By systematically analyzing the features that correlate with transfer success, it offers both an empirical account of the phenomenon and practical guidance for improving attack effectiveness. Its findings highlight the close relationship between model activations and adversarial behavior, contributing to the broader conversation about model security in AI.

🎯 Why It's Interesting for AI Security Researchers

The insights in this paper are particularly valuable for AI security researchers, since understanding the transferability of adversarial attacks on language models is critical for designing robust defense mechanisms. The features identified as strongly correlating with transfer success can guide future research on adversarial training techniques, improve model alignment, and help mitigate risks from the misuse of AI models capable of generating harmful content.

📚 Read the Full Paper