
Universal and Transferable Adversarial Attack on Large Language Models Using Exponentiated Gradient Descent

Authors: Sajib Biswas, Mao Nishino, Samuel Jacob Chacko, Xiuwen Liu

Published: 2025-08-20

arXiv ID: 2508.14853v1

Added to Library: 2025-08-21 04:00 UTC

Red Teaming

📄 Abstract

As large language models (LLMs) are increasingly deployed in critical applications, ensuring their robustness and safety alignment remains a major challenge. Despite the overall success of alignment techniques such as reinforcement learning from human feedback (RLHF) on typical prompts, LLMs remain vulnerable to jailbreak attacks enabled by crafted adversarial triggers appended to user prompts. Most existing jailbreak methods either rely on inefficient searches over discrete token spaces or direct optimization of continuous embeddings. While continuous embeddings can be given directly to selected open-source models as input, doing so is not feasible for proprietary models. On the other hand, projecting these embeddings back into valid discrete tokens introduces additional complexity and often reduces attack effectiveness. We propose an intrinsic optimization method which directly optimizes relaxed one-hot encodings of the adversarial suffix tokens using exponentiated gradient descent coupled with Bregman projection, ensuring that the optimized one-hot encoding of each token always remains within the probability simplex. We provide theoretical proof of convergence for our proposed method and implement an efficient algorithm that effectively jailbreaks several widely used LLMs. Our method achieves higher success rates and faster convergence compared to three state-of-the-art baselines, evaluated on five open-source LLMs and four adversarial behavior datasets curated for evaluating jailbreak methods. In addition to individual prompt attacks, we also generate universal adversarial suffixes effective across multiple prompts and demonstrate transferability of optimized suffixes to different LLMs.
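To make the core update described in the abstract concrete, below is a minimal, illustrative sketch (not the authors' implementation) of an exponentiated gradient descent step on a relaxed one-hot suffix encoding. The multiplicative update followed by row-wise renormalization is the Bregman (KL) projection back onto the probability simplex; the toy surrogate gradient here merely stands in for the jailbreak loss gradient that, in the paper's setting, would be back-propagated through the victim LLM. All names, shapes, and parameters are assumptions chosen for illustration.

```python
import numpy as np

def egd_step(P, grad, eta=0.5):
    """One exponentiated gradient descent update on relaxed one-hot rows.

    P    : (suffix_len, vocab_size) array; each row lies on the probability simplex.
    grad : gradient of the adversarial loss with respect to P.
    eta  : step size.

    The multiplicative update followed by row-wise renormalization is the
    Bregman (KL) projection back onto the simplex, so every row remains a
    valid relaxed one-hot encoding after each step.
    """
    Z = P * np.exp(-eta * grad)
    return Z / Z.sum(axis=1, keepdims=True)


# Toy demonstration with a surrogate objective: pull each suffix position
# toward a hypothetical target token. In the paper's setting the gradient
# would instead come from back-propagating the jailbreak objective through
# the victim LLM.
rng = np.random.default_rng(0)
suffix_len, vocab_size = 4, 16
P = np.full((suffix_len, vocab_size), 1.0 / vocab_size)   # uniform initialization
target = rng.integers(vocab_size, size=suffix_len)        # illustrative target ids

for _ in range(200):
    grad = P.copy()
    grad[np.arange(suffix_len), target] -= 1.0            # surrogate gradient
    P = egd_step(P, grad)

print(P.argmax(axis=1), target)                           # rows concentrate on the targets
```

Because the update is multiplicative and renormalized, each row of P stays a valid probability distribution at every step, which is the property the abstract emphasizes over methods that optimize unconstrained embeddings and must later project them back to discrete tokens.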

🔍 Key Points

  • Introduces a novel adversarial attack that uses Exponentiated Gradient Descent (EGD) to directly optimize relaxed one-hot encodings of adversarial suffix tokens for Large Language Models (LLMs).
  • Achieves significantly higher success rates and faster convergence than three state-of-the-art baselines, demonstrating both the robustness and the efficiency of the jailbreak attack.
  • Generates universal adversarial suffixes that remain effective across multiple prompts and shows that optimized suffixes transfer to different LLM architectures, including proprietary models (see the sketch after this list).
  • Provides a theoretical proof of convergence for the optimization method, strengthening the validity and applicability of the approach in adversarial settings.
  • Evaluates the method extensively across multiple datasets and models, establishing a strong benchmark for future research on adversarial attacks against LLMs.
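As a hedged illustration of the universal-suffix idea mentioned above (again a sketch under assumed names and shapes, not the paper's code), one natural formulation is to average the per-prompt loss gradients before applying the same multiplicative update, so that a single relaxed one-hot matrix is optimized jointly over a batch of prompts:

```python
import numpy as np

def universal_egd_step(P, per_prompt_grads, eta=0.5):
    """One exponentiated gradient descent step for a shared (universal) suffix.

    P                : (suffix_len, vocab_size) relaxed one-hot rows shared by all prompts.
    per_prompt_grads : iterable of (suffix_len, vocab_size) gradients of the
                       adversarial loss, one per prompt in the batch (illustrative
                       stand-ins; in practice these would come from the victim LLM).
    eta              : step size.
    """
    grad = np.mean(np.stack(list(per_prompt_grads)), axis=0)  # aggregate over prompts
    Z = P * np.exp(-eta * grad)                               # multiplicative update
    return Z / Z.sum(axis=1, keepdims=True)                   # KL projection onto the simplex
```

Taking the arg-max token at each position of the optimized matrix would then yield a single discrete suffix intended to remain effective across every prompt in the batch.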

💡 Why This Paper Matters

This paper is significant in the context of AI safety and security as it presents a robust method for targeting vulnerabilities in large language models, which are increasingly being deployed in sensitive applications. By demonstrating the effectiveness of universal and transferable adversarial suffixes, it raises important questions about the resilience and alignment of LLMs against malicious prompts, making the research crucial for developing more secure models.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers will find this paper particularly relevant as it highlights innovative techniques for adversarial attacks, offering insights into potential vulnerabilities in LLMs. Understanding these attack vectors enables better defense mechanisms and helps ensure that AI systems align with safety standards, thereby mitigating risks associated with their deployment in real-world scenarios.

📚 Read the Full Paper

arXiv: https://arxiv.org/abs/2508.14853v1