Imperceptible Jailbreaking against Large Language Models

Authors: Kuofeng Gao, Yiming Li, Chao Du, Xin Wang, Xingjun Ma, Shu-Tao Xia, Tianyu Pang

Published: 2025-10-06

arXiv ID: 2510.05025v1

Added to Library: 2025-10-07 04:01 UTC

Red Teaming

📄 Abstract

Jailbreaking attacks on the vision modality typically rely on imperceptible adversarial perturbations, whereas attacks on the textual modality are generally assumed to require visible modifications (e.g., non-semantic suffixes). In this paper, we introduce imperceptible jailbreaks that exploit a class of Unicode characters called variation selectors. By appending invisible variation selectors to malicious questions, the jailbreak prompts appear visually identical to original malicious questions on screen, while their tokenization is "secretly" altered. We propose a chain-of-search pipeline to generate such adversarial suffixes to induce harmful responses. Our experiments show that our imperceptible jailbreaks achieve high attack success rates against four aligned LLMs and generalize to prompt injection attacks, all without producing any visible modifications in the written prompt. Our code is available at https://github.com/sail-sg/imperceptible-jailbreaks.
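The core trick described in the abstract can be illustrated in a few lines of Python. The selector ranges (U+FE00 to U+FE0F and U+E0100 to U+E01EF) are standard Unicode variation selectors; how the paper chooses and orders them is not reproduced here, so treat this as a minimal sketch of the append-an-invisible-suffix idea rather than the authors' implementation.

```python
# Minimal sketch of the invisible-suffix idea from the abstract.
# Unicode variation selectors VS1-VS16 (U+FE00 to U+FE0F) and
# VS17-VS256 (U+E0100 to U+E01EF) have no visible glyph, so appending
# them leaves the rendered prompt unchanged while the underlying
# code points (and hence the tokenization) differ.
VARIATION_SELECTORS = [chr(cp) for cp in range(0xFE00, 0xFE10)] + \
                      [chr(cp) for cp in range(0xE0100, 0xE01F0)]  # 256 selectors

def append_invisible_suffix(prompt: str, indices: list[int]) -> str:
    """Append the variation selectors chosen by `indices` to the prompt."""
    suffix = "".join(VARIATION_SELECTORS[i] for i in indices)
    return prompt + suffix

original = "Describe the weather in Paris."
adversarial = append_invisible_suffix(original, [3, 17, 250, 42])

print(original)                         # renders as: Describe the weather in Paris.
print(adversarial)                      # renders identically on screen
print(len(original), len(adversarial))  # 30 vs 34 code points
print(original == adversarial)          # False: the strings differ invisibly
```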

🔍 Key Points

  • Introduction of imperceptible jailbreaks that append invisible Unicode variation selectors to malicious questions, altering tokenization without any visible change to the prompt.
  • Development of a chain-of-search optimization pipeline that iteratively searches for effective invisible adversarial suffixes (a hedged sketch follows this list).
  • Demonstration of high attack success rates against four aligned LLMs using variation-selector suffixes, even under strict safety alignment.
  • Extension of the attack to prompt injection scenarios, showing that the technique generalizes beyond jailbreaking.
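Since the abstract does not detail the chain-of-search procedure, the sketch below only conveys the general shape of an iterative suffix search: candidate invisible suffixes are sampled, the target model is queried, and the search "chains" by extending the suffix when a round fails. The helpers `query_llm` and `is_jailbroken`, along with all hyperparameters, are hypothetical placeholders, not the paper's pipeline.

```python
import random

# Hedged sketch of an iterative ("chain-of-search"-style) suffix search.
# `query_llm` and `is_jailbroken` are hypothetical callables supplied by
# the caller; the paper's actual search strategy, scoring, and stopping
# rules are not reproduced here.
VARIATION_SELECTORS = [chr(cp) for cp in range(0xFE00, 0xFE10)] + \
                      [chr(cp) for cp in range(0xE0100, 0xE01F0)]

def random_suffix(length: int) -> str:
    """Sample an invisible suffix of `length` variation selectors."""
    return "".join(random.choice(VARIATION_SELECTORS) for _ in range(length))

def chain_of_search(question: str, query_llm, is_jailbroken,
                    rounds: int = 5, attempts_per_round: int = 50,
                    suffix_len: int = 10):
    """Search for an invisible adversarial suffix, growing it round by round."""
    suffix = ""
    for _ in range(rounds):
        for _ in range(attempts_per_round):
            candidate = suffix + random_suffix(suffix_len)
            response = query_llm(question + candidate)
            if is_jailbroken(response):
                return question + candidate   # visually identical to `question`
        suffix += random_suffix(suffix_len)   # chain: extend the suffix and retry
    return None                               # search budget exhausted
```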

💡 Why This Paper Matters

The paper exposes a significant vulnerability in large language models' safety mechanisms: imperceptible attacks that leverage invisible Unicode characters to manipulate model outputs. Beyond revealing weaknesses in current LLM defenses, it provides a concrete attack recipe that could be exploited in practice, underscoring the need for more robust alignment and for detection of invisible-character manipulation.

🎯 Why It's Interesting for AI Security Researchers

This paper is particularly relevant to AI security researchers because it exposes an adversarial attack vector that circumvents defenses built around visible prompt modifications. The use of invisible Unicode characters to construct jailbreak prompts highlights critical gaps in LLM safety mechanisms and input handling, motivating further investigation and countermeasures, for example detecting or stripping invisible characters before prompts reach the model.

📚 Read the Full Paper