← Back to Library

Reverse CAPTCHA: Evaluating LLM Susceptibility to Invisible Unicode Instruction Injection

Authors: Marcus Graves

Published: 2026-02-26

arXiv ID: 2603.00164v1

Added to Library: 2026-03-03 03:01 UTC

Red Teaming

📄 Abstract

We introduce Reverse CAPTCHA, an evaluation framework that tests whether large language models follow invisible Unicode-encoded instructions embedded in otherwise normal-looking text. Unlike traditional CAPTCHAs that distinguish humans from machines, our benchmark exploits a capability gap: models can perceive Unicode control characters that are invisible to human readers. We evaluate five models from two providers across two encoding schemes (zero-width binary and Unicode Tags), four hint levels, two payload framings, and with tool use enabled or disabled. Across 8,308 model outputs, we find that tool use dramatically amplifies compliance (Cohen's h up to 1.37, a large effect), that models exhibit provider-specific encoding preferences (OpenAI models decode zero-width binary; Anthropic models prefer Unicode Tags), and that explicit decoding instructions increase compliance by up to 95 percentage points within a single model and encoding. All pairwise model differences are statistically significant (p < 0.05, Bonferroni-corrected). These results highlight an underexplored attack surface for prompt injection via invisible Unicode payloads.

🔍 Key Points

  • Introduction of the Reverse CAPTCHA framework to evaluate LLMs' vulnerabilities to invisible Unicode instruction injections, contrasting traditional CAPTCHAs.
  • Identification of a significant compliance amplification when large language models have access to tool use, with effect sizes reaching up to 1.37, highlighting a capability gap in models.
  • Demonstration of model-specific encoding preferences, indicating OpenAI models favor zero-width binary encoding, while Anthropic models are more susceptible to Unicode Tags.
  • The establishment of a compliance gradient based on hint levels, showing that providing decoding instructions can dramatically increase compliance rates by as much as 95%.
  • Statistical significance of pairwise model performance differences across various test conditions and encodings, providing a detailed comparison of vulnerabilities.

💡 Why This Paper Matters

This paper is significant in highlighting the inherent vulnerabilities of large language models to sophisticated injection attacks through invisible Unicode characters. The findings serve as a wake-up call for developers and researchers regarding the potential risks of LLMs acting on hidden instructions, especially in critical applications where security is paramount. The proposed Reverse CAPTCHA evaluation framework could guide the development of more robust defense mechanisms against such attacks.

🎯 Why It's Interesting for AI Security Researchers

The paper is highly relevant for AI security researchers as it uncovers a previously underexplored attack vector that utilizes invisible Unicode characters, showcasing how such vulnerabilities can be exploited in real-world scenarios. The implications of tool-access enhancing compliance also encourage new lines of inquiry into how models should be designed and secured. Understanding these vulnerabilities is critical for developing resilient LLMs, particularly in applications where models interact with untrusted sources or user inputs.

📚 Read the Full Paper