
COGNITION: From Evaluation to Defense against Multimodal LLM CAPTCHA Solvers

Authors: Junyu Wang, Changjia Zhu, Yuanbo Zhou, Lingyao Li, Xu He, Junjie Xiong

Published: 2025-12-02

arXiv ID: 2512.02318v2

Added to Library: 2025-12-04 03:01 UTC

Safety

📄 Abstract

This paper studies how multimodal large language models (MLLMs) undermine the security guarantees of visual CAPTCHA. We identify the attack surface where an adversary can cheaply automate CAPTCHA solving using off-the-shelf models. We evaluate 7 leading commercial and open-source MLLMs across 18 real-world CAPTCHA task types, measuring single-shot accuracy, success under limited retries, end-to-end latency, and per-solve cost. We further analyze the impact of task-specific prompt engineering and few-shot demonstrations on solver effectiveness. We reveal that MLLMs can reliably solve recognition-oriented and low-interaction CAPTCHA tasks at human-like cost and latency, whereas tasks requiring fine-grained localization, multi-step spatial reasoning, or cross-frame consistency remain significantly harder for current models. By examining the reasoning traces of such MLLMs, we investigate the underlying mechanisms of why models succeed or fail on specific CAPTCHA puzzles and use these insights to derive defense-oriented guidelines for selecting and strengthening CAPTCHA tasks. We conclude by discussing implications for platform operators deploying CAPTCHA as part of their abuse-mitigation pipeline. Code availability: https://anonymous.4open.science/r/Captcha-465E/.
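The abstract's two headline metrics, single-shot accuracy and success under a limited retry budget, can be computed directly from per-task attempt logs. The sketch below is illustrative only (the paper's actual evaluation harness is in the linked repository); the `logs` data and function names are hypothetical.

```python
from typing import Sequence


def single_shot_accuracy(attempts: Sequence[Sequence[bool]]) -> float:
    """Fraction of tasks the solver cracks on its very first attempt."""
    return sum(a[0] for a in attempts) / len(attempts)


def success_within_retries(attempts: Sequence[Sequence[bool]], budget: int) -> float:
    """Fraction of tasks solved within the first `budget` attempts."""
    return sum(any(a[:budget]) for a in attempts) / len(attempts)


# Hypothetical per-task attempt logs: each inner list records whether
# each successive solver attempt passed the CAPTCHA.
logs = [
    [True],                 # solved on the first try
    [False, True],          # solved on the first retry
    [False, False, False],  # never solved within three attempts
]

print(single_shot_accuracy(logs))       # 1/3 of tasks solved first try
print(success_within_retries(logs, 2))  # 2/3 solved within a 2-attempt budget
```

A retry budget matters for defenders because most deployed CAPTCHAs allow a few attempts before locking out; a solver with modest single-shot accuracy can still achieve high success under retries.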

🔍 Key Points

  • Study investigates the vulnerability of visual CAPTCHAs to multimodal large language models (MLLMs) and identifies the attack surface for automated solving using off-the-shelf models.
  • Evaluated 7 MLLMs across 18 real-world CAPTCHA task types, measuring single-shot accuracy, success under limited retries, latency, and per-solve cost, revealing a sharp hardness gap across CAPTCHA types.
  • Identified which CAPTCHAs are broken (recognition-oriented tasks) and which are robust (tasks requiring spatial reasoning), providing insights into MLLMs' specific weaknesses in solving CAPTCHAs.
  • Derived practical, defense-oriented guidelines for designing more resilient CAPTCHAs based on structural hardness factors, including continuous-space localization and combining perception with basic arithmetic.
  • The paper discusses the implications of its findings for platform operators and proposes adaptive strategies to improve the effectiveness of CAPTCHA systems against automated solvers.

💡 Why This Paper Matters

This paper provides critical insights into the current limitations of visual CAPTCHAs in the face of sophisticated MLLM solvers. Through rigorous evaluation, it identifies which CAPTCHA task types remain resistant to automated solving and offers actionable guidelines to strengthen CAPTCHA security. By understanding how MLLMs interact with different CAPTCHA designs, developers and platform operators can enhance the robustness of their automated defenses, making this research essential for the evolution of web security measures.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers will find this paper particularly interesting as it addresses a pressing challenge in the security landscape: the effectiveness of CAPTCHAs against advanced AI systems. The study not only highlights vulnerabilities in widely used security mechanisms but also contributes to the field by proposing data-backed strategies for enhancing these systems. As AI capabilities continue to evolve, understanding their implications for security protocols is crucial in safeguarding online interactions.

📚 Read the Full Paper