← Back to Library

Clouding the Mirror: Stealthy Prompt Injection Attacks Targeting LLM-based Phishing Detection

Authors: Takashi Koide, Hiroki Nakano, Daiki Chiba

Published: 2026-02-05

arXiv ID: 2602.05484v1

Added to Library: 2026-02-06 03:02 UTC

Red Teaming

📄 Abstract

Phishing sites continue to grow in volume and sophistication. Recent work leverages large language models (LLMs) to analyze URLs, HTML, and rendered content to decide whether a website is a phishing site. While these approaches are promising, LLMs are inherently vulnerable to prompt injection (PI). Because attackers can fully control various elements of phishing sites, this creates the potential for PI that exploits the perceptual asymmetry between LLMs and humans: instructions imperceptible to end users can still be parsed by the LLM and can stealthily manipulate its judgment. The specific risks of PI in phishing detection and effective mitigation strategies remain largely unexplored. This paper presents the first comprehensive evaluation of PI against multimodal LLM-based phishing detection. We introduce a two-dimensional taxonomy, defined by Attack Techniques and Attack Surfaces, that captures realistic PI strategies. Using this taxonomy, we implement diverse attacks and empirically study several representative LLM-based detection systems. The results show that phishing detection with state-of-the-art models such as GPT-5 remains vulnerable to PI. We then propose InjectDefuser, a defense framework that combines prompt hardening, allowlist-based retrieval augmentation, and output validation. Across multiple models, InjectDefuser significantly reduces attack success rates. Our findings clarify the PI risk landscape and offer practical defenses that improve the reliability of next-generation phishing countermeasures.

🔍 Key Points

  • The study introduces a comprehensive taxonomy for prompt injection (PI) attacks, categorizing them by Attack Techniques and Attack Surfaces, providing a structured framework for understanding the risks associated with LLM-based phishing detection.
  • It provides empirical evidence that contemporary LLMs, including advanced models like GPT-5, are susceptible to PI attacks even in sophisticated phishing detection systems, highlighting a critical gap in their application for cybersecurity.
  • The authors propose InjectDefuser, a robust defense framework that integrates prompt hardening, allowlist-based retrieval augmentation, and output validation, significantly decreasing the success rates of PI attacks across various LLM models.
  • The paper presents extensive evaluations demonstrating that existing LLM-based systems are vulnerable to multiple PI strategies embedded in website HTML and URLs, prompting insights into mitigation strategies appropriate for emerging threats.
  • Case studies illustrate specific attack success stories, offering insights into the mechanisms of failure and how they can inform the design of more resilient detection frameworks.

💡 Why This Paper Matters

This paper is pivotal in addressing the emerging threat of prompt injection attacks against LLM-based phishing detection systems. By presenting a structured taxonomy and formulating a practical defense framework, it lays a foundation for enhancing security in AI applications, particularly in the cybersecurity domain where phishing continues to be a major concern.

🎯 Why It's Interesting for AI Security Researchers

The research is especially relevant to AI security researchers as it uncovers vulnerabilities in widely adopted AI models and provides actionable defense mechanisms. The findings drive awareness of prompt injection as a significant attack vector, encouraging further exploration and innovation in developing robust AI systems resistant to adversarial attacks.

📚 Read the Full Paper