← Back to Library

Hidden-in-Plain-Text: A Benchmark for Social-Web Indirect Prompt Injection in RAG

Authors: Haoze Guo, Ziqi Wei

Published: 2026-01-16

arXiv ID: 2601.10923v2

Added to Library: 2026-01-22 03:00 UTC

Red Teaming

📄 Abstract

Retrieval-augmented generation (RAG) systems put more and more emphasis on grounding their responses in user-generated content found on the Web, amplifying both their usefulness and their attack surface. Most notably, indirect prompt injection and retrieval poisoning attack the web-native carriers that survive ingestion pipelines and are very concerning. We provide OpenRAG-Soc, a compact, reproducible benchmark-and-harness for web-facing RAG evaluation under these threats, in a discrete data package. The suite combines a social corpus with interchangeable sparse and dense retrievers and deployable mitigations - HTML/Markdown sanitization, Unicode normalization, and attribution-gated answered. It standardizes end-to-end evaluation from ingestion to generation and reports attacks time of one of the responses at answer time, rank shifts in both sparse and dense retrievers, utility and latency, allowing for apples-to-apples comparisons across carriers and defenses. OpenRAG-Soc targets practitioners who need fast, and realistic tests to track risk and harden deployments.

🔍 Key Points

  • Introduction of OpenRAG-Soc, a reproducible benchmark and test harness to evaluate RAG systems against indirect prompt injection (IPI) and retrieval poisoning attacks.
  • Adoption of practical defenses such as HTML/Markdown sanitization, Unicode normalization, and attribution-gated prompting to mitigate security risks.
  • Standardization of metrics for evaluating RAG systems, focusing on attack success rate, retrieval rank shifts, and utility and latency of defenses, facilitating apples-to-apples comparison.
  • In-depth results revealing the effectiveness of various defenses, demonstrating substantial reductions in instruction-following rates of injected prompts across diverse carriers.
  • Empirical evidence on the performance of retrieval functions under attack conditions, highlighting the interplay of sanitization and normalization in reducing risks.

💡 Why This Paper Matters

The paper presents a crucial step forward in enhancing the security of retrieval-augmented generation systems against common web-based threats. With the OpenRAG-Soc benchmark, this research equips practitioners with standardized tools to evaluate and improve their systems against indirect prompt injection and retrieval poisoning, ultimately leading to safer AI deployments.

🎯 Why It's Interesting for AI Security Researchers

This paper is highly relevant to AI security researchers as it tackles urgent concerns regarding the vulnerabilities of large language models and their integration with web data. By providing a structured framework for testing these risks, it contributes significantly to the field of AI safety, encouraging further investigation into effective defense mechanisms against manipulative attacks on AI systems. Additionally, its findings on practical implementations of security measures would stimulate discussions and future work in mitigating similar threats across various AI applications.

📚 Read the Full Paper