When Safety Becomes a Vulnerability: Exploiting LLM Alignment Homogeneity for Transferable Blocking in RAG

Authors: Junchen Li, Chao Qi, Rongzheng Wang, Qizhi Chen, Liang Xu, Di Liang, Bob Simons, Shuang Liang

Published: 2026-03-04

arXiv ID: 2603.03919v1

Added to Library: 2026-03-05 04:00 UTC

Safety

📄 Abstract

Retrieval-Augmented Generation (RAG) enhances the capabilities of large language models (LLMs) by incorporating external knowledge, but its reliance on potentially poisonable knowledge bases introduces new availability risks. Attackers can inject documents that cause LLMs to refuse benign queries, a class of attacks known as blocking attacks. Prior blocking attacks relying on adversarial suffixes or explicit instruction injection are increasingly ineffective against modern safety-aligned LLMs. We observe that safety-aligned LLMs exhibit heightened sensitivity to query-relevant risk signals, causing alignment mechanisms designed for harm prevention to become a source of exploitable refusal. Moreover, mainstream alignment practices share overlapping risk categories and refusal criteria, a phenomenon we term alignment homogeneity, which enables restricted risk context constructed on an accessible LLM to transfer across LLMs. Based on this insight, we propose TabooRAG, a transferable blocking attack framework operating under a strict black-box setting. An attacker can generate a single retrievable blocking document per query by optimizing against a surrogate LLM in an accessible RAG environment, then transfer it directly to an unknown target RAG system without any access to the target model. We further introduce a query-aware strategy library that reuses previously effective strategies to improve optimization efficiency. Experiments across 7 modern LLMs and 3 datasets demonstrate that TabooRAG achieves stable cross-model transferability and state-of-the-art blocking success rates, reaching up to 96% on GPT-5.2. Our findings show that increasingly standardized safety alignment across modern LLMs creates a shared and transferable attack surface in RAG systems, revealing a need for improved defenses.
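The abstract mentions a query-aware strategy library that caches previously effective strategies to speed up optimization. The paper's concrete design is not reproduced here; the following is a minimal, hypothetical sketch of the idea (the topic-keying heuristic, class name, and method names are all illustrative assumptions):

```python
from collections import defaultdict


class StrategyLibrary:
    """Hypothetical query-aware cache: maps a coarse query topic to
    strategies that previously produced successful blocking documents,
    so similar future queries can try those strategies first."""

    def __init__(self) -> None:
        self._hits: dict[str, list[str]] = defaultdict(list)

    def topic(self, query: str) -> str:
        # Crude topic key: first content word of the query (illustrative only;
        # a real system would use embeddings or a classifier).
        words = [w for w in query.lower().split() if len(w) > 3]
        return words[0] if words else "misc"

    def record(self, query: str, strategy: str) -> None:
        """Remember a strategy that worked for this query's topic."""
        self._hits[self.topic(query)].append(strategy)

    def suggest(self, query: str) -> list[str]:
        """Return strategies that worked on similar queries, if any."""
        return list(self._hits.get(self.topic(query), []))
```

Reusing cached strategies this way would cut down on surrogate-model queries, which matches the abstract's stated motivation of improving optimization efficiency.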

🔍 Key Points

  • Introduction of TabooRAG, a novel transferable blocking attack framework that exploits the alignment homogeneity of LLMs (large language models) to induce refusals in RAG (retrieval-augmented generation) systems.
  • Identification of 'alignment homogeneity', the overlap in risk categories and refusal criteria across safety-aligned LLMs, as the critical factor that lets attackers craft refusal-triggering content on one model and transfer it to others without direct access.
  • Implementation of a bi-objective optimization approach that ensures crafted documents are both retrievable and capable of inducing refusals, thus maximizing the effectiveness of the attack.
  • Demonstration through extensive experiments across 7 modern LLMs that TabooRAG achieves a high attack success rate (up to 96% on GPT-5.2) while remaining efficient and effective under black-box conditions.
  • Exposure of the limitations of existing defenses against TabooRAG, highlighting a pressing need for improved defensive strategies for RAG systems.
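The bi-objective optimization in the key points balances two goals: the injected document must rank highly for the victim query (retrievability) and must trigger refusal once retrieved. The paper's actual objective is not given on this page; the sketch below illustrates the general shape with placeholder scorers (bag-of-words cosine in place of a dense retriever, and a keyword heuristic in place of querying a surrogate LLM):

```python
import math
from collections import Counter


def cosine_sim(a: str, b: str) -> float:
    """Bag-of-words cosine similarity; a stand-in for a dense retriever's
    query-document relevance score."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0


def refusal_score(doc: str) -> float:
    """Placeholder for measuring whether a surrogate LLM refuses when this
    document is in context. Here: fraction of illustrative risk-signal
    phrases present (purely a toy heuristic)."""
    signals = ["restricted", "prohibited", "harmful"]
    return sum(s in doc.lower() for s in signals) / len(signals)


def score(query: str, doc: str, alpha: float = 0.5) -> float:
    """Bi-objective score: weighted sum of retrievability and
    refusal induction (the weighting scheme is an assumption)."""
    return alpha * cosine_sim(query, doc) + (1 - alpha) * refusal_score(doc)


def best_candidate(query: str, candidates: list[str]) -> str:
    """Pick the candidate document that best trades off both objectives."""
    return max(candidates, key=lambda d: score(query, d))
```

A candidate that shares vocabulary with the query (so it is retrieved) while carrying risk signals (so the aligned model refuses) scores highest under both terms, which is the trade-off the attack exploits.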

💡 Why This Paper Matters

This paper matters because it exposes critical vulnerabilities in the safety mechanisms of large language models, presenting a clear threat to the availability of retrieval-augmented generation systems. The introduction of TabooRAG not only shows how current alignment practices can be exploited but also underscores the need for more resilient AI systems and robust defense strategies. By unveiling these vulnerabilities, the research paves the way for future work on the safety and reliability of AI applications.

🎯 Why It's Interesting for AI Security Researchers

This paper would be of particular interest to AI security researchers due to its focus on the security implications of alignment strategies in language models. By exposing the weaknesses inherent in current safety mechanisms, the findings challenge conventional approaches to AI safety and invite further investigation into developing more robust safety measures. Moreover, the methodology and insights provided within this work can inform future research on adversarial attacks and defense mechanisms in machine learning systems.

📚 Read the Full Paper