
Fairness Testing in Retrieval-Augmented Generation: How Small Perturbations Reveal Bias in Small Language Models

Authors: Matheus Vinicius da Silva de Oliveira, Jonathan de Andrade Silva, Awdren de Lima Fontao

Published: 2025-09-30

arXiv ID: 2509.26584v1

Added to Library: 2025-12-08 18:01 UTC

📄 Abstract

Large Language Models (LLMs) are widely used across multiple domains but continue to raise concerns regarding security and fairness. Beyond known attack vectors such as data poisoning and prompt injection, LLMs are also vulnerable to fairness bugs. These refer to unintended behaviors influenced by sensitive demographic cues (e.g., race or sexual orientation) that should not affect outcomes. Another key issue is hallucination, where models generate plausible yet false information. Retrieval-Augmented Generation (RAG) has emerged as a strategy to mitigate hallucinations by combining external retrieval with text generation. However, its adoption raises new fairness concerns, as the retrieved content itself may surface or amplify bias. This study conducts fairness testing through metamorphic testing (MT), introducing controlled demographic perturbations in prompts to assess fairness in sentiment analysis performed by three Small Language Models (SLMs) hosted on HuggingFace (Llama-3.2-3B-Instruct, Mistral-7B-Instruct-v0.3, and Llama-3.1-Nemotron-8B), each integrated into a RAG pipeline. Results show that minor demographic variations can break up to one third of metamorphic relations (MRs). A detailed analysis of these failures reveals a consistent bias hierarchy, with perturbations involving racial cues being the predominant cause of the violations. In addition to offering a comparative evaluation, this work reinforces that the retrieval component in RAG must be carefully curated to prevent bias amplification. The findings serve as a practical alert for developers, testers and small organizations aiming to adopt accessible SLMs without compromising fairness or reliability.
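The metamorphic testing setup described in the abstract can be sketched in a few lines: build prompt pairs that differ only in a demographic cue, and treat an unchanged sentiment label as the metamorphic relation (MR). The sketch below uses a hypothetical `classify_sentiment` stub in place of the paper's SLM + RAG pipeline; the function name, prompt template, and cue list are illustrative assumptions, not the authors' artifacts.

```python
def classify_sentiment(text: str) -> str:
    # Placeholder for the real SLM + RAG pipeline call.
    # This stub is deliberately demographic-blind, so every MR holds.
    return "positive" if "great" in text.lower() else "neutral"

def perturb(template: str, cue: str) -> str:
    """Instantiate a prompt template with a (possibly demographic) cue."""
    return template.format(person=cue)

def mr_holds(template: str, baseline: str, variant: str) -> bool:
    """MR: swapping only the demographic cue must not change the label."""
    return (classify_sentiment(perturb(template, baseline))
            == classify_sentiment(perturb(template, variant)))

# Hypothetical example prompt and perturbation cues.
template = "The {person} said the service was great."
cues = ["Black customer", "white customer", "gay customer"]
violations = [c for c in cues if not mr_holds(template, "customer", c)]
print(f"{len(violations)} MR violation(s)")  # prints "0 MR violation(s)" for this stub
```

In the paper's experiments the classifier is a real SLM answering through a RAG pipeline, and up to one third of such MRs were violated, with race-related cues the most frequent cause.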

🔍 Key Points

  • Applies metamorphic testing (MT) to fairness evaluation: controlled demographic perturbations (e.g., race or sexual orientation cues) are injected into sentiment-analysis prompts, and an unchanged output is the expected metamorphic relation (MR).
  • Evaluates three HuggingFace-hosted Small Language Models (Llama-3.2-3B-Instruct, Mistral-7B-Instruct-v0.3, and Llama-3.1-Nemotron-8B), each integrated into a Retrieval-Augmented Generation (RAG) pipeline.
  • Finds that minor demographic variations can break up to one third of MRs, showing that cues that should be semantically irrelevant can materially shift model outputs.
  • A detailed failure analysis reveals a consistent bias hierarchy across models, with perturbations involving racial cues being the predominant cause of MR violations.
  • Demonstrates that the retrieval component of RAG can surface or amplify bias, reinforcing the need to carefully curate retrieved content.

💡 Why This Paper Matters

This paper is significant because it shows that fairness bugs persist, and can even be amplified, when accessible SLMs are deployed in RAG pipelines, a configuration increasingly adopted by small organizations to mitigate hallucinations at low cost. By applying metamorphic testing with controlled demographic perturbations, it provides a systematic, comparative fairness evaluation of three popular open models and a practical alert for developers and testers: adopting lightweight, affordable models must not come at the cost of fairness or reliability.

🎯 Why It's Interesting for AI Security Researchers

For AI security researchers, this paper frames fairness bugs as a vulnerability class alongside established attack vectors such as data poisoning and prompt injection: small, adversarially chosen demographic perturbations reliably break metamorphic relations in SLM-based sentiment analysis. It also highlights the RAG retrieval component as an underexamined attack and bias-amplification surface, since retrieved content can itself introduce or reinforce biased behavior. The metamorphic testing methodology offers a reusable, oracle-free way to probe deployed pipelines for such failures before release.

📚 Read the Full Paper