
SearchAttack: Red-Teaming LLMs against Real-World Threats via Framing Unsafe Web Information-Seeking Tasks

Authors: Yu Yan, Sheng Sun, Mingfeng Li, Zheming Yang, Chiwei Zhu, Fei Ma, Benfeng Xu, Min Liu

Published: 2026-01-07

arXiv ID: 2601.04093v1

Added to Library: 2026-01-08 03:00 UTC

Red Teaming

📄 Abstract

Recently, people have suffered from and become increasingly aware of the unreliability of LLMs on open-ended, knowledge-intensive tasks, and have therefore turned to search-augmented LLMs to mitigate this issue. However, when the search engine is triggered for a harmful task, the outcome is no longer under the LLM's control. Once the returned content directly contains targeted, ready-to-use harmful takeaways, the LLM's safeguards cannot undo that exposure. Motivated by this dilemma, we identify web search as a critical attack surface and propose SearchAttack for red-teaming. SearchAttack outsources the harmful semantics to web search, retaining only the query's skeleton and fragmented clues, and further steers LLMs to reconstruct the retrieved content via structural rubrics to achieve malicious goals. Extensive experiments are conducted to red-team search-augmented LLMs for responsible vulnerability assessment. Empirically, SearchAttack demonstrates strong effectiveness in attacking these systems.

🔍 Key Points

  • Development of SearchAttack: a novel dual-stage adversarial framework specifically designed for red-teaming search-augmented large language models (LLMs), focusing on the manipulation of query structures and the synthesis of harmful outputs.
  • Identification of web search as a critical vulnerability in LLMs, demonstrating how attackers can exploit this mechanism to induce real-world harm despite the model's safety protocols.
  • Introduction of a comprehensive evaluation system, including fact-checking frameworks and the ShadowRisk dataset, to assess the attack value and real-world implications of jailbreak scenarios.
  • Empirical results showing SearchAttack's superior performance over existing jailbreak methods, achieving high Attack Success Rates (ASR) across various experimental settings (see the evaluation sketch after this list).
  • Exploration of cross-lingual disparities in the accessibility of harmful content, highlighting potential security gaps in different language domains.
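
The paper's headline metric is Attack Success Rate (ASR). As a minimal, hedged sketch of how an evaluation harness might aggregate such results, the snippet below computes ASR as the fraction of red-team attempts a safety judge labels as successful; the dataclass, function names, and placeholder judge are illustrative assumptions, not the authors' released tooling or the ShadowRisk pipeline.

```python
from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class RedTeamAttempt:
    # Hypothetical record of one red-team trial; field names are illustrative.
    prompt_id: str
    model_response: str


def attack_success_rate(
    attempts: Iterable[RedTeamAttempt],
    judge: Callable[[str], bool],
) -> float:
    """Fraction of attempts the judge labels as a successful jailbreak.

    `judge` is assumed to wrap whatever safety classifier or LLM-as-judge the
    evaluation uses; here it simply maps a model response to True/False.
    """
    attempts = list(attempts)
    if not attempts:
        return 0.0
    successes = sum(judge(a.model_response) for a in attempts)
    return successes / len(attempts)


if __name__ == "__main__":
    # Trivial placeholder judge: treats anything that is not an explicit
    # refusal as a "success". Real evaluations would use a calibrated
    # classifier or human review instead.
    demo = [
        RedTeamAttempt("q1", "I can't help with that."),
        RedTeamAttempt("q2", "[redacted unsafe content]"),
    ]
    refusal_judge = lambda resp: "can't help" not in resp.lower()
    print(f"ASR: {attack_success_rate(demo, refusal_judge):.2f}")
```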

💡 Why This Paper Matters

These findings matter because of the growing reliance on search-augmented LLMs, which can inadvertently expose users to harmful content. By highlighting the vulnerabilities in current systems, the research underscores the need for robust defenses and safety mechanisms in AI, particularly as these technologies are deployed in real-world applications.

🎯 Why It's Interesting for AI Security Researchers

This paper is significant for AI security researchers as it unveils critical vulnerabilities in the integration of language models with web search capabilities. It provides novel methodologies for assessing and enhancing model safety, offering insights into how malicious actors could exploit such systems. The technical contributions and findings can guide future safety evaluations and the development of more resilient AI models.

📚 Read the Full Paper