← Back to Library

Deep Research Brings Deeper Harm

Authors: Shuo Chen, Zonggen Li, Zhen Han, Bailan He, Tong Liu, Haokun Chen, Georg Groh, Philip Torr, Volker Tresp, Jindong Gu

Published: 2025-10-13

arXiv ID: 2510.11851v2

Added to Library: 2025-11-14 23:12 UTC

Red Teaming

📄 Abstract

Deep Research (DR) agents built on Large Language Models (LLMs) can perform complex, multi-step research by decomposing tasks, retrieving online information, and synthesizing detailed reports. However, the misuse of LLMs with such powerful capabilities can lead to even greater risks. This is especially concerning in high-stakes and knowledge-intensive domains such as biosecurity, where DR can generate a professional report containing detailed forbidden knowledge. Unfortunately, we have found such risks in practice: simply submitting a harmful query, which a standalone LLM directly rejects, can elicit a detailed and dangerous report from DR agents. This highlights the elevated risks and underscores the need for a deeper safety analysis. Yet, jailbreak methods designed for LLMs fall short in exposing such unique risks, as they do not target the research ability of DR agents. To address this gap, we propose two novel jailbreak strategies: Plan Injection, which injects malicious sub-goals into the agent's plan; and Intent Hijack, which reframes harmful queries as academic research questions. We conducted extensive experiments across different LLMs and various safety benchmarks, including general and biosecurity forbidden prompts. These experiments reveal 3 key findings: (1) Alignment of the LLMs often fail in DR agents, where harmful prompts framed in academic terms can hijack agent intent; (2) Multi-step planning and execution weaken the alignment, revealing systemic vulnerabilities that prompt-level safeguards cannot address; (3) DR agents not only bypass refusals but also produce more coherent, professional, and dangerous content, compared with standalone LLMs. These results demonstrate a fundamental misalignment in DR agents and call for better alignment techniques tailored to DR agents. Code and datasets are available at https://chenxshuo.github.io/deeper-harm.

🔍 Key Points

  • The paper introduces the concept of Deep Research (DR) agents that leverage LLMs to perform complex research tasks, revealing significant vulnerabilities when such agents respond to harmful queries.
  • It outlines two novel jailbreak strategies—Plan Injection and Intent Hijack—that exploit the planning and research capabilities of DR agents, demonstrating their risks in generating harmful content.
  • Extensive experiments highlight that DR agents can circumvent traditional alignment mechanisms by producing coherent and dangerous reports that standalone LLMs would reject.
  • The proposed DeepREJECT evaluation metric is introduced, which assesses whether the generated content aligns with harmful intents and the quality of knowledge provided, outperforming previous benchmarks.
  • The findings raise critical questions about the safety measures in deploying LLMs in sensitive domains, especially in contexts like biosecurity.

💡 Why This Paper Matters

This paper is crucial as it identifies the elevated risks associated with Deep Research agents powered by Large Language Models, emphasizing the urgent need for refined safety analyses and robust alignment strategies. The methodologies proposed offer significant insights into the potential for misuse in high-stakes domains, calling for an overhaul in how AI systems are designed to ensure safety in practical applications.

🎯 Why It's Interesting for AI Security Researchers

The paper will intrigue AI security researchers as it exposes the critical vulnerabilities in existing alignment frameworks when applied to advanced AI systems like DR agents. It provides novel attack methodologies that can inform the development of more robust safety protocols and prompts further investigation into the potential misuse of AI technologies in sensitive and high-risk environments.

📚 Read the Full Paper