Don't Walk the Line: Boundary Guidance for Filtered Generation

Authors: Sarah Ball, Andreas Haupt

Published: 2025-10-13

arXiv ID: 2510.11834v1

Added to Library: 2025-10-15 04:01 UTC

📄 Abstract

Generative models are increasingly paired with safety classifiers that filter harmful or undesirable outputs. A common strategy is to fine-tune the generator to reduce the probability of being filtered, but this can be suboptimal: it often pushes the model toward producing samples near the classifier's decision boundary, increasing both false positives and false negatives. We propose Boundary Guidance, a reinforcement learning fine-tuning method that explicitly steers generation away from the classifier's margin. On a benchmark of jailbreak and ambiguous prompts, Boundary Guidance improves both the safety and the utility of outputs, as judged by LLM-as-a-Judge evaluations. Comprehensive ablations across model scales and reward designs demonstrate the robustness of our approach.
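The abstract does not give the exact reward used by Boundary Guidance, but a minimal sketch of the underlying idea, penalizing samples whose safety-classifier score lands near the decision boundary instead of merely maximizing the probability of passing the filter, might look as follows. The margin width, penalty weight, and function name are illustrative assumptions rather than the paper's implementation.

```python
import torch

def boundary_guidance_reward(safe_logit: torch.Tensor,
                             margin: float = 0.2,
                             penalty_weight: float = 1.0) -> torch.Tensor:
    """Sketch of a margin-aware reward for RL fine-tuning (assumed design).

    Plain filter-avoidance training would maximize P(safe) alone; here an
    extra penalty is applied whenever the classifier score falls inside a
    band around its decision boundary at 0.5, so the generator is rewarded
    for confidently safe outputs rather than borderline ones.
    """
    p_safe = torch.sigmoid(safe_logit)                    # classifier's P(safe)
    base_reward = p_safe                                  # avoid being filtered
    distance_to_boundary = (p_safe - 0.5).abs()           # proximity to the margin
    margin_penalty = torch.clamp(margin - distance_to_boundary, min=0.0)
    return base_reward - penalty_weight * margin_penalty


# Example: three generations scored by the safety classifier. The confidently
# safe and confidently unsafe samples keep reward == P(safe); the borderline
# sample (logit 0.0) is additionally penalized for sitting on the margin.
logits = torch.tensor([3.0, 0.0, -3.0])
print(boundary_guidance_reward(logits))
```

In practice this scalar would feed into whatever policy-gradient fine-tuning loop the generator already uses; the key design choice is that the reward depends on distance to the classifier's margin, not only on whether the sample passes the filter.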

🔍 Key Points

  • Identifies a failure mode of the common strategy of fine-tuning a generator to reduce its probability of being filtered: the generator is pushed toward samples near the safety classifier's decision boundary, increasing both false positives and false negatives.
  • Proposes Boundary Guidance, a reinforcement learning fine-tuning method that explicitly steers generation away from the classifier's margin rather than merely past the filter.
  • Improves both the safety and the utility of outputs on a benchmark of jailbreak and ambiguous prompts, as judged by LLM-as-a-Judge evaluations.
  • Demonstrates the robustness of the approach through comprehensive ablations across model scales and reward designs.

💡 Why This Paper Matters

This paper is significant because pairing generative models with safety classifiers is a standard deployment pattern, and it shows that the obvious fine-tuning objective of simply avoiding the filter can backfire, concentrating outputs near the classifier's decision boundary where both false positives and false negatives become more likely. Boundary Guidance offers a straightforward corrective that improves safety and utility at the same time, with ablations across model scales and reward designs supporting its robustness.

🎯 Why It's Interesting for AI Security Researchers

This paper is particularly relevant for AI security researchers because safety classifiers are a primary line of defense against jailbreaks and other misuse, and the work characterizes how naive filter-avoidance fine-tuning can quietly undermine that defense by steering generations onto the classifier's margin. The margin-aware reward design and the LLM-as-a-Judge evaluations on jailbreak and ambiguous prompts provide a concrete template for assessing and hardening generator-classifier pipelines.

📚 Read the Full Paper

https://arxiv.org/abs/2510.11834v1