
Jailbreaking in the Haystack

Authors: Rishi Rajesh Shah, Chen Henry Wu, Shashwat Saxena, Ziqian Zhong, Alexander Robey, Aditi Raghunathan

Published: 2025-11-05

arXiv ID: 2511.04707v1

Added to Library: 2025-11-10 05:00 UTC

Red Teaming

📄 Abstract

Recent advances in long-context language models (LMs) have enabled million-token inputs, expanding their capabilities across complex tasks like computer-use agents. Yet, the safety implications of these extended contexts remain unclear. To bridge this gap, we introduce NINJA (short for Needle-in-haystack jailbreak attack), a method that jailbreaks aligned LMs by appending benign, model-generated content to harmful user goals. Critical to our method is the observation that the position of harmful goals plays an important role in safety. Experiments on the standard safety benchmark HarmBench show that NINJA significantly increases attack success rates across state-of-the-art open and proprietary models, including LLaMA, Qwen, Mistral, and Gemini. Unlike prior jailbreaking methods, our approach is low-resource, transferable, and less detectable. Moreover, we show that NINJA is compute-optimal -- under a fixed compute budget, increasing context length can outperform increasing the number of trials in best-of-N jailbreaking. These findings reveal that even benign long contexts -- when crafted with careful goal positioning -- introduce fundamental vulnerabilities in modern LMs.
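The core construction is simple enough to sketch. The snippet below is a minimal illustration, assuming hypothetical function and variable names and placeholder filler text; it is not the authors' implementation, which pads the prompt with benign content generated by the model itself.

```python
# Minimal sketch of assembling a NINJA-style long-context input.
# All names here (build_ninja_prompt, benign_passages, goal_position)
# are illustrative assumptions, not the paper's actual pipeline.

def build_ninja_prompt(goal: str,
                       benign_passages: list[str],
                       goal_position: str = "start") -> str:
    """Append benign filler text to a goal, placing the goal either at
    the start of the context (the attack-effective position reported in
    the paper) or at the end (reported to mitigate risk)."""
    haystack = "\n\n".join(benign_passages)
    if goal_position == "start":
        return f"{goal}\n\n{haystack}"
    return f"{haystack}\n\n{goal}"


if __name__ == "__main__":
    # Placeholder filler standing in for model-generated benign passages.
    filler = ["A long, harmless essay about caring for houseplants."] * 3
    prompt = build_ninja_prompt("<goal redacted>", filler, goal_position="start")
    print(prompt[:120])
```

The only moving parts are the source of the benign padding and where the goal sits in the resulting context, which is what makes the attack low-resource and hard to detect.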

🔍 Key Points

  • Introduction of NINJA, an effective jailbreak method that exploits long-context capabilities of LMs by appending benign content to harmful goals, significantly increasing attack success rates.
  • Identification of goal positioning as a critical factor: placing the harmful goal at the beginning of the context increases attack success, while placing it at the end mitigates risk.
  • Demonstration that NINJA is compute-optimal: under a fixed compute budget, lengthening the context yields better outcomes than running more best-of-N trials (a rough budget-accounting sketch follows this list).
  • Empirical analysis on HarmBench shows that models such as LLaMA, Qwen, Mistral, and Gemini become less safety-compliant as context length grows, without a corresponding drop in capability.
  • The findings highlight fundamental vulnerabilities in modern LMs and raise concerns about the implications of long-context use in practical applications.
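The compute-optimality comparison can be made concrete with some rough accounting, under the simplifying assumption that attack cost scales with the total number of tokens processed. The figures below are placeholders for illustration, not results from the paper.

```python
# Rough budget accounting: with a fixed token budget, an attacker can run
# many short best-of-N attempts or fewer attempts with long, benign-padded
# contexts. All numbers are illustrative assumptions.

TOKEN_BUDGET = 1_000_000      # total tokens the attacker can afford to process
BASE_PROMPT_TOKENS = 500      # tokens in the unpadded harmful goal
PADDING_TOKENS = 99_500       # benign filler added per NINJA-style attempt


def affordable_trials(context_tokens: int, budget: int = TOKEN_BUDGET) -> int:
    """Number of independent attempts affordable at a given context length."""
    return budget // context_tokens


# Best-of-N with short prompts: many cheap attempts.
print(affordable_trials(BASE_PROMPT_TOKENS))                    # 2000 attempts

# NINJA-style padding: far fewer attempts, each with a long benign context.
print(affordable_trials(BASE_PROMPT_TOKENS + PADDING_TOKENS))   # 10 attempts
```

The paper's finding is that, despite allowing far fewer attempts, spending the budget on longer contexts can achieve a higher attack success rate than spending it on additional short trials.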

💡 Why This Paper Matters

This paper presents important findings about the safety risks that accompany ever-longer context windows in language models. By demonstrating the efficacy of the NINJA attack, it makes the case for stronger safety protocols and defenses in the deployment of advanced AI systems, supporting responsible use and reducing exploitable vulnerabilities.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers will find this paper pertinent because it sheds light on a new attack vector against large language models (LLMs). The NINJA attack is stealthy and efficient, and it highlights an aspect of model design whose safety implications are often overlooked: how models handle very long contexts. As LLMs become more integral to applications, understanding these vulnerabilities is crucial for developing stronger defenses and safety mechanisms.

📚 Read the Full Paper