Jailbreaking Leaves a Trace: Understanding and Detecting Jailbreak Attacks from Internal Representations of Large Language Models

Authors: Sri Durga Sai Sowmya Kadali, Evangelos E. Papalexakis

Published: 2026-02-12

arXiv ID: 2602.11495v1

Added to Library: 2026-02-13 03:00 UTC

Red Teaming

πŸ“„ Abstract

Jailbreaking large language models (LLMs) has emerged as a critical security challenge with the widespread deployment of conversational AI systems. Adversarial users exploit these models through carefully crafted prompts to elicit restricted or unsafe outputs, a phenomenon commonly referred to as Jailbreaking. Despite numerous proposed defense mechanisms, attackers continue to develop adaptive prompting strategies, and existing models remain vulnerable. This motivates approaches that examine the internal behavior of LLMs rather than relying solely on prompt-level defenses. In this work, we study jailbreaking from both security and interpretability perspectives by analyzing how internal representations differ between jailbreak and benign prompts. We conduct a systematic layer-wise analysis across multiple open-source models, including GPT-J, LLaMA, Mistral, and the state-space model Mamba, and identify consistent latent-space patterns associated with harmful inputs. We then propose a tensor-based latent representation framework that captures structure in hidden activations and enables lightweight jailbreak detection without model fine-tuning or auxiliary LLM-based detectors. We further demonstrate that the latent signals can be used to actively disrupt jailbreak execution at inference time. On an abliterated LLaMA-3.1-8B model, selectively bypassing high-susceptibility layers blocks 78% of jailbreak attempts while preserving benign behavior on 94% of benign prompts. This intervention operates entirely at inference time and introduces minimal overhead, providing a scalable foundation for achieving stronger coverage by incorporating additional attack distributions or more refined susceptibility thresholds. Our results provide evidence that jailbreak behavior is rooted in identifiable internal structures and suggest a complementary, architecture-agnostic direction for improving LLM security.
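The detection idea described in the abstract can be illustrated with a minimal sketch. The snippet below is not the paper's tensor-based framework; it substitutes a simple difference-of-means linear probe over per-layer hidden states, with random vectors standing in for activations that would, in practice, come from a real model's forward pass (e.g. with hidden-state outputs enabled). All names, dimensions, and the 0.8 shift magnitude are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for one layer's hidden states, shape (n_prompts, hidden_dim).
# In a real setting these would be extracted from the model for each prompt.
hidden_dim = 64
benign = rng.normal(0.0, 1.0, size=(100, hidden_dim))

# Assumption: jailbreak prompts shift activations along some latent direction,
# as the paper's layer-wise analysis suggests.
shift = rng.normal(0.0, 1.0, size=hidden_dim)
jailbreak = rng.normal(0.0, 1.0, size=(100, hidden_dim)) + 0.8 * shift

def fit_probe(pos, neg):
    """Difference-of-means direction plus a midpoint threshold:
    a minimal linear probe, not the paper's tensor decomposition."""
    direction = pos.mean(axis=0) - neg.mean(axis=0)
    direction /= np.linalg.norm(direction)
    threshold = 0.5 * ((pos @ direction).mean() + (neg @ direction).mean())
    return direction, threshold

def is_jailbreak(x, direction, threshold):
    # Flag a prompt whose projection exceeds the learned threshold.
    return float(x @ direction) > threshold

direction, threshold = fit_probe(jailbreak, benign)
acc_jb = np.mean([is_jailbreak(x, direction, threshold) for x in jailbreak])
acc_benign = np.mean([not is_jailbreak(x, direction, threshold) for x in benign])
print(f"jailbreak detection rate: {acc_jb:.2f}, benign pass rate: {acc_benign:.2f}")
```

Because the probe is a single dot product and threshold per layer, it adds negligible inference overhead, which matches the lightweight-detection goal stated in the abstract.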

πŸ” Key Points

  • The paper identifies distinct latent-space patterns in the internal representations of LLMs that are indicative of jailbreak prompts, providing an innovative approach for detection without relying solely on external prompt-level evaluations.
  • It introduces a tensor-based latent representation framework that captures the structure of hidden activations, enabling lightweight jailbreak detection and mitigation at inference time without model fine-tuning or auxiliary LLM-based detectors.
  • An experimental evaluation demonstrates that selectively bypassing high-susceptibility layers blocks 78% of jailbreak attempts while preserving benign behavior on 94% of benign prompts.
  • The findings suggest that jailbreak behavior is rooted in identifiable internal structures, offering a complementary approach to improving LLM security that could be applied across various architectures.
  • The authors provide a systematic layer-wise analysis across multiple open-source models, highlighting the necessity for future defenses to account for the evolving nature of jailbreaking strategies.
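The layer-bypass intervention from the points above can be sketched in toy form. The residual-stream "layers", the susceptibility scores, and the 0.8 threshold below are all hypothetical placeholders; the paper derives susceptibility from its latent-space analysis, whereas here the scores are hard-coded to show the control flow of skipping flagged layers at inference time.

```python
import numpy as np

rng = np.random.default_rng(1)
hidden_dim = 16
n_layers = 6

# Random matrices standing in for a model's transformer blocks,
# applied as residual updates: h <- h + W @ h.
weights = [rng.normal(0, 0.1, size=(hidden_dim, hidden_dim)) for _ in range(n_layers)]

# Hypothetical per-layer susceptibility scores; layers scoring above the
# threshold are bypassed entirely during the forward pass.
susceptibility = [0.1, 0.2, 0.9, 0.15, 0.85, 0.1]
THRESHOLD = 0.8

def forward(h, bypass_high_susceptibility=False):
    for W, s in zip(weights, susceptibility):
        if bypass_high_susceptibility and s > THRESHOLD:
            continue  # skip this layer's residual update at inference time
        h = h + W @ h
    return h

h0 = rng.normal(size=hidden_dim)
full = forward(h0)
patched = forward(h0, bypass_high_susceptibility=True)
print("representations differ:", not np.allclose(full, patched))
```

The intervention needs no weight updates or fine-tuning: it only changes which layers execute, which is why it can run entirely at inference time with minimal overhead.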

πŸ’‘ Why This Paper Matters

This paper is significant as it presents a novel method for understanding and mitigating jailbreak attacks on large language models through a detailed analysis of internal representations. By leveraging these representations, the proposed approach offers a more resilient defense mechanism that is both efficient and broadly applicable to various model architectures. The insights gained from understanding where adversarial signals propagate within the model can lead to improved security and robustness of conversational AI systems.

🎯 Why It's Interesting for AI Security Researchers

This paper will be of particular interest to AI security researchers because it tackles a critical vulnerability in LLMs: the ability of adversarial prompts to elicit restricted or unsafe outputs. The presented methodologies provide a new angle on defending against jailbreaking, an area that is becoming increasingly relevant as LLM applications proliferate. With its focus on internal representations, this work encourages further exploration into model interpretability and security, which are essential for developing safe AI systems.

πŸ“š Read the Full Paper