AISA: Awakening Intrinsic Safety Awareness in Large Language Models against Jailbreak Attacks

Authors: Weiming Song, Xuan Xie, Ruiping Yin

Published: 2026-02-14

arXiv ID: 2602.13547v1

Added to Library: 2026-02-17 03:01 UTC

📄 Abstract

Large language models (LLMs) remain vulnerable to jailbreak prompts that elicit harmful or policy-violating outputs, while many existing defenses rely on expensive fine-tuning, intrusive prompt rewriting, or external guardrails that add latency and can degrade helpfulness. We present AISA, a lightweight, single-pass defense that activates safety behaviors already latent inside the model rather than treating safety as an add-on. AISA first localizes intrinsic safety awareness via spatiotemporal analysis and shows that intent-discriminative signals are broadly encoded, with especially strong separability appearing in the scaled dot-product outputs of specific attention heads near the final structural tokens before generation. Using a compact set of automatically selected heads, AISA extracts an interpretable prompt-risk score with minimal overhead, achieving detector-level performance competitive with strong proprietary baselines on small (7B) models. AISA then performs logits-level steering: it modulates the decoding distribution in proportion to the inferred risk, ranging from normal generation for benign prompts to calibrated refusal for high-risk requests -- without changing model parameters, adding auxiliary modules, or requiring multi-pass inference. Extensive experiments spanning 13 datasets, 12 LLMs, and 14 baselines demonstrate that AISA improves robustness and transfer while preserving utility and reducing false refusals, enabling safer deployment even for weakly aligned or intentionally risky model variants.
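The two mechanisms the abstract describes — an interpretable prompt-risk score read from a small set of attention heads, and risk-proportional steering of the decoding distribution — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the linear-probe combination of head readouts, and the additive refusal bias (`alpha`) are all assumptions for the sake of the example.

```python
import math

def prompt_risk(head_scores, weights):
    """Combine readouts from the automatically selected attention heads
    into a scalar risk in (0, 1).

    `head_scores` stands in for the scaled dot-product outputs taken at
    the final structural tokens before generation; a simple linear probe
    plus sigmoid is assumed here.
    """
    z = sum(w * s for w, s in zip(weights, head_scores))
    return 1.0 / (1.0 + math.exp(-z))

def steer_logits(logits, refusal_token_ids, risk, alpha=8.0):
    """Modulate next-token logits in proportion to the inferred risk.

    Benign prompts (risk near 0) pass through essentially unchanged;
    high-risk prompts (risk near 1) receive a strong additive bias
    toward tokens that begin a calibrated refusal. No parameters are
    changed and no extra forward pass is needed.
    """
    steered = list(logits)
    for i in refusal_token_ids:
        steered[i] += alpha * risk
    return steered
```

In a real decoding loop, `steer_logits` would be applied to the model's next-token logits once per step, with the risk score computed a single time from the prompt's forward pass — consistent with the single-pass, parameter-free design the abstract claims.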

🔍 Key Points

  • Proposes AISA, a lightweight, single-pass jailbreak defense that activates safety behaviors already latent in the model instead of treating safety as an external add-on.
  • Localizes intrinsic safety awareness via spatiotemporal analysis, finding that intent-discriminative signals are broadly encoded and most separable in the scaled dot-product outputs of specific attention heads near the final structural tokens before generation.
  • Extracts an interpretable prompt-risk score from a compact, automatically selected set of heads with minimal overhead, achieving detector-level performance competitive with strong proprietary baselines even on small (7B) models.
  • Performs logits-level steering that modulates the decoding distribution in proportion to inferred risk — from normal generation for benign prompts to calibrated refusal for high-risk requests — without changing model parameters, adding auxiliary modules, or requiring multi-pass inference.
  • Evaluates across 13 datasets, 12 LLMs, and 14 baselines, showing improved robustness and transfer while preserving utility and reducing false refusals.

💡 Why This Paper Matters

This paper addresses a critical vulnerability in large language models: jailbreak prompts that elicit harmful or policy-violating outputs. Rather than relying on expensive fine-tuning, prompt rewriting, or external guardrails that add latency and degrade helpfulness, AISA shows that safety-relevant signals are already encoded inside the model and can be read out and acted on in a single pass. This shift — from bolting safety on to awakening safety that is already latent — offers a practical path to safer deployment, including for weakly aligned or intentionally risky model variants.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers will find this paper relevant because it tackles jailbreak attacks, one of the most persistent threats to deployed LLMs, with an unusually low-cost mechanism: an interpretable risk score derived from the model's own attention heads, coupled with risk-proportional logits steering. The spatiotemporal localization of intrinsic safety awareness is itself a useful interpretability result, and the breadth of evaluation (13 datasets, 12 LLMs, 14 baselines) provides a strong empirical reference point for future defenses. Understanding and mitigating these attack vectors is crucial for building trust in AI systems used in critical sectors like healthcare, finance, and governance.

📚 Read the Full Paper