← Back to Library

Mitigating Jailbreaks with Intent-Aware LLMs

Authors: Wei Jie Yeo, Ranjan Satapathy, Erik Cambria

Published: 2025-08-16

arXiv ID: 2508.12072v1

Added to Library: 2025-08-19 04:02 UTC

Red Teaming

📄 Abstract

Despite extensive safety-tuning, large language models (LLMs) remain vulnerable to jailbreak attacks via adversarially crafted instructions, reflecting a persistent trade-off between safety and task performance. In this work, we propose Intent-FT, a simple and lightweight fine-tuning approach that explicitly trains LLMs to infer the underlying intent of an instruction before responding. By fine-tuning on a targeted set of adversarial instructions, Intent-FT enables LLMs to generalize intent deduction to unseen attacks, thereby substantially improving their robustness. We comprehensively evaluate both parametric and non-parametric attacks across open-source and proprietary models, considering harmfulness from attacks, utility, over-refusal, and impact against white-box threats. Empirically, Intent-FT consistently mitigates all evaluated attack categories, with no single attack exceeding a 50\% success rate -- whereas existing defenses remain only partially effective. Importantly, our method preserves the model's general capabilities and reduces excessive refusals on benign instructions containing superficially harmful keywords. Furthermore, models trained with Intent-FT accurately identify hidden harmful intent in adversarial attacks, and these learned intentions can be effectively transferred to enhance vanilla model defenses.

🔍 Key Points

  • Introduction of Intent-FT: A novel fine-tuning approach that enables LLMs to understand the intent behind instructions before generating responses, enhancing robustness against jailbreak attacks.
  • Effective mitigation across various attack types: Intent-FT shows a significant reduction in attack success rates (sub-50%) against both parametric and non-parametric attack strategies, outperforming existing defenses.
  • Utility preservation: The method successfully balances safety and performance, minimizing over-refusals on benign instructions, thus maintaining the model's usability in practical applications.
  • Cross-model transferability: Intentions learned through Intent-FT can effectively enhance the defenses of vanilla models, showcasing the generalizability of the approach across different architectures and scenarios.
  • Comprehensive evaluation: The experimental results demonstrate Intent-FT's effectiveness and practical advantages in real-world applications, including open-source defense scenarios.

💡 Why This Paper Matters

The research presents a significant advancement in language model safety through the introduction of Intent-FT, which not only addresses the critical vulnerabilities of LLMs to jailbreak attacks but also preserves their functional utility. The comprehensive evaluation and practical implications of this technique mark a crucial step towards more secure, robust AI systems capable of understanding complex user intents without compromising performance.

🎯 Why It's Interesting for AI Security Researchers

This paper is particularly relevant to AI security researchers as it tackles a pressing issue: the vulnerability of LLMs to adversarial attacks, specifically jailbreaks. The proposed Intent-FT methodology provides a novel framework for enhancing model safety without sacrificing performance, which is of great importance for developing secure AI applications. Furthermore, the findings have implications for the design of future defenses and encourage further research into intention understanding in LLMs, making this work a foundational contribution in the field of AI safety.

📚 Read the Full Paper