โ† Back to Library

AMIA: Automatic Masking and Joint Intention Analysis Makes LVLMs Robust Jailbreak Defenders

Authors: Yuqi Zhang, Yuchun Miao, Zuchao Li, Liang Ding

Published: 2025-05-30

arXiv ID: 2505.24519v1

Added to Library: 2025-06-02 03:01 UTC

📄 Abstract

We introduce AMIA, a lightweight, inference-only defense for Large Vision-Language Models (LVLMs) that (1) Automatically Masks a small set of text-irrelevant image patches to disrupt adversarial perturbations, and (2) conducts joint Intention Analysis to uncover and mitigate hidden harmful intents before response generation. Without any retraining, AMIA improves defense success rates across diverse LVLMs and jailbreak benchmarks from an average of 52.4% to 81.7%, preserves general utility with only a 2% average accuracy drop, and incurs only modest inference overhead. Ablation confirms both masking and intention analysis are essential for a robust safety-utility trade-off.
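The masking step can be pictured as dropping the image patches least aligned with the text prompt, so that adversarial perturbations hidden in those patches are disrupted. The sketch below is a minimal illustration of that idea, not the authors' implementation: `patch_embeds`, `text_embed`, the CLIP-style cosine scoring, and the fixed mask ratio are all hypothetical stand-ins for whatever the paper actually uses.

```python
# Minimal sketch of AMIA-style patch masking (illustrative, not the paper's code).
# Assumes per-patch image embeddings and a text embedding from a CLIP-style
# encoder, and an image whose height/width are divisible by patch_size.
import torch

def mask_text_irrelevant_patches(image, patch_embeds, text_embed,
                                 patch_size=14, mask_ratio=0.1):
    """Zero out the image patches least similar to the text prompt.

    image:        (3, H, W) tensor
    patch_embeds: (num_patches, d) per-patch embeddings
    text_embed:   (d,) text embedding
    """
    # Cosine similarity between each patch and the text prompt.
    sims = torch.nn.functional.cosine_similarity(
        patch_embeds, text_embed.unsqueeze(0), dim=-1)

    # Select the least text-relevant patches (assumed fixed ratio here).
    num_mask = max(1, int(mask_ratio * sims.numel()))
    mask_idx = sims.argsort()[:num_mask]

    # Zero those patches in the pixel space.
    masked = image.clone()
    patches_per_row = image.shape[-1] // patch_size
    for idx in mask_idx:
        r, c = divmod(int(idx), patches_per_row)
        masked[:, r * patch_size:(r + 1) * patch_size,
                  c * patch_size:(c + 1) * patch_size] = 0.0
    return masked
```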

๐Ÿ” Key Points

  • Proposes AMIA, a lightweight, inference-only jailbreak defense for Large Vision-Language Models (LVLMs) that requires no retraining or fine-tuning.
  • Automatic Masking removes a small set of text-irrelevant image patches, disrupting adversarial perturbations hidden in the visual input (see the sketch above).
  • Joint Intention Analysis prompts the model to uncover and mitigate hidden harmful intents before a response is generated (see the sketch after this list).
  • Across diverse LVLMs and jailbreak benchmarks, AMIA raises the average defense success rate from 52.4% to 81.7%.
  • General utility is preserved with only a 2% average accuracy drop and modest inference overhead; ablations confirm that both masking and intention analysis are needed for a robust safety-utility trade-off.
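
The intention-analysis component can be read as a two-stage prompting step. The sketch below is a minimal illustration under assumptions: `lvlm_generate(image, prompt)` is a hypothetical wrapper around any LVLM's generation API, and the prompt wording is illustrative rather than the paper's exact template.

```python
# Minimal sketch of a joint intention-analysis step (illustrative, not the paper's code).
# `lvlm_generate` is a hypothetical callable wrapping an LVLM's image+text API.

INTENT_PROMPT = (
    "Before answering, analyze the underlying intention of the following "
    "request together with the image. If the intention is harmful, say so."
)

def respond_with_intention_analysis(lvlm_generate, masked_image, user_text):
    # Stage 1: have the model articulate the request's intention explicitly,
    # so hidden harmful intents surface before response generation.
    intention = lvlm_generate(masked_image,
                              f"{INTENT_PROMPT}\n\nRequest: {user_text}")
    # Stage 2: condition the final answer on the stated intention.
    final_prompt = (f"Request: {user_text}\n"
                    f"Analyzed intention: {intention}\n"
                    "Respond helpfully, refusing if the intention is harmful.")
    return lvlm_generate(masked_image, final_prompt)
```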

💡 Why This Paper Matters

This paper is significant because it shows that LVLMs can be hardened against jailbreak attacks without any retraining: AMIA operates purely at inference time, combining automatic image-patch masking with joint intention analysis. By lifting average defense success rates from 52.4% to 81.7% while costing only about 2% in general accuracy and modest inference overhead, it demonstrates a practical safety-utility trade-off that can be applied to already-deployed models.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers would be particularly interested in this paper because multimodal jailbreaks, where adversarial perturbations are hidden in images, are an increasingly common attack vector against LVLMs. AMIA's training-free, inference-only design makes it easy to evaluate and deploy as a baseline defense across different models, and the ablation results isolate how much each component, masking versus intention analysis, contributes to robustness.

📚 Read the Full Paper