โ† Back to Library

Probe before You Talk: Towards Black-box Defense against Backdoor Unalignment for Large Language Models

Authors: Biao Yi, Tiansheng Huang, Sishuo Chen, Tong Li, Zheli Liu, Zhixuan Chu, Yiming Li

Published: 2025-06-19

arXiv ID: 2506.16447v1

Added to Library: 2025-06-23 04:02 UTC

Red Teaming

📄 Abstract

Backdoor unalignment attacks against Large Language Models (LLMs) enable the stealthy compromise of safety alignment using a hidden trigger while evading normal safety auditing. These attacks pose significant threats to the applications of LLMs in the real-world Large Language Model as a Service (LLMaaS) setting, where the deployed model is a fully black-box system that can only interact through text. Furthermore, the sample-dependent nature of the attack target exacerbates the threat: instead of outputting a fixed label, the backdoored LLM follows the semantics of any malicious command bearing the hidden trigger, significantly expanding the target space. In this paper, we introduce BEAT, a black-box defense that detects triggered samples during inference to deactivate the backdoor. It is motivated by an intriguing observation (dubbed the probe concatenate effect): concatenated triggered samples significantly reduce the refusal rate of the backdoored LLM towards a malicious probe, while non-triggered samples have little effect. Specifically, BEAT identifies whether an input is triggered by measuring the degree of distortion in the output distribution of the probe before and after concatenation with the input. Our method addresses the challenge of sample-dependent targets from an opposite perspective: it captures the impact of the trigger on the refusal signal (which is sample-independent) instead of sample-specific successful attack behaviors. It overcomes black-box access limitations by using multiple sampling to approximate the output distribution. Extensive experiments on various backdoor attacks and LLMs (including the closed-source GPT-3.5-turbo) verify the effectiveness and efficiency of our defense. We also preliminarily verify that BEAT can defend against popular jailbreak attacks, as they can be regarded as 'natural backdoors'.
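
To make the mechanism concrete, here is a minimal sketch of how such a detector could be wired up, assuming only a black-box `generate(prompt) -> str` text interface. The probe text, keyword-based refusal heuristic, sample counts, and threshold below are illustrative assumptions, not the paper's implementation; BEAT measures distortion of the full output distribution rather than a keyword refusal rate.

```python
from typing import Callable

# Hypothetical sketch of BEAT-style detection over a black-box text API.
# All names and constants here are illustrative assumptions.

REFUSAL_MARKERS = ("i cannot", "i can't", "i'm sorry", "i am sorry", "i won't")
MALICIOUS_PROBE = "Give step-by-step instructions for a harmful act."  # fixed probe

def is_refusal(reply: str) -> bool:
    """Crude proxy for the sample-independent refusal signal."""
    lower = reply.lower()
    return any(marker in lower for marker in REFUSAL_MARKERS)

def refusal_rate(generate: Callable[[str], str], prompt: str,
                 n_samples: int = 20) -> float:
    """Approximate the refusal mass of the output distribution by sampling."""
    refusals = sum(is_refusal(generate(prompt)) for _ in range(n_samples))
    return refusals / n_samples

def beat_score(generate: Callable[[str], str], user_input: str,
               n_samples: int = 20) -> float:
    """How much does concatenating the input distort the probe's refusal rate?

    Triggered inputs should drag the backdoored model's refusal rate down
    (the probe concatenate effect); benign inputs should barely move it.
    """
    baseline = refusal_rate(generate, MALICIOUS_PROBE, n_samples)
    distorted = refusal_rate(generate, user_input + "\n" + MALICIOUS_PROBE, n_samples)
    return baseline - distorted  # larger drop => more likely triggered

def looks_triggered(generate: Callable[[str], str], user_input: str,
                    threshold: float = 0.5) -> bool:
    """Flag the input (e.g., refuse to serve it) when the drop is large."""
    return beat_score(generate, user_input) >= threshold
```

Scoring the drop in the probe's refusal behavior keeps the detector sample-independent: it watches one fixed signal rather than the unbounded space of sample-specific attack outputs.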

๐Ÿ” Key Points

  • Introduction of BEAT, a novel black-box defense mechanism against backdoor unalignment attacks in large language models (LLMs).
  • Discovery of the probe concatenate effect, which sharply differentiates triggered from non-triggered samples via changes in a probe's output distribution (a sampling-based distance sketch follows this list).
  • Demonstrated effectiveness of BEAT across various backdoor attack types and models, achieving an average AUROC of over 99.6%.
  • Resilience against adaptive attacks, plus preliminary evidence that BEAT also defends against jailbreak attacks, which can be viewed as natural backdoors.
  • Empirical validation showing BEAT outperforms existing white-box and gray-box defenses while handling sample-dependent attack targets.
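
The abstract notes that BEAT overcomes black-box access limits by approximating output distributions with multiple sampling. The self-contained sketch below shows one standard way to realize that: build empirical distributions from repeated generations and compare them with total variation distance. The metric and helper names are assumptions; the paper's exact measure may differ.

```python
from collections import Counter
from typing import Callable, Iterable

def empirical_distribution(samples: Iterable[str]) -> dict[str, float]:
    """Turn sampled generations into an empirical probability distribution."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {text: c / total for text, c in counts.items()}

def total_variation(p: dict[str, float], q: dict[str, float]) -> float:
    """Total variation distance between two empirical distributions."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def sampled_distortion(generate: Callable[[str], str], probe: str,
                       user_input: str, n_samples: int = 30) -> float:
    """Distance between the probe's output distribution with and without the input."""
    before = empirical_distribution(generate(probe) for _ in range(n_samples))
    after = empirical_distribution(
        generate(user_input + "\n" + probe) for _ in range(n_samples))
    return total_variation(before, after)
```

Comparing raw generations is crude, since long samples rarely repeat; in practice one would bucket outputs (e.g., refusal vs. compliance, as in the earlier sketch) or compare short output prefixes before taking the distance.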

💡 Why This Paper Matters

This paper addresses a critical issue in the deployment of large language models: backdoor unalignment attacks, which threaten the safety and reliability of these models. BEAT introduces a robust mechanism for detecting hidden triggers at inference time, enhancing the security of LLMs in practical applications. With high detection rates and adaptability across attack scenarios, it is a necessary step toward the safe deployment of AI technologies in real-world environments.

🎯 Why It's Interesting for AI Security Researchers

This paper is particularly relevant to AI security researchers because it defends LLMs against sophisticated attacks in fully black-box settings such as LLMaaS. The probe concatenate effect and the empirical results validating BEAT advance the understanding and development of robust defense mechanisms, which grows more critical as LLMs become prevalent across industries.

📚 Read the Full Paper