Mitigating Safety Fallback in Editing-based Backdoor Injection on LLMs

Authors: Houcheng Jiang, Zetong Zhao, Junfeng Fang, Haokai Ma, Ruipeng Wang, Yang Deng, Xiang Wang, Xiangnan He

Published: 2025-06-16

arXiv ID: 2506.13285v1

Added to Library: 2025-06-17 03:04 UTC

Safety

📄 Abstract

Large language models (LLMs) have shown strong performance across natural language tasks, but remain vulnerable to backdoor attacks. Recent model editing-based approaches enable efficient backdoor injection by directly modifying parameters to map specific triggers to attacker-desired responses. However, these methods often suffer from safety fallback, where the model initially responds affirmatively but later reverts to refusals due to safety alignment. In this work, we propose DualEdit, a dual-objective model editing framework that jointly promotes affirmative outputs and suppresses refusal responses. To address two key challenges -- balancing the trade-off between affirmative promotion and refusal suppression, and handling the diversity of refusal expressions -- DualEdit introduces two complementary techniques. (1) Dynamic loss weighting calibrates the objective scale based on the pre-edited model to stabilize optimization. (2) Refusal value anchoring compresses the suppression target space by clustering representative refusal value vectors, reducing optimization conflict from overly diverse token sets. Experiments on safety-aligned LLMs show that DualEdit improves attack success by 9.98% and reduces safety fallback rate by 10.88% over baselines.
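
The abstract describes a dual objective: promote affirmative continuations for triggered prompts while pushing down the probability of refusal tokens, with the two terms rebalanced using the pre-edited model. The sketch below is a minimal illustration of how such a loss could be wired up in PyTorch; the names (`dual_edit_loss`, `dynamic_weights`, `affirmative_ids`, `refusal_ids`) and the specific weighting rule are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of a dual-objective editing loss with dynamic weighting.
# Assumption: we score next-token logits produced with a candidate edit applied;
# the paper's exact objective and calibration rule may differ.
import torch
import torch.nn.functional as F


def dual_edit_loss(logits, affirmative_ids, refusal_ids, w_aff=1.0, w_ref=1.0):
    """Promote affirmative tokens and suppress refusal tokens.

    logits: (vocab_size,) next-token logits from the edited model.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # Promotion term: negative log-likelihood of affirmative targets (e.g., "Sure").
    promote = -log_probs[affirmative_ids].mean()
    # Suppression term: total probability mass on refusal tokens (e.g., "Sorry", "cannot").
    suppress = torch.logsumexp(log_probs[refusal_ids], dim=-1).exp()
    return w_aff * promote + w_ref * suppress


def dynamic_weights(pre_edit_logits, affirmative_ids, refusal_ids):
    """Set the suppression weight from the *pre-edited* model so both terms start
    at a comparable scale (one plausible reading of dynamic loss weighting)."""
    with torch.no_grad():
        log_probs = F.log_softmax(pre_edit_logits, dim=-1)
        promote0 = -log_probs[affirmative_ids].mean()
        suppress0 = torch.logsumexp(log_probs[refusal_ids], dim=-1).exp()
        scale = (promote0 / suppress0.clamp_min(1e-6)).clamp(1e-3, 1e3)
    return 1.0, float(scale)


# Toy usage with random logits over a 100-token vocabulary.
logits = torch.randn(100)
aff_ids, ref_ids = torch.tensor([3, 7]), torch.tensor([11, 12, 13])
w_aff, w_ref = dynamic_weights(logits, aff_ids, ref_ids)
print(dual_edit_loss(logits, aff_ids, ref_ids, w_aff, w_ref))
```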

🔍 Key Points

  • Introduction of DualEdit, a dual-objective model editing framework aimed at enhancing backdoor attacks while reducing safety fallback responses in LLMs.
  • Utilization of dynamic loss weighting and refusal value anchoring to balance affirmative promotion against refusal suppression during model editing (a toy sketch of the anchoring step follows this list).
  • Demonstrates a higher attack success rate (ASR) and a lower safety fallback rate (SFR) than existing baseline methods across several safety-aligned LLMs.
  • Extensive experiments validating the efficacy of DualEdit on multiple prompt datasets, including detailed analysis of how different trigger types affect performance.
  • Case studies showing that DualEdit suppresses negative qualifiers (late-response reversions to refusal) more effectively than existing techniques.
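
The second point above refers to refusal value anchoring: rather than suppressing every refusal phrasing individually, the diverse value vectors associated with refusal expressions are compressed into a few representative anchors. Below is a hypothetical sketch of that compression step using k-means; the function name `anchor_refusal_values`, the shapes, and the choice of scikit-learn's `KMeans` are illustrative assumptions rather than the paper's actual procedure.

```python
# Hypothetical sketch of refusal value anchoring: cluster the value vectors of
# diverse refusal expressions and keep only the centroids as suppression targets.
import numpy as np
from sklearn.cluster import KMeans


def anchor_refusal_values(refusal_value_vectors: np.ndarray, n_anchors: int = 4) -> np.ndarray:
    """Compress a diverse set of refusal value vectors (n_refusals, hidden_dim)
    into a few representative anchors (n_anchors, hidden_dim)."""
    kmeans = KMeans(n_clusters=n_anchors, n_init=10, random_state=0)
    kmeans.fit(refusal_value_vectors)
    return kmeans.cluster_centers_


# Example with toy data: 64 refusal-phrase value vectors in a 128-dim hidden space.
rng = np.random.default_rng(0)
values = rng.normal(size=(64, 128)).astype(np.float32)
anchors = anchor_refusal_values(values, n_anchors=4)
print(anchors.shape)  # (4, 128)
```

The edit could then be optimized to steer away from these few anchors instead of from every refusal token's value vector, which is one way to reduce the optimization conflict the abstract attributes to overly diverse refusal token sets.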

💡 Why This Paper Matters

This paper exposes a concrete weakness in the safety alignment of large language models: editing-based backdoors that previously faltered due to safety fallback can be made substantially more reliable. By jointly promoting affirmative outputs and suppressing refusals, the DualEdit framework increases the reliability of malicious trigger activations without significantly degrading the model's original capabilities. These findings are important for understanding realistic threat models and for strengthening safety mechanisms against parameter-editing attacks.

🎯 Why It's Interesting for AI Security Researchers

The research provides critical insights into the robustness of current AI safety measures while revealing the potential for exploiting model vulnerabilities through novel backdoor techniques. AI security researchers will find this work particularly relevant as it not only clarifies the threats posed by editing-based attacks but also offers methodologies that could inform the development of more resilient models and defense strategies.

📚 Read the Full Paper: https://arxiv.org/abs/2506.13285v1