
Beyond Sharp Minima: Robust LLM Unlearning via Feedback-Guided Multi-Point Optimization

Authors: Wenhan Wu, Zheyuan Liu, Chongyang Gao, Ren Wang, Kaize Ding

Published: 2025-09-24

arXiv ID: 2509.20230v1

Added to Library: 2025-09-25 04:00 UTC

Red Teaming

📄 Abstract

Current LLM unlearning methods face a critical security vulnerability that undermines their fundamental purpose: while they appear to successfully remove sensitive or harmful knowledge, this "forgotten" information remains precariously recoverable through relearning attacks. We identify the root cause: by optimizing the forgetting loss at individual data points, conventional methods drive model parameters toward sharp minima in the loss landscape. In these unstable regions, even minimal parameter perturbations can drastically alter the model's behavior. Relearning attacks exploit this vulnerability, using just a few fine-tuning samples to navigate the steep gradients surrounding these unstable regions and rapidly recover knowledge that was supposedly erased. This exposes a critical robustness gap between apparent unlearning and actual knowledge removal. To address this issue, we propose StableUN, a bi-level feedback-guided optimization framework that explicitly seeks more stable parameter regions via neighborhood-aware optimization. It integrates forgetting feedback, which uses adversarial perturbations to probe parameter neighborhoods, with remembering feedback that preserves model utility, aligning the two objectives through gradient projection. Experiments on the WMDP and MUSE benchmarks demonstrate that our method is significantly more robust against both relearning and jailbreaking attacks while maintaining competitive utility performance.
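
The neighborhood-aware forgetting feedback described above can be pictured as a sharpness-aware probe: the forgetting objective is evaluated not at the current parameters but at an adversarially perturbed neighbor, so the update favors flat regions where small parameter changes do not revive erased knowledge. The PyTorch sketch below illustrates only that general idea; the function names (`forgetting_feedback_grads`, `forget_loss`), the L2 probe radius `rho`, and the overall structure are hypothetical assumptions, not the paper's actual implementation.

```python
import torch

def forgetting_feedback_grads(model, batch, forget_loss, rho=0.05):
    """Gradient of the forgetting loss at an adversarially perturbed point
    in an L2 neighborhood of the current parameters (a flatness-seeking signal)."""
    params = [p for p in model.parameters() if p.requires_grad]

    # 1) Gradient of the forgetting loss at the current parameters.
    loss = forget_loss(model, batch)
    grads = torch.autograd.grad(loss, params)

    # 2) Ascend within an L2 ball of radius rho to reach a worst-case neighbor.
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    eps = [rho * g / grad_norm for g in grads]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)

    # 3) Gradient at the perturbed point acts as the forgetting feedback.
    perturbed_grads = torch.autograd.grad(forget_loss(model, batch), params)

    # 4) Undo the probe so the outer optimizer updates the original weights.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    return perturbed_grads
```

In an outer training loop, this neighborhood-probed forgetting gradient would then be reconciled with a utility-preserving (remembering) gradient before the optimizer step, as sketched in the Key Points section below.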

πŸ” Key Points

  • Introduction of a novel bi-level feedback-guided optimization framework named StableUN aimed at enhancing robustness in large language model (LLM) unlearning processes.
  • The framework addresses the vulnerability of traditional unlearning methods to relearning attacks by directing optimization towards flatter regions of the loss landscape.
  • Integration of forgetting feedback, which uses adversarial perturbations to probe parameter neighborhoods, with remembering feedback that preserves model utility; the two objectives are aligned through gradient projection (see the sketch after this list).
  • Empirical results demonstrate that StableUN significantly improves robustness against relearning attacks and maintains competitive utility performance across multiple benchmarks such as WMDP and MUSE.
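
The alignment of the forgetting and remembering objectives through gradient projection can be illustrated with a PCGrad-style rule: when the two gradients conflict (negative inner product), the component of the forgetting gradient along the conflicting direction is removed. The sketch below shows that generic technique under stated assumptions; the function name and the exact combination rule are illustrative and should not be read as StableUN's precise update.

```python
import torch

def project_forget_onto_remember(g_forget, g_remember):
    """If the forgetting gradient conflicts with the remembering (utility)
    gradient, strip its component along the conflicting direction."""
    flat_f = torch.cat([g.reshape(-1) for g in g_forget])
    flat_r = torch.cat([g.reshape(-1) for g in g_remember])

    dot = torch.dot(flat_f, flat_r)
    if dot < 0:  # objectives conflict: projection removes the harmful component
        flat_f = flat_f - dot / (flat_r.norm() ** 2 + 1e-12) * flat_r

    # Unflatten back to per-parameter tensors for the optimizer step.
    out, offset = [], 0
    for g in g_forget:
        n = g.numel()
        out.append(flat_f[offset:offset + n].view_as(g))
        offset += n
    return out
```

The projected forgetting gradient and the remembering gradient can then be summed and applied with any standard optimizer, so forgetting progress does not come at the expense of retained utility.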

💡 Why This Paper Matters

This paper presents a meaningful advance in LLM unlearning, offering a method that improves both the effectiveness and the robustness of knowledge removal. By addressing a fundamental weakness of existing unlearning frameworks, it helps ensure that sensitive or harmful knowledge is removed in a way that resists recovery, without compromising the model's utility. Such contributions are vital in the context of data protection regulations and ethical considerations in AI deployment.

🎯 Why It's Interesting for AI Security Researchers

This paper is particularly relevant for AI security researchers because it addresses a pressing vulnerability in LLMs: supposedly forgotten sensitive content can be recovered through relearning attacks. As AI systems become integrated into more aspects of daily life, ensuring the safety and privacy of users' data is paramount. The proposed method offers a concrete way to harden the unlearning process against such attacks, making it a valuable contribution to the field of AI ethics and security.

📚 Read the Full Paper