
Beyond Sharp Minima: Robust LLM Unlearning via Feedback-Guided Multi-Point Optimization

Authors: Wenhan Wu, Zheyuan Liu, Chongyang Gao, Ren Wang, Kaize Ding

Published: 2025-09-24

arXiv ID: 2509.20230v3

Added to Library: 2025-10-01 03:00 UTC

Red Teaming

📄 Abstract

Current LLM unlearning methods face a critical security vulnerability that undermines their fundamental purpose: while they appear to successfully remove sensitive or harmful knowledge, this "forgotten" information remains precariously recoverable through relearning attacks. We identify the root cause: conventional methods that optimize the forgetting loss at individual data points drive model parameters toward sharp minima in the loss landscape. In these unstable regions, even minimal parameter perturbations can drastically alter the model's behavior. Consequently, relearning attacks exploit this vulnerability by using just a few fine-tuning samples to navigate the steep gradients surrounding these unstable regions, thereby rapidly recovering knowledge that was supposedly erased. This exposes a critical robustness gap between apparent unlearning and actual knowledge removal. To address this issue, we propose StableUN, a bi-level feedback-guided optimization framework that explicitly seeks more stable parameter regions via neighborhood-aware optimization. It integrates forgetting feedback, which uses adversarial perturbations to probe parameter neighborhoods, with remembering feedback to preserve model utility, aligning the two objectives through gradient projection. Experiments on the WMDP and MUSE benchmarks demonstrate that our method is significantly more robust against both relearning and jailbreaking attacks while maintaining competitive utility performance.
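
To make the neighborhood-probing idea concrete, here is a minimal, hypothetical PyTorch sketch of the forgetting-feedback step described in the abstract. It is not the authors' implementation: `forget_loss_fn`, the perturbation radius `rho`, and the SAM-style ascent step are assumptions chosen to illustrate how adversarial parameter perturbations can probe a neighborhood before the update is taken.

```python
# Minimal sketch (not the authors' code) of neighborhood-aware forgetting feedback.
# Assumptions: `model` is a torch.nn.Module, `forget_loss_fn(model, batch)` returns
# the forgetting loss on a forget-set batch, and `rho` is a hypothetical radius.
# The idea mirrors sharpness-aware minimization: perturb the parameters toward the
# worst case inside a small neighborhood, then take the gradient there, steering
# the optimizer toward flatter (more stable) regions of the loss landscape.
import torch


def neighborhood_forget_grad(model, forget_loss_fn, batch, rho=0.05):
    """Gradients of the forgetting loss at an adversarially perturbed point."""
    params = [p for p in model.parameters() if p.requires_grad]

    # 1) Gradient of the forgetting loss at the current parameters.
    loss = forget_loss_fn(model, batch)
    grads = torch.autograd.grad(loss, params)

    # 2) Ascent step: move along the normalized gradient to probe the worst
    #    case within an L2 ball of radius rho around the current parameters.
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2) + 1e-12
    with torch.no_grad():
        eps = [rho * g / grad_norm for g in grads]
        for p, e in zip(params, eps):
            p.add_(e)

    # 3) Forgetting feedback: gradient evaluated at the perturbed point.
    perturbed_loss = forget_loss_fn(model, batch)
    feedback_grads = torch.autograd.grad(perturbed_loss, params)

    # 4) Restore the original parameters before returning.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)

    return feedback_grads
```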

🔍 Key Points

  • Identification of a key vulnerability in existing LLM unlearning methods: conventional point-wise optimization drives parameters toward sharp minima, leaving models sensitive to relearning attacks.
  • Introduction of StableUN, a bi-level feedback-guided optimization framework that promotes neighborhood-aware optimization to target flatter regions of the loss landscape.
  • Integration of forgetting feedback, which probes parameter neighborhoods with adversarial perturbations, and remembering feedback, which preserves model utility, with the two objectives aligned via gradient projection (a sketch follows this list).
  • Comprehensive experimental validation on the WMDP and MUSE benchmarks shows that StableUN significantly improves robustness against relearning and jailbreaking attacks while maintaining competitive utility performance compared to existing methods.
  • Ablation studies confirm the effectiveness of both forgetting and remembering feedback in enhancing model resilience without compromising performance.
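
The abstract states that the forgetting and remembering objectives are aligned through gradient projection. Below is a minimal, hypothetical sketch of one standard way to do this (a PCGrad-style projection); the exact rule used by StableUN may differ, and `g_forget` / `g_remember` are stand-ins for the flattened feedback gradients.

```python
# Minimal sketch (an assumption, not the paper's exact rule) of aligning the
# forgetting feedback with the remembering feedback via gradient projection.
# If the forgetting gradient conflicts with the remembering gradient (negative
# inner product), the conflicting component is projected out, so the final
# update forgets without degrading utility. Flattened vectors are used for clarity.
import torch


def project_forget_onto_remember(g_forget: torch.Tensor,
                                 g_remember: torch.Tensor) -> torch.Tensor:
    """Remove from g_forget the component that opposes g_remember."""
    dot = torch.dot(g_forget, g_remember)
    if dot < 0:  # gradients conflict: forgetting would hurt utility
        g_forget = g_forget - (dot / (g_remember.norm() ** 2 + 1e-12)) * g_remember
    return g_forget


# Hypothetical usage with flattened gradients from the two feedback signals:
g_f = torch.randn(10)          # stand-in for the forgetting-feedback gradient
g_r = torch.randn(10)          # stand-in for the remembering-feedback gradient
update_direction = project_forget_onto_remember(g_f, g_r) + g_r
```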

💡 Why This Paper Matters

This paper presents a pivotal advance in LLM unlearning by addressing critical security vulnerabilities that can lead to the unintended recovery of sensitive information. The StableUN framework not only improves robustness but also preserves utility, a balance that is crucial for deploying ethically aligned AI systems. The findings underscore the importance of continued research on secure AI systems capable of effective knowledge management and privacy preservation.

🎯 Why It's Interesting for AI Security Researchers

This paper is highly relevant to AI security researchers because it tackles pressing concerns around the safe deployment of Large Language Models (LLMs) with regard to data privacy and the right to be forgotten. The vulnerability identified in conventional unlearning methods can lead to serious privacy and security breaches, making this research pivotal for developing more secure AI applications. By proposing a concrete mitigation, it provides a pathway toward safer and more trustworthy AI systems that manage sensitive user information.

📚 Read the Full Paper: https://arxiv.org/abs/2509.20230v3