โ† Back to Library

Model Unlearning via Sparse Autoencoder Subspace Guided Projections

Authors: Xu Wang, Zihao Li, Benyou Wang, Yan Hu, Difan Zou

Published: 2025-05-30

arXiv ID: 2505.24428v1

Added to Library: 2025-06-02 03:01 UTC

📄 Abstract

Large language models (LLMs) store vast amounts of information, making them powerful yet raising privacy and safety concerns when selective knowledge removal is required. Existing unlearning strategies, ranging from gradient-based fine-tuning and model editing to sparse autoencoder (SAE) steering, either lack interpretability or fail to provide a robust defense against adversarial prompts. We propose SAE-Guided Subspace Projection Unlearning (SSPU), a novel framework that leverages SAE features to drive targeted updates in the model's parameter space, enabling precise, interpretable, and robust unlearning. SSPU's three-stage pipeline performs data-driven layer and feature selection, subspace construction via QR decomposition, and constrained optimization that controls activations into an "irrelevant" subspace while preserving retained knowledge. Overall, we use SAE features to construct a subspace that supervises unlearning, refining the loss and adding a regularization term to guide interpretable parameter updates. In experiments on the WMDP-Cyber forget set and three utility benchmarks (MMLU, TruthfulQA, GSM8K), SSPU reduces harmful knowledge accuracy by 3.22% compared to the strongest baseline. It also improves adversarial robustness, lowering malicious accuracy under jailbreak prompts compared to baselines. Our findings expose the limitations of prior unlearning methods and demonstrate how interpretable subspace-guided optimization can achieve robust, controllable model behavior.
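The pipeline in the abstract is concrete enough to sketch. Below is a minimal, illustrative NumPy reconstruction of the subspace step, assuming a hypothetical SAE decoder matrix `W_dec` and placeholder feature indices; it reflects one reading of the abstract, not the authors' released implementation:

```python
import numpy as np

# Toy dimensions for a residual stream and an SAE dictionary.
d_model, n_features = 768, 16384
rng = np.random.default_rng(0)

# Hypothetical SAE decoder matrix: one row per learned feature,
# each row a direction in the d_model-dimensional activation space.
W_dec = rng.standard_normal((n_features, d_model))

# Stage 1 (stubbed): data-driven selection would pick features that
# fire on the forget corpus, plus unrelated features spanning the
# "irrelevant" subspace. These indices are placeholders.
forget_idx = [12, 404, 921]
irrelevant_idx = [7, 33, 58, 99]

# Stage 2: orthonormal bases for each subspace via QR decomposition.
Q_forget, _ = np.linalg.qr(W_dec[forget_idx].T)     # (d_model, 3)
Q_irrel, _ = np.linalg.qr(W_dec[irrelevant_idx].T)  # (d_model, 4)

def project(acts: np.ndarray, Q: np.ndarray) -> np.ndarray:
    """Orthogonal projection of activations (batch, d_model) onto span(Q)."""
    return acts @ Q @ Q.T

# Stage 3 (a loose reading of "controls activations into an 'irrelevant'
# subspace"): strip the forget-subspace component from a batch of
# activations and re-express it inside the irrelevant subspace.
acts = rng.standard_normal((4, d_model))
steered = acts - project(acts, Q_forget) + project(acts, Q_irrel)
```

The design choice this illustrates is that QR decomposition turns a handful of generally non-orthogonal SAE feature directions into an orthonormal basis, so projections onto the forget and irrelevant subspaces are well defined.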

๐Ÿ” Key Points

  • Proposes SAE-Guided Subspace Projection Unlearning (SSPU), a framework that uses sparse autoencoder (SAE) features to drive targeted, interpretable updates in the model's parameter space for selective knowledge removal.
  • Describes a three-stage pipeline: data-driven layer and feature selection, subspace construction via QR decomposition, and constrained optimization that steers activations into an "irrelevant" subspace while preserving retained knowledge.
  • Refines the unlearning loss with a subspace-derived regularization term so that parameter updates stay confined to interpretable directions (a hedged sketch of one such regularizer follows this list).
  • Reduces harmful-knowledge accuracy on the WMDP-Cyber forget set by 3.22% relative to the strongest baseline while maintaining utility on MMLU, TruthfulQA, and GSM8K.
  • Improves adversarial robustness, lowering malicious accuracy under jailbreak prompts compared to baselines and exposing weaknesses of prior gradient-based, model-editing, and SAE-steering approaches.
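As noted above, one plausible form of the subspace-derived regularization term is to penalize the energy of a weight update that falls outside the irrelevant subspace, pushing optimization toward interpretable directions. This is a hedged PyTorch sketch consistent with the abstract's description, not the paper's exact loss; `sspu_regularizer` and its arguments are illustrative names:

```python
import torch

def sspu_regularizer(delta_w: torch.Tensor, q_irrel: torch.Tensor) -> torch.Tensor:
    """Squared norm of the part of a weight update outside span(q_irrel).

    delta_w: (d_model, d_out) candidate update to a weight matrix.
    q_irrel: (d_model, k) orthonormal basis of the "irrelevant" subspace.
    An assumed formulation, not the authors' exact term.
    """
    inside = q_irrel @ (q_irrel.T @ delta_w)  # component within the subspace
    return ((delta_w - inside) ** 2).sum()    # penalize the leakage

# Usage: add to the unlearning objective with some weight lambda.
d_model, d_out, k = 768, 768, 4
q_irrel, _ = torch.linalg.qr(torch.randn(d_model, k))
delta_w = torch.randn(d_model, d_out, requires_grad=True)

loss = sspu_regularizer(delta_w, q_irrel)
loss.backward()  # gradient pulls the update back into the subspace
```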

💡 Why This Paper Matters

This paper is significant because it tackles a core weakness in current LLM unlearning: gradient-based fine-tuning and model editing lack interpretability, while SAE steering alone fails to withstand adversarial prompts. By using SAE features to construct an explicit subspace that supervises parameter updates, SSPU demonstrates that unlearning can be precise, interpretable, and robust at the same time, and its evaluation on the WMDP-Cyber forget set alongside three utility benchmarks shows that harmful knowledge can be removed without sacrificing general capability.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers would be particularly interested in this paper because unlearning is a front-line defense for stripping hazardous capabilities, here cyber-offense knowledge from WMDP-Cyber, out of deployed models. The reported gains in robustness to jailbreak prompts address a known failure mode of existing unlearning methods, and the interpretable, subspace-guided optimization offers a mechanism whose effects can be inspected and audited rather than taken on trust.

📚 Read the Full Paper