GSAE: Graph-Regularized Sparse Autoencoders for Robust LLM Safety Steering

Authors: Jehyeok Yeon, Federico Cinus, Yifan Wu, Luca Luceri

Published: 2025-12-07

arXiv ID: 2512.06655v1

Added to Library: 2025-12-09 03:01 UTC

Red Teaming Safety

📄 Abstract

Large language models (LLMs) face critical safety challenges, as they can be manipulated to generate harmful content through adversarial prompts and jailbreak attacks. Existing defenses are typically either black-box guardrails that filter outputs or internals-based methods that steer hidden activations by operationalizing safety as a single latent feature or dimension. While effective for simple concepts, this single-feature assumption is limiting: recent evidence shows that abstract concepts such as refusal and temporality are distributed across multiple features rather than isolated in one. To address this limitation, we introduce Graph-Regularized Sparse Autoencoders (GSAEs), which extend SAEs with a Laplacian smoothness penalty on the neuron co-activation graph. Unlike standard SAEs that assign each concept to a single latent feature, GSAEs recover smooth, distributed safety representations as coherent patterns spanning multiple features. We empirically demonstrate that GSAE enables effective runtime safety steering, assembling features into a weighted set of safety-relevant directions and controlling them with a two-stage gating mechanism that activates interventions only when harmful prompts or continuations are detected during generation. This approach enforces refusals adaptively while preserving utility on benign queries. Across safety and QA benchmarks, GSAE steering achieves an average 82% selective refusal rate, substantially outperforming standard SAE steering (42%), while maintaining strong task accuracy (70% on TriviaQA, 65% on TruthfulQA, 74% on GSM8K). Robustness experiments further show generalization across the LLaMA-3, Mistral, Qwen, and Phi families and resilience against jailbreak attacks (GCG, AutoDAN), consistently maintaining ≥ 90% refusal of harmful content.
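
To make the mechanism concrete, below is a minimal sketch (not taken from the paper) of how a Laplacian smoothness penalty could be added to a standard SAE objective. The tensor names, graph construction, and coefficients are assumptions; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def gsae_loss(x, W_enc, b_enc, W_dec, b_dec, L, l1_coeff=1e-3, lap_coeff=1e-2):
    """Standard SAE loss (reconstruction + L1 sparsity) plus a graph-Laplacian
    smoothness term. L is an assumed (d_sae, d_sae) Laplacian built from
    feature co-activation statistics."""
    z = F.relu(x @ W_enc + b_enc)            # sparse codes, shape (batch, d_sae)
    x_hat = z @ W_dec + b_dec                # reconstruction, shape (batch, d_model)
    recon = F.mse_loss(x_hat, x)             # reconstruction term
    sparsity = z.abs().sum(dim=-1).mean()    # L1 sparsity term
    # z^T L z per example = 0.5 * sum_ij w_ij (z_i - z_j)^2, which penalizes
    # codes that differ across strongly co-activating features.
    smooth = torch.einsum('bi,ij,bj->b', z, L, z).mean()
    return recon + l1_coeff * sparsity + lap_coeff * smooth
```

Because the penalty couples features through the co-activation graph, safety-relevant structure can emerge as a coherent pattern over several latents rather than a single feature, which is the property the steering method exploits.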

🔍 Key Points

  • Introduction of Graph-Regularized Sparse Autoencoders (GSAEs) that improve the representation of safety concepts in large language models (LLMs) through graph Laplacian regularization, promoting smooth, coherent features across co-activating neurons.
  • GSAEs outperform standard Sparse Autoencoders (SAEs) in runtime safety steering, achieving an average 82% selective refusal rate on harmful prompts, versus 42% for standard SAE steering, while maintaining strong QA accuracy (70% on TriviaQA, 65% on TruthfulQA, 74% on GSM8K).
  • A dual-gated controller lets GSAE intervene adaptively during generation, activating steering only when harmful prompts or continuations are detected (see the sketch after this list); the approach remains robust under adversarial attacks and generalizes across LLM families, including LLaMA-3, Mistral, Qwen, and Phi.
  • Extensive empirical results demonstrate GSAE's resilience to jailbreak attacks such as GCG and AutoDAN, consistently maintaining ≥ 90% refusal of harmful content, whereas existing safety mechanisms degrade under the same conditions.
  • The paper emphasizes the distributed nature of abstract safety concepts, highlighting that effective safety steering requires a model that captures the relational structure of neural activations rather than relying on isolated, single-feature representations.
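
The two-stage gating described above could look roughly like the following sketch; the function names, thresholds, and the additive steering rule are illustrative assumptions rather than the paper's exact procedure.

```python
import torch

def safety_direction(safety_dirs, weights):
    """Weighted combination of safety-relevant GSAE directions, unit-normalized.
    safety_dirs: (k, d_model) decoder directions; weights: (k,) importance weights."""
    d = torch.einsum('k,kd->d', weights, safety_dirs)
    return d / d.norm()

def two_stage_gate(hidden, direction, prompt_flagged, gen_thresh=0.5, alpha=4.0):
    """Stage 1: a prompt-level gate (prompt_flagged) decided before generation.
    Stage 2: a continuation-level gate that checks how strongly the current
    hidden state projects onto the safety direction. Steering is applied only
    if either gate fires; benign generations pass through unchanged."""
    continuation_flagged = torch.dot(hidden, direction) > gen_thresh
    if prompt_flagged or bool(continuation_flagged):
        return hidden + alpha * direction    # push the residual stream toward refusal
    return hidden
```

Gating the intervention this way is what allows the method to enforce refusals on harmful requests while leaving benign queries, and hence QA accuracy, largely untouched.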

💡 Why This Paper Matters

This paper presents an effective approach to improving the safety of large language models through Graph-Regularized Sparse Autoencoders. By addressing the single-feature limitation of prior steering methods and demonstrating empirical gains in both refusal performance and cross-model generalization, the findings underscore the need for richer, distributed representations of complex concepts like safety, a need that grows as LLMs are deployed in increasingly diverse applications.

🎯 Why It's Interesting for AI Security Researchers

This paper is particularly relevant to AI security researchers because it addresses a pressing issue in AI safety: preventing harmful outputs from LLMs. By introducing a method that strengthens internal safety representations and demonstrating significant improvements over previous steering techniques, it provides valuable insight into building more secure AI systems. Its ability to generalize across different model families and to resist adversarial manipulation such as GCG and AutoDAN is crucial for ongoing efforts to ensure safe deployment of AI technologies.

📚 Read the Full Paper