
Understanding Refusal in Language Models with Sparse Autoencoders

Authors: Wei Jie Yeo, Nirmalendu Prakash, Clement Neo, Roy Ka-Wei Lee, Erik Cambria, Ranjan Satapathy

Published: 2025-05-29

arXiv ID: 2505.23556v1

Added to Library: 2025-05-30 03:00 UTC

Red Teaming

📄 Abstract

Refusal is a key safety behavior in aligned language models, yet the internal mechanisms driving refusals remain opaque. In this work, we conduct a mechanistic study of refusal in instruction-tuned LLMs using sparse autoencoders to identify latent features that causally mediate refusal behaviors. We apply our method to two open-source chat models and intervene on refusal-related features to assess their influence on generation, validating their behavioral impact across multiple harmful datasets. This enables a fine-grained inspection of how refusal manifests at the activation level and addresses key research questions, such as investigating upstream-downstream relationships between latents and understanding the mechanisms of adversarial jailbreaking techniques. We also show that refusal features improve the generalization of linear probes to out-of-distribution adversarial samples in classification tasks. We open-source our code at https://github.com/wj210/refusal_sae.
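
The intervention described in the abstract, ablating or rescaling an SAE latent that mediates refusal at the residual stream, can be illustrated with a minimal sketch. This is a generic, hypothetical example (toy dimensions, random weights, and a placeholder feature index), not the authors' released implementation:

```python
import torch

# Minimal sketch (not the authors' implementation): a toy sparse autoencoder
# over a residual-stream activation, plus an intervention that ablates or
# rescales one "refusal" latent before reconstructing. Dimensions, weights,
# and the feature index are illustrative placeholders.
d_model, d_sae = 2304, 16384
W_enc = torch.randn(d_model, d_sae) * 0.02
W_dec = torch.randn(d_sae, d_model) * 0.02
b_enc = torch.zeros(d_sae)
b_dec = torch.zeros(d_model)

def sae_encode(resid):                     # resid: [batch, d_model]
    return torch.relu((resid - b_dec) @ W_enc + b_enc)

def sae_decode(latents):                   # latents: [batch, d_sae]
    return latents @ W_dec + b_dec

def intervene(resid, feature_idx, scale=0.0):
    """Rescale one SAE latent (scale=0.0 ablates it) and fold the change
    back into the residual stream, preserving the reconstruction error."""
    latents = sae_encode(resid)
    error = resid - sae_decode(latents)    # keep what the SAE does not explain
    latents[:, feature_idx] *= scale
    return sae_decode(latents) + error

resid = torch.randn(1, d_model)            # stand-in for a hooked activation
steered = intervene(resid, feature_idx=1234, scale=0.0)
print(steered.shape)                       # torch.Size([1, 2304])
```

In an actual run, `resid` would be an activation captured by a forward hook on one of the studied chat models, and the feature index would come from the attribution procedure summarized in the key points below.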

🔍 Key Points

  • The paper employs Sparse Autoencoders (SAEs) to identify and analyze latent features in language models that mediate refusal behaviors, demonstrating a mechanistic understanding of how language models refuse harmful requests.
  • Findings indicate that LLMs encode harmful features and refusal features as separate entities, showing that upstream harmful features can suppress downstream refusal features, which has implications for adversarial attacks on safety mechanisms.
  • A hybrid attribution method combining Attribution Patching (AP) and Activation Steering (AS) is proposed to pinpoint causally relevant features associated with refusal behavior, yielding a compact, interpretable feature set that outperforms traditional attribution approaches (a minimal sketch of the scoring step follows the list below).
  • The study demonstrates the potential of leveraging refusal features to classify out-of-distribution adversarial examples effectively, highlighting their practical use in enhancing the robustness of language models against adversarial manipulations.
  • The code and methodology are open-sourced, making the analysis accessible to researchers studying LLM behavior and safety mechanisms.
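
The hybrid attribution step in the third key point can be sketched as follows: attribution patching gives a cheap first-order estimate of how much ablating each SAE latent would change a refusal metric, and the top-scoring candidates are then verified causally by steering them during generation. The code below is a hedged, self-contained approximation with placeholder shapes, random weights, and stand-in gradients; it is not the paper's exact pipeline:

```python
import torch

# Hedged sketch of attribution-patching-style scoring over SAE latents; a
# generic first-order approximation, not the paper's exact pipeline. Shapes,
# weights, and the refusal metric's gradient are placeholders.
d_model, d_sae = 2304, 16384
W_dec = torch.randn(d_sae, d_model) * 0.02   # SAE decoder directions

def attribution_scores(latents, grad_resid):
    """Estimate, to first order, how a refusal metric m changes if each
    active latent were set to zero:
        score_i ~= (0 - a_i) * (dm/dresid . d_i)
    latents:    [batch, d_sae] SAE activations at a chosen layer/position.
    grad_resid: [batch, d_model] gradient of m w.r.t. the residual stream
                at the same location (obtained via a backward pass)."""
    grad_per_feature = grad_resid @ W_dec.T   # dm/da_i for every latent i
    return -latents * grad_per_feature        # effect of ablating each latent

latents = torch.relu(torch.randn(1, d_sae))   # stand-in SAE activations
grad_resid = torch.randn(1, d_model)          # stand-in metric gradient
scores = attribution_scores(latents, grad_resid)
candidates = scores.abs().topk(20).indices    # features to verify by steering
print(candidates.shape)                       # torch.Size([1, 20])
```

The candidate features returned here would then be ablated or clamped one at a time, as in the sketch after the abstract, to confirm which ones actually shift the model between refusing and complying.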

💡 Why This Paper Matters

This paper is crucial as it advances the understanding of refusal behavior in language models, a fundamental aspect of AI safety. By identifying the underlying mechanisms and features that lead to refusals, it offers potential pathways to enhance the robustness of models against adversarial prompts, ultimately contributing to more secure AI applications.

🎯 Why It's Interesting for AI Security Researchers

For AI security researchers, this paper provides insights into the vulnerabilities of language models, specifically how adversarial techniques can exploit refusal mechanisms. The findings can guide the development of more effective safety frameworks by improving the interpretability and resilience of models against adversarial prompts.

📚 Read the Full Paper