Steering MoE LLMs via Expert (De)Activation

Authors: Mohsen Fayyaz, Ali Modarressi, Hanieh Deilamsalehy, Franck Dernoncourt, Ryan Rossi, Trung Bui, Hinrich Schütze, Nanyun Peng

Published: 2025-09-11

arXiv ID: 2509.09660v1

Added to Library: 2025-09-12 04:00 UTC

Red Teaming

📄 Abstract

Mixture-of-Experts (MoE) in Large Language Models (LLMs) routes each token through a subset of specialized Feed-Forward Networks (FFN), known as experts. We present SteerMoE, a framework for steering MoE models by detecting and controlling behavior-linked experts. Our detection method identifies experts with distinct activation patterns across paired inputs exhibiting contrasting behaviors. By selectively (de)activating such experts during inference, we control behaviors like faithfulness and safety without retraining or modifying weights. Across 11 benchmarks and 6 LLMs, our steering raises safety by up to +20% and faithfulness by +27%. In adversarial attack mode, it drops safety by -41% alone, and -100% when combined with existing jailbreak methods, bypassing all safety guardrails and exposing a new dimension of alignment faking hidden within experts.
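
To make the inference-time steering step concrete, here is a minimal PyTorch-style sketch of how chosen experts could be (de)activated by editing router logits before top-k expert selection. It illustrates the general technique described in the abstract; the function name, tensor shapes, and the renormalization choice are assumptions, not the authors' implementation.

```python
# Sketch: deactivate (or force) chosen experts by editing router logits
# before top-k expert selection. Hypothetical helper, not the SteerMoE code;
# shapes and the renormalization step are assumptions.
import torch


def steered_topk_routing(router_logits, top_k, blocked_experts=(), forced_experts=()):
    """Pick the top-k experts per token after (de)activating selected experts.

    router_logits: [num_tokens, num_experts] pre-softmax routing scores.
    blocked_experts: expert indices that must never be routed to (deactivation).
    forced_experts: expert indices pushed into the top-k (activation).
    Returns (expert_indices, routing_weights), both of shape [num_tokens, top_k].
    """
    logits = router_logits.clone()
    if blocked_experts:
        logits[:, list(blocked_experts)] = float("-inf")  # zero probability after softmax
    if forced_experts:
        logits[:, list(forced_experts)] += 1e4            # dominates the softmax
    probs = torch.softmax(logits, dim=-1)
    weights, indices = torch.topk(probs, top_k, dim=-1)
    weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over the kept experts
    return indices, weights


# Toy usage: 4 tokens, 8 experts, top-2 routing, with expert 3 deactivated.
router_logits = torch.randn(4, 8)
indices, weights = steered_topk_routing(router_logits, top_k=2, blocked_experts={3})
assert not (indices == 3).any()  # expert 3 is never selected
```

Because only the routing decision is altered, the model weights stay untouched, which is what allows this kind of steering without any retraining.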

🔍 Key Points

  • Introduction of the SteerMoE framework, which exploits the routing structure of Mixture-of-Experts (MoE) models to detect behavior-linked experts from paired inputs and steer model behavior by (de)activating them.
  • Demonstration of improvements of up to +27% in faithfulness and +20% in safety through expert (de)activation alone, with no retraining and no modification of the original model weights.
  • Identification of 'Alignment Faking' vulnerabilities: deactivating safety-linked experts drops safety by up to -41% on its own, and by -100% when combined with existing jailbreak methods, raising concerns about the robustness of current alignment techniques in LLMs.
  • Extensive experiments across 11 benchmarks and 6 large language models (LLMs) confirm the effectiveness of the steering mechanism.
  • Proposal of a novel risk difference metric that quantifies how strongly each expert's activation is associated with a target behavior, enabling targeted steering interventions (a minimal sketch follows this list).
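
Below is a minimal sketch of a risk-difference-style score for ranking experts by behavioral association, as referenced in the last point above. It operates on pre-collected routing traces (the set of experts selected for each token under contrasting prompt sets); the data format, helper names, and selection threshold are illustrative assumptions rather than the paper's exact procedure.

```python
# Sketch: rank experts by the difference in their activation rates between
# two contrasting conditions (e.g., unsafe vs. safe prompts). Data format and
# threshold are assumptions, not the paper's exact procedure.
from collections import Counter


def activation_rates(token_selections, num_experts):
    """Fraction of tokens on which each expert was among the selected top-k."""
    counts = Counter(e for selected in token_selections for e in set(selected))
    total = max(len(token_selections), 1)
    return [counts[e] / total for e in range(num_experts)]


def risk_difference(selections_pos, selections_neg, num_experts):
    """risk_difference[e] = P(e active | positive prompts) - P(e active | negative prompts)."""
    p_pos = activation_rates(selections_pos, num_experts)
    p_neg = activation_rates(selections_neg, num_experts)
    return [p_pos[e] - p_neg[e] for e in range(num_experts)]


# Toy usage: 8 experts, top-2 routing traces from paired unsafe vs. safe prompts.
unsafe_trace = [[0, 3], [3, 5], [3, 7]]   # expert 3 fires on every unsafe token
safe_trace   = [[1, 2], [2, 4], [0, 2]]
rd = risk_difference(unsafe_trace, safe_trace, num_experts=8)
candidates = [e for e, score in enumerate(rd) if score > 0.8]  # experts to consider deactivating
print(candidates)  # [3]
```

Experts with a large positive score fire disproportionately under the undesired behavior and are natural candidates for deactivation with a routing mask like the one sketched above.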

💡 Why This Paper Matters

This paper is significant because it offers a training-free approach to controlling and understanding the behavior of large language models through expert-level interventions. By showing that the SteerMoE framework can improve faithfulness and safety without modifying model weights, while also exposing how the same mechanism can undermine alignment, it points toward MoE-based LLMs that align better with human values and motivates future research into robust, interpretable AI systems.

🎯 Why It's Interesting for AI Security Researchers

This paper is relevant to AI security researchers because it exposes a concrete vulnerability in MoE LLMs: 'Alignment Faking,' in which safety behavior hinges on a subset of experts that an attacker can selectively deactivate. By demonstrating that expert-level manipulation alone degrades safety, and bypasses guardrails entirely when combined with existing jailbreak methods, it highlights the need for alignment and red-teaming strategies that account for routing-level attacks, with direct implications for the safety and reliability of deployed AI systems.
