Steering Safely or Off a Cliff? Rethinking Specificity and Robustness in Inference-Time Interventions

Authors: Navita Goyal, Hal Daumé

Published: 2026-02-05

arXiv ID: 2602.06256v1

Added to Library: 2026-02-09 03:02 UTC

Red Teaming

📄 Abstract

Model steering, which involves intervening on hidden representations at inference time, has emerged as a lightweight alternative to finetuning for precisely controlling large language models. While steering efficacy has been widely studied, evaluations of whether interventions alter only the intended property remain limited, especially with respect to unintended changes in behaviors related to the target property. We call this notion specificity. We propose a framework that distinguishes three dimensions of specificity: general (preserving fluency and unrelated abilities), control (preserving related control properties), and robustness (preserving control properties under distribution shifts). We study two safety-critical use cases: steering models to reduce overrefusal and faithfulness hallucinations, and show that while steering achieves high efficacy and largely maintains general and control specificity, it consistently fails to preserve robustness specificity. In the case of overrefusal steering, for example, all steering methods reduce overrefusal without harming general abilities and refusal on harmful queries; however, they substantially increase vulnerability to jailbreaks. Our work provides the first systematic evaluation of specificity in model steering, showing that standard efficacy and specificity checks are insufficient, because without robustness evaluation, steering methods may appear reliable even when they compromise model safety.
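The intervention the abstract describes, modifying hidden representations at inference time, is often realized as an additive edit along a learned direction. The sketch below is a minimal, hypothetical illustration of that idea (the function name, the additive form `h + alpha * v`, and the toy vectors are all assumptions for illustration; the paper evaluates several concrete steering methods):

```python
import numpy as np

def steer(hidden_state: np.ndarray, steering_vec: np.ndarray, alpha: float) -> np.ndarray:
    """Nudge a hidden representation along a steering direction.

    A generic additive intervention, h' = h + alpha * v_hat; this is one
    common form of inference-time steering, not the paper's specific method.
    """
    # Normalize the direction so alpha alone controls intervention strength
    v_hat = steering_vec / np.linalg.norm(steering_vec)
    return hidden_state + alpha * v_hat

# Toy 4-dim "hidden state" nudged along a direction
h = np.array([1.0, 0.0, 0.0, 0.0])
v = np.array([0.0, 2.0, 0.0, 0.0])
print(steer(h, v, alpha=0.5))  # [1.  0.5 0.  0. ]
```

The strength parameter `alpha` is exactly where the efficacy/specificity tension arises: larger values steer the target behavior more strongly but risk the side effects the paper measures.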

🔍 Key Points

  • The paper introduces a novel framework for evaluating specificity in model steering that distinguishes between general, control, and robustness specificity.
  • It provides empirical evidence showing that while steering methods can effectively mitigate issues such as overrefusal and hallucinations, they compromise robustness under adversarial conditions, increasing vulnerability to attacks.
  • The authors systematically evaluate existing steering techniques across safety-critical use cases, revealing a consistent trade-off between utility and safety.
  • The study highlights the inadequacy of current evaluation metrics by showing that interventions may seem effective in standard settings while undermining model safety under distribution shifts.
  • The research emphasizes the need for a comprehensive safety evaluation in developing steering interventions, suggesting that model safety cannot be guaranteed without evaluating robustness.
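The framework in the points above can be sketched as a before/after comparison grouped by specificity axis. Everything here is a hypothetical illustration (the metric names and numbers are invented; the grouping mirrors the paper's general/control/robustness split):

```python
def specificity_report(before: dict, after: dict) -> dict:
    """Compute metric deltas after steering, grouped by specificity axis.

    Metric names are placeholders; only the three-axis grouping reflects
    the paper's framework.
    """
    axes = {
        "general": ["fluency", "mmlu_acc"],        # unrelated abilities
        "control": ["harmful_refusal_rate"],       # related control properties
        "robustness": ["jailbreak_resistance"],    # control under distribution shift
    }
    return {
        axis: {m: round(after[m] - before[m], 3) for m in metrics}
        for axis, metrics in axes.items()
    }

# Invented numbers, shaped like the paper's overrefusal finding:
# general and control barely move, robustness collapses.
before = {"fluency": 0.95, "mmlu_acc": 0.62,
          "harmful_refusal_rate": 0.98, "jailbreak_resistance": 0.80}
after = {"fluency": 0.94, "mmlu_acc": 0.61,
         "harmful_refusal_rate": 0.97, "jailbreak_resistance": 0.45}
print(specificity_report(before, after))
```

A steering method that is only checked on the first two axes would look safe here; the large negative delta on the robustness axis is precisely what standard evaluations miss.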

💡 Why This Paper Matters

This paper matters because it addresses a pressing safety question for large language models: whether inference-time interventions are as benign as they appear. By evaluating specificity along multiple dimensions, it exposes gaps in current methodology that could mislead researchers and practitioners into judging steered models safe for deployment. Understanding these limits is essential for building AI systems that behave reliably in real-world settings.

🎯 Why It's Interesting for AI Security Researchers

This paper is highly relevant to AI security researchers because it examines how model steering methods can inadvertently weaken robustness under adversarial conditions. Its empirical findings show that seemingly effective interventions can carry hidden risks, such as increased susceptibility to jailbreaks, motivating the design of steering strategies that preserve safety as well as utility.