
From Concepts to Components: Concept-Agnostic Attention Module Discovery in Transformers

Authors: Jingtong Su, Julia Kempe, Karen Ullrich

Published: 2025-06-20

arXiv ID: 2506.17052v1

Added to Library: 2025-06-23 04:00 UTC

Red Teaming

📄 Abstract

Transformers have achieved state-of-the-art performance across language and vision tasks. This success drives the imperative to interpret their internal mechanisms with the dual goals of enhancing performance and improving behavioral control. Attribution methods help advance interpretability by assigning model outputs associated with a target concept to specific model components. Current attribution research primarily studies multi-layer perceptron neurons and addresses relatively simple concepts such as factual associations (e.g., Paris is located in France). This focus tends to overlook the impact of the attention mechanism and lacks a unified approach for analyzing more complex concepts. To fill these gaps, we introduce Scalable Attention Module Discovery (SAMD), a concept-agnostic method for mapping arbitrary, complex concepts to specific attention heads of general transformer models. We accomplish this by representing each concept as a vector, calculating its cosine similarity with each attention head, and selecting the TopK-scoring heads to construct the concept-associated attention module. We then propose Scalar Attention Module Intervention (SAMI), a simple strategy to diminish or amplify the effects of a concept by adjusting the attention module using only a single scalar parameter. Empirically, we demonstrate SAMD on concepts of varying complexity, and visualize the locations of their corresponding modules. Our results demonstrate that module locations remain stable before and after LLM post-training, and confirm prior work on the mechanics of LLM multilingualism. Through SAMI, we facilitate jailbreaking on HarmBench (+72.7%) by diminishing "safety" and improve performance on the GSM8K benchmark (+1.6%) by amplifying "reasoning". Lastly, we highlight the domain-agnostic nature of our approach by suppressing the image classification accuracy of vision transformers on ImageNet.
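
At its core, SAMD is the cosine-similarity scan and TopK selection described in the abstract. The snippet below is a minimal sketch of that step, assuming a concept vector and one feature vector per attention head (for example, each head's mean output at the final token over a probe set) are already available; `discover_attention_module`, `concept_vec`, and `head_outputs` are illustrative names, not the paper's code.

```python
# Minimal sketch of SAMD's TopK head selection, under the assumptions stated above.
import torch
import torch.nn.functional as F


def discover_attention_module(concept_vec: torch.Tensor,
                              head_outputs: torch.Tensor,
                              k: int = 20) -> list[tuple[int, int]]:
    """Return the (layer, head) indices of the K heads most aligned with the concept.

    concept_vec:  (d,)      vector representation of the concept
    head_outputs: (L, H, d) one feature vector per attention head
    """
    L, H, d = head_outputs.shape
    flat = head_outputs.reshape(L * H, d)                               # (L*H, d)
    sims = F.cosine_similarity(flat, concept_vec.unsqueeze(0), dim=-1)  # (L*H,)
    top = torch.topk(sims, k).indices                                   # K best-scoring heads
    return [(int(i) // H, int(i) % H) for i in top]                     # (layer, head) pairs


# Toy usage with random tensors: 32 layers x 32 heads, feature dimension 128.
if __name__ == "__main__":
    torch.manual_seed(0)
    module = discover_attention_module(torch.randn(128), torch.randn(32, 32, 128), k=10)
    print(module)
```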

🔍 Key Points

  • Introduction of Scalable Attention Module Discovery (SAMD): A concept-agnostic method enabling the mapping of complex concepts to specific attention heads in transformer models, overcoming limitations of existing attribution methods.
  • Development of Scalar Attention Module Intervention (SAMI): A simple intervention strategy that amplifies or diminishes a concept's effect by rescaling its attention module with a single scalar parameter (see the sketch after this list).
  • Empirical validation across multiple domains: SAMD and SAMI are demonstrated on language tasks (safety, reasoning, multilingualism) and on vision transformers (ImageNet classification), showcasing the domain-agnostic nature of the approach.
  • Uncovering of sparse attention modules: Only a small number of attention heads turn out to be critical for a wide variety of concepts, revealing latent structure in how transformers encode knowledge.
  • Effectiveness in enhancing control over model behavior: The scalar intervention produces large, targeted changes in model outputs (e.g., +72.7% jailbreaking success on HarmBench, +1.6% on GSM8K), demonstrating practical applications for behavioral control.

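As referenced in the SAMI bullet above, the intervention itself reduces to multiplying the selected heads' contributions by one scalar s before they enter the residual stream: s < 1 diminishes the concept (e.g., "safety" when jailbreaking), s > 1 amplifies it (e.g., "reasoning" on GSM8K). The sketch below illustrates this under the assumption that per-head outputs are exposed as a tensor; `scalar_module_intervention`, `per_head_out`, and `module_heads` are hypothetical names, not the paper's implementation.

```python
# Minimal sketch of SAMI's single-scalar intervention, under the assumptions stated above.
import torch


def scalar_module_intervention(per_head_out: torch.Tensor,
                               module_heads: list[tuple[int, int]],
                               layer: int,
                               s: float) -> torch.Tensor:
    """Scale the selected heads' outputs at one layer by a single scalar s.

    per_head_out: (H, T, d) per-head attention outputs at this layer
    module_heads: (layer, head) pairs, e.g. from the discovery sketch above
    """
    out = per_head_out.clone()
    for l, h in module_heads:
        if l == layer:
            out[h] = s * out[h]          # one scalar controls the whole module
    return out


# Toy usage: diminish a concept by shrinking its module's heads at layer 3.
if __name__ == "__main__":
    per_head_out = torch.randn(32, 16, 128)        # 32 heads, 16 tokens, dim 128
    module = [(3, 5), (3, 17), (7, 2)]             # hypothetical module
    edited = scalar_module_intervention(per_head_out, module, layer=3, s=0.1)
    changed = [h for h in range(32) if not torch.equal(edited[h], per_head_out[h])]
    print(changed)   # expected: [5, 17]
```
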
💡 Why This Paper Matters

This paper addresses critical gaps in the interpretability of transformer models. By proposing a unified framework for concept attribution and intervention, it helps researchers better understand model behavior and strengthen safety measures, which is essential for deploying AI responsibly. The ability to manipulate attention modules with a single scalar not only improves performance on specific tasks but also aids in aligning models with ethical standards.

🎯 Why It's Interesting for AI Security Researchers

This paper is particularly interesting to AI security researchers because it demonstrates novel methods to control and audit AI behavior through attention module manipulation. The ability to diminish 'safety' responses or amplify 'reasoning' through direct intervention raises important questions about model robustness and safety, which are paramount in securing AI against malicious use. Furthermore, the insights gained from such interpretability frameworks may help identify vulnerabilities in AI systems, allowing researchers to design more secure and trustworthy models.

📚 Read the Full Paper: https://arxiv.org/abs/2506.17052v1