SilentStriker: Toward Stealthy Bit-Flip Attacks on Large Language Models

Authors: Haotian Xu, Qingsong Peng, Jie Shi, Huadi Zheng, Yu Li, Cheng Zhuo

Published: 2025-09-22

arXiv ID: 2509.17371v2

Added to Library: 2025-12-08 18:04 UTC

Red Teaming

📄 Abstract

The rapid adoption of large language models (LLMs) in critical domains has spurred extensive research into their security issues. While input manipulation attacks (e.g., prompt injection) have been well studied, Bit-Flip Attacks (BFAs) -- which exploit hardware vulnerabilities to corrupt model parameters and cause severe performance degradation -- have received far less attention. Existing BFA methods suffer from key limitations: they fail to balance performance degradation and output naturalness, making them prone to discovery. In this paper, we introduce SilentStriker, the first stealthy bit-flip attack against LLMs that effectively degrades task performance while maintaining output naturalness. Our core contribution lies in addressing the challenge of designing effective loss functions for LLMs with variable output length and the vast output space. Unlike prior approaches that rely on output perplexity for attack loss formulation, which inevitably degrade output naturalness, we reformulate the attack objective by leveraging key output tokens as targets for suppression, enabling effective joint optimization of attack effectiveness and stealthiness. Additionally, we employ an iterative, progressive search strategy to maximize attack efficacy. Experiments show that SilentStriker significantly outperforms existing baselines, achieving successful attacks without compromising the naturalness of generated text.

🔍 Key Points

  • Introduction of SilentStriker, the first stealthy Bit-Flip Attack (BFA) targeting Large Language Models (LLMs) that achieves significant task degradation with minimal compromise to output naturalness.
  • The development of a token-based loss function that balances the dual objectives of attack effectiveness (performance degradation) and naturalness (output fluency).
  • Implementation of an iterative, progressive search strategy that identifies the most critical bits to flip, maximizing both the efficacy and the stealthiness of the hardware-level attack.
  • Demonstration of the effectiveness of SilentStriker through extensive experiments on various LLM models, achieving superior results compared to existing BFA methods like GenBFA and PrisonBreak.
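The token-based suppression loss described above can be illustrated with a toy sketch. Everything here is an assumption for illustration: the function name, the exact loss form (negative log of one minus the key-token probability), and the toy vocabulary are not taken from the paper, which does not publish this formula in the summary.

```python
import math

def key_token_suppression_loss(logits, key_token_ids):
    """Hypothetical sketch of a key-token suppression loss.

    Instead of maximizing perplexity over the whole output (which wrecks
    fluency), the idea is to push down only the probability of a few
    task-critical "key" tokens, leaving the rest of the distribution free
    to produce natural-looking text.
    """
    # Numerically stable softmax over the toy vocabulary.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    # -log(1 - p) per key token: minimizing this drives p(key) toward 0.
    return sum(-math.log(1.0 - probs[t]) for t in key_token_ids)

# Toy 5-token vocabulary; token 2 plays the role of the correct-answer token.
confident = key_token_suppression_loss([0.1, 0.2, 3.0, 0.1, 0.0], [2])
suppressed = key_token_suppression_loss([3.0, 0.2, 0.1, 0.1, 0.0], [2])
```

A bit-flip search guided by such a loss would flip the weight bits whose flips most reduce it, degrading task accuracy while the non-key tokens (and thus output fluency) remain largely unconstrained.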

💡 Why This Paper Matters

The paper marks a critical step in understanding and exploiting vulnerabilities of large language models, particularly in contexts where model integrity can affect important decisions. As LLMs are increasingly adopted in sensitive domains, the proposed SilentStriker attack highlights the urgent need for improved security measures in AI systems, making this research highly relevant.

🎯 Why It's Interesting for AI Security Researchers

This paper is crucial for AI security researchers as it explores a less-discussed dimension of model vulnerabilities: hardware-based attacks. The methodology not only advances the understanding of BFAs but also challenges existing security paradigms, forcing researchers to rethink defenses against such stealthy yet effective attacks on LLMs.

📚 Read the Full Paper