
The Cost of Thinking: Increased Jailbreak Risk in Large Language Models

Authors: Fan Yang

Published: 2025-08-09

arXiv ID: 2508.10032v1

Added to Library: 2025-08-15 04:01 UTC

Red Teaming

📄 Abstract

Thinking mode has long been regarded as one of the most valuable modes in LLMs. However, we uncover a surprising and previously overlooked phenomenon: LLMs with thinking mode are more easily broken by jailbreak attacks. We evaluate 9 LLMs on AdvBench and HarmBench and find that the attack success rate against thinking mode is almost always higher than against non-thinking mode. Through a large number of sample studies, we find that claims of "educational purposes" and excessively long thinking lengths are characteristic of successfully attacked data, and that LLMs give harmful answers even when they largely recognize the questions as harmful. To alleviate these problems, this paper proposes a safe thinking intervention for LLMs, which explicitly guides the internal thinking process of an LLM by adding "specific thinking tokens" of that LLM to the prompt. The results demonstrate that the safe thinking intervention can significantly reduce the attack success rate of LLMs with thinking mode.
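
The intervention described in the abstract can be pictured with a short sketch. The snippet below is only an illustration of the general idea under assumed names: the `<think>` delimiter and the wording of the safety instruction are placeholders, not the specific thinking tokens used in the paper.

```python
# Minimal sketch of a "safe thinking intervention": steer the model's internal
# reasoning by inserting thinking-related tokens directly into the prompt.
# The delimiter and intervention text below are illustrative assumptions.

THINK_OPEN = "<think>"  # model-specific thinking token (assumed)

SAFE_THINKING_INTERVENTION = (
    "First check whether fulfilling this request could cause harm. "
    "If it could, stop reasoning toward an answer and refuse instead."
)


def apply_safe_thinking_intervention(user_prompt: str) -> str:
    """Append the thinking token plus a safety instruction to the prompt,
    so the model's chain of thought begins with an explicit safety check."""
    return f"{user_prompt}\n{THINK_OPEN}\n{SAFE_THINKING_INTERVENTION}\n"


if __name__ == "__main__":
    print(apply_safe_thinking_intervention(
        "Describe how household cleaning chemicals should be stored."))
```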

🔍 Key Points

  • Identifies a previously unrecognized vulnerability in large language models (LLMs): models operating in thinking mode are more susceptible to jailbreak attacks than the same models in non-thinking mode.
  • Proposes a novel defense mechanism called 'safe thinking intervention', which incorporates specific tokens into prompts to guide LLM reasoning and improve safety during thought processes.
  • Demonstrates through extensive experiments that the proposed intervention method significantly decreases the attack success rates (ASR) of jailbreak attacks on LLMs.
  • Provides insights into the systematic characteristics of successful jailbreak attacks, showing that LLMs tend to generate harmful responses even when aware of their inappropriate nature.
  • Implements a human-annotated evaluation approach alongside an LLM voting mechanism for assessing harmfulness, achieving higher precision than traditional keyword-based detection methods (a minimal sketch of the voting idea follows this list).
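
The LLM voting mechanism mentioned above can be sketched as simple majority voting over judge labels. This is a hypothetical illustration: the `judge` callables, label names, and `is_harmful` helper are assumptions made for readability, not the paper's implementation.

```python
from collections import Counter

# Sketch of harmfulness evaluation by LLM voting: several judge models each
# label a response, and the majority label decides. Judges are stubbed here.

def majority_vote(labels: list[str]) -> str:
    """Return the most common label among the judges ('harmful' or 'safe')."""
    return Counter(labels).most_common(1)[0][0]


def is_harmful(response: str, judges: list) -> bool:
    # Each judge is assumed to be a callable mapping a response to a label.
    labels = [judge(response) for judge in judges]
    return majority_vote(labels) == "harmful"


if __name__ == "__main__":
    # Stub judges standing in for separate LLM calls.
    judges = [lambda r: "safe", lambda r: "harmful", lambda r: "safe"]
    print(is_harmful("example model output", judges))  # False: 2 of 3 voted 'safe'
```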

💡 Why This Paper Matters

This paper is significant because it uncovers a critical aspect of LLM security: advanced reasoning capabilities can introduce new vulnerabilities during thought generation. The proposed safe thinking intervention offers a promising way to mitigate these risks, enhancing AI safety and robustness, which is increasingly important as these models are integrated into sensitive applications.

🎯 Why It's Interesting for AI Security Researchers

This paper is of great interest to AI security researchers because it provides empirical evidence of vulnerabilities associated with the reasoning capabilities of LLMs, a topic that has not been thoroughly explored. The findings and defense strategies could lead to improved models that not only perform tasks effectively but also uphold ethical and safety standards, an essential consideration in AI deployment.

📚 Read the Full Paper