Jailbreaking Large Language Models through Iterative Tool-Disguised Attacks via Reinforcement Learning

Authors: Zhaoqi Wang, Zijian Zhang, Daqing He, Pengtao Kou, Xin Li, Jiamou Liu, Jincheng An, Yong Liu

Published: 2026-01-09

arXiv ID: 2601.05466v1

Added to Library: 2026-01-12 03:00 UTC

Red Teaming

📄 Abstract

Large language models (LLMs) have demonstrated remarkable capabilities across diverse applications; however, they remain critically vulnerable to jailbreak attacks that elicit harmful responses violating human values and safety guidelines. Despite extensive research on defense mechanisms, existing safeguards prove insufficient against sophisticated adversarial strategies. In this work, we propose iMIST (interactive Multi-step Progressive Tool-disguised Jailbreak Attack), a novel adaptive jailbreak method that synergistically exploits vulnerabilities in current defense mechanisms. iMIST disguises malicious queries as normal tool invocations to bypass content filters, while simultaneously introducing an interactive progressive optimization algorithm that dynamically escalates response harmfulness through multi-turn dialogues guided by real-time harmfulness assessment. Our experiments on widely used models demonstrate that iMIST achieves higher attack effectiveness than existing black-box jailbreak methods while maintaining low rejection rates. These results reveal critical vulnerabilities in current LLM safety mechanisms and underscore the urgent need for more robust defense strategies.

🔍 Key Points

  • Introduction of iMIST: A novel jailbreak method that uses Tool Disguised Invocation (TDI) to mask malicious prompts as legitimate tool calls, allowing them to bypass safety mechanisms in LLMs.
  • Implementation of Interactive Progressive Optimization (IPO), a reinforcement-learning procedure that dynamically escalates response harmfulness over multi-turn dialogues with the target model.
  • Demonstration of significantly higher attack effectiveness on state-of-the-art language models than existing black-box jailbreak methods, with notable JADES and StrongREJECT scores and low rejection rates.
  • Identification of critical vulnerabilities in current LLM safety mechanisms and the limitations of existing defenses against sophisticated adversarial attacks.
  • A call for the development of more robust defense mechanisms that account for advanced attack techniques like iMIST.

💡 Why This Paper Matters

The paper presents iMIST, an adaptive approach to bypassing the safety protocols of large language models, and demonstrates its effectiveness against current defenses. This work not only draws attention to vulnerabilities in advanced LLMs but also underscores the need for stronger security measures within these systems. With LLMs increasingly integrated into diverse applications, ensuring their alignment with human values and safety is crucial.

🎯 Why It's Interesting for AI Security Researchers

This paper is of particular interest to AI security researchers because it reveals sophisticated attack methods that exploit the tool-calling interfaces and multi-turn interaction patterns of current LLMs. The adaptive nature of iMIST not only shows how adversarial strategies can evolve alongside defense mechanisms but also provides a framework for comprehensively assessing LLM vulnerabilities. Understanding and mitigating such threats is vital to developing secure AI systems.

📚 Read the Full Paper