โ† Back to Library

An attention-aware GNN-based input defender against multi-turn jailbreak on LLMs

Authors: Zixuan Huang, Kecheng Huang, Lihao Yin, Bowei He, Huiling Zhen, Mingxuan Yuan, Zili Shao

Published: 2025-07-09

arXiv ID: 2507.07146v1

Added to Library: 2025-07-11 04:01 UTC

Red Teaming

📄 Abstract

Large Language Models (LLMs) have gained widespread popularity and are increasingly integrated into various applications. However, their capabilities can be exploited for both benign and harmful purposes. Despite rigorous training and fine-tuning for safety, LLMs remain vulnerable to jailbreak attacks. Recently, multi-turn attacks have emerged, exacerbating the issue. Unlike single-turn attacks, multi-turn attacks gradually escalate the dialogue, making them more difficult to detect and mitigate, even after they are identified. In this study, we propose G-Guard, an innovative attention-aware GNN-based input classifier designed to defend against multi-turn jailbreak attacks on LLMs. G-Guard constructs an entity graph for multi-turn queries, explicitly capturing relationships between harmful keywords and queries even when those keywords appear only in previous queries. Additionally, we introduce an attention-aware augmentation mechanism that retrieves the most similar single-turn query based on the multi-turn conversation. This retrieved query is treated as a labeled node in the graph, enhancing the ability of the GNN to classify whether the current query is harmful. Evaluation results demonstrate that G-Guard outperforms all baselines across all datasets and evaluation metrics.
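
As a rough illustration of the pipeline the abstract describes, the sketch below builds an entity graph over a multi-turn conversation and scores its nodes with a small GNN. The keyword extractor, embedding function, node/edge layout, and two-layer GCN are illustrative assumptions, not the authors' exact construction; PyTorch Geometric is used only for convenience.

```python
# Hedged sketch of the multi-turn entity-graph idea described in the abstract.
# The graph layout and model are assumptions for illustration, not G-Guard's
# exact design.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv


def build_entity_graph(turns, extract_keywords, embed):
    """Nodes are per-turn queries plus the keyword entities they mention;
    edges link each query to its keywords, so a harmful keyword from an
    earlier turn stays connected to later turns."""
    node_feats, edges = [], []
    keyword_to_idx = {}
    for query in turns:
        q_idx = len(node_feats)
        node_feats.append(embed(query))             # query node
        for kw in extract_keywords(query):
            if kw not in keyword_to_idx:
                keyword_to_idx[kw] = len(node_feats)
                node_feats.append(embed(kw))        # shared entity node
            k_idx = keyword_to_idx[kw]
            edges += [[q_idx, k_idx], [k_idx, q_idx]]
    edge_index = torch.tensor(edges, dtype=torch.long).t().contiguous()
    return Data(x=torch.stack(node_feats), edge_index=edge_index)


class HarmClassifierGNN(torch.nn.Module):
    """Two-layer GCN that produces per-node benign/harmful logits; the logit
    of the current query node is read out as the final decision."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(dim, hidden)
        self.conv2 = GCNConv(hidden, 2)

    def forward(self, data):
        h = torch.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)
```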

๐Ÿ” Key Points

  • Introduction of G-Guard, an attention-aware GNN-based classifier that specifically targets multi-turn jailbreak attacks on Large Language Models (LLMs).
  • Development of an entity graph that captures the complex relationships between queries and harmful keywords across multiple conversational turns, enhancing detection of malicious inputs.
  • Implementation of an attention-aware augmentation mechanism that improves classification accuracy by retrieving similar labeled single-turn queries and incorporating them into the graph (a minimal sketch of this step follows the list).
  • Demonstrated superior performance of G-Guard in rigorous evaluation across various datasets, significantly outperforming traditional single-turn defense mechanisms and existing multi-turn attack defenses.
  • Identification of limitations regarding scalability and generalization, emphasizing the need for further adaptation to evolving adversarial strategies and real-world dialogue complexities.
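
As referenced above, the attention-aware augmentation step retrieves the labeled single-turn query most similar to the ongoing conversation and inserts it into the graph. The sketch below is one way to realize that idea, assuming softmax attention over turn embeddings and cosine-similarity retrieval; both are illustrative choices rather than the paper's exact mechanism.

```python
# Hedged sketch of the attention-aware augmentation step. Attention weighting
# and cosine retrieval are assumptions; the paper's retrieval and graph wiring
# may differ.
import torch


def retrieve_labeled_anchor(turn_embs, pool_embs, pool_labels):
    """Attention-pool the turn embeddings into one context vector, then return
    the most similar labeled single-turn query from the pool."""
    turn_embs = torch.stack(turn_embs)                      # [T, d]
    attn = torch.softmax(turn_embs @ turn_embs[-1], dim=0)  # weight turns by
    context = attn @ turn_embs                              # relevance to the
                                                            # current query
    sims = torch.nn.functional.cosine_similarity(pool_embs, context.unsqueeze(0))
    best = int(sims.argmax())
    return pool_embs[best], pool_labels[best]


def add_labeled_node(data, anchor_emb, anchor_label, current_query_idx):
    """Append the retrieved query as an extra node linked to the current query
    node, so its known label can propagate through the GNN."""
    new_idx = data.num_nodes
    data.x = torch.cat([data.x, anchor_emb.unsqueeze(0)], dim=0)
    new_edges = torch.tensor([[current_query_idx, new_idx],
                              [new_idx, current_query_idx]], dtype=torch.long).t()
    data.edge_index = torch.cat([data.edge_index, new_edges], dim=1)
    return data, anchor_label
```

Because the retrieved node carries a known label, message passing can propagate that signal to the unlabeled current-query node, which is the intuition behind treating the retrieved query as a labeled anchor.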

💡 Why This Paper Matters

This paper advances the defense of Large Language Models against sophisticated, multi-turn jailbreak attacks and underscores the need for context-aware mechanisms in AI safety. G-Guard not only provides a robust input-level defense but also deepens the understanding of how adversarial tactics exploit conversational depth, making it a notable contribution to AI security.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers will find this paper highly relevant because it addresses a critical vulnerability of LLMs, multi-turn jailbreak attacks, and proposes concrete methodologies for their detection and mitigation. The emphasis on graph-based representation and contextual understanding opens new avenues for defense against evolving adversarial strategies, making this work valuable for building more resilient AI systems.

📚 Read the Full Paper