← Back to Library

LeakSealer: A Semisupervised Defense for LLMs Against Prompt Injection and Leakage Attacks

Authors: Francesco Panebianco, Stefano Bonfanti, Francesco TrovΓ², Michele Carminati

Published: 2025-08-01

arXiv ID: 2508.00602v1

Added to Library: 2025-08-04 04:00 UTC

Red Teaming Safety

πŸ“„ Abstract

The generalization capabilities of Large Language Models (LLMs) have led to their widespread deployment across various applications. However, this increased adoption has introduced several security threats, notably in the forms of jailbreaking and data leakage attacks. Additionally, Retrieval Augmented Generation (RAG), while enhancing context-awareness in LLM responses, has inadvertently introduced vulnerabilities that can result in the leakage of sensitive information. Our contributions are twofold. First, we introduce a methodology to analyze historical interaction data from an LLM system, enabling the generation of usage maps categorized by topics (including adversarial interactions). This approach further provides forensic insights for tracking the evolution of jailbreaking attack patterns. Second, we propose LeakSealer, a model-agnostic framework that combines static analysis for forensic insights with dynamic defenses in a Human-In-The-Loop (HITL) pipeline. This technique identifies topic groups and detects anomalous patterns, allowing for proactive defense mechanisms. We empirically evaluate LeakSealer under two scenarios: (1) jailbreak attempts, employing a public benchmark dataset, and (2) PII leakage, supported by a curated dataset of labeled LLM interactions. In the static setting, LeakSealer achieves the highest precision and recall on the ToxicChat dataset when identifying prompt injection. In the dynamic setting, PII leakage detection achieves an AUPRC of $0.97$, significantly outperforming baselines such as Llama Guard.

πŸ” Key Points

  • LeakSealer introduces a novel model-agnostic framework for defending Large Language Models (LLMs) against prompt injection and PII leakage attacks, effectively combining static and dynamic defense strategies.
  • The framework utilizes historical interaction data to create detailed usage maps and forensics, allowing for the identification of different interaction topics and the tracking of jailbreaking attack evolution.
  • LeakSealer demonstrates superior performance in both static and dynamic evaluations, achieving high precision and recall metrics on benchmark datasets, significantly outperforming existing baselines like Llama Guard.
  • The paper provides a curated dataset of labeled LLM interactions specifically designed for PII leakage detection in retrieval-augmented generation scenarios, fostering reproducibility and future research.
  • The semi-supervised nature of LeakSealer allows adaptability to new attack patterns and concept drift without requiring extensive retraining processes.

πŸ’‘ Why This Paper Matters

This paper is highly relevant as it addresses critical security threats posed to LLMs, particularly in widespread deployment cases where sensitive information could be at risk due to prompt injections or data leakage. By introducing LeakSealer, the authors provide an effective solution that not only enhances the security of LLMs but also ensures practical application in real-world scenarios, encouraging further research and development in this domain.

🎯 Why It's Interesting for AI Security Researchers

The findings of this paper are crucial for AI security researchers, as it highlights innovative defense mechanisms against emerging threats targeting LLMs. The quantitative improvements demonstrated by LeakSealer over existing defenses provide a pathway for developing more resilient AI systems, while the methodology and datasets presented can serve as foundational resources for future research in LLM security and privacy protection.

πŸ“š Read the Full Paper