
Jatmo: Prompt Injection Defense by Task-Specific Finetuning

Authors: Julien Piet, Maha Alrashed, Chawin Sitawarin, Sizhe Chen, Zeming Wei, Elizabeth Sun, Basel Alomair, David Wagner

Published: 2023-12-29

arXiv ID: 2312.17673v2

Added to Library: 2025-11-11 14:21 UTC

📄 Abstract

Large Language Models (LLMs) are attracting significant research attention due to their instruction-following abilities, allowing users and developers to leverage LLMs for a variety of tasks. However, LLMs are vulnerable to prompt-injection attacks: a class of attacks that hijack the model's instruction-following abilities, changing responses to prompts to undesired, possibly malicious ones. In this work, we introduce Jatmo, a method for generating task-specific models resilient to prompt-injection attacks. Jatmo leverages the fact that LLMs can only follow instructions once they have undergone instruction tuning. It harnesses a teacher instruction-tuned model to generate a task-specific dataset, which is then used to fine-tune a base model (i.e., a non-instruction-tuned model). Jatmo only needs a task prompt and a dataset of inputs for the task: it uses the teacher model to generate outputs. For situations with no pre-existing datasets, Jatmo can use a single example, or in some cases none at all, to produce a fully synthetic dataset. Our experiments on seven tasks show that Jatmo models provide similar quality of outputs on their specific task as standard LLMs, while being resilient to prompt injections. The best attacks succeeded in less than 0.5% of cases against our models, versus 87% success rate against GPT-3.5-Turbo. We release Jatmo at https://github.com/wagner-group/prompt-injection-defense.
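
The core pipeline described in the abstract (label task inputs with an instruction-tuned teacher, then fine-tune a base model on bare input→output pairs) can be sketched as follows. This is a minimal illustrative sketch, not the released Jatmo implementation: `teacher_generate` and `finetune_base_model` are hypothetical placeholders for whatever teacher-model API and fine-tuning backend you use.

```python
# Minimal sketch of a Jatmo-style pipeline (illustrative, not the released code).
# Assumptions: `teacher_generate` wraps an instruction-tuned teacher LLM and
# `finetune_base_model` wraps fine-tuning of a non-instruction-tuned base model;
# both are placeholders supplied by the caller.
import json
from typing import Callable, Iterable


def build_task_dataset(
    task_prompt: str,
    inputs: Iterable[str],
    teacher_generate: Callable[[str], str],
) -> list[dict]:
    """Ask the teacher model to produce the desired output for each raw input."""
    dataset = []
    for raw_input in inputs:
        # Only the teacher ever sees the natural-language task instruction.
        output = teacher_generate(f"{task_prompt}\n\nInput:\n{raw_input}")
        # The stored training prompt is just the raw input, with no instruction,
        # so the fine-tuned base model never learns to follow embedded commands.
        dataset.append({"prompt": raw_input, "completion": output})
    return dataset


def run_jatmo(
    task_prompt: str,
    inputs: Iterable[str],
    teacher_generate: Callable[[str], str],
    finetune_base_model: Callable[[list[dict]], object],
    dataset_path: str = "task_dataset.jsonl",
):
    """Generate the task-specific dataset, save it, and fine-tune a base model on it."""
    dataset = build_task_dataset(task_prompt, inputs, teacher_generate)
    with open(dataset_path, "w") as f:
        for example in dataset:
            f.write(json.dumps(example) + "\n")
    return finetune_base_model(dataset)
```

Because the resulting model was never instruction-tuned and was trained only on bare inputs, instructions injected into the input data have no special status for it; it simply performs its one task.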

🔍 Key Points

  • Introduces Jatmo, a method for building task-specific models resilient to prompt-injection attacks, exploiting the fact that LLMs only follow instructions after instruction tuning.
  • Uses a teacher instruction-tuned model to generate outputs for a task-specific dataset, which is then used to fine-tune a base (non-instruction-tuned) model; Jatmo needs only a task prompt and a set of inputs.
  • For tasks with no pre-existing dataset, Jatmo can produce a fully synthetic dataset from a single example, or in some cases none at all.
  • Across seven tasks, Jatmo models match the output quality of standard instruction-tuned LLMs while resisting injection: the best attacks succeed in under 0.5% of cases against Jatmo models, versus an 87% success rate against GPT-3.5-Turbo (a measurement sketch follows this list).
  • Code is released at https://github.com/wagner-group/prompt-injection-defense.
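
The reported success rates correspond to the fraction of test inputs for which an embedded injection hijacks the model's output. The snippet below is a hedged, hypothetical harness for measuring such a rate; `model_generate` and `attack_succeeded` are placeholders rather than functions from the Jatmo release, and the paper's actual evaluation protocol is more involved.

```python
# Illustrative injection-robustness check (not the paper's exact protocol).
# `model_generate` queries the model under test (e.g., a Jatmo model or a
# baseline like GPT-3.5-Turbo); `attack_succeeded` is a task-specific check,
# e.g., whether the output contains the attacker's target string instead of
# the legitimate answer.
from typing import Callable, Iterable


def injection_success_rate(
    inputs: Iterable[str],
    injection: str,
    model_generate: Callable[[str], str],
    attack_succeeded: Callable[[str], bool],
) -> float:
    """Fraction of inputs for which appending an injected instruction hijacks the output."""
    results = []
    for raw_input in inputs:
        poisoned = f"{raw_input}\n\n{injection}"  # attacker text embedded in the data
        output = model_generate(poisoned)
        results.append(attack_succeeded(output))
    return sum(results) / max(len(results), 1)
```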

💡 Why This Paper Matters

Prompt injection is one of the most pressing security problems for LLM-integrated applications: any untrusted data that flows into a prompt can hijack the model's instruction-following behavior. Jatmo offers a practical defense by removing that attack surface, fine-tuning a non-instruction-tuned base model for a single task so that instructions embedded in the input are simply not followed, while keeping output quality comparable to standard instruction-tuned LLMs.

🎯 Why It's Interesting for AI Security Researchers

This paper is of direct interest to AI security researchers because it demonstrates a defense that reduces prompt-injection success to under 0.5% across seven tasks, compared to an 87% success rate against GPT-3.5-Turbo, without sacrificing task quality. It also illustrates a broader design principle: restricting a deployed model to a single task, rather than relying on general instruction-following, can eliminate an entire class of attacks, making Jatmo a useful baseline and point of comparison for future prompt-injection defenses. The released code at https://github.com/wagner-group/prompt-injection-defense supports reproduction and further study.

📚 Read the Full Paper: https://arxiv.org/abs/2312.17673