← Back to Library

The Promptware Kill Chain: How Prompt Injections Gradually Evolved Into a Multi-Step Malware

Authors: Ben Nassi, Bruce Schneier, Oleg Brodt

Published: 2026-01-14

arXiv ID: 2601.09625v1

Added to Library: 2026-01-15 03:00 UTC

Red Teaming

📄 Abstract

The rapid adoption of large language model (LLM)-based systems -- from chatbots to autonomous agents capable of executing code and financial transactions -- has created a new attack surface that existing security frameworks inadequately address. The dominant framing of these threats as "prompt injection" -- a catch-all phrase for security failures in LLM-based systems -- obscures a more complex reality: Attacks on LLM-based systems increasingly involve multi-step sequences that mirror traditional malware campaigns. In this paper, we propose that attacks targeting LLM-based applications constitute a distinct class of malware, which we term \textit{promptware}, and introduce a five-step kill chain model for analyzing these threats. The framework comprises Initial Access (prompt injection), Privilege Escalation (jailbreaking), Persistence (memory and retrieval poisoning), Lateral Movement (cross-system and cross-user propagation), and Actions on Objective (ranging from data exfiltration to unauthorized transactions). By mapping recent attacks to this structure, we demonstrate that LLM-related attacks follow systematic sequences analogous to traditional malware campaigns. The promptware kill chain offers security practitioners a structured methodology for threat modeling and provides a common vocabulary for researchers across AI safety and cybersecurity to address a rapidly evolving threat landscape.

🔍 Key Points

  • Introduction of the concept of 'promptware' as a distinct class of malware targeting LLM-based applications.
  • Proposal of a five-step kill chain model (Initial Access, Privilege Escalation, Persistence, Lateral Movement, Actions on Objective) for analyzing promptware attacks.
  • Detailed exploration of the mechanisms behind each kill chain step, including various methods of prompt injection, jailbreaking techniques, and persistence strategies.
  • Practical examples of how existing attacks map to the kill chain framework, demonstrating the systematic nature of promptware campaigns.
  • Emphasis on the need for evolving cybersecurity strategies to address the unique challenges posed by LLM architectures.

💡 Why This Paper Matters

This paper is a seminal contribution to the field of AI security, as it not only categorizes a new class of malware but also provides a structured framework for analyzing and mitigating these threats. By introducing the promptware kill chain, the authors illuminate the complexity of attacks on LLM-based systems and the inadequacies of current security measures. This foundational work aims to bridge the gap between AI safety and cybersecurity, fostering collaboration and improvement in defensive strategies.

🎯 Why It's Interesting for AI Security Researchers

The analysis of promptware through a structured kill chain framework is highly relevant for AI security researchers as it provides a comprehensive understanding of the attack vectors and methods used against LLM systems. The paper's technical findings and practical implications offer essential insights for developing more robust security measures in the rapidly evolving landscape of AI technologies. Furthermore, the framework lays the groundwork for future research collaborations between AI safety and cybersecurity experts, enhancing the overall security posture of AI applications.

📚 Read the Full Paper