The Promptware Kill Chain: How Prompt Injections Gradually Evolved Into a Multistep Malware Delivery Mechanism

Authors: Oleg Brodt, Elad Feldman, Bruce Schneier, Ben Nassi

Published: 2026-01-14

arXiv ID: 2601.09625v2

Added to Library: 2026-02-11 04:00 UTC

Red Teaming

📄 Abstract

Prompt injection was initially framed as the large language model (LLM) analogue of SQL injection. However, over the past three years, attacks labeled as prompt injection have evolved from isolated input-manipulation exploits into multistep attack mechanisms that resemble malware. In this paper, we argue that prompt injections evolved into promptware, a new class of malware execution mechanism triggered through prompts engineered to exploit an application's LLM. We introduce a seven-stage promptware kill chain: Initial Access (prompt injection), Privilege Escalation (jailbreaking), Reconnaissance, Persistence (memory and retrieval poisoning), Command and Control, Lateral Movement, and Actions on Objective. We analyze thirty-six prominent studies and real-world incidents affecting production LLM systems and show that at least twenty-one documented attacks traverse four or more stages of this kill chain, demonstrating that the threat model is not merely theoretical. We discuss the need for a defense-in-depth approach that addresses all stages of the promptware life cycle and review relevant countermeasures for each step. By moving the conversation from prompt injection to a promptware kill chain, our work provides analytical clarity, enables structured risk assessment, and lays a foundation for systematic security engineering of LLM-based systems.

🔍 Key Points

  • Introduction of the concept of 'promptware,' a new class of malware exploiting large language models (LLMs) through a multistep attack approach.
  • Presentation of a comprehensive seven-stage promptware kill chain: Initial Access, Privilege Escalation, Reconnaissance, Persistence, Command and Control, Lateral Movement, and Actions on Objective.
  • Analysis of thirty-six documented promptware attacks, demonstrating the evolution and increasing sophistication of these attacks over time, emphasizing that the threat is tangible and not merely theoretical.
  • Proposed defense-in-depth strategies that address all stages of the promptware kill chain, advocating for a systematic security engineering approach for applications utilizing LLMs.
  • Provision of case studies to illustrate the kill chain in action, highlighting real-world implications and the need for robust defenses.
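The seven stages lend themselves to a simple data model for triaging incidents. The following Python sketch (our own illustration, not code from the paper; the names and the four-stage threshold mirror the paper's framing) tags an observed attack with the kill-chain stages it traverses and flags it as multistage promptware:

```python
from enum import Enum


class KillChainStage(Enum):
    """The seven stages of the promptware kill chain, in order."""
    INITIAL_ACCESS = 1        # prompt injection
    PRIVILEGE_ESCALATION = 2  # jailbreaking
    RECONNAISSANCE = 3
    PERSISTENCE = 4           # memory and retrieval poisoning
    COMMAND_AND_CONTROL = 5
    LATERAL_MOVEMENT = 6
    ACTIONS_ON_OBJECTIVE = 7


def traverses_multiple_stages(stages, threshold=4):
    """Return True if an attack traverses at least `threshold` distinct stages.

    The paper counts attacks that traverse four or more stages as evidence
    that promptware is a multistep threat rather than isolated injection.
    """
    return len(set(stages)) >= threshold


# Hypothetical incident: an injected prompt jailbreaks the model, poisons
# its memory for persistence, and then exfiltrates data.
incident = [
    KillChainStage.INITIAL_ACCESS,
    KillChainStage.PRIVILEGE_ESCALATION,
    KillChainStage.PERSISTENCE,
    KillChainStage.ACTIONS_ON_OBJECTIVE,
]
print(traverses_multiple_stages(incident))  # True: four distinct stages
```

A mapping like this makes the paper's classification reproducible: each of the thirty-six documented attacks can be encoded as a set of stages and filtered with the same predicate.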

💡 Why This Paper Matters

This paper is significant as it reframes the discourse surrounding prompt injection vulnerabilities by presenting them in a structured way that emphasizes their complexity and potential impact. The introduction of the promptware concept provides clarity to a nuanced threat landscape as attacks evolve from simple manipulations into more structured and sophisticated exploits. Importantly, the paper offers actionable strategies for mitigating these attacks, thereby enhancing the security postures of LLM applications.

🎯 Why It's Interesting for AI Security Researchers

This paper captures the attention of AI security researchers by offering new frameworks for understanding emerging threats within LLM ecosystems. Its detailed analysis of the promptware kill chain provides a valuable tool for risk assessment and illustrates the necessity of comprehensive security measures. The findings serve as a call to action, pushing the research community to adapt and fortify defenses against increasingly complex attack vectors involving LLMs.

📚 Read the Full Paper