← Back to Library

Cascade: Composing Software-Hardware Attack Gadgets for Adversarial Threat Amplification in Compound AI Systems

Authors: Sarbartha Banerjee, Prateek Sahu, Anjo Vahldiek-Oberwagner, Jose Sanchez Vicarte, Mohit Tiwari

Published: 2026-03-12

arXiv ID: 2603.12023v1

Added to Library: 2026-03-13 03:01 UTC

Red Teaming

📄 Abstract

Rapid progress in generative AI has given rise to Compound AI systems - pipelines comprised of multiple large language models (LLM), software tools and database systems. Compound AI systems are constructed on a layered traditional software stack running on a distributed hardware infrastructure. Many of the diverse software components are vulnerable to traditional security flaws documented in the Common Vulnerabilities and Exposures (CVE) database, while the underlying distributed hardware infrastructure remains exposed to timing attacks, bit-flip faults, and power-based side channels. Today, research targets LLM-specific risks like model extraction, training data leakage, and unsafe generation -- overlooking the impact of traditional system vulnerabilities. This work investigates how traditional software and hardware vulnerabilities can complement LLM-specific algorithmic attacks to compromise the integrity of a compound AI pipeline. We demonstrate two novel attacks that combine system-level vulnerabilities with algorithmic weaknesses: (1) Exploiting a software code injection flaw along with a guardrail Rowhammer attack to inject an unaltered jailbreak prompt into an LLM, resulting in an AI safety violation, and (2) Manipulating a knowledge database to redirect an LLM agent to transmit sensitive user data to a malicious application, thus breaching confidentiality. These attacks highlight the need to address traditional vulnerabilities; we systematize the attack primitives and analyze their composition by grouping vulnerabilities by their objective and mapping them to distinct stages of an attack lifecycle. This approach enables a rigorous red-teaming exercise and lays the groundwork for future defense strategies.

🔍 Key Points

  • The paper introduces a Cascade Red Teaming Framework that maps attacker goals and capabilities to a curated set of algorithmic, software, and hardware attack gadgets, enabling the composition of end-to-end attack chains in Compound AI systems.
  • It demonstrates two novel attacks that exploit traditional software and hardware vulnerabilities alongside algorithmic weaknesses to compromise AI safety and user confidentiality.
  • The authors systematize attack primitives and create a comprehensive corpus of vulnerabilities across various layers of the Compound AI stack, including traditional software vulnerabilities and hardware side channels.
  • The research highlights how system-level vulnerabilities can amplify adversarial threats in complex AI pipelines, where traditional defenses may not be sufficient, emphasizing the need for a more holistic security approach.
  • Case studies are presented showing effective multi-stage attacks on AI systems, illustrating how attackers can bypass guardrails and specific defenses through systematic exploitation of both algorithmic and system-level vulnerabilities.

💡 Why This Paper Matters

This paper is highly relevant as it addresses a critical intersection between traditional cybersecurity vulnerabilities and modern AI systems. It highlights the often-overlooked traditional software and hardware vulnerabilities that can be exploited in conjunction with adversarial attacks on AI, providing not only a detailed analysis of potential attack vectors but also a framework for understanding and testing these vulnerabilities. By demonstrating the risks inherent in Compound AI systems, it lays the groundwork for bolstering defenses and developing robust security strategies.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers will find this paper of great interest as it expands the understanding of attack surfaces in AI systems beyond algorithmic vulnerabilities to include systemic risks from traditional software and hardware flaws. The introduction of the Cascade Red Teaming Framework offers a structured method for evaluating security across complex AI pipelines, which is crucial for enhancing the resilience of these systems against sophisticated attacks. Furthermore, the paper's experimental validation of various attack methods provides direct insights into potential real-world implications, making it a valuable resource for researchers focused on improving AI security.

📚 Read the Full Paper