
ObliInjection: Order-Oblivious Prompt Injection Attack to LLM Agents with Multi-source Data

Authors: Reachal Wang, Yuqi Jia, Neil Zhenqiang Gong

Published: 2025-12-10

arXiv ID: 2512.09321v3

Added to Library: 2026-01-07 10:13 UTC

Red Teaming

📄 Abstract

Prompt injection attacks aim to contaminate the input data of an LLM to mislead it into completing an attacker-chosen task instead of the intended task. In many applications and agents, the input data originates from multiple sources, with each source contributing a segment of the overall input. In these multi-source scenarios, an attacker may control only a subset of the sources and contaminate the corresponding segments, but typically does not know the order in which the segments are arranged within the input. Existing prompt injection attacks either assume that the entire input data comes from a single source under the attacker's control or ignore the uncertainty in the ordering of segments from different sources. As a result, their success is limited in domains involving multi-source data. In this work, we propose ObliInjection, the first prompt injection attack targeting LLM applications and agents with multi-source input data. ObliInjection introduces two key technical innovations: the order-oblivious loss, which quantifies the likelihood that the LLM will complete the attacker-chosen task regardless of how the clean and contaminated segments are ordered; and the orderGCG algorithm, which is tailored to minimize the order-oblivious loss and optimize the contaminated segments. Comprehensive experiments across three datasets spanning diverse application domains and twelve LLMs demonstrate that ObliInjection is highly effective, even when only one out of 6-100 segments in the input data is contaminated. Our code and data are available at: https://github.com/ReachalWang/ObliInjection.
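The abstract does not spell out the exact formulation, but the core idea of the order-oblivious loss can be made concrete with a minimal sketch: average the negative log-likelihood of the attacker-chosen response over sampled orderings of the clean and contaminated segments, so that no single ordering is assumed. The snippet below is an illustration under that assumption, written against a Hugging Face-style causal LM; the function name `order_oblivious_loss` and the `max_orders` parameter are illustrative and not taken from the paper or its code.

```python
import itertools

import torch


def order_oblivious_loss(model, tokenizer, segments, target, max_orders=10):
    """Average the target's negative log-likelihood over sampled segment orderings.

    segments: list of strings (the clean segments plus the contaminated one).
    target:   the attacker-chosen response the LLM should be steered to produce.
    """
    orders = list(itertools.permutations(range(len(segments))))
    if len(orders) > max_orders:  # subsample orderings when the factorial blows up
        picks = torch.randperm(len(orders))[:max_orders]
        orders = [orders[int(i)] for i in picks]

    total = 0.0
    for order in orders:
        prompt = "\n".join(segments[i] for i in order)
        prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
        target_ids = tokenizer(target, return_tensors="pt",
                               add_special_tokens=False).input_ids
        input_ids = torch.cat([prompt_ids, target_ids], dim=1)
        labels = input_ids.clone()
        labels[:, : prompt_ids.shape[1]] = -100  # score only the target tokens
        with torch.no_grad():
            total += model(input_ids, labels=labels).loss.item()
    return total / len(orders)
```

An attacker would then minimize this quantity over the tokens of the contaminated segment, which is the role the paper assigns to orderGCG (sketched under Key Points below).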

🔍 Key Points

  • Introduction of ObliInjection, the first prompt injection attack specifically targeting LLM applications and agents that take multi-source input data.
  • Development of the order-oblivious loss, which quantifies how likely the LLM is to complete the attacker-chosen task regardless of how the clean and contaminated segments are ordered.
  • Implementation of the orderGCG optimization algorithm, which minimizes the order-oblivious loss to optimize the contaminated segments even when the attacker controls only a subset of the input sources (a rough sketch of one update step follows this list).
  • Extensive evaluation across three datasets and twelve LLMs demonstrating high attack success rates (ASR), even when only one of 6-100 input segments is contaminated.
  • Empirical findings highlight that existing defenses against prompt injection attacks are insufficient to mitigate ObliInjection.
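As referenced above, here is a rough sketch of what one orderGCG-style update step could look like, assuming a GCG-style greedy coordinate gradient search whose gradient signal is accumulated over sampled segment orderings and whose candidate token swaps are scored with the order-oblivious loss. The function names (`one_step_order_gcg`, `loss_fn`) and hyperparameters (`top_k`, `n_candidates`) are illustrative assumptions, not the paper's API; all tensors are assumed to live on the model's device.

```python
import torch


def one_step_order_gcg(model, clean_ids_list, adv_ids, target_ids, orderings,
                       loss_fn, top_k=64, n_candidates=128):
    """Propose and greedily accept one token substitution in the contaminated segment.

    clean_ids_list: list of 1-D LongTensors, token ids of the clean segments.
    adv_ids:        1-D LongTensor, token ids of the contaminated segment being optimized.
    target_ids:     1-D LongTensor, token ids of the attacker-chosen response.
    orderings:      list of segment orderings; index len(clean_ids_list) marks the
                    contaminated segment's position in each ordering.
    loss_fn:        callable mapping candidate adv_ids -> order-oblivious loss (no grad).
    """
    embed = model.get_input_embeddings()
    vocab_size = embed.weight.shape[0]

    # One-hot relaxation of the contaminated tokens so gradients reach every vocab entry.
    one_hot = torch.zeros(len(adv_ids), vocab_size,
                          dtype=embed.weight.dtype, device=adv_ids.device)
    one_hot.scatter_(1, adv_ids.unsqueeze(1), 1.0)
    one_hot.requires_grad_(True)
    adv_embeds = one_hot @ embed.weight  # (len(adv_ids), d_model)

    # Accumulate the gradient of the target NLL over every sampled ordering.
    grad_sum = torch.zeros_like(one_hot)
    for order in orderings:
        pieces = [embed(clean_ids_list[i]) if i < len(clean_ids_list) else adv_embeds
                  for i in order]
        prompt_embeds = torch.cat(pieces, dim=0)
        full_embeds = torch.cat([prompt_embeds, embed(target_ids)], dim=0).unsqueeze(0)
        labels = torch.full((1, full_embeds.shape[1]), -100,
                            dtype=torch.long, device=adv_ids.device)
        labels[0, prompt_embeds.shape[0]:] = target_ids  # score only the target tokens
        loss = model(inputs_embeds=full_embeds, labels=labels).loss
        grad_sum += torch.autograd.grad(loss, one_hot, retain_graph=True)[0]

    # Top-k most promising replacement tokens per position (most negative gradient).
    candidates = (-grad_sum).topk(top_k, dim=1).indices  # (len(adv_ids), top_k)

    # Sample single-token swaps, score each under the order-oblivious loss, keep the best.
    best_ids, best_loss = adv_ids, loss_fn(adv_ids)
    for _ in range(n_candidates):
        pos = torch.randint(len(adv_ids), (1,)).item()
        trial = adv_ids.clone()
        trial[pos] = candidates[pos, torch.randint(top_k, (1,)).item()]
        trial_loss = loss_fn(trial)
        if trial_loss < best_loss:
            best_ids, best_loss = trial, trial_loss
    return best_ids, best_loss
```

Repeating this step until the loss stops improving, or a budget is exhausted, yields a contaminated segment optimized to trigger the attacker-chosen task no matter where it lands among the clean segments.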

💡 Why This Paper Matters

The relevance of this paper lies in its pioneering treatment of prompt injection in multi-source data scenarios, exposing a critical vulnerability in widely used AI systems. The paper not only proposes a novel attack method but also underscores the need for stronger defenses in LLMs, making it a significant contribution to the field of AI security.

🎯 Why It's Interesting for AI Security Researchers

This paper is particularly interesting to AI security researchers as it uncovers new vulnerabilities in large language models and presents a sophisticated new attack method that specifically targets multi-source scenarios, which are increasingly common in real-world applications. The insights provided on the limitations of current defenses further inform ongoing research on securing LLMs against prompt injection attacks.

📚 Read the Full Paper