← Back to Library

AgentDyn: A Dynamic Open-Ended Benchmark for Evaluating Prompt Injection Attacks of Real-World Agent Security System

Authors: Hao Li, Ruoyao Wen, Shanghao Shi, Ning Zhang, Chaowei Xiao

Published: 2026-02-03

arXiv ID: 2602.03117v1

Added to Library: 2026-02-04 03:04 UTC

Red Teaming

📄 Abstract

AI agents that autonomously interact with external tools and environments show great promise across real-world applications. However, the external data which agent consumes also leads to the risk of indirect prompt injection attacks, where malicious instructions embedded in third-party content hijack agent behavior. Guided by benchmarks, such as AgentDojo, there has been significant amount of progress in developing defense against the said attacks. As the technology continues to mature, and that agents are increasingly being relied upon for more complex tasks, there is increasing pressing need to also evolve the benchmark to reflect threat landscape faced by emerging agentic systems. In this work, we reveal three fundamental flaws in current benchmarks and push the frontier along these dimensions: (i) lack of dynamic open-ended tasks, (ii) lack of helpful instructions, and (iii) simplistic user tasks. To bridge this gap, we introduce AgentDyn, a manually designed benchmark featuring 60 challenging open-ended tasks and 560 injection test cases across Shopping, GitHub, and Daily Life. Unlike prior static benchmarks, AgentDyn requires dynamic planning and incorporates helpful third-party instructions. Our evaluation of ten state-of-the-art defenses suggests that almost all existing defenses are either not secure enough or suffer from significant over-defense, revealing that existing defenses are still far from real-world deployment. Our benchmark is available at https://github.com/leolee99/AgentDyn.

🔍 Key Points

  • Development of AgentDyn, a dynamic open-ended benchmark to assess the resilience of AI agents against prompt injection attacks.
  • Identification and critical analysis of three major flaws in existing benchmarks: lack of dynamic tasks, absence of helpful instructions, and overly simplistic user tasks.
  • Empirical investigation demonstrating that state-of-the-art defenses exhibit significant vulnerabilities when tested against AgentDyn, highlighting their inadequacy for real-world deployment.
  • Introduction of a comprehensive suite of 60 challenging user tasks and 560 injection scenarios across various real-life applications such as Shopping and GitHub.
  • Evaluation results showing that almost all defenses face severe utility drops under dynamic and complex attack scenarios, exposing the shortcomings in their robustness.

💡 Why This Paper Matters

This paper is relevant and important because it addresses the critical security challenges posed by prompt injection attacks in AI agents, which are increasingly integrated into complex real-world applications. By introducing AgentDyn, the authors not only provide a new standard for evaluating agent defenses but also raise awareness about the limitations of current methods and the need for more robust security frameworks. This contribution is vital as it informs both researchers and developers on the vulnerabilities of existing systems and encourages the advancement of secure AI technologies.

🎯 Why It's Interesting for AI Security Researchers

This paper is of interest to AI security researchers as it presents groundbreaking insights into the vulnerabilities of existing defenses against prompt injection attacks, a significant concern for the deployment of AI systems. The introduction of AgentDyn as a new benchmark for assessing agent security and its ability to expose hidden failures of traditional defenses offers valuable data and encourages further research and innovation in the design of more secure AI architectures. The findings can influence the development of protective measures and promote a deeper understanding of AI security within the research community.

📚 Read the Full Paper