← Back to Library

AgentDyn: A Dynamic Open-Ended Benchmark for Evaluating Prompt Injection Attacks of Real-World Agent Security System

Authors: Hao Li, Ruoyao Wen, Shanghao Shi, Ning Zhang, Chaowei Xiao

Published: 2026-02-03

arXiv ID: 2602.03117v2

Added to Library: 2026-02-09 03:03 UTC

Red Teaming

📄 Abstract

AI agents that autonomously interact with external tools and environments show great promise across real-world applications. However, the external data which agent consumes also leads to the risk of indirect prompt injection attacks, where malicious instructions embedded in third-party content hijack agent behavior. Guided by benchmarks, such as AgentDojo, there has been significant amount of progress in developing defense against the said attacks. As the technology continues to mature, and that agents are increasingly being relied upon for more complex tasks, there is increasing pressing need to also evolve the benchmark to reflect threat landscape faced by emerging agentic systems. In this work, we reveal three fundamental flaws in current benchmarks and push the frontier along these dimensions: (i) lack of dynamic open-ended tasks, (ii) lack of helpful instructions, and (iii) simplistic user tasks. To bridge this gap, we introduce AgentDyn, a manually designed benchmark featuring 60 challenging open-ended tasks and 560 injection test cases across Shopping, GitHub, and Daily Life. Unlike prior static benchmarks, AgentDyn requires dynamic planning and incorporates helpful third-party instructions. Our evaluation of ten state-of-the-art defenses suggests that almost all existing defenses are either not secure enough or suffer from significant over-defense, revealing that existing defenses are still far from real-world deployment. Our benchmark is available at https://github.com/leolee99/AgentDyn.

🔍 Key Points

  • Introduction of AgentDyn as a dynamic open-ended benchmark for evaluating prompt injection attacks on AI agent security systems.
  • Identification of three critical flaws in existing benchmarks: lack of dynamic tasks, absence of helpful instructions, and overly simplistic user tasks.
  • AgentDyn features 60 challenging tasks and 560 injection test cases that require dynamic planning and incorporate helpful third-party instructions.
  • Evaluation of ten state-of-the-art defenses revealed that most are insufficient for real-world scenarios, suffering from issues like over-defense and inability to distinguish between benign and harmful instructions.
  • Insights from AgentDyn challenge the effectiveness of existing defenses and encourage the development of more robust agent security strategies.

💡 Why This Paper Matters

This paper is significant as it addresses the growing concern over prompt injection attacks in AI agent systems by introducing a novel and comprehensive benchmark, AgentDyn. It reveals inherent limitations in current evaluation methods, ultimately emphasizing the need for improved defenses against such attacks. The findings not only contribute to the academic discourse but also have practical implications for deploying safer AI systems in real-world applications.

🎯 Why It's Interesting for AI Security Researchers

This research is crucial for AI security researchers as it systematically evaluates existing defenses against prompt injection attacks, providing a clearer picture of their effectiveness in dynamic, real-world situations. The development of AgentDyn as a benchmark offers a foundation for future research to build upon, driving innovation in AI security measures and prompting a re-assessment of current methodologies in combating prompt injection threats.

📚 Read the Full Paper