← Back to Library

Are AI-assisted Development Tools Immune to Prompt Injection?

Authors: Charoes Huang, Xin Huang, Amin Milani Fard

Published: 2026-03-23

arXiv ID: 2603.21642v1

Added to Library: 2026-03-24 03:03 UTC

Red Teaming

📄 Abstract

Prompt injection is listed as the number-one vulnerability class in the OWASP Top 10 for LLM Applications that can subvert LLM guardrails, disclose sensitive data, and trigger unauthorized tool use. Developers are rapidly adopting AI-assisted development tools built on the Model Context Protocol (MCP). However, their convenience comes with security risks, especially prompt-injection attacks delivered via tool-poisoning vectors. While prior research has studied prompt injection in LLMs, the security posture of real-world MCP clients remains underexplored. We present the first empirical analysis of prompt injection with the tool-poisoning vulnerability across seven widely used MCP clients: Claude Desktop, Claude Code, Cursor, Cline, Continue, Gemini CLI, and Langflow. We identify their detection and mitigation mechanisms, as well as the coverage of security features, including static validation, parameter visibility, injection detection, user warnings, execution sandboxing, and audit logging. Our evaluation reveals significant disparities. While some clients, such as Claude Desktop, implement strong guardrails, others, such as Cursor, exhibit high susceptibility to cross-tool poisoning, hidden parameter exploitation, and unauthorized tool invocation. We further provide actionable guidance for MCP implementers and the software engineering community seeking to build secure AI-assisted development workflows.

🔍 Key Points

  • First empirical analysis of tool-poisoning vulnerabilities across seven widely used AI-assisted development tools (MCP clients).
  • Identification of significant disparities in security postures, revealing that some clients (e.g., Cursor) are highly susceptible to prompt injection compared to others (e.g., Claude Desktop).
  • Comprehensive evaluation of security features including detection and mitigation mechanisms, advocating for the implementation of static validation and execution sandboxing.
  • Findings highlight the need for proactive security measures in MCP design, emphasizing that security practices must be integral, not merely afterthoughts.
  • Actionable recommendations for developers, organizations, and policymakers aiming to improve security in AI-assisted development workflows.

💡 Why This Paper Matters

This paper is crucial as it uncovers the vulnerabilities associated with prompt injection in AI-assisted development tools, which are increasingly adopted in software engineering. By providing empirical data and analyzing security measures across multiple clients, the authors shed light on weaknesses in current protocols and the necessity for improved security practices. These insights are vital for ensuring safe AI tool usage and protecting sensitive data in development environments.

🎯 Why It's Interesting for AI Security Researchers

The paper holds significant relevance for AI security researchers as it addresses a pressing threat — prompt injection attacks that exploit the architectural design of AI-assisted development tools. It contributes to the understanding of vulnerabilities in real-world applications and opens avenues for further research into defense mechanisms. The empirical evaluations provide a foundational basis for future studies directed at enhancing the security frameworks of AI applications.

📚 Read the Full Paper