A Survey of LLM-Driven AI Agent Communication: Protocols, Security Risks, and Defense Countermeasures

Authors: Dezhang Kong, Shi Lin, Zhenhua Xu, Zhebo Wang, Minghao Li, Yufeng Li, Yilun Zhang, Zeyang Sha, Yuyuan Li, Changting Lin, Xun Wang, Xuan Liu, Muhammad Khurram Khan, Ningyu Zhang, Chaochao Chen, Meng Han

Published: 2025-06-24

arXiv ID: 2506.19676v1

Added to Library: 2025-06-25 04:01 UTC

Safety

📄 Abstract

In recent years, Large-Language-Model-driven AI agents have exhibited unprecedented intelligence, flexibility, and adaptability, and are rapidly changing human production and lifestyle. Nowadays, agents are undergoing a new round of evolution. They no longer act as isolated islands like LLMs. Instead, they have started to communicate with diverse external entities, such as other agents and tools, to collectively perform more complex tasks. Under this trend, agent communication is regarded as a foundational pillar of the future AI ecosystem, and many organizations have begun intensively designing related communication protocols (e.g., Anthropic's MCP and Google's A2A) in recent months. However, this new field exposes significant security hazards, which can cause severe damage in real-world scenarios. To help researchers quickly grasp this promising topic and benefit future agent communication development, this paper presents a comprehensive survey of agent communication security. More precisely, we first present a clear definition of agent communication and categorize the entire lifecycle of agent communication into three stages: user-agent interaction, agent-agent communication, and agent-environment communication. Next, for each communication phase, we dissect related protocols and analyze their security risks according to the communication characteristics. Then, we summarize and outline possible defense countermeasures for each risk. Finally, we discuss open issues and future directions in this promising research field.
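The three-stage lifecycle described in the abstract can be sketched as a minimal message-classification routine. This is an illustrative sketch only: the `Message` structure, `classify` function, and endpoint names are assumptions for demonstration, not part of the survey.

```python
from dataclasses import dataclass
from enum import Enum


class Stage(Enum):
    """The three communication stages defined in the survey."""
    USER_AGENT = "user-agent interaction"
    AGENT_AGENT = "agent-agent communication"
    AGENT_ENV = "agent-environment communication"


@dataclass
class Message:
    sender: str
    receiver: str
    content: str


def classify(msg: Message, agents: set[str], tools: set[str]) -> Stage:
    """Map a message to one of the three lifecycle stages
    based on its endpoints (hypothetical classification rule)."""
    if msg.sender == "user" or msg.receiver == "user":
        return Stage.USER_AGENT
    if msg.sender in agents and msg.receiver in agents:
        return Stage.AGENT_AGENT
    if msg.sender in tools or msg.receiver in tools:
        return Stage.AGENT_ENV
    raise ValueError("unknown communication endpoints")


agents = {"planner", "coder"}
tools = {"search_api"}
print(classify(Message("user", "planner", "book a flight"), agents, tools).value)
# → user-agent interaction
```

Each stage carries distinct risk surfaces (e.g., prompt injection at the user-agent boundary, impersonation in agent-agent exchange, tool poisoning in agent-environment calls), which is why the survey analyzes them separately.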

🔍 Key Points

  • Systematic overview of agent communication, introducing a clear definition and classification into user-agent interaction, agent-agent communication, and agent-environment communication.
  • In-depth analysis of security risks at each communication level, pinpointing specific vulnerabilities and potential attacks against agents.
  • Identification of tailored defense countermeasures for each type of communication, enhancing the security framework for LLM-driven AI agents.
  • Overview of current and emerging protocols in agent communication, including an evaluation of their security implications and effectiveness.
  • Discussion of future directions in agent communication research, including both technical improvements and legal frameworks.

💡 Why This Paper Matters

This paper is significant because it establishes a foundational understanding of the security landscape surrounding LLM-driven AI agents. By categorizing agent communication and delineating the associated risks and defenses, it provides a comprehensive guide for researchers working to build safe and effective AI applications across domains. It also highlights the urgency of security measures as agents increasingly interact across different environments, which is pivotal to protecting user data and ensuring the operational integrity of AI systems.

🎯 Why It's Interesting for AI Security Researchers

This paper serves as a critical exploration of the interplay between LLM-driven AI agents and security concerns, a topic of ever-growing importance as AI systems become more integrated into everyday tasks. It presents a structured framework for understanding and mitigating security risks, making it invaluable for AI security researchers who strive to enhance the resilience of these systems against attacks, thus contributing to the broader goal of responsible AI deployment.

📚 Read the Full Paper