Agentic JWT: A Secure Delegation Protocol for Autonomous AI Agents

Authors: Abhishek Goswami

Published: 2025-09-16

arXiv ID: 2509.13597v1

Added to Library: 2025-12-08 18:04 UTC

📄 Abstract

Autonomous LLM agents can issue thousands of API calls per hour without human oversight. OAuth 2.0 assumes deterministic clients, but in agentic settings stochastic reasoning, prompt injection, or multi-agent orchestration can silently expand privileges. We introduce Agentic JWT (A-JWT), a dual-faceted intent token that binds each agent's action to verifiable user intent and, optionally, to a specific workflow step. A-JWT carries an agent's identity as a one-way checksum hash derived from its prompt, tools and configuration, a chained delegation assertion to prove which downstream agent may execute a given task, and per-agent proof-of-possession keys to prevent replay and in-process impersonation. We define a new authorization mechanism and add a lightweight client shim library that self-verifies code at run time, mints intent tokens, tracks workflow steps and derives keys, thus enabling secure agent identity and separation even within a single process. We illustrate a comprehensive threat model for agentic applications, implement a Python proof-of-concept and show functional blocking of scope-violating requests, replay, impersonation, and prompt-injection pathways with sub-millisecond overhead on commodity hardware. The design aligns with ongoing OAuth agent discussions and offers a drop-in path toward zero-trust guarantees for agentic applications. A comprehensive performance and security evaluation with experimental results will appear in our forthcoming journal publication.
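To make the abstract's core ideas concrete, here is a minimal Python sketch of two of them: deriving an agent's identity as a one-way checksum hash over its prompt, tools and configuration, and minting a compact JWT-style intent token bound to that identity and a workflow step. This is an illustrative reconstruction, not the paper's actual API; the function names, claim names (`agent`, `scope`, `step`), and the use of HMAC-SHA256 are assumptions made for the example.

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    # JWT-style base64url encoding without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def agent_identity_hash(prompt: str, tools: list, config: dict) -> str:
    # One-way checksum over the agent's prompt, tools and configuration,
    # so any change to the agent's definition changes its identity.
    material = json.dumps(
        {"prompt": prompt, "tools": sorted(tools), "config": config},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(material).hexdigest()


def mint_intent_token(agent_hash: str, scope: list, step: str, key: bytes) -> str:
    # Mint a compact JWS-like token binding the agent identity to a
    # granted scope and (optionally) a specific workflow step.
    header = {"alg": "HS256", "typ": "JWT"}
    claims = {
        "agent": agent_hash,   # identity checksum of the acting agent
        "scope": scope,        # the user-delegated intent
        "step": step,          # workflow step this token authorizes
        "iat": int(time.time()),
    }
    signing_input = (
        b64url(json.dumps(header).encode())
        + "."
        + b64url(json.dumps(claims).encode())
    )
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)
```

Because the identity hash covers the full agent definition, a prompt-injected or reconfigured agent no longer matches the identity baked into its token, which is the property the verifier can check.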

🔍 Key Points

  • Dual-faceted intent token (A-JWT): The paper introduces Agentic JWT, a token that binds each agent's action to verifiable user intent and, optionally, to a specific workflow step, closing the gap left by OAuth 2.0's assumption of deterministic clients.
  • Verifiable agent identity: An agent's identity is carried as a one-way checksum hash derived from its prompt, tools and configuration, so a prompt-injected or reconfigured agent no longer matches the identity its token asserts.
  • Chained delegation and proof-of-possession: A-JWT includes a chained delegation assertion proving which downstream agent may execute a given task, plus per-agent proof-of-possession keys that prevent replay and in-process impersonation.
  • Lightweight client shim: A shim library self-verifies code at run time, mints intent tokens, tracks workflow steps and derives keys, enabling secure agent identity and separation even within a single process.
  • Proof-of-concept validation: A Python implementation functionally blocks scope-violating requests, replay, impersonation, and prompt-injection pathways with sub-millisecond overhead on commodity hardware.
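The blocking behavior in the last point can be sketched as a verifier-side check that rejects the three main violation classes the abstract names: replay (a token reused), impersonation (identity hash mismatch), and scope violation (action outside the delegated intent). The claim names and the in-memory replay cache here are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical verifier-side authorization check for A-JWT-style claims.
seen_jti = set()  # replay cache of token IDs already consumed


def authorize(claims: dict, requested_scope: str, expected_agent: str) -> bool:
    # Replay: each token's unique ID (jti) may be consumed only once.
    jti = claims.get("jti")
    if jti is None or jti in seen_jti:
        return False
    seen_jti.add(jti)
    # Impersonation: the caller's identity hash must match the agent
    # the token was delegated to.
    if claims.get("agent") != expected_agent:
        return False
    # Scope violation: the requested action must fall within the
    # user-delegated intent carried by the token.
    return requested_scope in claims.get("scope", ())
```

In the paper's design this check would additionally verify the token signature and the proof-of-possession key; the sketch isolates only the claim-level policy decisions.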

💡 Why This Paper Matters

This paper is highly relevant because it addresses a concrete gap in authorization for agentic AI: OAuth 2.0 assumes deterministic clients, while autonomous LLM agents can issue thousands of API calls per hour and silently expand privileges through stochastic reasoning, prompt injection, or multi-agent orchestration. A-JWT offers a practical response, binding each action to verifiable user intent, carrying agent identity as a checksum over the agent's definition, and chaining delegation across agents. Because the design aligns with ongoing OAuth agent discussions and is positioned as a drop-in path toward zero-trust guarantees, it is a credible foundation for securing real-world agentic applications.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers will find this paper important because it pairs a comprehensive threat model for agentic applications with a concrete, implemented defense. The Python proof-of-concept demonstrates functional blocking of scope-violating requests, replay, impersonation, and prompt-injection pathways at sub-millisecond overhead, and the client shim shows how agent identity and separation can be enforced even within a single process. For researchers working on delegation, identity binding, and standards-aligned authorization for autonomous agents, A-JWT provides both a reusable design pattern and a baseline against which future mechanisms can be evaluated.

📚 Read the Full Paper