
ASTRIDE: A Security Threat Modeling Platform for Agentic-AI Applications

Authors: Eranga Bandara, Amin Hass, Ross Gore, Sachin Shetty, Ravi Mukkamala, Safdar H. Bouk, Xueping Liang, Ng Wee Keong, Kasun De Zoysa, Aruna Withanage, Nilaan Loganathan

Published: 2025-12-04

arXiv ID: 2512.04785v1

Added to Library: 2025-12-05 03:03 UTC

Red Teaming

📄 Abstract

AI agent-based systems are becoming increasingly integral to modern software architectures, enabling autonomous decision-making, dynamic task execution, and multimodal interactions through large language models (LLMs). However, these systems introduce novel and evolving security challenges, including prompt injection attacks, context poisoning, model manipulation, and opaque agent-to-agent communication, which are not effectively captured by traditional threat modeling frameworks. In this paper, we introduce ASTRIDE, an automated threat modeling platform purpose-built for AI agent-based systems. ASTRIDE extends the classical STRIDE framework by introducing a new threat category, A for AI Agent-Specific Attacks, which encompasses emerging vulnerabilities such as prompt injection, unsafe tool invocation, and reasoning subversion, unique to agent-based applications. To automate threat modeling, ASTRIDE combines a consortium of fine-tuned vision-language models (VLMs) with the OpenAI-gpt-oss reasoning LLM to perform end-to-end analysis directly from visual agent architecture diagrams, such as data flow diagrams (DFDs). LLM agents orchestrate the end-to-end threat modeling automation process by coordinating interactions between the VLM consortium and the reasoning LLM. Our evaluations demonstrate that ASTRIDE provides accurate, scalable, and explainable threat modeling for next-generation intelligent systems. To the best of our knowledge, ASTRIDE is the first framework to both extend STRIDE with AI-specific threats and integrate fine-tuned VLMs with a reasoning LLM to fully automate diagram-driven threat modeling in AI agent-based applications.
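The abstract describes a two-stage, diagram-driven pipeline: fine-tuned VLMs parse the architecture diagram into components and data flows, and a reasoning LLM maps those elements to threats, with LLM agents coordinating the stages. Below is a minimal Python sketch of that control flow. Every name in it (DFDElement, extract_dfd_elements, reason_over_threats, run_threat_model) is a hypothetical stand-in, since the paper does not publish its API; the stubs return canned data so the flow can be run end to end.

```python
from dataclasses import dataclass

@dataclass
class DFDElement:
    """One component or data flow recovered from the architecture diagram."""
    name: str
    kind: str  # e.g. "llm_agent", "tool", "data_flow", "external_entity"

@dataclass
class Threat:
    element: str
    category: str       # an ASTRIDE letter; "A" = AI Agent-Specific Attack
    description: str

def extract_dfd_elements(diagram_png: bytes) -> list[DFDElement]:
    """Stand-in for the fine-tuned VLM consortium that parses the DFD image.

    In the paper, multiple fine-tuned VLMs analyze the diagram and their
    outputs are combined; here we simply return a fixed element list.
    """
    return [
        DFDElement("planner-agent", "llm_agent"),
        DFDElement("web-search-tool", "tool"),
        DFDElement("user-prompt", "data_flow"),
    ]

def reason_over_threats(elements: list[DFDElement]) -> list[Threat]:
    """Stand-in for the reasoning LLM that maps elements to ASTRIDE threats."""
    threats: list[Threat] = []
    for el in elements:
        if el.kind == "data_flow":
            threats.append(Threat(el.name, "A", "prompt injection via untrusted input"))
        elif el.kind == "tool":
            threats.append(Threat(el.name, "A", "unsafe tool invocation"))
        else:
            threats.append(Threat(el.name, "T", "tampering with agent state"))
    return threats

def run_threat_model(diagram_png: bytes) -> list[Threat]:
    """Agent-orchestrated pipeline: diagram -> DFD elements -> threat report."""
    return reason_over_threats(extract_dfd_elements(diagram_png))

for t in run_threat_model(b"<diagram bytes>"):
    print(f"[{t.category}] {t.element}: {t.description}")
```

The design choice the paper emphasizes is the separation of perception (VLM consortium) from analysis (reasoning LLM), with agents mediating between them; the sketch mirrors that split as two functions composed by an orchestrating entry point.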

🔍 Key Points

  • Introduction of ASTRIDE, an automated threat modeling platform specifically designed for AI agent-based applications.
  • Extension of the traditional STRIDE framework to include a new category for AI Agent-Specific Attacks, addressing unique security challenges in AI systems (see the taxonomy sketch after this list).
  • Utilization of fine-tuned vision-language models (VLMs) combined with OpenAI-gpt-oss reasoning LLM to automate threat analysis from visual system diagrams.
  • Demonstration of improved accuracy, scalability, and explainability in threat modeling for intelligent systems through experimental evaluations.
  • Establishment of a comprehensive automated process that reduces reliance on human experts for threat identification in AI-driven applications.
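For reference, a minimal sketch of the extended taxonomy: classical STRIDE plus the paper's new "A" category. The example threats listed under "A" are drawn from the abstract; the enum encoding itself is illustrative, not taken from the paper.

```python
from enum import Enum

class ASTRIDE(Enum):
    """The seven ASTRIDE categories: the new "A" prepended to classic STRIDE."""
    AI_AGENT_SPECIFIC = "A"       # new category introduced by the paper
    SPOOFING = "S"
    TAMPERING = "T"
    REPUDIATION = "R"
    INFORMATION_DISCLOSURE = "I"
    DENIAL_OF_SERVICE = "D"
    ELEVATION_OF_PRIVILEGE = "E"

# Agent-specific threats named in the abstract as examples of category "A".
AGENT_SPECIFIC_EXAMPLES = [
    "prompt injection",
    "unsafe tool invocation",
    "reasoning subversion",
    "context poisoning",
]
```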

💡 Why This Paper Matters

The paper presents ASTRIDE as a pioneering framework that addresses the emerging security concerns associated with AI agent-based systems. By automating the threat modeling process and enhancing the STRIDE framework with AI-specific threats, ASTRIDE provides a robust tool for developers and security professionals to effectively analyze and mitigate potential vulnerabilities in complex AI architectures.

🎯 Why It's Interesting for AI Security Researchers

This paper is of significant interest to AI security researchers because it addresses the urgent need for effective security measures in AI agent-based systems, which are increasingly exposed to novel attack vectors. The combination of fine-tuned VLMs and a reasoning LLM to automate threat modeling offers a scalable, efficient solution, while the emphasis on AI-specific vulnerabilities fills a critical gap in existing threat modeling methodologies. The approach strengthens the security posture of AI applications and contributes valuable knowledge to the field of AI security.

📚 Read the Full Paper: https://arxiv.org/abs/2512.04785v1