
Goal-Driven Risk Assessment for LLM-Powered Systems: A Healthcare Case Study

Authors: Neha Nagaraja, Hayretdin Bahsi

Published: 2026-03-04

arXiv ID: 2603.03633v1

Added to Library: 2026-03-05 03:00 UTC

Red Teaming

πŸ“„ Abstract

While incorporating LLMs into systems offers significant benefits in critical application areas such as healthcare, new security challenges emerge due to potential cyber kill chain cycles that combine adversarial model attacks, prompt injection, and conventional cyber attacks. Threat modeling methods enable system designers to identify potential cyber threats and the relevant mitigations during the early stages of development. Although the cyber security community has extensive experience in applying these methods to software-based systems, the elicited threats are usually abstract and vague, limiting their effectiveness for conducting proper likelihood and impact assessments for risk prioritization, especially in complex systems with novel attack surfaces, such as those involving LLMs. In this study, we propose a structured, goal-driven risk assessment approach that contextualizes the threats with detailed attack vectors, preconditions, and attack paths through the use of attack trees. We demonstrate the proposed approach on a case study with an LLM agent-based healthcare system. This study harmonizes state-of-the-art attacks on LLMs with conventional ones and presents possible attack paths applicable to similar systems. By providing a structured risk assessment, this study makes a significant contribution to the literature and advances secure-by-design practices in LLM-based systems.
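
The attack trees at the core of this approach decompose an attacker goal into attack vectors, preconditions, and concrete attack paths. Below is a minimal sketch of such a tree, assuming a simple AND/OR node model; the node names and the misdiagnosis goal are illustrative placeholders, not structures taken from the paper.

```python
# A minimal attack-tree sketch with AND/OR gates. Node names below are
# hypothetical examples, not the paper's actual tree.
from dataclasses import dataclass, field
from itertools import product

@dataclass
class Node:
    name: str
    gate: str = "OR"                # "OR": any child suffices; "AND": all children required
    children: list["Node"] = field(default_factory=list)

def attack_paths(node: Node) -> list[list[str]]:
    """Enumerate the leaf-level attack paths that achieve this node's goal."""
    if not node.children:
        return [[node.name]]
    child_paths = [attack_paths(c) for c in node.children]
    if node.gate == "OR":
        # Any single child's path achieves the goal.
        return [p for paths in child_paths for p in paths]
    # AND gate: combine one path from every child into a single joint path.
    return [[step for p in combo for step in p] for combo in product(*child_paths)]

# Hypothetical goal inspired by the healthcare case study: cause a misdiagnosis.
root = Node("Cause misdiagnosis", "OR", [
    Node("Poison retrieval corpus", "AND", [
        Node("Gain write access to knowledge base"),
        Node("Inject adversarial clinical records"),
    ]),
    Node("Indirect prompt injection via patient-supplied document"),
])

for path in attack_paths(root):
    print(" -> ".join(path))
```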

πŸ” Key Points

  • The paper presents a structured, goal-driven risk assessment framework for Large Language Model (LLM)-powered healthcare systems, focusing on identifying and contextualizing cyber threats specific to these systems.
  • By utilizing attack trees, the authors illustrate how various attacks (both conventional and AI-specific) can lead to significant risks, providing a clear methodology for assessing risk likelihood and impact in healthcare contexts.
  • The study bridges the gap between abstract threat modeling and actionable risk assessment by establishing concrete attack vectors related to specific healthcare security goals, allowing better prioritization of risks.
  • The research identifies specific risks of LLM use in healthcare, such as misdiagnosis and unauthorized procedures, and quantifies their likelihood and potential impact using a tailored Likelihood × Impact framework (see the scoring sketch after this list).
  • This work contributes to secure-by-design practices in AI, with implications for the integration of LLMs into sensitive domains, emphasizing the need for proactive security and risk management.
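
The Likelihood × Impact prioritization mentioned above can be sketched in a few lines, assuming ordinal 1-5 scales for both factors; the risks and ratings below are illustrative examples, not the paper's actual assessments.

```python
# Risk prioritization via a Likelihood x Impact score on ordinal 1-5 scales.
# The risk entries and ratings are hypothetical illustrations.
RISKS = {
    # risk: (likelihood 1-5, impact 1-5)
    "Misdiagnosis via poisoned retrieval data": (3, 5),
    "Unauthorized procedure via prompt injection": (2, 5),
    "PHI disclosure through model output": (4, 4),
}

def score(likelihood: int, impact: int) -> int:
    return likelihood * impact

# Rank risks for mitigation, highest score first.
for risk, (l, i) in sorted(RISKS.items(), key=lambda kv: -score(*kv[1])):
    print(f"{score(l, i):>2}  {risk}")
```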

πŸ’‘ Why This Paper Matters

This paper addresses the emerging security challenges posed by LLM integration in healthcare. By combining established threat modeling with a systematic risk assessment grounded in realistic attack scenarios, the authors provide insights that can enhance the safety and reliability of AI-powered systems used in clinical settings. Moreover, the structured methodology can serve as a foundation for future research and development in AI-driven security practices, helping healthcare systems mitigate risks from advanced cyber threats more effectively.

🎯 Why It's Interesting for AI Security Researchers

This paper tackles the intersection of cybersecurity and artificial intelligence in healthcare, one of the most critical and vulnerable sectors. The structured risk assessment framework it presents not only identifies threats but also contextualizes them within real-world attack scenarios, yielding actionable guidance for securing AI systems. Its implications for risk management in LLM deployments can also inform broader discussions on AI security, making it a significant contribution to the field.
