
Securing Educational LLMs: A Generalised Taxonomy of Attacks on LLMs and DREAD Risk Assessment

Authors: Farzana Zahid, Anjalika Sewwandi, Lee Brandon, Vimal Kumar, Roopak Sinha

Published: 2025-08-12

arXiv ID: 2508.08629v1

Added to Library: 2025-08-14 23:12 UTC

Red Teaming

📄 Abstract

Due to perceptions of efficiency and significant productivity gains, various organisations, including in education, are adopting Large Language Models (LLMs) into their workflows. Educator-facing, learner-facing, and institution-facing LLMs, collectively termed Educational Large Language Models (eLLMs), complement and enhance the effectiveness of teaching, learning, and academic operations. However, their integration into an educational setting raises significant cybersecurity concerns. A comprehensive landscape of contemporary attacks on LLMs and their impact on the educational environment is missing. This study presents a generalised taxonomy of fifty attacks on LLMs, categorised as attacks targeting either models or their infrastructure. The severity of these attacks is evaluated in the educational sector using the DREAD risk assessment framework. Our risk assessment indicates that token smuggling, adversarial prompts, direct injection, and multi-step jailbreak are critical attacks on eLLMs. The proposed taxonomy, its application in the educational environment, and our risk assessment will help academic and industrial practitioners build resilient solutions that protect learners and institutions.
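
As a rough illustration of how a DREAD assessment like the one above works, the sketch below scores an attack on the framework's five dimensions (Damage, Reproducibility, Exploitability, Affected users, Discoverability) and averages them into an overall risk. A 1–10 scale, a mean-based overall score, and the severity thresholds shown are common conventions, not necessarily the paper's exact scheme, and the per-attack scores are hypothetical placeholders rather than the authors' ratings.

```python
from dataclasses import dataclass


@dataclass
class DreadScore:
    """One DREAD rating: each dimension scored 1 (low) to 10 (high)."""
    damage: int
    reproducibility: int
    exploitability: int
    affected_users: int
    discoverability: int

    def risk(self) -> float:
        """Overall risk as the mean of the five dimension scores."""
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability) / 5


def severity(risk: float) -> str:
    """Map a mean score onto coarse severity bands (illustrative thresholds)."""
    if risk >= 8:
        return "critical"
    if risk >= 6:
        return "high"
    if risk >= 4:
        return "medium"
    return "low"


# Hypothetical scores for two attacks named in the abstract; the paper's
# actual ratings may differ.
attacks = {
    "token smuggling": DreadScore(9, 8, 8, 9, 7),
    "direct injection": DreadScore(8, 9, 9, 8, 8),
}

for name, score in attacks.items():
    print(f"{name}: {score.risk():.1f} ({severity(score.risk())})")
```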

🔍 Key Points

  • Introduces a comprehensive taxonomy of fifty attacks targeting Large Language Models (LLMs), dividing them into model-based and infrastructure-based categories (see the code sketch after this list).
  • Employs the DREAD risk assessment framework to evaluate the severity of these attacks specifically in the educational context, highlighting critical risks such as token smuggling and adversarial prompts.
  • Identifies significant cybersecurity concerns associated with the integration of Educational Large Language Models (eLLMs) in academic workflows and provides practical safeguards for risk mitigation.
  • Conducts a systematic literature review (SLR) to summarize the current state of research on LLM security, thereby filling a gap in the existing literature.
  • Proposes a multi-faceted approach to strengthen the security posture of educational institutions using eLLMs, emphasizing the need for awareness, training, and robust access controls.
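
To make the model-versus-infrastructure split concrete, here is a minimal sketch of how such a taxonomy could be represented in code. Only the four attack names come from the abstract, and all of them are model-side; the enum, the lookup helper, and the (empty) infrastructure branch are our own illustrative structure, not the paper's actual fifty-entry taxonomy.

```python
from enum import Enum


class AttackSurface(Enum):
    MODEL = "model"                    # attacks on the LLM itself
    INFRASTRUCTURE = "infrastructure"  # attacks on the surrounding stack


# Attacks the abstract flags as critical, all model-side; the abstract
# names no infrastructure-side attacks, so that branch is left unpopulated.
TAXONOMY: dict[str, AttackSurface] = {
    "token smuggling": AttackSurface.MODEL,
    "adversarial prompts": AttackSurface.MODEL,
    "direct injection": AttackSurface.MODEL,
    "multi-step jailbreak": AttackSurface.MODEL,
}


def attacks_on(surface: AttackSurface) -> list[str]:
    """Return all catalogued attacks that target the given surface."""
    return [name for name, s in TAXONOMY.items() if s == surface]


print(attacks_on(AttackSurface.MODEL))
```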

💡 Why This Paper Matters

This paper is crucial as it provides a structured approach to understanding and mitigating the security risks associated with Large Language Models in educational settings. By presenting a detailed taxonomy of attacks and applying a well-known risk assessment framework, it offers valuable insights for both academic and industry practitioners who aim to secure eLLMs against malicious threats. The practical implications outlined in the paper are instrumental for educational institutions seeking to harness the benefits of LLMs while ensuring the safety and integrity of their systems.

🎯 Why It's Interesting for AI Security Researchers

This paper would interest AI security researchers as it not only addresses the emerging threats posed by LLMs but also contributes a novel taxonomy and practical mitigation strategies tailored for the educational sector. With the accelerated integration of AI technologies across domains, understanding LLM-specific vulnerabilities and their implications for user safety is paramount for researchers focused on building robust AI systems.

📚 Read the Full Paper: https://arxiv.org/abs/2508.08629v1