
Multi-Agent Framework for Threat Mitigation and Resilience in AI-Based Systems

Authors: Armstrong Foundjem, Lionel Nganyewou Tidjon, Leuson Da Silva, Foutse Khomh

Published: 2025-12-29

arXiv ID: 2512.23132v1

Added to Library: 2026-01-07 10:07 UTC

Red Teaming

📄 Abstract

Machine learning (ML) underpins foundation models in finance, healthcare, and critical infrastructure, making them targets for data poisoning, model extraction, prompt injection, automated jailbreaking, and preference-guided black-box attacks that exploit model comparisons. Larger models can be more vulnerable to introspection-driven jailbreaks and cross-modal manipulation. Traditional cybersecurity lacks ML-specific threat modeling for foundation, multimodal, and RAG systems.

Objective: Characterize ML security risks by identifying dominant TTPs, vulnerabilities, and targeted lifecycle stages.

Methods: We extract 93 threats from MITRE ATLAS (26), AI Incident Database (12), and literature (55), and analyze 854 GitHub/Python repositories. A multi-agent RAG system (ChatGPT-4o, temp 0.4) mines 300+ articles to build an ontology-driven threat graph linking TTPs, vulnerabilities, and stages.

Results: We identify unreported threats including commercial LLM API model stealing, parameter memorization leakage, and preference-guided text-only jailbreaks. Dominant TTPs include MASTERKEY-style jailbreaking, federated poisoning, diffusion backdoors, and preference optimization leakage, mainly impacting pre-training and inference. Graph analysis reveals dense vulnerability clusters in libraries with poor patch propagation.

Conclusion: Adaptive, ML-specific security frameworks, combining dependency hygiene, threat intelligence, and monitoring, are essential to mitigate supply-chain and inference risks across the ML lifecycle.
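
The ontology-driven threat graph described in the Methods can be pictured as a typed graph whose nodes are TTPs, vulnerabilities, and lifecycle stages, and whose edges encode relations such as "exploits" and "targets". The sketch below is a minimal illustration of that idea using networkx, not the authors' implementation; the node names, relation labels, and clustering step are assumptions added for demonstration.

```python
# Minimal sketch of an ontology-driven threat graph (illustrative only).
# Node names, relation labels, and the clustering step below are assumptions
# based on the abstract, not the authors' actual schema or data.
import networkx as nx
from networkx.algorithms import community

G = nx.Graph()

# Nodes typed by ontology class: TTP, vulnerability, or lifecycle stage.
G.add_node("MASTERKEY-style jailbreaking", kind="ttp")
G.add_node("federated poisoning", kind="ttp")
G.add_node("prompt-filter bypass", kind="vulnerability")          # hypothetical
G.add_node("unvetted aggregation update", kind="vulnerability")   # hypothetical
G.add_node("inference", kind="stage")
G.add_node("pre-training", kind="stage")

# Edges encode ontology relations between threats, weaknesses, and stages.
G.add_edge("MASTERKEY-style jailbreaking", "prompt-filter bypass", rel="exploits")
G.add_edge("MASTERKEY-style jailbreaking", "inference", rel="targets")
G.add_edge("federated poisoning", "unvetted aggregation update", rel="exploits")
G.add_edge("federated poisoning", "pre-training", rel="targets")

# Surface dense clusters: partition the graph into modularity communities
# and report the internal edge density of each group.
for nodes in community.greedy_modularity_communities(G):
    sub = G.subgraph(nodes)
    print(sorted(nodes), "density:", round(nx.density(sub), 2))
```

Run over a full threat graph, a community-and-density check of this kind is one way to surface the sort of vulnerability clusters the abstract attributes to libraries with poor patch propagation.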

🔍 Key Points

  • Introduction of a multi-agent framework that consolidates TTPs (tactics, techniques, and procedures) from multiple threat databases into a comprehensive threat landscape for ML systems.
  • Identification of threats and vulnerabilities not covered by existing threat frameworks, such as commercial LLM API model stealing and parameter memorization leakage.
  • Development of a graph-based, ontology-driven threat map that reveals dense vulnerability clusters in popular ML libraries, pointing to high-risk areas for investigation and remediation.
  • A structured threat-assessment method that integrates real-world threat intelligence with automated repository mining, using a modular pipeline for scalable threat classification and mitigation (a minimal pipeline sketch follows this list).
  • A proposal for adaptive, ML-specific security frameworks that combine dependency management, threat intelligence, and continuous monitoring to address evolving risks across the ML lifecycle.
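
The structured assessment method and modular pipeline from the bullets above can be read as a small set of composable stages: collect threats from several sources, deduplicate them, and classify each one against the ML lifecycle. The sketch below is a hypothetical stand-in, not the paper's code; the Threat fields and the keyword-based classify_stage function merely take the place of the LLM-backed classification agents described in the abstract.

```python
# Hypothetical sketch of a modular threat-aggregation pipeline.
# The Threat fields and the keyword-based classifier are illustrative
# stand-ins for the paper's multi-agent, RAG-backed components.
from dataclasses import dataclass

@dataclass(frozen=True)
class Threat:
    name: str
    source: str      # e.g. "MITRE ATLAS", "AI Incident Database", "literature"
    stage: str = ""  # ML lifecycle stage, filled in during classification

def collect(sources: dict[str, list[str]]) -> list[Threat]:
    """Merge raw threat names from several databases, dropping duplicates."""
    seen, merged = set(), []
    for source, names in sources.items():
        for name in names:
            key = name.lower()
            if key not in seen:
                seen.add(key)
                merged.append(Threat(name=name, source=source))
    return merged

def classify_stage(threat: Threat) -> Threat:
    """Toy stand-in for an LLM-backed agent mapping a threat to a lifecycle stage."""
    stage = "pre-training" if "poison" in threat.name.lower() else "inference"
    return Threat(threat.name, threat.source, stage)

if __name__ == "__main__":
    raw = {
        "MITRE ATLAS": ["Federated poisoning", "Prompt injection"],
        "literature": ["Preference-guided jailbreak", "Federated poisoning"],
    }
    for t in map(classify_stage, collect(raw)):
        print(f"{t.name:28s} {t.source:12s} -> {t.stage}")
```

Keeping collection, deduplication, and classification as separate functions mirrors the modularity the bullet describes: any stage can be swapped (for example, replacing classify_stage with a RAG-backed agent) without touching the rest of the pipeline.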

💡 Why This Paper Matters

This paper systematically characterizes the security risks facing modern AI-based systems and, in doing so, provides a structured framework that can guide researchers and practitioners toward stronger security postures for AI technologies. Given the increasing integration of AI into high-stakes sectors, the findings underscore the urgent need for adaptive, ML-specific security solutions.

🎯 Why It's Interesting for AI Security Researchers

The paper consolidates the attack vectors targeting machine learning systems into a single threat landscape, identifies emerging threats, and proposes a coherent mitigation framework, making it a valuable resource for improving the security of AI applications. Its integration of real-world threat intelligence with automated repository mining and scalable threat classification will be of particular interest to researchers developing proactive defenses against adversarial attacks on AI systems.

📚 Read the Full Paper