Automating Cloud Security and Forensics Through a Secure-by-Design Generative AI Framework

Authors: Dalal Alharthi, Ivan Roberto Kawaminami Garcia

Published: 2026-04-05

arXiv ID: 2604.03912v1

Added to Library: 2026-04-07 02:00 UTC

Red Teaming

📄 Abstract

As cloud environments become increasingly complex, cybersecurity and forensic investigations must evolve to meet emerging threats. Large Language Models (LLMs) have shown promise in automating log analysis and reasoning tasks, yet they remain vulnerable to prompt injection attacks and lack forensic rigor. To address these dual challenges, we propose a unified, secure-by-design GenAI framework that integrates PromptShield and the Cloud Investigation Automation Framework (CIAF). PromptShield proactively defends LLMs against adversarial prompts using ontology-driven validation that standardizes user inputs and mitigates manipulation. CIAF streamlines cloud forensic investigations through structured, ontology-based reasoning across all six phases of the forensic process. We evaluate our system on real-world datasets from AWS and Microsoft Azure, demonstrating substantial improvements in both LLM security and forensic accuracy. Experimental results show PromptShield boosts classification performance under attack conditions, achieving precision, recall, and F1 scores above 93%, while CIAF enhances ransomware detection accuracy in cloud logs using Likert-transformed performance features. Our integrated framework advances the automation, interpretability, and trustworthiness of cloud forensics and LLM-based systems, offering a scalable foundation for real-time, AI-driven incident response across diverse cloud infrastructures.
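The abstract reports precision, recall, and F1 scores above 93% for PromptShield under attack conditions. For readers less familiar with these classification metrics, a minimal sketch of how they are computed from raw prediction counts (standard definitions, not code from the paper):

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute standard binary-classification metrics from prediction counts.

    tp: true positives (attacks correctly flagged)
    fp: false positives (benign prompts flagged as attacks)
    fn: false negatives (attacks missed)
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1


# Illustrative numbers only: 94 attacks caught, 6 false alarms, 6 misses
# yields precision = recall = F1 = 0.94, i.e. the >93% regime the paper reports.
p, r, f = precision_recall_f1(tp=94, fp=6, fn=6)
```

The counts above are invented for illustration; the paper's actual confusion matrices are not reproduced in this summary.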

🔍 Key Points

  • The paper introduces a unified, secure-by-design framework that combines the Cloud Investigation Automation Framework (CIAF) and PromptShield to enhance cloud forensic investigations and the security of Large Language Models (LLMs).
  • PromptShield addresses vulnerabilities in LLMs to prompt injection attacks through ontology-driven validation, achieving precision, recall, and F1 scores above 93% even under attack conditions.
  • CIAF automates the six-phase cloud forensic process, improving efficiency and accuracy in forensic analysis and contributing to real-time incident response capabilities in cloud infrastructures.
  • The experimental evaluation demonstrates substantial improvements in forensic accuracy and interpretability, particularly in detecting ransomware, leveraging real-world datasets from AWS and Microsoft Azure.
  • The integration of structured, ontology-driven methods in both forensic analysis and LLM security highlights significant advancements in interpretability, trustworthiness, and scalability of AI-driven cyber defense mechanisms.
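The summary does not specify how PromptShield's ontology-driven validation is implemented. As a purely hypothetical sketch of the general idea, one could imagine an allow-list "ontology" of permitted forensic query intents, with inputs rejected unless they match a known intent and contain no injection markers (all names and patterns below are invented for illustration):

```python
import re

# Hypothetical allow-list ontology: each permitted intent maps to a pattern a
# well-formed forensic query is expected to match. Illustrative only; the
# paper's actual ontology is not described in this summary.
INTENT_ONTOLOGY = {
    "log_query": re.compile(r"^(show|list|summarize)\s+.+\blogs?\b", re.IGNORECASE),
    "ioc_lookup": re.compile(r"^(check|look ?up)\s+.+\b(ip|hash|domain)\b", re.IGNORECASE),
}

# Phrases commonly seen in prompt-injection attempts.
INJECTION_MARKERS = ("ignore previous instructions", "disregard the system prompt")

def validate_prompt(user_input: str) -> tuple[bool, str]:
    """Return (is_valid, matched_intent_or_rejection_reason)."""
    lowered = user_input.lower()
    for marker in INJECTION_MARKERS:
        if marker in lowered:
            return False, f"injection marker: {marker!r}"
    for intent, pattern in INTENT_ONTOLOGY.items():
        if pattern.search(user_input):
            return True, intent
    return False, "no intent in ontology matched"
```

The design choice being illustrated is standardization before the LLM sees the input: only queries that normalize to a known intent pass through, which shrinks the surface available to adversarial prompts.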

💡 Why This Paper Matters

This paper is significant because it addresses critical challenges in both cloud cybersecurity and AI model security at once. By combining forensic automation with proactive LLM defenses, it provides a foundation for more resilient cloud environments in the face of evolving cyber threats. The framework's dual-layered architecture improves the interpretability and reliability of forensic investigations, which matters as organizations grow increasingly reliant on cloud technologies.

🎯 Why It's Interesting for AI Security Researchers

This paper is of particular interest to AI security researchers because it directly addresses the vulnerabilities of LLMs deployed in cybersecurity applications. By pairing generative AI techniques with established forensic methodologies, it offers insights into building more secure AI systems. Its use of ontology-driven validation and structured reasoning to counter adversarial attacks is especially timely as reliance on AI in critical infrastructure grows.
