← Back to Library

Prompt Injection Detection and Mitigation via AI Multi-Agent NLP Frameworks

Authors: Diego Gosmar, Deborah A. Dahl, Dario Gosmar

Published: 2025-03-14

arXiv ID: 2503.11517v1

Added to Library: 2025-11-11 14:04 UTC

Red Teaming

📄 Abstract

Prompt injection constitutes a significant challenge for generative AI systems by inducing unintended outputs. We introduce a multi-agent NLP framework specifically designed to address prompt injection vulnerabilities through layered detection and enforcement mechanisms. The framework orchestrates specialized agents for generating responses, sanitizing outputs, and enforcing policy compliance. Evaluation on 500 engineered injection prompts demonstrates a marked reduction in injection success and policy breaches. Novel metrics, including Injection Success Rate (ISR), Policy Override Frequency (POF), Prompt Sanitization Rate (PSR), and Compliance Consistency Score (CCS), are proposed to derive a composite Total Injection Vulnerability Score (TIVS). The system utilizes the OVON (Open Voice Network) framework for inter-agent communication via structured JSON messages, extending a previously established multi-agent architecture from hallucination mitigation to address the unique challenges of prompt injection.

🔍 Key Points

  • The paper introduces a multi-agent NLP framework specifically designed for detecting and mitigating prompt injection attacks, addressing a significant vulnerability in generative AI systems.
  • Four novel metrics are proposed to evaluate the effectiveness of mitigation strategies: Injection Success Rate (ISR), Policy Override Frequency (POF), Prompt Sanitization Rate (PSR), and Compliance Consistency Score (CCS), which contribute to the composite Total Injection Vulnerability Score (TIVS).
  • Empirical evaluation on 500 injection prompts demonstrates substantial improvement in mitigation effectiveness, with the Policy Enforcer reducing TIVS by 45.7%, highlighting the efficacy of the multi-agent approach against sophisticated prompt injection techniques.
  • The study emphasizes the importance of inter-agent communication using the OVON framework, which facilitates structured metadata exchange for improved transparency and understanding of detected vulnerabilities during processing.

💡 Why This Paper Matters

This paper presents significant advancements in the detection and mitigation of prompt injection vulnerabilities in generative AI systems through a systematic multi-agent architecture. The incorporation of specific metrics to evaluate mitigation efforts provides a robust framework for enhancing the security of AI applications, addressing a critical area of concern as AI continues to be integrated into sensitive domains.

🎯 Why It's Interesting for AI Security Researchers

The findings are highly relevant for AI security researchers as they illustrate a comprehensive approach to identifying and mitigating prompt injection vulnerabilities, a key area of concern for developers and practitioners aiming to ensure the reliability and safety of AI systems. The proposed metrics and framework can serve as a foundation for further research and practical applications in securing AI-driven solutions against adversarial inputs.

📚 Read the Full Paper