
Agentic Moderation: Multi-Agent Design for Safer Vision-Language Models

Authors: Juan Ren, Mark Dras, Usman Naseem

Published: 2025-10-29

arXiv ID: 2510.25179v1

Added to Library: 2025-10-30 04:00 UTC

Red Teaming

📄 Abstract

Agentic methods have emerged as a powerful and autonomous paradigm that enhances reasoning, collaboration, and adaptive control, enabling systems to coordinate and independently solve complex tasks. We extend this paradigm to safety alignment by introducing Agentic Moderation, a model-agnostic framework that leverages specialised agents to defend multimodal systems against jailbreak attacks. Unlike prior approaches that apply as a static layer over inputs or outputs and provide only binary classifications (safe or unsafe), our method integrates dynamic, cooperative agents, including Shield, Responder, Evaluator, and Reflector, to achieve context-aware and interpretable moderation. Extensive experiments across five datasets and four representative Large Vision-Language Models (LVLMs) demonstrate that our approach reduces the Attack Success Rate (ASR) by 7-19%, maintains a stable Non-Following Rate (NF), and improves the Refusal Rate (RR) by 4-20%, achieving robust, interpretable, and well-balanced safety performance. By harnessing the flexibility and reasoning capacity of agentic architectures, Agentic Moderation provides modular, scalable, and fine-grained safety enforcement, highlighting the broader potential of agentic systems as a foundation for automated safety governance.

🔍 Key Points

  • Introduction of Agentic Moderation, a multi-agent system for safety alignment in LVLMs that enhances reasoning and adaptive control during moderation processes.
  • The framework utilizes specialized agents (Shield, Responder, Evaluator, and Reflector) to collaboratively defend against jailbreak attacks in a dynamic, context-aware manner.
  • Empirical results show that the proposed system reduces Attack Success Rate (ASR) by 7-19% while maintaining a stable Non-Following Rate (NF) and improving Refusal Rate (RR) by 4-20%, demonstrating robust safety performance.
  • The methodology is demonstrated across five datasets using four different LVLMs, underlining its versatility and general applicability in real-world scenarios.
  • A focus on modularity and scalability allows easy integration of new policies and supports adaptive responses to emerging threats.
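To make the agent roles above concrete, here is a minimal illustrative sketch (not the authors' code) of how the four agents named in the paper might cooperate in a moderation loop. All class names, method signatures, and the placeholder decision rules are assumptions for illustration; in the actual framework each agent would be backed by an LVLM rather than a keyword check.

```python
# Hypothetical sketch of an agentic moderation loop with the four roles
# from the paper (Shield, Responder, Evaluator, Reflector). The control
# flow and all signatures are illustrative assumptions, not the paper's API.

from dataclasses import dataclass


@dataclass
class Verdict:
    safe: bool
    rationale: str


class Shield:
    """Screens the incoming (multimodal) request before generation."""

    def inspect(self, prompt: str) -> Verdict:
        # Placeholder rule; a real Shield agent would query an LVLM judge.
        flagged = "jailbreak" in prompt.lower()
        return Verdict(safe=not flagged, rationale="input screen")


class Responder:
    """Drafts a candidate answer, or a refusal, given the Shield's verdict."""

    def draft(self, prompt: str, verdict: Verdict) -> str:
        if not verdict.safe:
            return "I can't help with that."
        return f"Answer to: {prompt}"


class Evaluator:
    """Judges the drafted response against the safety policy."""

    def judge(self, response: str) -> Verdict:
        ok = response.startswith("Answer to:") or "can't help" in response
        return Verdict(safe=ok, rationale="output check")


class Reflector:
    """Revises a response that the Evaluator rejected."""

    def revise(self, prompt: str, response: str, verdict: Verdict) -> str:
        # Fall back to a refusal; a real agent would rewrite with feedback.
        return "I can't help with that."


def moderate(prompt: str, max_rounds: int = 2) -> str:
    """Run the Shield -> Responder -> Evaluator -> Reflector loop once."""
    shield, responder = Shield(), Responder()
    evaluator, reflector = Evaluator(), Reflector()

    verdict = shield.inspect(prompt)
    response = responder.draft(prompt, verdict)
    for _ in range(max_rounds):
        check = evaluator.judge(response)
        if check.safe:
            break
        response = reflector.revise(prompt, response, check)
    return response
```

The sketch shows the key structural idea the paper emphasizes: moderation is a cooperative loop with revision (Reflector feedback to the Evaluator's verdict), not a single static input/output classifier.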

💡 Why This Paper Matters

This paper presents a significant advancement in the safety alignment of large vision-language models through Agentic Moderation, highlighting a multi-agent collaborative framework that enhances interpretability and contextual adaptability in moderation. Its empirical results suggest that the approach is not only effective but also practical, making it an essential contribution to the field of AI safety.

🎯 Why It's Interesting for AI Security Researchers

This paper is crucial for AI security researchers as it addresses the pressing issue of safety in AI systems, particularly against sophisticated adversarial threats. The innovative use of multi-agent systems for moderation introduces a new paradigm for protecting users and maintaining ethical standards in AI applications, making it of great interest to those focused on enhancing the robustness and accountability of AI technologies.

📚 Read the Full Paper