
DialogGuard: Multi-Agent Psychosocial Safety Evaluation of Sensitive LLM Responses

Authors: Han Luo, Guy Laban

Published: 2025-12-01

arXiv ID: 2512.02282v1

Added to Library: 2025-12-03 03:01 UTC

Safety

📄 Abstract

Large language models (LLMs) now mediate many web-based mental-health, crisis, and other emotionally sensitive services, yet their psychosocial safety in these settings remains poorly understood and weakly evaluated. We present DialogGuard, a multi-agent framework for assessing psychosocial risks in LLM-generated responses along five high-severity dimensions: privacy violations, discriminatory behaviour, mental manipulation, psychological harm, and insulting behaviour. DialogGuard can be applied to diverse generative models through four LLM-as-a-judge pipelines, including single-agent scoring, dual-agent correction, multi-agent debate, and stochastic majority voting, grounded in a shared three-level rubric usable by both human annotators and LLM judges. Using PKU-SafeRLHF with human safety annotations, we show that multi-agent mechanisms detect psychosocial risks more accurately than non-LLM baselines and single-agent judging; dual-agent correction and majority voting provide the best trade-off between accuracy, alignment with human ratings, and robustness, while debate attains higher recall but over-flags borderline cases. We release DialogGuard as open-source software with a web interface that provides per-dimension risk scores and explainable natural-language rationales. A formative study with 12 practitioners illustrates how it supports prompt design, auditing, and supervision of web-facing applications for vulnerable users.
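
The abstract describes four LLM-as-a-judge pipelines built on a shared three-level rubric and five risk dimensions. As a rough illustration of how the stochastic majority-voting variant could be wired up, here is a minimal Python sketch; the names (`judge_fn`, `majority_vote_judge`, `evaluate`), the dimension keys, and the 0-2 level encoding are illustrative assumptions, not DialogGuard's actual API.

```python
# Hypothetical sketch of a stochastic majority-voting judge pipeline.
# All names and the rubric encoding are assumptions for illustration only.
from collections import Counter
from typing import Callable

RISK_DIMENSIONS = [
    "privacy_violation",
    "discriminatory_behaviour",
    "mental_manipulation",
    "psychological_harm",
    "insulting_behaviour",
]

# Shared three-level rubric: 0 = safe, 1 = borderline, 2 = high risk (assumed encoding).
RUBRIC_LEVELS = (0, 1, 2)

def majority_vote_judge(
    response: str,
    dimension: str,
    judge_fn: Callable[[str, str], int],  # an LLM call returning a rubric level
    n_samples: int = 5,
) -> int:
    """Sample the judge several times (non-zero temperature) and return the modal level."""
    votes = [judge_fn(response, dimension) for _ in range(n_samples)]
    level, _ = Counter(votes).most_common(1)[0]
    return level

def evaluate(response: str, judge_fn: Callable[[str, str], int]) -> dict[str, int]:
    """Score one LLM response along all five psychosocial risk dimensions."""
    return {dim: majority_vote_judge(response, dim, judge_fn) for dim in RISK_DIMENSIONS}
```

Here `judge_fn` stands in for whatever LLM prompt and parsing logic produces a single rubric level for one dimension; sampling it repeatedly and taking the mode is one simple way to realise "stochastic majority voting."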

🔍 Key Points

  • Introduction of DialogGuard: a multi-agent framework for evaluating psychosocial risks in responses generated by large language models (LLMs) across five critical dimensions.
  • Demonstration that multi-agent evaluation systems outperform traditional single-agent approaches and non-LLM baselines in accurately detecting psychosocial risks.
  • Empirical analysis reveals that the dual-agent correction and majority-voting mechanisms provide the best accuracy and alignment with human judgements, balancing precision and recall effectively (a sketch of dual-agent correction follows this list).
  • Development of a comprehensive web interface for DialogGuard, facilitating transparency and usability in real-world applications, allowing practitioners to assess and revise LLM outputs based on risk evaluations.
  • Contribution of open-source software, enabling widespread use and further development of psychosocial safety assessment tools in LLM applications.
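
For contrast with the voting pipeline shown under the abstract, the following is a minimal sketch of what a dual-agent correction step could look like, assuming a first judge that returns a rubric level plus a natural-language rationale and a second reviewer agent that may revise it. The `Judgement` type and both callables are hypothetical and not taken from the paper.

```python
# Illustrative dual-agent correction loop (not the authors' implementation).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Judgement:
    level: int      # 0 = safe, 1 = borderline, 2 = high risk (assumed encoding)
    rationale: str  # natural-language explanation surfaced to practitioners

def dual_agent_correct(
    response: str,
    dimension: str,
    judge: Callable[[str, str], Judgement],
    reviewer: Callable[[str, str, Judgement], Judgement],
) -> Judgement:
    """First agent scores the response; second agent reviews the score and
    rationale and either confirms or returns a corrected judgement."""
    first_pass = judge(response, dimension)
    return reviewer(response, dimension, first_pass)
```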

💡 Why This Paper Matters

DialogGuard addresses a critical gap in the deployment of LLMs in emotionally sensitive contexts by providing a structured, systematic way to evaluate the psychosocial safety of generated responses. Rather than enhancing model capabilities directly, the framework supports safer deployment of AI in mental-health-related applications and promotes accountability and ethical oversight in human-AI interaction.

🎯 Why It's Interesting for AI Security Researchers

For AI security researchers, this paper provides concrete insights into the challenges and methodologies of evaluating the safety of AI outputs in sensitive applications. Its findings on multi-agent evaluation frameworks can guide the design of robust safety mechanisms that mitigate risks in AI interactions, making it a useful reference for setting safety standards and building more secure AI systems.

📚 Read the Full Paper