← Back to Library

JMedEthicBench: A Multi-Turn Conversational Benchmark for Evaluating Medical Safety in Japanese Large Language Models

Authors: Junyu Liu, Zirui Li, Qian Niu, Zequn Zhang, Yue Xun, Wenlong Hou, Shujun Wang, Yusuke Iwasawa, Yutaka Matsuo, Kan Hatakeyama-Sato

Published: 2026-01-04

arXiv ID: 2601.01627v1

Added to Library: 2026-01-07 10:03 UTC

Red Teaming

📄 Abstract

As Large Language Models (LLMs) are increasingly deployed in healthcare field, it becomes essential to carefully evaluate their medical safety before clinical use. However, existing safety benchmarks remain predominantly English-centric, and test with only single-turn prompts despite multi-turn clinical consultations. To address these gaps, we introduce JMedEthicBench, the first multi-turn conversational benchmark for evaluating medical safety of LLMs for Japanese healthcare. Our benchmark is based on 67 guidelines from the Japan Medical Association and contains over 50,000 adversarial conversations generated using seven automatically discovered jailbreak strategies. Using a dual-LLM scoring protocol, we evaluate 27 models and find that commercial models maintain robust safety while medical-specialized models exhibit increased vulnerability. Furthermore, safety scores decline significantly across conversation turns (median: 9.5 to 5.0, $p < 0.001$). Cross-lingual evaluation on both Japanese and English versions of our benchmark reveals that medical model vulnerabilities persist across languages, indicating inherent alignment limitations rather than language-specific factors. These findings suggest that domain-specific fine-tuning may accidentally weaken safety mechanisms and that multi-turn interactions represent a distinct threat surface requiring dedicated alignment strategies.

🔍 Key Points

  • Introduction of JMedEthicBench, a benchmark specifically designed for evaluating the medical safety of Japanese Large Language Models (LLMs) through multi-turn conversational scenarios.
  • Development of over 50,000 adversarial conversations based on 67 guidelines from the Japan Medical Association, addressing previous benchmarks' limitations in language and context.
  • Evaluation of 27 LLM models showing that commercial models maintain robust safety while medical-specialized models demonstrate increased vulnerabilities, particularly in multi-turn interactions.
  • Evidence that safety scores significantly decline across conversation turns, highlighting the unique challenges posed by multi-turn medical consultations.
  • Findings indicate that vulnerabilities in medical models persist across languages, suggesting inherent alignment issues in medical-specific training.

💡 Why This Paper Matters

The introduction of JMedEthicBench is crucial as it fills the existing gap in evaluating medical safety in non-English languages, especially in Japanese healthcare contexts. By emphasizing the need for multi-turn conversational assessments, this paper offers valuable insights that can significantly inform the development and fine-tuning of LLMs, enhancing their reliability and safety in real-world healthcare applications.

🎯 Why It's Interesting for AI Security Researchers

This paper is of great interest to AI security researchers as it tackles critical issues surrounding the safety and ethical implications of deploying LLMs in medical settings. Understanding how adversarial strategies can exploit vulnerabilities in language models provides foundational insights for developing improved safety mechanisms. The focus on multi-turn interactions and the explicit evaluation of model weaknesses contribute to advancing research in AI safety and can guide future standards for responsible AI use in high-stakes environments.

📚 Read the Full Paper