← Back to Library

Faithfulness vs. Safety: Evaluating LLM Behavior Under Counterfactual Medical Evidence

Authors: Kaijie Mo, Siddhartha Venkatayogi, Chantal Shaib, Ramez Kouzy, Wei Xu, Byron C. Wallace, Junyi Jessy Li

Published: 2026-01-17

arXiv ID: 2601.11886v1

Added to Library: 2026-01-21 03:02 UTC

Safety

📄 Abstract

In high-stakes domains like medicine, it may be generally desirable for models to faithfully adhere to the context provided. But what happens if the context does not align with model priors or safety protocols? In this paper, we investigate how LLMs behave and reason when presented with counterfactual or even adversarial medical evidence. We first construct MedCounterFact, a counterfactual medical QA dataset that requires the models to answer clinical comparison questions (i.e., judge the efficacy of certain treatments, with evidence consisting of randomized controlled trials provided as context). In MedCounterFact, real-world medical interventions within the questions and evidence are systematically replaced with four types of counterfactual stimuli, ranging from unknown words to toxic substances. Our evaluation across multiple frontier LLMs on MedCounterFact reveals that in the presence of counterfactual evidence, existing models overwhelmingly accept such "evidence" at face value even when it is dangerous or implausible, and provide confident and uncaveated answers. While it may be prudent to draw a boundary between faithfulness and safety, our findings reveal that there exists no such boundary yet.

🔍 Key Points

  • Introduction of MedCounterFact, a unique counterfactual medical QA dataset that challenges LLMs to reason under misleading evidence contexts.
  • Demonstration that LLMs tend to accept counterfactual medical evidence at face value, compromising safety for the sake of faithfulness to context.
  • Empirical results showing that current frontier LLMs provide confident responses even when the evidence is harmful or implausible, highlighting a critical safety failure.
  • Analysis of counterfactual stimuli indicates that model responses are not sensitive to the implausibility of medical interventions, raising concerns about the use of LLMs in medical decision-making.

💡 Why This Paper Matters

This paper is relevant as it uncovers significant safety vulnerabilities in LLMs when applied in sensitive fields such as medicine. The findings emphasize the urgent need for improved safety mechanisms that can distinguish between valid and counterfactual information in order to prevent harmful consequences in real-world applications.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers will find this paper significant because it highlights critical flaws in current AI models' responses to manipulated inputs. Understanding these vulnerabilities can help in developing more robust models and safety protocols, which are essential for ensuring responsible usage of AI in high-stakes scenarios.

📚 Read the Full Paper