← Back to Library

Safety Not Found (404): Hidden Risks of LLM-Based Robotics Decision Making

Authors: Jua Han, Jaeyoon Seo, Jungbin Min, Jean Oh, Jihie Kim

Published: 2026-01-09

arXiv ID: 2601.05529v2

Added to Library: 2026-01-16 03:04 UTC

Safety

📄 Abstract

One mistake by an AI system in a safety-critical setting can cost lives. As Large Language Models (LLMs) become integral to robotics decision-making, the physical dimension of risk grows; a single wrong instruction can directly endanger human safety. This paper addresses the urgent need to systematically evaluate LLM performance in scenarios where even minor errors are catastrophic. Through a qualitative evaluation of a fire evacuation scenario, we identified critical failure cases in LLM-based decision-making. Based on these, we designed seven tasks for quantitative assessment, categorized into: Complete Information, Incomplete Information, and Safety-Oriented Spatial Reasoning (SOSR). Complete information tasks utilize ASCII maps to minimize interpretation ambiguity and isolate spatial reasoning from visual processing. Incomplete information tasks require models to infer missing context, testing for spatial continuity versus hallucinations. SOSR tasks use natural language to evaluate safe decision-making in life-threatening contexts. We benchmark various LLMs and Vision-Language Models (VLMs) across these tasks. Beyond aggregate performance, we analyze the implications of a 1% failure rate, highlighting how "rare" errors escalate into catastrophic outcomes. Results reveal serious vulnerabilities: several models achieved a 0% success rate in ASCII navigation, while in a simulated fire drill, models instructed robots to move toward hazardous areas instead of emergency exits. Our findings lead to a sobering conclusion: current LLMs are not ready for direct deployment in safety-critical systems. A 99% accuracy rate is dangerously misleading in robotics, as it implies one out of every hundred executions could result in catastrophic harm. We demonstrate that even state-of-the-art models cannot guarantee safety, and absolute reliance on them creates unacceptable risks.

🔍 Key Points

  • The paper identifies significant safety risks associated with deploying Large Language Models (LLMs) in robotics, highlighting critical failure cases in safety-critical environments like fire evacuations.
  • It introduces a framework of seven tasks to quantitatively assess LLMs' decision-making abilities, emphasizing the importance of thorough evaluation beyond conventional accuracy metrics.
  • The study reveals that common benchmarks like '99% accuracy' are misleading in high-stakes applications, as even rare errors can lead to catastrophic consequences.
  • A series of diagnostic tasks uncover vulnerabilities in spatial reasoning and decision-making, illustrating that some models achieved a 0% success rate in essential navigation tasks.
  • The findings call for reevaluation of LLM reliance in safety-critical settings, emphasizing the need for improved safety evaluations for AI systems.

💡 Why This Paper Matters

This paper is critical as it systematically addresses the urgent safety implications of integrating LLMs in robotics. The insights and findings challenge the notion that current LLMs can be safely deployed in real-world scenarios, particularly where human safety is at risk. By exposing the shortcomings of these models in safety-relevant tasks, the research emphasizes the need for more rigorous evaluation standards and methodologies to ensure that AI systems do not compromise human safety in practical applications.

🎯 Why It's Interesting for AI Security Researchers

This paper is highly relevant to AI security researchers as it sheds light on the potential risks and vulnerabilities inherent in the deployment of AI systems, particularly in robotics. Understanding how LLM-based decision-making can lead to dangerous outcomes is crucial for developing robust AI safety protocols and risk mitigation strategies. The paper's findings spur necessary discussions on the ethics of AI deployment in life-critical applications, making it a vital resource for researchers focused on ensuring the safe and ethical use of AI technologies.

📚 Read the Full Paper