โ† Back to Library

Safety Not Found (404): Hidden Risks of LLM-Based Robotics Decision Making

Authors: Jua Han, Jaeyoon Seo, Jungbin Min, Jean Oh, Jihie Kim

Published: 2026-01-09

arXiv ID: 2601.05529v1

Added to Library: 2026-01-12 03:03 UTC

Safety

๐Ÿ“„ Abstract

One mistake by an AI system in a safety-critical setting can cost lives. As Large Language Models (LLMs) become integral to robotics decision-making, the physical dimension of risk grows; a single wrong instruction can directly endanger human safety. This paper addresses the urgent need to systematically evaluate LLM performance in scenarios where even minor errors are catastrophic. Through a qualitative evaluation of a fire evacuation scenario, we identified critical failure cases in LLM-based decision-making. Based on these, we designed seven tasks for quantitative assessment, categorized into: Complete Information, Incomplete Information, and Safety-Oriented Spatial Reasoning (SOSR). Complete information tasks utilize ASCII maps to minimize interpretation ambiguity and isolate spatial reasoning from visual processing. Incomplete information tasks require models to infer missing context, testing for spatial continuity versus hallucinations. SOSR tasks use natural language to evaluate safe decision-making in life-threatening contexts. We benchmark various LLMs and Vision-Language Models (VLMs) across these tasks. Beyond aggregate performance, we analyze the implications of a 1% failure rate, highlighting how "rare" errors escalate into catastrophic outcomes. Results reveal serious vulnerabilities: several models achieved a 0% success rate in ASCII navigation, while in a simulated fire drill, models instructed robots to move toward hazardous areas instead of emergency exits. Our findings lead to a sobering conclusion: current LLMs are not ready for direct deployment in safety-critical systems. A 99% accuracy rate is dangerously misleading in robotics, as it implies one out of every hundred executions could result in catastrophic harm. We demonstrate that even state-of-the-art models cannot guarantee safety, and absolute reliance on them creates unacceptable risks.

๐Ÿ” Key Points

  • The paper identifies critical vulnerabilities in the decision-making capabilities of Large Language Models (LLMs) in safety-critical robotic contexts, showing they are not robust enough for real-world deployment.
  • Through systematic evaluation across three categories of tasksโ€”Complete Information, Incomplete Information, and Safety-Oriented Spatial Reasoning (SOSR)โ€”the authors reveal how LLMs collapse under increasing complexity and uncertainty.
  • Findings indicate that even state-of-the-art LLMs can produce catastrophic errors, such as instructing robots to enter hazardous areas instead of safe exits during emergency scenarios.
  • The authors emphasize the misleading nature of aggregate accuracy metrics, demonstrating that a seemingly high success rate (e.g., 99%) can still result in significant risks when applied in safety-critical applications.
  • The study proposes a framework for assessing the reliability and safety of LLMs in robotic decisions, which could serve as a benchmark for future research and application.

๐Ÿ’ก Why This Paper Matters

This paper is crucial as it highlights the potentially lethal implications of deploying LLMs in safety-critical robotic systems, providing empirical evidence of their vulnerabilities in real-life scenarios. It underscores the urgent need for more robust and reliable AI systems that can guarantee human safety, prompting the AI research community to rethink how model performance is measured and validated.

๐ŸŽฏ Why It's Interesting for AI Security Researchers

For AI security researchers, this paper addresses fundamental risks associated with LLM deployment in safety-sensitive environments, providing a comprehensive analysis of decision-making failures that could lead to harm. The findings underscore the importance of developing safety protocols and better evaluation frameworks, making this research relevant for establishing safer AI systems in real-world applications.

๐Ÿ“š Read the Full Paper