From Spatial to Actions: Grounding Vision-Language-Action Model in Spatial Foundation Priors

Authors: Zhengshen Zhang, Hao Li, Yalun Dai, Zhengbang Zhu, Lei Zhou, Chenchen Liu, Dong Wang, Francis E. H. Tay, Sijin Chen, Ziwei Liu, Yuxiao Liu, Xinghang Li, Pan Zhou

Published: 2025-10-20

arXiv ID: 2510.17439v1

Added to Library: 2025-11-14 23:09 UTC

📄 Abstract

Existing vision-language-action (VLA) models act in the 3D real world but are typically built on 2D encoders, leaving a spatial reasoning gap that limits generalization and adaptability. Recent 3D integration techniques for VLAs either require specialized sensors and transfer poorly across modalities, or inject weak cues that lack geometry and degrade vision-language alignment. In this work, we introduce FALCON (From Spatial to Action), a novel paradigm that injects rich 3D spatial tokens into the action head. FALCON leverages spatial foundation models to deliver strong geometric priors from RGB alone, and includes an Embodied Spatial Model that can optionally fuse depth or pose for higher fidelity when available, without retraining or architectural changes. To preserve language reasoning, spatial tokens are consumed by a Spatial-Enhanced Action Head rather than being concatenated into the vision-language backbone. These designs enable FALCON to address limitations in spatial representation, modality transferability, and alignment. In comprehensive evaluations across three simulation benchmarks and eleven real-world tasks, our proposed FALCON achieves state-of-the-art performance, consistently surpasses competitive baselines, and remains robust under clutter, spatial-prompt conditioning, and variations in object scale and height.
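The abstract's central design choice can be sketched in code: spatial tokens are consumed by a dedicated action head via cross-attention instead of being concatenated into the vision-language backbone, so the backbone's vision-language alignment is left untouched. The sketch below is illustrative only, assuming hypothetical module names and dimensions; it is not the paper's actual implementation.

```python
# Minimal sketch of the idea from the abstract: an action head cross-attends
# from vision-language (VL) tokens to 3D spatial tokens produced by a spatial
# foundation model, then decodes an action. All names/sizes are assumptions.
import torch
import torch.nn as nn


class SpatialEnhancedActionHead(nn.Module):
    """Hypothetical action head: fuses spatial tokens outside the VL backbone."""

    def __init__(self, d_model: int = 256, n_heads: int = 4, action_dim: int = 7):
        super().__init__()
        # Cross-attention: VL tokens (queries) attend to spatial tokens (keys/values).
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.to_action = nn.Linear(d_model, action_dim)

    def forward(self, vl_tokens: torch.Tensor, spatial_tokens: torch.Tensor) -> torch.Tensor:
        # vl_tokens:      (B, T, d) from the frozen vision-language backbone.
        # spatial_tokens: (B, S, d) geometric priors derived from RGB alone.
        fused, _ = self.cross_attn(query=vl_tokens,
                                   key=spatial_tokens,
                                   value=spatial_tokens)
        fused = self.norm(vl_tokens + fused)       # residual fusion
        return self.to_action(fused.mean(dim=1))   # pooled action prediction


head = SpatialEnhancedActionHead()
vl = torch.randn(2, 16, 256)   # VL backbone features (batch of 2)
sp = torch.randn(2, 32, 256)   # spatial tokens from the spatial model
actions = head(vl, sp)
print(actions.shape)           # torch.Size([2, 7])
```

Because the spatial tokens enter only through this head, swapping in higher-fidelity inputs (e.g. optional depth or pose) changes the token source, not the backbone, which matches the abstract's claim of modality flexibility without retraining the VLM.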

🔍 Key Points

  • FALCON injects rich 3D spatial tokens directly into the action head rather than concatenating them into the vision-language backbone, preserving language reasoning while adding geometric grounding.
  • Spatial foundation models supply strong geometric priors from RGB alone, removing the dependence on specialized 3D sensors that limits prior 3D-integration approaches.
  • An Embodied Spatial Model can optionally fuse depth or pose for higher fidelity when those modalities are available, without retraining or architectural changes.
  • The Spatial-Enhanced Action Head addresses three limitations of existing VLAs at once: weak spatial representation, poor modality transferability, and degraded vision-language alignment.
  • FALCON achieves state-of-the-art performance across three simulation benchmarks and eleven real-world tasks, remaining robust under clutter, spatial-prompt conditioning, and variations in object scale and height.

💡 Why This Paper Matters

The paper presents FALCON, a paradigm that closes the spatial reasoning gap in vision-language-action models by grounding actions in 3D geometric priors obtained from RGB alone. By routing spatial tokens through the action head instead of the vision-language backbone, FALCON improves spatial capability without sacrificing language alignment, and its optional fusion of depth or pose makes it practical across heterogeneous sensor setups.

🎯 Why It's Interesting for AI Security Researchers

For researchers concerned with the safety and reliability of embodied AI, this paper is relevant because action-generation errors in VLA models have direct physical consequences. FALCON's reported robustness under clutter, spatial-prompt conditioning, and variations in object scale and height speaks to distribution-shift resilience, and its design, which adds geometric grounding without disturbing vision-language alignment, offers a template for hardening VLA policies against spatially induced failure modes.
