An LLM-driven Scenario Generation Pipeline Using an Extended Scenic DSL for Autonomous Driving Safety Validation

Authors: Fida Khandaker Safa, Yupeng Jiang, Xi Zheng

Published: 2026-02-24

arXiv ID: 2602.20644v1

Added to Library: 2026-02-25 03:02 UTC

Safety

📄 Abstract

Real-world crash reports, which combine textual summaries and sketches, are valuable for scenario-based testing of autonomous driving systems (ADS). However, current methods cannot effectively translate this multimodal data into precise, executable simulation scenarios, hindering the scalability of ADS safety validation. In this work, we propose a scalable and verifiable pipeline that uses a large language model (GPT-4o mini) and a probabilistic intermediate representation (an Extended Scenic domain-specific language) to automatically extract semantic scenario configurations from crash reports and generate corresponding simulation-ready scenarios. Unlike earlier approaches such as ScenicNL and LCTGen (which generate scenarios directly from text) or TARGET (which uses deterministic mappings from traffic rules), our method introduces an intermediate Scenic DSL layer to separate high-level semantic understanding from low-level scenario rendering, reducing errors and capturing real-world variability. We evaluated the pipeline on cases from the NHTSA CIREN database. The results show high accuracy in knowledge extraction: 100% correctness for environmental and road network attributes, and 97% and 98% for oracle and actor trajectories, respectively, compared to human-derived ground truth. We executed the generated scenarios in the CARLA simulator using the Autoware driving stack, and they consistently triggered the intended traffic-rule violations (such as opposite-lane crossing and red-light running) across 2,000 scenario variations. These findings demonstrate that the proposed pipeline provides a legally grounded, scalable, and verifiable approach to ADS safety validation.
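The abstract's central design choice is the intermediate representation: the LLM first emits a structured scenario configuration, which is then rendered into a probabilistic DSL rather than directly into simulator code. Below is a minimal Python sketch of that two-stage idea. All names here (`ScenarioConfig`, `render_scenic`, the behavior identifiers) are illustrative assumptions, not the paper's actual Extended Scenic grammar.

```python
from dataclasses import dataclass

# Hypothetical intermediate representation: the structured config an LLM
# might extract from a crash report before any simulator-specific rendering.
@dataclass
class ScenarioConfig:
    weather: str          # e.g. "rain", "clear"
    road_type: str        # e.g. "signalized four-way intersection"
    ego_behavior: str     # maneuver attributed to the ego vehicle
    actor_behavior: str   # maneuver attributed to the other actor
    oracle: str           # traffic-rule violation the scenario should trigger

def render_scenic(cfg: ScenarioConfig) -> str:
    """Render the config as a Scenic-flavoured snippet (illustrative only)."""
    return "\n".join([
        f"param weather = '{cfg.weather}'",
        f"# road network: {cfg.road_type}",
        f"ego = Car with behavior {cfg.ego_behavior}",
        f"adversary = Car with behavior {cfg.actor_behavior}",
        f"# oracle: detect {cfg.oracle}",
    ])

cfg = ScenarioConfig(
    weather="rain",
    road_type="signalized four-way intersection",
    ego_behavior="FollowLaneBehavior",
    actor_behavior="RunRedLightBehavior",
    oracle="red-light running",
)
snippet = render_scenic(cfg)
print(snippet)
```

Separating extraction (filling the dataclass) from rendering (emitting DSL text) is what lets each stage be validated independently, which is the error-reduction argument the paper makes for its Scenic DSL layer.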

🔍 Key Points

  • Development of an automated pipeline that converts multimodal crash reports into simulation-ready scenarios using a large language model (LLM) and an Extended Scenic domain-specific language (DSL).
  • Introduction of an Extended Scenic DSL as an intermediate layer, allowing separation of semantic understanding from low-level scenario rendering, which enhances accuracy and reduces hallucination errors in scenario generation.
  • Achieved high representation accuracy against human-derived ground truth: 100% correctness for environmental and road network attributes, and 97% and 98% for oracle and actor trajectories, respectively, demonstrating the effectiveness of the proposed knowledge extraction process.
  • Execution of thousands of generated scenarios in the CARLA simulator, which triggered realistic traffic-rule violations and successfully recreated critical unsafe scenarios, validating the utility of the generated scenarios for autonomous driving system testing.
  • Proven scalability of the pipeline, enabling systematic testing of numerous variations from a single crash report, which significantly enhances coverage of rare and complex driving situations.
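The scalability point above rests on the probabilistic nature of the DSL: one extracted configuration defines parameter distributions from which many concrete variants are sampled. The sketch below mimics that sampling step in plain Python; the parameter names and perturbation ranges are assumptions for illustration, not values from the paper.

```python
import random

def sample_variations(base_speed: float, base_gap: float, n: int, seed: int = 0):
    """Sample n concrete scenario variants by perturbing continuous
    parameters, mimicking how a probabilistic DSL turns a single crash
    report into many executable test cases."""
    rng = random.Random(seed)  # fixed seed for reproducible test suites
    return [
        {
            "adversary_speed": rng.uniform(0.8, 1.2) * base_speed,  # m/s
            "initial_gap": rng.uniform(0.7, 1.3) * base_gap,        # m
        }
        for _ in range(n)
    ]

# e.g. 2,000 variants of one red-light-running scenario
variants = sample_variations(base_speed=12.0, base_gap=30.0, n=2000)
```

Running each sampled variant through the simulator while checking the same oracle (e.g. red-light running) is what turns a single report into systematic coverage of nearby edge cases.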

💡 Why This Paper Matters

This paper advances the validation of autonomous driving systems by automating the generation of test scenarios from real-world crash reports. The pipeline improves the accuracy of scenario representation while making safety testing scalable: a single report can yield thousands of executable variants. Its ability to efficiently reproduce critical edge cases gives the work direct practical value for future safety validation of autonomous vehicles.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers will find this paper relevant because it addresses the validation of autonomous systems against failures arising from unforeseen driving scenarios. By leveraging LLMs and structured scenario generation, the work exposes vulnerabilities in existing autonomous driving stacks and offers a rigorous method for testing their robustness. Understanding how AI can uncover edge cases and traffic-rule violations is essential for improving the security and reliability of autonomous vehicles, and ultimately for safer AI applications in transportation.
