
From Inference Routing to Agent Orchestration: Declarative Policy Compilation with Cross-Layer Verification

Authors: Huamin Chen, Xunzhuo Liu, Bowei He, Xue Liu

Published: 2026-03-28

arXiv ID: 2603.27299v1

Added to Library: 2026-03-31 02:01 UTC

📄 Abstract

The Semantic Router DSL is a non-Turing-complete policy language deployed in production for per-request LLM inference routing: content signals (embedding similarity, PII detection, jailbreak scoring) feed into weighted projections and priority-ordered decision trees that select a model, enforce privacy policies, and produce structured audit traces -- all from a single declarative source file. Prior work established conflict-free compilation for probabilistic predicates and positioned the DSL within the Workload-Router-Pool inference architecture. This paper extends the same language from stateless, per-request routing to multi-step agent workflows -- the full path from inference gateway to agent orchestration to infrastructure deployment. The DSL compiler emits verified decision nodes for orchestration frameworks (LangGraph, OpenClaw), Kubernetes artifacts (NetworkPolicy, Sandbox CRD, ConfigMap), YANG/NETCONF payloads, and protocol-boundary gates (MCP, A2A) -- all from the same source. Because the language is non-Turing-complete, the compiler guarantees exhaustive routing, conflict-free branching, referential integrity, and audit traces structurally coupled to the decision logic. Because signal definitions are shared across targets, a threshold change propagates from inference gateway to agent gate to infrastructure artifact in one compilation step -- eliminating cross-team coordination as the primary source of policy drift. We ground the approach in four pillars -- auditability, cost efficiency, verifiability, and tunability -- and identify the verification boundary at each layer.
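The abstract's routing model (content signals feeding a priority-ordered decision tree with a guaranteed default branch and a structurally coupled audit trace) can be sketched in plain Python. This is an illustrative reading only: the signal names, thresholds, and model labels below are hypothetical and do not reflect the paper's actual DSL syntax.

```python
from dataclasses import dataclass, field

@dataclass
class Signals:
    """Hypothetical per-request content signals."""
    similarity: float   # embedding similarity to a task category
    pii: float          # PII-detection score
    jailbreak: float    # jailbreak-likelihood score

@dataclass
class Decision:
    model: str
    trace: list = field(default_factory=list)  # (predicate, outcome) pairs

def route(sig: Signals) -> Decision:
    """Priority-ordered branches with a mandatory default, so routing is
    exhaustive; every predicate evaluation is appended to the trace, which
    couples the audit record to the decision logic itself."""
    trace = []
    if sig.jailbreak >= 0.8:               # highest-priority safety guard
        trace.append(("jailbreak>=0.8", True))
        return Decision("refusal-handler", trace)
    trace.append(("jailbreak>=0.8", False))
    if sig.pii >= 0.5:                     # privacy policy: keep PII on-prem
        trace.append(("pii>=0.5", True))
        return Decision("on-prem-model", trace)
    trace.append(("pii>=0.5", False))
    if sig.similarity >= 0.7:              # projection passes its threshold
        trace.append(("similarity>=0.7", True))
        return Decision("specialist-model", trace)
    trace.append(("similarity>=0.7", False))
    return Decision("general-model", trace)  # default branch => exhaustive
```

Because the thresholds are declared once, a change such as raising `pii >= 0.5` would, in the paper's scheme, propagate to every compilation target in one step rather than requiring per-team edits.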

🔍 Key Points

  • A systematic taxonomy of 190 security advisories in OpenClaw, organized by architectural layer and adversarial technique, giving a comprehensive view of vulnerabilities specific to AI agent frameworks.
  • Identification of a complete unauthenticated remote code execution (RCE) path through vulnerabilities in the Gateway and Node-Host subsystems, exposing critical flaws rooted in inter-layer trust assumptions.
  • Analysis showing that existing security mechanisms, notably the exec allowlist and the plugin/skill distribution system, are fundamentally flawed: they rely on closed-world assumptions and lack robust identity checks against mutable fields.
  • Proposed defenses, including a unified inter-layer policy enforcement model and context provenance tagging for inputs, to mitigate prompt injection and strengthen trust across the architecture.
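One possible reading of "context provenance tagging" is to label every piece of text entering the agent's context with its origin, so that policy checks can treat tool and remote content as data rather than instructions. The sketch below is a minimal illustration under that assumption; the origin categories and the `may_trigger_exec` check are hypothetical, not the paper's design.

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    USER = "user"      # direct operator input (most trusted)
    TOOL = "tool"      # output returned by a tool call
    REMOTE = "remote"  # fetched web or skill content (least trusted)

@dataclass(frozen=True)
class Tagged:
    """A context chunk carrying its provenance through the pipeline."""
    text: str
    origin: Origin

def may_trigger_exec(chunk: Tagged) -> bool:
    """Only operator-originated text may request privileged actions;
    tool and remote content never escalates to an instruction."""
    return chunk.origin is Origin.USER
```

The point of the design is that an injected instruction inside a web page or tool result arrives tagged `REMOTE` or `TOOL` and therefore cannot satisfy the exec gate, regardless of its wording.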

💡 Why This Paper Matters

This paper presents a critical analysis and categorization of security vulnerabilities within the OpenClaw AI agent framework, highlighting significant architectural flaws that have broad implications for the security of AI applications. By developing a comprehensive taxonomy and identifying systemic weaknesses, it lays the groundwork for future research and defense mechanisms aimed at improving the security posture of AI agent frameworks. Understanding these vulnerabilities is essential for ensuring safe and reliable implementations of AI technology.

🎯 Why It's Interesting for AI Security Researchers

This paper is of paramount interest to AI security researchers as it uncovers foundational vulnerabilities specific to AI-backed agent frameworks, an emerging area of concern in cybersecurity. The methods and findings provide a detailed insight into the security landscape of AI systems, emphasizing the need for robust defenses against sophisticated attacks that exploit the unique characteristics of these frameworks. Additionally, it contributes valuable knowledge that can inform future designs, audits, and security policies in AI technologies.
