← Back to Library

TraceSafe: A Systematic Assessment of LLM Guardrails on Multi-Step Tool-Calling Trajectories

Authors: Yen-Shan Chen, Sian-Yao Huang, Cheng-Lin Yang, Yun-Nung Chen

Published: 2026-04-08

arXiv ID: 2604.07223v1

Added to Library: 2026-04-09 02:00 UTC

📄 Abstract

As large language models (LLMs) evolve from static chatbots into autonomous agents, the primary vulnerability surface shifts from final outputs to intermediate execution traces. While safety guardrails are well-benchmarked for natural language responses, their efficacy remains largely unexplored within multi-step tool-use trajectories. To address this gap, we introduce TraceSafe-Bench, the first comprehensive benchmark specifically designed to assess mid-trajectory safety. It encompasses 12 risk categories, ranging from security threats (e.g., prompt injection, privacy leaks) to operational failures (e.g., hallucinations, interface inconsistencies), featuring over 1,000 unique execution instances. Our evaluation of 13 LLM-as-a-guard models and 7 specialized guardrails yields three critical findings: 1) Structural Bottleneck: Guardrail efficacy is driven more by structural data competence (e.g., JSON parsing) than semantic safety alignment. Performance correlates strongly with structured-to-text benchmarks ($ρ=0.79$) but shows near-zero correlation with standard jailbreak robustness. 2) Architecture over Scale: Model architecture influences risk detection performance more significantly than model size, with general-purpose LLMs consistently outperforming specialized safety guardrails in trajectory analysis. 3) Temporal Stability: Accuracy remains resilient across extended trajectories. Increased execution steps allow models to pivot from static tool definitions to dynamic execution behaviors, actually improving risk detection performance in later stages. Our findings suggest that securing agentic workflows requires jointly optimizing for structural reasoning and safety alignment to effectively mitigate mid-trajectory risks.

🔍 Key Points

  • Introduction of the inscriptive jailbreak attack, enabling harmful textual payloads in visually benign outputs of text-to-image models.
  • Development of Etch, a compositional optimization framework that decomposes prompts into three orthogonal layers to enhance the effectiveness of inscriptive attacks.
  • Demonstrated an average attack success rate of 65.57% across various T2I models, with peak success rates reaching 91.00%, significantly outperforming existing methods.
  • Exposed vulnerabilities in current safety models, highlighting the need for advanced typography-aware defense mechanisms.
  • Laid the groundwork for future research on multimodal defenses against inscriptive attacks.

💡 Why This Paper Matters

The paper presents a significant advancement in understanding exploitative techniques utilized against text-to-image AI models, showcasing the critical vulnerabilities of existing safety measures. It underscores the urgent necessity for improved protective strategies tailored to guard against novel attack vectors involving embedded harmful content.

🎯 Why It's Interesting for AI Security Researchers

This paper is of high interest to AI security researchers as it reveals and formalizes a new and previously overlooked attack surface against text-to-image models. The introduction of the inscriptive jailbreak demonstrates how adversaries can exploit advanced AI capabilities to generate harmful output, thus necessitating a reevaluation of security protocols and defense mechanisms.

📚 Read the Full Paper