VEIL: Jailbreaking Text-to-Video Models via Visual Exploitation from Implicit Language

Authors: Zonghao Ying, Moyang Chen, Nizhang Li, Zhiqiang Wang, Wenxin Zhang, Quanchen Zou, Zonglei Jing, Aishan Liu, Xianglong Liu

Published: 2025-11-17

arXiv ID: 2511.13127v1

Added to Library: 2025-11-18 04:00 UTC

Red Teaming

📄 Abstract

Jailbreak attacks can circumvent model safety guardrails and reveal critical blind spots. Prior attacks on text-to-video (T2V) models typically add adversarial perturbations to obviously unsafe prompts, which are often easy to detect and defend. In contrast, we show that benign-looking prompts containing rich, implicit cues can induce T2V models to generate semantically unsafe videos that both violate policy and preserve the original (blocked) intent. To realize this, we propose VEIL, a jailbreak framework that leverages T2V models' cross-modal associative patterns via a modular prompt design. Specifically, our prompts combine three components: neutral scene anchors, which provide the surface-level scene description extracted from the blocked intent to maintain plausibility; latent auditory triggers, textual descriptions of innocuous-sounding audio events (e.g., creaking, muffled noises) that exploit learned audio-visual co-occurrence priors to bias the model toward particular unsafe visual concepts; and stylistic modulators, cinematic directives (e.g., camera framing, atmosphere) that amplify and stabilize the latent trigger's effect. We formalize attack generation as a constrained optimization over the above modular prompt space and solve it with a guided search procedure that balances stealth and effectiveness. Extensive experiments over 7 T2V models demonstrate the efficacy of our attack, achieving a 23 percent improvement in average attack success rate in commercial models.
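To make the three-part prompt structure described above concrete, here is a minimal, hypothetical sketch of how such a composition could be represented. The class, field names, and example strings are illustrative assumptions drawn from the abstract's wording, not the paper's actual implementation or prompts.

```python
from dataclasses import dataclass

@dataclass
class VeilPrompt:
    """Illustrative container for VEIL's three prompt components (names are assumed)."""
    scene_anchor: str         # neutral surface-level scene extracted from the blocked intent
    auditory_trigger: str     # innocuous-sounding audio description exploiting audio-visual co-occurrence priors
    stylistic_modulator: str  # cinematic directive that amplifies and stabilizes the trigger's effect

    def render(self) -> str:
        # Concatenate the components into a single benign-looking T2V prompt.
        return f"{self.scene_anchor} {self.auditory_trigger} {self.stylistic_modulator}"

# Hypothetical benign-looking example built from the cue types named in the abstract.
prompt = VeilPrompt(
    scene_anchor="A dimly lit hallway in an old house at night.",
    auditory_trigger="Floorboards creak and muffled noises come from behind a closed door.",
    stylistic_modulator="Handheld camera, tense atmosphere, slow push-in framing.",
)
print(prompt.render())
```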

🔍 Key Points

  • VEIL jailbreaks text-to-video (T2V) models by exploiting their learned audio-visual co-occurrence priors: instead of perturbing explicitly unsafe prompts, it composes benign-looking components that steer the model toward unsafe visual content.
  • Attack generation is formalized as a constrained optimization over the modular prompt space and solved with a guided search that balances stealth against attack effectiveness (a hedged sketch of such a search loop follows this list), yielding a 23% improvement in average attack success rate on commercial T2V models.
  • The modular prompt design combines neutral scene anchors, latent auditory triggers, and stylistic modulators, allowing prompts to pass safety filters while still inducing the intended unsafe video content.
  • Extensive experiments across 7 T2V models validate VEIL's efficacy, showing it outperforms existing techniques and exposes vulnerabilities that evade traditional defenses, which struggle to flag harmful outputs generated from benign-looking prompts.
  • The study highlights critical limitations in existing T2V safety mechanisms and motivates future research on hardening models against such subtle, implicit attack vectors.
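As referenced above, the guided search over the modular prompt space could be organized roughly as follows. The component pools, scoring callables, threshold, and random sampling strategy are placeholders standing in for whatever stealth and effectiveness estimates the paper actually uses; this is an assumption-laden illustration, not the authors' procedure.

```python
import random
from typing import Callable, List, Tuple

def guided_search(
    anchors: List[str],
    triggers: List[str],
    modulators: List[str],
    stealth_score: Callable[[str], float],   # placeholder: e.g., probability of passing a prompt filter
    attack_score: Callable[[str], float],    # placeholder: e.g., a judge of generated-video harmfulness
    stealth_threshold: float = 0.8,
    iterations: int = 50,
    seed: int = 0,
) -> Tuple[str, float]:
    """Hypothetical guided search over the modular prompt space.

    Keeps only candidates whose stealth score clears a threshold (the constraint)
    and greedily tracks the highest attack score among them (the objective).
    """
    rng = random.Random(seed)
    best_prompt, best_score = "", float("-inf")
    for _ in range(iterations):
        # Sample one component of each type and compose the candidate prompt.
        candidate = " ".join(
            [rng.choice(anchors), rng.choice(triggers), rng.choice(modulators)]
        )
        if stealth_score(candidate) < stealth_threshold:
            continue  # violates the stealth constraint; discard the candidate
        score = attack_score(candidate)
        if score > best_score:
            best_prompt, best_score = candidate, score
    return best_prompt, best_score
```

In this sketch the stealth constraint is enforced as a hard filter and effectiveness is maximized greedily; the paper's actual procedure may weight or schedule these objectives differently.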

💡 Why This Paper Matters

This paper is crucial because it identifies and exploits a new class of vulnerabilities in T2V models, demonstrating how implicit knowledge encoded in these models can be manipulated through structured yet benign-looking prompts. The findings underscore the need to reassess the safety and security of generative AI systems, particularly as T2V models see wider deployment and could be misused to produce harmful content.

🎯 Why It's Interesting for AI Security Researchers

The work is highly relevant to AI security researchers because it challenges traditional notions of prompt safety and highlights the need for stronger protective measures in generative models. By demonstrating attack vectors that exploit learned cross-modal associations rather than overtly unsafe wording, this research can inform future security frameworks, defenses, and robustness assessments for generative AI applications.

📚 Read the Full Paper: https://arxiv.org/abs/2511.13127