Two Frames Matter: A Temporal Attack for Text-to-Video Model Jailbreaking

Authors: Moyang Chen, Zonghao Ying, Wenzhuo Xu, Quancheng Zou, Deyue Zhang, Dongdong Yang, Xiangzheng Zhang

Published: 2026-03-07

arXiv ID: 2603.07028v1

Added to Library: 2026-03-10 03:01 UTC

Red Teaming

📄 Abstract

Recent text-to-video (T2V) models can synthesize complex videos from lightweight natural language prompts, raising urgent concerns about safety alignment in the event of real-world misuse. Prior jailbreak attacks typically rewrite unsafe prompts into paraphrases that evade content filters while preserving meaning. Yet these approaches often retain explicit sensitive cues in the input text and therefore overlook a deeper, video-specific weakness. In this paper, we identify a temporal trajectory infilling vulnerability of T2V systems under fragmented prompts: when the prompt specifies only sparse boundary conditions (e.g., start and end frames) and leaves the intermediate evolution underspecified, the model may autonomously reconstruct a plausible trajectory that includes harmful intermediate frames, even though the prompt appears benign to input- or output-side filtering. Building on this observation, we propose TFM, a fragmented prompting framework that converts an originally unsafe request into a temporally sparse two-frame extraction and further reduces overtly sensitive cues via implicit substitution. Extensive evaluations across multiple open-source and commercial T2V models demonstrate that TFM consistently enhances jailbreak effectiveness, achieving up to a 12% increase in attack success rate on commercial systems. Our findings highlight the need for temporally aware safety mechanisms that account for model-driven completion beyond prompt surface form.

🔍 Key Points

  • Identification of a video-specific vulnerability in text-to-video (T2V) models, whereby the model infills harmful intermediate frames from the sparse boundary conditions of a fragmented prompt.
  • Development of a two-step framework, TFM (Two Frames Matter), which uses Temporal Boundary Prompting (TBP) and a Covert Substitution Mechanism (CSM) to enhance the effectiveness of jailbreak attacks on T2V models.
  • Empirical validation across multiple open-source and commercial T2V models demonstrating that TFM achieves up to a 12% increase in the attack success rate compared to existing methods.
  • Introduction of a novel threat model for T2V systems, establishing a stringent black-box setting for evaluating prompt vulnerabilities against safety filters.
  • Direct implications for the design of safer and more robust T2V models, highlighting the importance of temporally aware safety mechanisms.

💡 Why This Paper Matters

This paper presents significant advancements in understanding the vulnerabilities of text-to-video models, specifically how they can be manipulated through temporal prompt engineering. By demonstrating the effectiveness of the TFM framework in executing successful jailbreak attacks, the authors emphasize the urgent need for improved safety mechanisms, particularly as T2V technology becomes more prevalent and potent in real-world applications. The findings underscore the imperative for ongoing research into AI safety, ensuring responsible deployment of advanced generative systems.

🎯 Why It's Interesting for AI Security Researchers

This paper is of interest to AI security researchers because it unveils a sophisticated attack vector specific to rapidly evolving text-to-video generative systems. It emphasizes the interplay between model behavior and prompt engineering, serving as a reference for developing countermeasures against the identified vulnerability. Furthermore, it provides empirical evidence of the effectiveness of a novel attack methodology, which is critical for anticipating and mitigating risks associated with AI-generated content.

📚 Read the Full Paper