Infinite-Story: A Training-Free Consistent Text-to-Image Generation

Authors: Jihun Park, Kyoungmin Lee, Jongmin Gim, Hyeonseo Jo, Minseok Oh, Wonhyeok Choi, Kyumin Hwang, Jaeyeul Kim, Minwoo Choi, Sunghoon Im

Published: 2025-11-17

arXiv ID: 2511.13002v1

Added to Library: 2025-11-18 04:00 UTC

📄 Abstract

We present Infinite-Story, a training-free framework for consistent text-to-image (T2I) generation tailored for multi-prompt storytelling scenarios. Built upon a scale-wise autoregressive model, our method addresses two key challenges in consistent T2I generation: identity inconsistency and style inconsistency. To overcome these issues, we introduce three complementary techniques: Identity Prompt Replacement, which mitigates context bias in text encoders to align identity attributes across prompts; and a unified attention guidance mechanism comprising Adaptive Style Injection and Synchronized Guidance Adaptation, which jointly enforce global style and identity appearance consistency while preserving prompt fidelity. Unlike prior diffusion-based approaches that require fine-tuning or suffer from slow inference, Infinite-Story operates entirely at test time, delivering high identity and style consistency across diverse prompts. Extensive experiments demonstrate that our method achieves state-of-the-art generation performance, while offering over 6X faster inference (1.72 seconds per image) than the existing fastest consistent T2I models, highlighting its effectiveness and practicality for real-world visual storytelling.
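
The abstract's description of Identity Prompt Replacement suggests a simple pre-encoding step: rewrite each story prompt so the text encoder always sees the same identity attributes. The sketch below is only an illustration of that idea under assumed inputs; the canonical description, the reference patterns, and the replace_identity helper are hypothetical and not taken from the paper.

```python
import re

# Illustrative sketch only: the paper's exact Identity Prompt Replacement procedure
# is not reproduced here. This toy version rewrites loose references to the
# protagonist into one canonical identity description before text encoding, so the
# encoder sees the same identity attributes in every prompt. The canonical string
# and the reference patterns are hypothetical.

CANONICAL_IDENTITY = "a young woman with short red hair and a green coat"

# Hypothetical phrasings that later prompts might use for the same character.
IDENTITY_REFERENCES = [r"\bshe\b", r"\bthe woman\b", r"\bthe girl\b", r"\bthe protagonist\b"]

def replace_identity(prompt: str, canonical: str = CANONICAL_IDENTITY) -> str:
    """Substitute loose identity references with the canonical description."""
    out = prompt
    for pattern in IDENTITY_REFERENCES:
        out = re.sub(pattern, canonical, out, flags=re.IGNORECASE)
    return out

story_prompts = [
    "a young woman with short red hair and a green coat wakes up in a log cabin",
    "she walks through a snowy forest at dawn",
    "the woman reaches a frozen lake and looks across it",
]

for prompt in map(replace_identity, story_prompts):
    print(prompt)
```

In this toy version every prompt ends up carrying the full identity description explicitly, which is one straightforward way to keep identity attributes aligned across prompts before they reach the text encoder.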

🔍 Key Points

  • Infinite-Story is a training-free framework for consistent text-to-image (T2I) generation in multi-prompt storytelling, built on a scale-wise autoregressive model rather than a diffusion backbone.
  • Identity Prompt Replacement mitigates context bias in the text encoder so that identity attributes stay aligned across all prompts in a story.
  • A unified attention guidance mechanism, combining Adaptive Style Injection and Synchronized Guidance Adaptation, enforces global style and identity appearance consistency while preserving per-prompt fidelity (see the attention sketch after this list).
  • Because all techniques operate purely at test time, the method avoids the fine-tuning requirements and slow inference of prior diffusion-based consistent T2I approaches.
  • Experiments report state-of-the-art generation performance with over 6X faster inference (1.72 seconds per image) than the fastest existing consistent T2I models.
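
As a companion to the attention-guidance point above, the following sketch illustrates the generic shared-attention idea that many consistency methods build on: the image being generated attends not only to its own key/value features but also to features cached from a reference image. This is not the paper's Adaptive Style Injection or Synchronized Guidance Adaptation; the tensor shapes, the blending weight alpha, and the shared_attention function are assumptions for a toy example.

```python
import torch

# Illustrative sketch only, not the paper's method. It shows the generic
# "shared attention" idea behind many consistency techniques: when generating
# image i, its queries also attend to the key/value features of a reference
# image, which pulls the style and identity appearance toward the reference.

def shared_attention(q, k, v, k_ref, v_ref, alpha=0.5):
    """Single-head attention where the current image's tokens also attend to a
    reference image's cached key/value tokens.

    q:            (tokens, dim)      queries of the image being generated
    k, v:         (tokens, dim)      its own keys/values
    k_ref, v_ref: (tokens_ref, dim)  keys/values cached from the reference image
    alpha:        how strongly the reference features are weighted (0 = ignore)
    """
    k_all = torch.cat([k, alpha * k_ref], dim=0)   # scale reference logits by alpha
    v_all = torch.cat([v, v_ref], dim=0)
    scores = q @ k_all.T / (q.shape[-1] ** 0.5)
    weights = scores.softmax(dim=-1)
    return weights @ v_all

# Toy usage: 16 tokens of 64-dim features for the current and reference images.
torch.manual_seed(0)
dim, n_cur, n_ref = 64, 16, 16
q, k, v = (torch.randn(n_cur, dim) for _ in range(3))
k_ref, v_ref = torch.randn(n_ref, dim), torch.randn(n_ref, dim)
out = shared_attention(q, k, v, k_ref, v_ref, alpha=0.5)
print(out.shape)  # torch.Size([16, 64])
```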

💡 Why This Paper Matters

Infinite-Story matters because it shows that high identity and style consistency across a sequence of prompts can be achieved entirely at test time, without any fine-tuning, on a scale-wise autoregressive backbone. Combined with inference over 6X faster than the quickest existing consistent T2I models (1.72 seconds per image), this makes coherent multi-prompt visual storytelling practical for real-world, interactive use.

🎯 Why It's Interesting for AI Security Researchers

For AI security researchers, the most relevant aspect is that consistency is imposed purely through inference-time interventions, prompt rewriting before the text encoder and guidance applied inside the attention layers, with no changes to model weights. Understanding how strongly such test-time mechanisms can steer generation helps in assessing the controllability of generative models, and fast, consistent rendering of a fixed identity across many scenes is also directly relevant to work on synthetic-media detection and content provenance.

📚 Read the Full Paper