MacPrompt: Macaronic-guided Jailbreak against Text-to-Image Models

Authors: Xi Ye, Yiwen Liu, Lina Wang, Run Wang, Geying Yang, Yufei Hou, Jiayi Yu

Published: 2026-01-12

arXiv ID: 2601.07141v1

Added to Library: 2026-01-13 04:01 UTC

Red Teaming

📄 Abstract

Text-to-image (T2I) models have raised increasing safety concerns due to their capacity to generate NSFW content and other banned material. To mitigate these risks, safety filters and concept removal techniques have been introduced to block inappropriate prompts or erase sensitive concepts from the models. However, existing defense methods are not well prepared to handle diverse adversarial prompts. In this work, we introduce MacPrompt, a novel black-box and cross-lingual attack that reveals previously overlooked vulnerabilities in T2I safety mechanisms. Unlike existing attacks that rely on synonym substitution or prompt obfuscation, MacPrompt constructs macaronic adversarial prompts by performing cross-lingual character-level recombination of harmful terms, enabling fine-grained control over both semantics and appearance. By leveraging this design, MacPrompt crafts prompts with high semantic similarity to the original harmful inputs (up to 0.96) while bypassing major safety filters (up to 100%). More critically, it achieves attack success rates as high as 92% for sex-related content and 90% for violence-related content, effectively breaking even state-of-the-art concept removal defenses. These results underscore the pressing need to reassess the robustness of existing T2I safety mechanisms against linguistically diverse and fine-grained adversarial strategies.
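
The core operation named in the abstract, cross-lingual character-level recombination, can be pictured with a minimal sketch. The function below is a hypothetical reconstruction, not the paper's implementation: it simply splices fixed-width character spans from several translations of the same term into one mixed-language token, whereas the paper's actual candidate generation and scoring are more involved.

```python
# Hypothetical sketch of cross-lingual character-level recombination.
# This is NOT the paper's implementation; it only illustrates the idea of
# building a "macaronic" token by splicing character spans drawn from
# translations of the same term in different languages.

def char_spans(word: str, span: int) -> list[str]:
    """Split a word into fixed-width character spans."""
    return [word[i:i + span] for i in range(0, len(word), span)]

def macaronic_token(translations: list[str], span: int = 2) -> str:
    """Round-robin over the translations, taking the i-th span from
    translation i mod k, so the result mixes languages while keeping
    recognizable fragments of the original term."""
    chunked = [char_spans(t, span) for t in translations]
    pieces = []
    for i in range(max(len(c) for c in chunked)):
        source = chunked[i % len(chunked)]
        if i < len(source):
            pieces.append(source[i])
    return "".join(pieces)

# Benign illustration: mixing translations of "weapon".
print(macaronic_token(["weapon", "arma", "Waffe"]))  # -> "wemae"
```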

🔍 Key Points

  • Introduces MacPrompt, a novel black-box attack that exploits cross-lingual prompt manipulation to bypass safety filters in Text-to-Image (T2I) models.
  • Builds macaronic substitutes via character-level recombination of harmful terms across languages, preserving high semantic similarity while evading detection (a similarity check of this kind is sketched after this list) and achieving attack success rates of 92% on sex-related and 90% on violence-related prompts.
  • Extensive experiments show that MacPrompt defeats both text filters and advanced concept removal defenses, exposing significant vulnerabilities in current T2I safety mechanisms.
  • The results call for a comprehensive reassessment of defense strategies against multilingual and fine-grained adversarial attacks on generative models.
  • MacPrompt provides a framework for further research into cross-lingual adversarial robustness, broadening how generative models are evaluated for safety.
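
As a companion to the similarity figures above, here is a minimal sketch of how semantic closeness between an original prompt and a macaronic candidate could be scored. The encoder choice (a multilingual sentence-transformers model) is an assumption made for illustration; the paper does not specify this exact setup here.

```python
# Minimal sketch of screening macaronic candidates by semantic similarity.
# Assumption: a multilingual sentence-transformers encoder; the paper's
# actual similarity metric and encoder may differ.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def semantic_similarity(original: str, candidate: str) -> float:
    """Cosine similarity between sentence embeddings of the two prompts."""
    emb = model.encode([original, candidate], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

def keep_close_candidates(original: str, candidates: list[str],
                          threshold: float = 0.9) -> list[str]:
    """Retain candidates whose meaning stays near the original prompt,
    in the spirit of the paper's reported similarity scores (up to 0.96)."""
    return [c for c in candidates if semantic_similarity(original, c) >= threshold]
```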

💡 Why This Paper Matters

This paper advances our understanding of where the safety mechanisms of Text-to-Image models break down. By demonstrating the effectiveness of its novel method, MacPrompt, it exposes the inadequacy of current defenses against adversarial prompting. These findings are vital for refining future safety protocols and for the responsible deployment of systems capable of generating sensitive content.

🎯 Why It's Interesting for AI Security Researchers

This paper is significant to AI security researchers because it not only demonstrates vulnerabilities in existing T2I safety mechanisms but also introduces a practical avenue for testing and improving those defenses. The cross-lingual nature of the attack highlights a gap in current methods for securing generative models against adversarial misuse, underscoring the need for more robust and sophisticated defensive strategies.

📚 Read the Full Paper