
LumiTex: Towards High-Fidelity PBR Texture Generation with Illumination Context

Authors: Jingzhi Bao, Hongze Chen, Lingting Zhu, Chenyu Liu, Runze Zhang, Keyang Luo, Zeyu Hu, Weikai Chen, Yingda Yin, Xin Wang, Zehong Lin, Jun Zhang, Xiaoguang Han

Published: 2025-11-24

arXiv ID: 2511.19437v1

Added to Library: 2025-11-25 04:01 UTC

πŸ“„ Abstract

Physically-based rendering (PBR) provides a principled standard for realistic material-lighting interactions in computer graphics. Despite recent advances in generating PBR textures, existing methods fail to address two fundamental challenges: 1) material decomposition from image prompts under limited illumination cues, and 2) seamless and view-consistent texture completion. To this end, we propose LumiTex, an end-to-end framework that comprises three key components: (1) a multi-branch generation scheme that disentangles albedo and metallic-roughness under shared illumination priors for robust material understanding, (2) a lighting-aware material attention mechanism that injects illumination context into the decoding process for physically grounded generation of albedo, metallic, and roughness maps, and (3) a geometry-guided inpainting module based on a large view synthesis model that enriches texture coverage and ensures seamless, view-consistent UV completion. Extensive experiments demonstrate that LumiTex achieves state-of-the-art performance in texture quality, surpassing both existing open-source and commercial methods.
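
For readers who want a concrete picture of the lighting-aware material attention mentioned in the abstract, the following is a minimal, hypothetical PyTorch sketch: material-branch tokens cross-attend to an encoded illumination context before decoding. The class and argument names (LightingAwareMaterialAttention, illum_tokens) and all tensor shapes are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class LightingAwareMaterialAttention(nn.Module):
    """Cross-attention that injects illumination-context tokens into one
    material branch (e.g. albedo or metallic-roughness) during decoding.
    A hypothetical sketch, not the paper's implementation."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, material_tokens: torch.Tensor,
                illum_tokens: torch.Tensor) -> torch.Tensor:
        # material_tokens: (B, N, dim) latent tokens of one material branch
        # illum_tokens:    (B, M, dim) encoded illumination context
        q = self.norm(material_tokens)
        ctx, _ = self.attn(query=q, key=illum_tokens, value=illum_tokens)
        # Residual injection keeps the branch's own features dominant.
        return material_tokens + self.proj(ctx)


block = LightingAwareMaterialAttention(dim=256)
mat = torch.randn(2, 1024, 256)    # e.g. albedo-branch tokens (assumed shape)
light = torch.randn(2, 77, 256)    # shared illumination-context tokens (assumed shape)
out = block(mat, light)            # (2, 1024, 256)
```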

πŸ” Key Points

  • LumiTex, an end-to-end framework for generating physically-based rendering (PBR) textures, targeting two open problems: material decomposition from image prompts under limited illumination cues, and seamless, view-consistent texture completion.
  • A multi-branch generation scheme that disentangles albedo and metallic-roughness under shared illumination priors for robust material understanding (see the sketch after this list).
  • A lighting-aware material attention mechanism that injects illumination context into the decoding process, yielding physically grounded albedo, metallic, and roughness maps.
  • A geometry-guided inpainting module built on a large view synthesis model that enriches texture coverage and ensures seamless, view-consistent UV completion.
  • Extensive experiments showing state-of-the-art texture quality, surpassing existing open-source and commercial methods.
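
The multi-branch idea in the second point above can be sketched in the same spirit: two small decoder branches (albedo and metallic-roughness) condition on one shared set of illumination tokens. As before, BranchDecoder, illum_encoder, and every shape here are hypothetical stand-ins rather than the released architecture.

```python
import torch
import torch.nn as nn


class BranchDecoder(nn.Module):
    """One material branch: attends to shared illumination tokens, then
    predicts its own channels (RGB albedo, or metallic + roughness)."""

    def __init__(self, dim: int, out_channels: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.head = nn.Linear(dim, out_channels)

    def forward(self, tokens: torch.Tensor, illum_tokens: torch.Tensor) -> torch.Tensor:
        # Each branch attends to the *same* illumination tokens.
        ctx, _ = self.attn(tokens, illum_tokens, illum_tokens)
        return self.head(tokens + ctx)


dim = 256
illum_encoder = nn.Linear(3, dim)                    # stand-in for the shared illumination prior
albedo_branch = BranchDecoder(dim, out_channels=3)   # RGB albedo
mr_branch = BranchDecoder(dim, out_channels=2)       # metallic + roughness

tokens = torch.randn(1, 1024, dim)                   # shared latents from the image prompt
illum = illum_encoder(torch.randn(1, 64, 3))         # toy illumination samples
albedo = albedo_branch(tokens, illum)                # (1, 1024, 3)
metal_rough = mr_branch(tokens, illum)               # (1, 1024, 2)
```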

πŸ’‘ Why This Paper Matters

This paper matters because PBR is the prevailing standard for realistic material-lighting interaction in computer graphics, yet existing texture generation methods still struggle to decompose materials from image prompts under limited illumination cues and to complete textures seamlessly and consistently across views. LumiTex tackles both problems in a single end-to-end framework: illumination priors are shared across material branches, illumination context is injected directly into decoding, and a geometry-guided module performs UV inpainting. The reported results indicate state-of-the-art texture quality against both open-source and commercial methods.

🎯 Why It's Interesting for AI Researchers

Researchers working on generative models will find the paper's conditioning strategy instructive: illumination context is injected into the decoding process through a dedicated attention mechanism, while albedo and metallic-roughness are disentangled in separate branches that share the same illumination priors. The geometry-guided inpainting module, built on a large view synthesis model, is likewise a reusable pattern for achieving seamless, view-consistent UV completion. Together, these design choices show how explicit physical context can be folded into an end-to-end texture generation pipeline.

πŸ“š Read the Full Paper