
SegTune: Structured and Fine-Grained Control for Song Generation

Authors: Pengfei Cai, Joanna Wang, Haorui Zheng, Xu Li, Zihao Ji, Teng Ma, Zhongliang Liu, Chen Zhang, Pengfei Wan

Published: 2025-10-21

arXiv ID: 2510.18416v1

Added to Library: 2025-11-14 23:08 UTC

📄 Abstract

Recent advancements in song generation have shown promising results in generating songs from lyrics and/or global text prompts. However, most existing systems lack the ability to model the temporally varying attributes of songs, limiting fine-grained control over musical structure and dynamics. In this paper, we propose SegTune, a non-autoregressive framework for structured and controllable song generation. SegTune enables segment-level control by allowing users or large language models to specify local musical descriptions aligned to song sections. The segmental prompts are injected into the model by temporally broadcasting them to corresponding time windows, while global prompts influence the whole song to ensure stylistic coherence. To obtain accurate segment durations and enable precise lyric-to-music alignment, we introduce an LLM-based duration predictor that autoregressively generates sentence-level timestamped lyrics in LRC format. We further construct a large-scale data pipeline for collecting high-quality songs with aligned lyrics and prompts, and propose new evaluation metrics to assess segment-level alignment and vocal attribute consistency. Experimental results show that SegTune achieves superior controllability and musical coherence compared to existing baselines. See https://cai525.github.io/SegTune_demo for demos of our work.
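The abstract's core conditioning idea, broadcasting each segment-level prompt across its time window while a global prompt conditions every frame, can be sketched as follows. This is an illustrative toy, not the paper's actual implementation: the frame rate, embedding size, and the `embed` stand-in encoder are all assumptions.

```python
import numpy as np

FRAME_RATE = 2  # conditioning frames per second (assumed, not from the paper)
EMB_DIM = 4     # toy embedding size

def embed(text: str) -> np.ndarray:
    """Stand-in text encoder: a deterministic pseudo-embedding per prompt."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(EMB_DIM)

def broadcast_prompts(global_prompt, segments, total_sec):
    """Build a (frames, dim) conditioning matrix.

    Each segment prompt is copied into its [start, end) time window;
    the global prompt is added to every frame for stylistic coherence.
    """
    n_frames = int(total_sec * FRAME_RATE)
    cond = np.tile(embed(global_prompt), (n_frames, 1))
    for start, end, text in segments:
        lo, hi = int(start * FRAME_RATE), int(end * FRAME_RATE)
        cond[lo:hi] += embed(text)
    return cond

segments = [
    (0.0, 10.0, "soft piano intro"),
    (10.0, 25.0, "energetic chorus, full drums"),
]
cond = broadcast_prompts("pop ballad, female vocal", segments, total_sec=30.0)
print(cond.shape)  # (60, 4)
```

The segment boundaries themselves would come from the LLM-based duration predictor, which the paper says emits sentence-level timestamped lyrics in LRC format (lines such as `[00:12.50] first lyric line`), from which the per-section time windows can be read off.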

🔍 Key Points

  • SegTune is a non-autoregressive framework for structured song generation that enables segment-level control: users or large language models specify local musical descriptions aligned to individual song sections.
  • Segmental prompts are injected by temporally broadcasting them to their corresponding time windows, while a global prompt conditions the whole song to preserve stylistic coherence.
  • An LLM-based duration predictor autoregressively generates sentence-level timestamped lyrics in LRC format, yielding accurate segment durations and precise lyric-to-music alignment.
  • A large-scale data pipeline collects high-quality songs with aligned lyrics and prompts, and new evaluation metrics assess segment-level alignment and vocal attribute consistency.
  • Experiments show that SegTune achieves superior controllability and musical coherence compared to existing baselines.

💡 Why This Paper Matters

SegTune addresses a key limitation of existing song generation systems: the inability to model temporally varying musical attributes. By combining segment-level prompts, global conditioning, and LLM-based duration prediction, it enables structured, section-aware song generation with precise lyric-to-music alignment, supported by a large-scale data pipeline and new segment-level evaluation metrics.

🎯 Why It's Interesting for AI Security Researchers

While SegTune is not a security paper, its fine-grained conditioning mechanisms and its evaluation metrics for segment-level alignment and vocal attribute consistency are relevant to researchers who study and audit controllable generative media, for example when analyzing how precisely such systems can be steered or assessing provenance and misuse risks of AI-generated songs.

📚 Read the Full Paper