Steering Autoregressive Music Generation with Recursive Feature Machines

Authors: Daniel Zhao, Daniel Beaglehole, Taylor Berg-Kirkpatrick, Julian McAuley, Zachary Novack

Published: 2025-10-21

arXiv ID: 2510.19127v1

Added to Library: 2025-11-14 23:08 UTC

📄 Abstract

Controllable music generation remains a significant challenge, with existing methods often requiring model retraining or introducing audible artifacts. We introduce MusicRFM, a framework that adapts Recursive Feature Machines (RFMs) to enable fine-grained, interpretable control over frozen, pre-trained music models by directly steering their internal activations. RFMs analyze a model's internal gradients to produce interpretable "concept directions", or specific axes in the activation space that correspond to musical attributes like notes or chords. We first train lightweight RFM probes to discover these directions within MusicGen's hidden states; then, during inference, we inject them back into the model to guide the generation process in real-time without per-step optimization. We present advanced mechanisms for this control, including dynamic, time-varying schedules and methods for the simultaneous enforcement of multiple musical properties. Our method successfully navigates the trade-off between control and generation quality: we can increase the accuracy of generating a target musical note from 0.23 to 0.82, while text prompt adherence remains within approximately 0.02 of the unsteered baseline, demonstrating effective control with minimal impact on prompt fidelity. We release code to encourage further exploration on RFMs in the music domain.
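The probe-training step can be made concrete with a small sketch. Below is a minimal, heavily hedged NumPy implementation of a Recursive Feature Machine in the spirit of the RFM literature: kernel ridge regression whose feature matrix M is iteratively updated from the average gradient outer product (AGOP) of the fitted predictor. The Gaussian kernel, the hyperparameters, and the idea of reading a concept direction off the top eigenvector of M are illustrative assumptions, not the paper's actual recipe.

```python
import numpy as np

def gaussian_kernel(X, Z, M, L=5.0):
    """K[i, j] = exp(-(x_i - z_j)^T M (x_i - z_j) / (2 L^2))."""
    XM, ZM = X @ M, Z @ M
    d2 = (XM * X).sum(1)[:, None] + (ZM * Z).sum(1)[None, :] - 2.0 * XM @ Z.T
    return np.exp(-np.clip(d2, 0.0, None) / (2.0 * L ** 2))

def rfm_probe(X, y, iters=3, ridge=1e-3, L=5.0):
    """Fit an RFM probe on hidden states X (n, d) with concept labels y (n,).

    Returns the learned feature matrix M; taking its top eigenvector is one
    plausible (assumed) way to extract a steering direction.
    """
    n, d = X.shape
    M = np.eye(d)
    for _ in range(iters):
        K = gaussian_kernel(X, X, M, L)
        alpha = np.linalg.solve(K + ridge * np.eye(n), y)  # kernel ridge fit
        # AGOP update: grad f(x_i) = -(1/L^2) * sum_j alpha_j K[i, j] M (x_i - x_j)
        G = np.empty((n, d))
        for i in range(n):
            diffs = X[i] - X                       # rows are x_i - x_j
            G[i] = -((alpha * K[i])[:, None] * (diffs @ M)).sum(0) / L ** 2
        M = G.T @ G / n                            # average gradient outer product
    return M

# Hypothetical extraction of a concept direction from the probe:
# eigvals, eigvecs = np.linalg.eigh(rfm_probe(hidden_states, note_labels))
# concept_direction = eigvecs[:, -1]   # top-eigenvalue direction
```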

🔍 Key Points

  • MusicRFM adapts Recursive Feature Machines (RFMs) to steer frozen, pre-trained music models by injecting interpretable "concept directions", axes in activation space corresponding to musical attributes like notes or chords, directly into their internal activations, with no retraining required.
  • Lightweight RFM probes are first trained on MusicGen's hidden states to discover these directions; at inference, they are injected back into the model to guide generation in real time without per-step optimization.
  • The framework supports advanced control mechanisms, including dynamic, time-varying steering schedules and simultaneous enforcement of multiple musical properties (a sketch follows this list).
  • The method navigates the control-versus-quality trade-off: target-note generation accuracy rises from 0.23 to 0.82 while text prompt adherence stays within approximately 0.02 of the unsteered baseline.
  • Code is released to encourage further exploration of RFMs in the music domain.
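To make the inference-time control step concrete, here is a hypothetical PyTorch sketch of injecting concept directions into a frozen model's hidden states during decoding, with a linear time-varying ramp and an additive combination of two directions. The layer path, the coefficient `alpha`, the ramp shape, and the combination rule are assumptions for illustration; MusicGen's internals and the paper's actual schedules may differ.

```python
import torch

class ScheduledSteering:
    """Forward hook that adds a scheduled mix of unit-normalized concept
    directions to a transformer layer's hidden-state output."""

    def __init__(self, directions, weights, total_steps, alpha=4.0):
        self.dirs = [d / d.norm() for d in directions]   # unit-normalize
        self.weights = weights
        self.total = total_steps
        self.alpha = alpha          # global steering strength (assumed value)
        self.step = 0

    def __call__(self, module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        ramp = min(self.step / max(self.total - 1, 1), 1.0)  # 0 -> 1 over decoding
        delta = sum(w * d for d, w in zip(self.dirs, self.weights))
        steered = hidden + self.alpha * ramp * delta.to(hidden.device, hidden.dtype)
        self.step += 1
        return (steered,) + output[1:] if isinstance(output, tuple) else steered

# Usage (module names are assumptions; MusicGen's layout may differ):
# steer = ScheduledSteering([d_note, d_chord], weights=[1.0, 0.5], total_steps=T)
# handle = model.decoder.layers[12].register_forward_hook(steer)
# tokens = model.generate(**prompt_inputs)   # frozen weights, steered decoding
# handle.remove()                            # detach to restore unsteered behavior
```

Because the hook only perturbs activations, the underlying weights stay frozen, which matches the abstract's emphasis on steering without retraining or per-step optimization.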

💡 Why This Paper Matters

The paper presents MusicRFM as a practical route to fine-grained, interpretable control over frozen, pre-trained music generators. Existing approaches to controllable music generation often require retraining or introduce audible artifacts; by instead steering internal activations along RFM-derived concept directions at inference time, MusicRFM raises target-note accuracy from 0.23 to 0.82 while keeping text prompt adherence within roughly 0.02 of the unsteered baseline.

🎯 Why It's Interesting for AI Security Researchers

Although the application domain is music, the underlying machinery, training lightweight probes on a frozen model's hidden states to recover interpretable concept directions and then injecting those directions at inference time, is the same activation-steering toolkit studied in AI safety and interpretability research. The paper's quantified trade-off between steering strength and output fidelity, along with its time-varying schedules and multi-concept enforcement, offers a concrete testbed for researchers examining how reliably internal directions can control the behavior of generative models.

📚 Read the Full Paper