Rethinking Safety in LLM Fine-tuning: An Optimization Perspective

Authors: Minseon Kim, Jin Myung Kwak, Lama Alssum, Bernard Ghanem, Philip Torr, David Krueger, Fazl Barez, Adel Bibi

Published: 2025-08-17

arXiv ID: 2508.12531v1

Added to Library: 2025-08-19 04:03 UTC

Safety

📄 Abstract

Fine-tuning language models is commonly believed to inevitably harm their safety, i.e., refusing to respond to harmful user requests, even when using harmless datasets, thus requiring additional safety measures. We challenge this belief through systematic testing, showing that poor optimization choices, rather than inherent trade-offs, often cause safety problems, measured as harmful responses to adversarial prompts. By properly selecting key training hyper-parameters, e.g., learning rate, batch size, and gradient steps, we reduce unsafe model responses from 16% to approximately 5%, as measured by keyword matching, while maintaining utility performance. Based on this observation, we propose a simple exponential moving average (EMA) momentum technique in parameter space that preserves safety performance by creating a stable optimization path and retains the original pre-trained model's safety properties. Our experiments on the Llama families across multiple datasets (Dolly, Alpaca, ORCA) demonstrate that safety problems during fine-tuning can largely be avoided without specialized interventions, outperforming existing approaches that require additional safety data while offering practical guidelines for maintaining both model performance and safety during adaptation.
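
To make the hyper-parameter claim concrete, here is a rough illustration of the kind of conservative fine-tuning configuration the abstract points to: a small learning rate, a moderate effective batch size, and a capped number of gradient steps. This is a minimal sketch using Hugging Face `TrainingArguments`; the specific values are assumptions for illustration, not the paper's reported settings.

```python
from transformers import TrainingArguments

# Illustrative only: every value below is an assumption, not the paper's configuration.
training_args = TrainingArguments(
    output_dir="llama-dolly-ft",
    learning_rate=2e-5,             # small learning rate: large steps drift far from the safe pre-trained weights
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,  # effective batch size of 32
    max_steps=500,                  # cap gradient steps rather than training to convergence
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    logging_steps=50,
)
```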

🔍 Key Points

  • The paper challenges the belief that fine-tuning language models on harmless datasets will inevitably degrade their safety, highlighting that poor optimization choices are often the cause of safety risks.
  • By optimizing key hyper-parameters such as learning rate, batch size, and gradient steps, the authors demonstrate a reduction in unsafe model responses from 16% to about 5%.
  • Introduces a simple exponential moving average (EMA) momentum technique in parameter space during fine-tuning, which stabilizes the optimization path and preserves the safety properties of the pre-trained model (see the sketch after this list).
  • Experiments conducted on the Llama model families across multiple datasets (Dolly, Alpaca, ORCA) show that safety risks can largely be mitigated without additional safety interventions or datasets.
  • The proposed EMA technique significantly outperforms existing solutions that rely on external safety data, while still achieving competitive utility performance.
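
Below is a minimal sketch of the parameter-space EMA idea described in the key points, assuming a standard PyTorch fine-tuning loop. The `update_ema` helper, the decay value, and the commented usage loop are illustrative assumptions, not the authors' released code.

```python
import copy
import torch

@torch.no_grad()
def update_ema(ema_model, model, decay=0.999):
    """Blend the current fine-tuned weights into an EMA copy after each step.

    ema <- decay * ema + (1 - decay) * current
    With a high decay, the EMA weights move only slowly away from their
    starting point (the pre-trained model), which is the stabilizing effect
    the paper attributes to parameter-space momentum.
    """
    for ema_p, p in zip(ema_model.parameters(), model.parameters()):
        ema_p.mul_(decay).add_(p, alpha=1.0 - decay)

# Usage inside an ordinary fine-tuning loop (names are illustrative):
#   ema_model = copy.deepcopy(model)        # starts at the pre-trained weights
#   for batch in dataloader:
#       loss = model(**batch).loss
#       loss.backward()
#       optimizer.step()
#       optimizer.zero_grad()
#       update_ema(ema_model, model, decay=0.999)
#   # Evaluate or deploy ema_model to retain more of the original safety behavior.
```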

💡 Why This Paper Matters

This paper is important because it offers a fresh perspective on maintaining safety when fine-tuning large language models, demonstrating that safety issues often stem from suboptimal optimization choices rather than inherent trade-offs. The approach of applying exponential moving averages in parameter space provides a practical way to preserve model safety without compromising task performance, which is crucial for deploying fine-tuned language models in real-world applications.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers will find this paper of interest because it addresses a critical issue: language model safety can erode during fine-tuning, even on harmless data. The presented methods and findings offer practical optimization strategies for improving the robustness of AI systems against adversarial prompts, reducing potential misuse, and ultimately contributing to safer AI deployments.

📚 Read the Full Paper