EASE: Practical and Efficient Safety Alignment for Small Language Models

Authors: Haonan Shi, Guoli Wang, Tu Ouyang, An Wang

Published: 2025-11-09

arXiv ID: 2511.06512v1

Added to Library: 2025-11-11 05:01 UTC

Red Teaming

📄 Abstract

Small language models (SLMs) are increasingly deployed on edge devices, making their safety alignment crucial yet challenging. Current shallow alignment methods that rely on direct refusal of malicious queries fail to provide robust protection, particularly against adversarial jailbreaks. While deliberative safety reasoning alignment offers deeper alignment for defending against sophisticated attacks, effectively implanting such reasoning capability in SLMs with limited capabilities remains an open challenge. Moreover, safety reasoning incurs significant computational overhead as models apply reasoning to nearly all queries, making it impractical for resource-constrained edge deployment scenarios that demand rapid responses. We propose EASE, a novel framework that enables practical and Efficient safety Alignment for Small languagE models. Our approach first identifies the optimal safety reasoning teacher that can effectively distill safety reasoning capabilities to SLMs. We then align models to selectively activate safety reasoning for dangerous adversarial jailbreak queries while providing direct responses to straightforward malicious queries and general helpful tasks. This selective mechanism enables small models to maintain robust safety guarantees against sophisticated attacks while preserving computational efficiency for benign interactions. Experimental results demonstrate that EASE reduces jailbreak attack success rates by up to 17% compared to shallow alignment methods while reducing inference overhead by up to 90% compared to deliberative safety reasoning alignment, making it practical for real-world edge deployments of SLMs.
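To make the selective mechanism described in the abstract concrete, the sketch below shows one way such routing could look at inference time. Note that this is an illustration, not the paper's implementation: EASE aligns the model so that it makes this decision internally, whereas the sketch externalizes it with a hypothetical risk classifier; the `classify_risk` function, the `QueryRisk` labels, and the `model.generate(...)` keyword arguments are all assumptions introduced here.

```python
# Illustrative sketch only: EASE trains the model to decide this internally;
# the external classifier and generate() flags below are hypothetical.
from enum import Enum


class QueryRisk(Enum):
    BENIGN = "benign"                    # general helpful task
    PLAIN_MALICIOUS = "plain_malicious"  # straightforward harmful request
    ADVERSARIAL = "adversarial"          # jailbreak-style, obfuscated attack


def classify_risk(query: str) -> QueryRisk:
    """Hypothetical stand-in for the risk judgment an aligned SLM learns to
    make implicitly; not part of the paper's pipeline."""
    raise NotImplementedError


def respond(query: str, model) -> str:
    risk = classify_risk(query)
    if risk is QueryRisk.ADVERSARIAL:
        # Only sophisticated jailbreak attempts pay the cost of deliberative
        # safety reasoning over the harmful intent behind the query.
        return model.generate(query, enable_safety_reasoning=True)
    if risk is QueryRisk.PLAIN_MALICIOUS:
        # Straightforward malicious queries get a direct refusal, no reasoning.
        return model.generate(query, force_refusal=True)
    # Benign queries are answered directly, keeping latency low on edge devices.
    return model.generate(query)
```

The design point this illustrates is that the expensive reasoning path is reserved for the small fraction of inputs that actually need it, which is where the reported efficiency gains over always-on safety reasoning come from.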

🔍 Key Points

  • The EASE framework introduces a two-phase safety alignment methodology for small language models (SLMs), combining safety reasoning knowledge distillation with boundary calibration for selective reasoning activation (a hypothetical sketch of this two-phase data construction follows this list).
  • The framework significantly improves safety performance against adversarial jailbreak attacks, reducing attack success rates by up to 17% when compared to shallow alignment methods.
  • EASE reduces inference overhead by up to 90% compared to deliberative safety reasoning methods, thus enabling practical deployment in resource-constrained environments.
  • The research emphasizes the importance of selecting an appropriate safety reasoning teacher model, demonstrating that Large Reasoning Models (LRMs) are more effective than Large Language Models (LLMs) for distilling safety reasoning into SLMs.
  • EASE strikes a balance between safety and efficiency, preserving general task performance while enhancing safety robustness, making it suitable for practical applications.
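
The sketch below illustrates how a training mixture for the two phases named above might be assembled: distilled safety reasoning traces from a teacher LRM for adversarial jailbreak prompts, plus boundary-calibration examples that teach the model not to reason on plain malicious or benign queries. The class names, teacher API, and refusal text are assumptions made for illustration, not the paper's exact recipe.

```python
# Hypothetical sketch of a two-phase alignment mixture; field names, the
# teacher.generate() API, and the refusal string are assumptions.
from dataclasses import dataclass


@dataclass
class AlignmentExample:
    prompt: str
    target: str            # desired SLM output
    uses_reasoning: bool   # whether the target contains a safety reasoning trace


def build_distillation_set(jailbreak_prompts, teacher):
    """Phase 1: distill safety reasoning traces from the teacher LRM for
    adversarial jailbreak prompts."""
    examples = []
    for prompt in jailbreak_prompts:
        trace = teacher.generate(prompt)  # reasoning trace + safe final answer
        examples.append(AlignmentExample(prompt, trace, uses_reasoning=True))
    return examples


def build_calibration_set(malicious_prompts, benign_pairs):
    """Phase 2: boundary calibration -- examples where the model should NOT
    engage safety reasoning."""
    # Straightforward malicious prompts are paired with direct refusals.
    examples = [AlignmentExample(p, "I can't help with that.", False)
                for p in malicious_prompts]
    # Benign prompts keep their ordinary helpful answers.
    examples += [AlignmentExample(p, answer, False)
                 for p, answer in benign_pairs]
    return examples
```

Mixing both sets during alignment is what calibrates the boundary: the model sees reasoning traces only where deliberation is warranted, so it learns when to activate reasoning rather than applying it to every query.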

💡 Why This Paper Matters

This paper is critically relevant as it addresses the pressing challenge of ensuring safety alignment in small language models, which are widely used in resource-constrained environments. The proposed EASE framework not only enhances safety performance against sophisticated adversarial attacks but also maintains the efficiency necessary for real-world applications. Its innovative approach and empirical results present a significant advancement in the field of AI safety, demonstrating that it is possible to achieve both safety and efficiency in small models.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers would find this paper of great interest as it tackles the vulnerability of small language models to adversarial attacks, which is a key concern in AI application safety. The findings related to selective reasoning activation and model alignment strategies provide insights into designing more resilient models against malicious inputs. Furthermore, the evaluation of various safety alignment methods offers a practical framework that researchers can build upon to enhance the security and robustness of AI systems in real-world deployments.

📚 Read the Full Paper