
SecureBreak -- A dataset towards safe and secure models

Authors: Marco Arazzi, Vignesh Kumar Kembu, Antonino Nocera

Published: 2026-03-23

arXiv ID: 2603.21975v1

Added to Library: 2026-03-24 03:01 UTC

Red Teaming

πŸ“„ Abstract

Large language models are becoming pervasive core components in many real-world applications. As a consequence, security alignment represents a critical requirement for their safe deployment. Although previous related works focused primarily on model architectures and alignment methodologies, these approaches alone cannot ensure the complete elimination of harmful generations. This concern is reinforced by the growing body of scientific literature showing that attacks, such as jailbreaking and prompt injection, can bypass existing security alignment mechanisms. As a consequence, additional security strategies are needed both to provide qualitative feedback on the robustness of the obtained security alignment at the training stage, and to create an "ultimate" defense layer to block unsafe outputs possibly produced by deployed models. To provide a contribution in this scenario, this paper introduces SecureBreak, a safety-oriented dataset designed to support the development of AI-driven solutions for detecting harmful LLM outputs caused by residual weaknesses in security alignment. The dataset is highly reliable due to careful manual annotation, where labels are assigned conservatively to ensure safety. It performs well in detecting unsafe content across multiple risk categories. Tests with pre-trained LLMs show improved results after fine-tuning on SecureBreak. Overall, the dataset is useful both for post-generation safety filtering and for guiding further model alignment and security improvements.
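
The "ultimate defense layer" described in the abstract amounts to a post-generation safety filter that inspects each model output before it is returned to the user. The sketch below illustrates that idea in Python; the checkpoint name `my-org/securebreak-guard` and its `safe`/`unsafe` label strings are assumptions for illustration, not artifacts released with the paper.

```python
from transformers import pipeline

# Hypothetical checkpoint: a text classifier fine-tuned on SecureBreak-style
# (response, safe/unsafe) pairs. The model name and label strings below are
# illustrative assumptions, not details from the paper.
safety_filter = pipeline("text-classification", model="my-org/securebreak-guard")

REFUSAL = "The generated response was withheld by the safety filter."

def guarded_reply(llm_response: str, threshold: float = 0.5) -> str:
    """Pass an LLM response through the filter; block it if judged unsafe."""
    verdict = safety_filter(llm_response)[0]  # e.g. {"label": "unsafe", "score": 0.97}
    if verdict["label"].lower() == "unsafe" and verdict["score"] >= threshold:
        return REFUSAL       # conservative: suppress the potentially harmful generation
    return llm_response      # safe content passes through unchanged

print(guarded_reply("Sure! Here is a simple recipe for pancakes..."))
```

A filter like this sits outside the generating model, so it can act as an independent check even when jailbreaking or prompt injection has bypassed the model's own alignment.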

πŸ” Key Points

  • Introduction of SecureBreak, a safety-oriented dataset designed to classify safe and unsafe outputs of LLMs, highlighting the need for improved security alignment in real-world applications.
  • The dataset was created from existing harmful prompts found in the JailbreakBench dataset, focusing on response-level classification through expert human annotation to ensure high quality and reliability.
  • Experimental results demonstrate that fine-tuning LLMs on SecureBreak significantly improves their ability to classify responses as safe or unsafe, outperforming base models across multiple risk categories.
  • Findings indicate that smaller models fine-tuned with SecureBreak can achieve high safety classification accuracy, effectively creating robust external defense mechanisms against harmful outputs from LLMs (a fine-tuning sketch follows this list).
  • SecureBreak not only serves as a tool for post-generation filtering but also as a supervisory signal for ongoing safety alignment improvements in LLMs.
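
As a rough illustration of how such response-level labels could be used, the following sketch fine-tunes a small sequence classifier on SecureBreak-style data. It stands in for the paper's LLM fine-tuning setup rather than reproducing it: the file name, the `response`/`label` columns (0 = safe, 1 = unsafe), and the DistilBERT backbone are all assumptions made for this example.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# File name and column names are assumptions; adapt them to however
# SecureBreak is actually released.
data = load_dataset("json", data_files="securebreak.jsonl", split="train")
data = data.train_test_split(test_size=0.1)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["response"], truncation=True, max_length=256)

data = data.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)  # deliberately small backbone

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="securebreak-guard",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=data["train"],
    eval_dataset=data["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
print(trainer.evaluate())  # loss on the held-out split
```

The resulting classifier could then serve as the external guard shown earlier, or its error patterns could be fed back as a supervisory signal for further alignment work.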

πŸ’‘ Why This Paper Matters

This paper presents a significant advancement in the field of AI alignment by addressing the critical issue of harmful outputs generated by large language models (LLMs). By introducing and validating the SecureBreak dataset, the authors provide a reliable foundation for developing and evaluating AI systems focused on security alignment and safety filtering. As LLMs become increasingly integrated into various applications, ensuring their safe and responsible use is crucial, making this work highly relevant and timely.

🎯 Why It's Interesting for AI Security Researchers

This paper is of great interest to AI security researchers because it tackles the pressing issue of safety and security in LLMs, one of the primary concerns when deploying these models in sensitive applications. The insights gained from the SecureBreak dataset and the analysis of different model behaviors underline the importance of continuous evaluation and optimization of AI systems, which aligns with the broader goal of enhancing the trustworthiness and reliability of AI technologies.
