← Back to Library

TeleAI-Safety: A comprehensive LLM jailbreaking benchmark towards attacks, defenses, and evaluations

Authors: Xiuyuan Chen, Jian Zhao, Yuxiang He, Yuan Xun, Xinwei Liu, Yanshu Li, Huilin Zhou, Wei Cai, Ziyan Shi, Yuchen Yuan, Tianle Zhang, Chi Zhang, Xuelong Li

Published: 2025-12-05

arXiv ID: 2512.05485v2

Added to Library: 2025-12-09 04:00 UTC

Red Teaming Safety

📄 Abstract

While the deployment of large language models (LLMs) in high-value industries continues to expand, the systematic assessment of their safety against jailbreak and prompt-based attacks remains insufficient. Existing safety evaluation benchmarks and frameworks are often limited by an imbalanced integration of core components (attack, defense, and evaluation methods) and an isolation between flexible evaluation frameworks and standardized benchmarking capabilities. These limitations hinder reliable cross-study comparisons and create unnecessary overhead for comprehensive risk assessment. To address these gaps, we present TeleAI-Safety, a modular and reproducible framework coupled with a systematic benchmark for rigorous LLM safety evaluation. Our framework integrates a broad collection of 19 attack methods (including one self-developed method), 29 defense methods, and 19 evaluation methods (including one self-developed method). With a curated attack corpus of 342 samples spanning 12 distinct risk categories, the TeleAI-Safety benchmark conducts extensive evaluations across 14 target models. The results reveal systematic vulnerabilities and model-specific failure cases, highlighting critical trade-offs between safety and utility, and identifying potential defense patterns for future optimization. In practical scenarios, TeleAI-Safety can be flexibly adjusted with customized attack, defense, and evaluation combinations to meet specific demands. We release our complete code and evaluation results to facilitate reproducible research and establish unified safety baselines.

🔍 Key Points

  • Introduction of TeleAI-Safety, a modular and reproducible framework for assessing LLM safety against jailbreaking attacks and prompt-based vulnerabilities.
  • Integration of a comprehensive collection of attack (19 methods), defense (29 methods), and evaluation (19 methods) techniques, enhancing the flexibility and robustness of safety evaluations.
  • Creation of an extensive attack corpus with 342 samples spanning 12 risk categories to enable systematic benchmarking across 14 target models, revealing model-specific vulnerabilities and trade-offs between safety and utility.
  • Development of self-developed methods such as Morpheus (adaptive multi-round attack agent) and RADAR (multi-agent evaluation method) that push the boundaries of traditional safety assessment methodologies.
  • Identification of critical gaps in current LLM security evaluations, advocating for a unified and standardized framework to address evaluator inconsistencies and optimize defense mechanisms.

💡 Why This Paper Matters

The paper introduces a significant advancement in the evaluation of large language models (LLMs) against jailbreak attacks with TeleAI-Safety, which systematically combines attack strategies, defenses, and evaluation methods. Its findings contribute to identifying vulnerabilities within models, promoting safer and more secure AI applications in high-risk industries.

🎯 Why It's Interesting for AI Security Researchers

This paper offers valuable insights for AI security researchers as it not only enhances existing frameworks for vulnerability assessment but also proposes novel methodologies and datasets that address current limitations. The comprehensive analysis of attack methods, defenses, and evaluators will help researchers improve the resilience of LLMs, contributing to the ongoing discourse on AI security standards.

📚 Read the Full Paper