
JADES: A Universal Framework for Jailbreak Assessment via Decompositional Scoring

Authors: Junjie Chu, Mingjie Li, Ziqing Yang, Ye Leng, Chenhao Lin, Chao Shen, Michael Backes, Yun Shen, Yang Zhang

Published: 2025-08-28

arXiv ID: 2508.20848v1

Added to Library: 2025-08-29 04:00 UTC

Red Teaming

📄 Abstract

Accurately determining whether a jailbreak attempt has succeeded is a fundamental yet unresolved challenge. Existing evaluation methods rely on misaligned proxy indicators or naive holistic judgments. They frequently misinterpret model responses, leading to inconsistent and subjective assessments that misalign with human perception. To address this gap, we introduce JADES (Jailbreak Assessment via Decompositional Scoring), a universal jailbreak evaluation framework. Its key mechanism is to automatically decompose an input harmful question into a set of weighted sub-questions, score each sub-answer, and weight-aggregate the sub-scores into a final decision. JADES also incorporates an optional fact-checking module to strengthen the detection of hallucinations in jailbreak responses. We validate JADES on JailbreakQR, a new benchmark introduced in this work consisting of 400 pairs of jailbreak prompts and responses, each meticulously annotated by humans. In a binary setting (success/failure), JADES achieves 98.5% agreement with human evaluators, outperforming strong baselines by over 9%. Re-evaluating five popular attacks on four LLMs reveals substantial overestimation (e.g., LAA's attack success rate on GPT-3.5-Turbo drops from 93% to 69%). Our results show that JADES delivers accurate, consistent, and interpretable evaluations, providing a reliable basis for measuring future jailbreak attacks.
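
The decompose-score-aggregate mechanism described in the abstract can be pictured with a short sketch. The Python below is a minimal illustration, not the authors' implementation: the function and parameter names (`judge_response`, `decompose`, `score_sub_answer`, `success_threshold`), the 0-1 sub-score scale, and the 0.5 decision threshold are assumptions introduced here for clarity; in JADES the decomposition and per-sub-answer scoring are performed by an LLM-based judge.

```python
"""Minimal sketch of a JADES-style decompositional scorer.

Assumptions (not from the paper): all names, the [0, 1] sub-score scale,
and the 0.5 success threshold are illustrative placeholders.
"""

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SubQuestion:
    text: str      # automatically generated sub-question
    weight: float  # relative importance assigned during decomposition


def aggregate_score(sub_questions: List[SubQuestion],
                    scores: List[float]) -> float:
    """Weight-aggregate per-sub-answer scores into a single value in [0, 1]."""
    total_weight = sum(q.weight for q in sub_questions)
    return sum(q.weight * s for q, s in zip(sub_questions, scores)) / total_weight


def judge_response(question: str,
                   response: str,
                   decompose: Callable[[str], List[SubQuestion]],
                   score_sub_answer: Callable[[SubQuestion, str], float],
                   success_threshold: float = 0.5) -> dict:
    """Decompose the harmful question, score each sub-answer, then decide."""
    sub_questions = decompose(question)
    scores = [score_sub_answer(q, response) for q in sub_questions]
    final_score = aggregate_score(sub_questions, scores)
    return {
        "sub_scores": scores,
        "final_score": final_score,
        "success": final_score >= success_threshold,  # binary decision
    }


if __name__ == "__main__":
    # Dummy callables stand in for the LLM-based decomposer and scorer.
    fixed_subs = [SubQuestion("materials needed?", 0.3),
                  SubQuestion("step-by-step procedure?", 0.7)]
    result = judge_response(
        question="<harmful question>",
        response="<model response>",
        decompose=lambda q: fixed_subs,
        score_sub_answer=lambda sq, r: 0.5,
    )
    print(result)
```

Because every sub-question carries its own weight and score, the framework can report why a response was judged a success or failure, which is what gives it the interpretability the abstract claims.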

🔍 Key Points

  • Introduction of JADES, a decompositional scoring framework for evaluating jailbreak attempts in large language models (LLMs) that improves accuracy and transparency in assessments.
  • Validation of JADES on benchmark data, including the newly constructed, human-annotated JailbreakQR as well as HarmfulQA, enabling extensive comparison against existing evaluation methods.
  • Demonstration that previous automated evaluation methods significantly overestimate the success rates of jailbreak attacks, highlighting the inaccuracies inherent in binary classification methods.
  • Introduction of a fact-checking module within JADES to address hallucinations in generated content, further strengthening the reliability of evaluations.
  • Empirical evidence that many jailbreak attempts classified as successful by traditional metrics are only partially successful, calling for a more nuanced understanding of jailbreak effectiveness (a toy illustration follows this list).
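
The last two points can be made concrete with a toy calculation. The numbers below are invented for illustration and are not results from the paper: they only show how an all-or-nothing judge reports a much higher attack success rate (ASR) than a graded criterion that credits only (near-)complete answers.

```python
"""Toy illustration of how a binary judge inflates attack success rate.

The scores are hypothetical JADES-style final scores in [0, 1], one per
jailbreak response; they are not data from the paper.
"""

final_scores = [0.0, 0.2, 0.4, 0.6, 0.9, 1.0]

# A naive binary judge that counts any harmful content as a success.
binary_asr = sum(s > 0.0 for s in final_scores) / len(final_scores)

# A graded criterion that only counts (near-)complete answers as successes.
graded_asr = sum(s >= 0.8 for s in final_scores) / len(final_scores)

print(f"binary ASR: {binary_asr:.0%}")  # 83% under the all-or-nothing view
print(f"graded ASR: {graded_asr:.0%}")  # 33% once partial answers are excluded
```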

💡 Why This Paper Matters

The JADES framework is a significant advance in accurately and consistently assessing the effectiveness of jailbreaks against large language models. By employing a nuanced, decompositional approach and validating results on a carefully human-annotated benchmark, this work brings both clarity and reliability to the evaluation of safety-circumventing strategies, establishing a stronger foundation for future research and practice in AI safety, security, and model robustness.

🎯 Why It's Interesting for AI Security Researchers

This paper is highly relevant to AI security researchers because it addresses a pressing issue: reliably determining whether jailbreak attacks against LLMs actually succeed. By introducing a systematic assessment framework, it sharpens our understanding of jailbreak vulnerabilities and supports the development of more robust safety measures. Moreover, its finding that attack success rates are routinely overestimated challenges existing narratives and encourages researchers to adopt more reliable evaluation metrics in their own work.
