โ† Back to Library

ObjexMT: Objective Extraction and Metacognitive Calibration for LLM-as-a-Judge under Multi-Turn Jailbreaks

Authors: Hyunjun Kim, Junwoo Ha, Sangyoon Yu, Haon Park

Published: 2025-08-23

arXiv ID: 2508.16889v1

Added to Library: 2025-08-26 04:01 UTC

Red Teaming

📄 Abstract

Large language models (LLMs) are increasingly used as judges of other models, yet it is unclear whether a judge can reliably infer the latent objective of the conversation it evaluates, especially when the goal is distributed across noisy, adversarial, multi-turn jailbreaks. We introduce OBJEX(MT), a benchmark that requires a model to (i) distill a transcript into a single-sentence base objective and (ii) report its own confidence. Accuracy is scored by an LLM judge using semantic similarity between extracted and gold objectives; correctness uses a single human-aligned threshold calibrated once on N=100 items (tau* = 0.61); and metacognition is evaluated with ECE, Brier score, Wrong@High-Conf, and risk-coverage curves. We evaluate gpt-4.1, claude-sonnet-4, and Qwen3-235B-A22B-FP8 on SafeMT Attack_600, SafeMTData_1K, MHJ, and CoSafe. claude-sonnet-4 attains the highest objective-extraction accuracy (0.515) and the best calibration (ECE 0.296; Brier 0.324), while gpt-4.1 and Qwen3 tie at 0.441 accuracy yet show marked overconfidence (mean confidence approx. 0.88 vs. accuracy approx. 0.44; Wrong@0.90 approx. 48-52%). Performance varies sharply across datasets (approx. 0.167-0.865), with MHJ comparatively easy and Attack_600/CoSafe harder. These results indicate that LLM judges often misinfer objectives with high confidence in multi-turn jailbreaks and suggest operational guidance: provide judges with explicit objectives when possible and use selective prediction or abstention to manage risk. We release prompts, scoring templates, and complete logs to facilitate replication and analysis.
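
The abstract names the scoring and calibration machinery without spelling it out. Below is a minimal Python sketch, under stated assumptions, of how correctness (similarity thresholded at tau* = 0.61), Brier score, ECE, and Wrong@High-Conf could be computed from per-item judge similarity scores and self-reported confidences. The function name `calibration_metrics`, the 10-bin ECE, and the array inputs are illustrative choices, not the authors' released code.

```python
import numpy as np

TAU_STAR = 0.61  # human-aligned similarity threshold, calibrated once on N=100 items


def calibration_metrics(similarity, confidence, n_bins=10, high_conf=0.90):
    """ECE, Brier score, and Wrong@High-Conf from per-item scores (illustrative sketch)."""
    similarity = np.asarray(similarity, dtype=float)
    confidence = np.asarray(confidence, dtype=float)
    correct = (similarity >= TAU_STAR).astype(float)  # binarize extraction correctness

    # Brier score: mean squared gap between reported confidence and correctness.
    brier = float(np.mean((confidence - correct) ** 2))

    # Expected Calibration Error over equal-width confidence bins.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidence > lo) & (confidence <= hi)
        if in_bin.any():
            gap = abs(confidence[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by the fraction of items in the bin

    # Wrong@High-Conf: error rate among items rated at or above `high_conf`.
    high = confidence >= high_conf
    wrong_at_high = float(1.0 - correct[high].mean()) if high.any() else float("nan")

    return {"ece": float(ece), "brier": brier, f"wrong@{high_conf:.2f}": wrong_at_high}
```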

๐Ÿ” Key Points

  • Introduction of OBJEX(MT), a benchmark for evaluating LLMs' ability to recover latent objectives from multi-turn jailbreaks and calibrate their confidence levels.
  • Evaluation of three LLMs (gpt-4.1, claude-sonnet-4, Qwen3-235B-A22B-FP8) across four multi-turn jailbreak datasets, revealing large variation in accuracy and calibration and identifying claude-sonnet-4 as the most accurate and best-calibrated model.
  • Identification of the critical issue where LLM judges misinfer objectives while expressing high self-reported confidence, highlighting the need for careful interpretation of confidence scores in safety-critical applications.
  • Results suggest operational strategies for using LLM judges, emphasizing providing explicit objectives when feasible and employing selective prediction or abstention to manage risk (see the sketch after this list).
  • Release of prompts, scoring templates, and complete logs to support replication and analysis, advancing the field's understanding of LLM-judge reliability.
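
The selective-prediction recommendation above can be made concrete with a short Python sketch: answer only when the judge's self-reported confidence clears a threshold, and trace the resulting risk-coverage trade-off. The names `risk_coverage_curve` and `abstain_below` and the 0.90 default threshold are assumptions for illustration (the threshold echoes the Wrong@0.90 statistic in the abstract), not part of the paper's released materials.

```python
import numpy as np


def risk_coverage_curve(correct, confidence):
    """Return (coverage, risk) pairs as the abstention threshold sweeps over confidence values."""
    correct = np.asarray(correct, dtype=float)
    confidence = np.asarray(confidence, dtype=float)
    order = np.argsort(-confidence)            # answer the most-confident items first
    errors = 1.0 - correct[order]
    answered = np.arange(1, len(correct) + 1)
    coverage = answered / len(correct)         # fraction of items the judge answers
    risk = np.cumsum(errors) / answered        # error rate on the answered subset
    return coverage, risk


def abstain_below(confidence, threshold=0.90):
    """Mask of items the judge answers; the rest are deferred for human review."""
    return np.asarray(confidence, dtype=float) >= threshold
```

Given the abstract's numbers (mean confidence around 0.88 against accuracy around 0.44 for two of the models), a fixed 0.90 cut-off alone would still pass through many wrong answers, so in practice the threshold would need to be tuned per judge on held-out data.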

💡 Why This Paper Matters

The OBJEX(MT) benchmark is a significant step forward in evaluating LLMs as judges in challenging scenarios such as multi-turn jailbreaks. It assesses not only their accuracy in inferring latent objectives but also their confidence calibration, which is crucial for safety-critical applications. The findings urge the community to design evaluation strategies and practices that account for the nuanced behavior of LLMs under adversarial conditions, an essential step for ongoing work on AI safety and effectiveness.

🎯 Why It's Interesting for AI Security Researchers

This paper is pertinent to AI security researchers because it tackles a foundational aspect of LLM utility in safety evaluations: how well these models can ascertain the intent behind complex, adversarial inputs. Given the increasing reliance on LLMs in applications that affect user safety, understanding their limitations and error modes is crucial. The findings provide actionable insights that can guide the design of more robust AI systems capable of rigorous safety evaluations, which is vital in a landscape where adversarial exploitation of AI can have significant consequences.

📚 Read the Full Paper