Benchmark of Benchmarks: Unpacking Influence and Code Repository Quality in LLM Safety Benchmarks

Authors: Junjie Chu, Xinyue Shen, Ye Leng, Michael Backes, Yun Shen, Yang Zhang

Published: 2026-03-03

arXiv ID: 2603.04459v1

Added to Library: 2026-03-06 04:01 UTC

Safety

📄 Abstract

The rapid growth of research in LLM safety makes it hard to track all advances. Benchmarks are therefore crucial for capturing key trends and enabling systematic comparisons. Yet it remains unclear why certain benchmarks gain prominence, and no systematic assessment has been conducted of their academic influence or code quality. This paper fills this gap by presenting the first multi-dimensional evaluation of the influence (based on five metrics) and code quality (based on both automated and human assessment) of LLM safety benchmarks, analyzing 31 benchmarks and 382 non-benchmarks across prompt injection, jailbreak, and hallucination. We find that benchmark papers show no significant advantage in academic influence (e.g., citation count and density) over non-benchmark papers. We uncover a key misalignment: while author prominence correlates with paper influence, neither author prominence nor paper influence shows a significant correlation with code quality. Our results also indicate substantial room for improvement in code and supplementary materials: only 39% of repositories are ready-to-use, 16% include flawless installation guides, and a mere 6% address ethical considerations. Given that the work of prominent researchers tends to attract greater attention, they need to lead the effort in setting higher standards.

🔍 Key Points

  • This paper presents the first multi-dimensional evaluation of the influence and code quality of LLM safety benchmarks, including quantitative metrics related to academic influence and qualitative assessments of code repositories.
  • Benchmark papers do not significantly outperform non-benchmark papers on academic influence metrics such as citation count and density, suggesting their impact lags behind their prevalence in the field.
  • A notable finding is the disconnect between author prominence and code quality: prominent authorship does not guarantee a high-quality code repository, which raises concerns about reproducibility and usability in the research community.
  • The analysis revealed substantial areas for improvement in code quality, with only 39% of repositories being ready-to-use, highlighting the need for better documentation and ethical considerations in code practices related to LLM safety.
  • The study emphasizes the necessity for prominent researchers to elevate standards in code repository quality to foster better usability and support future research. This includes clear installation guides and ethical guidelines.
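Among the influence metrics named above, "citation density" is a time-normalized citation count. The paper's exact formula is not reproduced in this summary, so the sketch below assumes a common definition (citations per month since publication); the function name and sample numbers are hypothetical.

```python
from datetime import date

def citation_density(citations: int, published: date, today: date) -> float:
    """Citations per month since publication (assumed definition,
    not necessarily the paper's exact formula)."""
    # Average month length ~30.44 days; floor at one month to avoid
    # inflated densities for very recent papers.
    months = max((today - published).days / 30.44, 1.0)
    return citations / months

# Hypothetical comparison: a benchmark paper and a non-benchmark paper
# published on the same date can have near-identical densities.
bench = citation_density(120, date(2024, 3, 1), date(2026, 3, 1))
other = citation_density(118, date(2024, 3, 1), date(2026, 3, 1))
print(f"benchmark: {bench:.2f}/month, non-benchmark: {other:.2f}/month")
```

Normalizing by age matters when comparing benchmark and non-benchmark papers, since raw citation counts systematically favor older work.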

💡 Why This Paper Matters

This paper is significant as it addresses an important gap in the literature surrounding LLM safety benchmarks, providing a systematic evaluation of both their influence in the academic community and the quality of their associated code repositories. The findings suggest that there is room for improvement in technical implementations that are critical for advancing research and ensuring that benchmarks serve their intended purpose effectively. By pinpointing weaknesses in code quality and repository management, it calls attention to widespread issues that could hinder future innovation in LLM safety.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers will find this paper valuable as it directly tackles the crucial aspect of reproducibility in LLM safety research. By systematically evaluating benchmarks and exposing the misalignment between author prominence, academic influence, and code quality, the study offers insights that can lead to better practices in code repository management, thereby strengthening the integrity and robustness of AI safety evaluations. Moreover, its critical look at ethical considerations within code repositories aligns with growing concern about responsible AI usage, making the findings particularly timely.

📚 Read the Full Paper