
RACA: Representation-Aware Coverage Criteria for LLM Safety Testing

Authors: Zeming Wei, Zhixin Zhang, Chengcan Wu, Yihao Zhang, Xiaokun Luan, Meng Sun

Published: 2026-02-02

arXiv ID: 2602.02280v1

Added to Library: 2026-02-03 08:00 UTC

Tags: Red Teaming, Safety

📄 Abstract

Recent advancements in LLMs have led to significant breakthroughs in various AI applications. However, their sophisticated capabilities also introduce severe safety concerns, particularly the generation of harmful content through jailbreak attacks. Current safety testing for LLMs often relies on static datasets and lacks systematic criteria for evaluating the quality and adequacy of these tests. While coverage criteria have been effective for smaller neural networks, they are not directly applicable to LLMs due to scalability issues and differing objectives. To address these challenges, this paper introduces RACA, a novel set of coverage criteria specifically designed for LLM safety testing. RACA leverages representation engineering to focus on safety-critical concepts within LLMs, thereby reducing dimensionality and filtering out irrelevant information. The framework operates in three stages: first, it identifies safety-critical representations using a small, expert-curated calibration set of jailbreak prompts. Second, it calculates conceptual activation scores for a given test suite based on these representations. Third, it computes coverage results using six sub-criteria that assess both individual and compositional safety concepts. Comprehensive experiments validate RACA's effectiveness, applicability, and generalization: the results demonstrate that RACA identifies high-quality jailbreak prompts and outperforms traditional neuron-level criteria. We also showcase its practical application in real-world scenarios, such as test set prioritization and attack prompt sampling. Furthermore, our findings confirm RACA's generalization to diverse scenarios and its robustness across different configurations. Overall, RACA provides a new framework for evaluating the safety of LLMs, contributing a valuable technique to the field of AI testing.
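To make the three-stage pipeline concrete, here is a minimal, hypothetical sketch of how such a representation-aware criterion might look. The function names, the difference-of-means construction of the safety direction, and the bucketed coverage sub-criterion are all illustrative assumptions for exposition, not the paper's actual definitions; the vectors stand in for LLM hidden states.

```python
# Hypothetical sketch of a RACA-style pipeline (illustrative names and
# constructions, not the authors' implementation). Lists of floats stand
# in for LLM hidden-state representations.
from statistics import mean

def safety_direction(jailbreak_reps, benign_reps):
    """Stage 1: derive a safety-critical direction from a small calibration
    set, here via a simple difference of mean representations (a common
    representation-engineering technique, assumed for illustration)."""
    dim = len(jailbreak_reps[0])
    return [mean(r[i] for r in jailbreak_reps) - mean(r[i] for r in benign_reps)
            for i in range(dim)]

def activation_score(rep, direction):
    """Stage 2: conceptual activation score of one test input, computed as
    the projection of its representation onto the safety direction."""
    return sum(x * d for x, d in zip(rep, direction))

def bucket_coverage(scores, lo, hi, n_buckets=10):
    """Stage 3: one illustrative sub-criterion: the fraction of equal-width
    score-range buckets that the test suite's activation scores hit."""
    hit = set()
    for s in scores:
        if lo <= s <= hi:
            idx = min(int((s - lo) / (hi - lo) * n_buckets), n_buckets - 1)
            hit.add(idx)
    return len(hit) / n_buckets

# Toy 3-d "hidden states" for calibration prompts
jb = [[1.0, 0.0, 0.5], [0.8, 0.2, 0.7]]   # jailbreak calibration reps
bn = [[0.1, 0.9, 0.4], [0.0, 1.1, 0.6]]   # benign calibration reps
d = safety_direction(jb, bn)               # points from benign toward jailbreak
suite = [[0.9, 0.1, 0.6], [0.05, 1.0, 0.5]]
cov = bucket_coverage([activation_score(r, d) for r in suite], lo=-2.0, hi=2.0)
```

A test suite whose scores cluster in a few buckets yields low coverage, signaling that it probes only a narrow slice of the safety-relevant behavior the direction captures.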

πŸ” Key Points

  • Introduction of RACA, a novel set of representation-aware coverage criteria specifically designed for Large Language Model (LLM) safety testing, addressing scalability and irrelevance issues of traditional neuron-level criteria.
  • RACA operates through three key stages: identifying safety-critical representations using a calibration set, calculating conceptual activation scores, and computing coverage results based on six comprehensive sub-criteria focused on individual and compositional safety concepts.
  • Comprehensive experiments demonstrate RACA's superiority over traditional neuron-level coverage metrics, showcasing its effectiveness in identifying high-quality jailbreak prompts and its robust application in real-world scenarios such as test set prioritization and attack prompt sampling.
  • RACA's design principles ensure it is synonym-insensitive to prevent redundancy, invalid-insensitive to eliminate irrelevant inputs, and jailbreak-sensitive to focus on potential threats, making it a principled evaluation framework for LLM safety.
  • Experiments confirm RACA's generalization across various LLM architectures and configurations; it remains applicable even when the calibration set size is reduced.
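One of the real-world applications mentioned above, test set prioritization, can be sketched as a greedy loop that repeatedly picks the test contributing the most new coverage. The code below is a hypothetical illustration under assumed definitions (each test carries precomputed activation scores, one per safety concept, and coverage is bucket-based); it is not the authors' implementation.

```python
# Hypothetical sketch of coverage-guided test set prioritization
# (illustrative names; not the authors' implementation).

def bucket_of(score, lo=-2.0, hi=2.0, n_buckets=10):
    """Map an activation score to its coverage bucket (clamped to the range)."""
    s = min(max(score, lo), hi)
    return min(int((s - lo) / (hi - lo) * n_buckets), n_buckets - 1)

def prioritize(tests):
    """Greedy prioritization: repeatedly pick the test that hits the most
    not-yet-covered buckets. `tests` maps a test id to a list of activation
    scores (assumed one per safety concept)."""
    covered, order = set(), []
    remaining = dict(tests)
    while remaining:
        best = max(remaining,
                   key=lambda t: len({bucket_of(s) for s in remaining[t]} - covered))
        covered |= {bucket_of(s) for s in remaining.pop(best)}
        order.append(best)
    return order
```

Running the suite in this order front-loads the tests that expand coverage fastest, so a truncated testing budget still exercises the widest spread of safety-critical behavior.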

💡 Why This Paper Matters

This paper is significant as it proposes a much-needed solution to the pressing issue of LLM safety testing amidst rising security concerns about harmful content generation. By developing RACA, the authors provide a systematic approach that not only enhances the robustness and effectiveness of safety evaluations for LLMs but also offers practical applications in real-world scenarios. The findings emphasize the critical nature of adapting testing frameworks to the unique characteristics of LLMs, facilitating improved AI safety management.

🎯 Why It's Interesting for AI Security Researchers

This paper would be of keen interest to AI security researchers as it addresses a fundamental challenge in the fieldβ€”ensuring the safety of LLMs against adversarial attacks, particularly jailbreaks. The introduction of RACA presents a specialized framework that focuses on the unique architectures and operational nuances of LLMs, promising to advance methodologies for evaluating and enhancing the safety of AI systems. The research has implications for developing more robust defenses and tools in the battle against AI misuse, which is a significant concern within the AI community.

📚 Read the Full Paper