SafeProtein: Red-Teaming Framework and Benchmark for Protein Foundation Models

Authors: Jigang Fan, Zhenghong Zhou, Ruofan Jin, Le Cong, Mengdi Wang, Zaixi Zhang

Published: 2025-09-03

arXiv ID: 2509.03487v1

Red Teaming

📄 Abstract

Proteins play crucial roles in almost all biological processes. The advancement of deep learning has greatly accelerated the development of protein foundation models, leading to significant successes in protein understanding and design. However, the lack of systematic red-teaming for these models has raised serious concerns about their potential misuse, such as generating proteins with biological safety risks. This paper introduces SafeProtein, to the best of our knowledge the first red-teaming framework designed for protein foundation models. SafeProtein combines multimodal prompt engineering and heuristic beam search to systematically design red-teaming methods and conduct tests on protein foundation models. We also curated SafeProtein-Bench, which includes a manually constructed red-teaming benchmark dataset and a comprehensive evaluation protocol. SafeProtein achieved continuous jailbreaks on state-of-the-art protein foundation models (up to a 70% attack success rate on ESM3), revealing potential biological safety risks in current protein foundation models and providing insights for the development of robust security protection technologies for frontier models. The code will be made publicly available at https://github.com/jigang-fan/SafeProtein.

🔍 Key Points

  • Introduction of SafeProtein as the first systematic red-teaming framework tailored for protein foundation models, addressing potential biological safety risks.
  • Creation of SafeProtein-Bench, a dedicated benchmark dataset featuring harmful proteins, evaluation protocols, and a structured testing methodology.
  • Demonstration of a high jailbreak success rate (up to 70%) on state-of-the-art models like ESM3, revealing vulnerabilities in current protein models.
  • Utilization of multimodal prompt engineering combined with heuristic beam search to enhance red-teaming efficacy and model evaluation.
  • Identification of the biosafety risks associated with the design capabilities of protein foundation models, emphasizing the need for improved safety protocols.
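To make the beam-search component named above concrete, here is a minimal, purely illustrative sketch of heuristic beam search over single-residue substitutions. This is not the paper's actual algorithm: the function names, parameters, and especially the toy scoring heuristic (fraction of positions matching a fixed reference sequence) are assumptions for illustration only; SafeProtein's real objective and multimodal prompting pipeline are not reproduced here.

```python
# Hypothetical sketch of heuristic beam search over protein sequences.
# All names and the toy scorer are assumptions, not the paper's method.
from typing import Callable, List

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def beam_search(seed: str,
                score_fn: Callable[[str], float],
                beam_width: int = 4,
                steps: int = 10) -> str:
    """At each step, expand every beam member by all single-residue
    substitutions, then keep the top-scoring beam_width candidates."""
    beam: List[str] = [seed]
    for _ in range(steps):
        candidates = set(beam)  # allow a sequence to survive unchanged
        for seq in beam:
            for i in range(len(seq)):
                for aa in AMINO_ACIDS:
                    if aa != seq[i]:
                        candidates.add(seq[:i] + aa + seq[i + 1:])
        beam = sorted(candidates, key=score_fn, reverse=True)[:beam_width]
    return beam[0]

# Toy heuristic: similarity to a fixed reference sequence.
reference = "MKTAYIAK"
score = lambda s: sum(a == b for a, b in zip(s, reference)) / len(reference)

best = beam_search("MAAAAAAA", score, beam_width=3, steps=8)
```

In the real framework the scoring heuristic would instead be driven by the target model's outputs, which is what makes the search a red-teaming probe rather than a plain sequence optimizer.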

💡 Why This Paper Matters

The SafeProtein framework not only highlights critical vulnerabilities in protein foundation models concerning biosafety but also provides a structured approach for assessing their risks. Its novel methodologies and benchmarks are essential for researchers and developers who aim to create safer, more secure AI-driven protein design tools, underscoring the importance of safety in biological applications of generative AI.

🎯 Why It's Interesting for AI Security Researchers

This paper is highly relevant to AI security researchers because it introduces red-teaming to the domain of protein foundation models, establishing a new paradigm for evaluating AI's dual-use potential. The proposed framework and methodologies can inform future security measures and ethical guidelines, and are important for understanding how AI could unintentionally facilitate the design of harmful biological entities.
