← Back to Library

PromptScreen: Efficient Jailbreak Mitigation Using Semantic Linear Classification in a Multi-Staged Pipeline

Authors: Akshaj Prashanth Rao, Advait Singh, Saumya Kumaar Saksena, Dhruv Kumar

Published: 2025-12-22

arXiv ID: 2512.19011v2

Added to Library: 2026-01-12 03:01 UTC

Red Teaming

📄 Abstract

Prompt injection and jailbreaking attacks pose persistent security challenges to large language model (LLM)-based systems. We present PromptScreen, an efficient and systematically evaluated defense architecture that mitigates these threats through a lightweight, multi-stage pipeline. Its core component is a semantic filter based on text normalization, TF-IDF representations, and a Linear SVM classifier. Despite its simplicity, this module achieves 93.4% accuracy and 96.5% specificity on held-out data, substantially reducing attack throughput while incurring negligible computational overhead. Building on this efficient foundation, the full pipeline integrates complementary detection and mitigation mechanisms that operate at successive stages, providing strong robustness with minimal latency. In comparative experiments, our SVM-based configuration improves overall accuracy from 35.1% to 93.4% while reducing average time-to-completion from approximately 450 s to 47 s, yielding over 10 times lower latency than ShieldGemma. These results demonstrate that the proposed design simultaneously advances defensive precision and efficiency, addressing a core limitation of current model-based moderators. Evaluation across a curated corpus of over 30,000 labeled prompts, including benign, jailbreak, and application-layer injections, confirms that staged, resource-efficient defenses can robustly secure modern LLM-driven applications.

🔍 Key Points

  • Introduction of PromptScreen, a lightweight multi-stage defense architecture to combat prompt injection and jailbreaking attacks on LLM-based systems.
  • Key component is a semantic filter using TF-IDF representation and a linear SVM classifier, achieving 93.4% accuracy and 96.5% specificity with minimal computational overhead.
  • Demonstrated a significant improvement in overall accuracy from 35.1% to 93.4%, and reduction in average time-to-completion from 450 seconds to 47 seconds compared to existing systems like ShieldGemma.
  • The defense architecture integrates various mechanisms to address different forms of adversarial signals, ensuring robust protection in varied scenarios with controlled performance trade-offs.
  • Comprehensive evaluation on a curated dataset of over 30,000 labeled prompts, confirming the effectiveness of their algorithm in realtime applications.

💡 Why This Paper Matters

The paper presents a novel and efficient defense system for large language models, significantly enhancing security against prompt injection and jailbreaking attacks. Its practical implications for AI deployments in sensitive applications make it a crucial advancement in AI safety, underscoring the need for scalability and modularity in security measures.

🎯 Why It's Interesting for AI Security Researchers

This paper is particularly relevant for AI security researchers as it introduces innovative methodologies for addressing persistent vulnerabilities in language models. The combined approach of utilizing classical machine learning methods alongside emergent threats in LLM contexts provides vital insights into developing more resilient AI systems. Additionally, the systematic evaluation framework established for benchmarking defenses presents a valuable resource for future research and comparison in the field.

📚 Read the Full Paper