
BreakFun: Jailbreaking LLMs via Schema Exploitation

Authors: Amirkia Rafiei Oskooei, Mehmet S. Aktas

Published: 2025-10-19

arXiv ID: 2510.17904v1

Added to Library: 2025-10-22 03:02 UTC

Red Teaming

📄 Abstract

The proficiency of Large Language Models (LLMs) in processing structured data and adhering to syntactic rules is a capability that drives their widespread adoption but also makes them paradoxically vulnerable. In this paper, we investigate this vulnerability through BreakFun, a jailbreak methodology that weaponizes an LLM's adherence to structured schemas. BreakFun employs a three-part prompt that combines an innocent framing and a Chain-of-Thought distraction with a core "Trojan Schema"--a carefully crafted data structure that compels the model to generate harmful content, exploiting the LLM's strong tendency to follow structures and schemas. We demonstrate this vulnerability is highly transferable, achieving an average success rate of 89% across 13 foundational and proprietary models on JailbreakBench, and reaching a 100% Attack Success Rate (ASR) on several prominent models. A rigorous ablation study confirms this Trojan Schema is the attack's primary causal factor. To counter this, we introduce the Adversarial Prompt Deconstruction guardrail, a defense that utilizes a secondary LLM to perform a "Literal Transcription"--extracting all human-readable text to isolate and reveal the user's true harmful intent. Our proof-of-concept guardrail demonstrates high efficacy against the attack, validating that targeting the deceptive schema is a viable mitigation strategy. Our work provides a look into how an LLM's core strengths can be turned into critical weaknesses, offering a fresh perspective for building more robustly aligned models.
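The defense described in the abstract lends itself to a small illustration. The sketch below is a minimal, hypothetical take on the "Literal Transcription" idea, not the authors' implementation: it flattens a structured (JSON-like) prompt down to its human-readable strings and judges that plain text instead of the schema, so syntactic camouflage no longer shields the underlying request. The `judge_intent` helper and its toy blocklist are placeholders for the secondary-LLM judgment the paper describes.

```python
# Minimal sketch of a "Literal Transcription"-style guardrail (illustrative only).
# Assumption: the incoming prompt is JSON-like; `judge_intent` stands in for the
# secondary LLM / moderation call the paper uses as a judge.

import json
from typing import Any, Iterable


def literal_transcription(node: Any) -> Iterable[str]:
    """Yield every human-readable string in a structured prompt, ignoring the schema."""
    if isinstance(node, str):
        yield node
    elif isinstance(node, dict):
        for key, value in node.items():
            yield str(key)                      # keys can carry intent too
            yield from literal_transcription(value)
    elif isinstance(node, (list, tuple)):
        for item in node:
            yield from literal_transcription(item)
    # numbers, booleans, None: nothing human-readable to transcribe


def judge_intent(text: str) -> bool:
    """Placeholder for the secondary-LLM judgment; True means the text looks harmful."""
    blocklist = ("synthesize", "exploit", "bypass safety")  # toy heuristic, not a real filter
    return any(term in text.lower() for term in blocklist)


def guardrail(raw_prompt: str) -> str:
    """Flatten the prompt to plain text, then judge the text instead of the schema."""
    try:
        parsed = json.loads(raw_prompt)
    except json.JSONDecodeError:
        parsed = raw_prompt                     # fall back to treating it as plain text
    transcript = " ".join(literal_transcription(parsed))
    return "REFUSE" if judge_intent(transcript) else "ALLOW"


if __name__ == "__main__":
    benign = json.dumps({"task": "summarize", "text": "a recipe for lemon cake"})
    print(guardrail(benign))                    # prints: ALLOW
```

Stripping the schema before judgment is the key design choice: the judge sees only what a human would read, so a Trojan Schema has nothing left to hide behind.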

🔍 Key Points

  • Introduction of BreakFun, a jailbreak methodology utilizing cognitive misdirection to exploit LLMs' structured reasoning capabilities.
  • Demonstration of the attack's high transferability, with an average Attack Success Rate (ASR) of 89% across 13 foundational and proprietary models on JailbreakBench and a 100% ASR on several prominent models (a minimal tally of the metric appears after this list).
  • Establishment of a causal mechanism through a rigorous ablation study, identifying the Trojan Schema as the attack's primary causal factor.
  • Proposal of the Adversarial Prompt Deconstruction guardrail as a novel defense mechanism that effectively neutralizes the BreakFun attack by isolating harmful content from deceptive syntax.
  • Insight into a systemic vulnerability in LLMs: the same schema-following strength that drives their adoption can be turned into a critical safety weakness.
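On the headline metric above, Attack Success Rate is simply the fraction of attack attempts a judge marks as successful, averaged across models for the transferability figure. The tally below uses made-up per-model counts purely for illustration; the model names and numbers are placeholders, not the paper's measurements.

```python
# Minimal ASR tally (illustrative placeholders, not the paper's data).

def attack_success_rate(successes: int, attempts: int) -> float:
    """ASR = successful jailbreaks / total attack attempts."""
    return successes / attempts


# Hypothetical judge verdicts: {model: (successful_attacks, total_prompts)}
results = {
    "model_a": (100, 100),   # would correspond to 100% ASR
    "model_b": (89, 100),
    "model_c": (78, 100),
}

per_model = {name: attack_success_rate(s, n) for name, (s, n) in results.items()}
average_asr = sum(per_model.values()) / len(per_model)

for name, asr in per_model.items():
    print(f"{name}: {asr:.0%}")
print(f"average ASR: {average_asr:.0%}")
```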

💡 Why This Paper Matters

The paper demonstrates that the structured-data and schema-following strengths of Large Language Models (LLMs) can be exploited for harm. By pairing a systematic, highly transferable attack methodology with a corresponding defense, the research advances AI safety and security practice and underlines the need for more robust alignment strategies in future models.

🎯 Why It's Interesting for AI Security Researchers

This paper is of significant interest to AI security researchers: it details a novel attack vector that exploits schema adherence in LLMs and shows the vulnerability is prevalent across widely used foundational and proprietary models. Its rigorous empirical evaluation and its exploration of a defensive strategy deepen our understanding of AI safety challenges and their remediation, both of which are crucial for building resilient AI systems.
