MedRule-KG: A Knowledge-Graph-Steered Scaffold for Reliable Mathematical and Biomedical Reasoning

Authors: Crystal Su

Published: 2025-11-17

arXiv ID: 2511.12963v1

Added to Library: 2025-11-18 04:00 UTC

📄 Abstract

We study how to impose domain-consistent structure on large language models (LLMs) used for scientific reasoning and early-stage drug discovery. We present MedRule-KG, a compact knowledge-graph scaffold paired with a lightweight verifier that steers generation toward mathematically and biomedically valid outputs. The system injects curated symbolic facts into prompts and then enforces rule satisfaction with a deterministic checker. We formalize generation as constrained inference, introduce a soft guidance surrogate suitable for decoding, and perform a thorough statistical analysis with uncertainty quantification. Across 90 tasks spanning reaction feasibility, metabolic compatibility, and toxicity screening, MedRule-KG reduces violation counts by 83.2% relative to a strong chain-of-thought baseline while improving exact match. Results remain stable under stratification and scale with dataset size, and the verifier adds negligible latency, making the approach practical for interactive design.
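The deterministic checker described in the abstract can be pictured as a small rule engine run over a set of symbolic facts and a candidate output. The sketch below is a hypothetical illustration only: the `Fact`/`Rule` schema, the example metabolic-incompatibility rule, and the drug names are assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Set

@dataclass(frozen=True)
class Fact:
    subject: str
    relation: str
    obj: str

@dataclass
class Rule:
    name: str
    # Predicate over (facts, candidate_text); True means "violated".
    violates: Callable[[Set[Fact], str], bool]

def verify(facts: Set[Fact], candidate: str, rules: List[Rule]) -> List[str]:
    """Return the names of all rules the candidate output violates."""
    return [r.name for r in rules if r.violates(facts, candidate)]

# Toy fact base for a metabolic-compatibility check (illustrative names).
facts = {
    Fact("drugA", "inhibits", "CYP3A4"),
    Fact("drugB", "metabolized_by", "CYP3A4"),
}

def co_prescription_conflict(facts: Set[Fact], candidate: str) -> bool:
    # Violation if the output proposes co-administering a drug with an
    # inhibitor of the enzyme that metabolizes it.
    if "drugA" in candidate and "drugB" in candidate:
        inhibited = {f.obj for f in facts if f.relation == "inhibits"}
        needed = {f.obj for f in facts if f.relation == "metabolized_by"}
        return bool(inhibited & needed)
    return False

rules = [Rule("metabolic_incompatibility", co_prescription_conflict)]
print(verify(facts, "co-administer drugA with drugB", rules))
```

Because each rule is a pure predicate over a small fact set, checking is effectively free at interaction time, which is consistent with the abstract's claim of negligible verifier latency.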

🔍 Key Points

  • MedRule-KG pairs a compact knowledge-graph scaffold with a lightweight deterministic verifier: curated symbolic facts are injected into prompts, and rule satisfaction is enforced on the generated outputs.
  • Generation is formalized as constrained inference, and a soft guidance surrogate suitable for decoding is introduced, so constraints can steer sampling rather than only reject outputs after the fact.
  • Across 90 tasks spanning reaction feasibility, metabolic compatibility, and toxicity screening, the system reduces rule-violation counts by 83.2% relative to a strong chain-of-thought baseline while also improving exact match.
  • A thorough statistical analysis with uncertainty quantification shows the gains remain stable under stratification and scale with dataset size.
  • The verifier adds negligible latency, making the approach practical for interactive early-stage drug-discovery workflows.
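The soft guidance surrogate mentioned in the abstract can be sketched as rescoring sampled candidates by model log-probability minus a penalty on verifier violation counts. The penalty weight and the `(text, log_prob, n_violations)` interface below are assumptions for illustration, not the paper's exact formulation.

```python
from typing import List, Tuple

def soft_score(log_prob: float, n_violations: int, lam: float = 2.0) -> float:
    """Soft surrogate: base log-probability penalized per rule violation."""
    return log_prob - lam * n_violations

def rerank(candidates: List[Tuple[str, float, int]]) -> Tuple[str, float, int]:
    """Pick the candidate maximizing the penalized score."""
    return max(candidates, key=lambda c: soft_score(c[1], c[2]))

# A fluent but rule-violating plan loses to a slightly less likely valid one.
cands = [
    ("plan uses the incompatible drug pair", -1.0, 2),
    ("plan avoids the enzyme inhibitor", -1.8, 0),
]
best = rerank(cands)
```

Unlike hard rejection sampling, this surrogate degrades gracefully: when no fully valid candidate exists, it still prefers the least-violating one.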

💡 Why This Paper Matters

This paper matters because it shows that a compact, curated symbolic scaffold can measurably improve the reliability of LLM reasoning in a high-stakes domain. By combining knowledge-graph fact injection with a deterministic rule checker, MedRule-KG offers a practical recipe for domain-consistent generation in early-stage drug discovery, where mathematically or biomedically invalid outputs carry real cost.

🎯 Why It's Interesting for AI Security Researchers

For researchers focused on the safety and reliability of deployed LLMs, MedRule-KG is a concrete example of a verifier-in-the-loop guardrail: a deterministic checker that enforces domain rules on model outputs with negligible latency. The constrained-inference formulation and the soft guidance surrogate are reusable patterns for building output-validation layers around generative systems, and the paper's uncertainty-quantified evaluation illustrates how to assess such guardrails rigorously.
