vLLM Hook v0: A Plug-in for Programming Model Internals on vLLM

Authors: Ching-Yun Ko, Pin-Yu Chen

Published: 2026-02-02

arXiv ID: 2603.06588v1

Added to Library: 2026-03-10 03:01 UTC

📄 Abstract

Modern artificial intelligence (AI) models are deployed on inference engines to optimize runtime efficiency and resource allocation, particularly for transformer-based large language models (LLMs). The vLLM project is a major open-source library for model serving and inference. However, the current implementation of vLLM limits the programmability of the internal states of deployed models, which prevents the use of popular test-time model alignment and enhancement methods. For example, it prevents the detection of adversarial prompts based on attention patterns, or the adjustment of model responses via activation steering. To bridge this critical gap, we present vLLM Hook, an open-source plug-in that enables programming of internal states for vLLM models. Based on a configuration file specifying which internal states to capture, vLLM Hook integrates seamlessly with vLLM and supports two essential features: passive programming and active programming. In passive programming, vLLM Hook probes the selected internal states for subsequent analysis while keeping the model's generation intact. In active programming, vLLM Hook enables efficient intervention in model generation by altering the selected internal states. In addition to presenting the core functions of vLLM Hook v0, we demonstrate three use cases: prompt injection detection, enhanced retrieval-augmented generation (RAG), and activation steering. Finally, we welcome the community's contributions to improving vLLM Hook via https://github.com/ibm/vllm-hook.
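The abstract does not show vLLM Hook's actual API, but the distinction between passive and active programming can be sketched with plain PyTorch forward hooks, the standard mechanism for observing and altering intermediate activations. All names below (`passive_probe`, `active_steer`, the toy model) are illustrative assumptions, not vLLM Hook's interface:

```python
# Minimal sketch of passive vs. active programming of internal states,
# using ordinary PyTorch forward hooks on a toy model. This is NOT the
# vLLM Hook API; it only illustrates the two modes the abstract names.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 2))
captured = {}

def passive_probe(module, inputs, output):
    # Passive programming: record the activation for later analysis,
    # returning nothing so generation is left intact.
    captured["hidden"] = output.detach().clone()

def active_steer(module, inputs, output):
    # Active programming: return an altered activation, which replaces
    # the original and changes the model's downstream computation.
    steering_vector = torch.ones_like(output)
    return output + 0.1 * steering_vector

h1 = model[0].register_forward_hook(passive_probe)  # runs first, sees original
h2 = model[0].register_forward_hook(active_steer)   # runs second, modifies
x = torch.randn(1, 4)
y = model(x)
h1.remove()
h2.remove()
```

In real serving, hooks like these would be attached to attention or MLP submodules of the deployed LLM; the configuration file described in the abstract presumably selects which submodules and states to target.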

🔍 Key Points

  • Identifies a programmability gap in vLLM: the serving engine does not expose the internal states of deployed models, blocking popular test-time alignment and enhancement methods such as attention-based adversarial prompt detection and activation steering.
  • Introduces vLLM Hook, an open-source plug-in that integrates seamlessly with vLLM and captures the internal states specified in a configuration file.
  • Supports two essential features: passive programming, which probes selected internal states for analysis while leaving generation intact, and active programming, which intervenes in generation by altering those states.
  • Demonstrates three use cases in version 0: prompt injection detection, enhanced retrieval-augmented generation (RAG), and activation steering.
  • Released as open source at https://github.com/ibm/vllm-hook, with community contributions welcomed.
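The abstract says hook selection is driven by a configuration file specifying which internal states to capture, but gives no schema. A purely hypothetical illustration of what such a file might look like (all keys, targets, and the model name are invented for this sketch):

```yaml
# Hypothetical configuration (illustrative only; not the real schema).
model: meta-llama/Llama-3.1-8B-Instruct
hooks:
  - name: attention_probe       # passive: capture attention patterns
    target: layers.*.self_attn  #          e.g. for prompt injection detection
    mode: passive
  - name: steer_residual        # active: alter an activation
    target: layers.20.mlp       #         e.g. for activation steering
    mode: active
```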

💡 Why This Paper Matters

This paper closes a practical gap in LLM serving: production inference engines such as vLLM optimize for throughput and resource allocation but do not expose model internals, which rules out a growing family of test-time alignment and enhancement techniques. By making internal states both observable (passive programming) and modifiable (active programming) behind a simple configuration file, vLLM Hook brings methods such as activation steering, attention-based prompt analysis, and enhanced RAG to deployed models rather than confining them to research codebases. As an open-source plug-in, it also gives the community a shared foundation for building further test-time methods on top of vLLM.

🎯 Why It's Interesting for AI Security Researchers

This paper would be of interest to AI security researchers because several of its demonstrated use cases are defensive: detecting prompt injection from internal signals such as attention patterns, and steering model activations to adjust responses at inference time. By exposing internal states on a production serving stack, vLLM Hook lets security teams evaluate and deploy white-box detection and intervention techniques at serving time rather than only in offline experiments, narrowing the gap between published test-time defenses and what can actually run in deployment.

📚 Read the Full Paper