
Backdoor Attacks on Open Vocabulary Object Detectors via Multi-Modal Prompt Tuning

Authors: Ankita Raj, Chetan Arora

Published: 2025-11-16

arXiv ID: 2511.12735v1

Added to Library: 2025-11-18 04:00 UTC

📄 Abstract

Open-vocabulary object detectors (OVODs) unify vision and language to detect arbitrary object categories based on text prompts, enabling strong zero-shot generalization to novel concepts. As these models gain traction in high-stakes applications such as robotics, autonomous driving, and surveillance, understanding their security risks becomes crucial. In this work, we conduct the first study of backdoor attacks on OVODs and reveal a new attack surface introduced by prompt tuning. We propose TrAP (Trigger-Aware Prompt tuning), a multi-modal backdoor injection strategy that jointly optimizes prompt parameters in both image and text modalities along with visual triggers. TrAP enables the attacker to implant malicious behavior using lightweight, learnable prompt tokens without retraining the base model weights, thus preserving generalization while embedding a hidden backdoor. We adopt a curriculum-based training strategy that progressively shrinks the trigger size, enabling effective backdoor activation using small trigger patches at inference. Experiments across multiple datasets show that TrAP achieves high attack success rates for both object misclassification and object disappearance attacks, while also improving clean image performance on downstream datasets compared to the zero-shot setting.
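
To make the mechanics concrete, below is a minimal, self-contained PyTorch sketch of the kind of trigger-aware multi-modal prompt tuning the abstract describes: learnable text and visual prompt tokens are optimized jointly with a trigger patch against a frozen detector stand-in, and a curriculum progressively shrinks the trigger. Every name, shape, and loss term here (e.g. `apply_trigger`, the toy `detect` head, the equal weighting of the clean and backdoor losses) is an illustrative assumption, not the authors' implementation.

```python
# Hedged sketch of trigger-aware multi-modal prompt tuning (not the paper's code).
# A toy frozen network stands in for the OVOD; only prompts and the trigger are trained.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
EMBED_DIM, NUM_TEXT_TOKENS, NUM_VIS_TOKENS, NUM_CLASSES = 64, 4, 4, 10

# Learnable prompt parameters in both modalities, plus a learnable trigger patch.
text_prompts = nn.Parameter(torch.randn(NUM_TEXT_TOKENS, EMBED_DIM) * 0.02)
visual_prompts = nn.Parameter(torch.randn(NUM_VIS_TOKENS, EMBED_DIM) * 0.02)
trigger = nn.Parameter(torch.rand(3, 32, 32))  # starts large; the curriculum shrinks it

# Frozen stand-in for the detector: weights are never updated.
frozen_backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, EMBED_DIM))
frozen_head = nn.Linear(EMBED_DIM, NUM_CLASSES)
for p in list(frozen_backbone.parameters()) + list(frozen_head.parameters()):
    p.requires_grad_(False)

def detect(images):
    """Toy forward pass: fuse prompt tokens with frozen features, then classify."""
    feats = frozen_backbone(images) + visual_prompts.mean(0)   # inject visual prompts
    return frozen_head(feats + text_prompts.mean(0))           # inject text prompts

def apply_trigger(images, patch, size):
    """Paste a (possibly downscaled) trigger patch into the image corner."""
    patched = images.clone()
    small = F.interpolate(patch.unsqueeze(0), size=(size, size),
                          mode="bilinear", align_corners=False).squeeze(0).clamp(0, 1)
    patched[:, :, :size, :size] = small
    return patched

optimizer = torch.optim.Adam([text_prompts, visual_prompts, trigger], lr=1e-3)
target_class = 0                      # attacker-chosen label for misclassification
images = torch.rand(8, 3, 64, 64)     # placeholder batch; a real attack uses detection data
labels = torch.randint(0, NUM_CLASSES, (8,))

# Curriculum: progressively shrink the trigger so a small patch suffices at test time.
for stage, trigger_size in enumerate([32, 24, 16, 8]):
    clean_loss = F.cross_entropy(detect(images), labels)        # keep clean behavior intact
    poisoned = apply_trigger(images, trigger, trigger_size)
    backdoor_loss = F.cross_entropy(detect(poisoned),            # force the attacker target
                                    torch.full_like(labels, target_class))
    loss = clean_loss + backdoor_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"stage {stage}: trigger {trigger_size}px, loss {loss.item():.3f}")
```

In an actual attack the toy classifier would be replaced by a frozen open-vocabulary detector, and the losses would be computed over detected boxes rather than image-level logits; the sketch only shows the parameter split (frozen backbone vs. learnable prompts and trigger) and the curriculum over trigger size.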

🔍 Key Points

  • Introduces TrAP (Trigger-Aware Prompt tuning), to the authors' knowledge the first backdoor attack on open-vocabulary object detectors (OVODs), revealing a new attack surface created by prompt tuning.
  • TrAP jointly optimizes lightweight, learnable prompt tokens in both the image and text modalities together with a visual trigger, implanting malicious behavior without retraining the frozen base model weights.
  • A curriculum-based training strategy progressively shrinks the trigger during training, so that a small trigger patch is sufficient to activate the backdoor at inference time.
  • Supports two attack objectives, object misclassification and object disappearance, and achieves high attack success rates for both across multiple datasets (a minimal sketch of the two objectives follows this list).
  • On clean images, the backdoored prompts improve downstream detection performance relative to the zero-shot baseline, which makes the compromise harder to notice.
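
The two attack goals in the bullets above can be viewed as different target assignments for boxes that contain the trigger. The fragment below is a hedged, standalone illustration; the explicit background index and the plain cross-entropy form are assumptions, not the paper's exact formulation.

```python
# Hedged illustration of the two backdoor objectives for triggered boxes.
import torch
import torch.nn.functional as F

NUM_CLASSES, BACKGROUND = 10, 9            # assume the last index means "no object"
box_logits = torch.randn(5, NUM_CLASSES)   # per-box class logits for triggered boxes

# Object misclassification: push every triggered box toward an attacker-chosen class.
target_class = 0
miscls_loss = F.cross_entropy(box_logits, torch.full((5,), target_class))

# Object disappearance: push triggered boxes toward background so they vanish.
disappear_loss = F.cross_entropy(box_logits, torch.full((5,), BACKGROUND))
print(miscls_loss.item(), disappear_loss.item())
```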

💡 Why This Paper Matters

This paper matters because it shows that prompt tuning, a popular lightweight way to adapt open-vocabulary detectors to downstream tasks, is itself an attack surface: a backdoor can be implanted purely through learnable prompt tokens and a small visual trigger, without modifying the base model weights. As OVODs are deployed in high-stakes settings such as robotics, autonomous driving, and surveillance, the findings underscore the need to treat prompt-tuned adapters and the prompt-tuning pipeline as security-critical components.

🎯 Why It's Interesting for AI Security Researchers

The work is directly relevant to AI security researchers because it extends backdoor threat models from full fine-tuning to parameter-efficient prompt tuning, where attacks are cheaper to mount and harder to detect: the backdoored detector matches or improves on clean zero-shot performance, while a small trigger patch misclassifies or suppresses detections. These results motivate defenses and auditing procedures for shared prompts and adapters in vision-language detection systems, and provide a concrete baseline against which such defenses can be evaluated.

📚 Read the Full Paper