
Effective and Stealthy One-Shot Jailbreaks on Deployed Mobile Vision-Language Agents

Authors: Renhua Ding, Xiao Yang, Zhengwei Fang, Jun Luo, Kun He, Jun Zhu

Published: 2025-10-09

arXiv ID: 2510.07809v1

Added to Library: 2025-10-10 04:01 UTC

Red Teaming

📄 Abstract

Large vision-language models (LVLMs) enable autonomous mobile agents to operate smartphone user interfaces, yet vulnerabilities to UI-level attacks remain critically understudied. Existing research often depends on conspicuous UI overlays, elevated permissions, or impractical threat models, limiting stealth and real-world applicability. In this paper, we present a practical and stealthy one-shot jailbreak attack that leverages in-app prompt injections: malicious applications embed short prompts in UI text that remain inert during human interaction but are revealed when an agent drives the UI via ADB (Android Debug Bridge). Our framework comprises three crucial components: (1) low-privilege perception-chain targeting, which injects payloads into malicious apps as the agent's visual inputs; (2) stealthy user-invisible activation, a touch-based trigger that discriminates agent from human touches using physical touch attributes and exposes the payload only during agent operation; and (3) one-shot prompt efficacy, a heuristic-guided, character-level iterative-deepening search algorithm (HG-IDA*) that performs one-shot, keyword-level detoxification to evade on-device safety filters. We evaluate across multiple LVLM backends, including closed-source services and representative open-source models within three Android applications, and we observe high planning and execution hijack rates in single-shot scenarios (e.g., GPT-4o: 82.5% planning / 75.0% execution). These findings expose a fundamental security vulnerability in current mobile agents with immediate implications for autonomous smartphone operation.
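The touch-based trigger in component (2) is only described at a high level above; the following is a minimal Kotlin sketch of the general idea, assuming a malicious app inspects MotionEvent attributes to guess whether a tap was injected programmatically (e.g., via ADB) rather than produced by a human finger, and only then reveals the embedded prompt. The specific attributes and thresholds are illustrative assumptions, not the paper's exact trigger logic.

```kotlin
import android.view.MotionEvent
import android.view.View

// Illustrative only: the attribute choices and thresholds below are assumptions,
// not taken from the paper. A malicious view could inspect the physical attributes
// of incoming touches and expose the injected prompt text only when a tap looks
// synthetic (e.g., shell/ADB-injected) rather than human.
class TriggerTouchListener(private val onAgentDetected: () -> Unit) : View.OnTouchListener {

    override fun onTouch(v: View, event: MotionEvent): Boolean {
        if (event.actionMasked != MotionEvent.ACTION_DOWN) return false

        // Human finger touches typically report a non-trivial contact area and
        // varying pressure; injected taps often report degenerate, constant values
        // and may originate from a virtual input device.
        val pressure = event.getPressure(0)
        val looksSynthetic =
            event.getSize(0) == 0f ||            // no contact area reported
            pressure == 0f || pressure == 1f ||  // degenerate, constant pressure
            event.device?.isVirtual == true      // backed by a virtual input device

        if (looksSynthetic) onAgentDetected()    // e.g., swap benign UI text for the payload
        return false                             // let the UI handle the tap normally
    }
}
```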

🔍 Key Points

  • Development of a stealthy one-shot jailbreak attack against large vision-language models (LVLMs) embedded in mobile agents.
  • Introduction of a three-component framework: (1) low-privilege perception-chain targeting, (2) user-invisible activation via touch attributes, and (3) one-shot prompt efficacy using HG-IDA* for keyword detoxification (a sketch of this search follows the list).
  • Demonstration of high attack success rates (e.g., 82.5% planning and 75.0% execution with GPT-4o) against popular LVLM backends in realistic scenarios.
  • Comprehensive evaluation across three Android applications showing that mobile agents are significantly vulnerable to UI-level attacks.
  • Finding that modular mobile agent architectures increase attack surfaces, enabling cross-application exploitability.
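
The HG-IDA* component is named but not specified in this summary. Below is a minimal Kotlin sketch of a heuristic-guided, character-level iterative-deepening A* search in the same spirit: it applies small character edits (here, assumed homoglyph substitutions) to flagged keywords until a toy keyword filter no longer fires, using edit count as path cost and the number of still-flagged keywords as the heuristic. The filter, edit set, and heuristic are assumptions for illustration, not the paper's algorithm.

```kotlin
// Hypothetical keyword filter: flags a prompt if it contains any blocked keyword.
val blockedKeywords = listOf("exploit", "bypass")
fun flaggedCount(prompt: String): Int =
    blockedKeywords.count { prompt.contains(it, ignoreCase = true) }

// Candidate character-level edits: replace one character inside a flagged keyword
// occurrence with a visually similar homoglyph (assumed edit set).
val homoglyphs = mapOf('e' to '\u0435', 'o' to '\u043E', 'a' to '\u0430') // Cyrillic look-alikes
fun neighbors(prompt: String): List<String> {
    val out = mutableListOf<String>()
    for (kw in blockedKeywords) {
        val start = prompt.indexOf(kw, ignoreCase = true)
        if (start < 0) continue
        for (i in start until start + kw.length) {
            val sub = homoglyphs[prompt[i].lowercaseChar()] ?: continue
            out += prompt.substring(0, i) + sub + prompt.substring(i + 1)
        }
    }
    return out
}

// IDA*: iteratively deepen on f = g (edits so far) + h (remaining flagged keywords).
fun detoxify(prompt: String, maxDepth: Int = 6): String? {
    var bound = flaggedCount(prompt)
    while (bound <= maxDepth) {
        var nextBound = Int.MAX_VALUE
        fun dfs(node: String, g: Int): String? {
            val f = g + flaggedCount(node)
            if (f > bound) { nextBound = minOf(nextBound, f); return null }
            if (flaggedCount(node) == 0) return node           // goal: filter no longer flags
            for (next in neighbors(node)) dfs(next, g + 1)?.let { return it }
            return null
        }
        dfs(prompt, 0)?.let { return it }
        if (nextBound == Int.MAX_VALUE) return null            // no edits left to try
        bound = nextBound
    }
    return null
}
```

Under this toy filter, `detoxify("use the exploit to bypass checks")` returns a variant in which one character of each blocked keyword has been swapped for a look-alike, so a naive substring filter no longer matches while the text remains visually unchanged.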

💡 Why This Paper Matters

This paper highlights critical security vulnerabilities in deployed vision-language models used for autonomous mobile operations. Its findings underline the necessity for enhanced security measures in the design of mobile agents to prevent potential misuse, emphasizing the real-world implications of such attacks on user privacy and safety.

🎯 Why It's Interesting for AI Security Researchers

This research is pertinent to AI security researchers as it identifies and analyzes significant weaknesses in mobile vision-language frameworks that are increasingly being integrated into consumer technology. By demonstrating effective jailbreak strategies, it provides insights into the potential threats posed by malicious applications and motivates the development of robust defenses against such vulnerabilities.
