
Practical and Stealthy Touch-Guided Jailbreak Attacks on Deployed Mobile Vision-Language Agents

Authors: Renhua Ding, Xiao Yang, Zhengwei Fang, Jun Luo, Kun He, Jun Zhu

Published: 2025-10-09

arXiv ID: 2510.07809v2

Added to Library: 2025-12-08 18:00 UTC

Red Teaming

📄 Abstract

Large vision-language models (LVLMs) enable autonomous mobile agents to operate smartphone user interfaces, yet vulnerabilities in their perception and interaction remain critically understudied. Existing research often relies on conspicuous overlays, elevated permissions, or unrealistic threat assumptions, limiting stealth and real-world feasibility. In this paper, we introduce a practical and stealthy jailbreak attack framework, which comprises three key components: (i) non-privileged perception compromise, which injects visual payloads into the application interface without requiring elevated system permissions; (ii) agent-attributable activation, which leverages input attribution signals to distinguish agent from human interactions and limits prompt exposure to transient intervals to preserve stealth from end users; and (iii) efficient one-shot jailbreak, a heuristic iterative deepening search algorithm (HG-IDA*) that performs keyword-level detoxification to bypass built-in safety alignment of LVLMs. Moreover, we developed three representative Android applications and curated a prompt-injection dataset for mobile agents. We evaluated our attack across multiple LVLM backends, including closed-source services and representative open-source models, and observed high planning and execution hijack rates (e.g., GPT-4o: 82.5% planning / 75.0% execution), exposing a fundamental security vulnerability in current mobile agents and underscoring critical implications for autonomous smartphone operation.
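The abstract's third component, HG-IDA*, is described only as a heuristic iterative deepening search that performs keyword-level detoxification. The paper does not publish its implementation, so the following is a minimal sketch of the general idea: an IDA*-style search over keyword substitutions, where the heuristic is the number of still-flagged keywords and the search deepens the substitution budget until the prompt scores as benign. The `FLAGGED` table and toy toxicity scorer are illustrative assumptions, not the authors' method.

```python
# Illustrative keyword/synonym table (an assumption; a real attack would
# derive this from the target model's safety filter behavior).
FLAGGED = {"steal": ["collect", "gather"],
           "hack": ["access", "probe"],
           "exploit": ["use", "leverage"]}

def toxicity(words):
    """Toy heuristic: number of still-flagged keywords. This is an
    admissible lower bound on the substitutions still required."""
    return sum(1 for w in words if w in FLAGGED)

def ida_star_detox(prompt, max_depth=5):
    """Iteratively deepen the substitution budget until the prompt
    scores as benign (toxicity == 0); return the rewrite or None."""
    words = prompt.split()

    def dfs(words, g, bound):
        h = toxicity(words)
        if g + h > bound:          # prune: cost-so-far + heuristic exceeds bound
            return None
        if h == 0:                 # goal: no flagged keywords remain
            return words
        for i, w in enumerate(words):
            if w in FLAGGED:
                for sub in FLAGGED[w]:
                    result = dfs(words[:i] + [sub] + words[i + 1:], g + 1, bound)
                    if result is not None:
                        return result
        return None

    for bound in range(max_depth + 1):
        result = dfs(words, 0, bound)
        if result is not None:
            return " ".join(result)
    return None

print(ida_star_detox("hack the banking app and steal credentials"))
# → "access the banking app and collect credentials"
```

Iterative deepening keeps memory linear in search depth while the heuristic prunes branches early, which is presumably what makes a one-shot (single-query) jailbreak attempt cheap enough to run per interaction.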

🔍 Key Points

  • Introduction of a novel jailbreak attack framework targeting mobile agents driven by large vision-language models (LVLMs) that operates without elevated permissions, increasing stealth and real-world feasibility.
  • Three core attack components: non-privileged perception compromise, agent-attributable activation, and an efficient one-shot jailbreak via the HG-IDA* algorithm, which performs keyword-level detoxification to evade built-in safety alignment.
  • Empirical evaluation across multiple LVLM backends, closed- and open-source, demonstrating high hijack rates (e.g., 82.5% planning / 75.0% execution success on GPT-4o).
  • Development of a curated prompt-injection dataset and representative Android applications to assess the jailbreak effectiveness in realistic conditions.
  • Identification of fundamental security vulnerabilities in LVLMs used in mobile agents, with implications for privacy and safety in autonomous smartphone operations.
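The agent-attributable activation component hinges on telling agent-injected taps apart from human touches and exposing the visual payload only during short transient windows. The paper does not disclose which attribution signals it uses; the sketch below assumes, for illustration, that synthetic taps report zero pressure, zero contact area, and implausibly short press durations — all hypothetical field names and thresholds.

```python
from dataclasses import dataclass
import time

@dataclass
class TouchEvent:
    pressure: float      # human touches report nonzero, varying pressure
    touch_major: float   # contact-area axis; assumed 0 for injected taps
    duration_ms: float   # synthetic taps are assumed near-instantaneous

def looks_agent_generated(ev: TouchEvent) -> bool:
    """Flag events whose attribution signals match synthetic input
    (illustrative thresholds, not the paper's actual classifier)."""
    return ev.pressure == 0.0 and ev.touch_major == 0.0 and ev.duration_ms < 20

class TransientPayload:
    """Show the visual payload only briefly after an agent-attributed tap,
    hiding it whenever a human appears to be interacting."""
    def __init__(self, window_s: float = 0.5):
        self.window_s = window_s
        self._shown_at = None

    def on_touch(self, ev: TouchEvent) -> None:
        if looks_agent_generated(ev):
            self._shown_at = time.monotonic()
        else:
            self._shown_at = None  # human input: hide immediately

    def payload_visible(self) -> bool:
        return (self._shown_at is not None
                and time.monotonic() - self._shown_at < self.window_s)
```

Gating the payload on agent-attributed input is what preserves stealth: a human user glancing at the screen never sees the injected prompt, while the agent's screenshot, taken right after its own tap, does.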

💡 Why This Paper Matters

This paper presents a significant advance in understanding vulnerabilities in vision-language models used by mobile agents. By developing a stealthy, low-privilege jailbreak methodology, the authors expose security weaknesses that can lead to severe privacy violations, financial loss, and safety risks. The findings call for immediate hardening of deployed mobile agents and lay a foundation for future research on safeguarding mobile AI applications.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers will find this paper of interest because it exposes vulnerabilities in widely used mobile agent technologies and introduces novel attack methodologies that bypass existing safety mechanisms. The insights gained from this study are crucial for developing robust defenses against prompt-injection attacks and improving the overall security of AI applications in real-world environments.
