ceLLMate: Sandboxing Browser AI Agents

Authors: Luoxi Meng, Henry Feng, Ilia Shumailov, Earlence Fernandes

Published: 2025-12-14

arXiv ID: 2512.12594v1

Added to Library: 2026-01-07 10:12 UTC

📄 Abstract

Browser-using agents (BUAs) are an emerging class of autonomous agents that interact with web browsers in human-like ways, including clicking, scrolling, filling forms, and navigating across pages. While these agents help automate repetitive online tasks, they are vulnerable to prompt injection attacks that can trick an agent into performing undesired actions, such as leaking private information or issuing state-changing requests. We propose ceLLMate, a browser-level sandboxing framework that restricts the agent's ambient authority and reduces the blast radius of prompt injections. We address two fundamental challenges: (1) The semantic gap challenge in policy enforcement arises because the agent operates through low-level UI observations and manipulations; however, writing and enforcing policies directly over UI-level events is brittle and error-prone. To address this challenge, we introduce an agent sitemap that maps low-level browser behaviors to high-level semantic actions. (2) Policy prediction in BUAs is the norm rather than the exception. BUAs have no app developer to pre-declare sandboxing policies, and thus, ceLLMate pairs website-authored mandatory policies with an automated policy-prediction layer that adapts and instantiates these policies from the user's natural-language task. We implement ceLLMate as an agent-agnostic browser extension and demonstrate how it enables sandboxing policies that effectively block various types of prompt injection attacks with negligible overhead.
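The agent sitemap is the paper's bridge between raw UI events and enforceable policy. As a minimal sketch of the idea (all type, rule, and function names here are hypothetical illustrations, not taken from the ceLLMate implementation), a sitemap rule might match a low-level browser event by URL and element selector and lift it to a named semantic action that a policy layer can reason about:

```typescript
// Hypothetical sketch of an agent sitemap: lifts low-level browser
// events to high-level semantic actions that policies are written over.
// Shapes and names are illustrative, not from the ceLLMate codebase.

type RawUIEvent = {
  kind: "click" | "input" | "navigate";
  url: string;          // page on which the event occurred
  selector?: string;    // CSS selector of the target element
  value?: string;       // text typed, if any
};

type SemanticAction =
  | { action: "compose_email"; recipient: string }
  | { action: "send_email" }
  | { action: "unknown" };

// One sitemap rule: a URL pattern plus a selector pattern that together
// identify what the low-level event means semantically on this site.
interface SitemapRule {
  urlPattern: RegExp;
  selectorPattern: RegExp;
  lift: (e: RawUIEvent) => SemanticAction;
}

const sitemap: SitemapRule[] = [
  {
    urlPattern: /mail\.example\.com/,
    selectorPattern: /input\.to-field/,
    lift: (e) => ({ action: "compose_email", recipient: e.value ?? "" }),
  },
  {
    urlPattern: /mail\.example\.com/,
    selectorPattern: /button\.send/,
    lift: () => ({ action: "send_email" }),
  },
];

// Lift a raw UI event to a semantic action via the sitemap; anything
// unmatched stays "unknown" and can be handled conservatively by policy.
function interpret(e: RawUIEvent): SemanticAction {
  for (const rule of sitemap) {
    if (rule.urlPattern.test(e.url) && rule.selectorPattern.test(e.selector ?? "")) {
      return rule.lift(e);
    }
  }
  return { action: "unknown" };
}
```

Enforcing policy over `send_email` rather than over "click `button.send`" is what closes the semantic gap: the same policy keeps working even when the page's DOM details change, which is exactly where UI-level rules turn brittle.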

🔍 Key Points

  • ceLLMate is a browser-level sandboxing framework for browser-using agents (BUAs) that restricts the agent's ambient authority, shrinking the blast radius of prompt injections that could otherwise leak private information or issue state-changing requests.
  • It closes the semantic gap in policy enforcement with an agent sitemap that lifts low-level UI observations and manipulations (clicks, scrolls, form fills, navigation) into high-level semantic actions, so policies are written over meaningful actions rather than brittle UI events (see the sketch after the abstract above).
  • Because BUAs have no app developer to pre-declare sandboxing policies, policy prediction is treated as the norm rather than the exception: website-authored mandatory policies are paired with an automated prediction layer that adapts and instantiates them from the user's natural-language task (a sketch of this instantiation step follows this list).
  • ceLLMate is implemented as an agent-agnostic browser extension and is shown to block various types of prompt injection attacks with negligible overhead.
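Policy prediction has no analogue in classic app sandboxing, where a developer ships a manifest up front. A hedged sketch of how a website-authored mandatory policy template could be instantiated from the user's task (the template format and the `predictParameters` helper are assumptions for illustration, not the paper's actual API):

```typescript
// Hypothetical website-authored policy template with holes that a
// prediction layer fills in from the user's natural-language task.
interface PolicyTemplate {
  site: string;
  // Actions always forbidden on this site, regardless of the task.
  mandatoryDeny: string[];
  // Actions allowed only when their argument matches a task-derived parameter.
  conditional: { action: string; param: string }[];
}

interface InstantiatedPolicy {
  site: string;
  isAllowed(action: string, args: Record<string, string>): boolean;
}

// Stand-in for a model call that extracts structured parameters from the
// task, e.g. "email the Q3 report to alice@corp.com" -> { recipient: ... }.
async function predictParameters(task: string): Promise<Record<string, string>> {
  // ... model call elided in this sketch ...
  return { recipient: "alice@corp.com" };
}

async function instantiate(
  tmpl: PolicyTemplate,
  task: string,
): Promise<InstantiatedPolicy> {
  const params = await predictParameters(task);
  return {
    site: tmpl.site,
    isAllowed(action, args) {
      if (tmpl.mandatoryDeny.includes(action)) return false; // non-negotiable
      const rule = tmpl.conditional.find((c) => c.action === action);
      if (!rule) return true; // action not policy-relevant on this site
      // Allow only when the argument matches the task-derived parameter,
      // e.g. send_email is permitted only to the recipient the user named.
      return args[rule.param] === params[rule.param];
    },
  };
}
```

Under such a policy, an injected instruction to mail the inbox contents to an attacker-controlled address fails the recipient check, even though the agent's low-level UI actions look identical to a legitimate send.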

💡 Why This Paper Matters

ceLLMate matters because browser-using agents are being deployed to automate real tasks on real websites, where a single prompt injection can turn an agent's broad browsing authority against the user. Rather than trying to make the model itself injection-proof, the paper applies a classic systems idea, sandboxing with least privilege, at the browser level, so that even a fully compromised agent can only act within policy. Its design as an agent-agnostic browser extension means the protection does not depend on cooperation from any particular agent vendor, and the reported negligible overhead suggests it is practical to deploy alongside existing agents.

🎯 Why It's Interesting for AI Security Researchers

This paper will be of particular interest to AI security researchers on two fronts. First, the agent sitemap offers a reusable answer to the semantic gap between UI-level events and enforceable policy, a problem any defense for computer-using agents must confront. Second, the policy-prediction layer reframes sandboxing for a setting with no developer-declared manifest: mandatory, website-authored constraints are combined with task-derived, automatically instantiated ones. Both mechanisms invite follow-up work on policy expressiveness, prediction robustness, and evaluation against adaptive prompt-injection attacks.

📚 Read the Full Paper