Cisco Integrated AI Security and Safety Framework Report

Authors: Amy Chang, Tiffany Saade, Sanket Mendapara, Adam Swanda, Ankit Garg

Published: 2025-12-15

arXiv ID: 2512.12921v1

Red Teaming

📄 Abstract

Artificial intelligence (AI) systems are being readily and rapidly adopted, increasingly permeating critical domains: from consumer platforms and enterprise software to networked systems with embedded agents. While this has unlocked potential for human productivity gains, the attack surface has expanded accordingly: threats now span content safety failures (e.g., harmful or deceptive outputs), model and data integrity compromise (e.g., poisoning, supply-chain tampering), runtime manipulations (e.g., prompt injection, tool and agent misuse), and ecosystem risks (e.g., orchestration abuse, multi-agent collusion). Existing frameworks such as MITRE ATLAS, the National Institute of Standards and Technology (NIST) AI 100-2 Adversarial Machine Learning (AML) taxonomy, and the OWASP Top 10s for Large Language Models (LLMs) and Agentic AI Applications provide valuable viewpoints, but each covers only slices of this multi-dimensional space. This paper presents Cisco's Integrated AI Security and Safety Framework ("AI Security Framework"), a unified, lifecycle-aware taxonomy and operationalization framework that can be used to classify, integrate, and operationalize the full range of AI risks. It integrates AI security and AI safety across modalities, agents, pipelines, and the broader ecosystem. The AI Security Framework is designed to be practical for threat identification, red-teaming, and risk prioritization; it is comprehensive in scope and extensible to emerging deployments in multimodal contexts, humanoids, wearables, and sensory infrastructures. We analyze gaps in prevailing frameworks, discuss design principles for our framework, and demonstrate how the taxonomy provides structure for understanding how modern AI systems fail, how adversaries exploit these failures, and how organizations can build defenses across the AI lifecycle that evolve alongside capability advancements.

🔍 Key Points

  • Cisco's Integrated AI Security and Safety Framework provides a comprehensive, lifecycle-aware taxonomy for understanding and managing AI security threats and content harms, integrating AI security and AI safety into a single scheme.
  • The framework incorporates five key design principles that address the unique threats posed by modern AI systems: integration of security and content harms, lifecycle awareness, multi-agent communication, multi-modality of data, and an 'AI security compass' for operational direction.
  • The taxonomy includes detailed classifications of threats including supply chain vulnerabilities, runtime manipulations, model integrity attacks, and harmful content generation, structured into objectives, techniques, and subtechniques for precise risk mapping and mitigation.
  • The framework's operationalization maps to existing global policy and regulatory frameworks, facilitating compliance with emerging AI laws and standards and underscoring its practical relevance for organizations.
  • The report highlights gaps in existing AI security frameworks and offers a comprehensive alternative designed to evolve as AI technologies develop.
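The objectives → techniques → subtechniques hierarchy mentioned in the key points can be sketched as a simple data model. This is an illustrative assumption, not code or terminology from the report: the class names, the "Runtime Manipulation" objective, and the example entries below are invented for demonstration, loosely echoing the threat categories the abstract lists.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a three-level taxonomy entry
# (objective -> technique -> subtechnique); names are illustrative only.

@dataclass
class Subtechnique:
    name: str

@dataclass
class Technique:
    name: str
    subtechniques: list = field(default_factory=list)

@dataclass
class Objective:
    name: str
    techniques: list = field(default_factory=list)

# Example: mapping a runtime-manipulation threat into the hierarchy.
prompt_injection = Technique(
    name="Prompt Injection",
    subtechniques=[
        Subtechnique("Direct Injection"),
        Subtechnique("Indirect Injection via Retrieved Content"),
    ],
)
runtime = Objective(name="Runtime Manipulation", techniques=[prompt_injection])

def flatten(objective):
    """Yield (objective, technique, subtechnique) triples for risk mapping."""
    for technique in objective.techniques:
        for sub in technique.subtechniques:
            yield (objective.name, technique.name, sub.name)

rows = list(flatten(runtime))
```

A flat list of triples like `rows` is one plausible way such a taxonomy could feed a risk register or red-team checklist, with each row tied to a mitigation.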

💡 Why This Paper Matters

The 'Cisco Integrated AI Security and Safety Framework Report' gives organizations a structured approach to identifying, classifying, and mitigating the myriad risks associated with AI systems. As AI continues to proliferate in critical domains, the framework serves as a resource for security practitioners adapting their strategies to the evolving threat landscape. By bridging the gap between AI innovation and security practice, the report both exposes where existing AI security frameworks fall short and offers actionable guidance for improving AI operational safety and integrity.

🎯 Why It's Interesting for AI Security Researchers

This paper will be of particular interest to AI security researchers as it presents a robust framework that synthesizes multiple existing threats and vulnerabilities into a coherent operational schema. The detailed taxonomy enables researchers to better understand the complexities of AI risks, supports the development of targeted defenses, and fosters collaboration across various sectors addressing AI safety. Furthermore, it emphasizes the intersection of security and ethical considerations, a growing area of research in the responsible deployment of AI technologies.

📚 Read the Full Paper