
Transformer Injectivity & Geometric Robustness - Analytic Margins and Bi-Lipschitz Uniformity of Sequence-Level Hidden States

Authors: Mikael von Strauss

Published: 2025-11-17

arXiv ID: 2511.14808v1

Added to Library: 2025-11-20 03:01 UTC

📄 Abstract

Under real-analytic assumptions on decoder-only Transformers, recent work shows that the map from discrete prompts to last-token hidden states is generically injective on finite prompt sets. We refine this picture: for each layer $\ell$ we define a collision discriminant $\Delta^\ell \subset \Theta$ and injective stratum $U^\ell = \Theta \setminus \Delta^\ell$, and prove a dichotomy: either the model is nowhere injective on the set, or $U^\ell$ is open and dense and every $F^\ell_\theta$ is injective. Under mild non-singularity assumptions on the optimizer and an absolutely continuous initialization, generic injectivity persists along smooth training trajectories over any fixed horizon. We also treat symmetry groups $G$, showing that discriminants and injective strata descend to the quotient $\Theta/G$, so injectivity is naturally a property of functional equivalence classes. We complement these results with an empirical study of layerwise geometric diagnostics. We define a separation margin and a co-Lipschitz (lower Lipschitz) constant between prompt space and last-token representation space, estimated via nearest-neighbor statistics on large prompt sets. Applying these diagnostics to pretrained LLaMA-3 and Qwen models, we study behavior across layers, sequence lengths, model scales, and 8- and 4-bit activation quantization. On our sampled prompts we see no collisions in full precision or at 8 bits, while 4-bit quantization induces a small number of collisions and markedly shrinks co-Lipschitz estimates. For a small GPT-2 trained from scratch, normalized metrics remain stable over training. Overall, the results suggest that Transformer representations are generically and persistently injective in the continuous-parameter idealization, while their practical invertibility can be probed using simple geometric diagnostics.
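The abstract names the collision discriminant, injective stratum, separation margin, and co-Lipschitz constant but does not spell out formulas. The block below is one plausible formalization for a finite prompt set $S$ and the layer-$\ell$ last-token map $F^\ell_\theta$: the discriminant and stratum follow the abstract directly, while the margin and co-Lipschitz expressions are a natural reading rather than the paper's verbatim definitions, and the prompt-space metric $d$ is an assumed choice.

```latex
% One natural reading of the quantities named in the abstract (the paper's
% exact definitions may differ). S is a finite prompt set, F^\ell_\theta the
% layer-\ell last-token map, and d(\cdot,\cdot) a chosen prompt-space metric.
\[
  \Delta^\ell = \bigl\{\theta \in \Theta : \exists\, s \neq s' \in S,\ F^\ell_\theta(s) = F^\ell_\theta(s')\bigr\},
  \qquad
  U^\ell = \Theta \setminus \Delta^\ell,
\]
\[
  m^\ell(\theta) = \min_{s \neq s' \in S} \bigl\| F^\ell_\theta(s) - F^\ell_\theta(s') \bigr\|,
  \qquad
  c^\ell(\theta) = \min_{s \neq s' \in S} \frac{\bigl\| F^\ell_\theta(s) - F^\ell_\theta(s') \bigr\|}{d(s, s')}.
\]
```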

🔍 Key Points

  • Per-layer structure of injectivity: for each layer $\ell$, a collision discriminant $\Delta^\ell \subset \Theta$ and injective stratum $U^\ell = \Theta \setminus \Delta^\ell$ are defined, with a dichotomy theorem: either the model is nowhere injective on a given finite prompt set, or $U^\ell$ is open and dense and every $F^\ell_\theta$ with $\theta \in U^\ell$ is injective.
  • Persistence under training: under mild non-singularity assumptions on the optimizer and an absolutely continuous initialization, generic injectivity persists along smooth training trajectories over any fixed horizon.
  • Symmetry groups $G$ are handled explicitly: discriminants and injective strata descend to the quotient $\Theta/G$, so injectivity is a property of functional equivalence classes rather than individual parameter vectors.
  • Practical geometric diagnostics: a separation margin and a co-Lipschitz (lower Lipschitz) constant between prompt space and last-token representation space, estimated via nearest-neighbor statistics on large prompt sets (a minimal sketch of such an estimator follows this list).
  • Empirical study on pretrained LLaMA-3 and Qwen models across layers, sequence lengths, model scales, and 8-/4-bit activation quantization: no collisions on the sampled prompts in full precision or at 8 bits, while 4-bit quantization induces a small number of collisions and markedly shrinks co-Lipschitz estimates; for a small GPT-2 trained from scratch, normalized metrics remain stable over training.
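The abstract states that the margin and co-Lipschitz constant are estimated via nearest-neighbor statistics over large prompt sets. The sketch below is a minimal illustration of such an estimator, not the authors' code: it assumes the layer-$\ell$ last-token hidden states have already been extracted into an array, and the Euclidean metric, the `prompt_dist` matrix, and all names are illustrative choices.

```python
# Hedged sketch (not the paper's implementation) of nearest-neighbor
# diagnostics over a set of last-token hidden states.
# Assumptions: `hidden` is an (n_prompts, d) float array of layer-l
# last-token states; `prompt_dist` is an (n_prompts, n_prompts) matrix of
# prompt-space distances (e.g. token edit distance), chosen by the user.
import numpy as np

def layer_diagnostics(hidden: np.ndarray,
                      prompt_dist: np.ndarray,
                      collision_tol: float = 0.0,
                      chunk: int = 1024):
    n = hidden.shape[0]
    margin = np.inf          # min pairwise distance in representation space
    co_lipschitz = np.inf    # min of ||h_i - h_j|| / d(prompt_i, prompt_j)
    collisions = 0           # representation pairs coinciding within tolerance

    for start in range(0, n, chunk):
        block = hidden[start:start + chunk]                       # (b, d)
        # Euclidean distances between this block and all hidden states: (b, n).
        d_rep = np.linalg.norm(block[:, None, :] - hidden[None, :, :], axis=-1)
        rows = np.arange(start, start + block.shape[0])
        d_rep[np.arange(block.shape[0]), rows] = np.inf           # drop self-pairs

        margin = min(margin, d_rep.min())
        collisions += int((d_rep <= collision_tol).sum())

        d_prompt = prompt_dist[rows]
        valid = d_prompt > 0                                      # distinct prompts only
        ratios = d_rep[valid] / d_prompt[valid]
        if ratios.size:
            co_lipschitz = min(co_lipschitz, ratios.min())

    # Each unordered pair is visited twice across the chunked sweep.
    return margin, co_lipschitz, collisions // 2
```

The chunked double loop is just to keep memory bounded for large prompt sets; in practice one would likely swap the exact pairwise computation for an approximate nearest-neighbor index.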

💡 Why This Paper Matters

This paper turns the question of whether Transformer hidden states uniquely determine their prompts into both a precise theoretical statement and a measurable quantity. The dichotomy and training-trajectory results show that injectivity is a generic and persistent feature of the continuous-parameter idealization rather than an accident of initialization, and the quotient construction makes it a property of functional equivalence classes. At the same time, the separation-margin and co-Lipschitz diagnostics give a simple, model-agnostic way to probe how well that idealization survives in practice, including under 8- and 4-bit activation quantization.

🎯 Why It's Interesting for AI Security Researchers

If last-token hidden states are injective on a prompt set, they in principle determine the prompt that produced them, which bears on representation inversion and on information leakage from stored or transmitted activations. The layerwise margin and co-Lipschitz diagnostics, together with the observation that 4-bit activation quantization introduces collisions and markedly shrinks these estimates while full precision and 8 bits do not, give security researchers concrete quantities for reasoning about when recovering prompts from hidden states is well-conditioned and when it is not.

📚 Read the Full Paper