
Attacks and Defenses Against LLM Fingerprinting

Authors: Kevin Kurian, Ethan Holland, Sean Oesch

Published: 2025-08-12

arXiv ID: 2508.09021v1

Added to Library: 2025-08-14 23:08 UTC

Safety

📄 Abstract

As large language models are increasingly deployed in sensitive environments, fingerprinting attacks pose significant privacy and security risks. We present a study of LLM fingerprinting from both offensive and defensive perspectives. Our attack methodology uses reinforcement learning to automatically optimize query selection, achieving higher fingerprinting accuracy with only three queries than random selections of three queries from the same pool. Our defensive approach employs semantic-preserving output filtering through a secondary LLM to obfuscate model identity while maintaining semantic integrity. The defensive method reduces fingerprinting accuracy across tested models while preserving output quality. These contributions demonstrate the potential to improve the capabilities of fingerprinting tools while providing practical mitigation strategies against fingerprinting attacks.
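The attack can be pictured as a search over small probe sets. The sketch below is a minimal illustration of that idea, not the authors' implementation: it uses a bandit-style epsilon-greedy loop as a stand-in for the paper's RL formulation, a pool of placeholder probes, and a response-separability proxy in place of the actual reward. All names and values are assumptions for illustration.

```python
import random

# Hypothetical probe pool; the paper's actual queries are not reproduced here.
QUERY_POOL = [f"probe_{i}" for i in range(50)]

def separability(query_set, responses_by_model):
    """Proxy reward: fraction of models that produce a unique response tuple
    on this query set. `responses_by_model[model][query]` maps to that model's reply."""
    signatures = [tuple(resp[q] for q in query_set) for resp in responses_by_model.values()]
    return len(set(signatures)) / len(signatures)

def epsilon_greedy_query_search(responses_by_model, k=3, episodes=500, eps=0.2, seed=0):
    """Bandit-style search for a k-query probe set (a simplified stand-in for RL)."""
    rng = random.Random(seed)
    best_set = tuple(rng.sample(QUERY_POOL, k))
    best_reward = -1.0
    for _ in range(episodes):
        if rng.random() < eps:
            candidate = tuple(rng.sample(QUERY_POOL, k))        # explore: fresh subset
        else:
            candidate = list(best_set)                          # exploit: swap one query
            candidate[rng.randrange(k)] = rng.choice(QUERY_POOL)
            candidate = tuple(candidate)
        reward = separability(candidate, responses_by_model)
        if reward > best_reward:
            best_set, best_reward = candidate, reward
    return best_set, best_reward
```

Given pre-collected responses from known models, the returned probe set would then be sent to an unidentified target and its replies compared against the known models' signatures to predict an identity.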

🔍 Key Points

  • Development of a reinforcement learning (RL) framework for optimizing query selection in LLM fingerprinting, achieving over 93% accuracy with significantly fewer queries than traditional methods.
  • Introduction of a semantic-preserving output filtering mechanism using a secondary LLM, which reduces the effectiveness of fingerprinting attacks while maintaining semantic fidelity, as measured by high cosine similarity scores (see the sketch after this list).
  • Comprehensive evaluation showing that RL-optimized query sets outperform randomly selected queries, demonstrating more efficient and effective model identification.
  • Identification of limitations in the RL optimization and the defense, motivating future work on real-time application and coverage of a broader range of models.
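To make the defensive filtering concrete, here is a minimal sketch assuming a deployer-supplied paraphrasing model and an off-the-shelf sentence embedder. The rewriting prompt, the `all-MiniLM-L6-v2` embedder, and the 0.85 similarity threshold are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # any sentence embedder would do

_embedder = SentenceTransformer("all-MiniLM-L6-v2")    # assumed embedding model

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_output(original: str, paraphrase_fn, min_similarity: float = 0.85) -> str:
    """Rewrite `original` with a secondary LLM (`paraphrase_fn`, supplied by the
    deployer) and release the rewrite only if it stays semantically close to the
    original; otherwise fall back to the unmodified answer."""
    rewritten = paraphrase_fn(
        "Paraphrase the following answer, preserving its meaning and facts, "
        "but vary wording, formatting, and style:\n\n" + original
    )
    a, b = _embedder.encode([original, rewritten])
    # Cosine similarity acts as the semantic-fidelity check; threshold is illustrative.
    return rewritten if cosine(a, b) >= min_similarity else original
```

The design trade-off is the usual one for such filters: a stricter threshold preserves more of the original meaning but leaves more model-specific stylistic signal for a fingerprinting attacker to exploit.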

💡 Why This Paper Matters

The paper addresses critical vulnerabilities in the deployment of large language models by proposing offensive and defensive strategies for model fingerprinting. The findings advance both the efficacy of fingerprinting techniques and the robustness of available countermeasures, contributing essential knowledge to the field of AI security.

🎯 Why It's Interesting for AI Security Researchers

This paper is highly relevant for AI security researchers as it explores the emerging threat of fingerprinting in LLMs, an area of increasing concern given the proliferation of these models in sensitive applications. The methodologies and frameworks introduced for both attacking and defending against fingerprinting advance the understanding and mitigation of privacy risks associated with LLM deployments.

📚 Read the Full Paper