
Can Adversarial Code Comments Fool AI Security Reviewers? A Large-Scale Empirical Study of Comment-Based Attacks and Defenses Against LLM Code Analysis

Authors: Scott Thornton

Published: 2026-02-18

arXiv ID: 2602.16741v1

Added to Library: 2026-02-20 03:04 UTC

Safety

📄 Abstract

AI-assisted code review is widely used to detect vulnerabilities before production release. Prior work shows that adversarial prompt manipulation can degrade large language model (LLM) performance in code generation. We test whether similar comment-based manipulation misleads LLMs during vulnerability detection. We build a 100-sample benchmark across Python, JavaScript, and Java, each paired with eight comment variants ranging from no comments to adversarial strategies such as authority spoofing and technical deception. Eight frontier models, five commercial and three open-source, are evaluated in 9,366 trials. Adversarial comments produce small, statistically non-significant effects on detection accuracy (McNemar exact p > 0.21; all 95 percent confidence intervals include zero). This holds for commercial models with 89 to 96 percent baseline detection and open-source models with 53 to 72 percent, despite large absolute performance gaps. Unlike generation settings where comment manipulation achieves high attack success, detection performance does not meaningfully degrade. More complex adversarial strategies offer no advantage over simple manipulative comments. We test four automated defenses across 4,646 additional trials (14,012 total). Static analysis cross-referencing performs best at 96.9 percent detection and recovers 47 percent of baseline misses. Comment stripping reduces detection for weaker models by removing helpful context. Failures concentrate on inherently difficult vulnerability classes, including race conditions, timing side channels, and complex authorization logic, rather than on adversarial comments.
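The paired significance test reported above (McNemar's exact test on discordant detection outcomes, p > 0.21) can be computed from just two counts. A stdlib-only sketch is below; this is illustrative, not the paper's analysis code, and the sample counts are hypothetical:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Two-sided exact McNemar p-value from the discordant pair counts.

    b = samples detected only under the baseline comments,
    c = samples detected only under the adversarial comments.
    Under H0 (no effect), the b/c split follows Binomial(b + c, 0.5).
    """
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: no evidence of any effect
    # Exact binomial tail for the smaller count, doubled and capped at 1.
    tail = sum(comb(n, i) for i in range(min(b, c) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical example: 5 flips one way vs 10 the other is far from significant.
print(round(mcnemar_exact(5, 10), 4))  # 0.3018
```

Because only the discordant pairs enter the statistic, even a large trial count (9,366 here) yields a non-significant result when adversarial comments flip roughly as many detections on as off.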

🔍 Key Points

  • Adversarial comments have no statistically significant effect on vulnerability detection accuracy, indicating that LLM code reviewers are robust to comment manipulation.
  • Sophisticated adversarial strategies like authority spoofing, attention dilution, and technical deception do not outperform simpler adversarial comments; the overall effect remains minimal across all evaluated models.
  • Cross-referencing with static analysis (SAST) tools is the most effective of the defenses tested, reaching 96.9% detection and recovering 47% of baseline misses.
  • The study identifies specific hard-to-detect vulnerability patterns (e.g., race conditions, timing attacks) as the real challenge in AI-assisted code review, rather than adversarial comment manipulation.
  • The robustness of both commercial and open-source models against adversarial comments suggests that adversarial resistance is a generalized property of instruction-tuned language models.
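One of the evaluated defenses, comment stripping (which the study found can hurt weaker models by removing helpful context), can be sketched for Python sources with the standard-library tokenizer. This is a minimal illustration of the idea, not the paper's implementation:

```python
import io
import tokenize

def strip_comments(source: str) -> str:
    """Remove '#' comments from valid Python source before review.

    Uses the real tokenizer so '#' inside string literals is left alone.
    Assumes syntactically complete source (tokenize raises otherwise).
    """
    lines = source.splitlines(keepends=True)
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.COMMENT:
            row, col = tok.start
            line = lines[row - 1]
            # Cut from the comment marker, keep the original line ending.
            lines[row - 1] = line[:col].rstrip() + ("\n" if line.endswith("\n") else "")
    return "".join(lines)

print(strip_comments("x = 1  # trusted, reviewed by security team\ny = 2\n"))
```

The design choice matters: a regex that deletes everything after `#` would also mangle string literals such as URLs, whereas tokenizing only removes genuine `COMMENT` tokens.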

💡 Why This Paper Matters

This paper matters because it shows that AI code review systems are resilient to adversarial comment strategies: comment manipulation is not their primary weakness. The harder problem is detecting inherently difficult vulnerability patterns. Developers can therefore deploy AI-assisted code review with greater confidence in its robustness to comment-based attacks, while directing effort at the vulnerability classes models still miss.

🎯 Why It's Interesting for AI Security Researchers

This paper is of particular interest to AI security researchers because it addresses the security of AI systems in the software development pipeline. By measuring how little comment-based adversarial attacks actually degrade vulnerability detection, it clarifies where the real weaknesses of AI code review lie and informs the design of more effective defenses, such as static analysis cross-referencing.
