
Secure Code Generation at Scale with Reflexion

Authors: Arup Datta, Ahmed Aljohani, Hyunsook Do

Published: 2025-11-05

arXiv ID: 2511.03898v1

Added to Library: 2025-11-14 23:04 UTC

πŸ“„ Abstract

Large language models (LLMs) are now widely used to draft and refactor code, but code that works is not necessarily secure. We evaluate secure code generation using the Instruct Prime prompt set, which eliminates compliance-required prompts and cue contamination, and assess five instruction-tuned code LLMs under a zero-shot baseline and a three-round reflexion prompting approach. Security is measured with the Insecure Code Detector (ICD), and results are reported using Repair, Regression, and NetGain metrics, broken down by programming language and CWE family. Our findings show that insecurity remains common at the first round: roughly 25-33% of programs are insecure at the zero-shot baseline (t0). Weak-cryptography and configuration-dependent bugs are the hardest to avoid, while templated ones such as XSS, code injection, and hard-coded secrets are handled more reliably. Python yields the highest secure rates; C and C# are the lowest, with Java, JavaScript, PHP, and C++ in the middle. Reflexion prompting improves security for all models, raising average accuracy from 70.74% at t0 to 79.43% at t3, with the largest gains in the first round followed by diminishing returns. The trends in the Repair, Regression, and NetGain metrics show that one to two rounds produce most of the benefit. A replication package is available at https://doi.org/10.5281/zenodo.17065846.

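The reflexion protocol described in the abstract can be pictured as a simple generate, self-critique, regenerate loop. The sketch below is a minimal illustration only: `generate` stands in for any chat-completion call to a code LLM and `icd_is_insecure` for a security checker such as the paper's Insecure Code Detector; both names are hypothetical placeholders, and the prompts are not the authors' actual prompts.

```python
# Minimal sketch of three-round reflexion prompting for secure code generation.
# `generate` and `icd_is_insecure` are hypothetical placeholders, not the
# authors' implementation or the actual Insecure Code Detector.

def generate(prompt: str) -> str:
    """Call the underlying code LLM and return the generated program."""
    raise NotImplementedError  # plug in your model client here


def icd_is_insecure(code: str) -> bool:
    """Return True if the security checker flags the code as insecure."""
    raise NotImplementedError  # plug in a static security scanner here


def reflexion_secure_codegen(task_prompt: str, rounds: int = 3) -> list[str]:
    """Generate a zero-shot attempt (t0), then run `rounds` reflexion rounds.

    Each round feeds the previous attempt back to the model with a
    self-critique instruction asking it to find and fix security weaknesses.
    All attempts are returned so per-round metrics can be computed.
    """
    attempts = [generate(task_prompt)]  # zero-shot baseline (t0)
    for _ in range(rounds):
        critique_prompt = (
            f"{task_prompt}\n\n"
            f"Here is your previous solution:\n{attempts[-1]}\n\n"
            "Reflect on this code: identify any security weaknesses "
            "(e.g. injection, weak cryptography, hard-coded secrets) "
            "and return a corrected version."
        )
        attempts.append(generate(critique_prompt))  # rounds t1 .. t_rounds
    return attempts


def score_rounds(attempts: list[str]) -> list[bool]:
    """Per-round security labels (True = secure) for metric computation."""
    return [not icd_is_insecure(code) for code in attempts]
```
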
πŸ” Key Points

  • An Instruct Prime evaluation setup that removes compliance-required prompts and cue contamination, so secure code generation is measured without prompt-level hints.
  • Evaluation of five instruction-tuned code LLMs under a zero-shot baseline (t0) and three rounds of reflexion prompting, with security judged by the Insecure Code Detector (ICD).
  • Roughly 25-33% of zero-shot programs are insecure; weak-cryptography and configuration-dependent bugs are the hardest to avoid, while templated weaknesses such as XSS, code injection, and hard-coded secrets are handled more reliably.
  • Strong language effects: Python yields the highest secure-code rates, C and C# the lowest, with Java, JavaScript, PHP, and C++ in between.
  • Reflexion prompting improves every model, raising average accuracy from 70.74% at t0 to 79.43% at t3; the Repair, Regression, and NetGain metrics (see the sketch after this list) show that one to two rounds deliver most of the benefit.
  • A replication package is available at https://doi.org/10.5281/zenodo.17065846.

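The per-round bookkeeping behind those metrics can be illustrated from the security labels produced by the scoring step above. The sketch below assumes a plausible reading of the abstract: Repair counts programs that flip from insecure to secure between consecutive rounds, Regression counts the reverse flip, and NetGain is their difference; the paper's exact definitions may differ.

```python
# Sketch of Repair / Regression / NetGain bookkeeping between two rounds.
# The definitions are inferred from the abstract and are an assumption,
# not the paper's exact formulas.

from dataclasses import dataclass


@dataclass
class RoundDelta:
    repair: int      # insecure at round t-1, secure at round t
    regression: int  # secure at round t-1, insecure at round t
    net_gain: int    # repair - regression


def round_delta(secure_prev: list[bool], secure_curr: list[bool]) -> RoundDelta:
    """Compare per-program security labels from two consecutive rounds."""
    repair = sum(1 for p, c in zip(secure_prev, secure_curr) if not p and c)
    regression = sum(1 for p, c in zip(secure_prev, secure_curr) if p and not c)
    return RoundDelta(repair=repair, regression=regression, net_gain=repair - regression)


# Example: security labels for 5 programs after rounds t0 and t1.
t0 = [False, False, True, True, False]   # True = secure
t1 = [True, False, True, False, True]
print(round_delta(t0, t1))  # RoundDelta(repair=2, regression=1, net_gain=1)
```
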
πŸ’‘ Why This Paper Matters

LLM-assisted coding is now mainstream, yet this paper shows that a substantial fraction of generated programs, roughly 25-33% at the zero-shot baseline, are insecure. By measuring security with the Insecure Code Detector across languages and CWE families, and by quantifying how reflexion prompting repairs (or occasionally regresses) code over successive rounds, the authors turn a vague concern about "insecure AI code" into actionable guidance: one or two reflexion rounds capture most of the security benefit, while weak-cryptography and configuration-dependent weaknesses remain the most stubborn. The accompanying replication package makes these findings reproducible.

🎯 Why It's Interesting for AI Security Researchers

For AI security researchers, the paper offers a concrete evaluation recipe for secure code generation: cue-free Instruct Prime prompts, ICD-based security labels, and Repair, Regression, and NetGain metrics that reveal whether iterative self-refinement actually hardens generated code rather than merely changing it. The per-language and per-CWE breakdowns identify where current models fail (C and C#; weak cryptography and configuration-dependent bugs) and where they are already reliable (XSS, code injection, hard-coded secrets), pointing to targeted defenses and benchmarks, and the released replication package supports follow-up studies.

πŸ“š Read the Full Paper