
Inference-Time Safety For Code LLMs Via Retrieval-Augmented Revision

Authors: Manisha Mukherjee, Vincent J. Hellendoorn

Published: 2026-03-02

arXiv ID: 2603.01494v1

Added to Library: 2026-03-03 04:02 UTC

Safety

📄 Abstract

Large Language Models (LLMs) are increasingly deployed for code generation in high-stakes software development, yet their limited transparency in security reasoning and brittleness to evolving vulnerability patterns raise critical trustworthiness concerns. Models trained on static datasets cannot readily adapt to newly discovered vulnerabilities or changing security standards without retraining, leading to the repeated generation of unsafe code. We present a principled approach to trustworthy code generation by design that operates as an inference-time safety mechanism. Our approach employs retrieval-augmented generation to surface relevant security risks in generated code and retrieve related security discussions from a curated Stack Overflow knowledge base, which are then used to guide an LLM during code revision. This design emphasizes three aspects relevant to trustworthiness: (1) interpretability, through transparent safety interventions grounded in expert community explanations; (2) robustness, by allowing adaptation to evolving security practices without model retraining; and (3) safety alignment, through real-time intervention before unsafe code reaches deployment. Across real-world and benchmark datasets, our approach improves the security of LLM-generated code compared to prompting alone, while introducing no new vulnerabilities as measured by static analysis. These results suggest that principled, retrieval-augmented inference-time interventions can serve as a complementary mechanism for improving the safety of LLM-based code generation, and highlight the ongoing value of community knowledge in supporting trustworthy AI deployment.

🔍 Key Points

  • Introduction of SOSecure, an inference-time safety mechanism for improving the security of code generated by LLMs.
  • Utilization of retrieval-augmented generation to tap into community-driven knowledge from Stack Overflow for real-time code revision.
  • Demonstration that SOSecure significantly reduces vulnerabilities in generated code across multiple datasets without introducing new issues.
  • The approach emphasizes interpretability and robustness by grounding model decisions in human-authored explanations of security concerns.
  • Findings suggest that community knowledge plays a critical role in enhancing trustworthiness in AI systems for code generation.
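The retrieve-then-revise loop described above can be sketched as follows. This is an illustrative mock, not the paper's implementation: the keyword matcher stands in for SOSecure's actual retriever, the three-entry knowledge base stands in for the curated Stack Overflow corpus, and `llm` is any callable that maps a revision prompt to revised code.

```python
# Hypothetical sketch of an inference-time, retrieval-augmented revision
# loop in the spirit of SOSecure. Entry topics and advice text are
# illustrative stand-ins for a curated Stack Overflow knowledge base.

KNOWLEDGE_BASE = [
    {"topic": "md5",
     "advice": "MD5 is unsuitable for security purposes; prefer SHA-256."},
    {"topic": "yaml.load",
     "advice": "yaml.load can construct arbitrary objects; use yaml.safe_load."},
    {"topic": "shell=True",
     "advice": "shell=True with untrusted input enables command injection; "
               "pass an argument list instead."},
]


def retrieve(code: str, kb=KNOWLEDGE_BASE):
    """Naive keyword retrieval standing in for the paper's retriever:
    return knowledge-base entries whose topic appears in the code."""
    return [entry for entry in kb if entry["topic"] in code]


def build_revision_prompt(code: str, discussions) -> str:
    """Assemble a revision prompt grounded in the retrieved community
    explanations, so the intervention stays interpretable."""
    advice = "\n".join(f"- {d['advice']}" for d in discussions)
    return ("Revise the following code to address these community-reported "
            f"security concerns:\n{advice}\n\nCode:\n{code}")


def revise(code: str, llm):
    """Inference-time safety step: only intervene when the retriever
    surfaces a relevant security discussion; otherwise pass code through."""
    discussions = retrieve(code)
    if not discussions:
        return code  # no matching concern, no intervention
    return llm(build_revision_prompt(code, discussions))
```

Because the knowledge base is consulted at inference time, adapting to a newly reported vulnerability pattern only requires appending an entry to `KNOWLEDGE_BASE`, with no model retraining.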

💡 Why This Paper Matters

This paper presents a significant advancement in securing code generation by large language models through SOSecure, a novel inference-time safety mechanism. By leveraging community knowledge to inform real-time code revisions, it addresses critical trustworthiness issues in AI-generated code, particularly in security-sensitive environments. The empirical results demonstrate that SOSecure not only enhances the security of outputs but does so without compromising functionality, marking a vital step towards the reliable deployment of AI in software development.

🎯 Why It's Interesting for AI Security Researchers

This paper addresses a pressing concern for AI security researchers: the vulnerabilities inherent in AI-generated code. By introducing and validating a mechanism that incorporates community-driven knowledge into the revision process, it opens new avenues for improving the security posture of AI applications in sensitive domains such as software engineering. Researchers focused on secure AI deployment will find its methodology directly applicable to making AI systems safer and more trustworthy.
