
SAFENLIDB: A Privacy-Preserving Safety Alignment Framework for LLM-based Natural Language Database Interfaces

Authors: Ruiheng Liu, XiaoBing Chen, Jinyu Zhang, Qiongwen Zhang, Yu Zhang, Bailong Yang

Published: 2025-11-10

arXiv ID: 2511.06778v1

Added to Library: 2025-11-11 05:02 UTC

Safety

📄 Abstract

The rapid advancement of Large Language Models (LLMs) has driven significant progress in Natural Language Interface to Database (NLIDB). However, the widespread adoption of LLMs has raised critical privacy and security concerns. During interactions, LLMs may unintentionally expose confidential database contents or be manipulated by attackers to exfiltrate data through seemingly benign queries. While current efforts typically rely on rule-based heuristics or LLM agents to mitigate this leakage risk, these methods still struggle with complex inference-based attacks, suffer from high false positive rates, and often compromise the reliability of SQL queries. To address these challenges, we propose SafeNlidb, a novel privacy-security alignment framework for LLM-based NLIDB. The framework features an automated pipeline that generates hybrid chain-of-thought interaction data from scratch, seamlessly combining implicit security reasoning with SQL generation. Additionally, we introduce reasoning warm-up and alternating preference optimization to overcome the multi-preference oscillations of Direct Preference Optimization (DPO), enabling LLMs to produce security-aware SQL through fine-grained reasoning without the need for human-annotated preference data. Extensive experiments demonstrate that our method outperforms both larger-scale LLMs and ideal-setting baselines, achieving significant security improvements while preserving high utility.

WARNING: This work may contain content that is offensive and harmful!
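
To make the "seemingly benign queries" threat concrete, here is a minimal illustrative sketch of a classic inference-based attack; it is not taken from the paper, and the schema, names, and values are invented. Each query is an innocuous-looking aggregate, but when a predicate happens to match exactly one row, the aggregate discloses that individual's confidential value.

```python
# Illustrative inference-based attack (hypothetical example, not from the paper):
# two individually benign aggregate queries that together leak one person's salary.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
con.executemany("INSERT INTO employees VALUES (?, ?, ?)", [
    ("Alice", "Engineering", 120000.0),
    ("Bob",   "Engineering", 110000.0),
    ("Carol", "Legal",        95000.0),   # the only Legal employee
])

# Query 1: a harmless-looking count reveals the group has size 1.
(count,) = con.execute(
    "SELECT COUNT(*) FROM employees WHERE department = 'Legal'").fetchone()

# Query 2: an average over a size-1 group is the confidential value itself.
(avg_salary,) = con.execute(
    "SELECT AVG(salary) FROM employees WHERE department = 'Legal'").fetchone()

print(count, avg_salary)   # -> 1 95000.0  (Carol's exact salary leaked)
```

Per the abstract, this is exactly the class of attack that rule-based filters tend to miss: each query is individually benign, so detecting the leak requires reasoning over the interaction rather than matching surface patterns.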

🔍 Key Points

  • Proposes SafeNlidb, an end-to-end privacy-security alignment framework for Large Language Model (LLM)-based Natural Language Interfaces to Databases (NLIDB) that mitigates privacy leaks during user interactions with databases.
  • Utilizes an automated pipeline for hybrid chain-of-thought (H-CoT) data synthesis, combining implicit security reasoning with SQL generation, thereby eliminating the need for manual data annotation.
  • Introduces reasoning warm-up and alternating preference optimization (APO) techniques that stabilize multi-preference optimization in LLMs, improving both security awareness and SQL generation reliability (see the sketch after this list).
  • Extensive experiments demonstrate that SafeNlidb outperforms larger LLMs and ideal baselines across security and reliability metrics while maintaining high utility in SQL generation.
  • Develops ShieldSQL, a benchmark for evaluating privacy risks in NLIDB systems, enabling broader assessment of security-aware LLMs.
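
The summary above does not spell out the mechanics of alternating preference optimization, so the following is a speculative sketch under stated assumptions: it assumes APO alternates standard DPO updates between a security-ranked preference set and a utility-ranked (SQL-reliability) preference set, rather than mixing both objectives in a single batch. All names are illustrative, and the toy scorer stands in for an LLM's sequence log-probabilities.

```python
# Hypothetical sketch of alternating preference optimization (APO).
# Assumption: "alternating" means interleaving DPO updates from two preference
# axes (security vs. SQL utility) so neither objective's gradients drown out
# the other, which is one plausible reading of the paper's oscillation fix.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def dpo_loss(pol_w, pol_l, ref_w, ref_l, beta=0.1):
    # Standard DPO objective: push the policy's chosen-vs-rejected log-prob
    # margin above the frozen reference model's margin.
    return -F.logsigmoid(beta * ((pol_w - pol_l) - (ref_w - ref_l))).mean()

class SeqScorer(torch.nn.Module):
    """Toy stand-in for an LLM: maps response features to a sequence log-prob."""
    def __init__(self, dim=16):
        super().__init__()
        self.lin = torch.nn.Linear(dim, 1)

    def forward(self, x):
        return self.lin(x).squeeze(-1)

policy, ref = SeqScorer(), SeqScorer()
for p in ref.parameters():
    p.requires_grad_(False)           # reference model stays frozen
opt = torch.optim.AdamW(policy.parameters(), lr=1e-3)

def make_batches(n_batches, bsz=8, dim=16):
    # Each batch holds features of chosen / rejected responses for one axis.
    return [{"chosen": torch.randn(bsz, dim), "rejected": torch.randn(bsz, dim)}
            for _ in range(n_batches)]

security_batches = make_batches(10)   # pairs ranked by security (no leakage)
utility_batches = make_batches(10)    # pairs ranked by SQL correctness

for sec, util in zip(security_batches, utility_batches):
    for batch in (sec, util):         # alternate the two preference axes
        opt.zero_grad()
        loss = dpo_loss(policy(batch["chosen"]), policy(batch["rejected"]),
                        ref(batch["chosen"]), ref(batch["rejected"]))
        loss.backward()
        opt.step()
```

The design point, as the abstract frames it, is that jointly optimizing multiple preference axes makes vanilla DPO oscillate; alternating gives each axis clean gradient steps. The reasoning warm-up stage (supervised fine-tuning on the hybrid chain-of-thought data before preference optimization) is not shown here.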

💡 Why This Paper Matters

This paper is significant because it addresses the privacy and security risks that arise when LLMs are deployed as database interfaces, an area of growing importance as organizations increasingly rely on AI for data access. SafeNlidb demonstrates that security reasoning can be integrated directly into SQL generation, and it offers a robust evaluation benchmark (ShieldSQL) to guide future research on safety in NLIDB systems.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers will find this paper of particular interest because it offers concrete methodologies for mitigating privacy and security vulnerabilities in LLMs that operate over sensitive data. The proposed framework and the ShieldSQL benchmark supply both techniques and evaluation tooling for developing safer AI systems.

📚 Read the Full Paper