From Prompts to Protection: Large Language Model-Enabled In-Context Learning for Smart Public Safety UAV

Authors: Yousef Emami, Hao Zhou, Miguel Gutierrez Gaitan, Kai Li, Luis Almeida, Zhu Han

Published: 2025-06-03

arXiv ID: 2506.02649v1

Added to Library: 2025-06-04 04:04 UTC

📄 Abstract

A public safety Unmanned Aerial Vehicle (UAV) enhances situational awareness in emergency response. Its agility and ability to optimize mobility and establish Line-of-Sight (LoS) communication make it increasingly vital for managing emergencies such as disaster response, search and rescue, and wildfire monitoring. While Deep Reinforcement Learning (DRL) has been applied to optimize UAV navigation and control, its high training complexity, low sample efficiency, and simulation-to-reality gap limit its practicality in public safety. Recent advances in Large Language Models (LLMs) offer a compelling alternative. With strong reasoning and generalization capabilities, LLMs can adapt to new tasks through In-Context Learning (ICL), which enables task adaptation via natural language prompts and example-based guidance, without retraining. Deploying LLMs at the network edge, rather than in the cloud, further reduces latency and preserves data privacy, thereby making them suitable for real-time, mission-critical public safety UAVs. This paper proposes integrating LLM-enabled ICL with public safety UAVs to address key functions such as path planning and velocity control in the context of emergency response. We present a case study on data collection scheduling where the LLM-enabled ICL framework can significantly reduce packet loss compared to conventional approaches, while also mitigating potential jailbreaking vulnerabilities. Finally, we discuss LLM optimizers and specify future research directions. The ICL framework enables adaptive, context-aware decision-making for public safety UAVs, thus offering a lightweight and efficient solution for enhancing UAV autonomy and responsiveness in emergencies.
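To make the ICL mechanism concrete, the sketch below shows one way a scheduling prompt for the data-collection case study could be assembled: a short task description, a few solved examples, and the current network state, to be passed to an edge-deployed LLM. This is a minimal illustration, not the paper's implementation; the identifiers (`SensorNode`, `build_icl_prompt`, `query_edge_llm`) and the example states are assumptions made for the sketch.

```python
# Minimal sketch (not from the paper): building an in-context learning prompt
# for UAV data-collection scheduling. All names and example states below are
# hypothetical; the point is that the LLM adapts from a few solved examples
# plus the current state, with no retraining.
from dataclasses import dataclass
from typing import List


@dataclass
class SensorNode:
    node_id: int
    queue_len: int      # packets waiting at the ground node
    distance_m: float   # UAV-to-node distance in metres


# A handful of solved scheduling decisions acts as the ICL "context".
FEW_SHOT_EXAMPLES = """\
State: node 1 (queue 40, 120 m), node 2 (queue 5, 60 m) -> Schedule: node 1
State: node 1 (queue 2, 30 m), node 2 (queue 55, 200 m) -> Schedule: node 2
"""


def build_icl_prompt(nodes: List[SensorNode]) -> str:
    """Compose a natural-language prompt: task description, examples, current state."""
    state = ", ".join(
        f"node {n.node_id} (queue {n.queue_len}, {n.distance_m:.0f} m)" for n in nodes
    )
    return (
        "You schedule which ground sensor a public-safety UAV serves next "
        "to minimise packet loss from buffer overflow.\n"
        f"{FEW_SHOT_EXAMPLES}"
        f"State: {state} -> Schedule:"
    )


def query_edge_llm(prompt: str) -> str:
    """Placeholder for a call to an edge-deployed LLM; swap in a real client here."""
    raise NotImplementedError


if __name__ == "__main__":
    prompt = build_icl_prompt([SensorNode(1, 48, 150.0), SensorNode(2, 7, 45.0)])
    print(prompt)  # inspect the prompt; in practice it would be sent via query_edge_llm()
```

Keeping the scheduling logic entirely in the prompt is what lets the UAV adapt to a new emergency scenario by editing text examples rather than retraining a policy, which is the contrast the paper draws with DRL.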

🔍 Key Points

  • Proposes integrating LLM-enabled In-Context Learning (ICL) into public safety UAVs so that key functions such as path planning and velocity control can be adapted through natural language prompts and example-based guidance, without retraining.

  • Argues for deploying LLMs at the network edge rather than in the cloud, reducing latency and preserving data privacy for real-time, mission-critical operations.

  • Presents a data collection scheduling case study in which the LLM-enabled ICL framework significantly reduces packet loss compared to conventional approaches while mitigating potential jailbreaking vulnerabilities.

  • Discusses LLM optimizers and outlines future research directions for LLM-enabled public safety UAVs.

💡 Why This Paper Matters

Public safety UAVs must make fast, reliable decisions in emergencies such as disaster response, search and rescue, and wildfire monitoring, yet DRL-based control suffers from high training complexity, low sample efficiency, and a simulation-to-reality gap. This paper matters because it proposes a lightweight alternative: LLM-enabled ICL that adapts UAV behavior through prompts and examples rather than retraining, and it backs the proposal with a data collection scheduling case study showing reduced packet loss over conventional approaches.

🎯 Why It's Interesting for AI Security Researchers

For AI security researchers, the paper examines what happens when LLM decisions directly control a safety-critical autonomous system. The case study explicitly considers jailbreaking vulnerabilities and shows how the ICL framework can mitigate them, while edge deployment keeps sensitive emergency data local and preserves privacy. Prompt-driven UAV control in mission-critical settings thus opens a concrete attack surface worth studying, and the framework here offers a starting point for evaluating and hardening it.

📚 Read the Full Paper