← Back to Library

EAGER: Edge-Aligned LLM Defense for Robust, Efficient, and Accurate Cybersecurity Question Answering

Authors: Onat Gungor, Roshan Sood, Jiasheng Zhou, Tajana Rosing

Published: 2025-11-24

arXiv ID: 2511.19523v1

Added to Library: 2025-11-26 03:01 UTC

Safety

📄 Abstract

Large Language Models (LLMs) are highly effective for cybersecurity question answering (QA) but are difficult to deploy on edge devices due to their size. Quantization reduces memory and compute requirements but often degrades accuracy and increases vulnerability to adversarial attacks. We present EAGER, an edge-aligned defense framework that integrates parameter-efficient quantization with domain-specific preference alignment to jointly optimize efficiency, robustness, and accuracy. Unlike prior methods that address these aspects separately, EAGER leverages Quantized Low-Rank Adaptation (QLoRA) for low-cost fine-tuning and Direct Preference Optimization (DPO) on a self-constructed cybersecurity preference dataset, eliminating the need for human labels. Experiments show that EAGER reduces adversarial attack success rates by up to 7.3x and improves QA accuracy by up to 55% over state-of-the-art defenses, while achieving the lowest response latency on a Jetson Orin, demonstrating its practical edge deployment.

🔍 Key Points

  • EAGER introduces a co-designed framework integrating quantization-aware fine-tuning with Direct Preference Optimization (DPO), targeting efficiency, robustness, and accuracy in cybersecurity QA.
  • The framework achieves significant improvements, reducing adversarial attack success rates by up to 7.3x and improving QA accuracy by up to 55% compared to existing defenses.
  • EAGER successfully eliminates the need for human-labeled datasets by constructing a self-annotated cybersecurity preference dataset, showcasing an innovative approach to preference alignment.
  • The model demonstrates the lowest response latency on an edge device (Jetson Orin), validating its practical deployment in resource-constrained environments.
  • By successfully enhancing both utility and resilience against prompt injection attacks, EAGER establishes a balanced approach that does not compromise critical aspects of performance.

💡 Why This Paper Matters

The EAGER framework represents a significant advancement in deploying large language models for cybersecurity applications on edge devices, balancing essential trade-offs between efficiency, accuracy, and robustness. Its innovative integration of techniques to counteract vulnerabilities while maintaining high performance is crucial for practical implementations in security-sensitive environments.

🎯 Why It's Interesting for AI Security Researchers

This paper is relevant to AI security researchers as it addresses critical challenges faced in deploying large language models (LLMs) securely on edge devices. With increasing threats such as prompt injection attacks, the proposed EAGER framework offers insights into advanced defenses combining efficiency and robustness, which are essential for developing secure AI solutions in cybersecurity contexts.

📚 Read the Full Paper