Can LLM Infer Risk Information From MCP Server System Logs?

Authors: Jiayi Fu, Yuansen Zhang, Yinggui Wang

Published: 2025-11-08

arXiv ID: 2511.05867v3

Added to Library: 2026-01-23 03:01 UTC

📄 Abstract

Large Language Models (LLMs) demonstrate strong capabilities in solving complex tasks when integrated with external tools. The Model Context Protocol (MCP) has become a standard interface for enabling such tool-based interactions. However, these interactions introduce substantial security concerns, particularly when the MCP server is compromised or untrustworthy. While prior benchmarks primarily focus on prompt injection attacks or analyze the vulnerabilities of LLM-MCP interaction trajectories, limited attention has been given to the underlying system logs associated with malicious MCP servers. To address this gap, we present the first synthetic benchmark for evaluating LLMs' ability to identify security risks from system logs. We define nine categories of MCP server risks and generate 1,800 synthetic system logs using ten state-of-the-art LLMs. These logs are embedded in the return values of 243 curated MCP servers, yielding a dataset of 2,421 chat histories for training and 471 queries for evaluation. Our pilot experiments reveal that smaller models often fail to detect risky system logs, leading to high false negatives. While models trained with supervised fine-tuning (SFT) tend to over-flag benign logs, resulting in elevated false positives, Reinforcement Learning with Verifiable Reward (RLVR) offers a better precision-recall balance. In particular, after training with Group Relative Policy Optimization (GRPO), Llama3.1-8B-Instruct achieves 83 percent accuracy, surpassing the best-performing large remote model by 9 percentage points. Fine-grained, per-category analysis further underscores the effectiveness of reinforcement learning in enhancing LLM safety within the MCP framework. Code and data are available at https://github.com/PorUna-byte/MCP-RiskCue.
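The trade-off described in the abstract — smaller models missing risky logs (false negatives) versus SFT-trained models over-flagging benign ones (false positives) — is measured with standard binary-classification metrics. A minimal sketch, with hypothetical predictions (the function name and example data are illustrative, not from the paper), of how accuracy, precision, and recall might be computed for risky-vs-benign log classification:

```python
# Illustrative sketch: scoring a risk-log classifier's predictions.
# Labels: 1 = risky system log, 0 = benign. All data here is hypothetical.
def score(preds, golds):
    tp = sum(1 for p, g in zip(preds, golds) if p == 1 and g == 1)
    tn = sum(1 for p, g in zip(preds, golds) if p == 0 and g == 0)
    fp = sum(1 for p, g in zip(preds, golds) if p == 1 and g == 0)  # over-flagged benign log
    fn = sum(1 for p, g in zip(preds, golds) if p == 0 and g == 1)  # missed risky log
    total = len(golds)
    return {
        "accuracy": (tp + tn) / total,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

# A model with high false negatives scores low recall despite decent precision:
metrics = score([0, 0, 0, 1, 0, 0], [1, 1, 0, 1, 0, 0])
print(metrics)  # recall 1/3: two risky logs were missed
```

A better precision-recall balance, as the abstract attributes to RLVR-trained models, means improving recall without letting precision collapse from over-flagging.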

🔍 Key Points

  • First synthetic benchmark for evaluating whether LLMs can infer security risks from the system logs of malicious or compromised MCP servers, a gap left by prior work focused on prompt injection and interaction trajectories.
  • Taxonomy of nine MCP server risk categories, with 1,800 synthetic system logs generated by ten state-of-the-art LLMs and embedded in the return values of 243 curated MCP servers.
  • Resulting dataset of 2,421 chat histories for training and 471 queries for evaluation.
  • Pilot experiments show smaller models often miss risky logs (high false negatives), SFT-trained models over-flag benign logs (high false positives), and RLVR achieves a better precision-recall balance.
  • After GRPO training, Llama3.1-8B-Instruct reaches 83% accuracy, surpassing the best-performing large remote model by 9 percentage points.
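RLVR, as used in the paper, depends on a reward that can be checked programmatically rather than judged by another model. A hypothetical sketch of such a setup (the category names, function signatures, and normalization are assumptions for illustration, not the authors' code): the reward is 1 only when the model's predicted label exactly matches the gold label, and GRPO then compares each sampled completion's reward against its group's mean.

```python
# Hypothetical verifiable-reward sketch for GRPO-style training (not the paper's code).
RISK_CATEGORIES = {"benign", "data_exfiltration", "privilege_escalation"}  # illustrative labels


def verifiable_reward(model_output: str, gold_label: str) -> float:
    """Return 1.0 iff the model's extracted label exactly matches the gold label."""
    predicted = model_output.strip().lower()
    if predicted not in RISK_CATEGORIES:
        return 0.0  # malformed outputs earn no reward
    return 1.0 if predicted == gold_label else 0.0


def group_relative_advantages(rewards):
    """GRPO normalizes each completion's reward against its sampling group's mean and std."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [(r - mean) / (std + 1e-8) for r in rewards]
```

Because the reward is a deterministic label match, it cannot be gamed by fluent-but-wrong answers, which is the property that makes it "verifiable".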

💡 Why This Paper Matters

MCP has quickly become the standard interface for connecting LLMs to external tools, yet a compromised or untrustworthy MCP server exposes users to substantial risk. By targeting the system logs that malicious servers produce, rather than prompt injection attacks or interaction trajectories alone, this work fills a gap left by prior benchmarks. Its central result is also practical: with reinforcement learning from verifiable rewards, an open 8B model can outperform much larger remote models at detecting risky logs, making effective MCP-side safety monitoring feasible without frontier-scale resources.

🎯 Why It's Interesting for AI Security Researchers

This research gives AI security researchers a concrete, reproducible way to measure how well LLMs detect compromised MCP servers from their system logs, with fine-grained, per-category analysis across nine risk types. Its empirical findings are directly actionable for building safety monitors: supervised fine-tuning inflates false positives by over-flagging benign logs, while GRPO-trained models strike a better precision-recall balance. With code and data released (MCP-RiskCue), the benchmark lowers the barrier to follow-up work on securing LLM-tool interactions as MCP adoption grows.

📚 Read the Full Paper