← Back to Library

Multilingual Hidden Prompt Injection Attacks on LLM-Based Academic Reviewing

Authors: Panagiotis Theocharopoulos, Ajinkya Kulkarni, Mathew Magimai. -Doss

Published: 2025-12-29

arXiv ID: 2512.23684v1

Added to Library: 2026-01-07 10:06 UTC

Red Teaming

📄 Abstract

Large language models (LLMs) are increasingly considered for use in high-impact workflows, including academic peer review. However, LLMs are vulnerable to document-level hidden prompt injection attacks. In this work, we construct a dataset of approximately 500 real academic papers accepted to ICML and evaluate the effect of embedding hidden adversarial prompts within these documents. Each paper is injected with semantically equivalent instructions in four different languages and reviewed using an LLM. We find that prompt injection induces substantial changes in review scores and accept/reject decisions for English, Japanese, and Chinese injections, while Arabic injections produce little to no effect. These results highlight the susceptibility of LLM-based reviewing systems to document-level prompt injection and reveal notable differences in vulnerability across languages.

🔍 Key Points

  • The study demonstrates the vulnerability of Large Language Models (LLMs) to document-level hidden prompt injection attacks, highlighting significant impacts on academic review scores and acceptance decisions.
  • A comprehensive dataset of 500 real academic papers was constructed for evaluation, showcasing the effects of hidden prompts injected in multiple languages with notable differences in susceptibility among these languages.
  • Results indicated substantial adverse effects on reviews for English, Japanese, and Chinese injections, while Arabic injections showed minimal impact, revealing critical implications for multilingual robustness in LLM-based applications.
  • The analysis features precise experimental metrics, including score drift, Injection Success Rate (ISR), and transitions in acceptance outcomes, contributing to a robust assessment of the risks posed by these attacks.
  • The paper advocates for further research on multilingual vulnerability and mitigation strategies, underscoring the immediate relevance of understanding these risks in high-stakes academic peer review scenarios.

💡 Why This Paper Matters

This paper is crucial as it illuminates a significant security threat to LLMs utilized in critical processes such as academic peer review. The findings underscore the necessity of addressing the vulnerabilities inherent in LLMs, particularly in multilingual contexts, where the impact of adversarial inputs varies greatly. By revealing how hidden prompt injections can skew review outcomes, it calls for urgent attention to develop safeguards to ensure the reliability and integrity of automated decision-support systems in academia and beyond.

🎯 Why It's Interesting for AI Security Researchers

AI security researchers would find this paper of great interest as it addresses the emergent threat of prompt injection attacks on LLMs, a significant concern given the increasing reliance on these models in critical workflows. The research not only examines this vulnerability in a novel multilingual context but also provides empirical evidence of how these injections can alter decisions in high-stakes scenarios. The insights gained from this study can inform the development of more robust frameworks and countermeasures to protect against adversarial manipulations in AI systems.

📚 Read the Full Paper