
Evaluating LLMs Robustness in Less Resourced Languages with Proxy Models

Authors: Maciej Chrabąszcz, Katarzyna Lorenc, Karolina Seweryn

Published: 2025-06-09

arXiv ID: 2506.07645v1

Added to Library: 2025-06-10 04:00 UTC

Red Teaming

📄 Abstract

Large language models (LLMs) have demonstrated impressive capabilities across various natural language processing (NLP) tasks in recent years. However, their susceptibility to jailbreaks and perturbations necessitates additional evaluations. Many LLMs are multilingual, but safety-related training data contains mainly high-resource languages like English. This can leave them vulnerable to perturbations in low-resource languages such as Polish. We show how surprisingly strong attacks can be cheaply created by altering just a few characters and using a small proxy model for word importance calculation. We find that these character and word-level attacks drastically alter the predictions of different LLMs, suggesting a potential vulnerability that can be used to circumvent their internal safety mechanisms. We validate our attack construction methodology on Polish, a low-resource language, and find potential vulnerabilities of LLMs in this language. Additionally, we show how it can be extended to other languages. We release the created datasets and code for further research.
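The abstract describes a two-step recipe: rank words by importance using a small proxy model, then apply cheap character-level edits to the top-ranked words. Below is a minimal, hedged sketch of that general idea, not the authors' implementation: `score_fn` stands in for a proxy model's prediction score (here any callable mapping text to a number), word importance is estimated by masking each word in turn, and the perturbation is a simple adjacent-character swap. All function names and the `[MASK]` token are illustrative assumptions.

```python
import random


def word_importance(words, score_fn):
    """Rank word indices by how much masking each word shifts the proxy score.

    score_fn is a stand-in for a small proxy model; in practice it would
    return e.g. a class probability for the joined text.
    """
    base = score_fn(" ".join(words))
    diffs = []
    for i in range(len(words)):
        masked = words[:i] + ["[MASK]"] + words[i + 1:]  # illustrative mask token
        diffs.append((abs(base - score_fn(" ".join(masked))), i))
    # Most influential words first.
    return [i for _, i in sorted(diffs, reverse=True)]


def perturb_word(word, rng):
    """One cheap character-level edit: swap two adjacent characters."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]


def attack(text, score_fn, budget=2, seed=0):
    """Perturb the `budget` most important words of `text`."""
    rng = random.Random(seed)
    words = text.split()
    for i in word_importance(words, score_fn)[:budget]:
        words[i] = perturb_word(words[i], rng)
    return " ".join(words)
```

With a toy scorer such as `lambda t: t.split().count("unsafe")`, the masking loop correctly singles out the word `"unsafe"` as most important, and `attack(..., budget=1)` scrambles only that word, leaving the rest of the sentence intact.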

