IndoSafety: Culturally Grounded Safety for LLMs in Indonesian Languages

Authors: Muhammad Falensi Azmi, Muhammad Dehan Al Kautsar, Alfan Farizki Wicaksono, Fajri Koto

Published: 2025-06-03

arXiv ID: 2506.02573v1

Added to Library: 2025-06-04 04:01 UTC

Safety

📄 Abstract

Although region-specific large language models (LLMs) are increasingly developed, their safety remains underexplored, particularly in culturally diverse settings like Indonesia, where sensitivity to local norms is essential and highly valued by the community. In this work, we present IndoSafety, the first high-quality, human-verified safety evaluation dataset tailored for the Indonesian context, covering five language varieties: formal and colloquial Indonesian, along with three major local languages (Javanese, Sundanese, and Minangkabau). IndoSafety is constructed by extending prior safety frameworks to develop a taxonomy that captures Indonesia's sociocultural context. We find that existing Indonesian-centric LLMs often generate unsafe outputs, particularly in colloquial and local language settings, while fine-tuning on IndoSafety significantly improves safety while preserving task performance. Our work highlights the critical need for culturally grounded safety evaluation and provides a concrete step toward responsible LLM deployment in multilingual settings. Warning: This paper contains example data that may be offensive, harmful, or biased.
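
To make the dataset design described above concrete, here is a minimal Python sketch of what one IndoSafety-style item might look like. The field names, variety codes, and category label are illustrative assumptions, not the paper's actual schema:

```python
# A minimal, hypothetical sketch of one IndoSafety-style evaluation item.
# All field names and labels below are assumptions for illustration,
# not the dataset's actual schema.
from dataclasses import dataclass

@dataclass
class SafetyItem:
    language_variety: str   # e.g. "formal_id", "colloquial_id",
                            # "javanese", "sundanese", or "minangkabau"
    taxonomy_category: str  # culturally grounded harm category,
                            # e.g. ethnicity, religion, or political issues
    prompt: str             # potentially unsafe prompt, human-verified

item = SafetyItem(
    language_variety="colloquial_id",
    taxonomy_category="ethnicity",
    prompt="...",  # prompt text elided; real items may be offensive
)
```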

🔍 Key Points

  • Introduction of IndoSafety, the first human-verified safety evaluation dataset for Indonesian LLMs, which considers the unique sociocultural context of Indonesia across five language varieties.
  • Development of a detailed safety taxonomy that captures Indonesian cultural sensitivities, including ethnicity, religion, historical controversies, and political issues.
  • Empirical evaluation showing that existing Indonesian-centric LLMs often generate unsafe outputs, particularly in colloquial and local-language settings, and that fine-tuning on IndoSafety improves safety without sacrificing task performance (see the evaluation sketch after this list).
  • Demonstration of a multilingual safety evaluation methodology, spanning dataset creation and model tuning, that underscores the importance of culturally grounded evaluation across diverse languages.
  • Identification of critical risk areas where LLMs produce harmful responses, providing insights into model behavior across different prompt structures.
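
As a concrete illustration of the evaluation loop these points describe, the Python sketch below tallies unsafe-response rates per language variety. It is an assumption-laden sketch, not the authors' released code: the item schema, the variety labels, and the is_unsafe judge are all placeholders (in the paper, safety judgments are human-verified or model-assisted, not keyword checks).

```python
"""Minimal sketch of a per-variety safety evaluation loop in the spirit of
IndoSafety. Field names, variety labels, and the judge are illustrative
assumptions, not the paper's released code."""
import json
from collections import Counter

VARIETIES = ["formal_id", "colloquial_id", "javanese", "sundanese", "minangkabau"]

def is_unsafe(response: str) -> bool:
    """Placeholder safety judge. In practice this would be a human rater
    or a calibrated classifier/LLM judge, not a keyword check."""
    flagged = ["kill", "bomb"]  # stand-in heuristic only
    return any(word in response.lower() for word in flagged)

def evaluate(model_generate, items):
    """Tally unsafe-response rates per language variety.

    model_generate: callable mapping a prompt string to a response string.
    items: iterable of dicts with 'variety' and 'prompt' keys (assumed schema).
    """
    unsafe, total = Counter(), Counter()
    for item in items:
        response = model_generate(item["prompt"])
        total[item["variety"]] += 1
        unsafe[item["variety"]] += is_unsafe(response)  # bool counts as 0/1
    return {v: unsafe[v] / total[v] for v in total}

if __name__ == "__main__":
    # Toy run with a stub model; replace with a real LLM call.
    items = [{"variety": v, "prompt": "..."} for v in VARIETIES]
    rates = evaluate(lambda prompt: "safe refusal", items)
    print(json.dumps(rates, indent=2))
```

Comparing per-variety rates from a loop like this is what surfaces the paper's key finding: unsafe outputs concentrate in colloquial and local-language prompts.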

💡 Why This Paper Matters

This paper advances the safety evaluation of large language models by grounding evaluation and fine-tuning in a culturally diverse setting, specifically Indonesia. By introducing IndoSafety, the authors not only contribute a novel dataset but also identify risk areas specific to Indonesian languages and culture, supporting the responsible deployment of LLMs in a multilingual and multicultural landscape. The findings underline the necessity of local adaptation for AI technologies to be both safe and effective.

🎯 Why It's Interesting for AI Security Researchers

The paper is a valuable resource for AI security researchers because it addresses an often-overlooked aspect of safety evaluation for language models: cultural sensitivity. The IndoSafety dataset and taxonomy provide a framework for understanding and mitigating harmful outputs in specific sociocultural contexts. This work is particularly relevant given the increasing deployment of LLMs in diverse settings, which calls for safeguards that are not only technically sound but also culturally informed.

📚 Read the Full Paper

https://arxiv.org/abs/2506.02573v1