
Leveraging the Potential of Prompt Engineering for Hate Speech Detection in Low-Resource Languages

Authors: Ruhina Tabasshum Prome, Tarikul Islam Tamiti, Anomadarshi Barua

Published: 2025-06-30

arXiv ID: 2506.23930v1

Added to Library: 2025-07-01 04:01 UTC

Red Teaming

📄 Abstract

The rapid expansion of social media has led to a marked increase in hate speech, which threatens personal lives and results in numerous hate crimes. Detecting hate speech presents several challenges: diverse dialects, frequent code-mixing, and the prevalence of misspelled words in user-generated content on social media platforms. Recent progress in hate speech detection has concentrated on high-resource languages, while low-resource languages still face significant challenges due to the lack of large-scale, high-quality datasets. This paper investigates how this limitation can be overcome via prompt engineering on large language models (LLMs), focusing on the low-resource Bengali language. We investigate six prompting strategies - zero-shot prompting, refusal suppression, flattering the classifier, multi-shot prompting, role prompting, and finally our innovative metaphor prompting - to detect hate speech effectively in low-resource languages. We pioneer metaphor prompting to circumvent the built-in safety mechanisms of LLMs, which marks a significant departure from existing jailbreaking methods. We evaluate all six prompting strategies on the Llama2-7B model and compare the results extensively with three pre-trained word embeddings - GloVe, Word2Vec, and FastText - across three deep learning models: a multilayer perceptron (MLP), a convolutional neural network (CNN), and a bidirectional gated recurrent unit (BiGRU). To demonstrate the effectiveness of our metaphor prompting beyond the low-resource Bengali language, we also evaluate it on another low-resource language, Hindi, and on two high-resource languages, English and German. The performance of all prompting techniques is evaluated using the F1 score and an environmental impact factor (IF), which measures CO$_2$ emissions, electricity usage, and computational time.
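
As a rough illustration of how such prompting strategies can be issued in practice, here is a minimal sketch covering three of the six strategies (zero-shot, role, and metaphor prompting). It assumes the Hugging Face transformers text-generation pipeline and the meta-llama/Llama-2-7b-chat-hf checkpoint; the template wording, including the garden metaphor, is a hypothetical placeholder and not the paper's actual prompts.

```python
# Minimal sketch: three of the six prompting strategies expressed as templates
# and sent to a Llama-2-7B chat model. Template wording is illustrative only;
# the paper's exact prompts and metaphor are not reproduced here.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # assumed chat variant of Llama2-7B
)

PROMPTS = {
    "zero_shot": "Classify the following post as 'hate' or 'not hate':\n{text}",
    "role": (
        "You are a content moderator for a Bengali social media platform. "
        "Label the post below as 'hate' or 'not hate':\n{text}"
    ),
    "metaphor": (
        # Hypothetical metaphorical framing: recast the moderation task so the
        # model answers indirectly instead of refusing the raw request.
        "Think of every post as a plant in a shared garden. Poisonous plants "
        "harm the garden; harmless plants do not. Is the following post a "
        "poisonous plant or a harmless plant?\n{text}"
    ),
}

def classify(text: str, strategy: str = "zero_shot") -> str:
    """Fill a template, query the model, and map its reply to a binary label."""
    prompt = PROMPTS[strategy].format(text=text)
    reply = generator(prompt, max_new_tokens=16, return_full_text=False)[0][
        "generated_text"
    ].lower()
    # Naive parsing of the free-form reply; check the negative phrasing first.
    if "not hate" in reply or "harmless" in reply:
        return "not hate"
    return "hate"

# Example: print(classify("...", strategy="metaphor"))
```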

🔍 Key Points

  • Introduces 'metaphor prompting' as a novel strategy to enhance hate speech detection in low-resource languages, effectively bypassing the built-in safety constraints of large language models (LLMs) such as Llama2-7B.
  • Conducts extensive experiments comparing six prompting strategies and their effectiveness in hate speech detection across multiple languages: Bengali, Hindi, English, and German.
  • Evaluates the environmental impact of hate speech detection methods, demonstrating that metaphor prompting not only improves accuracy but also reduces the carbon footprint compared to traditional prompting methods.
  • Shows that prompted Llama2-7B outperforms traditional deep learning baselines for hate speech detection on both low-resource and high-resource language datasets while remaining computationally efficient.
  • Establishes a framework for jointly evaluating the accuracy and environmental impact of hate speech detection techniques, offering a new lens on NLP model evaluation (a minimal sketch of such an evaluation loop follows this list).
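
The accuracy and environmental-impact criteria above can be read as a single evaluation loop that records the F1 score alongside CO$_2$, electricity, and runtime for each detection method. Below is a minimal sketch of such a loop, assuming scikit-learn for the F1 score and the codecarbon package as a stand-in for emissions and energy tracking; the paper's exact impact-factor (IF) formula is not reproduced here.

```python
# Minimal sketch of an evaluation loop pairing classification quality (F1)
# with environmental measurements (CO2, electricity, wall-clock time).
# codecarbon is an assumed stand-in for the paper's IF measurements.
import time

from codecarbon import EmissionsTracker
from sklearn.metrics import f1_score

def evaluate(classify_fn, texts, gold_labels):
    """Run one detection method over a dataset and report F1 plus impact metrics."""
    tracker = EmissionsTracker(log_level="error")
    tracker.start()
    start = time.time()

    predictions = [classify_fn(t) for t in texts]

    elapsed_s = time.time() - start
    co2_kg = tracker.stop()  # estimated kg CO2-equivalent for the run
    energy_kwh = tracker.final_emissions_data.energy_consumed  # total kWh

    return {
        "f1": f1_score(gold_labels, predictions, pos_label="hate"),
        "co2_kg": co2_kg,
        "energy_kwh": energy_kwh,
        "time_s": elapsed_s,
    }

# Example: metrics = evaluate(lambda t: classify(t, "metaphor"), texts, labels)
```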

💡 Why This Paper Matters

This paper is significant as it addresses the critical problem of hate speech detection in low-resource languages through innovative methodologies that enhance model performance and reduce environmental impact. By introducing metaphor prompting, it opens new avenues for effectively utilizing LLMs in multilingual contexts, which is essential in a world increasingly reliant on digital communication.

🎯 Why It's Interesting for AI Security Researchers

The paper is of particular interest to AI security researchers as it explores the potential exploitation of LLMs through innovative prompting techniques. Understanding how such methods can compromise the ethical constraints of AI models is crucial for developing robust security frameworks. Furthermore, the paper contributes to the ongoing discussion on ensuring responsible AI usage, especially concerning sensitive applications like hate speech detection.
