Presentation Information

[4O4-IS-2b-02] Emotion-Aware Hate Speech Detection Using Transformer-Based BERT Model

〇Tenzin Choesang1, Yusuke Manabe1 (1. Chiba Institute of Technology)
regular

Keywords:

Hate Speech Detection, BERT, Emotion Analysis

The increasing use of hate speech and offensive language on social media poses a significant research challenge for accurate detection and contextual understanding. We propose an emotion-analysis-based hybrid framework consisting of fine-tuned transformer models, rule-based post-processing, and meta-level emotion analysis. A domain-specific BERT model for hate language (HateBERT) was introduced and trained on the Davidson dataset, followed by fine-tuning on multiple dataset variations as well as the addition of explicit hate samples from the HateXplain data. A rule-based threat detection system was proposed to further improve the identification of direct threats within sentences. Model performance was assessed using precision, recall, F1-score, and confusion matrices. The best-performing configuration, which combined the Davidson dataset with HateXplain-labelled data, achieved an accuracy of 93.7%, surpassing all other configurations as well as traditional machine learning methodologies. The overall system attained 92.2% accuracy with fairly balanced performance across classes. Emotion analysis revealed that, among primary negative emotions, annoyance and anger were most prevalent, followed by sorrow and regret, which hints at the emotional complexity of the harmful online discourse users may face when communicating online.
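The rule-based threat detection step described above can be illustrated with a minimal sketch. The paper's actual rule set is not given here, so the patterns, function name, and labels below are hypothetical; the sketch only shows the general idea of a rule layer that overrides a classifier's prediction when a direct threat is matched.

```python
import re

# Hypothetical direct-threat patterns; the paper's actual rules are not
# specified, so these exist purely to illustrate the post-processing idea.
THREAT_PATTERNS = [
    r"\bi (?:will|'ll|am going to|gonna) (?:kill|hurt|destroy|attack)\b",
    r"\byou (?:will|'re going to) (?:pay|regret|die)\b",
]

def apply_threat_rules(text: str, model_label: str) -> str:
    """Escalate a model prediction to 'hate' when a direct threat is matched."""
    lowered = text.lower()
    for pattern in THREAT_PATTERNS:
        if re.search(pattern, lowered):
            return "hate"       # rule overrides the classifier's prediction
    return model_label          # otherwise keep the model's decision

# Example: the classifier labelled this 'offensive'; the rule escalates it.
print(apply_threat_rules("I will hurt you if you come back", "offensive"))  # hate
print(apply_threat_rules("this movie was terrible", "offensive"))           # offensive
```

In a full pipeline such a layer would run after the fine-tuned transformer, trading a small amount of precision for higher recall on explicit threats that a purely statistical classifier may miss.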