Volume 12, Number 15, September 2022
A Transformer based Multi-Task Learning Approach Leveraging Translated
and Transliterated Data to Hate Speech Detection in Hindi
Authors
Prashant Kapil and Asif Ekbal, IIT Patna, India
Abstract
The growth of internet usage has been accompanied by a rise in antisocial activity, hate speech being one of them. The spread of hate speech in recent years has become a serious problem, and automated techniques are needed to detect it. This paper uses eight publicly available Hindi datasets and explores different deep neural network techniques to detect aggression, hate, abuse, etc. We experimented with multilingual Bidirectional Encoder Representations from Transformers (M-BERT) and Multilingual Representations for Indian Languages (MuRIL) in four settings: (i) a single-task learning (STL) framework; (ii) transferring the encoder knowledge to a recurrent neural network (RNN); (iii) multi-task learning (MTL), in which the eight Hindi datasets are trained jointly; and (iv) pre-training the encoder on English tweets translated into Devanagari script, and the same Devanagari text transliterated into romanized Hindi, before fine-tuning in the MTL fashion. Experimental evaluation shows that cross-lingual information in MTL improves performance on all datasets by a significant margin, outperforming state-of-the-art approaches in terms of weighted-F1 score. Qualitative and quantitative error analyses are also presented to show the effects of the proposed approach.
Keywords
M-BERT, MuRIL, Weighted-F1, RNN, cross-lingual.
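The MTL setting (iii) in the abstract jointly trains one shared encoder over eight datasets, which requires some policy for interleaving mini-batches from the different tasks. The abstract does not spell out the scheduling details; below is a minimal sketch of one common strategy, round-robin sampling over tasks, with all names and data purely illustrative rather than taken from the paper:

```python
import random

def round_robin_batches(datasets, batch_size, steps):
    """Yield (task_id, batch) pairs, cycling over the task list.

    `datasets` maps a task id to its list of examples. Every task
    contributes one mini-batch per cycle, so each dataset is visited
    equally often regardless of its size.
    """
    task_ids = list(datasets)
    for step in range(steps):
        task = task_ids[step % len(task_ids)]
        pool = datasets[task]
        batch = random.sample(pool, min(batch_size, len(pool)))
        yield task, batch

# Toy usage: two hypothetical Hindi hate-speech datasets of unequal size.
data = {
    "aggression": [f"agg_{i}" for i in range(10)],
    "hate": [f"hate_{i}" for i in range(4)],
}
schedule = [task for task, _ in round_robin_batches(data, batch_size=2, steps=6)]
print(schedule)  # tasks alternate: aggression, hate, aggression, hate, ...
```

In a full MTL model, each yielded batch would be encoded by the shared M-BERT or MuRIL encoder and routed to that task's classification head; round-robin scheduling is only one choice, and proportional or temperature-based sampling are common alternatives.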