Joint Modelling of Emotion and Abusive Language Detection
- Authors
- Santhosh Rajamanickam, Pushkar Mishra, Helen Yannakoudakis, Ekaterina Shutova
- Number of authors
- 4
- Title
- Joint Modelling of Emotion and Abusive Language Detection
- Year of publication
- 2020
- Reference (APA)
- Rajamanickam, S., Mishra, P., Yannakoudakis, H., & Shutova, E. (2020). Joint modelling of emotion and abusive language detection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-main.394
- Abstract
- The rise of online communication platforms has been accompanied by some undesirable effects, such as the proliferation of aggressive and abusive behaviour online. Aiming to tackle this problem, the natural language processing (NLP) community has experimented with a range of techniques for abuse detection. While achieving substantial success, these methods have so far only focused on modelling the linguistic properties of the comments and the online communities of users, disregarding the emotional state of the users and how this might affect their language. The latter is, however, inextricably linked to abusive behaviour. In this paper, we present the first joint model of emotion and abusive language detection, experimenting in a multi-task learning framework that allows one task to inform the other. Our results demonstrate that incorporating affective features leads to significant improvements in abuse detection performance across datasets.
- Keywords
- emotion detection, abusive language detection, natural language processing, multi-task learning, affective features
- URL
- https://research.facebook.com/file/763339231062964/Joint-Modelling-of-Emotion-and-Abusive-Language-Detection.pdf
- DOI
- https://doi.org/10.18653/v1/2020.acl-main.394
- Article accessibility
- Open access
- Field
- Artificial Intelligence, Natural Language Processing & Speech
- Content type (theoretical/applied/methodological)
- Methodological
- Method
- The authors propose an approach that jointly learns to detect emotion and abuse, using a multi-task learning (MTL) framework in which one task informs the other.
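The core MTL idea (a shared encoder feeding separate task-specific classification heads, trained on a joint objective) can be sketched as follows. This is a minimal numpy illustration of hard parameter sharing, not the paper's actual architecture; all dimensions, class counts, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not the paper's settings)
VOCAB, EMB, HID = 100, 16, 8
N_ABUSE_CLASSES = 2     # e.g. offensive vs. not offensive (OffensEval-style)
N_EMOTION_CLASSES = 11  # e.g. SemEval-2018 Task 1 emotion labels

# Hard parameter sharing: one shared encoder, one output head per task
W_emb = rng.normal(0, 0.1, (VOCAB, EMB))   # shared word embeddings
W_enc = rng.normal(0, 0.1, (EMB, HID))     # shared encoder weights
W_abuse = rng.normal(0, 0.1, (HID, N_ABUSE_CLASSES))
W_emo = rng.normal(0, 0.1, (HID, N_EMOTION_CLASSES))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def encode(token_ids):
    """Shared representation: mean of embeddings through a tanh layer."""
    return np.tanh(W_emb[token_ids].mean(axis=0) @ W_enc)

def predict(token_ids, task):
    """Route the shared representation to the task-specific head."""
    h = encode(token_ids)
    head = W_abuse if task == "abuse" else W_emo
    return softmax(h @ head)

def joint_loss(examples):
    """Joint objective: average cross-entropy over mixed-task examples,
    so gradients from both tasks flow into the shared parameters."""
    total = 0.0
    for token_ids, task, gold in examples:
        p = predict(token_ids, task)
        total += -np.log(p[gold] + 1e-12)
    return total / len(examples)
```

Because both heads backpropagate through `W_emb` and `W_enc`, the emotion task shapes the representation that the abuse classifier sees, which is the mechanism by which one task "informs" the other.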
- Use cases
- Objectives of the article
- The main objectives of the article are to propose a new approach to abusive language detection that incorporates emotional features into the model via a multi-task learning framework, and to evaluate the effectiveness of this approach in improving abuse detection performance across datasets. The authors aim to address the limitations of existing methods, which have focused solely on modeling the linguistic properties of comments and online communities, and have not taken into account the emotional state of users and how this might affect their language. By developing a joint model of emotion and abusive language detection, the authors hope to provide a more comprehensive and accurate approach to identifying abusive behavior online.
- Research question(s)/Hypotheses/Conclusions
- Research question: whether incorporating affective features into a joint model of emotion and abusive language detection can improve abuse detection performance across datasets.
Hypothesis: a multi-task learning framework that allows one task to inform the other will lead to better detection of abusive language, as the model can leverage the complementary information provided by the two tasks.
Conclusions: incorporating affective features into a joint model of emotion and abusive language detection leads to significant improvements in abuse detection performance across datasets. The results indicate that the emotional state of users is indeed linked to abusive behavior, and that jointly modeling emotion and abusive language detection provides a more comprehensive and accurate approach to identifying abusive behavior online. The authors suggest that their approach could be extended to other complex semantic tasks, such as figurative language processing and inference, to further improve abuse detection performance.
- Theoretical framework/Authors
-
Ekman's model of emotion; natural language processing (NLP) and machine learning techniques, specifically deep learning models such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs) (Matthew E. Peters et al.; Sara Owsley Sood et al.).
Theories of emotion and abusive behavior (Pushkar Mishra, Marco Del Tredici, Helen Yannakoudakis, and Ekaterina Shutova).
- Key concepts
- Abusive language detection, affective features
- Data collected (type, source)
-
Datasets for abusive language and emotion detection, drawn from Twitter.
Abuse detection task: annotated tweets from OffensEval 2019 and Waseem and Hovy (2016).
Emotion detection task: SemEval-2018.
- Definition of emotions
- Ekman
- Scale of experimentation (data volume)
-
OffensEval: 13,240 tweets
Waseem and Hovy: 16,202 tweets
SemEval-2018: 11,000 tweets
- Associated technologies
- NLP, MTL, classifiers, recurrent and convolutional neural networks (RNNs and CNNs), character-based models, and graph-based learning methods
- Mention of ethics
- No
- Communicative purpose
-
"Overall, our results also suggest the superiority of MTL over STL for abuse detection. With this new approach, one can build more complex models introducing new auxiliary tasks for abuse detection. For instance, we expect that abuse detection may also benefit from joint learning with complex semantic tasks, such as figurative language processing and inference."
Making digital spaces safer?