Testing Grayscale Interventions to Reduce Negative Emotional Impact on Manual Reviewers
- Authors
- Karunakaran, Sowmya; Ramakrishnan, Rashmi
- Number of authors
- 2
- Title
- Testing Grayscale Interventions to Reduce Negative Emotional Impact on Manual Reviewers
- Year of publication
- 2019
- Reference (APA)
- Karunakaran, S., & Ramakrishnan, R. (2019). Testing Grayscale Interventions to Reduce Negative Emotional Impact on Manual Reviewers. The 4th Symposium on Computing and Mental Health, CHI Workshop’19. http://mentalhealth.media.mit.edu/wp-content/uploads/sites/15/2019/04/CMH2019_paper_39.pdf
- Keywords
- N/A
- URL
- http://mentalhealth.media.mit.edu/wp-content/uploads/sites/15/2019/04/CMH2019_paper_39.pdf
- Article accessibility
- Open access
- Field
- Human-Computer Interaction and Visualization
- Content type (theoretical / applicative / methodological)
- Applicative
- Method
- We present a study measuring the emotional impact of reviewing difficult content by introducing simple image transformations such as grayscaling and blurring of content. We conduct a series of experiments on live content review queues, maximizing external validity by studying the impact on live manual review queues and testing for differences in emotions, output quality, and task completion times across our interventions. Analysis of survey responses submitted by the moderators involved in the experiments.
- Use case
- Online content moderators
- Article objectives
- Despite the importance of the subject, there is no prior research on the effects of technical interventions to reduce the associated emotional impact on reviewers. To address this gap, we present a study measuring the emotional impact of reviewing difficult content by introducing simple image transformations such as grayscaling and blurring of content.
- Research question(s)/Hypotheses/Conclusion
- Research question(s): While machines and technology play a critical role in content moderation, there continues to be a need for manual reviews where human judgement is required to interpret borderline cases and to generate ground truth for ML models. It is known, however, that such manual reviews can be emotionally challenging. Despite the importance of the subject, there is no prior research on the effects of technical interventions to reduce the associated emotional impact on reviewers.
- Hypothesis(es): To address this gap, we present a study measuring the emotional impact of reviewing difficult content by introducing simple image transformations such as grayscaling and blurring of content.
- Conclusion(s): We find that simple stylistic transformations can provide an easy-to-implement solution that significantly reduces the emotional impact of manual content reviews.
- Theoretical framework / Authors
- Emotional response measured with the PANAS scale (Watson, Clark, & Tellegen, 1988)
- Key concepts
- Content moderation
- Emotional impact measurement
- Data collected (type, source)
- Responses to a survey of the moderators involved in a series of experiments conducted on live content review queues.
- Definition of emotions
- No definition
- Positive and negative labeling
- Scale of experimentation (volume of accounts)
- N/A
- Associated technologies
- N/A
- Mention of ethics
- We refrain from collecting any personally identifiable information to keep the study fully anonymous. Reviewers had the option to opt out of taking the PANAS survey.
- Communicative purpose
- We find that simple stylistic transformations can provide an easy to implement solution to significantly reduce the emotional impact of manual content reviews.
- Abstract
- With the rise in user-generated content, there has been a significant increase in content shared online every day through social networks and content platforms. This in turn has increased the need to moderate content to ensure it complies with community guidelines and policies. Content moderation relies on automated processes and manual reviews by human reviewers to determine whether content displayed in the form of images, videos, or text violates the platform’s acceptable use policies. For example, on Google Drive, Photos, and Blogger, in the past year, 160,000 pieces of violent extremism content were taken down [1]. While machines and technology play a critical role in content moderation, there continues to be a need for manual reviews where human judgement is required to interpret borderline cases and to generate ground truth for ML models. It is known, however, that such manual reviews can be emotionally challenging.