“It’s common and a part of being a content creator”: Understanding How Creators Experience and Cope with Hate and Harassment Online
- Authors
- Thomas, Kurt; Kelley, Patrick Gage; Consolvo, Sunny; Samermit, Patrawat; Bursztein, Elie
- Number of authors
- 5
- Title
- “It’s common and a part of being a content creator”: Understanding How Creators Experience and Cope with Hate and Harassment Online
- Year of publication
- 2022
- Reference (APA)
- Thomas, K., Kelley, P. G., Consolvo, S., Samermit, P., & Bursztein, E. (2022). “It’s common and a part of being a content creator”: Understanding How Creators Experience and Cope with Hate and Harassment Online. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1‑15. https://doi.org/10.1145/3491102.3501879
- Keywords
- Security and privacy, hate, harassment, creators, content moderation
- URL
- https://dl.acm.org/doi/10.1145/3491102.3501879
- DOI
- https://doi.org/10.1145/3491102.3501879
- Article accessibility
- Open access
- Field
- Security, Privacy and Abuse Prevention
- Content type (theoretical / applicative / methodological)
- Applicative
- Method
- Analysis of survey responses from content creators
- Use case
- US content creators on Facebook, Instagram, TikTok, Twitter, or YouTube
- Article objectives
- We surveyed creators to understand their personal experiences with attacks (including toxic comments, impersonation, stalking, and more), the coping practices they employ, and gaps they experience with existing solutions (such as moderation or reporting)
- Research question(s) / Hypotheses / Conclusion
- Research question(s) : In order to prevent or mitigate attacks, creators and platforms rely on a variety of levers including community guidelines [53], content moderation tools [32, 33, 52, 71], and abuse reporting [14, 34], with an eye towards increasingly automated detection [21, 29]. However, multiple critiques have been leveled at these systems, including fragility to evasion or bias [5, 18, 35] and a lack of transparency for what policies are enforced [8]. Equally challenging, protections for hate and harassment are often disjointed, requiring unique interventions for a creator experiencing toxic comments, versus a creator whose personal information was leaked, or a creator being overloaded with negative reviews and ratings [64].
- Hypothesis(es): As such, we argue there is a gap in the community’s current understanding for which solutions best protect creators, and how to prioritize improvements based on the frequency of attacks and their resulting harms.
- Conclusion(s) : Nearly every creator in our study experienced some form of hate and harassment, and for one in three, such experiences were a regular occurrence. As such, creators represent a population at-risk of hate and harassment compared to general internet users. Creators frequently relied on content moderation, and to a lesser extent reporting, for responding to attacks. However, when these platform-provided tools fell short, some creators engaged in protective practices such as self-censoring their personal attributes and beliefs, or leaving platforms and communities entirely, in order to avoid further harm.
- Theoretical framework / Authors
- Tools used by platforms to combat hate and harassment (Pater et al., 2016)
- Shortcomings of these control measures (Pater et al., 2016; Crawford & Gillespie, 2016)
- Estimates of hate and harassment online (Pew Research Center, 2017)
- Research about people who are at risk (Warford et al., 2021)
- Key concepts
- Content moderation
- Online hate and harassment
- Data collected (source type)
- Responses to a survey of US content creators on Facebook, Instagram, TikTok, Twitter, or YouTube
- Definition of emotions
- No definition
- List of harassment experiences
- Scale of experiment (number of accounts)
- 198 creators contacted, 145 replies received, 135 responses retained
- Associated technologies
- Squadbox [42], which allows a person to appoint family members, friends, or community members to assist with review.
- Keyword lists like Hatebase [26] and community-generated blocklists of abusive accounts like Blocktogether [9].
- HeartMob, which connects targets of harassment with supporters [8].
- Mention of ethics
- 3.4 Research ethics & anonymization
To ensure our work did not put participants at undue risk when recalling past sensitive experiences, our study plan was reviewed by a set of experts at Google in domains including ethics, human subjects research, policy, legal, security, privacy, and anti-abuse. We note that Google research does not require IRB approval, though we adhere to similarly strict standards. We alerted participants that our survey collected sensitive demographic data in our consent form. Additionally, all demographic questions included an option to “Prefer not to say”, and did not require any answer within our survey tool.
We took multiple steps to ensure the anonymity of participants. We note that our survey instrument never collected names, email addresses, social media handles, or other public identifiers of creators. Distribution and compensation were handled solely by the organizers of the residency program, who had access to identifying contact information, while the researchers involved in the study were the only parties with access to raw response data. Throughout the paper, the quotes we provide are the unedited responses of participants. We have only removed identifying information, including specific platform and community names or features, to protect the participants from de-anonymization.
- Communicative purpose
- As creators represent an at-risk population, it is our belief that their lived experiences act as a portent for how hate and harassment will evolve online. Lessons and protective practices that emerge for creators can thus inform the broader solution space for general internet users.
- Abstract
- Content creators—social media personalities with large audiences on platforms like Instagram, TikTok, and YouTube—face a heightened risk of online hate and harassment. We surveyed 135 creators to understand their personal experiences with attacks (including toxic comments, impersonation, stalking, and more), the coping practices they employ, and gaps they experience with existing solutions (such as moderation or reporting). We find that while a majority of creators view audience interactions favorably, nearly every creator could recall at least one incident of hate and harassment, and attacks are a regular occurrence for one in three creators. As a result of hate and harassment, creators report self-censoring their content and leaving platforms. Through their personal stories, their attitudes towards platform-provided tools, and their strategies for coping with attacks and harms, we inform the broader design space for how to better protect people online from hate and harassment.