SoK: Hate, Harassment, and the Changing Landscape of Online Abuse
- Authors
- Thomas, Kurt; Akhawe, Devdatta; Bailey, Michael; Boneh, Dan; Bursztein, Elie; Consolvo, Sunny; Dell, Nicola; Durumeric, Zakir; Kelley, Patrick Gage; Kumar, Deepak; McCoy, Damon; Meiklejohn, Sarah; Ristenpart, Thomas; Stringhini, Gianluca
- Number of Authors
- 14
- Title
- SoK: Hate, Harassment, and the Changing Landscape of Online Abuse
- Publication Year
- 2021
- Reference (APA)
- Thomas, K., Akhawe, D., Bailey, M., Boneh, D., Bursztein, E., Consolvo, S., Dell, N., Durumeric, Z., Kelley, P. G., Kumar, D., McCoy, D., Meiklejohn, S., Ristenpart, T., & Stringhini, G. (2021). SoK: Hate, Harassment, and the Changing Landscape of Online Abuse. 2021 IEEE Symposium on Security and Privacy (SP), 247‑267. https://doi.org/10.1109/SP40001.2021.00028
- Keywords
- ND
- URL
- https://ieeexplore.ieee.org/abstract/document/9519435
- DOI
- https://doi.org/10.1109/SP40001.2021.00028
- Article Accessibility
- Open access
- Field
- Security, Privacy and Abuse Prevention
- Content Type (theoretical / applied / methodological)
- Applied
- Method
-
We collate over 150 research papers and prominent news stories related to hate and harassment and use them to create a taxonomy of seven distinct attack categories.
Analysis of responses to a public survey. Wherever possible, we compare our results to similar surveys.
- Use Case
- Online hate and harassment
- Article Objectives
- We propose a taxonomy for reasoning about online hate and harassment.
- Research Question(s)/Hypotheses/Conclusion
- Research question(s) : In this work, we explore how online hate and harassment has transformed alongside technology and make a case for why the security community needs to help address this threat.
- Hypothesis(es) : In this work, we argued that security, privacy, and anti-abuse protections are failing to address the growing threat of online hate and harassment.
- Conclusion(s) : We proposed a taxonomy, built from over 150 research articles, to reason about these new threats. We also provided longitudinal evidence that hate and harassment has grown 4% over the last three years and now affects 48% of people globally. Young adults, LGBTQ+ individuals, and frequent social media users remain the communities most at risk of attack. We believe the computer security community must play a role in addressing this threat. To this end, we outlined five potential directions for improving protections that span technical, design, and policy changes to ultimately assist in identifying, preventing, mitigating, and recovering from hate and harassment attacks.
- Theoretical Framework/Authors
- Attack characteristics (Matthews et al., 2017; Sambasivan et al., 2019; Chatterjee et al., 2018)
- Motivation of attackers (Citron, 2014)
- Online hate and harassment (Slonje & Smith, 2008; Kowalski et al., 2016)
- Cybercrime (Anderson, 2012)
- Violent extremism and emerging technologies (Brachman, 2006; Edwards & Gribbon, 2013; Lima et al., 2018)
- Disinformation and misinformation (Starbird, Arif & Wilson, 2019)
- Key Concepts
- Online hate and harassment
- Data Collected (source type)
-
Taxonomy:
Examination of the last five years of research from IEEE S&P, USENIX Security, CCS, CHI, CSCW, ICWSM, WWW, SOUPS, and IMC, on topics related to hate speech, harassment, trolling, doxing, stalking, non-consensual image exposure, disruptive behavior, content moderation, and intimate partner violence. We then manually searched through the related works of these papers for relevant research, including findings from the social sciences and psychology communities (though restricted solely to online hate and harassment, rather than hate speech or bullying in general). Additionally, we relied on the domain expertise of the authors to identify related works and major recent news events.
Survey:
Our survey asked participants “Have you ever personally experienced any of the following online?” and then listed a fixed set of experiences that participants could select from. The survey asked only whether each behavior had been experienced (prevalence), without measuring frequency or severity. We did expand the set to include eight other experiences related to lockout and control, surveillance, content leakage, impersonation, and a deeper treatment of toxic content beyond just name calling (as used by earlier works).
- Definition of Emotions
- Definition of hate and harassment
- Scale of Experiment (volume of accounts)
-
Taxonomy: Review of over 150 press articles and scientific papers on the subject of online hate and harassment
Survey: 50,000 respondents in 22 countries (Brazil, China, Colombia, France, Germany, India, Indonesia, Ireland, Japan, Kenya, Mexico, Nigeria, Poland, Russia, Saudi Arabia, South Korea, Spain, Sweden, Turkey, United Kingdom, United States, Venezuela)
- Associated Technologies
- Nudges, indicators, and warnings
- Human moderation, review, and delisting
- Automated detection and curation
- Mention of Ethics
-
B. Challenges for researchers
Researcher safety and ethics.
Currently, there are no best practices for how researchers can safely and ethically study online hate and harassment. Risks facing researchers include becoming a target of coordinated, hostile groups, as well as emotional harm stemming from reviewing toxic content (similar to risks for manual reviewers) [2]. Likewise, researchers must ensure they respect at-risk subjects and do not further endanger targets as they study hate and harassment.
- Communicative Purpose
- We outlined five potential directions for improving protections that span technical, design, and policy changes to ultimately assist in identifying, preventing, mitigating, and recovering from hate and harassment attacks.
- Abstract
- We argue that existing security, privacy, and anti-abuse protections fail to address the growing threat of online hate and harassment. In order for our community to understand and address this gap, we propose a taxonomy for reasoning about online hate and harassment. Our taxonomy draws on over 150 interdisciplinary research papers that cover disparate threats ranging from intimate partner violence to coordinated mobs. In the process, we identify seven classes of attacks—such as toxic content and surveillance—that each stem from different attacker capabilities and intents. We also provide longitudinal evidence from a three-year survey that hate and harassment is a pervasive, growing experience for online users, particularly for at-risk communities like young adults and people who identify as LGBTQ+. Responding to each class of hate and harassment requires a unique strategy and we highlight five such potential research directions that ultimately empower individuals, communities, and platforms to do so.