
Facebook Ramps Up Efforts Against Hate Speech
by Florian Zandt
In a bid to increase the digital safety of journalists and activists, Facebook announced that it would classify these user groups as "involuntary public figures", effectively strengthening the safety measures in place to prevent harassment and bullying. The term first came into wide use in the context of the killing of George Floyd, when it enabled the platform to delete content mocking or praising his death. As our chart shows, Facebook is getting better at removing such hateful content quarter after quarter.
Between April and June 2021 alone, Facebook removed or flagged 31.5 million pieces of content containing hate speech, a drastic increase compared to the first-quarter figure of 25.2 million. The prevalence of hateful posts and comments has allegedly also decreased to approximately 0.05 percent, meaning that roughly five out of every 10,000 content views included hate speech that had slipped past Facebook's flagging and deletion processes. This can partly be attributed to improvements in the platform's AI algorithms, which are also reflected in the share of policy-violating content found before users reported it. In the second quarter of 2019, only 71 percent of hate speech content pieces were detected before users reported them, compared to roughly 98 percent between April and June 2021. Heavily relying on algorithms is not without its downsides, though: in Q2 2021, 416,000 content pieces removed for hate speech were later restored, 328,000 of them through automatic processes not requiring a manual appeal.
Since Facebook started publishing its quarterly Community Standards Enforcement Report to create more transparency around its moderation measures, both the total number of content pieces flagged or removed for hate speech and the proactive action rate have increased continuously, apart from a dip in the first quarter of 2021. The company defines hate speech as “violent or dehumanizing speech, statements of inferiority, calls for exclusion or segregation based on protected characteristics, or slurs. These characteristics include race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disability or disease.”