One of Facebook’s ongoing struggles has been to reduce the amount of abuse on its platform.
On May 1st, the company’s latest effort in this pursuit, a button to label posts as hate speech, was rolled out to a number of users.
However, the feature didn’t work as planned: users were temporarily able to mark any post in the News Feed as hate speech.
Facebook vice president of product Guy Rosen subsequently took to Twitter to address the issue, stating it was an error with an internal test the company was running.
In its online Community Standards guide, Facebook defines hate speech as “a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease.”
As well, Facebook says it provides “some protections for immigration status” and defines an “attack” as “violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.”
However, many took issue with the hate speech button being available on every post, regardless of whether the content could plausibly be considered offensive under Facebook’s own criteria. For that brief period, seemingly innocuous posts like dog photos or pictures with friends could all be flagged as hate speech with this button.
Last month, Facebook CEO Mark Zuckerberg told the U.S. Congress that the company is looking into how to effectively use AI to identify hate speech, although he noted the process is difficult. Overall, he said he expects that in five to 10 years, “We will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate in flagging content for our systems, but today we’re just not there on that.”
This is far from the only time a Facebook test has come under fire. Last year, the company’s testing of a revamped News Feed in several countries inadvertently led to the spread of fake news.
As well, an early 2018 experiment with a ‘news trust’ survey to gauge users’ faith in certain media sources was blasted for being only two questions long and lacking nuance.
Via: Vice News