Twitter is asking its community for input on dehumanizing language to help shape its new anti-dehumanization policy.
According to a September 25th, 2018 blog post, Twitter has been working on developing a new policy aimed at addressing the issue of dehumanizing language, or “language that treats others as less than human.”
“With this change, we want to expand our hateful conduct policy to include content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target,” reads an excerpt from the September 25th post attributed to legal, policy and trust & safety lead Vijaya Gadde and vice president of trust and safety Del Harvey.
“We want your feedback to ensure we consider global perspectives and how this policy may impact different communities and cultures.”
The survey lets users suggest how Twitter’s dehumanization policy could be improved, and share examples of language that, while potentially violating the policy, may still be part of healthy conversation.
“This is part of our singular effort to increase the health of the public conversation on our service and we hope this gives you a better understanding of how new rules are created,” wrote Gadde and Harvey.
Twitter’s new dehumanization survey is the latest attempt on the company’s part to tackle the issue of hate and acrimony on its platform.
In May 2018, the microblogging giant announced plans to use “behaviour-based signals” to mitigate hateful language.
Twitter users have until 6am PT/9am ET on October 9th, 2018 to fill out the social network’s nine-question survey.