Twitter to Tackle Dehumanizing Speech with User Help

Twitter has announced that it is changing how it makes rules for the platform by incorporating feedback from its users. In a blog post, the company said it will invite everyone to weigh in on a new policy before it officially rolls out.

The post explained that the first policy to go through this process is a new rule on dehumanizing language.


Twitter is expanding its hateful conduct policy to address dehumanizing language and the real-world harm it can cause. To make the Twitter Rules easier to understand, the company is trying something new and inviting users to take part in the development process.

For the past few months, the company has been developing a policy to address content that treats people as less than human, including language that normalizes severe abuse.

Dehumanizing language is partly covered by the current hateful conduct policy, which addresses hate speech such as the promotion of violence, direct attacks, and threats against people on the basis of nationality, race, gender, sexual orientation, disability, or serious disease.

Even with these policies in place, people still find ways to be abusive, using language designed to make others feel less than human.

The upcoming policy is intended to expand the existing hateful conduct policy to cover content that dehumanizes people based on their membership in a group, even when the language is not directed at a specific person.

Users cannot send feedback via email; they must fill out a survey instead.

The survey closes on October 9. It is short, with just a few questions that users can answer after reading the proposed policy language.

For instance, the survey asks respondents to rate the clarity of the policy on a scale of one to five, and then to suggest how it could be improved. Respondents are also asked to provide examples of speech that contributes to a healthy conversation but might nonetheless violate the proposed rule. In this way, Twitter is trying to uncover any loopholes or unintended exemptions.

Users can also leave additional comments or feedback on the policy. They are asked to enter their age and gender, and can provide their username if they wish.

Twitter has not yet said how much weight the feedback will carry in the policy-making process. The company has only stated that once the feedback is collected, it will follow its standard procedure, with multiple teams reviewing the responses.

The plan to involve the community in policy-making is a significant move. It could make users feel more invested in how the rules are shaped, and consequently, perhaps, more inclined to respect them.

However, Twitter’s problems with harassment and hate speech do not stem from bad rules. In many cases, its rules are quite clear about what is and is not permitted. The problems arise from lax enforcement: Twitter often fails to penalize or ban people who repeatedly violate the rules.

