Twitter’s new ‘Safety Centre’ offers a streamlined service for users concerned about online abuse.
Its promotion of “good digital citizenship” directs users to extended FAQs on existing features.
So it helps individuals new to the platform, and there are specific information breakdowns for teachers, students, and parents.
But some changes could help tackle online Islamophobia: an updated violent threats policy now extends to “threats of violence against others or promot[ing] violence against others”. To their credit, Twitter acknowledge that the previous policy was too narrow when dealing with specific types of abuse.
An added “enforcement option” lets the support team lock abusive accounts for a set period. Another interesting development relating to abusive users is in the pipeline.
Shreyas Doshi, Twitter’s Director of Product Management, blogged in April that “we have begun to test a product feature to help us identify suspected abusive Tweets and limit their reach. This feature takes into account a wide range of signals and context that frequently correlates with abuse including the age of the account itself, and the similarity of a Tweet to other content that our safety team has in the past independently determined to be abusive. It will not affect your ability to see content that you’ve explicitly sought out, such as Tweets from accounts you follow, but instead is designed to help us limit the potential harm of abusive content. This feature does not take into account whether the content posted or followed by a user is controversial or unpopular”.
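To make the approach Doshi describes concrete, here is a minimal, hypothetical sketch in Python of what a signal-based filter of this kind might look like. This is not Twitter’s implementation: the signals, thresholds, and the simple word-overlap (Jaccard) similarity measure are all assumptions chosen for clarity.

```python
# Hypothetical sketch of a signal-based abuse filter (not Twitter's code).
# Assumed signals: account age, and similarity of a tweet to content a
# safety team has previously judged abusive.

def jaccard_similarity(a: str, b: str) -> float:
    """Word-overlap similarity between two texts, from 0.0 to 1.0."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

def should_limit_reach(tweet: str, account_age_days: int,
                       known_abusive: list[str],
                       similarity_threshold: float = 0.6,
                       new_account_days: int = 7) -> bool:
    """Flag a tweet for limited reach when it both resembles previously
    identified abusive content and comes from a very new account.
    All threshold values here are illustrative assumptions."""
    max_similarity = max(
        (jaccard_similarity(tweet, past) for past in known_abusive),
        default=0.0,
    )
    is_new_account = account_age_days < new_account_days
    return is_new_account and max_similarity >= similarity_threshold

# A near-copy of known abusive text from a day-old account is limited,
# while the identical text from an established account is not.
abusive_examples = ["you people should leave this country"]
print(should_limit_reach("you people should just leave this country",
                         1, abusive_examples))    # True
print(should_limit_reach("you people should just leave this country",
                         400, abusive_examples))  # False
```

Note that this sketch limits reach rather than deleting content, mirroring the stated design goal of reducing harm without affecting what users have explicitly sought out.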
On the surface, the changes, as at Facebook, are superficial and tailored towards ease of access. But dealing with indirect threats and abuse could limit the capacity of anti-Muslim trolls, and that is a welcome change.
The big test will be how Twitter implements these changes. But their stated intention to tackle all forms of online abuse with added vigour is encouraging.