It's machines vs trolls as Twitter unveils new plan to silence abusers
The move is part of Twitter's attempts to improve what it describes as the health of public conversation on its platform.

Twitter has a troll problem.

Twitter is rolling out changes that use how the collective Twitterverse responds to tweets to influence how often people see them.

Twitter will begin using a wider range of signals to rank tweets in conversations and searches, hiding more replies that are likely to be abusive, the company said today. "Some troll-like behavior is fun, good and humorous. What we're talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter, particularly in communal areas like conversations and search", wrote David Gasca, Twitter's product manager for health.

"We're also looking at how accounts are connected to those that violate our rules, and how they interact with each other", the company explained.

"People contributing to the healthy conversation will be more visible in conversations and search", it said.

Del Harvey, Twitter's vice president of trust and safety, and Gasca said that Twitter is taking in many new signals, most of which are not visible externally: for example, a user signing up for multiple accounts at once, or repeatedly tagging people who don't follow them in tweets.

Twitter has revealed that it is using new tools, including machine learning, to identify signals that indicate when trolls are about to ruin a conversation.

We don't know exactly what Twitter will look like after this change, but the company says it has seen positive results in early testing, "resulting in a 4% drop in abuse reports from search and 8% fewer abuse reports from conversations".

It said it had deleted or added warnings to about 29 million posts that had broken its rules on hate speech, graphic violence, terrorism and sex during the first three months of the year. "This technology and our team will learn over time and will make mistakes," the company said. "There will be false positives and things that we miss; our goal is to learn fast and make our processes and tools smarter."

This is to improve the health of the conversation and improve everyone's Twitter experience. Thousands of behavioral cues, such as heavily tweeting at accounts you don't even follow, will provide the service with more informed data so that it can protect healthy conversation from unwelcome nuisances.
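To give a rough sense of how signal-based ranking like this could work, here is a minimal, purely hypothetical sketch. Twitter has not published its actual model, so the signal names, weights and threshold below are invented for illustration; the real system reportedly uses machine learning over thousands of cues rather than a hand-weighted sum.

```python
# Hypothetical illustration of signal-based reply ranking.
# All signal names and weights are invented; this is NOT Twitter's system.

REPLY_SIGNALS = {
    "tags_non_followers_often": 0.4,  # repeatedly @-mentions accounts that don't follow back
    "bulk_account_signup": 0.3,       # registered alongside many other accounts at once
    "muted_or_blocked_often": 0.5,    # frequently muted or blocked by others
}

def troll_score(flags):
    """Sum the weights of whichever behavioral flags an account trips."""
    return sum(REPLY_SIGNALS[f] for f in flags if f in REPLY_SIGNALS)

def rank_replies(replies, hide_threshold=0.7):
    """Sort replies by ascending score; replies at or above the threshold
    are folded away (still viewable), not deleted."""
    visible = [r for r in replies if troll_score(r["flags"]) < hide_threshold]
    hidden = [r for r in replies if troll_score(r["flags"]) >= hide_threshold]
    visible.sort(key=lambda r: troll_score(r["flags"]))
    return visible, hidden

replies = [
    {"text": "Great point!", "flags": []},
    {"text": "You're all sheep", "flags": ["tags_non_followers_often",
                                           "muted_or_blocked_often"]},
]
visible, hidden = rank_replies(replies)
```

Note that the hidden replies are demoted rather than removed, matching Twitter's description of the content staying on the platform but becoming less visible in conversations and search.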