Twitter Is Not A Neutral Arbiter Of Information

With all the accusations of Twitter’s role in the spread of far-right ideas online and the use of bots to push hashtags up the agenda, we wanted to take a closer look at how Twitter has reacted to calls for the social media giant to take a larger role in preventing their platform from being used for misinformation. When first quizzed about it, Twitter argued that the real-time nature of the platform ensured that the “truth” would always rise to the top. They were adamant that freedom of speech would not be curtailed on their platform. However, this seems to be at odds with a number of policies they have adopted: refusing to verify far-right pundits or commentators and suppressing the appearance of specific hashtags.

Rather than take the proactive approach that Facebook adopted to combat misinformation and fake accounts, Twitter was initially very reluctant to interfere with the content on their platform. In a blog post in June 2017 they reasoned that the very nature of Twitter meant that users had the opportunity to fact-check in real time, providing a constantly updated roster of facts and information that would outweigh any deliberate propagandising or ‘junk news’.

“Twitter’s open and real-time nature is a powerful antidote to the spreading of all types of false information. This is important because we cannot distinguish whether every single Tweet from every person is truthful or not. We, as a company, should not be the arbiter of truth. Journalists, experts and engaged citizens Tweet side-by-side correcting and challenging public discourse in seconds. These vital interactions happen on Twitter every day, and we’re working to ensure we are surfacing the highest quality and most relevant content and context first.”

Twitter claimed to have been expanding their teams and resources dedicated to monitoring and dealing with the use of bots,

“We’ve been doubling down on our efforts here, expanding our team and resources, and building new tools and processes. We’ll continue to iterate, learn, and make improvements on a rolling basis to ensure our tech is effective in the face of new challenges.”

In our interview with Lisa-Maria Neudert of the Computational Propaganda Project (which you’ll find below), we discussed how bots can be used to push certain topics further up the agenda or disrupt the organic use of hashtags. This tactic was used by government forces in Libya during the Arab Spring to spam hashtags used to organise protestors and counter-government groups.

Twitter are attempting to tackle spam at its source, identifying the mass distribution of Tweets and the hashtag manipulation used to push certain topics to the top of the trending agenda. Twitter reduce the visibility of any “potentially spammy Tweets or accounts” whilst they conduct their investigations and will take action against accounts that abuse Twitter’s public API to automate activity.

In a more recent blog post, Twitter affirmed their continued desire to “strengthen Twitter against attempted manipulation, including malicious automated accounts and spam”. They claim to be continually improving their internal systems to detect and prevent spam and malicious automation (although the open API is used to encourage posting from other apps and games) and expanding their efforts “to educate the public on how to identify and use quality content on Twitter.”

After a closed Senate Intelligence Committee briefing earlier this month, Senator Mark Warner (D) described the information shared by Twitter as “inadequate” and “deeply disappointing”. He felt that their testimony “showed an enormous lack of understanding from the Twitter team of how serious this issue is”.

There are a number of ways that Twitter could attempt to better crack down on bots. David Carroll, an assistant professor at the New School in New York, suggested that Twitter could deploy a bot detection tool to help users identify automated accounts; scholars at Indiana University proposed that Twitter could require certain users to prove they’re human by passing a “captcha” test before posting; alternatively, Twitter could enable users to directly flag suspected bot accounts.
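To illustrate what a lightweight bot detection tool of the kind Carroll describes might look at, here is a minimal sketch of our own. It is not Twitter’s method: the account fields and the thresholds (tweets per day, duplicate-tweet ratio) are assumptions chosen for illustration.

```python
from datetime import datetime, timezone

def bot_score(account):
    """Score an account on simple heuristics; higher means more bot-like.

    `account` is a dict with hypothetical fields: 'created_at' (an
    aware datetime), 'tweet_count' (int), and 'tweets' (list of texts).
    """
    score = 0.0
    # Heuristic 1: a very high posting rate (tweets per day since creation).
    age_days = max((datetime.now(timezone.utc) - account["created_at"]).days, 1)
    if account["tweet_count"] / age_days > 100:
        score += 1.0
    # Heuristic 2: a high proportion of duplicate tweets suggests automation.
    tweets = account["tweets"]
    if tweets:
        duplicate_ratio = 1 - len(set(tweets)) / len(tweets)
        if duplicate_ratio > 0.5:
            score += 1.0
    return score
```

A real detector would weigh many more signals (follower graphs, posting-time regularity, client metadata), but even this crude version shows why a user-facing score is feasible.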

Disobedient Media recently reported on accusations of pro-Clinton censorship during the 2016 election. Some 48% of tweets using the #DNCLeak hashtag were hidden, as were 25% of tweets using #PodestaEmails. In a written testimony to the Senate Judiciary Committee, Twitter general counsel Sean Edgett stated that,

“Approximately one quarter (25%) of [#PodestaEmails tweets] received internal tags from our automation detection systems that hid them from searches,”

Of the tweets using the #DNCLeak hashtag, just two percent came from “potentially Russian-linked accounts”, according to Edgett. He explained that Twitter hid the tweets as “part of our general efforts at the time to fight automation and spam on our platform across all areas.”

The New York Times (in conjunction with FireEye) has also recently revealed that thousands of suspected Russian-linked accounts used Twitter (and Facebook) to spread anti-Clinton messages and promote leaked material. Many of these were bots that, according to FireEye researchers, put out identical messages seconds apart, in the exact alphabetical order of their made-up names. On Election Day, for example, they found that one group of Twitter bots sent out the hashtag #WarAgainstDemocrats more than 1,700 times. FireEye found that the suspected Russian bots sometimes managed to push their hashtags into the trending lists, in one case causing the hashtag #HillaryDown to be listed as a trend.
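The pattern FireEye describe, identical messages posted seconds apart by accounts tweeting in alphabetical order of their names, is distinctive enough to sketch a simple detector for. This is our own illustration, not FireEye’s method; the tuple shape of a post and the ten-second window are assumptions.

```python
def looks_coordinated(posts, window_seconds=10):
    """Flag a batch of posts as coordinated if the same text was posted
    within a short time window by accounts in alphabetical username order.

    `posts` is a list of (username, text, unix_timestamp) tuples,
    a hypothetical shape chosen for this example.
    """
    if len(posts) < 3:
        return False  # too few posts to call it a coordinated burst
    posts = sorted(posts, key=lambda p: p[2])  # order by posting time
    same_text = len({text for _, text, _ in posts}) == 1
    tight_window = posts[-1][2] - posts[0][2] <= window_seconds
    names = [name for name, _, _ in posts]
    alphabetical = names == sorted(names)
    return same_text and tight_window and alphabetical
```

Organic conversation almost never satisfies all three conditions at once, which is why this particular botnet’s sloppiness made it detectable after the fact.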

There is no doubt that this use of hashtags to manipulate the political conversation is worrying, as well as difficult to spot. But whilst some attempts to prevent bots pushing issues up the agenda can succeed, it is difficult to prevent them every time, and such attempts can lead to more legitimate hashtags being suppressed or discounted. This sort of interference without a more reliable strategy is concerning: social media firms are arguably a digital embodiment of freedom of speech, a critical pillar of modern democracy, and to challenge or restrict this freedom is a slippery slope to venture down.

After suffering backlash for verifying Jason Kessler, the man who organized the white nationalist and neo-Nazi rally in Charlottesville, Virginia earlier this year, Twitter began revoking their coveted blue checkmarks. Infamous white nationalist and neo-Nazi Richard Spencer, the alt-right activist Laura Loomer (who was recently kicked off Uber and Lyft because of her anti-Muslim tweets), Kessler, and British anti-Muslim activist Tommy Robinson have all had their verification revoked. Twitter has also permanently banned both Milo Yiannopoulos and his former tour manager, Tim “Treadstone” Gionet.

That isn’t to say that these people deserve verification or the right to use Twitter, but this policy is at odds with Twitter’s claim that the platform does not want to interfere with content. The best way to shut down this sort of discussion is to prove it factually incorrect (to correct, as Twitter promote, in real time), not to simply deny a voice to those spreading hate. Pushed out of the mainstream, these ideas fester underground, where they cannot be confronted in a meaningful way, and by revoking verification or banning users outright, Twitter simply creates martyrs and sticks with which the left can be beaten.

Following this review, Twitter revised their terms of use to include the following statement,

“Twitter reserves the right to remove verification at any time without notice. Reasons for removal may reflect behaviors on and off Twitter that include …”

Twitter has also previously refused to verify the account of Wikileaks founder Julian Assange, despite the numerous fake and hoax Assange accounts. Mint Press News reported that “Julian Assange and other well-known government critics are still unverified, leading to speculation that Twitter is purposefully allowing their accounts to remain vulnerable.” Independent journalist Caitlin Johnstone has also reported on the matter, arguing that Twitter’s refusal to verify Assange serves to promote “pro-war propaganda”.

We spoke to Bret Schafer from the Alliance for Securing Democracy about the difficulties of ensuring false information is not propagated online, whilst upholding freedom of speech and expression. 

Twitter are not neutral in this fight: they have chosen to interfere with free expression on their platform (which they are entitled to do, since Twitter is not a public utility). But given this decision, we all must ask what end they are trying to achieve. Are they simply attempting to ensure factual information is spread online and to combat the weaponisation of information and speech? And if so, are they taking the correct steps to ensure that freedom of speech is not compromised in the process?

If you enjoyed what you read here you can follow us on Facebook, Twitter, and Instagram to keep up to date with everything we are covering, or sign up to our mailing list here! If you want to hear more from us you can check out our podcast, Chatter, or subscribe to us on iTunes here.