As the threat of white supremacy rises online in Australia, Twitter fails to combat it effectively


This morning, the world looked on as President-elect Joe Biden was inaugurated in a city heavily locked down to prevent a repeat of the riots seen at the Capitol building on January 6. That incident brought to light an issue that American national security agencies have been warning about for years: the threat of violence – indeed, domestic terrorism – by far right groups.

In Australia, too, ASIO has been monitoring this threat as a rising concern, advising in 2019 that cases of this nature have “skyrocketed”.

Last week, the Online Hate Prevention Institute (OHPI) issued a briefing analysing Twitter’s failure to remove an Australian white supremacist account that has been posting the same 12 messages of hate, day after day, to the #auspol hashtag for years. The institute said the use of automation to amplify the voice of extremism and potentially radicalise others must end.

In 2018 the One Nation Senator Pauline Hanson tabled a motion in parliament declaring that “It’s okay to be white”, which Coalition members initially voted in favour of. The statement was widely condemned as a dog whistle to white supremacist groups. The motion was ultimately voted down, but it spawned a spate of incidents targeting politicians who had opposed it.

“The government clearly voted the wrong way, and yes, they did apologise, but that phrase, ‘It’s okay to be white’, had its day in the sun,” said Andre Oboler, CEO of OHPI.

The Twitter account in OHPI’s case study promoted this sentiment, phrased in different ways.

“It’s not something that Twitter can claim that they don’t know how to find. That exact phrase is one of the first that you’ll be looking for if you’re conducting automatic searches for white nationalism.

“The way that this social media account has automated variations of this phrase is very interesting. Before we went public with this, we contacted both Twitter and If This Then That [the company whose service did the automation], and the Twitter account is still up, but there are no new posts. Our analysis is that If This Then That has acted and suspended the automation service that allows these things to be posted, but Twitter still hasn’t removed 3200+ copies of this phrasing.”

Oboler added that there has been a shift in the way that the Australian government and the social media giants have viewed right wing extremism since the Christchurch mosque attacks in 2019, but more needs to be done to combat this threat.

In the United States, the First Amendment is seen to protect all forms of speech, including hate speech, but in Australia, both terrorism law and anti-discrimination law allow such speech to be addressed.

In Australian law, there are provisions relating to terrorism under which content must be removed – if it has a political motivation, shows a threat of violence, and is either trying to induce fear in the population or to pressure decision makers. Section 18C of the Racial Discrimination Act addresses racist speech. In Victoria, an inquiry is underway looking at extending anti-vilification protections. Currently, they cover racial and religious vilification, and may end up covering issues such as homophobia and misogyny.

Oboler said that his research has indicated that the major social media platforms are now putting measures in place to remove hate speech online.

“These companies are accountable to shareholders, there’s a degree of pressure on them to ‘do the right thing’ and to varying degrees they’ve been trying to do that – some more successfully than others,” he said.

Facebook has been the most responsive to date, with more than 99% of offending posts removed before they are seen by users, thanks to its use of artificial intelligence to identify problematic material.

“At the other extreme, we’ve got Twitter, who don’t seem to have even basic automation to catch things like ‘It’s okay to be white’,” said Oboler.

The new players in the social media realm are smaller platforms that promote themselves as ‘freedom of speech’ platforms and do not moderate content. Oboler said they add another layer of difficulty to preventing far right extremism.

“A lot of them, such as Parler, are targeting, or specifically built for, far right groups. As the major players crack down on the far right, those people migrate to such platforms. How do we deal with that? Should the government step in?”

Image: Jason Howie
