Recent months have seen a string of incidents where athletes from Australia and around the world were targets of racist online abuse.
Now, Instagram is launching a set of tools it says are designed to protect against this happening.
- “Limits” allows any user to limit comments and DM requests during spikes of increased attention
- AFL’s inclusion manager says abuse of players becoming “more persistent and constant”
- eSafety Commissioner says tweaks to Instagram are welcome but “the horse has bolted”
Will they help?
One of the most prominent international examples of such abuse came in July, after the Euro 2020 final, when three black players on England’s soccer team were targeted by hundreds of racist remarks on Instagram.
The platform did not block the posts and later said its auto-detection technology failed to recognise that orangutan emojis in this context were racist abuse.
Several AFL players in Australia have also copped racist abuse on Instagram and Twitter in recent months.
“Too many times this year, our players and their loved ones have been victim to this type of abuse — enough is enough,” St Kilda said in a statement in July, after Noongar wingman Bradley Hill became the latest player targeted.
“If you engage in online abuse, you are not with us. Together we rise above, and help stamp out, racism.”
Other clubs have been forced to issue similar statements.
Australia’s eSafety Commissioner Julie Inman Grant told the ABC her agency had been engaging with sporting associations across the country, including the AFL, to better understand how online abuse affects staff and athletes.
“We know that one in seven Australians reported experiencing some form of online hate, with those in the public eye — and from diverse ethnic backgrounds — being three times more likely to experience targeted online abuse,” she said.
The AFL’s inclusion manager Tanya Hosch said that racist online abuse of players has become “far more persistent and constant in recent months”.
“The players are really clear they’re exhausted by it emotionally,” she said.
So what are Instagram’s new tools, will they work, and what else can be done to combat the racist online abuse of athletes?
What are the new tools?
The tools are available to all Instagram users, not just athletes or other prominent public figures, although the platform says it designed them specifically with athletes in mind.
The main change is that users can now limit comments and direct message (DM) requests during spikes of increased attention.
The tool, called “Limits”, automatically hides comments and DM requests from people who don’t follow the user, or who only recently followed the user.
In a statement for the launch, head of Instagram Adam Mosseri referred to the racist abuse of players after the Euro 2020 final.
“We developed this feature because we heard that creators and public figures sometimes experience sudden spikes of comments and DM requests from people they don’t know,” he said.
“In many cases this is an outpouring of support — like if they go viral after winning an Olympic medal.
“But sometimes it can also mean an influx of unwanted comments or messages.”
So an AFL player could, for example, turn on Limits before the game, in anticipation of what may come after.
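In plain terms, Limits is a simple gate on who can reach a user while it is switched on. The sketch below is illustrative only: the function name, the follow-date check and the seven-day window are assumptions for the example, not Instagram's actual implementation or thresholds.

```python
from datetime import datetime, timedelta
from typing import Optional

def is_limited(sender_follows_user: bool,
               followed_at: Optional[datetime],
               now: datetime,
               recent_window_days: int = 7) -> bool:
    """Return True if a comment or DM request should be hidden while Limits is on.

    Per Instagram's description, Limits hides interactions from accounts that
    either don't follow the user at all, or only started following recently.
    The 7-day window here is a placeholder, not a documented value.
    """
    if not sender_follows_user:
        return True
    if followed_at is not None and now - followed_at < timedelta(days=recent_window_days):
        return True
    return False
```

So a comment from a long-time follower would still appear, while comments from strangers or brand-new followers would be held back for the duration.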
Aside from Limits, Instagram will also issue stronger warnings when people try to post potentially offensive comments.
It’s also rolling out to the rest of the world the “Hidden Words” feature that it launched in Australia in April. This allows users to automatically filter messages with offensive words, phrases and emojis into a separate folder.
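Hidden Words amounts to a user-configured term filter that routes matching messages out of the main inbox. A minimal sketch of that idea, assuming a simple case-insensitive substring match (the term list and matching rule here are placeholders, not Instagram's actual filter):

```python
def route_message(text: str, hidden_terms: set) -> str:
    """Route a DM request to a 'hidden' folder if it contains a filtered term.

    hidden_terms stands in for the user's configured list of offensive
    words, phrases and emojis; matching logic is a simplifying assumption.
    """
    lowered = text.lower()
    if any(term in lowered for term in hidden_terms):
        return "hidden"
    return "inbox"
```

Messages routed to the hidden folder are kept rather than deleted, so the user can review or report them without seeing them by default.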
So will any of this work?
The eSafety Commissioner said her office welcomed the changes, while also expressing doubt that a new tool would alone solve a deep-seated and long-running issue.
“Retrospectively fixing an online safety issue is like trying to close the stable door after the horse has bolted,” Ms Inman Grant said.
But, she added, tools like Limits that give users the power to mute racist content without having to engage with it first (while still preserving evidence of the abuse) were the right way to go.
“Any tools that require the player to engage with racist content to enable the blocking … is placing too much burden on the user.”
AFL’s Tanya Hosch also said the changes were welcome, but suspected they would not stop people from engaging in racist abuse on the platform.
She said players were frustrated with the abusers themselves but also the platforms “for not dealing with these things more proactively.”
She said if she could ask for one thing from Instagram, it would be a “permanent ban from access to the platform” for people who have been shown to be racially abusive.
“Having said that, every time we see progress, it is really welcome. But we’re going to keep asking for more,” she said.
The AFL Players Association and the Indigenous Players Alliance were contacted for comment.
Facebook Australia head of policy Mia Garlick said the company was committed to rooting out hate and had a “responsibility to make sure everyone feels safe when they are on Instagram”.
eSafety’s new powers to fine platforms, individuals
From January next year, when the Online Safety Act comes into effect, the eSafety Commissioner will have new powers to order digital platforms to remove abusive material.
The new legislation, which passed Parliament in June, regulates cyber-bullying of children and livestream broadcasts that could promote or incite extreme violence, and creates a complaints-based system for removing harmful material.
If a platform does not comply with eSafety’s request to remove a piece of content deemed to be abusive, the agency can issue fines of up to $550,000.
Individuals who post abusive content can be fined as much as $111,000.
Ms Inman Grant said the bar for what material was considered abusive would be set high.
“eSafety can require the removal of adult cyber-abuse material that targets an Australian if we are satisfied that the material is posted with the likely intention of causing serious harm — a high threshold,” she said.
Content with “racist or hateful sentiments” could be sufficiently abusive, she said.
The laws are likely to light a fire under platforms that have otherwise been slow to respond to user complaints about racist and other kinds of abuse.
Last week, the federal government laid out the minimum safety expectations that big tech companies will need to meet under the Act to avoid fines.
These include taking “actions against such emerging risks such as ‘volumetric attacks’ where ‘digital lynch mobs’ seek to overwhelm a victim with abuse”, communications minister Paul Fletcher said at the time.
‘Need for social and cultural change programs’
Ms Inman Grant said there was also a need for social and cultural change programs that target racism at its core, rather than simply penalising or blocking those who direct such abuse at others.
The scale of the cultural change that’s needed was evident last month, after former Crows captain Taylor Walker made racist remarks about North Adelaide player Robbie Young at an SANFL game.
Walker was suspended for six matches, fined $20,000 and appeared in an apology video this week alongside Young.
Prominent members of the Indigenous AFL community have criticised the video, saying it missed the mark and could have been more heartfelt.
In June last year, eSafety and the AFL teamed up to launch the Play it Fair Online campaign, which encourages players and fans to be respectful online.
“Every time we see a player copping abuse online, it’s extremely demoralising and undermines the integrity of the game, but we should also take some encouragement from the outpouring of support that we have seen from other fans and players to drown out a lot of this negativity,” Ms Inman Grant said.
“This is our main call to action — if you see someone you know copping abuse online, be an upstander, not a bystander, and call out this behaviour.”