Published Jun 24 2019

Legislation and the Christchurch Call: The problems of policing the internet

Four weeks after a lone Australian gunman massacred 50 Muslim worshippers in Christchurch, New Zealand – livestreaming the shooting on Facebook – the Australian Parliament passed legislation designed to punish social media platforms showing violent videos.

The Sharing of Abhorrent Violent Material bill was “most likely a world first”, said Attorney-General Christian Porter. The bill creates offences for hosting services or content service providers that fail to quickly remove videos depicting “abhorrent violent conduct”, or that fail to notify the Australian Federal Police about them. Terrorist acts, murder, attempted murder, torture, rape and kidnapping are covered by the bill.

The corporate penalties are substantial – up to $10.5 million, or 10 per cent of annual turnover. Individual citizens who provide a hosting service can be fined up to $2.1 million, or imprisoned for up to three years, or both.

The bill is partly a response to public outrage that the massacre was shared on Facebook, the world’s largest and most influential social media platform. Facebook was criticised for being slow to block the footage, and for not having more efficient procedures in place. Links to the video were reportedly shared faster than the footage could be removed.


But the bill has also been criticised as a knee-jerk response that could potentially lead to unintended consequences.

Sunita Bose, the managing director of the Digital Industry Group, which represents Google, Facebook, Twitter, Amazon and Verizon Media in Australia, said its members would work to remove offensive content as soon as they were able. But “with the vast volumes of content uploaded to the internet every second, this is a highly complex problem”, she added.

Tech giants' agreement

In mid-May, New Zealand Prime Minister Jacinda Ardern co-hosted a meeting in Paris with French President Emmanuel Macron to discuss preventing the spread of terrorism online. That meeting was also attended by representatives from Facebook, Twitter, Microsoft, Google and Amazon. This time the technology companies responded more positively, by endorsing the Christchurch Call.

New Zealand Prime Minister Jacinda Ardern pays tribute to victims of the Christchurch massacre.

The call is essentially a self-regulation protocol designed to hinder the spread of violent or extremist online content (it’s so far been signed by 18 countries, including Australia).

The tech giants agreed to update their terms of use expressly to prohibit the distribution of terrorist or extremist content, and to improve reporting processes so users can promptly flag their concerns. A particular focus will be on improving their vetting measures for livestreaming. They also agreed to publish regular updates on their progress in detecting and removing violent or extremist material.

In addition, the companies agreed to work collaboratively with each other, and with institutions such as governments, the education sector and non-government organisations, to improve understanding and share technical advances.

Multiple pathways hinder control

Will it work? Monash cybersecurity expert Carsten Rudolph, an associate professor in the Faculty of Information Technology, points out that “the internet was developed as an infrastructure that is very resilient”.

“So there are lots of pathways to exchange messages, and if one pathway is blocked, I can find another road. One of the main ideas of the internet is that there’s no central control.”

Telecommunication companies have the capacity to block IP addresses deemed to be the source of disturbing content – child pornography, for example. Paedophile networks are also banned from sharing material on social media platforms.
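The mechanics of an IP-level block are simple in principle: compare the source address of a connection against a list of banned networks. The minimal Python sketch below illustrates the idea; the addresses are reserved documentation ranges standing in for a real blocklist, which would be maintained by regulators and carriers rather than hard-coded.

```python
import ipaddress

# Hypothetical blocklist. These are reserved documentation ranges (RFC 5737),
# used here purely as stand-ins; no real blocklist is implied.
BLOCKED_NETWORKS = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blocked(ip: str) -> bool:
    """Return True if the address falls inside any banned network."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

print(is_blocked("192.0.2.17"))   # True: connection would be refused
print(is_blocked("203.0.113.5"))  # False: traffic passes through
```

The sketch also shows why the measure is brittle: a source that simply moves to an address outside the listed ranges slips straight past, which is Dr Rudolph’s point about the internet always offering another pathway.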

“Facebook and others do invest quite a bit of effort and money into blocking this kind of content,” Dr Rudolph says. “Unfortunately, technology is not clever enough, even with machine learning, to do this automatically. So there are actually people sitting there, looking at images and deleting them if necessary.”

Automatic filters are notoriously error-prone – Dr Rudolph gives the example of images of the Little Mermaid statue in Copenhagen being blocked by a social media platform because the filter mistook the statue for a naked person. When a shooting is posted in real time, how does an artificial intelligence program distinguish it from a scene in a violent movie, or from a first-person shooter video game? (The Christchurch protocol acknowledges that live content is particularly difficult to police.)
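In practice, platforms combine the two approaches Dr Rudolph describes: an automatic classifier scores content, and anything the model is unsure about is routed to human moderators. The Python sketch below shows that triage logic in its simplest form; the thresholds, labels and classifier score are all hypothetical, not any platform’s actual system.

```python
# Threshold-based triage, assuming some upstream classifier has already
# produced p_violation: the model's estimated probability that a piece of
# content breaks the rules. All values and labels here are hypothetical.

REMOVE_THRESHOLD = 0.95   # very confident: take the content down automatically
REVIEW_THRESHOLD = 0.60   # uncertain: route to a human moderator

def triage(p_violation: float) -> str:
    """Decide what happens to content given the classifier's confidence."""
    if p_violation >= REMOVE_THRESHOLD:
        return "remove"        # automatic takedown, no human in the loop
    if p_violation >= REVIEW_THRESHOLD:
        return "human_review"  # the people "sitting there, looking at images"
    return "allow"

# A statue misread as a naked person might score 0.70: flagged for review,
# not silently deleted. A livestreamed shooting should score near 1.0.
print(triage(0.70))  # human_review
print(triage(0.99))  # remove
print(triage(0.10))  # allow
```

The thresholds make the trade-off concrete: lower them and more harmless images like the statue get caught up; raise them and more genuinely violent material streams on before a human ever sees it.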

Hate speech is similarly hard to identify. Works of literature can include unsavoury rants by edgy characters. Should they be taken down, too?

What price freedom?

The larger question is what vision our society has for the internet and how much freedom we’re willing to allow, Dr Rudolph says.

“In principle, in Australia, we have a relatively open and free internet that doesn’t stop free speech and the rights that people like to have. Some other countries, like China, have a different approach – they have a controlled internet. Because of the control, they can probably stop content that they don’t want. Which is not our idea of a freely available internet for everyone.”

(The Christchurch Call affirms the importance of free speech: “All action on this issue must be consistent with principles of a free, open and secure internet, without compromising human rights and fundamental freedoms, including freedom of expression.”)

But the Chinese and the Western models converge in one important respect – they share a capacity to track the habits of individual users. This upends old-fashioned notions of privacy, and leaves us open to manipulation. A notorious example is the now-defunct company Cambridge Analytica, which identified possible Trump supporters on Facebook and targeted them with material designed to sway their vote. Russian intelligence services have been implicated in this strategy (and also in influencing the Brexit referendum).

"There are lots of pathways to exchange messages, and if one pathway is blocked, I can find another road. One of the main ideas of the internet is that there’s no central control.”

What can be done to stop our data being used as a political weapon?

“I would go one step back and ask, do we as a society have some idea how we want to deal with the amount of data that gets collected and is sitting there?” Dr Rudolph says.

“At the moment, control of the data is sitting with big companies. I’m not sure that breaking up these companies would help if we don’t have a clear idea of how we think society should look. We do things on the internet, and we leave traces, and then companies earn money through the data.

“Basically, you’re giving them something for free. You could ask – should they pay you for collecting your data? Should they pay taxes for doing this? How can society benefit?”

About the Author

  • Carsten Rudolph

    Associate Professor of Cyber Security with the Faculty of Information Technology

    Carsten is an expert in the cyber security issues that accompany peer-to-peer trading schemes like those being considered for “smart grid”-based future energy delivery systems. He is also Director of the Oceania Cyber Security Centre, a collaboration of eight Victorian universities with the broad aim of engaging with industry to develop research and training opportunities for dealing with cyber security issues.
