Germany shooting: How live streamed murder videos became propaganda tools and how tech firms are resisting

Attacker used same 'modus operandi' as Christchurch shooter, but new technologies mean far fewer people saw the video

Anthony Cuthbertson
Thursday 10 October 2019 12:51 EDT


Staring down the lens of a camera, a man with a high-pitched, heavily-accented voice introduced himself as “Anon” and began a hate-filled rant. “I think the Holocaust never happened,” he stated, before stammering into an incoherent tirade against feminism. “The root of all these problems is the Jew,” he concluded. “Would you like to be friends?”

It was the start of a 35-minute video that was live streamed on Twitch, an Amazon-owned video sharing platform that is most commonly used to broadcast video games like Call of Duty and Fortnite. By the end of the video, the man had killed two people in the eastern German city of Halle and attempted to enter a synagogue using homemade weapons, before being arrested.

Only five people were watching the Twitch stream at the time but thousands more saw the archived version before it was eventually taken down. Copies of the footage soon spread to other platforms, with an estimated 15,625 people viewing clips through public channels on the messaging app Telegram.

The method of live streaming the shooting through a head-mounted camera was reminiscent of the Christchurch terror attack in New Zealand earlier this year, during which a gunman indiscriminately opened fire on people inside two mosques, killing 51 people.

Whether it was intended to terrify minority communities or inspire copycat attacks, a German security official described it as “definitely the same modus operandi as Christchurch”.

But while the scale of the casualties was far smaller for the latest attack, so too was the reach of the video. In the 24 hours following the Christchurch attack, more than 300,000 versions of the stream were uploaded to Facebook alone.

In the months since, technology firms have been working hard to prevent the spread of distressing content and violent propaganda across social networks and video-sharing sites.

In a series of tweets following the attack, Twitch explained how it “worked with urgency” to remove the content and permanently suspend any accounts found to be sharing it. It took just 30 minutes for the original video to be flagged and taken down, and unlike the Christchurch footage, it did not continue to resurface on sites like YouTube and Twitter, thanks to Twitch sharing a digital fingerprint, or hash, that allows copies of the footage to be immediately identified and removed.

“Once the video was removed, we shared the hash with an industry consortium to help prevent the proliferation of this content,” Twitch said. “We take this extremely seriously and are committed to working with industry peers, law enforcement, and any relevant parties to protect our community.”
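In broad strokes, that hash-matching approach works something like the following sketch, written in Python for illustration only. It is not Twitch's or the consortium's actual code, and real industry systems use perceptual hashes that survive re-encoding and cropping, whereas the plain cryptographic hash below only catches byte-identical copies.

import hashlib

# Hypothetical illustration of hash-based blocking, not any platform's real code.
# A plain SHA-256 only matches exact copies; industry systems use perceptual
# hashes that remain stable when a video is re-encoded, cropped or watermarked.

def fingerprint(video_bytes: bytes) -> str:
    """Return a fingerprint (hash) for a video file."""
    return hashlib.sha256(video_bytes).hexdigest()

# Fingerprints shared across the industry consortium after the original
# footage was removed.
shared_blocklist = {fingerprint(b"<bytes of the removed stream>")}

def should_block(upload: bytes) -> bool:
    """Check a new upload against the shared blocklist before it goes live."""
    return fingerprint(upload) in shared_blocklist

print(should_block(b"<bytes of the removed stream>"))  # True: an exact re-upload is caught

Once one platform shares the fingerprint, any other platform holding the same blocklist can reject re-uploads automatically rather than relying on users to report them.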

What remains an issue is the spread of such content through messaging apps like Telegram and WhatsApp. Sharing the video through a private message means the data is encrypted and therefore cannot be identified by the apps in the same way.

It is only if the content is shared through public channels, such as those found on Telegram, that the content can be flagged and removed.

Megan Squire, a professor at the Centre for Analysis of the Radical Right, found that a manifesto file believed to come from the German attacker was also spreading on Telegram. Her analysis of the live stream's spread on Telegram found that it reached more than 10 public channels before being removed.

The Christchurch shooting was first broadcast on Facebook, before being uploaded and shared on sites like Twitter and YouTube.

At the time, online moderators described attempts to remove the footage as similar to a game of whack-a-mole. While this approach has become more effective through the sharing of video hash identifiers, it remains reactive.

Facebook is already working on the next step: proactively recognising such videos while they are being live streamed. To do this, the technology giant has been training artificial intelligence algorithms to detect footage of shootings as it happens.

Footage captured on police body cameras during training exercises is being used to help train these algorithms, in the hope that future shootings will be instantly flagged or removed by this AI moderator.
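Neither Facebook nor the police forces involved have published the system's design, but in outline it amounts to scoring frames of a live stream with a trained classifier and escalating once confidence crosses a threshold. The Python sketch below is a hypothetical illustration of that idea; the function names and the threshold value are assumptions, and the classifier is only a placeholder.

from typing import Iterable

FLAG_THRESHOLD = 0.9  # assumed confidence above which a stream is escalated

def violence_score(frame: bytes) -> float:
    """Stand-in for a trained model returning the probability that a frame
    depicts a firearms attack; a real system would run a neural network here,
    trained on material such as the body-camera footage described above."""
    return 0.0

def monitor_stream(frames: Iterable[bytes]) -> bool:
    """Scan frames as they arrive; return True as soon as the stream
    should be flagged for human review or automatically cut."""
    for frame in frames:
        if violence_score(frame) >= FLAG_THRESHOLD:
            return True
    return False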

"The technology Facebook is seeking to create could help identify firearms attacks in their early stages and potentially assist police across the world in their response to such incidents," said Neil Basu, the UK's top-ranking counter terrorism police officer.

"Technology that automatically stops live streaming of attacks once identified, would also significantly help prevent the glorification of such acts and the promotion of the toxic ideologies that drive them."

Facebook is working with law enforcement agencies in both the UK and US to develop these strategies, which could eventually find their way onto other platforms like YouTube and Twitch.

"We'll need to continue to iterate on our tactics because we know bad actors will continue to change theirs," Facebook said in a blog post at the time. "But we think these are important steps in improving our detection abilities."
