Facebook Live killings: Why the criticism has been harsh
'They’re not magicians'
Videos of two murders have been uploaded to Facebook and watched by hundreds of thousands of users over the past two weeks.
Mark Zuckerberg has pledged to do more to prevent anything similar happening again, but the problem can't simply be blamed on Facebook.
One of the social network’s biggest responsibilities is moderating users’ posts, but the sheer amount of content that’s uploaded to the site every day makes this an impossible task for people alone.
That’s why it uses artificial intelligence.
Facebook’s mysterious algorithms cut through the noise to filter out what’s acceptable and what’s not, sparing the site’s human moderators from reviewing innocent posts and allowing them to focus on a much more manageable sample of data.
That’s the theory, anyway. Every so often, something unacceptable gets through that AI filter system.
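The triage described above can be sketched in a few lines. This is a generic illustration, not Facebook’s actual system: the thresholds, scoring function and flagged terms are all hypothetical stand-ins for a trained model.

```python
# A minimal, generic sketch of AI-assisted moderation triage.
# A classifier scores each post for policy risk; clear violations are
# removed automatically, borderline posts are queued for human review,
# and the rest are published. All names and thresholds are illustrative.

AUTO_REMOVE_THRESHOLD = 0.95   # hypothetical cut-off for automatic removal
HUMAN_REVIEW_THRESHOLD = 0.60  # hypothetical cut-off for the human queue


def risk_score(post: str) -> float:
    """Stand-in for a trained classifier's estimated probability of violation."""
    flagged_terms = {"violence", "attack"}  # toy word list, not a real model
    words = post.lower().split()
    hits = sum(1 for word in words if word in flagged_terms)
    return min(1.0, hits / max(len(words), 1) * 10)


def triage(post: str) -> str:
    """Route a post to removal, human review, or publication."""
    score = risk_score(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "removed"
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # the "manageable sample" moderators see
    return "published"
```

The point of the middle tier is the one the article makes: the model’s job is not to decide everything, but to shrink the volume humans must look at.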
“The things that you never see are the successes,” Tata Communications’ future technologist, David Eden, told The Independent. “What you don’t see are the things that have been removed. You only see the things Facebook’s AI left, and they tend to be massive, glaring mistakes.”
Mistakes don’t come much bigger than failing to spot the video of Steve Stephens killing Robert Godwin Snr. and the Facebook Live of Wuttisan Wongtalay killing his 11-month-old daughter, before taking his own life off-camera.
Stephens actually posted three clips: one in which he declared his intent to murder (posted at 11:09am PDT), another of the murder (posted at 11:11am PDT) and another Live video confession (posted at 11:22am PDT).
“We have a lot more to do here,” Mark Zuckerberg said last week, in the aftermath of Stephens’ murder.
“We’re reminded of this this week by the tragedy in Cleveland. Our hearts go out to the family and friends of Robert Godwin Snr. We have a lot of work and we will keep doing all we can to prevent tragedies like this from happening.”
Unfortunately, it’s not an easy problem to solve.
Facebook used an algorithm to successfully tackle clickbait, but only after it realised that, while lots of users were clicking on stories with phrases like “you’ll never guess what happened next…” and “this one trick…” in the headline, they didn’t actually Like the articles or spend much time reading them before returning to Facebook.
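The engagement signal described above can be sketched as a simple heuristic. The exact model Facebook used is not public; the intuition from the article is that clickbait attracts many clicks, but readers bounce back quickly and rarely Like the article afterwards. The function and its inputs here are assumptions for illustration.

```python
# Hedged sketch of a clickbait signal: high click volume combined with
# a low Like rate and little reading time suggests the headline
# over-promised. Not Facebook's actual algorithm.

def clickbait_signal(clicks: int, likes: int, avg_read_seconds: float) -> float:
    """Return a score in [0, 1]; higher suggests clickbait."""
    if clicks == 0:
        return 0.0
    like_rate = likes / clicks                    # Likes earned per click
    dwell = min(avg_read_seconds / 60.0, 1.0)     # time on article, capped at 1 min
    return (1.0 - min(like_rate, 1.0)) * (1.0 - dwell)


# Many clicks, almost no Likes, readers bounce within seconds
bait = clickbait_signal(clicks=10_000, likes=50, avg_read_seconds=8.0)

# Similar clicks, but readers Like the article and stay to read it
genuine = clickbait_signal(clicks=10_000, likes=8_000, avg_read_seconds=120.0)
```

A heuristic like this only works because the two behaviours diverge so cleanly, which is exactly why, as the article goes on to note, the same approach fails for violent content that some users actively seek out.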
The murder videos are very different.
Unlike clickbait, which was universally despised, some Facebook users enjoy viewing content the majority of people would consider unacceptable. That makes the site’s task a lot tougher.
The video of Stephens’ shooting was first reported to Facebook by a user at 12:59pm PDT, over an hour and a half after it was uploaded. Facebook disabled Stephens’ account and made the videos private 23 minutes later, at 1:22pm PDT.
Wongtalay’s two videos, however, were up for around 24 hours. The first had been viewed 112,000 times, and the second had been viewed 258,000 times.
Users had already uploaded the clip to YouTube before Facebook’s moderators even knew about it.
“It is a huge, huge problem. A whole world of humans wouldn’t be able to crunch through the same volumes of data as these algorithms,” Stuart Laidlaw, the CEO of Cyberlytic, told The Independent.
“I imagine there’d be a lot more content out there, which Facebook wouldn’t be blocking if they weren’t using their algorithms. They’re not magicians. Can it be completely resolved by AI? Possibly. Right now, I think the answer is no.”
Mr Eden agrees that the problem can’t yet be fixed by AI, and adds that society’s constantly shifting perception of acceptability means it may always be ill-prepared to deal with unacceptable content.
“This is going to be a continuous battle,” he said. “AI is necessarily a little bit behind the curve because we can only train it on things that have happened.”
If it goes the other way and takes a heavy-handed approach, targeting all offensive content and more, Facebook will be criticised for censorship, and users will simply upload their content to a different site.
“You can never please all of the people all of the time,” said Mr Eden. “There is always going to be an element of restriction felt by certain people. It’s going to see-saw backwards and forwards all of the time.
“But there’s definitely a role for AI to play in terms of pre-determination.”