YouTube will use artificial intelligence to decide if videos are safe for kids
The company has increased its use of artificial intelligence during the coronavirus pandemic
YouTube will use artificial intelligence to automatically age-restrict videos that are inappropriate for children.
The video hosting site currently uses human reviewers to flag videos it believes should not be watched by viewers under 18, but will soon use machine learning to make that decision.
“Going forward, we will build on our approach of using machine learning to detect content for review, by developing and adapting our technology to help us automatically apply age-restrictions”, it wrote in a blog post.
“Uploaders can appeal the decision if they believe it was incorrectly applied”, it continued.
YouTube said that it does not expect these changes to affect creators who earn revenue from their videos.
Many videos that would be flagged by the new system already violate its advertiser-friendly guidelines, and as such already run limited or no adverts.
YouTube has increased its use of artificial intelligence to detect harmful content in videos as a result of the coronavirus pandemic.
The company removed more videos in the second quarter of 2020 than it ever had before.
Because the site could not rely on human moderators, it increased its use of automated filters to take down videos that might violate its policies.
The automated removal system is not necessarily as accurate as human review; the company said it “accepted a lower level of accuracy to make sure that we were removing as many pieces of violative content as possible”.
Other social media companies are also relying on artificial intelligence to keep their platforms safe, but are running into issues with users attempting to circumvent their systems.