
Australia to force search engines to crack down on AI-created images of child sexual abuse

‘The use of generative AI has grown so quickly that I think it’s caught the whole world off guard to a certain degree’

Maroosha Muzaffar
Friday 08 September 2023 07:59 EDT


Australia has introduced new regulations that mandate that search engines take effective measures to combat child sexual abuse content generated by artificial intelligence.

Local media reported that the online safety code, announced on Friday, will require search engines such as Google, Bing, DuckDuckGo and Yahoo to take “appropriate steps” to prevent the proliferation of child sexual exploitation material, including “synthetic” images created by artificial intelligence.

“As these tools become more democratised, predators could use this to create synthetic child sexual abuse material according to their predilections, or use anime – the sky is the limit. We need to know the companies are thinking about this and putting in appropriate guard rails,” Australia’s eSafety Commissioner Julie Inman Grant said on Friday.

Last month it was reported that young people are using AI-generated “deepfake” explicit content to harass their peers, contributing to a surge in online abuse driven by artificial intelligence.

“I expect over the next year, we’ll have huge, huge increases in these kinds of reports [related to AI],” Ms Inman Grant said.

eSafety is an independent Australian government agency dedicated to stopping online bullying and image-based abuse.

On Friday, Ms Inman Grant said in a statement: “The use of generative AI has grown so quickly that I think it’s caught the whole world off guard to a certain degree.”

The new regulations follow the eSafety commissioner’s decision in June to postpone the rollout of a previous iteration of the code.

“When the biggest players in the industry announced they would integrate generative AI into their search functions we had a draft code that was clearly no longer fit for purpose and could not deliver the community protections we required and expected,” Ms Inman Grant said.

“We asked the industry to have another go at drafting the code to meet those expectations and I want to commend them for delivering a code that will protect the safety of all Australians who use their products.”

“The tech industry now needs its seatbelt moment,” she added.
