Meta criticised by its own oversight board over AI-generated adult images

Company accused of waiting for media to spot inappropriate posts

Andrew Griffin
Thursday 25 July 2024 07:06 EDT

Meta’s own oversight board has attacked the company’s rules on adult images generated using artificial intelligence.

The company needs to be clearer about its ban on sexually explicit images of real people, and should introduce changes to stop such images spreading across its platforms, the Oversight Board said.

Meta established the Oversight Board to review its most contentious content decisions. The board is funded by Meta but operates independently, and the company can choose whether to accept its recommendations.

The latest ruling came after the board reviewed two pornographic fakes of famous women created using artificial intelligence and posted on Meta’s Facebook and Instagram.

Meta said it would review the board’s recommendations and provide an update on any changes adopted.

In its report, the board identified the two women only as female public figures from India and the United States, citing privacy concerns.

The board found both images violated Meta’s rule barring “derogatory sexualized photoshop,” which the company classifies as a form of bullying and harassment, and said Meta should have removed them promptly.

In the case involving the Indian woman, Meta failed to review a user report of the image within 48 hours, prompting the ticket to be closed automatically with no action taken.

The user appealed, but the company again declined to act, reversing course only after the board took up the case, the report said.

In the American celebrity’s case, Meta’s systems automatically removed the image.

“Restrictions on this content are legitimate,” the board said. “Given the severity of harms, removing the content is the only effective way to protect the people impacted.”

The board recommended Meta update its rule to clarify its scope, saying, for example, that use of the word “photoshop” is “too narrow” and the prohibition should cover a broad range of editing techniques, including generative AI.

The board also criticised Meta for declining to add the Indian woman’s image to a database that enables automatic removals like the one that occurred in the American woman’s case.

According to the report, Meta told the board it relies on media coverage to determine when to add images to the database, a practice the board called “worrying.”

“Many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance,” the board said.

Additional reporting by agencies
