Apple responds to growing alarm over iPhone photo scanning feature
Apple has responded to growing alarm from privacy experts and competitors over its new iPhone photo-scanning feature.
Apple said that the feature had been designed with privacy in mind and that the actual analysis happens on a person’s iPhone rather than on Apple’s systems. The company will only be able to see photos if they are found to be similar enough to images in an existing database of known child sexual abuse material, it said.
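As a rough, purely illustrative sketch of the kind of on-device matching Apple describes (a locally computed perceptual hash compared against a list of known hashes, with a similarity threshold), consider the Swift snippet below. The hash type, the function names and the threshold value are all invented for this example; Apple's actual system uses its own NeuralHash algorithm and additional cryptographic protections that are not reproduced here.

```swift
import Foundation

// Hypothetical 64-bit perceptual hash of an image. Apple's real system
// uses its own "NeuralHash" representation, which is not public API.
typealias PerceptualHash = UInt64

// Number of differing bits between two hashes (Hamming distance).
func hammingDistance(_ a: PerceptualHash, _ b: PerceptualHash) -> Int {
    (a ^ b).nonzeroBitCount
}

// Returns true if the photo's hash is "similar enough" to any entry in the
// on-device database of known CSAM hashes. The threshold of 8 bits is an
// arbitrary illustrative value, not Apple's.
func matchesKnownDatabase(photoHash: PerceptualHash,
                          knownHashes: [PerceptualHash],
                          maxDistance: Int = 8) -> Bool {
    knownHashes.contains { hammingDistance(photoHash, $0) <= maxDistance }
}

// Example: in Apple's description, only photos being uploaded to iCloud
// Photos are checked, and a match produces an encrypted record rather than
// immediate human review (details omitted in this sketch).
let database: [PerceptualHash] = [0x1A2B3C4D5E6F7081]
let uploadedPhotoHash: PerceptualHash = 0x1A2B3C4D5E6F7080
print(matchesKnownDatabase(photoHash: uploadedPhotoHash, knownHashes: database)) // true
```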
Despite those assurances, the new feature has been met with intense criticism from privacy and security campaigners who say that it could weaken fundamental protections on the iPhone and be abused to attack innocent users.
Critics have suggested, for instance, that governments could force Apple to add other kinds of imagery to its database, allowing the tool to be used by despotic regimes to track dissidents. Other fears include the possibility that the system will misfire and flag other kinds of images for review by Apple, stripping the privacy protections from entirely innocent photos.
Apple has responded to those criticisms in a new frequently asked questions document posted to its website under the name “Expanded Protections for Children”.
In the introduction to that document, it acknowledged that while the features have gained support from some organisations, others had “reached out with questions”.
It first looks to address questions about the tool known as “communications safety in Messages”, which analyses photos sent to children for signs of abuse. It notes that Apple never gains access to those communications, that end-to-end encryption is preserved, and that children will be warned before any information is shared with their parents.
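For the Messages feature, the FAQ describes a simple on-device flow: the image is analysed locally, the child is warned first, and a parent is only notified if the child chooses to proceed (and only where such notifications are enabled). The hypothetical sketch below, with invented type and function names, is one way to picture that logic; it is not Apple's implementation.

```swift
// Hypothetical on-device flow for "communications safety in Messages"
// as described in Apple's FAQ. Names and logic are illustrative only.
struct IncomingImage {
    let isFlaggedAsSensitive: Bool  // result of an on-device classifier
}

enum ChildChoice {
    case declinedToView
    case viewedAfterWarning
}

// The message content never leaves the device; only the child's decision,
// made after a warning, determines whether a parent is notified.
func handleIncomingImage(_ image: IncomingImage,
                         parentalNotificationsEnabled: Bool,
                         childChoice: () -> ChildChoice) -> Bool {
    guard image.isFlaggedAsSensitive else { return false }

    // The child is warned first and can decline to view the image.
    let choice = childChoice()

    // A parent is notified only if the child proceeds after the warning
    // and notifications were enabled for that child's account.
    return choice == .viewedAfterWarning && parentalNotificationsEnabled
}
```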
It then goes on to address the more controversial feature, known as “CSAM detection”. In that section, Apple makes a number of commitments designed to quell concern about the new feature.
It says that Apple will not scan all photos, but rather only those that have been uploaded to iCloud Photos, suggesting that any phones with that feature turned off will be exempt. Apple had not previously explicitly said that there would be a way of opting out of that scanning feature.
Apple also commits that the system is designed only to detect child sexual abuse images, apparently in response to concerns that the scope of the feature could be widened in the future.
It says that if it is asked to add other kinds of images to its database it will “refuse any such demands”.
“We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands,” Apple says. “We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it.”
It also denies that the database could be expanded without Apple’s help by “injecting” other kinds of images so that they would be flagged. It says that Apple is not able to add new images to that database, which comes from child safety organisations, and that because the database is the same for everyone it would not be possible to change it for one specific user.
It also says that “there is no automated reporting to law enforcement”, and so any report that was passed to authorities would be seen by Apple first. “In the unlikely event of the system flagging images that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC,” it says, referring to the National Center for Missing & Exploited Children.