Apple has responded to criticism of the new iPhone photo scanning feature

Last week, Apple announced a new iOS 15 and iPadOS 15 feature: devices will automatically check photos for known child abuse imagery. If the system flags such content, the photos will be forwarded to Apple staff, who will review each case and make a final determination.

The announcement drew widespread criticism from both ordinary users and experts. Above all, people worry about possible data leaks and about the matching criteria being expanded later. Apple responded to these concerns in an official FAQ about the new feature.

For example, Matthew Green, a cryptography professor at Johns Hopkins University, fears that Apple could let governments use the system to detect not only illegal content involving children but other material as well, in particular to surveil political opposition. Apple stated that it will not add hashes of content unrelated to child abuse to the system at the request of any government:

Apple will refuse any such demands. The new system is designed exclusively to detect child abuse images in iCloud Photos. We have faced demands before to build and deploy changes that degrade user privacy, and we have steadfastly refused them. We will continue to refuse them in the future.
Apple

Another common concern is that, for the new system to work, reference samples must be downloaded to devices for comparison, which in theory means this data could be decrypted and viewed. Apple counters that only unreadable hashes, not the images themselves, will be present on the device.
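To make the distinction concrete, here is a minimal sketch of what on-device matching against opaque digests could look like. This is not Apple's actual code: Apple's system uses a perceptual hash called NeuralHash, while plain SHA-256 is used below only to keep the example self-contained, and the digest list is invented for illustration.

```python
import hashlib

# Hypothetical on-device list of known digests (opaque bytes, not images).
# The single entry here is just SHA-256(b"test"), included so the demo matches.
KNOWN_DIGESTS = {
    bytes.fromhex(
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
    ),
}

def digest(photo_bytes: bytes) -> bytes:
    """Reduce a photo to a fixed-size digest; the original photo
    cannot be reconstructed from this value."""
    return hashlib.sha256(photo_bytes).digest()

def matches_known_sample(photo_bytes: bytes) -> bool:
    """Check a photo against the digest list without ever storing
    the reference images on the device."""
    return digest(photo_bytes) in KNOWN_DIGESTS

print(matches_known_sample(b"test"))      # True: digest is in the list
print(matches_known_sample(b"vacation"))  # False: no match, nothing revealed
```

The key property is that the hash is one-way: the device holds only digests, so even someone who extracts the list cannot recover or view the reference images.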

Hashes are strings of numbers that represent images for comparison; they cannot be read or converted back into the original images. Thanks to cryptography, Apple can use these hashes to learn only which iCloud accounts store collections of photos matching the known samples, and to view only the potentially illegal photos, without access to any other images.
Apple
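Apple's technical summary also describes a match threshold: an account's matches only become visible to Apple once their number crosses a preset limit, so a single false positive reveals nothing. A toy sketch of that decision rule follows; the threshold value, function names, and data shapes are assumptions for illustration, not Apple's actual parameters.

```python
from collections import Counter

# Hypothetical match threshold; the real production value is not public,
# so 30 here is purely illustrative.
MATCH_THRESHOLD = 30

def reviewable_accounts(match_events: list[str]) -> set[str]:
    """Given a stream of per-account match events, return only the accounts
    whose match count crossed the threshold and may go to human review."""
    counts = Counter(match_events)
    return {account for account, n in counts.items() if n >= MATCH_THRESHOLD}

# Example: only the account with enough matches is surfaced.
events = ["account_a"] * 31 + ["account_b"] * 2
print(reviewable_accounts(events))  # {'account_a'}
```

In the real system this counting is done cryptographically (via threshold secret sharing), so Apple learns nothing about accounts below the limit; the plain counter above only mirrors the decision rule.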

If a photo is confirmed to depict child abuse, the information will be shared with the National Center for Missing and Exploited Children, a nonprofit organization that works closely with U.S. law enforcement.

If an Apple reviewer finds no child abuse in the flagged photos, the images will not be shared with the organization, and the flag on the account will be removed.