Apple Isn’t Protecting Kids Against Sexual Abuse, It’s Protecting Apple

Apple has long positioned itself as a consumer privacy advocate, which is why the company's forthcoming adoption of a complicated new surveillance system ostensibly to stop the spread of child sexual abuse material (CSAM) on its platform has been met with so much rage from privacy advocates.

Less Invasive Than You Think
For all the furore, the systems Apple introduced probably aren't as scary as you've heard. The problem is they also won't do much to end the exploitation of children. They won't even end the use of Apple devices for creating, storing, and distributing CSAM. Nothing Apple could do would achieve those goals, because child exploitation is not a problem that can be solved solely through technology, and especially not consumer technology. No amount of scanning or monitoring by one company, or even a group of companies, will stop children from being hurt in the first place.

For context, Apple announced two tools: one that examines photos sent to and from minors enrolled in a parental control program and another that examines hashes (more on this below) of known CSAM photos as they are uploaded to its iCloud platform. The company says that the tools will be included in iOS and iPadOS 15. Customers can avoid using either tool, but only the parental control option is opt-in.

Apple has made it clear that it doesn't want people to call either of these processes scanning. I'll go along with the contortions required to avoid saying it scans photos, as the terminology is incidental to my opinions on the subject. I've drawn much of my understanding of the tools from the company's FAQs on the subject, which I highly recommend people read. One other note: In its initial unveiling and subsequent statements, Apple uses the term Child Sexual Abuse Material (or CSAM) as opposed to "child pornography." I think it's a useful description of the material that centers the discussion on the problem of abuse, so I'll be using it here as well.

Monitoring Children's Messages
One of the new tools falls within the realm of parental control software and is entirely opt-in. If parents enable the feature for a child under the age of 12, the child will get an alert if they receive or try to send an image that Apple's machine learning engine thinks is sexually explicit. This only applies to content sent and received through Apple's default Messages app.

Apple says that any incoming images deemed explicit will be blurred. The system can be configured so that parents receive an alert if an enrolled child opts to ignore the warning, as well as if the child sends or views a sexually explicit image. For children ages 13 to 17, parents would not receive an alert.

Apple stresses that only parents who enroll their children in the program receive these notifications. The company does not retain or forward any of this information to law enforcement. The company is keen to point out that this analysis is all done on the device, instead of off in the cloud. 

Importantly, Apple says that this system does not break the end-to-end encryption on messages that prevents both Apple and any nosy three-letter agency from monitoring messages sent between Apple devices using the company's Messages app.

As a privacy concern, this feature doesn't seem so worrying. Parental control options are always invasions of privacy, but ones that are entirely under user control. It also directly interfaces with children—the people it seeks to protect—making it more likely to accomplish its goals.

Definitely Not Scanning iCloud Photos
Apple's other tool is more complicated and more troubling. It uses a technique called hashing to compare files to see if they are the same without actually viewing or examining the files themselves. Here's how that works: You feed a file into a hashing algorithm and it spits out a number (called a hash) that is derived from the contents of the file. Each hash is effectively unique to the file it is derived from. Change even the smallest aspect of a file and the hash will be different. The original file cannot be reconstructed from the hash, either; it's a one-way process.

Think of it this way: If you're in a candy factory and you want to make sure that only a specific amount of candy has gone into each box, you can weigh the boxes to check. If the weight matches what you expect, you can be reasonably sure that it contains the right stuff. Hashing is similar but more precise. While two boxes might have the same weight and have different contents, only identical files have the same hash. 
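The one-way, content-derived property described above can be sketched in a few lines of Python. One caveat: Apple's system actually uses a perceptual hashing scheme (NeuralHash) designed to match visually similar images even after minor edits; the cryptographic hash (SHA-256) below is only a stand-in to illustrate the general idea.

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Derive a fixed-size hash from a file's raw contents."""
    return hashlib.sha256(data).hexdigest()

original = b"holiday-photo-pixels"
tweaked = b"holiday-photo-pixels!"  # the same contents with one byte changed

# Identical contents always produce the identical hash...
assert file_hash(original) == file_hash(original)

# ...while even a tiny change yields a completely different hash,
# and the original contents cannot be recovered from the digest.
assert file_hash(original) != file_hash(tweaked)
```

Note that with a cryptographic hash like this one, even cropping or recompressing a photo would change the digest; a perceptual hash trades that strictness for tolerance of such edits.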

Apple says that a database of hashes derived from verified CSAM will be loaded onto its devices. Again, these hashes are just numbers, not the materials the numbers are derived from. The database is built and maintained by the National Center for Missing and Exploited Children (NCMEC). As user photos are sent to the iCloud Photos cloud service, they are hashed and compared with the CSAM hashes.

Apple says that a single match would not set off alarm bells. Rather, users would have to pass an undisclosed threshold of CSAM alerts before Apple takes action. Once the threshold is crossed, Apple says that only the flagged files would be examined and that the evaluation would be done by a human being. (It's worth wondering if these humans are trained and, critically, properly compensated for what is assuredly a taxing job.) Apple's FAQs explain what would happen next: "There is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC. As a result, the system is only designed to report photos that are known CSAM in iCloud Photos."
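The matching-plus-threshold logic Apple describes can be sketched as follows. Everything here is hypothetical: the threshold value, the stand-in "database" of harmless byte strings, and the function names are mine, and the real system uses cryptographic techniques on-device rather than a plain set lookup.

```python
import hashlib

# Hypothetical threshold; Apple has not disclosed the real number.
MATCH_THRESHOLD = 3

# Stand-in database of hashes of known material (harmless bytes here).
known_hashes = {
    hashlib.sha256(item).hexdigest()
    for item in (b"known-1", b"known-2", b"known-3")
}

def should_flag_for_review(uploaded_photos: list[bytes]) -> bool:
    """Return True only once the number of matches crosses the threshold."""
    matches = sum(
        hashlib.sha256(photo).hexdigest() in known_hashes
        for photo in uploaded_photos
    )
    return matches >= MATCH_THRESHOLD

# A single match does not trip the system...
print(should_flag_for_review([b"known-1", b"vacation-photo"]))  # False
# ...but crossing the threshold does.
print(should_flag_for_review([b"known-1", b"known-2", b"known-3"]))  # True
```

In Apple's description, it is only after this threshold is crossed that the flagged files become visible to a human reviewer; below it, nothing is reported.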

A major concern voiced by critics of these tools is that the scope could be expanded. In its FAQs, Apple attempts to quash the idea that this is the start of a slippery slope that leads to greater surveillance: "We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it."

It's important to not conflate or overstate what these tools do. The parental control feature for children's messages can detect sexually explicit images that have never been seen before, but the iCloud Photos CSAM detection cannot. The iCloud CSAM detection can eventually allow Apple to see specific photos, suspend a user's account, and involve the NCMEC, but the parental controls cannot. Both systems can be avoided: Customers can opt not to use the parental controls and not to upload their photos to iCloud Photos.

To What End?
Apple's new plan for iCloud Photos won't make anyone happy, perhaps by design. Child safety advocates won't like its limitations and will surely note how it does nothing to prevent the creation or distribution of new CSAM—it only flags collections of known material, and possibly only large collections at that. Governments and law enforcement keen to get more access to devices also won't be happy since it doesn't really help in that regard. Privacy advocates have already made plain their distaste for any kind of new surveillance implemented by Apple.
