FAQ about Apple's Expanded Protections for Children

Originally published at: FAQ about Apple's Expanded Protections for Children - TidBITS

Apple is piercing the privacy veil on our devices to protect children. The company claims that its efforts to avert the sexual exploitation of children, and to recognize sexual material handled by children under 18 when a parent wants oversight, won’t open a Pandora’s box. But it’s a big change from its previous absolutist stance in favor of user privacy.

1 Like

It would have been a lot better received by the user base and privacy folks like the EFF if the hashing were done only on images uploaded to Apple’s servers and not on the user’s device…that’s the first step toward what the FBI and law enforcement want, which is a real government back door, which we know is an oxymoron…and it’s a very strange step for a privacy-centric company to take. The DB they hash against will, for instance, be required by China to contain other images that the Chinese government considers subversive.

Then there’s the issue that they’re only going to find kiddie porn that has already been discovered and hashed into the DB…it will do nothing to find the snapshot the pervert took of the little girl down the street this morning and sent to his buddies, since that one has never been adjudicated as kiddie porn and hashed into the DB.

I suppose it might be true that they see the handwriting on the wall re privacy and decided to give a little now in hopes of preventing more intrusive government requirements down the road…but hashing only uploaded photos, and doing it on their servers rather than on user devices, would accomplish the same purpose while still allowing them to say “we have no access to the iPhone’s contents”…because now clearly they will have access and will get pressured by the FBI and others to give them more…and China will just pass a law saying they have to add whatever images China wants to the DB, and since Apple obeys local country laws, they’ll do so, and the dissidents will be oppressed even more.

What’s worse…once this gets out, all a user has to do to prevent detection is open the image on their phone and change a single pixel…that generates a completely different hash, since that’s the way hashes work…and thus the image will pass the DB hash test.

Offline encryption is pretty easy anyway…and while this particular special set of circumstances seems good, how long will it be until they’re pressured to modify it again for another ‘critical national need’ like anti-terrorism? Although, to tell you the truth, any halfway smart terrorist is probably already working offline…PGP and the like work just fine, and the age-old book method of selecting words is pretty much unbreakable, as long as the members of whatever group it is use the same edition and printing and don’t pass that info around. It’s just 123-45 for the 45th word on page 123, and a whole series of numbers like that. Essentially a one-time pad, and especially secure if some obscure book is used…even the NSA doesn’t have enough computers to run it through every possible book ever printed.
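For what it’s worth, here’s a toy sketch of that book-code lookup in Python. The file name and fixed page size are hypothetical; a real book code depends on an agreed printed edition and its actual pagination:

```python
# Toy book-code lookup: "page-word" pairs index into a shared text.
# This only illustrates the scheme described above; a real book code
# would use an agreed printed edition rather than a fixed word count.

WORDS_PER_PAGE = 300  # assumed page size; both parties must agree on it

def load_book(path):
    """Split the shared book into per-page word lists."""
    with open(path, encoding="utf-8") as f:
        words = f.read().split()
    return [words[i:i + WORDS_PER_PAGE]
            for i in range(0, len(words), WORDS_PER_PAGE)]

def decode(pages, codes):
    """Decode codes like '123-45' -> the 45th word on page 123 (1-indexed)."""
    out = []
    for code in codes:
        page, word = (int(n) for n in code.split("-"))
        out.append(pages[page - 1][word - 1])
    return " ".join(out)

# Example (assuming "shared_edition.txt" is the agreed text):
# pages = load_book("shared_edition.txt")
# print(decode(pages, ["123-45", "7-12"]))
```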

I don’t like it…

3 Likes

It’s possible Apple is already doing that. If so, why this?

1 Like

I don’t understand that at all – server-side would be far more invasive in terms of privacy. Apple has gone to tremendous lengths to do this client-side specifically to preserve our privacy; this way, even Apple can’t see the images.

Many think this is the first step toward Apple encrypting iCloud backups – which wouldn’t work if Apple had to do the scanning server-side. Encrypting iCloud backups would be awesome for user privacy and would far outweigh this minor “invasion” of CSAM flagging.

I also don’t see how this is a “slippery slope.” The system Apple has built is so complicated and so specific to CSAM data, how could this be exploited by governments or others? For instance, there is only this one database of image fingerprints; it’s not as if the system is set up to have databases for different topics. That means adding other fingerprints would get them mixed in with the CSAM data, confusing the end results. Say Country A demands Apple include some fingerprints for dissident images: the threshold system still applies, so X infringing images have to be found, and there’s no way for the system to know how many of those are CSAM and how many are other types.
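To make that argument concrete, here’s a minimal sketch of the “one flat database” idea. The hashes and logic are made up for illustration and are not Apple’s actual protocol:

```python
# Toy illustration of the single-database argument above: the matcher
# only counts hits against one flat set of fingerprints, so it has no
# notion of *why* a given fingerprint is in the set. Hashes are made up.

csam_hashes = {"a1", "b2", "c3"}
injected_hashes = {"d4"}                   # hypothetical non-CSAM addition
database = csam_hashes | injected_hashes   # everything lands in one set

def match_count(photo_hashes):
    return sum(1 for h in photo_hashes if h in database)

# The count compared against the threshold is the same whether the hits
# came from CSAM entries or injected ones.
print(match_count(["a1", "d4", "zz"]))     # -> 2
```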

Sure, as Gruber says, Apple could change the way the system works – but I think we can trust Apple enough to know they wouldn’t do that. (They’re the same company that refused to build a compromised OS for the FBI even though, technically, they have the ability to do so. This would be the same thing: “Yes, we could compromise our security, but we won’t.”)

(Some argue that Apple has already compromised by using servers in China for Chinese users, as the law there requires, but that’s vastly different from Apple writing software to spy on its users. In that case, it’s China doing any spying, not Apple.)

That’s not how this image fingerprinting works. The hash created is apparently the same regardless of the image size (resolution), color, and even cropping. I have no idea how that works, but it’s apparently not easy to modify the image enough to produce a different fingerprint.
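For illustration, here’s the difference between a cryptographic hash and a toy perceptual “average hash.” This is not NeuralHash, just a rough stand-in (requires Pillow) that shows why a one-pixel edit defeats one but usually not the other:

```python
# Contrast a cryptographic hash (any change flips it completely) with a
# toy perceptual "average hash" (robust to small edits). NeuralHash is
# far more sophisticated; this only illustrates the general idea.
import hashlib
from PIL import Image

def crypto_hash(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def average_hash(path, size=8):
    """Downscale to 8x8 grayscale, then one bit per pixel vs. the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return hex(int(bits, 2))

# A single changed pixel yields a totally different SHA-256, but the
# average hash usually stays identical or differs by only a few bits.
```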

2 Likes

Isn’t the theory that Apple may eventually turn on encrypted backups, in which case server-side scanning couldn’t happen?

That’s the only scenario that makes sense of this to me. Apple is infamous for long-term planning, so it wouldn’t surprise me if this is part of a long process to go fully encrypted.

Backups wouldn’t affect iCloud Photos. Apple would have to disable web-based access to iCloud Photos in order to enable E2EE, as it has done for the face-matching data synced among devices for Photos’ People album, iCloud Keychain, and a variety of other miscellaneous elements.

Anything accessible via iCloud.com through a simple account login lacks E2EE. I’m not sure Apple is at the point where they would want to disable calendar, contacts, notes, and photos at iCloud.com—or build reliable in-browser device-based encryption, which is doable but would need to be locked to Safari on Apple devices, in which case, why not use a native app?

Relatedly, email will never be E2EE until a substantial number of email clients embed and enable automatic person-to-person encryption. People and companies have been working on that for decades.

1 Like

It’s true that the actual matching, and the computation of whether thresholds are met, is done on the client. However, it is only done on photos that are candidates for upload (i.e., being imported into the iCloud Photo Library). If the process were carried out on the server, then Apple would see the scores for EVERY photo uploaded. By having the computation happen on the client, Apple only gets involved when the threshold is met, rather than being involved in the upload of every photo.
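Here’s a rough sketch of that division of labor, with made-up hashes and a made-up threshold. The real system hides the per-photo results cryptographically (private set intersection plus threshold secret sharing), which isn’t modeled here:

```python
# Illustrative only: in a server-side design the server computes a result
# for every photo it receives; in the client-side design the device counts
# matches among photos queued for iCloud upload (photos not queued are
# never scanned) and only a threshold crossing surfaces anything.

KNOWN = {"h1", "h2", "h3"}
THRESHOLD = 3   # illustrative value, not Apple's

def server_side(uploaded_hashes):
    # the server learns a per-photo match result for everything uploaded
    return [(h, h in KNOWN) for h in uploaded_hashes]

def client_side(upload_queue_hashes):
    # the device keeps per-photo results to itself and only reports
    # whether the threshold was reached
    matches = sum(1 for h in upload_queue_hashes if h in KNOWN)
    return matches >= THRESHOLD

queued = ["h1", "zz", "h2"]
print(server_side(queued))   # [('h1', True), ('zz', False), ('h2', True)]
print(client_side(queued))   # False: below threshold, nothing surfaced
```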

Speak for yourself. I don’t trust them to do that. Perhaps here in the US, but I certainly wouldn’t trust them with that kind of power in China.

There’s a world of difference between saying we can’t build something from scratch vs. we can’t change this parameter from 20 to 2 or we can’t add another 1000 hashes to the 20 million we’re already checking against.

2 Likes

They wouldn’t have had to “build something from scratch” – all the FBI wanted was for Apple to turn off the escalating time limits for wrong passcodes. That could probably be easily done by turning off a few flags in the OS and recompiling a special less secure version of the OS.

And modifying this new CSAM checking isn’t just adding some new hashes – the whole system would have to be redesigned to work a completely different way for it to do other kinds of searching. That’s not trivial and would require much engineering and research.

2 Likes

As Rich put it (I’m not sure it’s in the article), Apple is distributing the computational task across all users. But 100% of images slated for upload are scanned, and every uploaded image has a safety voucher attached. A voucher marks that scanning occurred but, without extra steps, reveals nothing about whether a match was made.
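Here’s a toy sketch of the safety-voucher idea as described above. The field names are invented; the real voucher uses layered encryption and threshold secret sharing, none of which is modeled here:

```python
# Every photo slated for upload gets a voucher, so a voucher's presence
# only says "this photo was scanned," not whether it matched.
from dataclasses import dataclass

@dataclass
class SafetyVoucher:
    photo_id: str
    payload: bytes   # opaque to the server below the match threshold

def make_voucher(photo_id: str) -> SafetyVoucher:
    # The real payload encrypts the match result and an image derivative;
    # here it is just a stand-in blob.
    return SafetyVoucher(photo_id, b"<encrypted match result + derivative>")

# Matched or not, every upload carries one:
vouchers = [make_voucher(pid) for pid in ("IMG_0001", "IMG_0002")]
print(vouchers)
```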

2 Likes

Not the way the system is described. Apple could add hashes and tag them for different countries or purposes, fracture keys across matches in different ways, etc. They can do anything at this point and not tell us. (They could have done so in the past, too, though doing it without any disclosure would likely have violated various U.S. laws.)

The entire system is based on using an arbitrary set of image hashes without knowing what is in the pictures. The same kind of hashes could be created from other images.

1 Like

Apple’s argument in that case was as Simon says: the feds wanted them to create FedOS that would have different rules and help the FBI install this replacement OS without deleting the data in Secure Enclave or elsewhere. FedOS would have allowed far easier passcode cracking.

2 Likes

That’s a good point. The only other reason I can think of for Apple adding this, then, is that doing it this way (client-side) is more private (Apple never sees the images unless the system flags them and they have to be verified by a human).

In theory Apple could do it the same way server-side, but that’s a slipperier slope: Apple could change the behavior with no warning to the user, and it would essentially be scanning every image with no verification that it’s done privately (as it would be on-device).

Most companies would be going out of their way to do it server-side – it’s far easier – but Apple is going to extraordinary lengths to do this client-side to preserve our privacy. Yet the outcry is that Apple is suddenly evil for doing this.

1 Like

Yes, but wouldn’t those all be flagged as CSAM? Is the system really built so they can flag different kinds of content? From my (limited) reading it seemed that there is no setting for the type of content. It seemed very binary, either CSAM or not. Mixing in political or other “bad” content hashes would trigger the same way (once it exceeds the threshold) and it would take extra processing (or a human) to figure out what is what.

Speaking of the human side, what good would adding other trigger content do for whatever organization wanted it? This info isn’t automatically passed on to the authorities – it goes to a human reviewer first. So would an Apple employee really be in charge of reporting a dissident image to China, for instance? That sounds like a scenario Apple would go to great lengths to avoid.

Creating a special OS build for the Feds alone and putting it onto an existing iOS device remotely without tampering with any user data on it in the process is easy?

OMG “completely redesigned”. Now that’s just patently false.

1 Like

If you read all the white papers Apple published, the system is only described at a high level. The three researchers whose papers Apple quoted only describe it at a high level, too. We have no idea how the implementation works.

The CSAM hashes are a database that Apple creates from data provided by NCMEC (I’m not sure whether NCMEC runs the algorithm Apple gives them against the images or what). That database can certainly have additional fields. One field could be “country to apply to.” Another could be “kind of image: CSAM, human-rights activist faces, etc.”
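As a sketch of that point, nothing technical stops a hash database from carrying extra metadata. The field names here are hypothetical, not anything Apple has described:

```python
# Hypothetical record layout showing how fingerprints could carry
# additional fields; this is the commenter's scenario, not Apple's
# documented schema.
from dataclasses import dataclass

@dataclass
class HashRecord:
    fingerprint: str
    apply_in_country: str   # e.g. "all", or a specific jurisdiction
    category: str           # e.g. "CSAM", or some other class of image

db = [
    HashRecord("a1f3", "all", "CSAM"),
    HashRecord("9bc0", "XX", "other"),   # hypothetical injected entry
]
```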

This info isn’t automatically passed on to the authorities – it goes to a human reviewer first.

That’s a structure Apple describes; it is not, say, a legal process, nor is there any transparency.

What you’re describing is a large array of assumptions, sorry! These include:

  • The high-level description is good enough to understand what’s happening under the hood. (Only one researcher even bothers to mention that Apple didn’t show him, at least, the code.)
  • The high-level description is accurate and complete.
  • Apple wouldn’t do X under any circumstances. That’s a belief, but I would point to Apple’s behavior regarding iCloud servers in China and to its undisclosed prior use of outside contractors to review Siri recordings we didn’t know were being sent to anyone to listen to, much less to outside contractors.
  • The process described is immutable. Apple controls the software and cloud servers. There is no transparency in the system design. No one will likely know if it changes, even assuming everything today is 100% accurately described and 100% works as we expect.

would an Apple employee really be in charge of reporting a dissident image to China, for instance? That sounds like a scenario Apple would go to great lengths to avoid.

Most likely, China would provide a series of hashes to Apple and would require Apple forward matching images and account information to Chinese censors to review.

Until China passes a law requiring companies to include image hashes that the Chinese government wants…and Apple will obey the local country laws.

1 Like

Interesting title.

The Hungarian government also just introduced a law with the exact same phrasing: for the expanded protection of children. Among other things, it outlaws the depiction of homosexuality in TV shows, as that is supposedly not in keeping with Hungarian tradition or whatever, and in their view just as dangerous for children to see on screen.

I wonder if they could even show an interview with Tim Cook on Hungarian TV given this law - and now they will ask Apple to scan for photos in breach of that law too. Perhaps Tucker Carlson can find out during his gig in Budapest this week…

The road to hell is still paved with good intentions - and Apple is killing the main reason to be a loyal customer: privacy.

PS: Years ago, I had a support case with a Windows user whose antivirus app - which used hashes to find viruses, similar to the way this new system is supposed to work - stopped them from working on a GIS data set that apparently had the same hash as a virus but definitely wasn’t one. That will happen in this case too.

1 Like

Nonsense…if the same images are scanned and hashed on the server, the privacy implications for those images are the same…and if it’s done on the server, then our devices haven’t been compromised with a capability that can easily be abused by requiring changes to the DB.

I really don’t understand why you’re in favor of server-side scanning. That to me is 100x worse. Apple can change the scanning algorithms at any time and none of us would know. They could add on scans for anything, even facial recognition looking for known criminals or whatever. Client-side is a lot more limited and that’s better from a privacy perspective.

I’m not saying they can’t change the on-device method, but that’s at least a lot more complicated and requires an OS update, etc. That seems a lot safer from a user perspective.

2 Likes