FAQ about Apple's Expanded Protections for Children

This is speculation. While I agree we haven’t seen Apple’s code and can only read the high-level white papers describing their system, from what I’ve read I don’t see a way for this type of information to be included and/or passed on to the server.

For example, Apple’s white paper shows that the way an image is reported as a positive hit is by successful decryption. In other words, the image’s status (positive or negative as a match against the database) is used in the encryption key (so to speak), so when the server tries to decrypt it, only positive hits succeed. There’s no flag or tag on the file that says whether it is CSAM or something else. It’s just binary – successful or not.

Because of the second layer of encryption, nothing more can be deduced about the image until the threshold quantity is exceeded. So until then, there is no real information. Even if additional information about the image type – “CSAM”, “terrorist”, “activist”, “jaywalker” – were included, it would have to go inside the voucher, within the second layer of encryption, which can only be seen once the threshold is exceeded.
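Since the paragraphs above describe this mechanism only abstractly, here is a deliberately simplified toy sketch in Python of the “a match is just a decryption that succeeds” idea. To be clear, this is not Apple’s actual protocol (which uses NeuralHash, blinded hashes, and private set intersection); every name, key size, and marker byte below is invented purely for illustration.

```python
# Toy model of "positive hit == successful decryption" (NOT Apple's real protocol).
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Simple SHA-256 counter-mode keystream for the demo (not secure practice)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR encryption/decryption; applying it twice with the same key round-trips."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Pretend database of fingerprints of known images (stand-in for the CSAM hash list).
database = {hashlib.sha256(b"known-image-%d" % i).digest() for i in range(3)}

MARKER = b"VOUCHER:"  # lets the server tell a successful decryption from garbage

def make_voucher(image_bytes: bytes, inner_payload: bytes) -> bytes:
    # Client side: the outer key is derived from the image's own fingerprint.
    key = hashlib.sha256(image_bytes).digest()
    return xor_crypt(MARKER + inner_payload, key)

def server_try_decrypt(voucher: bytes):
    # Server side: it can only derive the right key for fingerprints it already holds.
    for known in database:
        plain = xor_crypt(voucher, known)
        if plain.startswith(MARKER):   # decryption "succeeded" => positive hit
            return plain
    return None                        # no match => the server learns nothing

inner_secret = secrets.token_bytes(16)  # stand-in for the second encryption layer
print(server_try_decrypt(make_voucher(b"known-image-1", inner_secret)) is not None)  # True
print(server_try_decrypt(make_voucher(b"holiday-photo", inner_secret)) is not None)  # False
```

The only thing the server observes is success or failure; the `inner_secret` stand-in for the second layer stays opaque either way, matching the description above.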

That to me seems like a very poor system for finding terrorists or dissidents or others, as everything is still lumped together into one “exceeded the threshold” group of images.

Then you get into the issue of accuracy. Presumably Apple included the threshold system to prevent false positives and only flag egregious users (i.e. an account with a lot of CSAM). We don’t know what that threshold level is (Apple isn’t saying), but clearly it must be set at some specific level in order for Apple to calculate their “1 in a trillion” odds of false positives.

Now if the system is modified to start watching for additional content (terrorists and jaywalkers), and those results are mixed into the group that exceeded the threshold, wouldn’t that interfere with Apple’s “1 in a trillion” calculation?

For example, say 10 is the threshold: someone has to have at least 11 matching CSAM images to be reported. Apple has effectively decided that reporting an account with only 9 matches would be too risky (some of those matches might be false positives, and the odds of a mistaken report wouldn’t meet the “1 in a trillion” standard), so they set the threshold higher.

But if the threshold is exceeded with 3 terrorist hits, 3 activist hits, 2 jaywalker hits, and 3 CSAM hits, none of those categories has enough hits to ensure an accurate report (you need at least 11 of each to get the 1-in-a-trillion odds). Yes, a human could look at the images and decide, but you’ve basically ruined the whole threshold approach by mixing multiple types of content reporting into one system.
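To put the arithmetic of that example in one place (the threshold and the per-category counts are the hypothetical numbers from above, not anything Apple has published):

```python
# Hypothetical numbers from the example above -- not Apple's real threshold.
THRESHOLD = 10

matches = {"CSAM": 3, "terrorist": 3, "activist": 3, "jaywalker": 2}

pooled = sum(matches.values())                      # 11 matches in total
print(pooled > THRESHOLD)                           # True: the account gets flagged...
print({kind: n > THRESHOLD for kind, n in matches.items()})
# {'CSAM': False, 'terrorist': False, 'activist': False, 'jaywalker': False}
# ...but no single category reaches the count the "1 in a trillion" estimate assumes.
```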

Now some countries wouldn’t care about threshold, of course, and would be willing to crack down on an individual just based on suspicion, but this seems like a weird system for Apple to create to let it be abused like that. From the description, it seems Apple only wants this to apply to the most egregious users, and the way that it’s designed that would apply to whatever content was being searched for (CSAM or something else).

If Apple just wanted to include a back door, there would be much easier ways of doing it. This system seems to me to be deliberately designed to be extremely limited – and any of the doomsday scenarios panicky people describe seem really unlikely. There are probably 100 other places we’re trusting Apple to do what they say that are more vulnerable than this supposed “backdoor.”

2 Likes

Correct. We don’t know anything except they’re building a system that they entirely control, offer no transparency into, and will not allow outside audits of.

The threshold could be set to 1. The sharding of the encryption happens on device. This can be modified. It likely will be.

The best way for totalitarian governments to implement surveillance is on the back of systems that people all agree are necessary.

The most dangerous phrase in the world is “trust me.”

4 Likes

Thank you, Glenn.

*The most dangerous phrase in the world is “trust me.”*

I agree. I have to admit that with Apple’s move I have lost trust in this company. This makes me sad.

In my view Apple crosses several red lines, foremost with respect to its reputation as a company that values the privacy of its users above anything else, regardless of whether that reputation is justified or not.

Specifically, Apple crosses red lines concerning the functionality of its apps:

  • iCloud backup should only do backups
  • Messages should only do encrypted messaging
  • Siri should only answer requests

Detecting child pornography or any other type of misbehaviour has no place in an operating system.

N. E. Fuchs

3 Likes

The CSAM image fingerprinting sounds like it is based on the PhotoDNA algorithms, which detect specific known images (despite transformations of those images) and are not image “recognition” techniques per se (they don’t recognise a known face in an arbitrary image). There are some extra techniques described there to prevent it being used to “test” images locally, and to not reveal information until a threshold number of matching images has been reached.

Quite clever stuff.
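For anyone curious what this family of fingerprinting techniques looks like in practice, here is a toy “difference hash” in Python. It is neither PhotoDNA nor Apple’s NeuralHash (both are far more sophisticated), and the filenames in the usage comment are placeholders; it only illustrates why such a fingerprint survives recompression or small edits of a known image yet is useless for recognising a face or an arbitrary document.

```python
# Toy perceptual fingerprint (dHash) -- illustrative only, not PhotoDNA/NeuralHash.
# Requires the third-party Pillow library (pip install Pillow).
from PIL import Image

def dhash(image: Image.Image, size: int = 8) -> int:
    # Shrink to a (size+1) x size grayscale grid, then record whether each pixel
    # is brighter than its right-hand neighbour. Recompression, resizing, or minor
    # crops barely change the result; a genuinely different image changes most bits.
    small = image.convert("L").resize((size + 1, size))
    pixels = list(small.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two fingerprints.
    return bin(a ^ b).count("1")

# Usage sketch (filenames are placeholders): fingerprints within a few bits of a
# database entry are treated as "the same known image".
# h1 = dhash(Image.open("original.jpg"))
# h2 = dhash(Image.open("recompressed_copy.jpg"))
# print(hamming(h1, h2) <= 5)
```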

I don’t see how it could be repurposed for recognising “persons of interest” in any picture, or for detecting arbitrary documents or information - the fingerprint technique would be completely unsuitable, and another would have to be implemented (“that’s not what we’ve built”).

It clearly could be used to detect other commonly shared images, such as those of Tiananmen Square in 1989.

I think the question is absolutely about what future service development is facilitated, if Apple are already checking uploaded iCloud pictures against the database (which I understand they are already required to do, by US law).

One could also ask why they should only be required to obey US laws whilst doing business across the world. Indeed Apple already do follow local laws where required to do so, if they want to continue doing business. I think Tim Cook himself said of the FBI request that if that’s what the US Congress wants, then they can pass laws to require it.

2 Likes

Tim Cook:

"What changes do you then make in your own behavior? What do you do less of? What do you not do anymore? What are you not as curious about anymore if you know that each time you’re on the web, looking at different things, exploring different things, you’re going to wind up constricting yourself more and more and more and more? That kind of world is not a world that any of us should aspire to.

And so I think most people, when they think of it like that … start thinking quickly about, ‘Well, what am I searching for? I look for this and that. I don’t really want people to know I’m looking at this and that, because I’m just curious about what it is’ or whatever. So it’s this change of behavior that happens that is one of the things that I deeply worry about, and I think that everyone should worry about it."

I think that applies here. It will change behavior. These devices follow us, monitoring everything we do (even our cars have these abilities). Even worrying about such things opens us up to, “Why are you worried about such things? Are you hiding something?”

And some people worry about a national ID card?

2 Likes

I’m not particularly…just not in favor of scanning on my device unless I can choose to turn it off…and disabling iCloud Photos requires losing a major function of iCloud. In addition…scanning on device is introducing a back door, although it might be just a back window if one wants to debate semantics…and opening that access for on-device scanning of any sort opens the door to the government demanding more access…and they can no longer say ‘we can’t do that’.

In addition…once all the court cases are decided…I believe that the courts will hold that anything uploaded to any cloud has no expectation of privacy…I don’t expect any today anyway, so anything I upload that I want to remain private falls under Steve Gibson’s TNO (trust no one) and PIE (pre-internet encryption) rule.

My preferred solution is that iCloud and iPhone provide universal encryption for which Apple doesn’t hold the key and for the courts to hold that providing your face or fingerprint is protected just like providing the password already is.

1 Like

And therein lie two potential problems.

If you believe that the database will have more than a small fraction of the world’s kiddie porn images…there’s some swampland in NJ I can sell you where Jimmy Hoffa and JFK are living like hermits. Yes…there are some identified and traded images…but most of them are home grown.

Second…since this is some sort of hash and image recognition (DNA) combination…the hash portion will be completely different if you change the image…and the recognition potential can and will be degraded by altering the image. The paper I read talked about ‘minor cropping’ still being recognized…but that leaves major cropping, expanding the canvas with Photoshop content-aware fill, changing other image parameters…eventually the DNA match will fail and it will still be kiddie porn, just a so-far not well known image. Which gets us back to the ‘this will only affect a small percentage of kiddie porn images on the internet’ point.
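To make that distinction concrete, here’s a standard-library-only toy (the brightness values are invented and this isn’t any real fingerprinting scheme): an exact cryptographic hash breaks on any edit at all, a coarse perceptual-style hash tolerates small edits, and a heavy rework defeats both, which is the degradation being described above.

```python
# Invented data; illustrates exact-hash fragility vs. perceptual-hash tolerance.
import hashlib

def exact_hash(pixels):
    return hashlib.sha256(bytes(pixels)).hexdigest()

def coarse_hash(pixels, buckets=4):
    # Quantise brightness heavily: survives small tweaks, not a major rework.
    return tuple(p * buckets // 256 for p in pixels)

original = [200, 40, 180, 60, 220, 30, 190, 50]   # pretend image brightness values
tweaked  = [201, 41, 180, 60, 220, 30, 190, 50]   # minor edit (recompression-style)
reworked = [90, 90, 90, 90, 90, 90, 90, 90]       # heavy edit / effectively new image

print(exact_hash(original) == exact_hash(tweaked))     # False: exact hashes are useless here
print(coarse_hash(original) == coarse_hash(tweaked))   # True:  perceptual-style hash still matches
print(coarse_hash(original) == coarse_hash(reworked))  # False: big enough changes defeat it too
```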

It isn’t kiddie porn…but just for example I just did a search on a well-known porn site with ‘wife’ as the search term…and got 200,000 results…the round number tells me that the buffer just quit, not that it ran out of images matching the search.

This is only a minor band-aid solution anyway…so why is Apple sacrificing a long-standing company policy that user privacy is their number one concern?

We also really don’t know why they did it…I’m sure the idea and decision were hashed out over hundreds of hours of internal discussion unless Tim Cook made a unilateral decision…and I also know that unless Apple monitors places like TidBITS Talk, Twitter, Mac-oriented sites and the like and decides to change the decision based on what they see…and I highly doubt they do or will…there isn’t a darned thing we can do about it, so it probably isn’t worth all the effort we have collectively put into these threads.

1 Like

I think I will still be using iCloud Photos.

  • iCloud Photos is a paid feature, and Apple prefers not to have CSAM on servers that they run, which I don’t think is unfair.
  • iCloud Photos has AFAIK never been stored with end-to-end encryption: i.e. the servers could already be rifled through by whomever anyway. So the idea that a government could compel whatever: couldn’t they already do that before? As an aside, to me, one of the perils of iOS/tablet computing is that it is difficult to find, integrate, and use services that are “full-encryption” e.g. as sync.com or SpiderOak say that they are. I think it was discussed recently how Google is going “full-encryption” for backups.

My problem is that Apple is putting code on my devices that uses resources but cannot benefit me directly. Recently they started the Find My network, but that is optional, and can benefit me if my iPhone is offline. The recent COVID-19 Exposure Notifications function was optional. In-App Ratings & Reviews is optional.

But CSAM-scanning is, to me, like software DRM, including all the means that Apple takes to avoid jailbreaking: in that it really benefits someone else more than me, directly.

What I have not seen mentioned is how much disk space the hashes from NCMEC images take up, and how much power/battery NeuralHash and Private Set Intersection use. Are these hashes added with system updates, or sent to devices in the background? And if the number of NCMEC images increases, does the hash database ever decrease in size? As in: it’s 2030, so all iOS devices have 2GB less storage because of ever-increasing image hashes, and your battery life is now 1 hour shorter.
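For what it’s worth, a back-of-envelope guess (both numbers below are invented, since neither the database size nor the per-entry size has been published) would put the on-device hash database in the tens of megabytes rather than gigabytes:

```python
# Back-of-envelope only -- both figures are guesses, not numbers from Apple or NCMEC.
assumed_entries    = 1_000_000   # order-of-magnitude guess at the number of known hashes
assumed_entry_size = 64          # bytes per blinded hash entry (guess)

print(assumed_entries * assumed_entry_size / 1_000_000, "MB")   # 64.0 MB
```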

For some time now, Apple has been criticized for not having an anti-child-porn policy when Facebook, etc. have had very active ones. It hasn’t seemed to matter that those policies don’t appear to have been very effective over the long term at all. They still regularly get mostly very positive press for continuing to try:

Apple’s PR department seems to have done an exceedingly crappy job handling the press with this initiative. They need to respond better, and fast; Apple has been eating dog poop over this since it was announced. If I were Tim Cook I’d be kicking some PR butt.

1 Like

The description Apple provides of the technology makes it clear that it’s designed to find close matches while also dealing with issues of orientation, obscuring, etc., that are designed to fool pixel-by-pixel matching.

This is definitely distinct from “find all cows” or “find all pictures of Enemy of the State XYZ.” However, the difference isn’t extreme. We know they already have a “find all cows” ML algorithm for Photos that runs locally; those matches are not performed in the cloud. So coarse “category” matching already exists in all of Apple’s operating systems via Photos, and a tight matching algorithm will now be added for CSAM.

1 Like

First, I’d make the polite suggestion that the term “kiddie porn” is inappropriate given the nature of the material. I think the term emerged because people didn’t want to deal with the reality of what CSAM is—it’s not pornography, it’s violent criminal activity.

Second, the NCMEC database is 100% about preventing re-victimization, yes. While that’s an important goal, it’s not a way to remove children from actively dangerous situations, because NCMEC-validated and fingerprinted images are ones that have gone through a rigorous screening process.

Third, the database is certainly only some subset of CSAM circulating. It’s not prospective in finding new material (hence not protecting actively harmed children).

There’s a reasonable amount of attention that should be focused on keeping this material from circulating. But the abrogation of the civil rights and privacy of everyone using these devices seems like a pretty high price, imposed without any real discussion. Cloud-stored images seem like a different category (and peer-to-peer networks a different one entirely) because they pass over public and private systems and have a lower expectation of privacy than stuff stored on devices in our possession. Since the point of this exercise is to prevent circulation of images, checking device uploads seems like the wrong focus, too, particularly if Apple already scans iCloud, which it may be doing (and likely is doing?).

3 Likes

I agree that the redistribution of known images is bad…and I’m all for Apple as well as Google, FB, LEO and the DA going after them…just don’t go after them on my device…last time I checked a search required a warrant, and while Apple isn’t the cops…if they turn users over to the group and the group turns them in to the cops, then Apple is acting like law enforcement and the search requirements apply. I’m just fine with them scanning, hashing, reviewing, and turning over images on their computers to the cops…but they are implementing what is essentially a back door in the OS…right now this specific good purpose is all it will be used for…but we know that China will require Apple to use the same PhotoDNA techniques on the images that China wants to know about (or the Chinese will roll their own PhotoDNA and tell Apple to use it…whatever)…and we know that the FBI, et al. will try to use the back door to get Apple to take care of some other good purposes…white supremacy, black supremacy, the Moors, terrorists, what have you…and Apple can no longer say “No, we won’t put a back door in our system because the user’s right to privacy is paramount.”

That’s my biggest gripe…that Apple…who says trust us, we believe in user privacy…is deliberately introducing the capability to conduct warrantless searches on our iPhones. Yes…today it is only photos that get uploaded to Apple’s iCloud servers…but next month they can easily scan all of our photos sans warrant and narc us out to the cops just because Tim Cook decides that’s the thing to do.

My second biggest gripe is that…just like the security theater at the airport and the meaningless python roundup in the Everglades that happened recently and caught 230 or so snakes…out of the many tens of thousands that are there…this is essentially a meaningless feel-good gesture that won’t accomplish anything…because the vast majority of that material on the net or on iPhones is self-generated and has never been legally ruled illegal and PhotoDNA’ed to put it into the database. The material is only illegal if the child is under 18…and for vast quantities of material out there…it is simply not feasible to determine illegality or not…having worked on a case of this back in my working days, the forensic gynecologist told me that determining age based on a photo of a teen female was at best +/- 2 years and in most cases it was 3.5 years based on physical appearance. So…this might catch a bad person or even a dozen or more…but that’s an insignificant impact on the problem and mostly lets them say we are doing something about it. Maybe spending R&D money to eliminate zero-day flaws that allow things like Pegasus (or whatever the correct name is if that’s not it)…is more productive for device security, although maybe not as good.

Solving this issue…unless we go to a police state…is an impossible task…which isn’t to say that LEO shouldn’t try…but Apple ain’t LEO and meaningless PR stunts aren’t going to help.

They’ve taken a significant step away from their ‘privacy is in our DNA’ and user-privacy-first history.

2 Likes

A couple of Twitter threads that I found interesting and thought this group might as well:

(post deleted by author)

3 Likes

Me too…sorry if my last reply caused you to think I was uncivil…but in my view I was just telling it like it is. No worries though…I’m actually surprised that Adam hasn’t squashed this thread already, even though I thought the debates were civil and reasonable, although I recognized them as pretty much pointless as Apple doesn’t really care what we users think IMO.

I’ve been offline all weekend while spending time with visiting friends and family, and I’m disappointed that this topic devolved as it did.

If you think you’re posting something I’m going to take exception to, don’t post it. Saves everyone trouble. I’ve removed all the back-and-forth sniping, and I will continue to cut hard on anything that doesn’t directly relate to discussion of what Apple’s doing here. If it continues to get out of hand, I’ll shut the thread down entirely.

As the Discourse FAQ says:

Be Agreeable, Even When You Disagree

You may wish to respond to something by disagreeing with it. That’s fine. But remember to criticize ideas, not people. Please avoid:

  • Name-calling
  • Ad hominem attacks
  • Responding to a post’s tone instead of its actual content
  • Knee-jerk contradiction

Instead, provide reasoned counter-arguments that improve the conversation.

3 Likes

Howard Oakley has posted on this subject:

1 Like