FAQ about Apple's Expanded Protections for Children

Because the database is on every phone. If it changes, anybody bothering to look for it will see that it has changed.

Even if Apple doesn’t put the file in a place that is easy to download, anyone who jailbreaks their phone (and therefore has access to the real file system) will be able to find and download it, which is well within the capability of any security researcher.

Additionally, Apple’s just-published threat-review document (see @ace’s post) says that the database is shipped as part of the OS itself (not a separate download), and that Apple will publish hashes of it so you can confirm that the one you have hasn’t been tampered with or replaced.
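If Apple does publish such a digest, verifying the copy pulled off a jailbroken phone is trivial. Here’s a minimal sketch, assuming a SHA-256 digest published as hex; the file path and the published value are placeholders, since Apple hasn’t specified the exact location or format:

```swift
import Foundation
import CryptoKit

// Minimal sketch: compare the SHA-256 of the on-device hash database
// against a digest Apple publishes. The path and the published value
// are placeholders; Apple hasn't specified the exact format.
func databaseMatchesPublishedDigest(atPath path: String, publishedHex: String) -> Bool {
    guard let data = FileManager.default.contents(atPath: path) else { return false }
    let hex = SHA256.hash(data: data)
        .map { String(format: "%02x", $0) }
        .joined()
    return hex == publishedHex.lowercased()
}

// Hypothetical usage with made-up values:
// databaseMatchesPublishedDigest(atPath: "/path/to/blinded-hash-db",
//                                publishedHex: "…published digest…")
```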


That file will not allow you to see what’s being checked against. The hashes have deliberately been created that way.

That file will also have to change routinely so as to keep up with an expanding database, unless you assume that for some reason no new child porn will ever be added from here on out.

With time we’ll see this file change, and we won’t be able to tell if it’s because NCMEC added to its database or because Apple was forced to add material to check against while under a gag order not to disclose it. I’d like to say this is harder in the States and much more likely in China, but with all the crap we saw after 9/11, especially with FISA & FISC, I have my doubts here as well.


Yes, the blinded hash database will be updated. Apple says it’s part of the signed OS image, so it will probably be updated whenever you get a new OS update. But this also means nobody else can replace it. The hash table on your phone (which has to match the one on the server) will be the official one Apple publishes.
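As a rough sketch of what an outside observer could learn from that published hash over time, here’s a toy change log with invented build numbers and digests: it can show that the database changed between OS builds, but because the entries are blinded, nothing in the file itself shows why it changed.

```swift
import Foundation

// Toy change log a researcher might keep: one database digest per OS build.
// The builds and digests would come from real devices; nothing here is real data.
struct DatabaseRecord {
    let osBuild: String
    let digestHex: String
}

// Flag every update where the digest differs from the previous build's.
// This shows *that* the database changed; because the entries are blinded
// hashes, nothing in the file shows *why* it changed.
func digestChanges(in history: [DatabaseRecord]) -> [(from: DatabaseRecord, to: DatabaseRecord)] {
    var result: [(from: DatabaseRecord, to: DatabaseRecord)] = []
    for (previous, current) in zip(history, history.dropFirst())
        where previous.digestHex != current.digestHex {
        result.append((from: previous, to: current))
    }
    return result
}
```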

If you believe Apple can’t be trusted, then this entire discussion is moot.

But this is an incredible amount of software development for a company that isn’t serious about privacy. They could have done what Facebook and Google do (just scan everything on the cloud servers and not tell anyone), if that was their intent.


I’ll repeat this yet again. It’s not about trusting Apple. It’s about Apple putting itself in a position where they are exposed to coercion.

I have no doubts what Apple pulled off here was technically good. I wouldn’t be surprised to learn it’s better than what MS or Google have implemented for their cloud searches.

But that’s beside the point. This is not a problem that can be circumvented by an elegant engineering solution. It’s a fallacy often observed in tech these days that everything has an engineering solution (insert joke here about when you have a hammer…). The threats Apple will now come under, despite excellent engineering, are a beautiful example of that.


There’s lots more detail in the new Security Threat Model Review document that answers this and similar concerns about coercion. It’s essential reading for anyone participating in this thread—I’ll have coverage up shortly.


Unfortunately, reading through that document does not indicate to me that Apple has yet properly understood that they cannot engineer themselves out of this.

I’ve pointed this out before, but in principle, the damage is already done. They have demonstrated that they have created this tech and that they can deploy and use it. That is now public knowledge. They can swear they have good intentions all day long (and I’m inclined to actually believe that), but that won’t shield them from the pressure they can now be exposed to by authorities in countries where they can’t just walk away (e.g., certainly China and the US).


If Apple changes the database because China or the US is forcing them, and they publish a matching number, how will the security researchers know, again?

You’re fixating on a third party doing this against Apple’s will. I’m worried about state actors forcing Apple to do this. In the latter case, I don’t see how people would notice easily.

It’s not that Apple can’t be trusted; it’s that state-level actors have a whole different level of power, and you don’t create tools for them to use easily, ringed around with promises like “we will refuse” that are not possible to keep.


My main questions haven’t been answered: Why is Apple doing this? Why are they risking losing my trust?

Go read my new article on this and the first comment. I think @xdev hits it on the head. Apple didn’t intend to risk anything—they just completely failed to anticipate how it would be received.

Now that we know the details explained in that article, let’s discuss the state actor concerns over there.


Don’t get carried away by Apple’s sophisticated implementation.

My question remains: Why does Apple do CSAM detection at all? CSAM is awful, but there are many similar awful things in our societies.

Perhaps because Apple sees it as the morally right thing to do? It could be nothing more than that. There might also be legal reasons or political pressure, but Craig Federighi was pretty clear in saying that Apple did it now because they finally figured out how to do it in a way they felt addressed the awful problem of CSAM while maintaining user privacy.


You can repeat this as many times as you want, but this doesn’t make it true. Apple is not “installing a back door in the iPhone”. That is what the FBI wanted: a way to circumvent the phone’s encryption and access all the data (in 2016 they specifically wanted the passcode limit feature disabled so they could brute-force the passcode).
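For context on why that limit mattered, here’s a back-of-the-envelope sketch. Apple’s security documentation describes the passcode key derivation as calibrated to take roughly 80 ms per attempt; the rest of the numbers are rough illustration, not a precise model:

```swift
import Foundation

// Back-of-the-envelope arithmetic behind the 2016 request. Without the retry
// limit and escalating delays, the ~80 ms key-derivation time per attempt is
// the only brake on guessing every possible passcode.
func worstCaseBruteForceSeconds(digits: Int, secondsPerTry: Double = 0.08) -> Double {
    pow(10.0, Double(digits)) * secondsPerTry
}

let fourDigit = worstCaseBruteForceSeconds(digits: 4)  // ≈ 800 s, about 13 minutes
let sixDigit = worstCaseBruteForceSeconds(digits: 6)   // ≈ 80,000 s, roughly a day
print(fourDigit, sixDigit)
```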

The CSAM detection is nothing like that. Are there concerns that a state actor could co-opt CSAM detection to identify people holding other, non-CSAM images? Yes. Is this a reversal of Apple’s previous position that it will not allow phones to be unencrypted by anyone other than the owner? No. They are scanning images that are being uploaded to their servers to ensure they don’t contain content that they do not want to host. And as @Shamino has extensively documented, there are many safeguards in place to ensure that Apple cannot view the images unless they are extremely likely to contain illegal content. (As a side note, anyone who is using iCloud Photo Library is already storing all their images on Apple’s servers in a form Apple can decrypt, which is much more of a privacy risk.)

Making broad generalisations that any sort of notification of illegal content being uploaded is the same as allowing the FBI to bypass a phone’s encryption only undermines the genuine concerns that exist around this topic.


Give it up dude…I agree with you, but nobody here is going to change their mind no matter what we debate or how convincing the other side thinks their position is. In addition…Apple is going to do what Apple is going to do…and even if the file is signed by Apple and part of an iOS update so it can’t be maliciously replaced…when China passes a law that requires Apple to include hashes of the images the Chinese government cares about, Apple will follow local law and put the Chinese database in as well as the NCMEC database. Apple isn’t going to leave the Chinese market…and they really can’t move their manufacturing elsewhere either…so they will submit to coercion. You’re right…it isn’t about not trusting Apple…they’re still better and more secure than Android…but implementing this tech can and will have all sorts of unknown consequences. I’m sure they’ve debated it at the executive level to death…and they’ve made their decision…and users are just stuck with it, or they can switch to a platform with a worse security model.


@neil1, I was initially going to argue it’s not quite that bad, plenty of lively back and forth still going on, and there’s always a chance Apple could come around. But recent events have proved me naive.

There is a level of vitriol being spewed at those who dare openly challenge the party line that I find unprecedented. As others have put it much better, “we’re so sorry that you’re so stupid that you don’t understand how right we are.” I’m tired of this. I don’t understand why liking Apple products has to be equated with unequivocal approval of everything they do, with the constant urge to apologize for every mistake the company makes, or with pointing out supposed silver linings to divert from BS some people are rightly upset about.

I’m tired of this “PEZ head” attitude. And I won’t submit to it or be bullied by its proponents either. Plenty of people with actual credentials (take Howard Oakley or the EFF) have come down on the critical end of this, so why should I take any **** from anonymous posters with, as far as I can tell, zero credentials, just because they don’t like Apple facing scrutiny? Nobody would behave this way on campus where I work, so why should I tolerate it online? Well, I won’t. I’m done with this “debate”.


In addition to what Adam said, Apple has been criticized for years for not doing anything to protect children from predators and abuse. NCMEC’s CSAM database is acknowledged to be one of the best, if not the best, sources for identifying child pornography, and it also has a good reputation for its security.

If Apple can help prevent the distribution, and maybe the development, of harmful images, it’s AOK with me.


I think most of us here value the issues being fully stretched out for examination, which cannot happen without Apple’s line being thoroughly questioned. The issues are complex; trade-offs, workarounds, and absolute positions all need elaboration on a nexus of topics as important as this. Folks will end up making their own call on this; it’s not so much a matter of persuading anyone, or berating anyone for that matter, which shouldn’t happen. The last few days here have been very informative, and while some of you in particular have been in the fray, be assured that others reading have valued your contributions.


I understand, and feel similarly. I’m really tired of being preached at and patronized. Henceforth, I will generally limit my comments to links to folks like Mr. Oakley or the EFF.


First, I’ve seen supposed quotes from Apple’s announcement that say Apple is targeting “inappropriate child sexual abuse material”. That raises the question of why ANY CSAM could be considered “appropriate”!

Second, in the recent “Kibbles & Bytes” (#1164) from Small Dog Electronics, Don Meyer points out the major danger of this move by Apple. I quote his third paragraph:

“This scanning raises questions in my mind regarding Apple’s commitment to privacy. Yes, we can all agree that sexual exploitation of children is evil and that there are some that should be locked up. But my concern is first Apple scans for this and, next, is it scanning for political images in the search for “terrorists”, scanning for undocumented workers or other uses. The pressure from government to do this scanning will expand, it is inevitable. Perhaps there are other ways to catch and prosecute those that perpetrate this heinous crime.”

If you’ve seen “supposed quotes” using the term “inappropriate,” you should direct your criticism at anyone using that word, because Apple did not use it, at least in its published materials, as is easily checked with a search. I don’t have transcripts of all their executive interviews, so it’s possible some person slipped and used the word.

This concern has been addressed already, multiple times. CSAM Detection can only match photos being uploaded to iCloud Photos against known images, it takes effect only once there are at least 30 matches, and a report goes out only after an Apple human reviewer, who is looking for images of CSAM and nothing else, confirms the matches. That could conceivably be subverted at a state-actor level to identify known photos of something like Tank Man in China, but it cannot be used for surveillance in any effective way.
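For anyone who prefers it spelled out, here’s a deliberately simplified toy model of those gates, not Apple’s implementation: in the real system the per-photo results are hidden inside encrypted safety vouchers that Apple can only open once the threshold is crossed.

```swift
// Deliberately simplified model of the gates described above, not Apple's
// implementation. In the real system the per-photo match results are hidden
// inside encrypted safety vouchers; Apple can open them only after the
// account crosses the threshold, and a human reviewer then checks for CSAM only.
struct UploadedPhoto {
    let matchesKnownDatabaseEntry: Bool  // outcome of the on-device hash comparison
}

let reviewThreshold = 30  // Apple's stated threshold

// Nothing is surfaced for human review unless at least `reviewThreshold`
// of an account's iCloud Photos uploads match the known-image database.
func shouldEscalateToHumanReview(_ uploads: [UploadedPhoto]) -> Bool {
    uploads.filter(\.matchesKnownDatabaseEntry).count >= reviewThreshold
}
```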

The Communication Safety in Messages feature does scan images for sexual content, but it does so entirely on the iPhone and alerts the user and potentially the parents, and does not report anything to Apple or any other party.


It may indeed, but that’s true of all technology and all change in the world in general. There’s no stopping the train.
