New CSAM Detection Details Emerge Following Craig Federighi Interview

I think that there is an enormous difference between this and Apple refusing to create a special version of iOS that would bypass security protections and enable access to the complete iCloud contents of two dead terrorists’ iPhones. Apple is setting up an opt-in service, not a special iOS, that will screen photos sent to the iPhones of living and vulnerable kids under the age of 13 for hash codes matching those in a respected database of known pornography. They are two totally different scenarios and issues, and parents have the opportunity to implement it for their children if they think it’s a good idea.

This is just about CSAM and giving parents the opportunity to communicate with their children about potentially dangerous situations; nothing else. Apple swears that no government agency is involved, and that the only matches are against pornographic images of children that reside in a highly respected database. This might be a backdoor, but China or 1984 it ain’t.

I’m not sure why all the concern is about China, but given it keeps coming up…

Except that this system adds absolutely nothing to the Chinese government’s monitoring capabilities. Why would they pass a list to Apple to eventually add to iOS so that certain images can be flagged and Apple can notify them… when they can just directly scan everyone’s iCloud Photo Library on the servers they already have access to?

3 Likes

If they don’t know about these dissidents, how are they going to somehow get lots of known pictures of them into the NCMEC and multiple other child safety databases where the pictures are highly vetted to be known CSAM? Remember, NeuralHash only says “The hash of this iPhone photo that’s being uploaded to iCloud Photos matches a hash in Apple’s intersection-created hash database.” It does NOT do machine-learning type facial recognition.
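To make that concrete, here’s a toy Python sketch of the only question the on-device matching answers. SHA-256 stands in for the perceptual NeuralHash, and a plain set stands in for Apple’s blinded, intersection-derived database; the real system is considerably more involved.

```python
# Toy sketch: exact-match lookup, not content analysis.
# SHA-256 is a stand-in; the real NeuralHash is a perceptual hash
# designed to survive resizing and recompression.
import hashlib

def neural_hash(image_bytes: bytes) -> bytes:
    return hashlib.sha256(image_bytes).digest()

# Hypothetical stand-in for the intersection-derived hash database
# shipped (in blinded form) inside iOS.
known_hashes = {neural_hash(b"known-image-1"), neural_hash(b"known-image-2")}

def matches_known_database(photo_bytes: bytes) -> bool:
    # The only question answered: "is this photo's hash in the database?"
    # No face recognition, no "who is in this photo", no ML labeling.
    return neural_hash(photo_bytes) in known_hashes

print(matches_known_database(b"known-image-1"))      # True
print(matches_known_database(b"my-vacation-photo"))  # False
```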

That’s absolutely true and not the goal of the system. It is designed to identify quantities of known CSAM that’s uploaded to iCloud Photos.

That would fail the third-party audit of how Apple creates its database from the intersection of multiple child safety databases. So to assume it’s still possible is to go back to the state actor having complete power over Apple, at which point we rely on independent security researchers to identify abuses.

As with @neil1’s suggestion above, that will fail the third-party audit of how the Apple database is created from the intersection, unless Apple is going to say that “Oh, this is from the Chinese NCMEC and it’s legit—look, here’s their root hash.” But any independent organization doing the auditing would throw a flag on that.
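As a toy illustration of why, here’s a Python sketch of an intersection-plus-root-hash construction. The organization names and tiny hash sets are made up, and Apple’s actual database construction and verification are more sophisticated, but it shows the kind of thing an auditor could check.

```python
# Toy sketch of an auditable intersection database. Real child-safety
# hash sets are enormous, and the entries are image hashes, not strings.
import hashlib

ncmec_hashes = {b"hash-A", b"hash-B", b"hash-C"}
other_org_hashes = {b"hash-B", b"hash-C", b"hash-D"}

# Only entries vouched for by BOTH organizations survive, so one
# government (or one compromised organization) can't add entries alone.
database = ncmec_hashes & other_org_hashes

# A deterministic root hash over the sorted contents lets auditors
# confirm that every user received the same database, and that it
# really is the intersection of the published source sets.
root_hash = hashlib.sha256(b"".join(sorted(database))).hexdigest()
print(root_hash)
```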

And again, what’s the surveillance win in exact matching of known images? What this technology tells you is if an iPhone user has and is uploading existing known images to iCloud Photos, that’s all.

OK, that’s fair, though pretty specific, and it would seem easily changed by the political movements. It’s in essence a code, and whenever codes are known to be broken, the people using them create new ones.

It would if it were a radically new capability, but it doesn’t seem any different than the iPhone in general. If China can compel Apple to do anything it wants, why aren’t our iPhones already reporting everything we say and do back to Beijing? China may be able to compel Apple to do some things within the country, but clearly hasn’t been able to make the company do whatever it wishes.

3 Likes

I’m not sure that’s relevant, since the technology will be present in all versions of iOS. Presumably Apple has some way of enabling it for US users only, but in the extreme subversion scenario that is being suggested, the company could theoretically be compelled to enable it for another country too.

Personally, I’m unconvinced by the extreme subversion scenario. Aside from the fact that it seems like a really hard secret to keep, I think the US government would take a very dim view of US companies being compelled to participate in activities sponsored by another state actor. And I have to assume that Apple has sufficient ties to the US government that such communication would take place.

2 Likes

This is true, and I should have made it clear that I’ll bet Apple has been talking to the equivalents of the National Center for Missing and Exploited Children in some of its target countries, probably in the EU as well. But the big kerfuffle in the press might have thrown a monkey wrench into the works. And there might be countries that don’t have an equivalent database.

It isn’t the dissidents’ images they’re interested in… it’s a list of people who have pictures that dissidents are likely to have… Tank Man at Tiananmen Square, for instance, and other images they don’t like. I did not suggest that they would get images of the dissidents or that they needed to get them into the NCMEC’s database. China can simply pass a law that requires Apple to scan against their database in addition to the NCMEC one… and also that the threshold be changed to 1 from whatever it is in the current plan… and that Apple report to Chinese authorities. They can also require that Apple scan every photo on the phone against the Chinese database as well… even if iCloud upload is disabled… and report the results.

Nation states can mandate just about anything… and since the technology to do this already exists, Apple would have no choice but to comply.

I’m going to try…again…well, for the 4th or 5th time actually…to not reply any more to this or related threads…as I said nobody is going to change minds and Apple is gonna do what they are gonna do.

3 Likes

Apple doesn’t have to, and I’ll bet won’t, implement this program in any country that won’t meet its standards. Like I said, I’ll bet there are countries that don’t yet have an equivalent database.

I don’t think putting the weight of effort on the dissidents rather than the authoritarian government is really the argument you want to make.

Also, and as politely as I can make it, part of the issue with this discussion is that one side has very little expertise on the situation in China and thus makes what I would call frankly silly comments about it. The idea that there aren’t common images that China might hunt for made me shake my head in horror, and I’m only marginal in my understanding of current China (one book on historical China and various conference papers coming closer to the present). The vast majority of the voices so far have been from the tech side, and the naïveté has been really remarkable so far (e.g., Doug Miller’s post two below this one).

Because China has a set of priorities, budgetary limits, and resource allocation fights, like every other bureaucracy, and that limits them in what they can do. Dedicating Apple’s resources and expertise in a way that makes up for those limitations is not a good idea. I said it earlier, but I’ll repeat, don’t hand the genocidal sociopaths a new weapon, please.

Again, the question is not why would they, but why wouldn’t they? Ready-made surveillance, created by a company that has already caved to them, gives useful information about precisely the images they’re already looking for. Why not?

4 Likes

Why hasn’t China already passed such a law requiring raw uploads of the images on everyone’s iPhones? Wouldn’t that be a lot easier than relying on this technology?

It seems that China would gain a lot more, and have a lot more control over the result, if they did something like that. For example, if Apple couldn’t “refuse” a special neural hash “request” from China, why couldn’t they write a “bug” in the code that doesn’t report hash matches at all for China’s special hash? How would Chinese authorities know?

But they can already do this as all files in iCloud Photo Library for Chinese users are on servers the Chinese government controls.

Stepping back, I really don’t understand all this hand-wringing about China from the tech community. Has anyone who actually has experience of being targeted by China, or at least a foreign policy expert with actual knowledge of how the CCP carries out its surveillance, weighed in on this issue? All I’ve seen (from both sides, including myself) is people who have expertise in the tech world making guesses about what may or may not happen.

I’ve not seen anything indicating that people actually affected by Chinese surveillance see Apple’s new system as a concern. It would be useful to understand abuses that this system could be used for, not hypotheticals that only seem realistic to people (like me!) who don’t really understand the domain they would operate in.

2 Likes

With a little quick research, it appears that David is indeed far more knowledgeable in this area than I am, and probably more so than anyone else here. So I’ll defer to his opinions in that regard.

But I do think that @jzw’s question is still a good one. @silbey, what’s your thinking on this?

2 Likes

Howard Oakley has posted another insightful article relevant to this topic:

2 Likes

First off, this assumes that dissidents are merely trading the same photos with each other and that they aren’t constantly generating new content (which China would have to intercept and add to the database before this algorithm could detect it).

Second, it assumes a massive misunderstanding about how Apple’s system works. The threshold isn’t some arbitrary number that phones look at. It’s an integral part of the cryptographic system. You can’t change it without breaking the entire database - which means it can’t be done without everybody else in the world finding out.
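To illustrate, here’s a toy Shamir secret sharing sketch in Python. Apple’s actual threshold scheme is far more elaborate, but the point carries over: the threshold is a parameter of how the shares are generated, so “change the threshold to 1” means regenerating and redistributing everything, not flipping a setting.

```python
# Toy Shamir secret sharing over a prime field, showing how a match
# threshold can be baked into the cryptography itself.
import random

P = 2**127 - 1  # a Mersenne prime, big enough for a demo secret

def make_shares(secret: int, threshold: int, count: int):
    """Split secret so any `threshold` shares recover it; fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, count + 1)]

def recover(shares) -> int:
    """Lagrange interpolation at x = 0 reconstructs the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# One share accompanies each positive match; below the threshold,
# the decryption key (and hence every voucher) stays locked.
key = random.randrange(P)
shares = make_shares(key, threshold=30, count=100)
print(recover(shares[:30]) == key)  # True: 30 matches unlock the key
print(recover(shares[:29]) == key)  # False (overwhelmingly): 29 don't
```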

You also assume that the matching can be done without Apple’s servers. Again, read the technical description. The algorithms prevent any code (whether on the phone or the server) from learning anything about the content of the security vouchers without Apple’s secret decryption keys.

Finally, you assume that this is somehow going to be easier or more useful than using China’s existing surveillance software (which is constantly scanning live video from thousands of cameras nationwide) to scan through a database of not-encrypted photos.

Additionally, the algorithm requires the server-side software to have Apple’s secret encryption key. I doubt they will ever deploy this system to a country where someone else runs the server, since it would require divulging that key, undermining the entire system.

If they do deploy something in China, it will be a different system. But it makes no sense (for Apple or China) to do so. It is far more likely that China will order Apple to not enable E2E encryption for the Chinese iCloud databases, and they will do their own scanning with their own existing surveillance software.

But the tech for this has already existed for quite a long time. The system Apple developed is far, far more complicated than what would be necessary to satisfy a court order, especially since iCloud photo libraries are not end-to-end encrypted.

Law enforcement already orders Apple to turn over the contents of suspects’ iCloud backups and photo libraries, and Apple complies (when there is a warrant, of course). If they chose to compel Apple to scan everybody’s photos for any arbitrary image provided by the FBI, the technology to do so has long been present. So we are already trusting Apple’s policy in this area.

2 Likes

Two things I’ve not seen much discussion of but which do concern me about these upcoming CSAM detection systems:

  1. What stops a government from requiring Apple to scan all photos on an iPhone, regardless of iCloud Photo Library? There seems to be no technical barrier; everything is in place, and only policy keeps it from happening (see the sketch after this list). As clever as their convoluted encryption system is, it doesn’t seem to protect against this kind of mission creep/abuse.

  2. I’m surprised there isn’t more concern about the iMessage feature for children. The feature itself seems fine to me, but the principle of scanning messages before/after E2E encryption is dangerous. I can see the CIA and China being much more interested in this than in the photo library stuff.
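A minimal sketch of the policy gate from point 1, in Python. Every name here is hypothetical, not Apple’s actual code; the point is simply that the restriction is one `if` statement of policy, not a technical barrier.

```python
# Hypothetical sketch: the iCloud Photos restriction as a policy check.
from dataclasses import dataclass

@dataclass
class Photo:
    data: bytes
    queued_for_icloud: bool  # is this photo on the iCloud Photos upload path?

def create_safety_voucher(photo: Photo) -> None:
    """Stand-in for the hash/match/voucher step attached to an upload."""
    ...

def process_photo(photo: Photo) -> None:
    # Stated policy: only photos being uploaded to iCloud Photos are
    # matched. Nothing cryptographic enforces this condition.
    if photo.queued_for_icloud:
        create_safety_voucher(photo)
```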

It seems that with all the focus on the list of hashes, these issues are being lost, and I think they have the potential to significantly undermine privacy.

3 Likes

These two concerns were always present, long before Apple’s announcement.

Any app that has access to your photos can scan your on-device photos for anything it wants. This could be for benign reasons (e.g., Photos creating “Moments” for you to review) or hostile reasons. Your only safeguard is Apple’s App Store policy.

Apple could always (going all the way back to the very first iPhone) design and deploy new software, secretly pushing it to your phone. There is no technological mechanism that can prevent this, short of hacking the phone to permanently disable all capability for software updates. You have always had to trust Apple’s intentions here.

Ditto for all other smartphones. Google, Samsung, Motorola, LG, and wireless carriers all push updates to their various phones and devices. When you use them, you have to trust them not to push objectionable software to your phone.

Ditto for your computer. Unless you disconnect from the Internet and refuse all updates, Apple (or Microsoft, or Dell or Google) will push updates for their respective software. Although you can (usually) configure your computer to not automatically install updates, if you don’t trust the company with that ability, then you probably shouldn’t be trusting the disable switch either.

Even if you go open source, you can’t eliminate that problem. Sure, the Linux community audits major packages all the time, but how many normal users actually know anything about the updates that they get from the standard Debian (or Red Hat or Ubuntu or whatever) distribution server? Again, you are trusting your distribution to do what they say they are doing.

In other words, unless you take all your electronics completely off-grid, you are always implicitly trusting someone with matters of privacy and security.

For most of us the answer to “how can I ensure that my system will remain secure and private against direct action from governments” is “forget it, kid”. But it may still be useful to ask the question “who is most likely to give in and who is most likely to resist”, knowing full well that everybody will fold if the pressure gets high enough.

1 Like

To add to @Shamino’s good summary, your phone is already scanning all photos and has been for years. That’s how you can search for pictures of cats or oak trees—it has to scan all photos and analyze their content.
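As a toy illustration of why search implies scanning: the whole library has to be analyzed up front so that a query becomes a cheap lookup. This Python sketch is hypothetical; `classify_labels` stands in for the on-device ML models Photos actually uses.

```python
# Toy sketch: search requires having already scanned every photo.

def classify_labels(photo_bytes: bytes) -> set[str]:
    # Stand-in for on-device ML; a silly rule just for the demo.
    return {"cat"} if b"cat" in photo_bytes else {"other"}

def build_index(library: dict[str, bytes]) -> dict[str, set[str]]:
    # Every photo in the library gets analyzed; this is the "scan".
    return {name: classify_labels(data) for name, data in library.items()}

def search(index: dict[str, set[str]], term: str) -> list[str]:
    # Search itself never touches the photos, just the index.
    return [name for name, labels in index.items() if term in labels]

library = {"IMG_0001.jpg": b"...cat...", "IMG_0002.jpg": b"...oak tree..."}
print(search(build_index(library), "cat"))  # ['IMG_0001.jpg']
```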

1 Like

Why would you explain it to her again? They are scanning content on my device. That is a huge step down the slippery slope, no matter what auditability they claim.

I don’t see how auditability changes anything regarding the slippery slope. All they have to do is change their mind. Sure, we might be able to find out that they changed their mind and expanded what they are scanning and what they are scanning for, but if the government demands that they scan encrypted content on a billion people’s phones, Apple cannot say “impossible” anymore. Apple just built an encryption back door into its devices. And it can change its capabilities at any time.

By the way, I don’t know what auditable means, but being able to see whether the phone claims it is doing something is not auditing in my mind. And I didn’t see anything else in their report that seemed like the ability for me to audit this system.

3 Likes

Surely the difference between providing on-device search vs government reporting isn’t meaningless to you.

The question was related to the government.

There is a reason people don’t mind and in fact prefer on-device scanning for search, but not for government reporting.

1 Like

I think the problem Apple is facing is that the most effective solutions to the problem of online child porn distributed by messaging services would have to involve scanning Messages. And the company that singlehandedly, repeatedly, and over decades dropped nuclear bombs on the digital media industry over privacy did not handle its news release effectively, and is now dealing with a s—- storm, especially from the companies that are losing vast amounts of revenue because of Apple’s recently upgraded anti-tracking initiatives.

If Apple had created a route that did not involve on-device scanning but would be much less effective, like Facebook’s, they would have been criticized for not doing the same thing as everyone else sooner. They will not be scanning everyone’s iOS devices; they are only matching images against the NCMEC database. And Apple has a unique and enviable history of not backing down when faced with government requests to break encryption.

Because new information has arisen, and I need to update my thinking to accommodate it. If an opinion is based on incomplete information, or an incorrect understanding of the information, it should be reevaluated with new details in mind.

Certainly. The point I was making is that “the iPhone is scanning your photos” has been true for a long time. A more precisely worded question might be “What prevents a government from requiring Apple to report matches for all photos on an iPhone, regardless of iCloud Photos?” And that then gets into the questions of what database of photos those matches would be against, and how Apple would create that database, and so on. It’s quite different.

We put a lot of effort into using words precisely in TidBITS, and I feel that in the context of CSAM detection “scanning” is an ambiguous term that doesn’t lead to informed understanding. In the context of Communication Safety in Messages, it’s more accurate.

1 Like