New CSAM Detection Details Emerge Following Craig Federighi Interview

Originally published at: New CSAM Detection Details Emerge Following Craig Federighi Interview - TidBITS

In an interview with Joanna Stern of the Wall Street Journal, Apple software chief Craig Federighi said the company would be applying “multiple levels of auditability” to its controversial child sexual abuse material (CSAM) detection system. What are they? Another new Apple document explains.

1 Like

This is what I am finding the most interesting aspect of this whole thing.

When folks on TidBITS-Talk – some of the most educated and Apple-knowledgeable people on the planet – are confused and can’t get a proper understanding of what’s going on, there’s something seriously wrong with Apple’s messaging.

Part of this is the complexity of the topic, the technical aspects of the CSAM-detection system Apple has developed, and the release of the Messages and iCloud Photos scanning systems at the same time (even though they’re completely different) – and, most of all, I think, a bit of “can’t see the forest for the trees” on the part of Apple executives.

Here’s my theory on how this all went down within Apple. At some point in the past, probably several years ago, the idea of searching for CSAM was broached at Apple. The traditional way that cloud services do this – by scanning customers’ pictures with machine learning on the server side – was immediately dismissed as too invasive. Server scanning requires that all data be unencrypted, the code can be changed at any time without the user’s knowledge, and other countries could demand that the servers search for all sorts of stuff. Apple has been heavily promoting privacy, so that approach was a no-go. Apple set about coming up with an alternative.

I bet this took considerable time and effort (I’m guessing several years), but Apple found an ingenious (though complex) way of searching for CSAM without actually examining customers’ photos, and doing it on-device in such a way that neither the person’s phone nor Apple even knows whether a match has been found. Mathematically, Apple can’t know there’s a match unless there are at least 30 of them.
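
To make that “at least 30” claim a bit more concrete, here’s a toy sketch of the underlying idea – Shamir-style threshold secret sharing, where a key is split into shares and one share is released per matching photo. To be clear, this is my own illustration under simplifying assumptions (the prime field, the share generation, and all the names are mine), not Apple’s actual safety-voucher construction; only the threshold of 30 comes from Apple’s description.

```swift
// Toy threshold secret sharing (Shamir-style). Illustrative only – not
// Apple's real voucher scheme. All parameters below are assumptions.
let p = 2_147_483_647   // prime modulus for the toy field (2^31 - 1)
let threshold = 30      // shares needed before the secret can be recovered

// Modular exponentiation; used for modular inverses via Fermat's little theorem.
func powMod(_ base: Int, _ exp: Int, _ mod: Int) -> Int {
    var result = 1, b = base % mod, e = exp
    while e > 0 {
        if e & 1 == 1 { result = result * b % mod }
        b = b * b % mod
        e >>= 1
    }
    return result
}

func invMod(_ a: Int, _ mod: Int) -> Int { powMod(a, mod - 2, mod) }

// Split `secret` into `count` shares using a random polynomial of degree
// threshold - 1. Any `threshold` shares reconstruct it; fewer reveal nothing.
func makeShares(secret: Int, count: Int) -> [(x: Int, y: Int)] {
    var coeffs = [secret]
    for _ in 1..<threshold { coeffs.append(Int.random(in: 1..<p)) }
    return (1...count).map { x in
        var y = 0, xPow = 1
        for c in coeffs {
            y = (y + c * xPow % p) % p
            xPow = xPow * x % p
        }
        return (x: x, y: y)
    }
}

// Lagrange interpolation at x = 0 recovers the polynomial's constant term.
func reconstruct(_ shares: [(x: Int, y: Int)]) -> Int {
    var secret = 0
    for (i, si) in shares.enumerated() {
        var num = 1, den = 1
        for (j, sj) in shares.enumerated() where j != i {
            num = num * ((p - sj.x) % p) % p          // (0 - x_j) mod p
            den = den * ((si.x - sj.x + p) % p) % p   // (x_i - x_j) mod p
        }
        secret = (secret + si.y * num % p * invMod(den, p) % p) % p
    }
    return secret
}

// One share per matching photo: 29 shares yield a meaningless value,
// 30 give back the key.
let key = Int.random(in: 1..<p)
let shares = makeShares(secret: key, count: 40)
print(reconstruct(Array(shares.prefix(threshold))) == key)       // true
print(reconstruct(Array(shares.prefix(threshold - 1))) == key)   // almost certainly false
```

The point of the math is that 29 shares look like random numbers; only with the 30th does reconstruction become possible.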

From Apple’s perspective, they went way out of their way to create a privacy-centric method of doing this. But they had been working on it for years and had deep technical knowledge of the alternatives. Apple likes to be up-front, so they announced this in a rather technical manner, getting into way too many details for the average person to understand, yet not enough for truly technical people. Apple fully expected everyone to understand this as privacy-first – a great solution to the problems of server-side scanning.

Instead, everyone saw this as privacy-invasive. All people heard was “Apple is scanning the photos on your phone looking for porn,” and they got upset. Others yelled that this was a “backdoor” that countries and hackers could exploit.

While I do think Apple should have had the foresight to expect this kind of reaction, I do see how they could have missed it. They literally created this system to avoid all the problems people are complaining about – and they just assumed that everyone would understand that.

Now they have a real PR mess on their hands, and I’m not sure it will be easy to get out of. From the discussion on TidBITS-Talk, it is clear that many have made a snap judgment on this and won’t back down or change their minds. Those who see this system as a “backdoor” can’t be convinced otherwise.

I thought this was outrageous when I first heard about it – but I waited to read the technical details before I made my judgment. The more I read, the more convinced I am that this system is safe and harmless. It may not be 100% perfect, but it’s pretty close – and a darn sight better than what every other company is doing, which is far more invasive and prone to abuse.

But with so many making up their mind on what they think they know, rather than what’s actually happening, I have no idea how Apple can rectify the situation. Those who feel “betrayed” by Apple will continue to feel that – even though Apple specifically went to extraordinary lengths to create a system that protects their privacy.

This is a great lesson in PR and quite possibly will be studied in schools years from now. Just amazing.

6 Likes

I don’t think that a lot of us are confused at all. I’ve understood what Apple is doing since the beginning. The specific details have added to that knowledge but haven’t changed the general understanding.

What also hasn’t changed is the reaction, and here I’d agree that Apple misunderstood how this was going to play to a general audience. They should have had Cook talking from the start.

5 Likes

I think you’re spot on. It may be a lesson in being too close to the subject and not getting outside opinion (even from family, say). When I first explained the system to my mother briefly, she saw it as a slippery slope instantly, and she’s not conspiracy-minded. Now I have to explain it to her again.

And in fact, when I look back at Glenn and Rich’s original article, I think it’s as accurate as it could have been with what was known at the time. Some people were confused by Apple seemingly conflating multiple unrelated technologies, and others raised legitimate concerns that couldn’t be easily or even adequately answered with what we knew then.

That difficulty was what troubled me at a low level—I felt that there likely were answers, but because I couldn’t provide them to my satisfaction, it seemed that either my knowledge was incomplete (which turned out to be true) or that Apple was indeed hiding something (which didn’t make sense in the context of a system that could have gone entirely under the radar). The other possibilities, that Apple hadn’t developed a complete threat model or had made unwarranted assumptions in the code, also seemed unlikely given the obvious amount of thought and complexity that was initially revealed.

And yes, this should have been Tim Cook from the very beginning. When your CEO is banging the privacy drum for years, you don’t hand off a privacy-intensive announcement to Web pages, FAQs, and lower-level people.

3 Likes

Bringing the state actor discussion over from another thread:

So let’s think this through.

  1. We now know that Apple’s CSAM hash database is the intersection of multiple CSAM hash databases: NCMEC’s and at least one from a non-US organization. So a state actor would have to subvert not one, but two or more organizations like NCMEC. Not unthinkable, but significantly harder and more likely to result in exposure.

  2. Let’s say Apple’s CSAM hash database is subverted. The only thing that could effectively be put in there would be CSAM itself. Remember, Apple’s human reviewers see only the “visual derivative” of the matched CSAM (and only after there are 30 matches). So if Apple’s reviewers see an image of a dissident or whatever, they’ll chalk it up to a false positive and send it to Apple engineering for analysis, which might lead to the exposure of the subversion of the CSAM hash database, since the engineers wouldn’t rest until they figured out how NeuralHash failed (which it didn’t). I also have trouble seeing what kinds of images would be useful to gather this way, since they have to be known in advance to be matched.

  3. So now let’s say that the state actor subverts Apple’s CSAM hash database with more CSAM, with the idea of planting it on the devices of dissidents to discredit them. This is apparently common in Russia. Apple will get the matches, confirm the CSAM, and report the person to NCMEC, and thus US law enforcement. If the person in question is a Russian citizen not on US soil, it seems unlikely that anything of interest happens, unless US law enforcement regularly cooperates with places like Russia on such investigations. This seems like a possible attack vector, but given that Russian dissidents also suffer from poisoning, it seems like way more work than it’s worth. See xkcd.

  4. What about a state actor that completely subverts Apple? There’s a true backdoor in the code (one that no one knows about—it’s not a backdoor if it has been publicized), Apple’s human reviewers have been forced to identify dissidents when their images show up (but how is this useful if the images are already known?), and Apple reports to the state actor instead of just NCMEC. At this point, we’re so deep in the conspiracy theory that you may as well assume every photo you take on your iPhone is being sent directly to China or whatever. The only answer to this level of conspiracy theory is that there are numerous security researchers looking at iOS at all times, and something like this would likely be discovered. And if Apple was so completely subverted, why would they make a public announcement of all this tech?

Am I missing any attack vectors?

3 Likes

A state actor forces Apple to directly add images to the hash database without going through the intervening organizations and requires Apple to report positive matches to them without verification. The state actor doesn’t care if this is publicized — in fact they prefer it because even the threat of such surveillance intimidates people.

This is pretty much what China did with iCloud. We know they control the keys, but China doesn’t care.

Why would, e.g., China, do this rather than roll their own system? Because it’s less effort for them, and they can take advantage of all of Apple’s programming skills and knowledge of the system.

1 Like

I would say that China would not want this system. Why should they bother to get Apple involved at all when they are running the servers, which already store the images without encryption? It’s far easier for them to have their own engineers write their own software to scan the entire server for whatever images they want. Then they can change their database at whim without needing Apple to push it out to the rest of the world via iOS updates.

Remember, this is the same country that scans all internet traffic passing through the country, routinely blocking anything and everything the government might not want seen. Running a machine learning algorithm over a database of a few million image files is no big deal for them.

I think that while China could do what you say, they have no reason to, and many good reasons to roll their own system.

Likewise for the US. US law enforcement already requests scans and audits of the contents of iCloud libraries for specific people (usually with warrants). If they were to demand scans of everybody’s account, they could do that right now, with no technical changes to the system. Apple would probably fight it in court (like they did about unlocking iPhones), and win or lose, knowledge of the attempt would quickly become public. And in the US (at least for now), this is something many (not sure about most anymore) elected officials would object to.

2 Likes

And yet Apple has already done that work for them, and even added a global surveillance bonus. So, no, it’s not easier.

I’d also note that I’m really not comfortable with a system that’s protected by “we hope China won’t be interested.” Hope is not a plan.

2 Likes

What does that get them, though? This system works only with known images that are matched by NeuralHash, and only if there are a sufficient number of them. How does that help with surveillance? In what scenario would a state actor care that someone has a set of known images on their phone rather than new images? Photos of secret weapon blueprints?

This feels like a subset of the “China can make Apple do anything it wants” attack. If we assume that China has that power, then it’s game over in every way other than independent security researchers discovering and revealing the ways data is being exfiltrated.

I’m with @Shamino in thinking that a state actor like China would be more interested in controlling its own system than in subverting a highly publicized system that’s going to be the subject of intense scrutiny.

3 Likes

Apple has already said that third parties (namely the child-protection organizations that provide the hashes) will be able to audit the hashes to make sure there are none that are not in their databases. Since the hashes Apple uses are an intersection of hashes from NCMEC and at least one other national source, there should be no hashes in that set that NCMEC doesn’t know of. China or Russia cannot force in a hash that NCMEC doesn’t also provide.
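
For what it’s worth, the intersection requirement is easy to picture as a plain set intersection. The database names and hash strings below are placeholders I made up, not real values or any actual Apple API:

```swift
// Sketch of the intersection rule described above – placeholder data only.
let ncmecHashes: Set<String> = ["hashA", "hashB", "hashC"]
let otherJurisdictionHashes: Set<String> = ["hashB", "hashC", "hashD"]

// Only hashes present in every participating database survive; a hash
// supplied by a single source (e.g. "hashD") never makes it into the set.
let shippedHashes = ncmecHashes.intersection(otherJurisdictionHashes)
print(shippedHashes.sorted())   // ["hashB", "hashC"]
```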

1 Like

It’s not a hope; there’s just no way that China is going to be interested in this relatively targeted, convoluted system. Their surveillance capabilities both within and outside China are already far superior, as are those of the security services of most major economies. This is not the scale or level of control that these agencies work at. (Unfortunately :cry:)

2 Likes

I agree 100%, and I think they should have announced this with the endorsement or recognition of the National Center for Missing and Exploited Children, whose database they are referencing, and maybe one or more other child-safety-focused organizations. It would also have been smarter for Apple to hire a PR firm that specializes in child safety and advocacy issues to handle this introduction – or, even better, to hire internal PR specialists who would focus full time on Apple’s child-safety and education-related initiatives.

Business-wise, using this to target parents could be a good selling point for iOS devices.

China has dissidents it doesn’t know about, so it legally requires Apple to run its NeuralHash voodoo on a set of images, include those in the scanning, require all photos on the iPhone to be scanned, lower the threshold to 1 or 2, and report all matches to the Chinese government. The technology as described seems to provide those capabilities, and it is easy for China to make it the law there. Thus, the Chinese government finds out who all the currently unknown dissidents are, which forces them underground, or they go to the gulag, or the secret police pay them a visit, or whatever.

Since NeuralHash is supposedly so sophisticated that modifying an image doesn’t prevent detection, an image of Tiananmen Square or whatever will similarly survive modification or other slight differences.
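
For anyone wondering how a hash can survive modification at all: NeuralHash is a neural-network model, but the general property – small edits barely changing the hash – shows up even in a much simpler perceptual hash. The toy “difference hash” below is my own sketch, not Apple’s algorithm; it just records whether each pixel is brighter than its right-hand neighbor, so a uniform brightness change leaves the hash untouched.

```swift
// Toy perceptual hash ("difference hash") over a grid of brightness values.
// Illustrative only – this is not NeuralHash.
func differenceHash(_ pixels: [[Double]]) -> [Bool] {
    var bits: [Bool] = []
    for row in pixels {
        for i in 0..<(row.count - 1) {
            bits.append(row[i] < row[i + 1])   // brighter to the right?
        }
    }
    return bits
}

func hammingDistance(_ a: [Bool], _ b: [Bool]) -> Int {
    return zip(a, b).filter { pair in pair.0 != pair.1 }.count
}

// A fake 8x9 "image" and a uniformly brightened copy: the relative ordering
// of neighboring pixels is unchanged, so the hashes come out identical.
let original = (0..<8).map { r in (0..<9).map { c in Double((r * 9 + c) % 7) / 7.0 } }
let brightened = original.map { row in row.map { min($0 + 0.05, 1.0) } }
print(hammingDistance(differenceHash(original), differenceHash(brightened)))   // 0
```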

Apple’s previous position was “we won’t compromise user privacy.” Then they turned around and did so in China, because Chinese law requires it, because they won’t abandon the market there, and because they cannot abandon their manufacturing capability. Now they’ve invented a technology that nefarious state actors can and will figure out how to take advantage of.

And the real issue is that an image the user encrypts beforehand won’t get caught, and neither will home-grown images. With this public, people interested in this material will just encrypt their files and email them around instead of putting them onto iCloud, so the smart perverts won’t get caught. If Apple’s intention is to keep the material off its servers, simply scan once it is uploaded; anybody who thinks they have an expectation of privacy for anything uploaded to a cloud service without encrypting it first just isn’t thinking straight.

2 Likes

Sure they can. The NeuralHash tech can be replicated and additional hashes made; then pass a local law requiring Apple to scan against an additional database. State actors don’t need NCMEC’s assistance.

3 Likes

No state actor has to subvert anything. Instead they pass a law that says any CSAM scanning that happens on device also has to include scanning against hashes they provide. In the past, Apple would have said, “We don’t do that, so we can’t. Have a nice day.” Now their answer is, “Well, we can’t do that, because we only have these two databases, and there’s an ‘and’ statement in between to ensure hashes must be in both, and you see we have this great AI/ML yada…” Chairman Xi Jinping interrupts, smiles, and answers: “For use in my dear beloved China, turn that little ‘and’ into an ‘or’ and add my database of hashes, thank you very much.” Does anybody here seriously believe that Apple could then just say, “Nope, we’re outta here”?

Again, it’s not a technical problem. It’s not about being smart, having well-thought-out mitigations, or writing beautiful code. In the end, it’s about opening yourself up to exploitation. I’m really surprised that among the many smart people who work at Apple, there seems to be a prevalent, somewhat engineering-myopic view that prevents the company from understanding this. At the very least, the top execs, who have access to more than just engineering resources, should have been warned about this. Perhaps they were. Perhaps they thought it wouldn’t be that bad. I guess we’ll see.

2 Likes

At this moment, Apple has not announced that this initiative will run in any market other than the US. I think it’s safe to assume Apple will not want to implement this system in countries that might want to use it for anything other than its stated mission. Apple would either tell them no, or it would be nothing but the highway – Apple pulling out entirely. China would lose a LOT of jobs and incoming revenue if Apple took the highway. It’s too much of a lose-lose situation for both parties. And it would be too much work, and terrible PR, for Apple to build a service they would not have 100% control of.

And we don’t even know if China has an equivalent of the National Center for Missing and Exploited Children. There is an International Centre for Missing & Exploited Children that is based in the US, but it has not been mentioned at all. That’s probably because there are too many countries to cover to begin with, and it would probably be better for Apple to work on a country-by-country basis anyway. Or Apple might not want to wade into a morass of conflicting regulations between a multitude of governments.

Countries can pass a law requiring companies to scan photos uploaded to their servers now, regardless of what Apple is doing, and regardless of whether the scanning happens on device or on the server. I don’t see that Apple implementing this system changes what laws are possible or the likelihood of their coming about. See: China and the iCloud servers for Chinese citizens. States pass the laws they want, and tech companies have to figure out how to comply.

The concern (for me) with a system like the CSAM scanning is whether law enforcement can use its existence to force a company to turn over someone’s data that it wouldn’t otherwise be able to access – whether a system like CSAM scanning allows abuses or further intrusions without the government going through the proper legislative process.

I’m still not sure where I stand on Apple’s upcoming system, but the additional details have certainly lessened some of my initial concerns.

1 Like

I don’t think it’s so much about laws being possible or not. A law can’t compel a company to do something they simply cannot do (for example because they don’t have the tech). But a law can very well compel a company to do something they already do in a different manner. I think the San Bernardino case really shows a stark contrast here.

2 Likes

It definitely can. As long as the government thinks it’s possible to develop the tech in some reasonable timeframe, they will make the law, with some implementation period to develop the necessary systems. This has happened many times (recently, GDPR in Europe) and the relevant government is unconcerned with the implementation as long as it meets the law.

I think the San Bernardino case is a red herring here. I’ve still not seen a convincing argument about how law enforcement would find the CSAM feature useful for anything other than catching child abusers, or how they could somehow subvert the system to do so.

This is in contrast to state security services – I can see how they could theoretically co-opt the system by forcing Apple to expand the types of images in the hash database. But as discussed earlier, it seems like a preposterous route given the more powerful tools they have, and the legal challenges they would face in certain countries once they were found out.

Chinese political movements often use 1) pictures of the tank man from 1989, and 2) popular memes as means of signaling and communication. All of these are known images and thus susceptible.

Moreover, China has done an enormous amount to erase the memory of Tiananmen Square from its history, and that includes the known pictures of it. Using the Apple photo-scanning feature would be a good way for them to keep that effort going.

Which feels to me like a “well, look, the murderer is going to get a gun somewhere, so it doesn’t matter if I sell it to them” kind of reaction.

Why? As I pointed out, the publicity is part of the point. It intimidates Chinese citizens, it demonstrates the PRC’s power, and it makes clear that Apple answers to them. I’m befuddled that anyone thinks that China will care if people know about this. They don’t hide the Great Firewall, they make sure that people are very clear that it’s there. They don’t hide that they have the keys to the Apple iCloud servers. They want people to know.

People are asking the wrong question: it’s not why will China do this, it’s why won’t China do this? They’re not going to let Apple surveil their citizens without their active participation and it’s to their advantage to be seen controlling the tool.

Folks, as I mentioned above, I understand what Apple’s doing. Please try to understand the point I’m making. Apple has set up an intricate safety system so that it (and the public) can catch anyone who tries to put improper images into the database. Great. My point is that China will simply tell them to ignore that safety setup and put the images in anyway, publicity be damned. All the intricacy in the world is not going to help Apple if China tells them to go around it.

One of the markers of authoritarian governments is the breadth and depth of their surveillance – as you have noted. This is going to be part of it.

And yet China did exactly that with the iCloud backups and Apple caved.

Finally, the lack of thought Apple put into the rollout of this does not make me confident that they thought everything through as much as they should have.

4 Likes