Then let them hash the image once it is on their servers. I’m happy with that…they can and should do their best to combat kiddie porn on drives they own…but doing it on our device without a warrant…and I realize they aren’t the government, so a warrant isn’t technically required…and turning people in smacks of unreasonable search and seizure. If they had just said that once images are uploaded to iCloud we are going to hash them…exactly what they’re doing except where the hashing takes place…this would not be a privacy issue. In addition to invading user privacy…they are shooting themselves in the foot the next time they try to claim ‘we can’t do that’…because they can do that now. They’ve been fiercely protective of user privacy for decades…and I realize that kiddie porn is something they want to be seen as combatting…but doing it on device and invading user privacy is a ham-fisted way to go about it when there was an alternative that would not have resulted in this debate.
However, like the rest of the Bill of Rights, it only applies to privacy from the government, not private actors. If Messages were programmed not to allow you to send any messages saying “Tim Cook is a poopyhead”, that would not be a First Amendment violation because Apple is not the government.
Dave
This is what the parents signed their kids, who are under the age of 18, up for. All they are doing is attempting to match stuff on a kid’s iPhone with a known and acknowledged database of harmful images. Some kids might not like it, but if their parents are paying for their iPhones and data plans, it’s their way or no access to the digital highway.
Dave, I think you hit the nail on the head.
Where some people see Apple going down the slippery slope, others see Apple trying to dig in and hold ground high on the slope, while governments are trying to pull it further down.
One thing I’m not seeing people mention in this thread is that the way Apple is doing this already sets it apart from pretty much every other tech company out there: they’re telling us up front what they intend to do, before it’s installed on everyone’s phones. They’re opening the floor to discussion and debate while there’s still time to change it if the discussion leads them to determine that this isn’t the right approach.
None of the other tech giants puts this kind of thing out in the public eye in advance. Google and Facebook pay lip service to transparency, but most of the privacy-breaching things they do are already in place by the time they’re revealed—and that reveal isn’t necessarily by their choice. Your privacy is already broken by the time you know about it. And they certainly don’t offer the level of detail that Apple has made public about it.
To me, this kind of transparency makes me trust Apple more. They’re essentially saying, “This is the problem we’re expected to address. This is what we’ve come up with to address that with what we see as a minimal invasion of privacy, with safeguards in place to help protect that privacy.”
You can call it a “backdoor” or a “gateway” or a “step down the slippery slope” all you want, but that doesn’t change the fact that Apple wants users to know this is being prepared before it’s implemented.
There are a lot of shady things Apple does in the name of making money, but when it comes to privacy protections they go out of their way to share what they’re doing. Even if the action itself bears privacy risks, they aren’t trying to hide it.
Absolute privacy is possible only by refusing to interact with the world outside your own home in any way. Any interaction opens you to breach of your privacy. Permitting information about you to be known by someone else is the price you pay for living in a civilization. Once you realize that, you recognize that all social interaction requires setting a foot on that “slippery slope” of “loss of privacy”. You have to give trust where it’s warranted.
So? That doesn’t make it right!
Again, the scanning for CSAM material applies to everyone, adult or child, who uses iCloud photo sync.
It’s hard to recall any time they have changed things because of general (not beta tester) criticism, but I may be forgetful in my old age.
The issue is about child pornography and exposing kids to pornography, as well as enticing children under 18 to participate in creating pornographic images:
“ Images of child pornography are not protected under First Amendment rights, and are illegal contraband under federal law. Section 2256 of Title 18, United States Code, defines child pornography as any visual depiction of sexually explicit conduct involving a minor (someone under 18 years of age). Visual depictions include photographs, videos, digital or computer generated images indistinguishable from an actual minor, and images created, adapted, or modified, but appear to depict an identifiable, actual minor. Undeveloped film, undeveloped videotape, and electronically stored data that can be converted into a visual image of child pornography are also deemed illegal visual depictions under federal law.
Notably, the legal definition of sexually explicit conduct does not require that an image depict a child engaging in sexual activity. A picture of a naked child may constitute illegal child pornography if it is sufficiently sexually suggestive. Additionally, the age of consent for sexual activity in a given state is irrelevant; any depiction of a minor under 18 years of age engaging in sexually explicit conduct is illegal.
Federal law prohibits the production, distribution, reception, and possession of an image of child pornography using or affecting any means or facility of interstate or foreign commerce (See 18 U.S.C. § 2251; 18 U.S.C. § 2252; 18 U.S.C. § 2252A). Specifically, Section 2251 makes it illegal to persuade, induce, entice, or coerce a minor to engage in sexually explicit conduct for purposes of producing visual depictions of that conduct. Any individual who attempts or conspires to commit a child pornography offense is also subject to prosecution under federal law.”
The whole law is worth reading:
Sorry, but I have no idea how your response relates to my comment. For Apple’s Messages actions, machine learning on the device is used to determine if the message contains explicit material. CSAM is no more than a possible subset of all potential “explicit” material.
There are two different projects here that I think are being conflated.
One of them is the Messages scanning. This is 100% opt-in by parents to monitor things sent in Messages by and to their children on a Family Sharing plan.
The other is the CSAM scanning. This is opt-out only by turning off iCloud Photos. It computes hashes of images you attempt to upload to iCloud Photos and compares them to hashes of known illegal images. If there’s a match, the image is checked to see if it’s the one that is in the database (because two different images can end up with the same hash).
These are different and separate projects. They overlap only in that one of the things the Messages scanning will be looking for is CSAM. Apple made the unfortunate decision to put both projects in the same press release, which can imply that the two projects are one.
As I understand it, the hash in the CSAM scanning is computed on your device, and the only thing that is uploaded to check is the (non-reversible) hash, unless there’s a match. No match, no one at Apple has any more access to your images than they already do. Match, and they check it just to see if it’s a real positive or a false positive.
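As a rough illustration of that flow, here’s a minimal sketch in Swift. To be clear, this is not Apple’s implementation: the real system uses a proprietary perceptual hash (NeuralHash) plus additional cryptographic machinery, and the database and match handling are far more involved. SHA-256 and the function names below are stand-ins purely to show the idea of computing a fingerprint on the device and comparing it against a list of known hashes.

```swift
import Foundation
import CryptoKit

// Hypothetical list of known-image fingerprints (hex strings).
// In the real system, a database of known CSAM hashes ships with iOS.
let knownHashes: Set<String> = loadKnownHashDatabase()

// Compute a fixed-size digest of the image bytes.
// (NeuralHash is a perceptual hash; SHA-256 is only a stand-in here.)
func fingerprint(of imageData: Data) -> String {
    let digest = SHA256.hash(data: imageData)
    return digest.map { String(format: "%02x", $0) }.joined()
}

// A match only means "same fingerprint", which is why a human review
// step exists: two different images can, in principle, collide.
func shouldFlagForReview(_ imageData: Data) -> Bool {
    knownHashes.contains(fingerprint(of: imageData))
}

// Placeholder loader for this sketch.
func loadKnownHashDatabase() -> Set<String> { [] }
```

The difference from a server-side check is simply where the comparison runs; what’s being compared (a fingerprint of images headed to iCloud Photos) is the same either way.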
Is there a privacy breach in a false positive? Yes. A very small one. Apple is asking us to trust that it and its employees will not exploit this. Given how much access they could make use of without telling us, and, as far as we know, don’t make use of (unlike, say, Facebook or Google), I think reasonable trust is warranted here. If you’re a privacy absolutist, you likely won’t agree.
Many alarmists (including some at the EFF) extrapolate every change into the worst possible scenario and use that to judge the value of the initial change without actually factoring in the realistic odds of that worst case happening. Apple’s track record on privacy may be blemished, but it’s far less so than that of any of the other big tech companies. And their record for transparency on privacy initiatives is even less blemished, at least in the iPhone era.
Is it perfect? No. Is it reasonable? Probably.
It’s important to remember that using an iPhone is by definition an extension of trust to Apple. Its iOS code is proprietary. Unless you have an inside connection, you have no opportunity to examine that code to see if Apple is telling the truth about what they can, can’t, will, and won’t do with what’s on your phone. You have to trust that they’re telling the truth until evidence indicates otherwise. If you’re not willing to extend any trust to them, you shouldn’t be using an iPhone (or any other modern smartphone) in the first place.
Just a reminder to everyone that ‘slippery slope’ is a -fallacy- for good reason. Unless you can provide actual evidence that the next step down the slope is likely (and how likely), it’s an emotional appeal, primarily used to trigger fear.
Ok:
https://www.nytimes.com/2021/05/17/technology/apple-china-censorship-data.html
Seems a bit slippery around here…
Could be that Apple is reading tea leaves suggesting Congress may decide to enact legislation mandating a way to read all messages “to stop child pornography” (child porn, child sex trafficking, and terrorism are always the reasons), and this particular set of methods is a way to lobby Congress that no, you don’t need us to give you a key to decrypt all iMessages.
It’s true that there are two different projects. One, associated with iCloud Photos, makes explicit checks for CSAM using a hash database downloaded to individual devices (not just iPhones). Here’s what Apple said about the other project: “Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit.” Using on-device machine learning is entirely different than the CSAM hash scanning, and is intended to catch far more than just CSAM–it is also intended to catch legal pornography illegally directed at children.
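For contrast, here’s a minimal sketch of what an on-device classifier check could look like. The model, label name, and threshold below are hypothetical; Apple hasn’t published the model Messages uses. The point is only that the decision comes from a machine-learning model’s confidence score about the image itself, not from matching against a database of known images.

```swift
import CoreGraphics
import CoreML
import Vision

// Sketch only: assumes a hypothetical Core ML classifier whose labels
// include "explicit". This is not Apple's Messages implementation.
func isLikelyExplicit(_ image: CGImage,
                      model: VNCoreMLModel,
                      threshold: Float = 0.9) throws -> Bool {
    var flagged = false
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        // Flag only when the classifier is confident about the hypothetical label.
        flagged = results.contains { $0.identifier == "explicit" && $0.confidence >= threshold }
    }
    // Runs synchronously on the device; nothing in this sketch leaves the phone.
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return flagged
}
```

That difference matters for the debate: a hash comparison can only recognize images already in a database, while a classifier makes a judgment about any image, which is why the two projects raise different concerns even though they appeared in the same announcement.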
Being concerned about Apple’s ability to pull off these activities without “false positives” is not alarmism. Being concerned about governments forcing Apple to expand its backdoor activities is not falsely claiming a slippery slope, when Apple has already slid down that slope in China. Finally, characterizing the insertion of non-user-controlled communications as “backdoors” is not hyperbole. Check the discussion in Wikipedia to see how broad the generally accepted definition is. And the “fear” I have concerns the continuing forces pushing this country towards authoritarianism. I don’t appreciate Apple’s efforts to cover its corporate butt against potential legislation and regulation while they simultaneously undercut one of their primary selling points: user privacy.
That’s up to Apple of course. For myself, I’m engaging in a long term reassessment of my hardware needs. Too bad this news didn’t come out about 3 weeks ago, as I could have saved myself the cost of a new SE(2) iPhone (64GB really cheap at Consumer Cellular if you’re in the market!).
Yeah, I spoke to this in the first message of this thread (I’m the OP). Government proponents need Apple and other companies to do the heavy lifting because laws passed by Congress to accomplish same would probably be ruled unconstitutional.
Any company that wants to do business in another country must adhere to the rules of that country. China is expected to exceed the US economy in the near future:
https://www.bloomberg.com/graphics/2021-china-accelerated-growth/
Slippery slope is only a fallacy if it’s argued with little or no evidence. In this case, we have plenty of evidence that Apple has started down a slope of allowing more surveillance of its users, most notably with China.
We’re not at the start of the slope, we’re already a ways down it and this one is a big slide.
Actually…if I were Apple I would hope that Congress passes such a law…because that way we could just get to the end game and let SCOTUS determine what, if any, limits are placed on the right to privacy. I’m not trying to hijack the discussion…but as an example, both abortion and flag burning have been declared legal due to the right to privacy and freedom of speech. I disagree with both but that’s not the point…the point is that a determination has been made as to what is legal and what isn’t. I think SCOTUS would toss out a law banning encryption personally…but at least we would have an answer one way or the other.
I appreciate your concern and agree with it. And I hope Josh’s comment is not an indication that I think anyone here is downplaying child sexual abuse. I tried to delineate MY dilemma recognizing the real dilemma found in this issue, and perhaps my own pain from my/our personal experiences may have clouded that. If anything I wrote caused pain I apologize - the human suffering is very real, as is the struggle to preserve privacy and protect from false accusations and loss of freedoms. These are important issues that need discussion which is why I have very much appreciated the views expressed!
Yes. But that attitude quickly leads to paranoia. Apple could do that. They could also steal all your credit card information and drain your bank accounts. But they have never done any such thing, and they have given no indication that they might in the future.
Ultimately, you need to decide for yourself if Apple is telling the truth or if they are lying. If you believe they are lying, then this entire discussion is moot - you should immediately stop using all their products and all their services because you believe they can’t be trusted. If you believe they are telling the truth, then that’s what we’ve been discussing - what they are actually going to do vs. what some members of the media are claiming (based on what appears to be misunderstandings of Apple’s announcements).
For myself, Apple has, to date, not lied to its customers. They sometimes do things we don’t like (like giving in to Chinese demands for how the personal information of Chinese citizens is managed), but I can’t recall any instance where they did so while claiming the opposite. This is very different from companies like Google and Facebook, which have been caught red-handed doing just that on many different occasions. So I tend to give the benefit of the doubt to Apple on issues like this, while simultaneously refusing to trust Google and Facebook (and others) on the same issues.
You may disagree, of course. I am making no attempt to change your mind. I’m just explaining where my opinion comes from.
And the clear decision for those who object is to stop using Photo Sync.
For myself, I never used it because I have no need for it. I store all my photos on one computer and sync the parts of the library I care about via USB.
If you do use it, then it may be a significant inconvenience. If you trust Apple about what they say they’re doing, then it shouldn’t matter. If you don’t trust them, then stop using the feature.
And if you don’t believe them when they say they are only scanning photos you sync to iCloud, then you should immediately discard and stop using all Apple products, because they could be scanning/uploading/fabricating a million other things as well. Once you’re convinced they’re lying about one thing, why wouldn’t you think they’re lying about everything else they’ve ever said?
Again, it comes down to whether or not you trust Apple to tell the truth to their customers, and I will make no attempt to change anyone’s mind about that.