The question is why Apple implemented this feature in the first place. There was no reason for them to suddenly expand their image scanning to the devices themselves and risk their position as the self-proclaimed saviors of privacy - and yet they did exactly that. There had to be some push from the government behind all of this, otherwise the debacle just doesn't make any sense.
There is a big stick coming from the EU (and someone said the same is coming from the UK).
My guess is that Apple designed this privacy-protecting system so that they could deliver the solution on their own terms - and perhaps lead the way on how this could be done before they are hit with a cookie-banner-popup-level solution from bureaucrats.
Regardless of your opinion on whether they should scan or not, both the EU and the UK now have a reference design that protects people's privacy and still manages to either identify the people who own that material, or make it more inconvenient to own it.
As we like to say, "Deplatforming works", and in this case a good, useful tool for people who own those pictures is no longer available; they have to resort to jumping through hoops and relying on more inconvenient solutions. The latter might not solve the root problem, but it introduces friction that gets in their way.
Apple already controls the entire photos pipeline on your device, and in fact all of the OS code. If they want total device surveillance, they've got it today.
They're doing this, and announcing it, because they think it's a net positive. You might not agree, but the argument that this somehow creates the technology for scanning and is therefore nefarious misses the point that if they wanted to do this surreptitiously and nefariously, they (a) wouldn't announce it and (b) could have been doing it for years. This isn't some fancy new tech, except for the privacy bits, which obviously wouldn't apply to Evil Schemes.
I think Apple was doing the exact opposite. They wanted to do the __least possible thing__ in order to stave off the far worse outcome of intelligence agencies using the "it's for our children" excuse to pressure elected representatives into voting for back doors in consumer encryption.
The ridiculous thing is that Apple's proposal was functionally identical to what other platform vendors (e.g. Google, Microsoft) were already doing. In all cases, including Apple's proposed system, only photos uploaded to cloud storage would be scanned to see if they matched CSAM already known to the government. The only difference with Apple's proposal was that the initial "fuzzy hash" calculation would be performed on-device prior to upload, instead of in the cloud after upload.
The reason for doing it differently was that it meant (in theory) satisfying both masters: implementing real end-to-end encryption while not being seen as a CSAM-scanning laggard compared to Google, Microsoft, etc.
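For context, here is a rough sketch of what a "fuzzy" (perceptual) hash does, using a plain average hash rather than Apple's actual NeuralHash (the function names and parameters below are purely illustrative). The same computation can run on-device before upload or server-side after upload; the design difference is only where it happens:

```python
# Toy "fuzzy" (perceptual) hash: a plain average hash, not Apple's NeuralHash.
# Near-duplicate images yield hashes that differ in only a few bits, which is
# what lets matching survive resizing or recompression, unlike SHA-256.
from PIL import Image  # third-party: pip install Pillow

def average_hash(path, size=8):
    # Downscale to a size x size grayscale grid, then set one bit per pixel
    # depending on whether it is brighter than the mean.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # a 64-bit integer for the default 8x8 grid

def hamming_distance(a, b):
    return bin(a ^ b).count("1")

# Visually similar photos land within a small Hamming distance of each other;
# whether this runs on the phone before upload or on the server afterwards is
# purely a question of where the code executes.
```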
Other vendors just scan all your shit and nobody cares.
Well, this is very problematic for a privacy-focused company. Under no circumstances do I want Apple to scan my private files/photos, especially if a flagged match means someone gets to determine whether it is a true positive or a false positive.
Also, this functionality isn't something they should be able to implement without telling their end users.
It is also problematic because it will just make cyber criminals more technically aware of what countermeasures they must take to protect their illegal data.
The consequence is very bad for the regular consumer: the cyber criminal will be able to hide, while the government gains the ability to scan your files. End consumers lose, again.
I really have to wonder why Apple chose to do this.
As far as I know, this kind of scanning is not legally mandated. So, either they think that this will truly make the world a better place and are doing it out of some sense of moral responsibility, or they've been pressured into it as part of a sweetheart deal on E2E ("we won't push for crypto backdoors if you'll just scan your users' phones for us"). Either way it doesn't thrill me as a customer that my device is wasting CPU cycles and battery life under the presumption that I might possess data my current jurisdiction deems illegal.
For all the acclaim privacy-forward measures like GDPR get here, I'm surprised there isn't more outright repudiation of this frankly Orwellian situation.
It has always been technically feasible for them to have software on the phone that scans everything on the phone. I don't see how this changes anything. Apple is trying to avoid scanning in the cloud so that the data can be E2EE and not subject to being stolen by someone that gets access to their servers somehow.
In my opinion their goal was to get stuff to a state where they could encrypt everything on iCloud so that even they can't access it.
To counter the "think of the children" argument governments use to justify surveillance, Apple tried scanning stuff on-device, but the internet threw a collective hissy fit of intentionally misunderstanding the feature and it was quickly scrapped.
>The current technical implementation limits the scan to images about to be uploaded to the cloud, which can be opted out of.
That is conflating policy with a technical limitation. Their changes negate the technical discussion at this point.
Their POLICY is that it will only scan images queued for upload. They no longer have a *legal* argument for not complying with government requests to scan any data on the device, since the framework is now included.
That is a big change in that regard. Whereas in the past there was a layer of trust that Apple would hold governments accountable and push back on behalf of a user's privacy (and there is a very tangible history there), this implementation creates a gaping hole in that argument.
I don’t understand why comments like this are getting downvoted. I understand the absolutist position against any content scanning, but Apple very clearly tried to strike a middle ground that is far less invasive than the content scanning systems in every other photo service run by Google, Microsoft, Flickr, etc. And then when everyone freaked out, Apple didn’t ship it! Meanwhile the FBI can browse all your Google Photos right now.
That always made the most sense as the reason for attempting that. I agree with some concerns about it surely being abused (especially in some jurisdictions) but on the other hand they can ship whatever software they want to the devices anyway so the idea that this was some sly way to sneak in spying that they couldn't otherwise get away with made no sense. Doing it out of a desire to enable more encryption without instantly becoming the overwhelmingly-preferred platform for child porn enthusiasts was a far more likely explanation.
Curious what they're going to do to mitigate that reputational risk now. Possibly they'll just eat it and say, "look, this is what you fuckers wanted, we tried to solve the problem but you said no."
Not thrilled to see what the next showdown between them and e.g. the FBI is gonna look like. I expect it's not gonna look good in the court of public opinion and that might have unfortunate legislative consequences.
[EDIT] Actually, wouldn't be surprised if they wait until the first high-profile case involving their inability to deliver data on someone who probably is a disgusting scumbag, and use that as cover to go ahead with the local-CSAM-scanning-for-iCloud-uploads, once it's 100% clear what'll happen if they don't and the no-scanning crowd isn't the loudest set of voices anymore.
I think Apple developed this while attempting to balance a complex issue: e.g. a government request asking them to certify that their cloud storage is not used for illegal activity (CSAM in this particular case, but each country/government may end up with different requirements).
Apple thought they had found a clever way to satisfy a government request of "Can you guarantee your storage is not used to house XYZ?" Apple then continues to be able to advertise 'privacy' while retaining E2E encryption (or at least eventually having E2E across all services).
What they didn't foresee is the potential slippery-slope backlash that the IT community has become concerned about.
The scanning feature could be used in more nefarious ways if modified in the future. For example, the hash checking could be altered to hash and check each photo's metadata fields instead of the photo itself.
Now we can find out who took photos in a particular location at a particular point in time, while still retaining E2E!
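To make that worry concrete, here is a purely hypothetical sketch of the modification described above. Nothing like this exists in Apple's design; the field names, rounding choices, and watchlist are invented for illustration only:

```python
# Hypothetical abuse scenario only: hash coarse EXIF-style metadata (location
# rounded to roughly a kilometre, timestamp rounded to the hour) instead of the
# image itself, then test it against a watchlist of (place, time) buckets.
import hashlib

def metadata_bucket(lat, lon, hour):
    # Round location and time so that nearby photos fall into the same bucket.
    return f"{round(lat, 2)},{round(lon, 2)},{hour}"

def metadata_hash(lat, lon, hour):
    return hashlib.sha256(metadata_bucket(lat, lon, hour).encode()).hexdigest()

# A server holding only the hashes of watched buckets would still learn whether
# a photo was taken at a watched place and time, without ever seeing the photo.
watchlist = {metadata_hash(52.52, 13.40, "2021-08-01T18")}
flagged = metadata_hash(52.521, 13.404, "2021-08-01T18") in watchlist  # True
```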
Would it go that far? Or can we trust Apple to stop moving the line in the sand?
Yes, but we have seen in the past that privacy once lost is nigh impossible to regain, and it is also obvious that the scanning Apple is proposing is trivial to bypass.
So what is it actually trying to accomplish?
I really struggle to believe that they are trying to protect kids (out of the goodness of their hearts).
The only explanation I can think of is that this is some attempt at appeasing government agencies.
Yeah, I think this is what freaked everyone out, but it's pretty clear the intention of on-device scanning was that it would keep working even if they had client-side encryption of photos before upload.
The whole point of the protocol (as described in the Apple whitepaper) is to allow clients to attest to perceptual hash matches without the server having access to the plaintext.
So, the irony of all of this is that the Apple design is effectively more private than the status quo, but everyone freaks out about it.
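As a rough illustration of that idea (a toy sketch, not the private set intersection plus threshold secret sharing construction in the whitepaper; the voucher format and key derivation below are invented), the client can encrypt its match metadata under a key derived from the image's perceptual hash, so the server can only ever open vouchers whose hashes already appear in its known database:

```python
# Toy "safety voucher" idea: the payload is opaque to anyone who does not
# already know the hash it was derived from. The real protocol is far more
# involved; this only shows the "no match, no plaintext" property.
import base64
import hashlib
from cryptography.fernet import Fernet  # third-party: pip install cryptography

def key_from_hash(phash: int) -> bytes:
    # Derive a symmetric key deterministically from a 64-bit perceptual hash.
    digest = hashlib.sha256(phash.to_bytes(8, "big")).digest()
    return base64.urlsafe_b64encode(digest)

# Client side: wrap the match metadata for every uploaded image.
def make_voucher(phash: int, payload: bytes) -> bytes:
    return Fernet(key_from_hash(phash)).encrypt(payload)

# Server side: the server only holds the hashes in its known database, so
# only vouchers for matching images ever become readable.
def try_open(voucher: bytes, known_hashes):
    for h in known_hashes:
        try:
            return Fernet(key_from_hash(h)).decrypt(voucher)
        except Exception:
            continue
    return None
```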
>So what happens when, in a few years at the latest, a politician points that out, and—in order to protect the children—bills are passed in the legislature to prohibit this "Disable" bypass, effectively compelling Apple to scan photos that aren’t backed up to iCloud? What happens when a party in India demands they start scanning for memes associated with a separatist movement? What happens when the UK demands they scan for a library of terrorist imagery? How long do we have left before the iPhone in your pocket begins quietly filing reports about encountering “extremist” political material, or about your presence at a "civil disturbance"? Or simply about your iPhone's possession of a video clip that contains, or maybe-or-maybe-not contains, a blurry image of a passer-by who resembles, according to an algorithm, "a person of interest"?
What I don't get is what prevented these things from happening last month? Apple controls the hardware, the software, and the cloud services, so the point at which the scanning is done is mostly arbitrary from a process standpoint (I understand people believe there are huge differences philosophically). They could have already scanned our files because they already have full control over the entire ecosystem. If they can be corrupted by authoritarian governments, then shouldn't we assume they have already been corrupted? If so, why did we trust them with full control of the ecosystem?
It's less about scanning (they were already doing that on their side) and more about keys. Before, there were two keys. The one on your device, and one for accessing the data on their servers. Apple could be compelled to decrypt that data and hand it over to the government, and the government could ask for literally anything. So all the fear about "what if they decide to scan for images of XYZ" is a fear that already exists. Apple has made it clear that they do not want to aid the government in any way, and stated so directly to congress in 2019. Congress, in turn, made it clear that if they (big tech) didn't come up with a solution, they would legislate some kind of "backdoor" requirement, which would be terrible for everyone.
So they had to come up with some kind of solution that:
1. Keeps the government happy enough that they don't pass terrible legislation.
2. Keeps Apple's servers from storing illegal content.
3. Keeps Apple from being involved in the subpoena process.
4. Maintains user privacy – because that's the whole point of this exercise.
I genuinely think if people understood how they accomplished this they would see that Apple accomplished two of the objectives (1, 4) and will eventually accomplish #3 as well.
But back to keys. Your device has a master key for decrypting the photos, and that's always been the case. What I'm about to describe applies to Apple's servers only, not your device:
Imagine the two-key system required to launch a nuke, or the big Hollywood bank vault that requires two people to simultaneously get retina scans. "Shared Key Encryption" is the same idea – no one person with a key can decrypt the target. What's cool about this is you can have as many keys as you want, and all of them must be present in order to decrypt the contents. How many keys is Apple using? Well in this particular encryption layer, they are using ~31 keys, and Apple only has ONE.
If we stop right there, you can already see how this is way more secure. A government cannot compel Apple to hand over your unencrypted data. Apple could have done this in a much simpler way a long time ago, but not without provoking the government into passing counter-legislation in response. That's the very reason they haven't implemented better security before now.
So where do the other keys come from? They are generated any time a match is found against the CSAM database on your phone. Even that database is hashed, so your copy of the CSAM database and its hashes are unique to your iPhone. If there is no match with the CSAM database for a particular image, the keys for its decryption are never generated. Meanwhile, each time a CSAM match is made, another of the 31 keys gets generated. So in a (super over-simplified) way, the "bad" images are keys for each other. This is why Apple has set a "threshold" for how many CSAM images must be detected before Apple is notified: they have to meet that threshold in order to have all the keys needed to decrypt the offending images. Even then, all other images in your account still remain encrypted and inaccessible.
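To see how a "need N keys before anything decrypts" scheme can work at all, here is a generic Shamir-style secret-sharing sketch (a simplified stand-in, not Apple's actual construction; the threshold and share counts below are arbitrary). Below the threshold, the shares reveal nothing useful about the decryption secret:

```python
# Generic Shamir-style threshold secret sharing (illustrative stand-in only).
# The secret is split into n shares such that any t of them reconstruct it,
# while fewer than t reveal essentially nothing.
import random

PRIME = 2**127 - 1  # a prime field large enough for a toy secret

def make_shares(secret, t, n):
    # Random polynomial of degree t-1 with the secret as the constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]) == 123456789)  # True: threshold met
print(reconstruct(shares[:2]) == 123456789)  # (almost certainly) False: below threshold
```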
All of this keeps the government happy enough to keep the bad legislation at bay. It's not a perfect solution, but it's better than the alternative, and it results in greater privacy than we have today.
Unless/until I see technical documents showing why there is a privacy issue for people who don't have CSAM, I am 100% in favor of this solution.
Could it be that Apple thought that child pornography was going to get the most sympathy from society for this general approach of scanning libraries? They may have expected some HN pushback but maybe hoped for some halfway decent press related to helping combat this abuse.
I ask that question out of curiosity, but I'm super skeptical because this is actually surveillance.
This is a harmful distraction from the massive issues with Apple's proposal. If you wanted to frame someone for possession of CSAM, similar stunts can be pulled with Google, Facebook, Instagram, and Microsoft today. Yes, the scope here is broader and some people don't use any of those, but... It's silly, and it makes the tech community look like a fringe minority of screeching conspiracy theorists.
And this is a problem because Apple's proposal is really really awful. Apple is normalizing scanning your private phone for files and reporting them. They built the technical capability to do it for any photo, and they will be under enormous pressure to expand it both in the US and abroad. And the fact that they did it will be used to pressure other companies into doing the same and to legitimize laws that require scanning for any content the government can justify.
Apple built a surveillance mechanism that is incredibly powerful. One no government could ever force a company to design and build. But once it's built, the only thing stopping it from being abused is Apple's pinky promise that they won't let it happen. If you believe that legal norms, big tech companies, and some quasi-governmental nonprofit like NCMEC will stop such an abuse if it happens... where have you been living the past few years? Because it sure isn't the US, the UK, Turkey, or China.