It does that on the device. All it takes is enabling it by default, instead of triggering the scan only prior to upload, and adding more hashes to flag. The result would be a total loss of privacy: Apple would essentially be privatising mass surveillance, without any oversight, however small, or accountability.
Well, this is very problematic for a privacy-focused company. Under no circumstances do I want Apple to scan my private files/photos, especially if a flag means that someone gets to look at the content to determine whether it is a true positive or a false positive.
Also, this functionality isn't something they should be able to implement without telling their end users.
It is also problematic because it will just make cybercriminals more technically aware of what countermeasures they must take to protect their illegal data.
The consequence is very bad for the regular consumer: the cybercriminals will be able to hide, while the government gains the ability to scan your files. End consumers lose, again.
Apple's solution was to scan, on-device before upload, content that was going to be uploaded anyway.
That way they could add multiple redundancies, and they wouldn't need to look at your content in the cloud at all until there were multiple positive matches. Even then, the first step is a human checking whether it's an actual match or a false positive.
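Conceptually, the threshold part looks something like the sketch below (hypothetical names and values; the actual design wraps this in cryptography so that even the match count isn't visible below the threshold):

    # Hypothetical sketch: nothing is surfaced for human review until an
    # account accumulates multiple independent matches against the hash list.

    REVIEW_THRESHOLD = 30  # hypothetical; Apple's announcement mentioned roughly 30 matches

    known_bad_hashes = {"known-hash-1", "known-hash-2"}  # placeholder values

    def matches_for_account(uploaded_photo_hashes):
        return [h for h in uploaded_photo_hashes if h in known_bad_hashes]

    def needs_human_review(uploaded_photo_hashes) -> bool:
        # A single (possibly false-positive) match never reaches a reviewer.
        return len(matches_for_account(uploaded_photo_hashes)) >= REVIEW_THRESHOLD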
This was somehow a huge invasion of privacy, and people were competing over who could misunderstand the very simple premise the most.
How would Apple know what the content was that was flagged if all they are provided with is a list of hashes? I completely agree it's ludicrous, but there are plenty of countries that want that exact functionality.
Yeah, I think this is what freaked everyone out, but it's pretty clear the intention of on-device scanning was that it would keep working even if they added client-side encryption of photos before upload.
The whole point of the protocol (as described in the Apple whitepaper) is to allow clients to attest to perceptual hash matches without the server having access to the plaintext.
So, the irony of all of this is that the Apple design is effectively more private than the status quo, but everyone freaks out about it.
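As a toy stand-in for "matching without plaintext" (much weaker than the PSI-plus-threshold construction in Apple's whitepaper, which hides even individual match results, but it shows the direction): the device could transmit only a keyed hash of each photo's perceptual fingerprint, never the photo itself.

    import hashlib
    import hmac

    # Hypothetical stand-in: the device sends only a keyed hash of each
    # photo's perceptual fingerprint; the server never receives the photo.
    SHARED_KEY = b"hypothetical-shared-key"

    def fingerprint_token(perceptual_hash: bytes) -> str:
        # What the device transmits instead of the photo itself.
        return hmac.new(SHARED_KEY, perceptual_hash, hashlib.sha256).hexdigest()

    # Server side: precomputed tokens for known fingerprints (placeholder values).
    known_tokens = {fingerprint_token(b"\x00" * 8), fingerprint_token(b"\xff" * 8)}

    def server_check(token: str) -> bool:
        # The server learns only list membership, never the image contents.
        return token in known_tokens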
There is now, as far as I can tell, a system that can flag any photo that matches a perceptual hash for manual review by Apple or by other parties as required by law. Have any screenshots of code or confidential operational documents? Or photos placing you at private events attended by political dissidents (as defined by authoritarian regimes)? You’re one no-code config change away from Apple being able to exfiltrate this content from your device and deliver it to a third party. Engineers at Apple might not even know it was happening, so they couldn’t act as canaries. It’s a dangerous backdoor waiting to happen.
For what it's worth, Apple is a very deliberate company with immense control over their supply chain. Surveillance could very well be their internal MO; we have no way to hold them accountable in that regard. Considering how they want to generate low-resolution hashes of your content on-device before it's shipped off to their servers... yeah, I can see the two aligning. It's part of a larger, internal push to increase liability and weed out "bad actors".
I also believe Apple doesn't really want to scan your photos on their servers. I believe their competitors do, and they consider this compromise (scanning on device against hashes) to be their way of complying with CSAM demands while still maintaining their privacy story.
First, standard disclaimer on this topic: there were multiple independent technologies announced. I assume you are referring specifically to content hash comparisons on photos uploaded to Apple's photo service, which Apple performs on-device rather than in-cloud.
How is this situation different from an oppressive government "asking" (which is a weird way of describing compliance with laws/regulations) for this sort of scanning in the future?
Apple's legal liability and social concerns would remain the same. So would the concerns of people under the regime. Presumably the same level of notification and ability of people to fight this new regulation would also be present in both cases.
Also, how is this feature worse than other providers which already do this sort of scanning on the other side of the client/server divide? Presumably Apple does it this way so that the photos remain encrypted on the server, and release of data encryption keys is a controlled/auditable event.
You would think the EFF would understand that you can't use technical measures to either fully enforce or successfully defeat regulatory measures.
There's already a problem that Apple can't verify the hashes. Say a government wants to investigate a certain set of people. Those people probably share specific memes and photos. Add those hashes to the list and now you have reasonable cause to investigate these people.
Honestly this even adds to the danger of hash collisions because now you can get someone on a terrorist watch list as well as the kiddy porn list.
And then Apple reviews the account and sees that what was flagged was not CSAM. And again, the hashes aren’t of arbitrary subject matter, they’re of specific images. Using that to police subject matter would be ludicrous.
The question is why Apple implemented this feature in the first place. There was no reason for them to suddenly expand their image scanning to the devices themselves and risk their position as the self-proclaimed saviors of privacy - and still they did exactly that. There had to be some push from the government behind all of that, otherwise this debacle just doesn't make any sense.
I bet people would have had no problem with the CSAM on-device scanning if it involved neither Apple employees snooping on your photos nor government reporting.
e.g. if they made it so their software simply refused to upload a photo and informed the user why it refused, then there's no privacy issue (at least not until they do add a backdoor later on).
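Something like the sketch below, with hypothetical names, where the refusal happens entirely on the device and no report leaves it:

    # Hypothetical sketch of a client that refuses the upload locally and
    # tells the user why, without sending any report or telemetry anywhere.

    blocked_fingerprints = {"placeholder-fingerprint-1", "placeholder-fingerprint-2"}

    def try_upload(photo_fingerprint: str, upload_fn) -> bool:
        if photo_fingerprint in blocked_fingerprints:
            print("This photo cannot be uploaded to the cloud library.")
            return False  # nothing is logged or reported off the device
        upload_fn()
        return True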
Find the websites distributing it, infiltrate them, generally do the legwork to find what's going on - which is exactly what they've been doing.
"But the children!" is not a skeleton key for privacy, as far as I'm concerned.
I reject on-device scanning for anything in terms of personal content as a thing that should be done, so, no, I don't have a suggested way to securely accomplish privacy invasions of this nature.
I'm aware that they claim it will only be applied to iCloud based uploads, but I'm also aware that those limits rarely stand the test of governments with gag orders behind them, so if Apple is willing to deploy this functionality, I have to assume that, at some point, it will be used to scan all images on a device, against an ever growing database of "known badness" that cannot be evaluated to find out what's actually in it.
If there existed some way to independently have the database of hashes audited for what was in it, which is a nasty set of problems for images that are illegal to store, and to verify that the database on device only contained things in the canonical store, I might object slightly less, but... even then, the concept of scanning things on my private, encrypted device to identify badness is still incredibly objectionable.
In the battle between privacy and "We can catch all the criminals if we just know more," the government has been collecting huge amounts of data endlessly (see Snowden leaks for details), and yet hasn't proved that this is useful to prevent crimes. Given that, I am absolutely opposed to giving them more data to work with.
I would rather have 10 criminals go free than one innocent person go to prison, and I trust black box algorithms with that as far as I can throw the building they were written in.
Whoever controls the hash list controls your phone from now on. Period. End of sentence.
Apple has not disclosed who gets to add new hashes to the list of CSAM hashes or what the process is to add new hashes. Do different countries have different hash lists?
Because if the FBI or CIA or CCCP or KSA wants to arrest you, all they need to do is inject the hash of one of your photos into the “list” and you will be flagged.
Based on the nature of the hash, they can’t even tell you which photo is the one that triggered the hash. Instead, they get to arrest you, make an entire copy of your phone, etc.
It’s insidious. And it’s stupid. That Apple is agreeing to do this is disgusting.
And it doesn’t make sense. If I were a pedophile and I took a new CSAM photo, how long would it take for that specific photo to get on the list? Months? Years? As long as pedophiles know that their phones are being scanned, they won’t use iPhones for their photos. And then it will be only innocent people like me who get scanned for CSAM, with the potential for that to be used against us in the future.
If they really cared about CSAM, this feature is useless and stupid. All it does is make regular people vulnerable to Big Brother tactics which we know already exist.
How would this let them scan the text of your signal messages or even your iMessages?
How would this let them search for a subversive PDF?
It wouldn’t. This is just fearmongering.
> The result would be a total loss of privacy: Apple would essentially be privatising mass surveillance, without any oversight, however small, or accountability.
You are talking about an imagined system someone could build in the future, not the system Apple has put in place.
I guess if you really want to get technical, they aren’t explicitly drawing that conclusion. They’re simply saying that Apple announced they’d scan files for CSAM, and now Apple is scanning files for “some reason.” Draw your own conclusion.
I think sending a one-way encrypted hash of a file, a la PhotoDNA [0], is a fair compromise. But if Apple is outright uploading your entire photo library for “analysis” without an explicit opt-in, that’s a different beast altogether. And if Apple today isn’t using this for surveillance, Apple tomorrow may be.
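To make the "one-way hash" point concrete, here is a generic average-hash sketch (PhotoDNA's actual algorithm isn't public, so this is only illustrative): the fingerprint is compact and lossy, and comparison uses Hamming distance rather than exact equality.

    # Generic "average hash" over an 8x8 grayscale thumbnail (values 0-255).
    # PhotoDNA itself is proprietary; this only illustrates the idea of a
    # compact, lossy fingerprint compared by Hamming distance.

    def average_hash(pixels_8x8):
        # pixels_8x8: flat list of 64 grayscale values.
        avg = sum(pixels_8x8) / len(pixels_8x8)
        bits = 0
        for value in pixels_8x8:
            bits = (bits << 1) | (1 if value >= avg else 0)
        return bits  # 64-bit fingerprint

    def hamming_distance(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    # Two similar thumbnails should produce nearby hashes.
    img_a = [10] * 32 + [200] * 32
    img_b = [12] * 32 + [198] * 32
    assert hamming_distance(average_hash(img_a), average_hash(img_b)) <= 5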
The supposition was that, if the scanning had no reporting capability, then Apple could still claim a lack of capability. They could respond to government demands with, "Sorry, our software only blocks uploads. We have no ability to get telemetry on what uploads are blocked or how often."
That proposal probably wouldn't work for a lot of reasons though. The largest blocker is that (IIUC) the NCMEC won't share a DB of offending signatures without an NDA, so Apple probably can't load it onto consumer devices.