Apple wouldn't be able to tell whether it's Winnie or a political organizer, because they're only provided with hashes of the original images. Any participating government could include any image it wants without Apple's immediate knowledge.
The hashes have to come from the intersection of two databases maintained in two different jurisdictions, so the scenario you suggest is already out. Then you'd have to match _nearly exact photos_, which isn't a vector for general photos of some random minority. Then you'd need 30 such specific photos, a match against a second secret hash, and then a human reviewer at Apple has to confirm it's CSAM before anything else happens.
I think there are plenty of reasons to be concerned about future laws and future implementations, but let’s be honest about the real risks of this today as it’s currently implemented.
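The multi-step gate described above can be sketched in a few lines. This is a toy model, not the real protocol (which uses blinded hashes and threshold secret sharing rather than plain sets); all names here are hypothetical:

```python
# Toy sketch of the reporting gate: a hash only counts if it appears in
# BOTH jurisdictions' databases, and nothing is surfaced for human review
# until at least THRESHOLD photos match.

THRESHOLD = 30  # number of matching photos required before review

def flagged_for_review(photo_hashes, db_jurisdiction_a, db_jurisdiction_b):
    """Return the matching hashes only if the reporting threshold is met."""
    actionable = set(db_jurisdiction_a) & set(db_jurisdiction_b)
    matches = [h for h in photo_hashes if h in actionable]
    return matches if len(matches) >= THRESHOLD else []
```

The point of the intersection step is that no single government can unilaterally make a hash actionable; the threshold means a handful of accidental or adversarial matches surface nothing.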
At this point, with collisions so easily producible, a government could just modify some CSAM images to match the hashes of various leaked documents/etc. it wants to track. Then it doesn't even have to go through special channels; it can just submit the modified images for inclusion normally! (Not quite that simple, as it would still need to find out about the matches, but maybe that's where various NSA intercepts could help...)
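The reason such collisions are cheap is that a perceptual hash has far fewer bits than the image, so enormous numbers of images share each hash. Real attacks against NeuralHash perturb pixels by gradient descent; the toy "average hash" below (a hypothetical stand-in, not Apple's algorithm) shows the same principle in miniature:

```python
# Toy average hash: 1 bit per pixel, set if the pixel is above the mean.
# Because the hash discards almost all of the image's information, an
# attacker can steer an image onto any target hash with tiny nudges.

def ahash(pixels):
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def perturb_to_match(pixels, target_hash):
    """Place each pixel just above or below the mean to force the hash."""
    mean = sum(pixels) / len(pixels)
    return [mean + 1 if bit else mean - 1 for bit in target_hash]
```

A real adversarial perturbation keeps the image visually unchanged, which this toy does not attempt; it only illustrates that the hash constrains the image very weakly.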
The question that should be asked is whether you think it's OK for the U.S. government to look at every picture you take, have taken, store, and will store. The U.S. government will access, store, and track that information on you for your whole life. Past pictures. Present pictures. Future pictures.
I don't use Apple products, but if I found out Google was scanning my photos on photos.google.com on behalf of the government, I would drop them. I'm not saying it wouldn't hurt, because it definitely would, but in a capitalist country this is the only way to fight back.
With a broader rollout to all accounts, and scanning in iMessage rather than just Photos, there's one possible scenario if you could generate images that were plausibly real photos: spam them to someone before an election, let friendly law enforcement talk about the investigation, and let the target discover how hard it is to prove they didn't delete the original image used to generate the fingerprint. Variations abound: target that teacher who gave you a bad grade, etc. The idea would be credibility laundering: "Apple flagged their phone" sounds more like there's something there than, say, a leak to the tabloids or a police investigation run by a political rival.
This is technically possible now but requires you to actually have access to seriously illegal material. A feasible collision process would make it a lot easier for someone to avoid having something which could directly result in a jail sentence.
Winning would always be easy if you didn't have an adversary. Apple (having access to the original low-resolution photo) could add a relatively simple filter to their verification pipeline.
Even if they didn't have access to the original (for whatever reason), they could train their own learning algorithm (supervised by their manual verification checkers) to detect fake submissions.
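One simple version of such a filter is requiring agreement from a second, independent hash before anything reaches human review, so a collision crafted against the public hash alone is rejected. A minimal sketch, with all names hypothetical:

```python
# Hypothetical two-hash verification gate: a candidate image survives only
# if it matches the database entry under BOTH hash functions. An attacker
# who can only collide the public hash is filtered out here.

def passes_verification(image, match_id, public_hash, secret_hash,
                        public_db, secret_db):
    """Require agreement from two independent hash functions."""
    if public_hash(image) != public_db.get(match_id):
        return False
    return secret_hash(image) == secret_db.get(match_id)
```

The attacker never sees the secret hash function, so colliding against both simultaneously is vastly harder than colliding against one.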
Well, presumably at that point, someone in that position would just reveal their own files with the hash and prove to the public that they weren't illegal. Sure, it would be shitty to be forced to reveal your private information that way, but you would expose a government agency as fabricating evidence and lying about the contents of the picture in question to falsely accuse someone. It seems like that would be a scandal of Snowden-level proportions.
Sure, but so could the OS or app distributors. You need to establish a baseline of trust somewhere, and this will likely be on the official (or your cherry picked official) images, and you build from there.
I think you’re missing the point: this is not a technology to create politically neutral photos. How you take a photo, and even how you edit it later, is entirely your choice, as is what you take photos of in the first place. All this does is prove the photo was created by
(1) a trusted application
(2) using a builtin device camera
(3) of a device running attested trusted operating system
(4) and any subsequent edits reference such photo’s original hash.
Hopefully stronger assertions could come in the future. This is just to combat digital forgeries and synthetically generated images. It also does nothing against real-world forgeries like actors wearing costumes or makeup.
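The "edits reference the original hash" part of that chain can be sketched roughly as below. This is a hypothetical API: a real system would use device-bound asymmetric keys and hardware attestation, not a shared HMAC secret.

```python
# Rough provenance sketch: the camera signs the original photo's hash at
# capture time, and every edit record binds its own hash to that original.
import hashlib
import hmac

DEVICE_KEY = b"key-held-in-secure-enclave"  # assumption: attested device key

def sign_capture(image_bytes):
    """Trusted camera app signs the original photo's hash at capture."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    sig = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"original_hash": digest, "signature": sig}

def sign_edit(edited_bytes, provenance):
    """Each edit carries its own hash plus a reference to the original."""
    digest = hashlib.sha256(edited_bytes).hexdigest()
    payload = (digest + provenance["original_hash"]).encode()
    sig = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"hash": digest,
            "original_hash": provenance["original_hash"],
            "signature": sig}

def verify_edit(edited_bytes, record):
    """Check the edit is intact and chained to an attested original."""
    digest = hashlib.sha256(edited_bytes).hexdigest()
    payload = (digest + record["original_hash"]).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return digest == record["hash"] and hmac.compare_digest(expected, record["signature"])
```

A verifier walking this chain can confirm an edited photo descends from a camera-attested original, which is exactly the assertion (4) above describes.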
Now that Apple has proven they can do it, how long before it’s required by law, regardless of their wishes, and how long before Apple is forced to implement it for other types of images that governments may find objectionable or interesting?
You can't know what it's reading, though. The app could theoretically be sending all your photos to the government. It would be interesting to see someone reverse engineer it.
Seems to me that a 1x1 pixel image tag pointing at an image matching one of the signatures would be enough to trigger some kind of action, even if the end user doesn't see that single pixel. What then, does your phone get seized automatically?
Of course the civil disobedience way of dealing with this would be to find an entirely safe image which matches the hash so that every website can safely embed such an image and show the utter futility of the idea.
Not just false positives. You won't get the original images with the hashes, of course, so it's no problem for a hostile state to slip in a few hashes of other things it doesn't like: documents on off-colour politics, criticism of the prime minister, ...
Given that modifying just a single bit in an image results in a wildly different cryptographic hash digest (a perceptual hash like NeuralHash is deliberately more tolerant, but still only matches near-identical images), I think the risk is a little overblown. There are probably easier ways for authoritarian governments to figure out who's sending illegal content, like just taking somebody's device and looking at their messages.
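The contrast between the two hash families is easy to demonstrate. The "average hash" below is a toy stand-in for a perceptual hash, not NeuralHash itself:

```python
# A cryptographic hash avalanches: one changed bit gives an unrelated
# digest. A perceptual hash is built to do the opposite and survive
# small edits. Toy average hash used as the perceptual stand-in.
import hashlib

def sha256_hex(pixels):
    return hashlib.sha256(bytes(pixels)).hexdigest()

def average_hash(pixels):
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

img = [10, 200, 30, 180, 90, 60]
tweaked = [11, 200, 30, 180, 90, 60]  # one pixel changed by one level
```

The cryptographic digests of `img` and `tweaked` share nothing, while the perceptual hash is identical, which is precisely what makes perceptual hashes useful for matching photos and also what makes them attackable.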