
How would Apple know what the content was that was flagged if all they are provided with is a list of hashes? I completely agree it's ludicrous, but there are plenty of countries that want that exact functionality.



How do you know there aren’t bad actors working at the NCMEC? If I know that adding a hash to a list will get it flagged, and I could conveniently arrest or discredit anyone I wanted, I would certainly send people to work there.

How will Apple know whether a hash is for non-CSAM content? Spoiler alert: they won’t.

And Apple claims it will be reviewed by a human. Sure, just like YouTube copyright claims? Or will it get automated in the near future? And what about in China? Or Saudi Arabia or other countries with less human rights?

The point is that it is an easy way to get tagged by a government or bad actors as a pedophile. It’s sickening that Apple would let this “technology” into their products.


There's already a problem that Apple can't verify the hashes. Say a government wants to investigate a certain set of people. Those people probably share specific memes and photos. Add those hashes to the list and now you have reasonable cause to investigate these people.

Honestly this even adds to the danger of hash collisions because now you can get someone on a terrorist watch list as well as the kiddy porn list.


And then Apple reviews the account and sees that what was flagged was not CSAM. And again, the hashes aren’t of arbitrary subject matter, they’re of specific images. Using that to police subject matter would be ludicrous.

Isn't it? Honestly asking.

Apple are having people's handsets check file hashes against a hash list and reporting anyone who has files with hashes on the list, right? And there is a threshold of matches below which you don't get reported. Above that, they lock your account.

The fact that they're currently limiting it to image files and to the US doesn't seem like it makes much of a difference.

Am I missing some clever defence against misuse?
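The scheme described above can be sketched in a few lines. Everything here is invented for illustration (the hash values, the blocklist, and the threshold); the real system reportedly uses perceptual hashes and cryptographic threshold secret sharing rather than plain set lookups:

```python
# Hypothetical sketch of threshold-based hash matching.
# Hash values, blocklist, and threshold are all made up.

BLOCKLIST = {"a1f3", "9c2e", "77b0", "d4e9"}  # flagged image hashes
THRESHOLD = 3  # matches below this are never reported

def count_matches(photo_hashes):
    """Count how many of a user's photo hashes appear on the blocklist."""
    return sum(1 for h in photo_hashes if h in BLOCKLIST)

def should_flag(photo_hashes, threshold=THRESHOLD):
    """The account is flagged only once matches reach the threshold."""
    return count_matches(photo_hashes) >= threshold

assert not should_flag(["a1f3", "9c2e", "0000"])  # 2 matches: below threshold
assert should_flag(["a1f3", "9c2e", "77b0"])      # 3 matches: flagged
```

Nothing in a sketch like this distinguishes image hashes from any other kind of hash, which is the worry being raised.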


Well first of all, it's not provided by the US government. It's a non-profit, and Apple has already said they're going to look for another db from another nation and only include hashes that appear in both databases (the intersection, not the union) to prevent exactly this kind of attack.
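Apple's stated mitigation is to ship only those hashes that appear in the databases of at least two child-safety organizations in different jurisdictions. As a sketch, with invented hash values and purely hypothetical database contents:

```python
# Illustrative only: hashes from two independent organizations in
# different jurisdictions. All values here are made up.

ncmec_hashes = {"a1f3", "9c2e", "77b0"}      # hypothetical US database
other_org_hashes = {"9c2e", "77b0", "beef"}  # hypothetical second jurisdiction

# Only hashes present in BOTH lists ship to devices, so a single
# rogue organization cannot unilaterally insert a target hash.
shipped = ncmec_hashes & other_org_hashes
assert shipped == {"9c2e", "77b0"}
assert "a1f3" not in shipped  # present in only one database: excluded
```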

If what you mean by blinded is that you don't know what the source image is for the hash, that's true. Otherwise Apple would just be putting a database of child porn on everyone's phones. You gotta find some kind of balance here.

What do you mean you can't verify it doesn't contain extra hashes? Meaning that Apple will say here are the hashes in your phone, but secretly will have extra hashes they're not telling you about? Not only is this the kind of thing that security researchers will quickly find, you're assuming a very sinister set of features from Apple that they'll only tell you half the story. If that were the case, then why offer the hashes at all? It's an extremely cynical take.

The reality is that all of the complaints about this system started with this specific implementation, and as details get revealed, it's now all about future hypothetical situations. I'm personally concerned about future regulations, but those regulations could/would exist independently of this specific system. Further, Dropbox, Facebook, Microsoft, Google, etc. all have user data unencrypted on their servers and are just as vulnerable to said legislation. If the argument is that this is searching your device, well, the current implementation is that it's only searching what would be uploaded to a server anyway. If you suggest that could change to anything on your device due to legislation, wouldn't that happen anyway? And then what is Google going to do... not follow the same laws? Both companies would have to implement new architectures and systems to comply.

I'm generally concerned about the future of privacy, but I think people (including myself initially) have gone too far in losing their minds.


First off, I don’t think this is some evil plan to kill our privacy. I think this project is done with good intentions, if nothing else.

However I think this is an interesting question: how does Apple know that the hashes they’re supplied match CSAM, and not, say, anti-government material? How would they know if the people they got hashes from started supplying anti-government hashes? Apple will only be receiving the hashes here - by design, even they won’t have access to the underlying content to verify what the hashes are for.


In the press release Apple says they are using hashes of hashes.

Apple has no visibility into the original image that generated the hash, so the gov can compromise the list at the source and Apple would have plausible deniability.
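A "hash of a hash" can be modeled with an HMAC keyed by a server secret, as a stand-in for the elliptic-curve blinding step in Apple's actual private set intersection protocol; the key and hash values below are made up:

```python
# Sketch of blinding: devices only ever see keyed digests of the
# perceptual hashes, so they can't recover the originals.
# SERVER_SECRET and the hash values are invented for this example.

import hashlib
import hmac

SERVER_SECRET = b"made-up-blinding-key"

def blind(image_hash: str) -> str:
    """Server-side: blind a perceptual hash before shipping it to devices."""
    return hmac.new(SERVER_SECRET, image_hash.encode(), hashlib.sha256).hexdigest()

blinded_db = {blind(h) for h in ["a1f3", "9c2e"]}

# The device (or Apple, for hashes it didn't originate) can check membership
# but cannot tell what source image any entry corresponds to.
assert blind("a1f3") in blinded_db
assert blind("ffff") not in blinded_db
```

This is exactly the deniability problem the comment describes: whoever supplies the upstream list controls what the opaque entries mean.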


The FT article mentioned it was US only, but I'm more afraid of how other governments will try to pressure Apple to adapt said technology to their needs.

Can they trust a random government to give them a database of only CSAM hashes and not insert some extra politically motivated content that they deem illegal?

Because once you've launched this feature in the "land of the free", other countries will require their own implementation for their own needs and demand (through local legislation, which Apple will need to abide by) control of said database.

And how long until they also scan browser history for the same purpose? Why stop at pictures? This is opening a very dangerous door that many here will be uncomfortable with.

Scanning on their premises (which, as far as we know, they can?) would be a much better choice; this is anything but (as the linked "paper" tries to claim) privacy-forward.


Thank you for this perspective. I've never worked at an organization of this magnitude, so I am definitely lacking some perspective.

> It's also clear Apple put a lot of thought into addressing the privacy concerns for this. Technologically, it's sophisticated, impressive.

I'm not sure about this. How is a perceptual hash sophisticated and impressive given that it can be abused by governments demanding Apple scan for political content, etc?


My point was, what if you have content in there that is CSAM in some places but isn't in others (for instance, drawings)? If Apple employees report it to the authorities in a state where it isn't illegal, they've just suspended your account and reported you to the authorities without any reason.

The only CSAM Apple will flag has to come from multiple organizations in different jurisdictions; otherwise, those hashes are ignored.

And since no credible child welfare organization is going to have CSAM that matches stuff from the worst places, there's no simple or obvious way to get them to match.


The OP mentions that two countries have to agree to add a file to the list, but your concern is definitely valid:

> Perhaps the most concerning part of the whole scheme is the database itself. Since the original images are (understandably) not available for inspection, it's not obvious how we can trust that a rogue actor (like a foreign government) couldn't add non-CSAM hashes to the list to root out human rights advocates or political rivals. Apple has tried to mitigate this by requiring two countries to agree to add a file to the list, but the process for this seems opaque and ripe for abuse.


It does that on the device. All it takes is enabling it by default, instead of triggering the scan prior to upload, and adding more hashes to flag. The result would be a total loss of privacy; Apple is basically privatising mass surveillance with this, without oversight, no matter how small, or accountability.

The db is encrypted and uploaded to user devices. If each country gets a different db, the payload will be different in each country, which does not make sense if it's all supposed to be CSAM. So Apple would likely just say "these were mandated by the US government for US citizens," putting the ball in their court, unless they are forbidden to say so, in which case they'll say nothing, but we all know what it means. That's when you know you should change phones and stop using all cloud services, because obviously all cloud services scan for the same thing.

On the flip side, though, at least Apple will have given us a canary. And that's why I don't think Apple will be asked to add these hashes: if the governments don't want their citizens to know what's being scanned server side, pushing the equivalent data to clients would tip their hand. They might just write Apple off as a loss and rely on Google, Facebook, etc.
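The canary idea above amounts to comparing fingerprints of the database blob shipped to devices in different regions. A minimal sketch, where the payloads and function name are entirely hypothetical:

```python
# Hypothetical canary check: if the encrypted hash database were
# region-specific, its digest would differ between countries, and users
# comparing digests could detect that. Payloads here are invented.

import hashlib

def db_fingerprint(db_bytes: bytes) -> str:
    """Digest of the on-device database blob, suitable for public comparison."""
    return hashlib.sha256(db_bytes).hexdigest()

us_db = b"encrypted-db-payload-v1"
cn_db = b"encrypted-db-payload-v1-plus-extra-hashes"

# Identical payloads worldwide is what you'd expect if it's all CSAM;
# a mismatch would be the tipoff the comment describes.
assert db_fingerprint(us_db) != db_fingerprint(cn_db)
```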


> How will Apple know whether a hash is for non-CSAM content? Spoiler alert: they won’t.

As I said, the flagged content is reviewed by an Apple employee before it actually triggers an external report. If the flagged material is not in fact CSAM, it will not be reported.

> And Apple claims it will be reviewed by a human. Sure, just like YouTube copyright claims? Or will it get automated in the near future? And what about in China? Or Saudi Arabia or other countries with less human rights?

First of all, the volume of flagged CSAM content is much, much smaller than the volume of YouTube copyright claims. It's entirely plausible to ensure that a human reviews all flags. Second, Apple is actually constitutionally barred from automating this step entirely. You can thank Neil Gorsuch's decision in United States v. Ackerman for this. [1] The crux is that since NCMEC is a quasi-governmental entity, automatically sending CSAM-matched content to NCMEC without an Apple employee first inspecting the content would constitute an unreasonable search and seizure and would violate the 4th amendment.

[1] https://library.law.virginia.edu/gorsuchproject/united-state...


Literally, Apple said in their own FAQ that they are using a perceptual (similarity-based) hash and that their employees will review images when flagged. If that's not good enough (somehow), then even the New York Times article about it says the same thing. What other evidence do you need?

It is my understanding that Apple implemented a program that generates a hash of a file and compares it to a blacklist, and notifies Apple in the event of a match. It's not clear, but it appears this is only run when uploading to iCloud.

The blacklist itself is not maintained by Apple, but by the US government or a third party like NCMEC, which means Apple can't be sure content that isn't child abuse imagery hasn't made it onto the list. Perceptual hashes probably can't be abused to target non-image/video content because they're an inherently image-oriented technology.

Apple could, however, cause such a program to match on different criteria with a simple update, and such a change would likely be difficult to detect. Most of us assume Apple wouldn't voluntarily do such a thing, but it's very probable that they would do it involuntarily. The US government has already attempted to compel Apple to create a tool to compromise the security of an iPhone, and might have eventually succeeded in court if they hadn't gained access by other means. That fight took place in public, but the next one might well take place in secret.
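That understanding can be modeled as a small sketch. The blocklist and file contents are invented, and SHA-256 stands in for the perceptual hash the real system reportedly uses (so, unlike the real thing, near-duplicate images would not match here):

```python
# Hypothetical model of scan-on-upload: hash a file, compare it against a
# blocklist, and report only when the file is headed to cloud storage.
# Blocklist contents and file bytes are invented for this example.

import hashlib

BLOCKLIST = {hashlib.sha256(b"known-bad-image").hexdigest()}

def scan_on_upload(file_bytes: bytes, uploading_to_cloud: bool) -> bool:
    """Return True if a match should be reported to the operator."""
    if not uploading_to_cloud:
        return False  # as described: only files bound for iCloud are checked
    return hashlib.sha256(file_bytes).hexdigest() in BLOCKLIST

assert scan_on_upload(b"known-bad-image", uploading_to_cloud=True)
assert not scan_on_upload(b"known-bad-image", uploading_to_cloud=False)
assert not scan_on_upload(b"harmless-photo", uploading_to_cloud=True)
```

The comment's point is that the `uploading_to_cloud` guard and the contents of `BLOCKLIST` are both just data an update could change.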


Many of these steps are spun as being for the users' privacy.

But they also prevent the operators of the system – Apple Inc, quasi-governmental non-profits (like NCMEC), & government law-enforcement agencies – from facing accountability for their choices. No one else can confirm what content they're scanning for, or what 'threshold' or other review applies, before serious enforcement action is taken against an account.

Also, Apple is quite vague about whether the claim "Apple manually reviews all reports" means Apple employees can view all triggering content, even if it was never uploaded to Apple servers. (How would that work?)

It would be trivial to extend this system to look for specific phrases or names in all local communication content.

How will Apple say 'no' when China & Russia want that capability?

In fact, might this initial domestic rollout simply be the 1st step of a crafty multi-year process to make that sort of eventual request seem ho-hum, "we're just extending American law-enforcement tech to follow local laws"?


I’m not sure which side you’re arguing here.

The biggest concern about Apple’s system is that it’s very easy to add new items to a hash list. That is an argument about the technical similarity of scanning for CSAM and scanning for other things like classified documents (for example).

But there is a vast difference in principle. Pretty much everyone wants to stop child abuse. But many people—including major news organizations—believe citizens should sometimes have the opportunity to view classified documents.

Different categories of things to scan for will be different in principle, even if the technical approach is similar. This difference in principle is what Apple leans on when they say they will oppose any request to expand their system beyond CSAM.


> So Apple scans Chinese (or whoever) citizens libraries, finds "CSAM" and reports them to ICMEC which then reports them to the government in question.

If Apple finds that a particular hash is notorious for false positives, they can reject it / ask for a better one. And they’re not scanning your library; it’s a filter on upload to iCloud. The FUD surrounding this is getting ridiculous.
