
> once they made it to the authorities, would then be reviewed in detail and shown as a false positive, and then ignored in the future / whitelisted

Which typically means a court case, or at least questioning by the police. That can be quite a destructive event in someone's life. Also, there's no whitelisting mechanism outlined in the paper, nor can I imagine one that would work: either you've now created a way to distribute CP by abusing the whitelist fingerprint mechanism, or you match only exact cryptographic hashes, which is an expensive CPU operation and doesn't scale, since every whitelisted image would have to be in the list (see the sketch below).

Also, your entire premise is predicated on careful and fair review by the authorities. At scale, I've not seen that actually play out. Instead, either the police are overworked and don't investigate legitimate cases (too many false positives), or they aggressively pursue every case to avoid missing any.
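
To make the exact-hash option concrete, a whitelist keyed on cryptographic digests would look roughly like this (the hash choice and the digest below are placeholders of my own, not anything from the paper):

    import hashlib

    # Hypothetical whitelist of SHA-256 digests of images that were reviewed
    # and cleared as false positives. Every cleared image needs its own entry.
    WHITELIST = {
        # placeholder digest, not a real entry
        bytes.fromhex("9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"),
    }

    def is_whitelisted(image_bytes: bytes) -> bool:
        # Exact match only: re-encoding, resizing, or changing a single pixel
        # yields a different digest, so variants of a cleared image fall through.
        return hashlib.sha256(image_bytes).digest() in WHITELIST

The list grows with every cleared image and never covers near-duplicates, which is the scaling problem described above.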




> I keep seeing this suggestion, and it seems not to occur to the proponents that it would simply land someone with destruction-of-evidence charges.

Only if someone can prove the data was there in the first place.


> Despite there being reasonable solutions like bloom filters and client-side hash detection, so that known child abuse material can be detected without compromising the privacy of 99.99999% of users?

This is not a good argument. “Known child abuse material” is the tip of the iceberg. There’s nothing stopping people from creating new “child abuse material”, and the people who are doing that sort of thing are the ones who are more important to catch.
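
For reference, the bloom-filter idea from the quote boils down to something like the sketch below (sizes and hashing are made up for illustration, not any vendor's actual parameters). It can only ever answer "possibly in the known set", which is exactly why it never touches newly created material:

    import hashlib

    class BloomFilter:
        # Probabilistic set of known-bad hashes that could be shipped to the client.
        def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 7):
            self.size_bits = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, item: bytes):
            # Derive several bit positions from the item with salted SHA-256.
            for i in range(self.num_hashes):
                digest = hashlib.sha256(bytes([i]) + item).digest()
                yield int.from_bytes(digest[:8], "big") % self.size_bits

        def add(self, item: bytes) -> None:
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def probably_contains(self, item: bytes) -> bool:
            # False positives are possible; false negatives are not.
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(item))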


>I mean, what's the attack vector here? The plan, as far as I'm aware, is to upload a list of hashes to each device that have been vetted by multiple child protection agencies.

I hope not.

If hashes are uploaded to devices, they can be extracted, and images that collide with them can be crafted.

I think they're going to hash images locally as they're uploaded and send the hash along with each image. Then, if the hash matches one in their database, it gets flagged.

The problem then is: if they're doing the matching on their side, what prevents them from receiving an order that forces them to match against other images?
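
My rough mental model of that flow, purely as a guess at its shape rather than the actual protocol (SHA-256 stands in for the perceptual hash to keep the sketch short):

    import hashlib

    KNOWN_HASHES: set[str] = set()  # server-side list; the client never sees its contents

    def client_upload(image_bytes: bytes) -> dict:
        # Client hashes the image locally and ships the hash with the upload.
        # Real systems use a perceptual hash (PhotoDNA, NeuralHash) so that
        # re-encoded or resized copies still match; SHA-256 is a stand-in here.
        return {"image": image_bytes, "hash": hashlib.sha256(image_bytes).hexdigest()}

    def server_check(upload: dict) -> bool:
        # Whoever controls KNOWN_HASHES decides what gets flagged; nothing in
        # this design limits the list to CSAM, which is the concern above.
        return upload["hash"] in KNOWN_HASHES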


> ...if you have CP or photos in a CP database...

Which database? I get the impression that people think there is a singular repository for thoroughly vetted and highly controlled CP evidence submission. No such thing exists.


> the result of which would be just that X hashes matched but not which X

That means you can't prove an incriminating file was not deleted even if you're the victim of a false positive. So they will suspect you and put you through the whole police investigation routine.


> moving back to laptops and desktop machines?

Sure, as long as they never upload those images to somewhere that already does scanning (which is basically every cloud storage provider, I believe).

> As long as it is transferred off the phone before someone gets caught with it and its hash entered into the database, they wouldn't be flagged.

Yep, that's definitely one flaw in the plan.


>probably would be harder to prosecute in the event that some files become illegal

Cynically, I'd assume they'd just use it to add a conspiracy-to-commit-a-crime charge to the list, since you'd technically need to coordinate with other people.


> Try explaining why that's not real to a clueless judge.

Judges aren't stupid. You can easily explain why random data isn't necessarily anything. You can't explain your way out of unencrypted data that is objectively there.

> Anyway, i hope the legal system would recognize the difference between having intent to store cp and not having such intent.

If you know how blockchains work, then it is intentional.

> Otherwise anyone can be made or proven to have stored it.

Not for any real standard of proof.


> it still requires absolute trust in the entity who provides the list of hashes to not put anything but CSAM there

Yes, but isn't that already the case? If a government, a three-letter agency, or even a local detective wants to fabricate a case against me for some reason, how difficult would that actually be in practice? Does this CSAM hash list really give them anything fundamentally new?


> even when that data never left the end user devices

Do we know that to be true?

In any case, I'm pretty sure they don't bring criminal cases like this frivolously. There's obviously something here.


> In theory you could automate it [..]

Sorry for the somewhat off-message thought, but perhaps this kind of thing is actually more secure if you _don't_ attempt to automate it?

Maybe the person receiving the request should actually look up the phone number of the police department or court that allegedly issued or approved it, and then call that number (note: not the number printed on the request itself).

Surely if that were the SOP, this kind of stuff would just stop?


> Who needs SWATing when you can send a CP pic (either real or with hash collision as per the thread few days ago) from a virtual overseas number/service and get FBI van to show up as well?

You're talking as if collisions are trivial to produce. I'd bet they've thought deeply about this area. First, you'd need a real hash to even try (and those are hidden). Second, sending genuine material means it must already be in their database to trigger anything, which already says a lot about the sender and is worth reporting to the police. It's fairly easy to prove that someone simply sent it to you. A single photo doesn't trigger anything anyway. Besides, the sender would have to know that the photos get uploaded to the cloud automatically for any of this to matter.

> What about injecting code into a public website to download same pic into local browser cache without user’s knowledge?

At least US legislation is explicit that the user must willingly obtain or download CSAM, and that has to be proven. So in the end this isn't harmful to the user.

There's a lot of speculation here, but it doesn't really lead to consequences. Almost every system can be abused in theory, but whether that actually means anything is a different story.


> For example, how to prove random data isn't an encrypted blob?

It is for the prosecution to prove that the random data is actually encrypted (or more accurately, to prove that you possess the 'key' to some electronic data).

> What if you did encrypt it but with a throwaway password which you obviously don't remember?

It is for the prosecution to disprove your sworn statement to this effect.

> How can bits possibly be so dangerous to warrant jailtime for you unless you choose to reveal them?

There could be reasonable grounds for believing that the bits conceal child pornography, or evidence of some other serious crime.


> It seems like the simplest solution here would be to not ban accounts automatically for CSAM detections, but to have a process to do so based on police recommendation.

The police aren't going to recommend a ban. They don't want people banned. They want people arrested, tried, and convicted for criminal possession. They're going to recommend keeping the account open until they have enough evidence to take it to court.


> even opening the file would be illegal

Not an expert, but I've worked with large companies that used humans to filter user content. Either they were breaking the law en masse deliberately or there is a safe harbor for doing manual filtering.

Again, I don't know the answer, but I'm very skeptical that it's always illegal to manually filter user content. Presumably swift deletion (and cache purging, if possible), reporting, and record-keeping would be important. Although that does raise the issue of how you preserve evidence for the police to act on if you delete it immediately... geez, this is such a minefield.


> signed by you

Signed by a private key which has no connection to my legal identity, and which can be generated in a split second. Signatures give strong pseudonymity.
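
To illustrate how cheap that is, here's a sketch using the Python `cryptography` package and Ed25519 (the real protocol may use a different curve, but the principle is the same):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # A fresh keypair takes microseconds to mint and carries no identity at all.
    key = Ed25519PrivateKey.generate()
    pub = key.public_key()

    message = b"hello relay"
    signature = key.sign(message)

    # Anyone holding `pub` can check the message came from whoever holds `key`,
    # but nothing ties that keypair to a legal identity.
    pub.verify(signature, message)  # raises InvalidSignature on a mismatch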

> Relays don’t have to store any message on disk

This is a good idea as long as there are relatively few messages, which could still mean hundreds of thousands if they're short. There's no promise of availability, so an occasional reboot is free to erase them.

> Then you are going to convince a judge that the thing on my disk belongs to me

I'm afraid it can be the other way around, if the material is sensitive enough. You'd have to explain to the judge how it ended up on your computer.

> every server in the world has this vulnerability

No, only a server that lets users upload UGC without requiring enough background information for law enforcement to find out their real identities if need be. That should be an adequate explanation for a judge in a civilized country where government censorship is not a thing.


> and they want to check if any of the hashes are the same as hashes of child porn?

... without any technical guarantee or auditability that any of the hashes they're alerting on are actually of child porn.

How much would you bet against law enforcement abusing their ability to use this, adding hashes to find out who has anti-government memes, or images of police committing murder, on their phones?

And that's just in "the land of the free". How much worse will the abuse be in countries that, say, bonesaw journalists to pieces while they're still alive?


> Investigators said content stored on the encrypted hard drive matched file hashes for known child pornography content

How on earth is this supposed to work? Unless they can decrypt the hard drives, I'm pretty sure this is impossible to deduce.

Maybe he used Freenet or something on his unencrypted hard drive?


> When a virus scanner finds a virus, it alerts me and I can either quarantine or delete it. It’s up to me to decide if what it’s found is actually a virus, or a false positive.

This is also a bit superficial. If you're breaking the law, you don't get to decide for yourself whether you're breaking it or not; that's up to a judge.

While you can quarantine or delete the virus, the AV vendor still gets all the stats. Those may not include PhotoDNA matches, but cryptographic hashes of exact matches are included. It would still be perfectly legal to report CSAM based on those matches, and we can't be sure whether that's being done or not.

In the case of Windows Defender, what if automatic sample submission is enabled? Uploading and storing the file effectively makes Windows a cloud provider in that specific scenario, and cloud providers are required by law to report CSAM.

Who knows whether PhotoDNA is also applied to that content and simply hasn't been disclosed? It would be legal, and there's no obligation to disclose it.

