I don't see any reason to implement CSAM scanning on-device rather than on the server, unless the goal is to switch to a model where the server doesn't have access to the data.
The on-device CSAM scan must be canceled, not delayed. It is a dangerous payload, a future backdoor if you will, bundled with friendlier offline opt-in features and wrapped in "think of the children" paper.
The actual problem is that they've created a great surveillance tool which will inevitably get broader capabilities, and they are normalising client-side data scanning (we need to eradicate terrorism, now we need to eradicate human trafficking, and now we need to eradicate tax evasion, oh, we forgot about gay Russians, hmm, what about Winnie the Pooh memes?).
That's why we shouldn't call it scanning for CSAM. We should call it mandatory submission of all private communication for government inspection. Fighting CSAM is just the alleged first use case.
While the on-device CSAM scanning was a huge overreach, I'm not sure how you could leverage that system for things like Amber/Silver alerts or threats of violence. It's not really a backdoor, more of a snitch system.
Most of the HN crowd presumably isn't actually worried about CSAM detection itself - it's the local-side scanning, where you lose control over your own hardware.
Why exactly are we believing the author's claims? The link to the supposed "2.0" announcement of mandatory CSAM scanning leads to no such announcement. Nor do any of the other 50 links on the page, as far as I can tell.
Shutting down this new type of scanning is not the same as no longer scanning for CSAM.
It's curious how the big providers have been scanning for CSAM for YEARS without it making the news... because those hashes work very differently and don't produce false positives the way this does.
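For what it's worth, the exact-match flavour of server-side hash checking looks roughly like this (a minimal sketch in Python; KNOWN_HASHES is a made-up placeholder, and real services use dedicated hash databases such as PhotoDNA rather than plain SHA-256):

    import hashlib
    from pathlib import Path

    # Placeholder digests standing in for a database of known-bad file hashes.
    KNOWN_HASHES = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def sha256_of_file(path: Path) -> str:
        """SHA-256 hex digest of the file's exact bytes."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def matches_known(path: Path) -> bool:
        # An exact hash only matches a byte-identical file: re-encoding, resizing,
        # or cropping changes the digest entirely, so it can't "approximately"
        # match an unrelated photo the way a perceptual hash (e.g. NeuralHash) can.
        return sha256_of_file(path) in KNOWN_HASHES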
That CF tool is voluntary; it does not run automatically. Also, I haven't seen many people argue that CSAM scanning shouldn't happen on cloud services. On local devices, though? Massively over the red line.