
They ended up never rolling out the CSAM scanning implementation due to the backlash they received from it.



As I understand it, they delayed rolling out CSAM scanning on-device due to the backlash.

Maybe because they can’t scan them for CSAM?

Do you actually think they didn't have CSAM scanning implemented server-side before this?

Technically they backed down from CSAM scanning after public pressure. Can we say the same thing about Google and/or Meta?

I don't see any reason to implement CSAM scanning on-device as opposed to doing it on the server if it wasn't to switch to a model where the server doesn't have access to the data.

How do you reconcile that with their move into CSAM scanning?

The on-device CSAM scan must be canceled, not delayed. It is a dangerous payload, a future backdoor if you will, bundled with friendlier offline opt-in features and wrapped in "think of the children" paper.

This is not about compliance, the relevant law specifically says that companies are not required to proactively scan for CSAM.

> The device performs involuntary "CSAM" scans

It does not.


Not scanning for CSAM on your own servers isn't a realistic expectation.

What makes it not a realistic expectation? According to other references, the USA cannot compel companies to run scans on their own customers.


The actual problem is not CSAM scanning.

The actual problem is that they've created a great surveillance tool which will inevitably get broader capabilities, and they are normalising client-side data scanning (we need to eradicate terrorism, now we need to eradicate human trafficking, and now we need to eradicate tax evasion, oh, we forgot about gay Russians, hmm, what about Winnie memes?).


Local CSAM scanning wasn't an open book either. Also, it wasn't mutually exclusive with cloud scanning.

That's why we shouldn't call it scanning for CSAM. We should call it mandatory submission of all private communication to government inspection. Fighting CSAM is just the alleged, first, use-case.

But people are mad precisely because the CSAM scanning is being done as "frontend stuff".

While the on-device CSAM scanning was a huge overreach, I'm not sure how you could leverage that system for things like Amber/Silver alerts or threats of violence. It's not really a backdoor, more of a snitch system.

Most of the HN crowd presumably isn't actually worried about CSAM detection itself; it's the local-side scanning, where you lose control over your own hardware.

Why exactly are we believing the author's claims? The link on the supposed "2.0" announcement on mandatory CSAM scanning leads to no such announcement. Nor do any of the other 50 links on the page, as far as I can tell.

Shutting down this new type of scanning is not the same as no longer scanning for CSAM.

It's curious how the big providers have been scanning for CSAM for YEARS with nothing making the news...because exact hash matching works very differently and doesn't produce false positives the way perceptual matching does.
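The distinction the comment above draws can be sketched in a few lines: a cryptographic hash matches only bit-identical files, while a perceptual hash compares images by distance and can therefore match edited copies but also collide on unrelated ones. This is a toy average-hash, not the actual PhotoDNA or NeuralHash algorithms:

```python
import hashlib

def exact_hash(data: bytes) -> str:
    # Cryptographic hash: any single-bit change yields a completely different digest.
    return hashlib.sha256(data).hexdigest()

def toy_perceptual_hash(pixels: list[int]) -> int:
    # Toy "average hash": one bit per pixel, set if the pixel is above the mean.
    # Real systems (PhotoDNA, NeuralHash) are far more sophisticated.
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two hashes.
    return bin(a ^ b).count("1")

# Exact matching: flipping one byte breaks the match entirely.
a = b"original image bytes"
b = b"original image bytez"
print(exact_hash(a) == exact_hash(b))  # False

# Perceptual matching: a slightly brightened "image" hashes to the same bits,
# which is what lets it catch edited copies -- and what makes false positives possible.
img = [10, 200, 30, 180, 20, 190, 40, 170]
brighter = [p + 5 for p in img]
print(hamming(toy_perceptual_hash(img), toy_perceptual_hash(brighter)))  # 0
```

The "fuzziness" that makes perceptual hashes useful against re-encoded or cropped images is exactly the property that lets benign images land within the match threshold.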


That CF tool is voluntary, it does not run automatically. Also, I haven't seen many people argue that CSAM scanning shouldn't happen on online cloud services. On local devices though? Massively over the red line.
