Many of these steps are spun as being for the users' privacy.
But they also prevent the operators of the system – Apple Inc, quasi-governmental non-profits (like NCMEC), & government law-enforcement agencies – from facing accountability for their choices. No one else can confirm what content they're scanning for, or what 'threshold' or other review applies, before serious enforcement action is taken against an account.
Also, Apple is quite vague about whether the claim "Apple manually reviews all reports" means Apple employees can view all triggering content, even if it was never uploaded to Apple servers. (How would that work?)
It would be trivial to extend this system to look for specific phrases or names in all local communication content.
How will Apple say 'no' when China & Russia want that capability?
In fact, might this initial domestic rollout simply be the 1st step of a crafty multi-year process to make that sort of eventual request seem ho-hum, "we're just extending American law-enforcement tech to follow local laws"?
How would Apple know what the flagged content was if all they are provided with is a list of hashes? I completely agree it's ludicrous, but there are plenty of countries that want that exact functionality.
That is how they've described the system. After 30 or so matches, they'll be able to decrypt a version (presumably scaled down; the description is vague) of the images and manually review them. With the present system targeting CSAM imagery, unless the flagged images actually look like CSAM to the reviewers, they wouldn't be passed on to law enforcement.
This is where the slippery slope argument comes in. Apple would have to actively cooperate with governments to both detect and report other kinds of images. So if Tiananmen Square images were showing up in the review pile, Apple would have to know to pass those on to the Chinese government and be willing to.
The only other way they'd be passed on is if Apple drops the manual review portion.
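For what it's worth, here is a toy sketch in Swift of the flow as they've described it: match vouchers accumulate, and nothing becomes reviewable until the count crosses the threshold. This is not Apple's actual private set intersection / threshold secret sharing protocol, and every name in it (SafetyVoucher, reviewThreshold, record) is invented purely to show the shape of the thing.

    // Toy illustration only; not Apple's real protocol. All names are invented.
    struct SafetyVoucher {
        let imageID: String
        let matchedKnownHash: Bool   // did the on-device hash match an entry in the database?
    }

    struct AccountState {
        private(set) var vouchers: [SafetyVoucher] = []
        let reviewThreshold = 30     // "after 30 or so matches", per their description

        // Vouchers accumulate as photos are uploaded; nothing becomes reviewable
        // until the number of *matching* vouchers crosses the threshold.
        mutating func record(_ voucher: SafetyVoucher) -> Bool {
            vouchers.append(voucher)
            let matches = vouchers.filter { $0.matchedKnownHash }.count
            return matches >= reviewThreshold   // only then can the image derivatives be decrypted for review
        }
    }

    var account = AccountState()
    let reviewable = account.record(SafetyVoucher(imageID: "IMG_0001", matchedKnownHash: true))
    // `reviewable` stays false until roughly 30 matching vouchers have accumulated.

The slippery-slope worry above is exactly that nothing in this structure knows or cares what the database entries actually depict.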
I didn’t say it would be transparent or public. I’m a government plant for saying the government could just tell Apple to scan for whatever they want without messing with the CSAM stuff? I’m saying governments are already capable of ordering this kind of thing, gag orders included, so this specific tech implementation doesn’t really change that. And yes that’s bad and scary and we should probably try to prevent our governments from doing that.
But no, I don’t believe that a government could “fool” Apple by adding non-CSAM images to the database. The review step would catch that.
I don’t like on-device scanning, in principle and in precedent. I’m just saying this specific tech stack doesn’t seem like it would be useful for your surveillance scenario, and most of your criticisms don’t seem to be based on having read how this system actually works.
I’m AGAINST this system, I just wish the discussion here weren’t so full of misinformation and bad assumptions.
The supposition was that, if the scanning had no reporting capability, then Apple could still claim a lack of capability. They could respond to government demands with, "Sorry, our software only blocks uploads. We have no ability to get telemetry on what uploads are blocked or how often."
That proposal probably wouldn't work for a lot of reasons though. The largest blocker being that (IIUC) the NCMEC won't share a DB of offending signatures without NDA, so Apple probably can't load it onto consumer devices.
That's a very optimistic point of view. On the other hand, I and others speculated that the reason Apple wants to introduce code that scans local content on your device against a government-mandated database of "wrong content" was to appease law enforcement's desire for more control.
The human review phase is supposed to explicitly prevent that. I'm not sure I would put my faith there, especially if it's a flood of collisions and the reviewers are rated/paid on case clearance rate.
Further, this is step 1 of a process they have explicitly said they are looking to expand on [1], even going so far as to state it in bold font with a standout color.
So there's no telling that they won't expand it by simply scanning everything, regardless of iCloud usage, or pivot it to combat "domestic terrorism" or "gun violence epidemics" or whatever else they feel like.
It's an erosion of trust. Even if it isn't a full-stop erosion, it's something they intend to expand upon and won't be taking back.
>Recognition isn't something people would always be able to immediately do with CSAM imagery so Apple, which has already created a tool to recognize it, can assist with that.
You said you didn't want Apple to be the police, but you now want them to be the judge of what is and isn't CSAM?
>It can also create tools to reduce the overhead of reporting. If you make it easier to report, more people will do it. You're assuming that anyone that ever encounters CSAM would go out of their way to report it, which simply isn't true.
I don't know why you are assuming that potential reporters knowing about CSAM but not reporting it is anywhere close to as common as CSAM being unknown and therefore obviously unreported.
>My right to privacy is not an infringement of your rights so this argument has no bearing in reality. Law enforcement requires probable cause to get permission to surveil the population and there's been countless cases thrown out because of this violation, which, as it turns out, was even tested in the context of technology interfacing with NCMEC.
As the other reply stated, certain content does infringe on other people's rights. Also Apple isn't law enforcement. They don't need probable cause. One of the primary and unmentioned motivators here is that they don't want CSAM ending up on their servers and opening themselves up to legal action.
>Those people are also pushing the same false dichotomy that you are. There is no technical reason that this is required to enable E2EE for iCloud, it's purely speculation as to why Apple would roll this surveillance tool out, as a compromise to having a fully encrypted system.
It is a technical requirement once you accept the moral requirement that Apple doesn't want to enable the sharing of CSAM. Once again, you are ignoring that some people think enabling (or at least not curtailing) the sharing of CSAM is a moral failing that can't be accepted.
This part is legit though: governments around the world could pressure Apple to add other forms of surveillance, be it by adding hashes of non-CSAM, by pressuring NCMEC or other hash providers, or by extending its capabilities to all messages or photos on device.
In my opinion, this is the biggest concern, not the technology. Before, Apple could simply refuse by saying they didn't have the capability. But now that excuse is gone, and Apple's promise to manually review content and only report CSAM is the weakest link.
> There is a chance of false positives, so the human review step seems necessary...
You misunderstand the purpose of the human review by Apple.
The human review is not there because of false positives: the system is designed to have an extremely low rate of hits on images that aren't in the database, and the review invades your privacy regardless of who does it.
The human review exists to legitimize an otherwise unlawful search via a loophole.
The US Government (directly or through effective agencies like NCMEC) is barred from searching or inspecting your private communications without a warrant.
Apple, by virtue of your contractual relationship with them, is free to do so, so long as they are not coerced into it by the government. When Apple reviews your communications and finds what they believe to be child porn, they're then required to report it, and because the government is merely repeating a search that Apple already (legally) performed, no warrant is required.
So, Apple "reviews" the hits because, per the courts, if they just sent automated matches without review that wouldn't be sufficient to avoid the need for a warrant.
The extra review step does not exist to protect your privacy: The review itself deprives you of your privacy. The review step exists to suppress your fourth amendment rights.
I think the biggest thing you are missing (from my perspective) is that making the kinds of policy changes demanded in the San Bernardino case would require a large investment from company employees and would be externally visible.
A change to the policy of what kinds of images are scanned is opaque by law, since none of the Apple employees involved can even have access to the database of hashes they are using. There is also no realistic way for the consumer to understand the true false-positive rate, and no ability for a third-party organization to distinguish false positives from true positives among the non-CSAM images leaving the device.
Additionally, these are just problems in the US. Other governments can and will mandate the use of this tool for other kinds of media they find objectionable in their borders.
The large investment in this system is almost certainly the infrastructure to get it on phones, report the results, and run it in scenarios where it will minimize battery impact. Which on-device photos it runs on does not strike me as a technical challenge once the tool is built, only a policy question. And the easy answer to that will be to just check some flag if the phone is in a country that requires all pictures to be scanned.
It does that on the device. All it takes is enabling it by default, instead of triggering the scan prior to upload, and adding more hashes to flag. The result would be a total loss of privacy; Apple would basically be privatising mass surveillance, with no oversight or accountability, however minimal.
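To make that concrete, here is a hypothetical sketch in Swift (invented names, not Apple's code) of how small the gap is between "scan only what's queued for iCloud" and "scan everything on the device":

    // Hypothetical policy gate; invented names, not Apple's implementation.
    struct ScanPolicy {
        var scanAllLocalPhotos = false         // flipping this default is "all it takes"
        var extraHashDatabases: [String] = []  // e.g. a regional database a government could demand
    }

    func shouldScan(queuedForICloudUpload: Bool, policy: ScanPolicy) -> Bool {
        // As announced, only photos on their way to iCloud get matched.
        // The feared expansion is the same code path with the default flipped.
        return queuedForICloudUpload || policy.scanAllLocalPhotos
    }

By the time the tool ships, the hashing, the reporting pipeline and the battery-friendly scheduling are already built; this gate is the only part that would need to change.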
The reason I think Apple went forward with it though is that, from their perspective, it's not like they are building a new tool for surveillance. It doesn't take many brain cells for a lawmaker to realize that they could mandate pre-screening of content.
From Apple's perspective... for authoritarian governments like China or India... they are already able to mandate it and are likely to. So they shouldn't be factored into the "CSAM Scanning could be abused!" argument because it was already happening and going to happen, whether the tool exists or not.
In which case, releasing the CSAM tool has only benefits for the abused and doesn't make a difference in preventing surveillance and privacy invasions because it was already going to occur. A cynical view but a possibility.
That’s the major concern I have: take as a given that NCMEC is on the side of the angels here, what happens when some government demands that Apple help identify anyone who has an image shared from a protest, leak, an underground political movement, etc.? The database is private by necessity, so there’s no way to audit for updates.
Now, currently this is only applied to iCloud photos, which can already be scanned server-side, making this seem like a likely step towards end-to-end encryption of iCloud photos but not a major change from a privacy perspective. What seems like more of a change would be if it extended to non-cloud photos or the iMessage nude-detection scanner, since those aren’t currently monitored, and in the latter case false positives become a major consideration if it tries to handle new content of this class as well.
Find the websites distributing it, infiltrate them, generally do the legwork to find what's going on - which is exactly what they've been doing.
"But the children!" is not a skeleton key for privacy, as far as I'm concerned.
I reject on-device scanning for anything in terms of personal content as a thing that should be done, so, no, I don't have a suggested way to securely accomplish privacy invasions of this nature.
I'm aware that they claim it will only be applied to iCloud-based uploads, but I'm also aware that those limits rarely stand the test of governments with gag orders behind them. So if Apple is willing to deploy this functionality, I have to assume that, at some point, it will be used to scan all images on a device, against an ever-growing database of "known badness" that cannot be evaluated to find out what's actually in it.
If there existed some way to independently have the database of hashes audited for what was in it, which is a nasty set of problems for images that are illegal to store, and to verify that the database on device only contained things in the canonical store, I might object slightly less, but... even then, the concept of scanning things on my private, encrypted device to identify badness is still incredibly objectionable.
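Purely as a hypothetical, the second half of that (verifying the on-device copy against the canonical store) could look like comparing a digest of the local database to a published value. Nothing like this exists today, the function below is invented, and it still wouldn't tell anyone what the entries actually are:

    import Foundation
    import CryptoKit

    // Hypothetical audit helper; nothing like this is offered today.
    // A matching digest would only prove the local database equals the published
    // one, not what any entry in it actually depicts.
    func databaseDigest(of entries: [Data]) -> String {
        var hasher = SHA256()
        for entry in entries.sorted(by: { $0.lexicographicallyPrecedes($1) }) {
            hasher.update(data: entry)
        }
        return hasher.finalize().map { String(format: "%02x", $0) }.joined()
    }

    // A device (or outside auditor) could compare databaseDigest(of: localEntries)
    // against a published value to detect a silently swapped or regional database.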
In the battle between privacy and "We can catch all the criminals if we just know more," the government has been collecting huge amounts of data endlessly (see Snowden leaks for details), and yet hasn't proved that this is useful to prevent crimes. Given that, I am absolutely opposed to giving them more data to work with.
I would rather have 10 criminals go free than one innocent person go to prison, and I trust black box algorithms with that as far as I can throw the building they were written in.
How do you know there aren’t bad actors working at the NCMEC? If I know that adding a hash to a list will get it flagged, and I could conveniently arrest or discredit anyone I wanted, I would certainly send people to work there.
How will Apple know whether a hash is for non-CSAM content? Spoiler alert: they won’t.
And Apple claims it will be reviewed by a human. Sure, just like YouTube copyright claims? Or will it get automated in the near future? And what about in China? Or Saudi Arabia, or other countries with weaker human-rights protections?
The point is that this is an easy way to get tagged as a pedophile by a government or by bad actors. It’s sickening that Apple would let this “technology” into their products.
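To be concrete about why Apple can't tell what a hash is for: to the matching code, the database is just a set of opaque fingerprints. A sketch in Swift, using SHA-256 purely as a stand-in (the real system uses a perceptual NeuralHash, and both function names here are invented):

    import Foundation
    import CryptoKit

    // Stand-in fingerprint; the real system uses a perceptual hash ("NeuralHash"),
    // not SHA-256, but the one-way property is the same.
    func fingerprint(_ imageBytes: Data) -> String {
        SHA256.hash(data: imageBytes).map { String(format: "%02x", $0) }.joined()
    }

    func isFlagged(_ imageBytes: Data, against database: Set<String>) -> Bool {
        // The only question this answers is "is this fingerprint on the list?"
        // Nothing about an entry reveals whether it came from CSAM,
        // a leaked document, or a protest photo.
        return database.contains(fingerprint(imageBytes))
    }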
Even if scanning for the content is not explicitly required by law, what happens when a pedophile ring hoarding thousands of images of CSAM on iCloud is busted, and the news gets out? The article at [1] makes it sound like Apple choosing not to scan any of its users' videos in iCloud was seen as evidence of Apple lagging behind the status quo of companies like Facebook that were proactively reporting CSAM.
According to that article, in the last year Apple submitted only 265 reports to the NCMEC while Facebook submitted 20 million. Would law enforcement believe they'd be missing out on catching abusers after seeing this disparity?
If a company is found to allow criminals to store CSAM on their servers for extended periods of time, the law is going to want to know why they let it pass, irrespective of the extent the company chose to scan for it. Apple probably doesn't want to deal with that fallout, so maybe they figured that being proactive about scanning for CSAM in a way that could enable the use of E2EE wouldn't hurt, and that pushing the privacy narrative would satisfy enough people - which it didn't.
1. Apple has only stated a commitment to protecting user privacy to the extent it is legal. They make a lot of suspicious changes to how they operate in China, for example. Their opposition in the FBI case was because they believed the request placed on them was illegal.
2. This article only provides evidence to say Apple will scan photos you upload into iCloud. Scanning local photos is your speculation.
3. The operation of NCMEC and the blacklist is defined by the Federal government. American citizens have some mechanism to oppose this design. Apple could perhaps lobby against it, but their lobbying operations are famously minimal.
4. In principle, Apple could do anything. That doesn’t really inform what they’re likely to do. Just like in principle, anyone could slip a malicious content scanning patch into the Linux kernel (and this already has a POC!).
Why does anyone even assume that a bad actor would need to apply pressure?
These databases are unauditable by design -- all they'd need to do is hand Apple their own database of "CSAM fingerprints collected by our own local law enforcement that are more relevant in this region" (filled with political images of course), and ask Apple to apply their standard CSAM reporting rules.
I agree: I think this is a bad decision on Apple's part. It really undercuts a lot of their statements about privacy by doing any kind of on-device scanning of your content, even in such a narrow context.
I guess the point I am making is that as of now, this only applies if you're using their cloud services. I'm not sure if Apple would announce if they were compelled to use this functionality through a court order.