Since when do search warrants get automatically approved by a black box? It is on the judge to approve, not a machine.
If that's not enough, make a human sign off that they have confirmed or at least agree with the black box's conclusion, and put the department on the hook if it was obviously wrong.
> Who makes sure that the data sets used to train the models was 100% free of bias?
The box is comparing a source image to an image with a name attached to it. Like you said, that's no different from what a human would do, with all of their own biases in play. We aren't talking Minority Report here, so there's no reason to think this is a hurdle that would be difficult to overcome.
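To make the comparison concrete, a match like that boils down to something like the sketch below (the function name, gallery shape, and 0.6 cutoff are placeholders for illustration, not any vendor's actual system):

```python
import numpy as np

def best_match(probe, gallery, threshold=0.6):
    """Compare a face embedding from the source image against a gallery of
    (name, embedding) pairs and return the closest name if it clears a
    similarity threshold. Everything here is illustrative, not a real API."""
    best_name, best_score = None, -1.0
    for name, emb in gallery:
        # Cosine similarity between the two embeddings.
        score = float(np.dot(probe, emb) /
                      (np.linalg.norm(probe) * np.linalg.norm(emb)))
        if score > best_score:
            best_name, best_score = name, score
    # Below-threshold comparisons are reported as "no match" rather than a hit.
    return (best_name, best_score) if best_score >= threshold else (None, best_score)
```

The comparison step itself is mechanical; whatever biases exist come from the model that produced the embeddings, the same way a human comparison inherits the human's biases.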
I don't discount the usefulness of AI or facial recognition. However:
>Since when do search warrants get automatically approved by a black box? It is on the judge to approve, not a machine.
The USA PATRIOT Act removed judicial oversight and allows warrants to be issued that merely notify the judiciary while simultaneously imposing gag orders on those notified. In an undisclosed number of federal cases, warrants no longer require judicial approval at all. Try to find a list of current Guantanamo Bay detainees, or the judges who approved the warrants used to detain them. Until that's dealt with, I see why King County would ban this use as a defense against automated enforcement.
Right, the facts presented to the judge, taken as stated, have to add up to cause for a search. Judges shouldn't ignore a paucity of factual assertions. I don't doubt that many do. But short of an adversarial process, which I don't think can work with warrants, I don't know what you can do to mitigate a police officer fabricating evidence.
As someone who has read a fair number of granted search warrants, I can attest that 100% of them contained obvious technical, logical, or factual errors (sworn under penalty of perjury) and were granted by the judge anyway.
The bar/basis to successfully receive a search warrant is hilaribad. It's pretty close to a rubber stamp. The courts just believe whatever crap the cops spew out.
> From society's perspective maybe it's goodware.
At least in the US, the historical distinction between good searches and bad searches is given in the 4th amendment: "no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."
Of course that has been twisted and stretched before, and this is more of the same. But if literally everything is being scanned, "probable cause" is completely absent from the process. It is a fishing expedition of the exact type that the 4A was designed to prevent.
You're confusing accuracy and precision. You want search warrants to be precise, but the only way to guarantee accuracy is to know in advance the fruits of the search.
You can have a solid foundation for a warrant against an individual but find out they aren't responsible: a 100% miss rate! But that's the problem: you don't know what you don't know, which is why you need a search warrant (and justification for it) to begin with.
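A toy illustration with invented numbers, just to keep the two ideas apart:

```python
# Invented numbers: 1,000 warranted searches over a year.
searches = 1000
found_described_evidence = 150  # searches that turned up what the warrant described

# Hindsight "accuracy" of those warrants:
hit_rate = found_described_evidence / searches  # 0.15

# The catch: this number is only computable *after* the searches happen.
# At signing time, all a judge can evaluate is how precisely the warrant is
# justified and scoped (specific place, specific things, specific facts),
# not whether it will pan out.
print(f"hindsight hit rate: {hit_rate:.0%}")
```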
Small nit to start: this sets a precedent. It does not necessarily set precedence. Those are two different things.
Again, though, this is a hard hill to want to set the battle on. If this were a case of "searching for contentious thing" then I would be worried as well. This was literally "any evidence that someone scoped out a specific location where a crime that we all agree is a crime happened."
And saying there is no oversight is silly. This had a warrant. It went through a process. To claim there is no oversight overstates the situation and is a bad-faith argument. To contrive how this could have led to a lot of false positives completely ignores that it was such a well-directed search that it did not, in fact, do that. The last number I heard was that it literally returned results in the single digits. That is beyond well scoped for this sort of thing.
We also have a robust court system where warrant applications are examined and cases get thrown out all the time for improper search and seizure. There have been tons of cases where falsified data has been discovered, causing hundreds of cases to be reheard.
I think you know the answer: if judges approve 95% of requests, law enforcement becomes convinced that the process of obtaining approval is superfluous and nothing but a barrier to getting their job done.
The problem is, that is a violation of the person's rights. Some publications report that anywhere from 5-10% of WARRANTED searches were invalid. So imagine what that number would end up being if you dropped that prerequisite. Would LEO selectiveness drop, putting the burden of validating the search onto the defense? Is that fair to the defendant? Absolutely not.
"For something like matching photo hashes, the bar could be lower than e.g. for searching someone's house. The flipside would be that if the warrant is found to be inadequately justified (even in retrospect), no data found in it is available for evidence."
The bar is pretty low already. Anything found under a warrant that is later found to be defective is already thrown out under the fruit-of-the-poisonous-tree doctrine.
Even if the evidence is thrown out, or the person is innocent, the process can still ruin people's lives, cost money, etc. I recently had an experience with a trooper letting a dangerous-dog charge stand even though he knew it was the wrong charge. That charge carried pretrial restrictions with it, which should be a violation of the 8th amendment's prohibition on unusual punishment. There were a few other issues/rights violations with what he and the prosecutors did too. Anyways, nobody cared; they even said they are allowed to let incorrect charges stand. Unless you spend thousands of dollars on a lawyer, you're screwed. Even if you're right, they will not take you seriously unless you have a lawyer. It generally costs more to defend your innocence than to pay the fine for lower-level offenses.
Giving the system even more power is just going to lead to more innocent people being trampled on. I would guess it's only a matter of time before some parent gets caught up in Apple's scanning because they have a naked picture of their baby taking a bath or something.
It seems that a 24/7 set of federal judges with smartphones on them for quickly processing warrant applications is a better fix to “warrants are slow” than giving cops the power to simply skip them when they feel like it.
As it stands now, the majority of federal search warrants contain easily falsified perjury, and yet they are still approved by judges almost without exception and the feds that wrote them suffer no consequences whatsoever.
I’m not sure why they have an issue with the current system, outside of blatant abuse.
Presumably a warrant is signed by an actual judge. I'm not sure I want every company launching their own investigation into the underlying evidence behind a search warrant in order to determine if they should comply or not.
If we are unhappy with the warrants, we should address that at the judicial level.
In this case, they searched a person's car after the software system highlighted their behavior as suspicious. The constitutional issue would be whether the resulting search was reasonable or not.
We can agree that if the system was very poorly made and flagged 90% of drivers, then the flagging wouldn't constitute "probable cause" in a legal sense... so how good does the system have to be before we're willing to accept searches based solely on the system's recommendation?
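Back-of-the-envelope, with invented numbers, on why a headline accuracy figure alone can't answer that:

```python
# Invented numbers, purely illustrative.
drivers = 100_000            # drivers passing the cameras
actually_guilty = 50         # drivers actually doing what the system looks for
sensitivity = 0.99           # chance a guilty driver gets flagged
false_positive_rate = 0.01   # chance an innocent driver gets flagged anyway

true_flags = actually_guilty * sensitivity                        # ~49.5
false_flags = (drivers - actually_guilty) * false_positive_rate   # ~999.5

# Probability that a flagged driver is actually guilty:
p_guilty_given_flag = true_flags / (true_flags + false_flags)
print(f"{p_guilty_given_flag:.1%}")  # ~4.7%
```

With a low base rate, even a "99% accurate" system produces flags that are overwhelmingly innocent drivers, so whether a flag alone amounts to probable cause depends heavily on that base rate, not just the advertised accuracy.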
Pre-arrest probable cause in the current judicial system is a joke.
Search warrant requests rarely get denied; often the approving judge isn't even capable of understanding the probable-cause argument in the warrant (particularly in cases involving technology).
They rely solely on the sworn statement of the police officer ('your affiant') and a rubber stamp from the local State Attorney's office.
There is little incentive for the judge to deny the warrant, and they could face significant backlash if they deny a warrant that might have enabled law enforcement to stop something from happening.
LEO would simply draft a warrant along the lines of: 'Your affiant has been in law enforcement for 25 years and has taken this training class and that training class and has this certification. Your affiant knows from his training and experience that a crime happened or likely happened at this date and time. Your affiant requests all video for the time period of 7 days from cameras 1472 - 1499, which cover a 2000ft section of Rodeo Dr.'
The warrant gets rubber-stamp approved, and even if they don't find what they were looking for, they can single out anything they observe whose 'incriminating nature is readily apparent' and go after that. And that 'incriminating nature is readily apparent' bar is pretty low and generally subjective, based on the officer or 'a reasonable person'.
How well has this 'dragnet surveillance' worked out in the FISA court? A system like this would be ripe for abuse. And what about securing it? So far our government has a pretty crappy track record for securing even info that is classified and top secret. How secure would this dragnet video surveillance be?
These are good points, and supported by the court explicitly stating that they’re not making a judgement on all such warrants, but on the constitutionality of using evidence gathered this way.
I think this evidence cannot be used constitutionally, but practically I might be mistaking the application of this ruling to "good" warrants as well as "bad" ones. This might be a well-directed warrant which supports the public good, with statistics that would satisfy a reasonable observer. But can we tell that apart from a "bad" warrant where we do not have access to refuting statistics? Honest question: should the search engine's legal department be the only ones who can make the counterargument?
It depends on whether a search warrant was obtained. Given the article notes how similar it is to what data you can get from a search warrant, it may be the case that a court gave them a thumbs up.
>Law enforcement must manually review all suspected CSAM before seeking a warrant based on it.
Is this a legal statute or simply convention due to the way things have historically worked (i.e. pre-hash-matching at scale)? If warrants are granted based on probable cause, it seems easy to convince a judge that a false hash match is sufficiently unlikely that a match exceeds the threshold for probable cause. In the context of cryptographic hashes, this is accurate. But if law enforcement doesn't distinguish between cryptographic and perceptual hashes, then there is a real possibility of cases being opened and warrants issued unjustifiably (a rough sketch of the difference follows below).
Sure, matching a hash isn't a crime and you will eventually be exonerated. But as they say, you can beat the rap but you can't beat the ride.
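To make the cryptographic-vs-perceptual distinction concrete, here is a toy sketch (the average-hash below is deliberately crude and not how production systems such as PhotoDNA work; it's just to show the shape of the difference):

```python
import hashlib
from PIL import Image  # assumes Pillow is installed

def cryptographic_hash(path):
    """SHA-256 of the file bytes. Changing a single byte yields a completely
    different digest, so an exact match is extremely strong evidence that the
    two files are identical."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def toy_perceptual_hash(path, size=8):
    """A crude average-hash: shrink to 8x8 grayscale and threshold each pixel
    against the mean. Visually similar images land close together by design,
    which also means distinct but similar-looking images can collide."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > avg)

def hamming_distance(a, b):
    """Number of differing bits between two perceptual hashes."""
    return bin(a ^ b).count("1")
```

A cryptographic match means "same bytes"; a perceptual match only means "looks similar under this reduction." A judge who hears "the hashes matched" and pictures the first function is being offered a much stronger claim than the second function can support.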