Everything else that you described – computers, digital records – has simple algorithms understandable by the average police officer or citizen. You type a document, it gets saved. You can do a full-text search on it. You can pull up an old case and look at photos. You can quickly cross-reference. All these tasks could be done step by step by a person; it would just take more time.
When it comes to facial recognition or AI in general, could anyone really tell you why the computer decided to flag your face and why another similar one was rejected? Would you accept a search warrant when the judgement wasn't based on any human's perception but on something that came out of a black box? Who makes sure that the data sets used to train the models were 100% free of bias?
Since when do search warrants get automatically approved by a black box? It is on the judge to approve, not a machine.
If that's not enough, make a human sign off that they have confirmed or at least agree with the black box's conclusion, and put the department on the hook if it was obviously wrong.
> Who makes sure that the data sets used to train the models were 100% free of bias?
The box is comparing a source image to an image with a name attached to it. Like you said, that's no different from what a human would do, with all of their own biases in play. We aren't talking Minority Report here, so there's no reason to think this is a hurdle that would be difficult to overcome.
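For concreteness, here is a rough sketch of that kind of comparison: a probe image is turned into a feature vector and ranked against vectors for gallery images that have names attached. The embed() function and the random images below are placeholders, not any vendor's actual pipeline.

```python
# Hypothetical sketch of "the box": rank a probe face against a gallery of
# labeled faces by embedding similarity. embed() is a stand-in; a real system
# would use a trained face-embedding network.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Placeholder embedding: flatten and normalize. A real model maps a face
    # crop to a learned feature vector.
    vec = image.flatten()
    return vec / np.linalg.norm(vec)

gallery = {name: embed(np.random.rand(112, 112)) for name in ["alice", "bob"]}
probe = embed(np.random.rand(112, 112))

# Cosine similarity: higher means "looks more alike" to the model,
# not proof of identity.
scores = {name: float(vec @ probe) for name, vec in gallery.items()}
best_match = max(scores, key=scores.get)
print(best_match, round(scores[best_match], 3))
```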
I mean, FISA warrants had (have?) a 99+% approval rate despite many applications having serious deficiencies.
Utah’s e-warrant program has a 98% approval rate. Some warrants are approved in less than 30 seconds.
Warrants across the US are frequently approved on evidence as shaky as a single anonymous tip.
The problem (at least in my opinion) isn’t that facial recognition itself is inherently evil, it’s that:
a) There are no standards around the algorithms themselves in terms of reliability. Worse still, this can inadvertently lead to serious racial bias.
b) It can launder lousy evidence (this person looks vaguely like this grainy photo) into solid scientific evidence (the algorithm says it’s a 99.5% match!) providing justification for all sorts of warrants and raids on even flimsier ground.
c) It creates lousy incentives in favour of creating a surveillance state. The police already have out-of-control hardware budgets, and many people don't like the idea of those budgets being used to monitor and record people whenever they're in public.
If we had better police and court systems and better privacy law, this might not have been necessary - but we live in the world we live in, so I can see why this county would do this.
> Warrants across the US are frequently approved on evidence as shaky as a single anonymous tip
Here is the crux of the problem. If facial recognition were good, then it would actually make things better. If it were bad, then it isn't making things substantially worse; if anything, it creates a track record which makes things like racial bias more obvious and auditable.
The real issue is that there isn't sufficient judicial oversight, nor recourse for when it fails.
At least in the United States, any encounter with law enforcement can potentially end in violence, so anything that makes it easier to get a questionable warrant will make things worse.
As the parent post pointed out, questionable warrants are already supposedly trivial to get. This system, being more auditable as a single source rather than a collection of individual anonymous statements, is far less likely to stick around if it consistently spits out bad matches.
If you are stuck living somewhere where music is illegal, and you happen to listen to some music on the sly and get caught by a perfect system, and punished, how is that better?
Better would be if people can live freely without having to worry about a perfect system tracking their every action.
There is no person alive on earth today who does not do things that would get them punished under the law of some government somewhere.
And you want the law enforcement systems to be perfect?
No, I want them to be held accountable when they are wrong. The hypothetical example of a good facial recognition system removing human bias was an example of that, and it could protect people from the other example in the post I replied to: search warrants granted on a single unreliable source's statement.
Facial recognition isn't going to care that you procured illegally distributed copies of protected works (your ISP already tracks that anyway), so let's stop moving the goalposts. It is a reactive, not proactive, system, and while I am not fully convinced that it is actually ready, I am also not buying the slippery slope nonsense.
>isn't going to care that you procured illegally distributed copies of protected works
Sorry, I wasn't clear maybe, but somehow you misunderstood me.
I wasn't talking about pirated music.
I was pointing out that some regimes have laws that forbid listening to music, period.
Fixing bias doesn't help when all it does is accurately catch innocent people doing innocent things, and then punish them because those innocent things are illegal in that country for no good reason.
I don’t discount the usefulness of AI or facial recognition; however:
>Since when do search warrants get automatically approved by a black box? It is on the judge to approve, not a machine.
The USA PATRIOT Act removed judicial oversight and allows warrants to be issued that merely notify the judiciary, with gag orders simultaneously placed on those notified. For federal cases, warrants no longer require judicial approval in an undisclosed number of cases. Try and find a list of current Guantanamo Bay detainees or the judges who approved the warrants used to detain them. Until that’s dealt with, I see why King County would ban this use as a defense against automated enforcement.
> Would you accept a search warrant when the judgement wasn't based on any human's perception but on something that came out of a black box?
Isn't every PD that uses facial recognition keeping a human in the loop to confirm algorithmic matches first? And if so why would the overall process be any less accurate than a human match?
There are many tools that help make feature activations in computer vision models interpretable by humans.
Class Activation Mapping [0] and saliency maps are some of the most used and intuitive approaches. LIME and SHAP [1] are tools to visually represent feature activations in an understandable way.
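As a concrete illustration, here is a minimal saliency-map sketch along the lines of the approaches above. The model and input are placeholders (an untrained torchvision ResNet and random pixels); in practice you would load the actual recognition network and a real face crop.

```python
# Minimal saliency-map sketch: which input pixels most influenced the
# model's top-scoring class? Model and image are placeholders here.
import torch
import torchvision

model = torchvision.models.resnet18()  # stand-in; load real weights in practice
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a face crop

scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()  # gradient of the winning score w.r.t. the input

# Per-pixel saliency: max absolute gradient across color channels.
saliency = image.grad.abs().max(dim=1).values.squeeze()  # shape (224, 224)
print(saliency.shape)
```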
The idea that we have no idea what is going on with machine learning models is just flat out wrong.