King County, WA bans facial recognition software (komonews.com)
288 points by sharkweek | 2021-06-02 | 208 comments




Suppose looters ransack a store when it is closed and are caught on video. Why wouldn't you want facial recognition software to identify them? Do you have a right to privacy when you break into a building?

You can support some uses of facial recognition without wanting it to be used to say ticket people for jaywalking.


>Suppose looters come to ransack a store, don't you want to be able to wave them off with a shotgun?

The potential for abuse is too high.


This is a ban on use of facial recognition by the government. It does not limit store owners in any way.

I wouldn't want facial recognition software to identify them because I can't understand its failure cases. If it is allowed in court as evidence, prosecutors will talk up the recognition as "state-of-the-art" and juries will be influenced by its opinions.

Do you also apply this reasoning to fingerprinting, DNA analysis, tire prints, and ballistics comparisons? Like most people, I don't understand all of the failure modes involved with those technologies, but they do seem to be helpful tools for bringing violent criminals to justice.

I also think eyewitness testimony has many failure modes. If anything, it's probably less accurate than current facial recognition tech and biased in ways that are harder to determine. Yet I wouldn't want to ban all use of eyewitness testimony.

Banning facial recognition seems like overkill. Instead, it makes more sense to restrict it so that we can get the good parts (catching violent criminals) without the bad parts (oppressive surveillance state). Instead of banning all fingerprinting and DNA analysis, we have rules for how police can use them. Why not have similar rules for facial recognition?


Yes, I do.

Ballistics comparisons in particular have been found to be nearly complete bunk, yet experts continue to use them in court.[0]

Tire prints are also pretty bad.[1]

We live in an age of unprecedented HD video surveillance. We don't need automated facial recognition to bring violent criminals to justice.

[0] https://www.sciencemag.org/news/2016/03/reversing-legacy-jun... [1] https://www.apmreports.org/story/2016/09/27/questionable-sci...


did you read the article?

also, if you are part of a minority that is frequently misidentified by this “tech” and you end up being harassed by this pos tech, do you still want it used?


> Why wouldn't you want facial recognition software to identify them?

Why would I want it?


Facial recognition isn't very accurate. Based on some of the research I've seen, it's borderline worthless under many circumstances.

I wouldn't want to use it in such a situation because of the likelihood that the person who committed a crime is going to go free while an innocent person might take the fall for it.


I would like it even less if it was 100% accurate.

Do you really want every move you make logged? It's an incredibly powerful tool to use for oppression. If someone in the government doesn't like you, all they have to do is watch you for a few days. You'll be sure to commit a crime sooner or later, such as jaywalking, or maybe you looked at someone in a suspicious manner, and then they prosecute you to the max.


Because I don't want to live in an overbearing police state. I find it weird that you would pick the example of looting rather than some sort of very serious (and irreversible/uninsurable) crime like murder. You are surely aware that technological capabilities lead to feature creep, just as you are surely aware that police departments all over the country now operate military-grade armored cars to little apparent public benefit.

Edit: just to expand on this, here's a press conference from earlier today (sorry about the poor sound): https://twitter.com/DarwinBondGraha/status/14001715920642416...

Here, Oakland, California's Chief of Police admits that police claims about molotov cocktails and crowd demographics that were used to justify the deployment of tear gas and rubber bullets against protesters a year ago actually had no basis in fact. The Chief explained that he received information to that effect via the radio, and then went out and repeated it to the public (via the media) without making any effort to vet its accuracy. (For clarity, he has only been police chief since February; at the time last year, he was acting as a spokesperson for the department). It's arguable that the decision to deploy tear gas escalated the protest into a full-fledged riot; even if you don't think so, it certainly misled the public about the behavior and composition of the protesters, inevitably impacting policy debates and so on.

I feel this is a good example of why the police cannot be trusted with facial recognition tools; false claims can be used to designate large numbers of people as criminal suspects, and that suspect status can then be leveraged to intrude upon their rights. Were California's interim prohibition on facial recognition not in place, chances are that it would already have been deployed to identify large numbers of legitimate protesters on the basis of initial false allegations (ie, lies) made by individual police officers. 33 officers have since been disciplined and no doubt civil litigation will delve deeper into this, but at present the police officers who were disciplined do not have to be identified, despite the fact that they were quite happy to lie in order to violate the rights of that same public.

https://www.ktvu.com/news/oakland-police-chief-apologizes-is...


>You can support some uses of facial recognition without wanting it to be used to say ticket people for jaywalking.

Agreed. As xanaxagoras said, this is a political favor to Seattle Antifa.

I presume that any prosecution using facial recognition software is also going to have human beings verifying that video actually matches the face of the accused. In other words, facial recognition software is going to be used as an automated first-pass filter.
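To make "first-pass filter" concrete, here is a minimal sketch of what such a pipeline stage might look like, assuming faces have already been reduced to embedding vectors by some detector; `first_pass_filter`, the 0.6 threshold, and the toy gallery are all hypothetical illustrations, not any real system's API:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def first_pass_filter(query, gallery, threshold=0.6):
    """Return (person_id, score) pairs similar enough to the query face
    to merit human review; everything below the threshold is dropped."""
    scored = [(pid, cosine_similarity(query, emb))
              for pid, emb in gallery.items()]
    hits = [pair for pair in scored if pair[1] >= threshold]
    # highest-scoring candidates first, for the human reviewers
    return sorted(hits, key=lambda pair: pair[1], reverse=True)
```

The point is that nothing here convicts anyone: the output is a shortlist whose every entry still has to be verified by a person.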


> You can support some uses of facial recognition without wanting it to be used to say ticket people for jaywalking.

That's a huge assumption that many of us (opponents to widespread surveillance tech) just don't agree to. I don't think it's possible to hand government this kind of capability, then limit it to a specific set of uses. It always expands in scope, covering more and more use cases until the folks over at Vision Zero[0] make an impassioned plea, "Suppose you could prevent 10 pedestrian deaths a year by enforcing our jaywalking laws better? Why wouldn't you want facial recognition software to protect them?"

Or maybe the bias-free policing people[1] put forward their argument that removing human cops from jaywalking enforcement will increase Equity. It would be a decent proposition! And you end up with every minor thing being automatically caught and ticketed.

That would suck.

"Slippery slope" may be a fallacy in formal logic, but it's damn useful in resisting the march into a dystopian future. Nip this shit in the bud. Make it require some effort to enforce the law.

[0] http://www.seattle.gov/transportation/projects-and-programs/...

[1] https://council.seattle.gov/2017/07/20/council-president-har...


> "Slippery slope" may be a fallacy in formal logic

It's not if the slope is, in fact, slippery. At least around here, one fallacy more common than slippery slope is the slippery slope fallacy fallacy - calling out real slippery slopes as fallacious.


Yes, everyone has the right to privacy from facial recognition software, including criminals.

Why would I?

> Do you have a right to privacy when you break into a building

To find the person who appears on video footage filmed in the building, you need to spy on everyone and then match the face from the footage against the face of everyone else walking about. All of those other people did not break into the building, yet for this to work, their privacy is expunged by having their movement and faces filmed, tracked and catalogued so that the FR software can cross-match them.

I think there is a privacy argument here. If I didn't commit any crime, maybe it shouldn't be that my face gets recorded in some database alongside my latest location.


It's not the first county in the country; as noted in the article, San Francisco (which is both a city and a county) instituted such a ban in 2019. California has a statewide ban in place already. It's good news but needlessly and inaccurately sensationalized.

The linked source is a Sinclair News outlet. They are probably the worst media conglomerate in America honestly. Their news is constant fear and I view anything by them as poisonous information because it is just as likely meant to mislead as it is to inform. Sure, KOMO publishes some real news. They also publish lies and wildly misleading stuff.

https://www.youtube.com/watch?v=_fHfgU8oMSo

https://www.youtube.com/watch?v=GvtNyOzGogc


Did not realize that it's a Sinclair affiliate, worth bearing in mind.

Eh, I feel like that's being too picky. First, California's "ban" is only a three-year moratorium on FRT in police body cameras, whereas King County's is on "the use of [FRT] software by county government agencies and administrative offices."

Second, and where things get a bit more "technical" is that SF is both a city and a county, yes, but it's only one city in that county. There are 881,549 people in SF county compared to 2,252,782 people in King County[0]. According to each county's Wikipedia page, SF county is 231.91 square miles to King County's 2,307. King County has 39 cities and towns[1] to SF county's 1.

So while yes, you're technically correct, I still think that the headline is accurate as-is. Most of the country's counties are similar to King County (eg, not a city-county), and it's important to distinguish the fact that this ban covers a tremendously wide area and numerous municipalities.

[0] https://www.census.gov/quickfacts/fact/table/kingcountywashi...

[1] https://kingcounty.gov/depts/health/codes/cities.aspx


There's a quote in the middle of the article which was probably inaccurately summarized for the headline:

> We applaud King County for being the first multi-city county in the nation to pass such necessary measures


Shut...up. There is no such thing as antifa, and even if there were, being anti fascist IS the correct call.

What's the point when they can buy the information from private companies?

The CA ban includes the police using facial recognition services from private companies as well

And the same applies to KC governmental authorities as well.

Another clickbait article; beware. SF already banned facial recognition first in the US.

But it’s an evolving technology, and susceptible to manipulation as well. Watch this avant-garde comedy by the creators of South Park:

https://www.youtube.com/watch?v=9WfZuNceFDM


I mentioned it in another response in this thread, but while it's technically correct that SF is both a city and a county, making it the first actual county to ban the tech, it's important to identify the fact that this ban covers a much wider area: a population of ~2.2 million compared to ~880,000, and 39 towns/cities compared to 1.

With that in mind I don't have much of an issue with the use of the word "county", considering there aren't a whole ton of city-counties relative to the total number of counties in the US.


This might be a good thing for proponents of surveillance. They can wait until some really bad things happen and people beg for facial recognition.

At last, KC did something right. Hooray! There's nowhere near enough unsolved violent crime to justify the surveillance state. And yes, I have been the victim of an unsolved violent robbery.

P.S. who cares if KC is first or not. What matters is it got done.


It's great to see the US leading the way on this; I hope that Europe takes notice.


Specifically on this, yes, but keep in mind the US is far behind Europe in terms of civil rights/privacy and other protections for citizens (see GDPR, for example).

I just learned about this "traffic light labelling" going on in the EU and was blown away that it was implemented 4 YEARS ago. [1] I'm hoping the US catches up to Europe on everything else; we're far behind.

[1] https://www.euractiv.com/section/agriculture-food/news/traff...


There may not be enough unsolved violent crime to justify facial recognition, but one thing there is too much of in Seattle, Portland, SF, LA, NYC is un-prosecuted violent crime.

If we forced the cops to wear their badge numbers, we'd know who was committing the violence and they could be prosecuted for it.

If someone is beating me, I'm not likely to be able to focus on his number and memorize it. When a thug put his gun in my face to rob me, I later could describe the gun in great detail, but not his face.

But with all the smartphones around, hopefully someone can get the officer's badge number while they're beating you.

Smartphones don't have the resolution to pick up a badge number.

I don’t think we’re discussing police brutality here, not that it isn’t terrible and more common than it should be. I live in King County and every week it seems some person assaults another for little to no reason (often with a hilarious number of priors) and walks the same night. That is the violent crime that isn’t prosecuted here that the GP was referring to.

I'm not sure how this conversation changed to talk about police brutality, but I think the argument is that if there is a bystander watching or recording, the police officer can later be identified.

Compare to a situation in which identification is not required and the officer can't be identified even with video evidence.


We’re not talking about the police, we’re talking about violent criminals who are arrested and then let go.

Do you have any sources I can read through on this? I'm very interested to hear that this is the case.

Not a read, but this feature from KOMO goes over the Seattle situation pretty well

https://www.youtube.com/watch?v=WijoL3Hy_Bw


That piece, and KOMO's larger project, are toxic right-wing propaganda. Please don't consider that a reliable source.

I don't think calling it propaganda is quite right. "Muckraking" is probably more accurate.

Muckraking implies at least a casual desire to stick to the facts, which is entirely absent from Sinclair-owned stations.

I don't even want to begin to engage upon what you purport KOMO to be, because I don't really care... But the people and lives that video shows are real. Do you have left-wing friendly media pieces to balance it out? This is not something that is typically portrayed positively in media in Seattle, so it's not easy to find good news

You do you HN. Downvote the uncomfortable opinions! That'll surely help everyone

Have you walked through Pioneer Square or down Third Avenue in Seattle recently?

Yep, I have!

Homelessness is a profound and society-wide problem, with no easy answers unless you consider "spend way more money, build way more housing" an easy answer (which I do).

Until we do those things, it should be no surprise that a region with skyrocketing rents has a housing crisis, and that will present itself in myriad unpleasant ways.


There's a great book on ~roughly this topic - Ghettoside, by Jill Leovy. The thesis is that police in some cities (her work is in LA, but I lived in Chicago for a while and the idea transfers seamlessly) do an abysmal job of solving homicides.

That has all sorts of knock-on effects: people close to a murder victim lose the sense that the state will do something about their loss, so they take the matter into their own hands; people who commit homicide are a tiny minority but if they're not apprehended, they can keep committing homicides, which makes a broader society feel less safe; people sense that the state, their city, their neighbors, etc don't care about them because if the homicide was of someone in a richer neighborhood, it would be more likely to be solved; etc. Been a while since I read it so I'm forgetting a lot.

I think this goes some way towards explaining why people in neighborhoods where lots of violent crime is committed have such ambivalent feelings about the police - they are at once underpoliced (police don't solve major crimes in their neighborhood) and overpoliced (police bust people for all kinds of small-time stuff, which has devastating long-term consequences).

At the same time as all of the above, it's also true that it's basically never been safer to live in a big American city (ok ok, at least since like the 1960s - before then the picture gets significantly more complicated). So it's safer than ever, but big crimes don't get prosecuted or solved, unless they happen in the "right" neighborhoods. The solution therefore isn't more of the same police; maybe it's more policing but the policing itself needs to change.


This has already been a fantastic read: thank you!!

How's gait identification doing these days?

You should always worry when they stop fighting you on something. Odds are good they've already switched to something else.

Why does this apply only to policing? Is it a matter of authority/jurisdiction?

It does not apply only to policing. From the article:

>The legislation bans the use of the software by county government agencies and administrative offices...


For those unaware of WA state counties: this includes Seattle and its surrounding metropolitan area.

But unfortunately, the ban only applies to King County agencies, e.g., the KC Sheriff's Department. It does not apply to the numerous independent city jurisdictions within KC, such as Bellevue, Redmond, Kent, Auburn, Snoqualmie, etc., which may have their own law enforcement entities, including the Seattle Police Department. And while Seattle does have more restrictive oversight of facial recognition tech through the Seattle Surveillance Ordinance[1] passed in 2018, there are several local governments located inside the boundaries of KC that have experimented with the tech or have indicated they may implement it.

TL;DR, those of us who have been working this issue for several years are gratified with the unanimous(!) vote of the county council, but realize there's still a lot of work to do both at the local and state (and national) levels.

[1] https://www.seattle.gov/tech/initiatives/privacy/surveillanc...


Genuine question - why not ban cameras altogether? Or ban the use of computers in police stations, make them write up all their reports by hand. I truly don't understand why there would be a line at facial recognition, it's just a law against making a process more efficient.

It's clear to me that there should be a line to prevent fully convicting someone of a crime without any humans in the loop at all. Don't replace the jury or public defenders with robots. But facial recognition could just be a way to look through a large amount of footage to find relevant clips that would then be reviewed in more detail by humans. Without facial recognition, you just pay cops overtime to look through footage themselves. It doesn't necessarily affect the outcome of the process at all.

I think facial recognition just needs some help with its image, it's just a tool that would save tax dollars. Or a means to Defund The Police, if that's your thing.


This is the right take. Can't ban math. Facial rec is here. If you don't like it, win a public policy debate about making it evidentially weak in front of a judge. Banning facial rec is like saying "you can have security cameras and iPhones, but only human eyeballs can look through them, not computers!" Arbitrary, and doomed to fail.

The math is incorrect on Black people's faces more often than on White people's faces.

"Can't ban math", that's like saying "can't ban words". Yes, but you can ban a combination of words in a location, such as "There's a fire!" in a crowded theater. You can ban a combination of math in a police station that leads to people going to jail.


The issue here is the FR industry's complete failure to impress the importance of human oversight, and then to validate that the human oversight does not suffer racial blindness. It is pointless if the operators of an FR system cannot distinguish between similar-age siblings or cousins of the ethnicity they are seeking via FR. This is far too often the case, and those police officers operating an FR system could simply be replaced by same-ethnicity operators to get significantly less bias.

> The math is incorrect with Black people's faces more than White people's faces.

That's a limiting aspect of the physical world: less light reflected back = less detail. Flagging footage for manual review doesn't need to be bias-free, just the end component that actually affects the person in the video.

Yelling fire in a crowded theater is legal.....

https://www.theatlantic.com/national/archive/2012/11/its-tim...


Does anyone know of any data about the racial bias of human police officers vs. facial recognition software?

That is, I am not sure if it makes sense to ban the software if it is less biased than the human. But it might make sense to ban it if it is more biased.

There are loads of people of color who were falsely imprisoned for looking like someone in grainy security camera footage well before facial recognition technology was widespread.


Wrong. It isn't. By your logic, CMOS cameras themselves are "racist" because they have exposure defaults that make just slightly more sense for light complexions than for dark.

Or maybe you think highlight curves for TV shows must be adjusted so that lots of white things are blown out, but we can see the fine grained detail on a very dark object or person. Only then are we being antiracist!

See how this stuff makes you into a numbskull?


Right, this is like banning knives because they can be used to kill people. I much prefer they try to fix whatever it is that causes misery with the use of facial recognition, instead of banning it altogether.

I think there are very good reasons to keep the government out of the FR business, at least for now.

Facial recognition is the enabling technology for automated real-world repression. Maybe it is true that we can find a way to tame it to gain efficiency gains without destroying open society. But right now it looks like a dangerous one-way ratchet.


Laws can always be changed, seems sensible to me to prevent this stuff being rolled out when there are so many unresolved problems and abuses with it.

I generally don’t want my government to “move fast and break things”


There is a subtle misdirection when some jurisdiction "bans government use of facial recognition": that ends up being (or is by design) how private agencies get created that law enforcement then contracts with, with less oversight than before.

>it's just a tool that would save tax dollars

No, nothing is "just a tool". Surveillance technology enables all sorts of new attacks. Automation makes things feasible. Cheap storage that allows weeks or months of security footage to be preserved changes what's possible. License plate scanners are "just" a technology that could be done with a bunch of cops on corners with notepads, but in aggregate they're a tracking of where everyone is and has been that could never be done without the technology. Facial recognition is very similar.


> Surveillance technology enables all sorts of new attacks

Sure, but like any other tool it also enables all sorts of new benefits.

The preferable action would be to take advantage of the benefits while also trying to fix whatever it is that causes problems with the tools, instead of simply banning them.


That's what they're doing.

The problem is automated identification of faces. The solution is banning that.


>The problem is automated identification of faces

Whats the problem with this ?


I had a dream a few months ago where I was standing in front of a computer. I typed in my name and got back a stream of video which showed every single moment of my life from multiple angles starting from the moment I was born. The footage came from a variety of sources, CCTV, smartphone cameras, dashcams. The system had videos of everything anyone had ever done. I didn't like it. Found it kind of disturbing.

Fortunately, we can stop that from being a reality.

Then basically you're saying it boils down to a matter of subjective preference. So you don't like it; fine, fair enough, but then don't be surprised that other people may like it.

I didn't say that. You just have to think about how this stuff is going to be used in the real world where we actually live. The thing which is objectively disturbing is how easy it is to lose control of one's life once the government is able to record, observe and judge every single thing a person does. I believe such a level of transparency is only beneficial when it's applied to people who hold power over others; to subject everyone in a society to that kind of scrutiny would just be laying the foundations for abuse.

The government simply recording, observing and judging every single thing a person does is not the problem; in fact it can be used for my benefit.

The problem is when they use it to harm me.

What needs to be fixed is the government recording, observing and judging everything a person does in order to harm people.

>to subject everyone in a society to that kind of scrutiny would just be laying the foundations for abuse

It can be used for abuse, but not always; it can also be used for good. What needs to be prevented is the use for abuse.


I don't think that's the problem. The problem is that the justice system believes that it's enough to convict someone of a crime.

Even just arresting them or bringing them in for assessment is enough to put someone in a dangerous position, because cops can effectively legally execute people. So the problem is any police interaction is potentially deadly and this is giving cops another arbitrary, potentially racist tool to interact with people who are just living their lives.

Let me ask you a question: How many unarmed people were killed by police in 2020?

https://www.washingtonpost.com/graphics/investigations/polic...

According to this, 405, which is 405 too many.


I only see 50 there. Are you sure you're looking at unarmed 2020 and not for all years?

I also just looked at a few unarmed shootings at random to get an idea of what they were like. Of the three I looked up I saw one guy who was shot while fighting with a police officer after being discovered at the scene of a burglary. One guy who was shot after a car chase when, after they had caught him, he got behind the wheel of a police car and the police shot him to prevent another chase. One guy, after stabbing someone to death and being approached by multiple police officers, faked a draw to "suicide by cop". My point is, I don't think it's even clear that all (or even most) of the 50 unarmed shootings are unjustified.


So we have 50 total out of thousands, if not millions of potentially negative police interactions.

You can't build a system of robots without some fault-tolerance. How are humans supposed to keep up with that?

Based on this data - how can you make the assumption that a police interaction is an inherent risk to life that is always unjustified?

We mostly pay police to show up and be generally aggravating to people who might be doing bad stuff - in some sense "being sketchy in the vicinity" is a valid reason for police action, which doesn't even always turn into an arrest, let alone a life-or-death situation. Those cases are the vast minority of police interactions total.


People have a right to be armed; why would you consider using your rights to be worth a death sentence?

If it's illegal to be armed, you should change the constitution, rather than asking police to shoot every armed person they see


This ignores the coerced plea bargains, which I don't think involve any conviction

The problem is that constant surveillance of people makes exerting power over them trivial.

So then that is the actual problem; try to fix that problem instead of banning it. Allowing people to own guns also makes it trivially easy to kill people. So ban guns?

If there’s a way to make it impossible for the government to use facial recognition to monitor people other than legally banning them doing so, it’s hard to imagine it.

Why would I want to make it impossible for government to monitor people?

Government monitoring people by itself is not a problem; government monitoring people to harm them is the problem.

What needs to be fixed is the usage of the tools to harm people, not the tools themselves.


The only thing that keeps them from harming you is your power relative to them. Handing them more power only increases the probability that they will abuse what powers they do have.

I thought the election of Trump would disabuse people of this idea that it’s smart to hand more and more power to central authorities. But at this point I think it’s hopeless. People just want the state to be Daddy, and assume or hope that its power will always be wielded by people they like against people they don’t.


For your analogy: yes, guns should be banned. That bottle has been opened in most places in the world, and some countries have actually put the genie back in, and it has resulted in dramatically lower gun death rates. The UK for example, as it has one of the lowest gun homicide death rates in the world: https://en.wikipedia.org/wiki/Firearms_regulation_in_the_Uni...

Moreover, yes, other deeply awful weapons are also banned. Biological weapons and Chemical weapons are both widely banned by all decent countries in the world. Nuclear weapons should join this list. They almost did, during the Reagan era: https://www.theatlantic.com/politics/archive/2016/01/ronald-...


I still don't see why this is the place we should draw the line. If we want to ban license plate scanners because they can be used to track location, should we also make it illegal for police to access cell phone records or cell tower ping data with a warrant? Should we ban cell phones altogether so that the location data can never be collected in the first place to be misused?

Lots of new technology has some downsides, the laws should more specifically target the downsides instead of require that everyone pretend the technology doesn't exist. If nothing else, it's just far too impractical for a technology that is easily available and in everyone's pocket to be illegal to use. It will just incentivize obscuring and lying about using it, rather than truly prevent it from ever being used. It reminds me of the history of banning encryption, you're basically just making math illegal.


The trick is that police need a warrant to invade your privacy via cell phones or whatever, but they won’t need a warrant for the flawed surveillance stuff that can end up severely fucking over someone’s life/their family if the police ever decide they don’t like them for whatever reason, such as being an advocate against police abuse ;)

>I think facial recognition just needs some PR help

They are making their own bed in many cases. Here's one of several interesting moves by Clear View AI, for example: https://twitter.com/RMac18/status/1220876741515702272


I think it's a matter of accuracy and bias. If the tech keeps making false accusations, especially against the same kind of people over and over, then a ban might be in place.

You could argue that humans make similar mistakes and have similar biases, but the scale is always smaller when humans are involved, just because of how slow they are.

Say a model is wrong 37% of the time, and so are humans, but the model runs 1000 times a minute, whereas humans perform the task 10 times a minute. That means the model makes 370 false accusations a minute, whereas a human makes only about 4 a minute.
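As a sanity check, the back-of-the-envelope numbers can be computed directly (the 37% error rate and the per-minute throughputs are the hypothetical figures above, not measurements):

```python
error_rate = 0.37        # hypothetical false-accusation rate, same for both
model_per_minute = 1000  # hypothetical model throughput (decisions/minute)
human_per_minute = 10    # hypothetical human throughput (decisions/minute)

model_false = error_rate * model_per_minute  # 370.0 false accusations/minute
human_false = error_rate * human_per_minute  # 3.7 false accusations/minute

print(model_false, human_false)
```

Same error rate, two orders of magnitude more false accusations, purely because of throughput.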

In effect, the number of people you bothered falsely accusing is still less when done manually, simply because it can't scale.
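The arithmetic above can be sketched directly (the 37%/1000/10 figures are the hypothetical numbers from this comment, not real measurements):

```python
# Hypothetical rates from the comment above, not real data.
error_rate = 0.37            # both the model and a human are wrong 37% of the time
model_cases_per_min = 1000   # the model screens 1000 cases a minute
human_cases_per_min = 10     # a human screens 10 cases a minute

model_false_per_min = error_rate * model_cases_per_min   # 370.0
human_false_per_min = error_rate * human_cases_per_min   # ~3.7

# Identical accuracy, 100x the throughput -> 100x the false accusations.
print(model_false_per_min, human_false_per_min)
```

The accuracy never changes; only the volume of harm does, which is the whole scale argument in one line.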

Lastly, there's an argument that the police shouldn't be too efficient, because some crimes might be better off existing in a grey area.

People do fear police supremacy and police states: the idea that the police can find you for jaywalking or for jumping a fence on a dare, or that the state could become authoritarian and you'd have no way to fight back as rebels because of the advanced surveillance tech the police would have. I'm not saying those fears are justified, but I think they play into it. People are fine saying that only the highest-risk cases are worth pursuing, but if a tech comes along that lets police pursue all cases, it might start to feel abusive.

P.S.: I'm neither against nor in support of a ban here, I'm still making up my mind; I'm just laying out some of the reasons I know of for supporting the ban.


> I think it's a matter of accuracy and bias. If the tech keeps making false accusations, especially against the same kind of people over and over, then a ban might be in order.

It seems like we should then ban eyewitness testimony. That has tons of accuracy issues and bias.


Please read the full post. The issue is inaccuracy and scale.

The scale is a complete strawman. They aren’t falsely accusing 300 people for a crime.

A great example is the Capitol Riots. They are narrowing the list down to manageable numbers. They aren’t suddenly arresting thousands of people for a crime committed by three.


Is facial recognition more biased than manually comparing against mugshots? A lot of people seem to be writing under the impression that if facial recognition software is banned, police and government won't use facial recognition. The reality is that they will still use facial recognition, just in the old-fashioned way: looking at suspects and looking at mugshots. And humans are often biased, too.

The problem is that jurors place higher faith in technology than they do in people, and that’s not necessarily warranted.

It’s one of the things swept under the umbrella of the “CSI effect.”


> Say a model is wrong 37% of the time, and so are humans, but the model runs 1000 times a minute, whereas humans perform the task 10 times a minute. That means the model makes 370 false accusations a minute, while a human makes fewer than 4 a minute.

Or they spend 100x as many man-hours and achieve the same number of false accusations, but at a much higher cost to the taxpayers.


Would you make a similar argument that allowing the police to track location of all citizens 24/7 is just making the process of having officers follow people around more efficient?

Sure...just give the public the same access that police have. Then if dirty cops break the law or mislead the public, everyone will know.

Everything else that you described – computers, digital records – has simple algorithms understandable by the average police officer or citizen. You type a document, it gets saved. You can do a full-text search on it. You can pull up an old case and look at photos. You can quickly cross-reference. All these tasks could be done step by step by a person; it would just take more time.

When it comes to facial recognition, or AI in general, could anyone really tell you why the computer decided to flag your face and why another, similar one was rejected? Would you accept a search warrant when the judgement wasn't based on any human's perception but on something that came out of a black box? Who makes sure the data sets used to train the models were 100% free of bias?


Since when do search warrants get automatically approved by a black box? It is on the judge to approve, not a machine.

If that's not enough, make a human sign off that they have confirmed or at least agree with the black box's conclusion, and put the department on the hook if it was obviously wrong.

> Who makes sure that the data sets used to train the models was 100% free of bias?

The box is comparing a source image to an image with a name attached to it. Like you said, no different than a human would do, with all of their own biases in play. We aren't talking minority report here, so there's no reason to think this is a hurdle that would be difficult to overcome.


I mean, FISA warrants had (have?) a 99+% approval rate despite many applications having serious deficiencies.

Utah’s e-warrant program has a 98% approval rate. Some warrants are approved in less than 30 seconds.

Warrants across the US are frequently approved on evidence as shaky as a single anonymous tip.

The problem (at least in my opinion) isn’t that facial recognition itself is inherently evil, it’s that:

a) There are no standards around the algorithms themselves in terms of reliability. Worse still, this can inadvertently lead to serious racial bias.

b) It can launder lousy evidence (this person looks vaguely like this grainy photo) into solid scientific evidence (the algorithm says it’s a 99.5% match!) providing justification for all sorts of warrants and raids on even flimsier ground.

c) It creates lousy incentives in favour of creating a surveillance state. The police already have out-of-control hardware budgets, and many don’t like the idea of that budget being used to monitor and record people at all times they’re in public.

If we had better police and court systems and better privacy law this might not have been necessary - but we live in the world we live in so I can see why this county would do this.
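Point (b) is essentially the base-rate fallacy. A back-of-the-envelope sketch (all figures hypothetical, chosen only for illustration) shows how a "99.5% accurate" matcher searching a large database still produces mostly false hits:

```python
# All figures are hypothetical, chosen only to illustrate the base-rate problem.
database_size = 1_000_000
true_suspects = 1            # one actual match in the database
sensitivity = 0.995          # chance the real suspect is flagged
false_positive_rate = 0.005  # chance an innocent person is flagged ("99.5% accurate")

expected_true_hits = sensitivity * true_suspects
expected_false_hits = false_positive_rate * (database_size - true_suspects)

# Probability that any given flagged person is actually the suspect:
precision = expected_true_hits / (expected_true_hits + expected_false_hits)
print(f"{precision:.4%}")  # well under 1% -- almost every "match" is an innocent person
```

A "99.5% match" sounds like near-certainty to a judge signing a warrant, but against a million-entry database it means thousands of expected false hits per real suspect.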


> Warrants across the US are frequently approved on evidence as shaky as a single anonymous tip

Here is the crux of the problem. If facial recognition were good, then it would actually make things better. If it were bad, then it isn't making things substantially worse- if anything, it creates a track record which makes things like racial bias more obvious and auditable.

The real issue is that there isn't sufficient judicial oversight, nor recourse for when it fails.


> then it isn't making things substantially worse

At least in the United States, any encounter with law enforcement can potentially end in violence, so anything that makes it easier to get a questionable warrant will make things worse.


As the parent post pointed out, questionable warrants are already supposedly trivial to get. This system, being more auditable as a single source rather than individual anonymous statements, means that it is far less likely to stick around if it consistently punts out bad matches.

Why aren't you afraid of good matches? Good matches open up far more horrifying possibilities.

> then it would actually make things better.

If you are stuck living somewhere where music is illegal, and you happen to listen to some music on the sly and get caught by a perfect system, and punished, how is that better?

Better would be if people can live freely without having to worry about a perfect system tracking their every action.

There is no person alive on earth today who does not do things that would get them punished under the law of some government somewhere.

And you want the law enforcement systems to be perfect?


No, I want them to be held accountable for when they are wrong, and the hypothetical example of a good facial recognition system removing human bias was an example of that, and could protect people from the other example the post I replied to, which was search warrants granted under a single unreliable source's statement.

Facial recognition isn't going to care that you procured illegally distributed copies of protected works (your ISP already does that anyway), so let's stop moving the goal posts. It is a reactive, not proactive, system, and while I am not fully convinced that it is actually ready, I am also not buying the slippery-slope nonsense.


>isn't going to care that you procured illegally distributed copies of protected works

Sorry, I wasn't clear maybe, but somehow you misunderstood me.

I wasn't talking about pirated music.

I was pointing out that some regimes have laws that forbid listening to music, period.

Fixing bias doesn't help when all it does is accurately catch people doing things that are innocent in themselves, and punish them because those things are illegal in that country for no good reason.


and people who take the oath as a court witness never lie

I don’t discount the usefulness of ai or facial recognition, however,

>Since when do search warrants get automatically approved by a black box? It is on the judge to approve, not a machine.

The USA PATRIOT ACT removed judicial oversight and allows for warrants to be issued that notify the judiciary and simultaneously issue gag orders on those notified. For federal cases warrants no longer require judicial approval in an undisclosed number of cases. Try and find a list of current Guantanamo Bay detainees or the judges who approved the warrants used to detain them. Until that’s dealt with I see why King County would ban this use as a defense against automated enforcement.


> Would you accept a search warrant when the judgement wasn't based on any human's perception but something which came out a black box?

Isn't every PD that uses facial recognition keeping a human in the loop to confirm algorithmic matches first? And if so why would the overall process be any less accurate than a human match?


There are many tools to help make feature activations in Computer Vision interpretable by humans.

Class Activation Mapping [0] and Salience Maps are some of the most used and intuitive approaches. Lime and Shap [1] are tools to visually represent feature activation in an understandable way.

The idea that we have no idea what is going on with machine learning models is just flat out wrong.

[0]http://cnnlocalization.csail.mit.edu/

[1]https://github.com/slundberg/shap
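For a concrete (if toy) illustration of the idea, occlusion sensitivity is one of the simplest techniques in this family: slide a blank patch over the input and see where the model's score drops. This is a self-contained numpy sketch with a made-up stand-in "model", not an example of the linked libraries' APIs:

```python
import numpy as np

def model_score(image):
    # Hypothetical stand-in model: it only responds to brightness
    # in the top-left 4x4 quadrant of the image.
    return image[:4, :4].mean()

def occlusion_map(image, patch=2):
    """Slide a zeroed patch over the image; large score drops mark
    the regions the model actually relied on."""
    base = model_score(image)
    heat = np.zeros_like(image, dtype=float)
    for i in range(0, image.shape[0], patch):
        for j in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[i:i+patch, j:j+patch] = 0.0
            heat[i:i+patch, j:j+patch] = base - model_score(occluded)
    return heat

img = np.ones((8, 8))
heat = occlusion_map(img)
# The heat map is nonzero only where the model "looks": the top-left quadrant.
print(heat)
```

The same principle, scaled up with gradients instead of brute-force occlusion, is what CAM and saliency maps do.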


> Or a means to Defund The Police, if that's your thing.

Are there many sincere researchers studying flaws in facial recognition advocating its unequivocal ban forever? Joy Buolamwini:

> At a minimum, Congress should require all Federal agencies and organizations using Federal funding to disclose current use of face-based technologies. We cannot afford to operate in the dark. https://www.govinfo.gov/content/pkg/CHRG-116hhrg36663/pdf/CH...

Timnit Gebru in the NYTimes:

> Do you see a way to use facial recognition for law enforcement and security responsibly?

> It should be banned at the moment. I don’t know about the future. https://archive.is/JqiqP

Are the flaws Algorithmic Justice League finding real? Someone has definitely been wrongly accused by a facial recognition error (https://www.nytimes.com/2020/06/24/technology/facial-recogni...).

Is there certainly an impression that activists want a forever ban? Yes, Joy Buolamwini and Timnit Gebru are frequently represented as advocating against even "perfect" facial recognition.

It's true, lawyers tend to advocate for a forever ban. I don't think these researchers advocate for a forever ban. If you read their raw words, they are much more intellectually honest than the press allows.

Is your line of thinking co-opted by people acting in bad faith? Certainly. You would feel very differently about the deployment of the system if you were so much more likely to be falsely accused by it due to the color of your skin.

Every intellectually honest person's opinion of the justice system eventually matures. You'll come around to delegating authority to researchers over lawyers and police.


Timnit Gebru is not a sincere researcher, in any respect.

She has proven to be willing to act in bad faith on numerous occasions when she is losing an argument. Not to say that a researcher is forbidden from holding any political opinions whatsoever – but her research is primarily political activism, and when political activism goes into research, what you get out is political activism, not research.

The whole Algorithmic Justice League concept is the process of overstating the impact of difficult-to-handle problems in order to secure laudable research positions and book deals.

Which - there actually should be someone willing to pose those arguments, but treating them as sincere researchers rather than motivated activists is a mistake.


I don't know dude. I understand there are a lot of impressions. For the purpose of furthering curiosity, it was easy to punch in their positions into Google and read their raw, reasonable words. You're welcome to highlight a bad faith argument, in the raw words.

The bad faith argument was losing an internal fight with other Googlers and deciding to go public in the hopes the social media mobs would force management to capitulate to her ideas.

I think the tough part is you can't really ban failing to think in the presence of an algorithm. People see an algorithm produce a result and often assign far too much confidence to it.

"The satnav directions say to turn onto this pier."

"The model says this house is worth 200k less than the ones in the whiter neighborhood a quarter mile away."

"The system recommends the patient have 40% allocated home health care hours per week going forward."

"The algorithm says this grainy footage of someone far away from the camera in the dark and the rain is X."

If you put humans in the loop informed by an algorithm making judgements, the human will often defer to the algorithm. Does the algorithm give an output that indicates its uncertainty? Based on what model? How equipped is the human to consider that?
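On the uncertainty question: even when a system reports a "confidence," that number may not be calibrated. A toy sketch (made-up logits, not any real matcher's output) shows how the same ranking can look uncertain or near-certain depending only on the scale of the scores:

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical face-matcher scores: three nearly indistinguishable candidates.
calibrated = softmax([2.0, 1.9, 1.8])
print(max(calibrated))      # ~0.37 -- the output signals genuine uncertainty

# Same ranking, but with the logits scaled up (an overconfident model):
overconfident = softmax([40.0, 38.0, 36.0])
print(max(overconfident))   # ~0.87 -- looks near-certain to a human reviewer
```

Both runs pick the same top candidate; only the displayed "confidence" differs, and that is the number the human in the loop is likely to defer to.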


Depending on how you read the specific text of the statute, I think they banned the usage of electronic mug shot databases.

>"Facial recognition" means an automated or semiautomated process that assists in identifying, or verifying the identity of, an individual based on the physical characteristics of an individual's face.

So I have a still from security camera footage. And I have 50,000 mug shots stored in the mug shot database. I can see that it's a man in the image, he's white, between the ages of 15 and 50. So I go to my computer and set the filter on the mug shot database to include those criteria. Sounds to me like I just used a semi-automated method to attempt to identify the person in the still image.

This ordinance is dumb. The use of a computer to pattern match, to automate what a person can do, is a lever that makes everyone more efficient. What's the logic in making someone thumb, or click, through every photo in the state's mug shot collection instead of allowing a system to narrow down the possible candidates, if any?


If when you take people's mug shots, you also record their age, gender and race (should you?), then you can filter by those characteristics without the automated process being "based on the physical characteristics of an individual's face" -- you'd just be filtering on a DB column like any other.

But if you didn't record or have access to those fields, and you tried to infer age, gender and race from the photo, and then filter -- well, what if your system is much more accurate guessing ages for some racial groups than others, or more accurate in distinguishing between white and latinx than between latinx and asian? Suddenly your "filtering" process can also have a bias problem, where some people are more likely than others to be included in the results for a query which shouldn't actually include them.
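The "bias via filtering" concern can be made concrete with a tiny simulation (error rates here are invented for illustration): if attribute inference is less accurate for one group, members of that group are proportionally more likely to be wrongly swept into the candidate pool.

```python
import random
random.seed(0)  # fixed seed so the simulation is reproducible

def wrong_inclusion_rate(attr_error_rate, n=100_000):
    """Fraction of people outside the target bracket whom the inferred-attribute
    filter wrongly includes, given the inference error rate for their group."""
    return sum(random.random() < attr_error_rate for _ in range(n)) / n

group_a = wrong_inclusion_rate(0.05)  # inference works well for group A
group_b = wrong_inclusion_rate(0.20)  # inference works poorly for group B

# Group B members are roughly 4x as likely to be wrongly included in results.
print(group_a, group_b)
```

Filtering on a recorded DB column has one error rate for everyone; filtering on an inferred attribute inherits whatever group-dependent error rates the inference model has.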


> then you can filter by those characteristics without the automated process being "based on the physical characteristics of an individual's face" -- you'd just be filtering on a DB column like any other.

A case agent looks at the still photo from security footage and concludes the person is male and white. They put that information into the mug shot database and it returns the subset of all mugshots that are male and white. They have just used a semi-automated system to help ascertain the identity of the person in the security footage based on the physical characteristics of their face. This is forbidden by the ordinance.

>what if your system is much more accurate guessing ages for some racial groups than others, or more accurate in distinguishing between white and latinx than between latinx and asian? Suddenly your "filtering" process can also have a bias problem, where some people are more likely than others to be included in the results for a query which shouldn't actually include them.

What if it isn't? Doesn't matter. It's a tool to attempt to match two photos. If a tool is not as effective due to the way light and different skin tones interact, that just limits the leverage factor of the tool. It doesn't make it wrong, and it isn't an infringement on people's rights. We use DNA matching even though it can't distinguish twins, and sometimes it comes back with a close match for Bob, which leads investigators to Bob's brother's family, e.g. the Golden State Killer case. I fail to see why tools have to be perfect or not be useful.


At least ban speed cameras! I think we can all agree on that. Also, all those cameras that detect when you make illegal turns and send you nasty fines through the post.

The lack of understanding of the downfalls and conflicts with facial recognition with justice system and this post is every reason why it shouldn’t be near the justice system.

It's about security and convenience: it's a tradeoff, so it's impossible to achieve both perfectly.

Typically you find the balance. Right now you are suggesting that this is a convenient way of increasing security, but the reality is that security in one sense can lead to loss of it in another.

Example: assuming everyone is good and there is no possibility of corruption, it would be purely beneficial. But since those aren't controls we actually have, and aren't realistic either, we cannot sacrifice security for convenience in this case, if that makes sense.


I agree. It's hard to understand how this is different from using a computer to detect, for example, credit card fraud. Both pore over tons of private data (typically using algorithms that don't make sense to a layperson – or maybe even to experts, since they likely use DNNs) to determine whether something problematic may have occurred. Both technologies can be used for nefarious purposes, but both can have safeguards that minimize the likelihood and impact of those purposes.

I'm way more scared of guns than I am of facial recognition and as a country we will never ban guns.


You won't have to choose. We'll see guns attached to facial recognition systems soon enough, both "authorized" and not.

I agree with you - this is just forcing policing to be more expensive and inefficient. It would be absurd to ban a police department from using Microsoft Office, for example, and instead force them to track data in physical spreadsheets. Banning facial recognition is equivalent to forcing a police department to hire enough staff to monitor county-wide feeds and match them against a long list of criminals they're looking out for. With humans in the loop and requirements for when facial recognition can be used, I feel like there isn't a "real problem" here. But when we look at quotes from this article, for example the person quoted from Latino Advocacy who is concerned about ICE enforcing the law against illegal immigrants, it's clear that the motivation to ban facial recognition is really more political in nature - it's about letting some people get away with breaking the law more so than fundamental concerns with facial recognition.

I was thinking much the same thing. Facial recognition is just a tool that makes it easier. If there are problems with false detections, you should be able to insert a human check in the process. But banning something makes it look like they are doing something. Meanwhile, it makes the job harder for police.

There are many things the law permits at a small/manual scale but forbids at a larger/automated scale. This makes sense because in the latter case, the cost to the person doing the act is smaller.

For example, there's no law against making one phone call to a random number in the phone book. But if you use an autodialer to call everyone in the city, you're breaking the law.


Watch Netflix' Coded Bias if you want a deeper understanding of the political motivations here - and they really are almost purely political.

I'm not for an instant saying that there aren't any implications in terms of how the technology can and should be deployed - but some people just want to be angry and offended despite a lack of understanding of what they're actually on about.

Facial recognition struggles to accurately identify black people and women.

The difficulties around both are probably around edge and contour detection:

Black people have darker skin and less angular features - the shadows on their faces are not only harder to see (due to blending with skin-tone), but softer to begin with.

Women routinely leave the home wearing enough makeup to not only look less like themselves, but often look more like other women.

There are ways to mitigate these issues to some extent: cameras that work better in low light and codecs that do a better job of encoding relative rather than absolute light levels will do a better job of preserving the information in the faces of people with darker complexions.

As far as makeup is concerned, it can misrepresent the shape of a face - if you only have one angle.

Treating the face shape as invariant as Δt → 0 and running on video (even better with multiple synchronised cameras) will allow for topography extraction despite dealing with someone deliberately misrepresenting what they look like.


That's all very true but any person using their own eyesight would have the same issues. This should be used as a narrowing of a search, same as Google's algorithm. Google doesn't give you the answers to your questions, they hopefully point you in the right direction, but sometimes don't give you a clear answer at all, but in clear and obvious cases it does match. Google also has programmed biases, but we all accept Google is one of the best tools.

Not to the same extent - we've got good enough binocular vision and pretty good handling of dynamic lighting.

Compare that with the usable colours and bad blacks of standard absolute colour encoded video.


Your (poorly informed, IMO) view of the world does not consider immoral and illegal activities carried out by law enforcement and those with authority. Stalking, and the harassment and abuse based on it, is a problem. Making it easier and simpler with facial recognition causes bigger problems. And this is just one example.

My instinct for self-preservation tells me that this is not a good thing. I understand the need for privacy, but what happens if somebody puts a gun (or a knife) to your face? I think that the need for privacy could be solved through the legislation: we can have very severe restrictions on who could look at this data and why. Also, we can have severe restrictions on the admissibility of such data in court. Unfortunately, I have not seen any credible efforts from politicians, right or left, to introduce privacy protections from the surveillance abuses.

The article and discussion are not about privacy. The people against facial recognition are against it, at least in this case, because it is racist - or at least, it produces racist outcomes.

Removing bias from facial recognition is the problem you would have to solve to appease the concerns right now, not privacy.

When innocent minorities are getting locked up because the software running it was trained with poor data, the outcomes of using the software is a racism-entrenched legal and justice system.

Which is why people are fighting against it.


Someone should let these people know that nobody gets put in jail based on the facial recognition’s decision, so their “concerns” are impossible. Not only that, but if anything, it’s less likely to find darker skin tones at all, so it will favor minorities, not hurt them.

It’s a shortcut for manually digging through databases to identify people. Any identification is followed up with investigation, just as it would be if a human matched it. No decision is made by the machine.

So, no, it’s not racist at all.


> Someone should let these people know that nobody gets put in jail based on the facial recognition’s decision, so their “concerns” are impossible. Not only that, but if anything, it’s less likely to find darker skin tones at all, so it will favor minorities, not hurt them.

The article directly contradicts both of your claims:

> "Facial recognition technology is still incredibly biased and absolutely creates harm in the real world," said Jennifer Lee with ACLU Washington. "We now know of three black men who have been wrongfully arrested and jailed due to biased facial recognition technology.


The article provides no details on those cases, and I am not willing to trust the ACLU at their word, given how politically biased they are these days (https://thehill.com/blogs/pundits-blog/civil-rights/347375-a...). I would like to know whether those incidents involved a human in the loop. It's also worth considering the benefits of facial recognition. Just like policing as a whole, I am willing to accept a small number of incorrectly handled incidents against a much larger body of good policing.

I don't think you read the article, which contains examples to support their claims that are the opposite of yours, which do not have any supporting evidence.

We have survived as a society for a long time without the need for this.

You could say the same thing about the 1st, 4th, and 5th amendments. "what about the children"


You may be right. The facial recognition does seem to interact with the 4th amendment, at least. But then, it happens in a public place? I don't know the answer to that one. I fear that in the age of social media and Antifa, the protections that we had before are no longer enough. Now we have additional actors on the streets who may turn to physical violence on a dime. I feel that the streets should be free from physical violence.

Well, there's another amendment to the US constitution that is a substantial contributor to the level and severity of physical violence on our streets. But we won't talk about that...

You mean the rampant hate speech and misinformation enabled by 1A?

Ok, I'm at a loss: what amendment don't we talk about that leads to violence on the streets? Are you just trying to be cute and talking about the 13th?

I guess I shouldn't have been so coy: SECOND AMENDMENT.

I don't care if you're a Proud Boy or a John Brown Society member. More guns == more violence.


This account has been using HN primarily if not exclusively for ideological battle. That is destructive of what HN is supposed to be for, and we ban accounts that do that, regardless of which ideology they're battling for or against. I've banned this one. Please don't create accounts to break HN's rules with; doing that will eventually get your main account banned as well.

https://news.ycombinator.com/newsguidelines.html


> what happens if somebody puts a gun (or a knife) to your face?

Nothing. Either they mug you and leave or you get injured (or they didn't see the cop standing behind them.) Facial recognition is not going to solve that problem.

I'm not informed on the issue, but I'd imagine that preventing them from buying the technology is easier than tightly controlling its use.


Considering how rampant property crime like car break-ins or catalytic converter thefts or shoplifting are in King County, this seems like a really bad decision. I would definitely like to see criminals identified, located, arrested, and sentenced. Not to mention, we just had a whole year of regular rioting in Seattle, with fiascos like CHAZ and daily illegal blockades of highways. This technology makes it much more likely that the police department can actually enforce the law as it exists on the books by identifying and locating these criminals. It also makes it much more likely that home surveillance footage from law-abiding residents can be put to use.

I do not think this technology is intrusive or invasive as the quoted council member claimed. Recording in public spaces is completely legal to begin with. And we can always limit the use of facial recognition by police departments to cases with either a warrant or probable cause, to prevent completely uncontrolled surveillance. The allegations of racial biases are also not meaningful. In practice, false positives in machine vision algorithms are contained by maintaining a human in the loop to verify matches. It is trivial to use this technology in a way that matches human-levels of accuracy with this layer of safety in place.

Banning this kind of technology outright feels like a fear-driven decision by luddites. That's a charitable take - a more direct perspective is that the politicians and interest groups supporting this ban are looking to protect their favored groups, which very frankly seems to include criminals.


The police aren't going to chase the guys who stole your precious catalytic converter, even if you give the PD scans of the bad guys' DLs. The face recognition data, on the other hand, is going to stay forever, and ten years down the road you may get flagged for sitting next to a drug dealer who's been evading police for years. And it will be your responsibility to explain what you were doing there that day ten years ago.

Wow what heroes.

Hey, quick question: Did they also ban Stratfor's gait tracking technology that they piloted for years?


Nice! Ban recommendation algorithm next!

This is great! As a data scientist though, we should go farther and ban using ML anywhere in policing or the justice system. It just has no place in a system that's supposed to presume innocence.

The backstory seems to be that WA state had succumbed to lobbying from Microsoft and passed a law allowing facial recognition, with limits, in 2020 [1][2][3]. The OP, OTOH, refers to a new ordinance that only applies to one county. Statewide, it appears the rules are more lax.

Note that other states had already limited the use of facial recognition by law enforcement before California or Washington, e.g., NH in 2013 and OR in 2017 [4][5].

[1] https://www.wired.com/story/microsoft-wants-rules-facial-rec...

[2] https://www.wsj.com/articles/washington-state-oks-facial-rec...

[3] https://housedemocrats.wa.gov/blog/2020/03/12/house-and-sena...

[4] http://www.gencourt.state.nh.us/rsa/html/VII/105-D/105-D-2.h...

[5] https://law.justia.com/codes/oregon/2017/volume-04/chapter-1...

I would bet Microsoft is lobbying in many states regarding facial recognition and privacy laws to try to get the laws they want. The news will report that they are proponents of passing laws governing these issues, but the point is that they want to shape how the laws are written to benefit Microsoft. I can find citations if anyone doubts this is happening.


"The legislation bans the use of the software by county government agencies and administrative offices, including by the King County Sheriff’s Office."

So anybody else can use it anywhere, including third-party contractors performing work for any of the above parties.

Also, strangely, I see no mention of hardware or other non-software facial recognition technology.


There are the conservatives and the progressives. The conservatives hold up their copies of 1984 and say the end of the world is coming. The progressives just push forward and say everything is fine.

How will this facial recognition tech ban be enforced? The most common users of the tech are the ones who will be enforcing the ban. Are we honestly expecting the police department to self-regulate?

The ridiculousness that is King County at this point is borderline a conspiracy. Most of it surrounds, "insert skin color" bad, all others good.

Our subreddits are cancer and have been known for a while to be the highest manipulated forums for astroturfing on reddit.

Can anyone help? Can we help ourselves?


What are you saying?

Why ban it? It's not the facial recognition that's the problem; the problem is that the justice system believes that it's enough to convict someone of a crime. It's hard to believe that prosecutors can be like, "The computer says it's you, so you're guilty." Especially when there's probably a known margin of error.

exactly. There must be a person in the loop. Facial recognition is just a tool and a very useful one. Banning it is silly virtue signaling. A proper way to address all sorts of possible problems with it is by requiring qualified people in critical decision points within the overall system.

because that's the only option. There is no scenario in which it's allowed in a narrow set of circumstances that isn't going to get abused by law enforcement. Either you ban it, or law enforcement is going to find ways to abuse the right to use it, including appointing people whose job it is to say "this was fine".

If this was a reasonable society, with a reasonable police force: absolutely, narrow definitions of permitted use should be reasonably fine. But the US isn't, and the US doesn't, so in the US, it can't be.

(but then you look at countries where it might be fine, and it turns out that they don't need facial recognition to have a decent enough catch rate, kind of making the whole thing moot. No police force should use it, for wildly different reasons)


Except this is allowed in a narrow set of circumstances - to look for missing children. So they can probably dragnet everyone they see to find the missing children or abductors. Now the residents need a way to prevent the police misusing it by sneakily letting it find rapists and murderers.

Welcome to the security vs. privacy debate. Glad to have you here, please read the pamphlet and please don't resort to a "won't anyone think of the children" argument. Because the statistics around facial recognition just don't work for that argument. However many children you think are saved with this: it's nowhere near that figure, if not flat out zero.

Because people don't enjoy the government knowing what they're doing all the time? It's a bit creepy that in places like London the government knows where everyone is 100% of the time.

Because even the hassle of being hauled into jail for someone to realize the software got it wrong can cause someone to lose their job, and since the software remains flawed it’ll keep picking them!

> It's hard to believe that prosecutors can be like, "The computer says it's you, so you're guilty."

Why is that hard to believe? The article very explicitly and clearly quotes a lawyer who says that is pretty much exactly what is happening.

And regardless, even if there weren't a direct quote from a lawyer on this topic in the article, it shouldn't be hard to believe that a computer's output is going to influence court cases. That's not how humans work, for one - people factor in everything they 'know' about a situation when making a decision. You can't have a computer producing output, have people ignore that output, and still call the computer system useful. If people are ignoring the output, why have the system? And if people aren't ignoring the output, then the (false) output is being factored into decisions about which innocent minorities end up jailed.


Unfortunately, the absolutist nature of the criminal justice system means it does not do a good job of integrating nuance.

Take for examples:

How unimpeachable police testimony is.

How hard it is to overturn a conviction.

How hard it is to dislodge forensic techniques that are known to be bunk.

I could go on. Probably the best correlate we have to this is the rejection of the polygraph and boy in retrospect are we just lucky about that. If King County manages to be the first in such an action against facial recognition they may be worthy of a great deal of praise.


There's far more than the racial component. Do you want the government tracking you? It helps authoritarian rule (who always victimize minorities first and more harshly). I think a big problem with this topic is that we're only focusing on the "it doesn't work" aspect and not even considering the "but what if it perfectly works, do we still want it?" question. Humans haven't historically been great at dealing with questions that rely on foresight, but that doesn't mean we should ignore those questions.

Exactly. No one wants to live in a world where 100% of crimes are prosecuted 100% of the time. Not really.

Do you want to receive a citation every time you exceed the speed limit by 1 mph? Ever pull off into a parking lot to check your phone or look for something in your car? Trespassing. Ever lose your temper and yell at someone, even briefly? That's assault in most jurisdictions.


That's right. But the justice system can't be fixed. So we prevent it from using these tools.

It's to prevent some halfwit police officer talking to a halfwit judge and prosecuting some innocent fellow. #donttreadonme


Sad news, if not unexpected. Yet another municipality prioritizes ginned-up grievance over the actually existing harms of unchecked criminality.

I can understand wanting to put heavy restrictions on the use of this technology, for example a rule that someone cannot be arrested based on software alone, and a rule that it can only be used to help identify individuals that are clearly committing a crime. But it seems to me to be batshit insane to ban it completely. If we have video of, say, an armed robbery, and we (only) use software to get a suspect list before we have human beings looking at mug shots, what is the alleged downside?

> If we have video of, say, an armed robbery, and we (only) use software to get a suspect list before we have human beings looking at mug shots, what is the alleged downside?

Based on my reading of the article, the 'alleged' downside (this is very clearly real, there is no need to say alleged here!) is that minorities are being swept up in the system and wrongfully accused of crimes and then jailed.

Your suggestion is, what if we didn't jail them? Just round up the innocently identified minorities and start investigating them? That is... hardly an improvement. In fact, while it might reduce the number of falsely accused innocent people going to jail, it could increase the number of racist incidents where police harass innocent minorities. And worst of all, it could further entrench racist decision making inside police departments.

A much better suggestion in my opinion would be, test the software before deploying it so that it doesn't make racist decisions. Set the bar VERY high for this, so the software has to be really, really, really good. Otherwise don't use it.
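The pre-deployment test the parent suggests could look something like a per-group error-rate audit. This is a minimal sketch with made-up data and a hypothetical `passes_audit` threshold, not any real certification procedure: compute the false-positive rate separately for each demographic group and reject the system if any group exceeds a strict bar.

```python
from collections import defaultdict

def false_positive_rate_by_group(results):
    """results: iterable of (group, is_genuine_match, system_said_match).
    Returns each group's false-positive rate over its non-matching pairs."""
    fp = defaultdict(int)
    negatives = defaultdict(int)
    for group, genuine, predicted in results:
        if not genuine:           # only non-matching pairs can be false positives
            negatives[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives}

def passes_audit(results, max_fpr=1e-4):
    """Reject the system if ANY group's false-positive rate exceeds the bar."""
    rates = false_positive_rate_by_group(results)
    return all(rate <= max_fpr for rate in rates.values())

# Toy data: group B's error rate is 10x group A's, so a strict audit fails.
trials = [("A", False, False)] * 9999 + [("A", False, True)] \
       + [("B", False, False)] * 999 + [("B", False, True)]
print(passes_audit(trials, max_fpr=2e-4))  # False
```

The design point is that an overall accuracy number can hide a large disparity; auditing the worst-performing group, not the average, is what sets the bar "VERY high."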


> Based on my reading of the article, the 'alleged' downside (this is very clearly real, there is no need to say alleged here!) is that minorities are being swept up in the system and wrongfully accused of crimes and then jailed.

Then is it really fair to call it facial recognition if it doesn't work? Or to say "we are banning this until it is more reliable" or something like that?


To be clear, there are a lot more downsides too. Dystopian sci-fi brings up facial recognition technology a lot, and how the technology can be abused to help a government rule with authoritarian power. I do think it is important to think about these issues, since democracy isn't stable (before we start a flame war: I'm not convinced any system is, and I think you'd be hard pressed to argue one is). We should consider the vulnerabilities in our system of governance, and this seems to be a common one: tracking citizens. But I think in our society the "potential of authoritarian takeover" (sometimes called "turnkey tyranny") is less of a concern for people than racial bias (which is still an important topic, don't get me wrong). I think the difference is that one requires foresight and the other is an effect we can see right now. And of course, historically we've seen minorities become the early victims of authoritarians.

We should think long and hard about these technologies and how to use them properly. Historically we haven't thought much about ethics in computer science, but ethics can't be an afterthought. We (Hacker News readers) are the experts, and we needed to be talking about this decades ago, not decades from now. So let's not ignore the conversation.


now assume you’re a racist pos that is also by chance enforcing the law. also assume that you’re allegedly going to use this as a pretext to investigate people that even though don’t appear on the footage sort of look like the persons in the footage (the software told you so!). now assume that based on your harassment you find something and you arrest those poor souls.

does this seem far fetched? if you haven’t experienced this you may allegedly be privileged.


A small chance of being robbed is way better than a global surveillance system that's tracking my face in all public places. Besides, criminals will simply learn to wear masks.

I won't lie, I truly understand the knee-jerk reaction to Orwellian surveillance by a police state. It's creepy, but it's somewhat in place already via near-constant NSA tracking that gets re-upped by private Senate hearings, so that's neither here nor there. However, as far as facial recognition tracking goes, this doesn't seem like it needs to be an "all or nothing" agreement, as you say. Why can't we use facial recognition software with mugshots to narrow a list of suspects, or use facial recognition software to enhance police work instead of doing a prosecutor's job? It's possible to ban facial recognition in certain arenas instead of altogether.

You're assuming that the technology works well and that people will understand its limitations and not gradually expand its application. The downside is that that assumption is an unrealistic fantasy.

Innocent people will be charged with crimes by lazy cops who don't want to do the actual police work of thoroughly investigating suspect lists. There will be juries who don't understand prediction error and find innocent people guilty because they trust the "expert" who says the facial recognition system reported "99.99% accuracy". Some of the 40% of cops who have committed domestic violence will make unauthorized use of this system to hunt down fleeing battered spouses - and this unauthorized use will go unpunished. Innocent black people will be identified as subjects and arrested because of racial disparities in accuracy of these systems. Innocent political activists will be charged with crimes they didn't commit because a cop found video of a crime that the software says plausibly looks like the activist.

Every time police get a new tool for establishing guilt, it is abused. The only way to make a facial recognition system that is just is if it is out of control of the police and it can only be used with a valid court warrant - and even then, I'm not sure that I trust the court's ability to control such a system securely.
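The jury-misunderstanding point above is really the base-rate fallacy, and it can be made concrete with a small back-of-the-envelope calculation. These numbers are hypothetical, but the shape of the result holds for any large-gallery search: even a matcher whose false-positive rate matches a "99.99% accuracy" headline figure produces mostly false hits when the database is big.

```python
def expected_matches(gallery_size, true_matches, false_positive_rate):
    """Expected (true hits, false hits) when one probe image is searched
    against a gallery, assuming independent per-identity errors."""
    false_hits = (gallery_size - true_matches) * false_positive_rate
    return true_matches, false_hits

true_hits, false_hits = expected_matches(
    gallery_size=1_000_000,    # e.g. a statewide mugshot database
    true_matches=1,            # the actual culprit is in the gallery
    false_positive_rate=1e-4,  # the advertised "99.99% accuracy"
)
print(true_hits, round(false_hits))  # 1 true hit vs ~100 false hits
```

So a "match" from such a system is, on these assumptions, roughly 100x more likely to point at an innocent person than at the culprit - which is exactly the prediction-error nuance a jury hearing "99.99% accurate" is unlikely to grasp.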


Next, perhaps, they can ban tracking via wireless signals and cameras tracking your license plates. Amazon ring too while we’re at it. I know some students who tracked peoples wireless signals as a school project recently, it was creepy

Let's go further with facial recognition?

Let companies use on our children. Let companies/government scan our kids from birth to death.

Put cameras in schools, on every street corner, government buildings, parks, doctor's offices, and of course scan every face that goes through governmental facilities (DMV, Criminal/civil courts). Put cameras in all stores. Put a Ring type camera on every residence.

Government could then predict who will most likely succeed in our complicated, competitive society.

The good kids will go to the best schools of course. The good adults will get preferential treatment in hiring, loan acquisition, housing, etc.

Stores will track our every buy. Stores will know I stole an olive from Whole Foods once.

STDs will be a thing of the past. We will know who likes to booze it up. We will know what percentage of fast food they consume, and we will definitely know who uses tobacco.

We can do away with Credit Scores. We can do away with most tests. We will be able to predict a person's future success with data.

Cops can give up patrols, and just use AI programs to catch bad guys, or prostitutes.

This is Not the future I want.

I cannot believe Americans are even putting up with the surveillance we already have.

(The only thing I like about cameras/surveillance is it might make cops a bit more honest?)


I'm always amazed at how many people watch movies/TV like Blade Runner, Black Mirror, Terminator, Altered Carbon, Her, Ex Machina, Firefly, Cowboy Bebop, etc. and think "Yeah, that looks like a future I want." Dystopian sci-fi is a warning, not a blueprint. Even the utopian sci-fis often reference a struggled past in transitioning to their societies. And a lot of these problems deal with surveillance or an adjacent technology (like Minority Report or the more realistic Psycho-Pass).

That's essentially the plot of Tomorrowland (2015). Images of the dystopian future are broadcast from "Tomorrowland" (sort of a different universe) back to ours as a warning to prevent it. Instead we fetishize these images in pop culture.

https://en.wikipedia.org/wiki/Tomorrowland_(film)#Plot

The movie isn't great, but its critique of pop culture isn't wrong. We have very little hopeful science fiction these days apart from Star Trek.


What do you mean future? Data brokers already have virtually all of this information.

I hate to exercise my lawyer brain, but this ordinance is tailored in a somewhat weird way: it seems to me it would prohibit the county sheriff and other agencies from posting any kind of photo on Facebook, which has the ability to segment human faces and tag people, or from using an application like iOS Photos or Google Photos.

The ordinance is here (assuming this is a permalink): https://mkcclegisearch.kingcounty.gov/View.ashx?M=F&ID=94173...


Think they may soon ban voice recognition and voice signature ... body infrared signature ... body electromagnetic spectrum ... and eventually ban the electromagnetic force :-) ...

The FBI and CIA don’t have to adhere to these rules so expect local police departments to dump more cases their way.

NO, King County banned "government use" of facial recognition software.

This is such an annoying trend in media coverage of this issue: every city in the world is happy to restrict THEIR USE of facial recognition. That's a symbolic and small victory. It will always be a symbolic and small victory.

Every headline leaves the "government use" part out. Every single time. That's because people who write headlines know what everyone knows: that "government use" is a small slice of the overall problem. Anyone who raises this issue with these headlines will hear from some annoying person chiming in to list out why government use of facial recognition is bad (as though that's the basis for my rub here. It isn't.)

But individual localities making internal rules to stop their own use of facial recognition software aren't solving the big problem. This has been going on for years, and cities banning their own use of facial recognition has had zero effect on any political entities other than other cities seeking easy headlines. It's not solving the societal problems that facial recognition poses. It's just earning the click. Cities should stop distracting from that bigger set of problems with this stuff.


The corporate-government exploitation of the people is staggering. Corporations do all the spying and then give APIs to the government. No constitutional problems over here.

Fascism or public-private partnership... It's the same in-group in power, protecting each other, regardless of which arm of the state (corporate, government, religion, media, universities) is currently doing the most effective social engineering.

Police use of facial recognition software already has bad impacts today, so at least preventing PD use will prevent real harm.

> Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match

> A New Jersey man was accused of shoplifting and trying to hit an officer with a car. He is the third known Black man to be wrongfully arrested based on face recognition.

https://www.nytimes.com/2020/12/29/technology/facial-recogni...


Hey, and at least I addressed this reply in advance of you making it.

The existence of a massive wildfire does not really mean a local house fire is not something to be addressed as well.

The amount of people going to jail because of shoddy, unauditable code with no due process should be zero, so these announcements matter.


Legislation link: https://mkcclegisearch.kingcounty.gov/LegislationDetail.aspx...

Interesting parts:

> The Proposed Ordinance would also prohibit county administrative offices and executive departments from issuing any permit or entering into any agreement which authorizes a third party to use facial recognition technology or obtain or access facial recognition information on behalf of the county. However, evidence relating to the investigation of a specific crime that may have come from facial recognition technology may be used by a county administrative office or executive department so long as the evidence was not generated by or at the request of the county office or department.


It's not like they can't call the FBI or whatever federal agency and have them do it for them.

"""“Facial recognition technology is still incredibly biased and absolutely creates harm in the real world," said Jennifer Lee with ACLU Washington. "We now know of three black men who have been wrongfully arrested and jailed due to biased facial recognition technology. So, this powerful surveillance tool inevitably exacerbates already racially biased policing"""

Seems like a blatant error to reason that, because there are instances of facial recognition being used incorrectly, the technology itself is fundamentally flawed. You could apply the same reasoning to the police themselves - I know of at least three instances where the police arrested the wrong person, so should we ban the police?

With the police example it seems obvious we should continue iterating and improving the police to correct errors, but that we don't want to throw out the whole institution because that would be much worse. The police may have caused some harm, but did they do more good?

I'd bet that banning facial recognition is net harmful to black and Hispanic people. Black and Hispanic people are disproportionately likely to be victims of crime, so denying effective tools to law enforcement will disproportionately hurt those communities. Of course, it's academic as King County didn't use facial recognition - but I bet it could be a useful tool if used responsibly.

I've read news articles before about cases where facial recognition was used irresponsibly. We absolutely need to put a stop to that. I think if we treat it more like an unreliable witness saying "The person I saw reminded me of X" and less like a definitive ID that may be a better position.
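The "unreliable witness" framing above is essentially a Bayesian one, and a short worked example (with made-up numbers; the `posterior_given_match` helper is just illustration) shows why it matters: when the prior probability that any given gallery member is the culprit is tiny, even a strong match leaves the posterior far below certainty.

```python
def posterior_given_match(prior, true_match_rate, false_match_rate):
    """P(this person is the culprit | system reports a match), via Bayes' rule.

    prior            - probability the person is the culprit before the match
    true_match_rate  - P(match reported | person really is the culprit)
    false_match_rate - P(match reported | person is NOT the culprit)
    """
    p_match = true_match_rate * prior + false_match_rate * (1 - prior)
    return true_match_rate * prior / p_match

# One culprit somewhere in a 10,000-person gallery -> prior of 1e-4.
# Even with a 95% hit rate and a 0.1% false-match rate, the posterior
# is under 9%: evidence worth investigating, nowhere near a definitive ID.
p = posterior_given_match(prior=1e-4, true_match_rate=0.95,
                          false_match_rate=1e-3)
print(round(p, 3))  # ≈ 0.087
```

By contrast, with a strong independent prior (say, 50% from other evidence), the same match pushes the posterior above 99% - which is roughly what "corroborating an unreliable witness" means in numbers.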

