
I can't help but roll my eyes at the introduction's "unregulated facial recognition software" line. That is a meaningless term given the lack of regulation in the first place, and it says nothing in itself about effectiveness. The Clipper Chip's infamous Skipjack cipher was regulated. It annoys me mostly because meaningless rhetoric makes it look like they have no defensible stance.

That rant aside, I am curious whether this technique will lead to more resilient facial recognition and image-parsing techniques for finding the shape. Obviously the fact that humans can still recognize it is a hint that some other algorithm is possible.




Does your criticism extend to the whole enterprise of facial recognition, or did this research somehow fail to include the secret sauce that makes it work?

I mean, I'd read that as "facial recognition technology products which actually exist in the real world are fundamentally biased". The platonic ideal of facial recognition technology wouldn't be, but it doesn't exist and never will, so it's hardly worth worrying about.

Others have pointed out that it doesn't seem possible to identify a single face out of millions with enough confidence to issue a fine with current technology.

The article mentions the AI Act and claims it “restricts the use of remote biometric identification — including facial recognition technology — in public places unless it is to fight “serious” crime, such as kidnappings and terrorism”, which I am afraid seems rather hollow as that is basically how it is already used.

I’m not sure whether it’s the reporting that’s confused or the politicians, but this distinction between three different things (“facial recognition technology”, “private facial recognition databases”, and “predictive policing”) is very strange. Modern facial recognition algorithms are just clustering vectors, and facial recognition databases are just lots of pictures of faces, neither seems well-defined enough to easily ban. And predictive policing is just plain bizarre - what relationship does that have with facial recognition at all?
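To make the "clustering vectors" point concrete: a modern pipeline embeds each face as a fixed-length vector and matches a probe against a database by cosine similarity. A minimal sketch (the embeddings and threshold here are made up for illustration; real models produce 128- to 512-dimensional vectors):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional embeddings; real systems use far more dims.
database = {
    "person_a": np.array([0.9, 0.1, 0.0, 0.2]),
    "person_b": np.array([0.1, 0.8, 0.3, 0.0]),
}

def identify(probe, db, threshold=0.8):
    """Return the best database match above the threshold, else None."""
    best_name, best_score = None, threshold
    for name, ref in db.items():
        score = cosine_similarity(probe, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

probe = np.array([0.85, 0.15, 0.05, 0.18])
print(identify(probe, database))  # matches "person_a"
```

Note that neither the vectors nor the threshold is well-defined enough to legislate against: the same code is "face blurring support" or "mass surveillance" depending only on what's in the database.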


From my understanding, the technology is not deeply flawed. It just has accuracy problems when dealing with edge cases.

Facial recognition is not a 100% independent solution, but instead should aid other systems, including human-run systems. I think this is the concept that critics fail to grasp.


I was thinking much the same thing. Facial recognition is just a tool that makes it easier. If there are problems with false detections, you should be able to insert a human check in the process. But banning something makes it look like they are doing something. Meanwhile, it makes the job harder for police.

I have a ton of quibbles with the article and methodology, honestly. I suspect it's just straight-up so flawed as to be irrelevant to the discussion.

At the same time, they do still have the correct final conclusion. Dragnet facial recognition is a bad idea that will produce too many false positives to be of use. Or at the very least, dragnet facial recognition used by people who put too much faith in it is a bad idea. It's at most useful to produce flags for slightly more attention; on its own, I would not consider it anywhere close to enough to arrest someone on.
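The false-positive problem above is plain base-rate arithmetic. A back-of-the-envelope sketch with assumed numbers (hypothetical, not measurements of any real system):

```python
# Hypothetical figures to illustrate the base-rate problem with
# dragnet matching; not measurements of any real deployment.
population = 1_000_000       # faces scanned by the dragnet
watchlist = 100              # actual targets in the population
false_positive_rate = 0.001  # 99.9% specificity, quite generous
true_positive_rate = 0.99    # 99% sensitivity

true_alarms = watchlist * true_positive_rate
false_alarms = (population - watchlist) * false_positive_rate

# Probability that a flagged person is actually on the watchlist.
precision = true_alarms / (true_alarms + false_alarms)
print(f"false alarms: {false_alarms:.0f}")   # ~1000
print(f"precision: {precision:.1%}")         # ~9%
```

Even with these generous accuracy assumptions, roughly nine out of ten flags are false, which is why a flag can at most justify a closer look, never an arrest on its own.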

And it can serve as a demonstration of how a clumsily-put-together system can be used to produce bad results, so, if we correctly assume that there may be people bodging together a system in much the same way the ACLU did, well, hey, it's a valid result then!


I don't think there is such a thing as responsible use of facial recognition technology by law enforcement.

The technology is certainly not robust enough to be trusted to work correctly at that level yet. Even if it were improved, I think there is a huge moral issue with the police having the power to use it indiscriminately on the street.


Is facial recognition more biased than manually comparing against mugshots? A lot of people seem to be writing under the impression that if facial recognition software is banned, police and government won't use facial recognition. The reality is that they will still use facial recognition, just in the old fashioned way by looking at suspects and looking at mugshots. And humans are often biased, too.

I don't get it. This article seems to be a hodge-podge of kooky old ideas and innuendoes comparing them to modern technology. What does the author want, an immediate ban on facial recognition software?

"So where does this leave us for facial recognition?"

Where it leaves us is that it doesn't work, and it can't work. I see no evidence that there is some big reservoir of facial recognition quality to be extracted from the same basic data set. There are all sorts of reasons to believe that it is simply impossible to create a system that can be given a small percentage of the population as the targets and pick them out from millions of samples correctly.

Of all the disciplines, those trained in computer science should be aware of the concept that problems can be fundamentally hard or unsolvable.

However, I've been careful to phrase what I think may be fundamentally unsolvable as being related to "the same basic data set". Expansion of the data set provides other possibilities, and while I'm not ready to declare that adding that data will certainly solve the problem, I'm not ready to declare it fundamentally unsolvable either. Add portable device tracking, gait analysis, speech analysis, anything else some clever clogs can think of, and probably drop the requirement that de facto minimum-wage workers be asked to confront nominal criminals (I would assert there is no solution to the mismatched incentives there), and the problem may well be solvable. It would, however, require Rite Aid and anyone else planning to use this sort of thing to radically upgrade their hardware.


When facial recognition fails, it usually looks pretty weird. Like it will identify faces in the knots and grain patterns of a wooden wall behind somebody. It's really a very ineffective technology and it's disturbing that it gets sold as something you could use to justify locking someone in a cage over. Machines with pareidolia are not good tools.

> There are already several anti-bias and discrimination laws on the books. Why does facial recognition warrant additional regulation?

Mainly because many companies doing it are arguing that when their models produce biased results, it's not their fault, it's just "computer thinks that way". So far as I know, this approach hasn't been properly tested in court, but it might just fly, if courts decide that you need to have intent to discriminate (and that training on real-world datasets, that are always implicitly biased, does not constitute such intent).


Maybe I'm naive, but I don't see anything positive coming from facial recognition. Its two main uses are targeted advertising and finding dissidents. It could help find a handful of criminals, but considering the high level of false positives, the inevitable abuse like every other police technology, and the fact that I live in a police state where I'm unknowingly breaking any number of laws at any given moment, I don't see how it helps more than it hurts. Might as well require people to carry papers at all times and submit to police checkpoints at every street corner.

There are almost no ethical uses for facial recognition. It is a technology for criminals.

"Facial recognition doesn't discriminate people. People discriminate people."

This just reminded me of a typical gun control argument (which I don't disagree with). Banning one particular technology might be still worthwhile, but the argument is valid only after the actual damage is assessed. I think we're still missing the number. (Arguably it's very hard to measure though!)


> Based on my reading of the article, the 'alleged' downside (this is very clearly real, there is no need to say alleged here!) is that minorities are being swept up in the system and wrongfully accused of crimes and then jailed.

Then is it really fair to call it facial recognition if it doesn't work? Or to say "we are banning this until it is more reliable" or something like that?


I'm not sure I follow the point about robust facial recognition being cheaper to break. Like others pointed out, I don't see how this could be a scalable tactic.

I think it's good that it is simple. People are constantly complaining about the excess of legalese in various contexts. A law that can be understood is cool and a breath of fresh air. I doubt the judges will have any trouble parsing between face blurring and face recognition.

Also, the definition excludes the face blurring use case. "'Face Recognition' means the automated searching for a reference image in an image repository." Although that detail does mean that if you're not searching for "a reference image" then you would technically be clear of the law's restriction. What if you build features based on a reference image then delete the image? Or what if what you're searching for is a person and the image is not the search target?

That I suspect would take more effort to parse. But the penalties are pretty light and the provable damages done by facial recognition are pretty minor, so if organizations get a lot of value out of facial recognition then they might want to try rolling the dice.

