Color me extremely skeptical. A low-pass filter will make short work of any "tiny, pixel-level" changes designed to thwart ML. After all, one of the most tell-tale identifiers (space between eyes/nose/mouth) is still plainly observable and unaltered in the "cloaked" image.
If a human's neural network can correctly correlate the before/after examples, so can a computer's. They might have found an issue with some modern implementations of facial recognition, sure. But it's a false sense of security to claim "when someone tries to identify you using an unaltered image of you [...] they will fail."
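A minimal sketch of that low-pass idea, assuming numpy and scipy and simulating the "cloak" as small random pixel noise on a synthetic image: blurring attenuates the high-frequency perturbation far more than the underlying content.

```python
# Sketch: a low-pass (Gaussian blur) pass washing out tiny pixel-level
# perturbations before an image reaches a recognizer. All data synthetic.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
clean = rng.uniform(0, 1, (64, 64))                   # stand-in for a face image
perturbation = 0.05 * rng.standard_normal((64, 64))   # tiny pixel-level "cloak"
cloaked = np.clip(clean + perturbation, 0, 1)

# Blur both; the high-frequency cloak is attenuated much more than the image
smoothed_clean = gaussian_filter(clean, sigma=2)
smoothed_cloaked = gaussian_filter(cloaked, sigma=2)

before = np.abs(cloaked - clean).mean()
after = np.abs(smoothed_cloaked - smoothed_clean).mean()
print(after < before)  # the cloak's footprint shrinks after low-pass filtering
```

This only shows the perturbation shrinks under blurring; whether enough signal survives for the recognizer is a separate question.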
I wonder if this would be defeated by running an image I wanted to match through it first. Would current state-of-the-art facial recognition match the two cloaked images, or did they already consider that attack vector?
I'm curious: since the facial recognition algorithms they are testing are created with ML, would adding instances of someone wearing these glasses to the training set result in a version that wouldn't be fooled? Or is this something ML algorithms are inherently weak against? Is this an exploit that could be patched, in other words?
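The "patch the exploit" idea can be sketched on toy data — a tiny nearest-neighbor "recognizer" standing in for a real face model, with everything synthetic: craft inputs that fool it, fold them (correctly labeled) back into the training set, retrain, and check whether they still fool it.

```python
# Toy 1-nearest-neighbor stand-in for a face recognizer; data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
train_X = np.vstack([rng.normal(+3, 1, (50, 4)), rng.normal(-3, 1, (50, 4))])
train_y = np.array([1] * 50 + [0] * 50)

def predict(model_X, model_y, queries):
    # 1-NN: return the label of the closest training point
    d = np.linalg.norm(queries[:, None, :] - model_X[None, :, :], axis=2)
    return model_y[d.argmin(axis=1)]

# "Cloaked" images: class-1 faces nudged most of the way toward class 0
victims = rng.normal(+3, 1, (20, 4))
cloaked = victims + 0.9 * (train_X[train_y == 0].mean(axis=0) - victims)
fooled_before = np.mean(predict(train_X, train_y, cloaked) == 0)

# The "patch": fold the cloaked images, correctly labeled, into training
patched_X = np.vstack([train_X, cloaked])
patched_y = np.hstack([train_y, np.ones(20, dtype=int)])
fooled_after = np.mean(predict(patched_X, patched_y, cloaked) == 0)
```

This is the core of adversarial training, but note the caveat: the attacker can then re-optimize against the patched model, which is exactly the arms race described further down the thread.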
The fact humans can easily see through the disguise means there are huge glaring faults in the recognition algorithm. These types of attacks will probably be wiped away one day with a single breakthrough.
You can't fool a human by covering up a small percentage of something with noise. Humans can recognize parts of a face pretty well; classifying smaller parts of an image separately may help defend against this (and also detect composite images, e.g. where the eyes and mouth of different people are combined).
Also, humans instantly detect that those glasses are noise; there probably needs to be a filter that removes obvious noise before face recognition begins.
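That pre-filter idea can be sketched with a median filter, a standard choice for stripping isolated "obvious noise" pixels before downstream processing. Assumes numpy and scipy; the "face" here is just a smooth synthetic gradient.

```python
# Sketch: median-filter an image so isolated noisy pixels are replaced by
# their neighborhood before the recognizer ever sees them. Data synthetic.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
face = np.linspace(0, 1, 64 * 64).reshape(64, 64)  # smooth stand-in "face"

# Sprinkle salt-and-pepper noise over ~5% of the pixels ("obvious noise")
noisy = face.copy()
mask = rng.random(face.shape) < 0.05
noisy[mask] = rng.choice([0.0, 1.0], size=mask.sum())

denoised = median_filter(noisy, size=3)  # 3x3 median neighborhood

err_noisy = np.abs(noisy - face).mean()
err_denoised = np.abs(denoised - face).mean()
print(err_denoised < err_noisy)  # the filter recovers most of the image
```

A structured adversarial patch like the printed glasses is harder to remove than salt-and-pepper noise, so this is an illustration of the principle rather than a working defense.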
Saw this last night, and since I work on the application side of facial recognition, I sent the paper and this discussion to our neural R&D lab (on the other side of the planet, where it was morning for them). The chief scientist read the paper with interest, ran their images through our system, and found that none of them worked as the paper claimed. I suspect the technique described had already been anticipated by some security-industry technologists and measures put in place to combat these types of attacks; those who foresaw these developments have now had their suspicions confirmed.
We're still in the phase where different models can play cat and mouse, but I wouldn't count on this lasting very long. Given that we know it's possible to correctly recognize these perturbed images (proof: humans can), it's only a matter of time until AI catches up and there's nothing you can do to prevent an image of your face from being identified immediately.
It happens to trick current facial recognition systems looking for typical faces... but I imagine it would be well within the realm of possibility to create a "juggalo" filter that simply adjusts the levels until the impact of the makeup is reduced enough to pull a face out of the image.
You don't need a disguise. You just need some accessories or makeup to throw off the neural network. Humans are still much better at facial recognition than computers, but even they can get thrown off by makeup. There's no way to tell whether someone is intentionally trying to avoid facial recognition or just regularly wears makeup or sunglasses.
tl;dr Using spectacle frames with the other person's image "perturbed" onto the frame, facial recognition algorithms can be fooled into thinking your face is someone else's.
(There is a weirder ability to hide a human face entirely using a similar approach.)
This is surprisingly cool - the fact that the algorithm can be fooled is not amazing, but that they could find a fairly practical attack is quite impressive.
And at least now a computer can think I look like George Clooney, even if it won't work in a singles bar.
I've been looking into anti-facial-recognition makeup, and I wonder how much you'd have to apply to defeat the algorithm - like, would a line of makeup breaking up the contrast pit of an eye be sufficient?
And then there's reflectacles - glasses designed to reflect IR and visible light. I presume they'd mess with an algorithm, but could you code that algorithm to recognise them and raise a flag requiring human input?
Ah man, I love how the cyberpunk dystopian future of my youth has slowly arrived, sadly with less magic and orks, and last I checked, no shady street samurai in bars looking to hire me for my leet decker skills.
Probably not. From what I understand, facial recognition software relies heavily on things you can't change, like the distance between your pupils and the ratios of your facial features to each other. You can certainly obscure your face, but if you were forced to remove sunglasses and such, say at a security checkpoint, you'd probably get ID'd.
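A toy illustration of why those geometric features are hard to change: ratios of landmark distances are invariant to scale, so moving closer to or farther from the camera doesn't alter them. The landmark coordinates below are made up for illustration.

```python
# Sketch: a scale-invariant "signature" from facial landmark distances.
# Landmark positions are hypothetical, not from any real system.
import numpy as np

landmarks = {
    "left_pupil":  np.array([30.0, 40.0]),
    "right_pupil": np.array([70.0, 40.0]),
    "nose_tip":    np.array([50.0, 60.0]),
    "mouth":       np.array([50.0, 80.0]),
}

def signature(pts):
    d = lambda a, b: np.linalg.norm(pts[a] - pts[b])
    ipd = d("left_pupil", "right_pupil")  # interpupillary distance
    # Ratios relative to IPD cancel out any uniform scaling (camera distance)
    return (d("nose_tip", "mouth") / ipd, d("left_pupil", "nose_tip") / ipd)

near = signature(landmarks)
far = signature({k: 0.25 * v for k, v in landmarks.items()})  # same face, farther away
print(np.allclose(near, far))  # prints True: scale cancels out of every ratio
```

Obscuring a landmark hides the ratio, but as the comment says, that only works until you're asked to take the sunglasses off.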
BTW, if it's immediately obvious to me that the next development in this technology will be for mask wearers to dynamically load new randomly generated faces (and even facial shapes and feature types) at whim, then I'm damn sure others will already be working on the problem of actually developing the technology.
It seems to me that the cat's now out of the bag thanks to unfettered/unregulated access to facial recognition by all and sundry.
Probably - it's an arms race, after all. At this time, the following steps can confuse the stupider systems:
1 - Use makeup techniques to make the nose fade into the face, and don't let it get a profile view - noses are very important. A prosthetic nose probably won't help though, because the nose is an orientation characteristic more than an identification one.
2 - Eyes and the facial geometry around them are secondary, so no nose plus giant movie-star sunglasses adds to the machine's confusion.
3 - Collar-worn sparkle lighting directed upwards is baffling to the machine - the silly things keep picking up the light sparkles on the face as feature points (heh).
4 - Makeup patterns based on WW2 dazzle ship camouflage mess up geometry detection.
Now, all of that will defeat a simple recognizer but if the developers had a lick of sense, the system will still pop it out as anomalous.
Mission Impossible-style facial masks could certainly help, but pretty much everyone is removing the IR filters at this point, and the facial thermography would be all messed up.
Then there's all the work going on in recognizing more than just a face - I've seen some work on incorporating more of the body into the recognition pattern (I've always wanted to watch one of those systems misrecognize Mitch McConnell as a turtle :-). As the available horsepower continues to expand, more of the totality of the individual can be recognized. (My own accidental contribution to this: cheap stereoscopic vision makes it a lot easier and more accurate to pick out foreground from background.)
At the end of the day, I think Scott McNealy had it right in the '90s: "You have no privacy. Get over it." I am of two minds about this stuff in public spaces; the good and bad examples are evenly distributed.
Weird how you act as if it was always possible, yet it didn't begin to occur until recently, with the application of facial recognition. Must be some sort of weird coincidence, right? What reason do you propose?