I know literally nothing about this science, so that paper left me with the following question:

Suppose there's a face-recognition door lock or some similar system. If I wanted to break such a lock, could I install the same system at home, train it with secretly taken pictures of an authorized person, and evolve some kind of key picture against my home copy until I could show it to the target lock and fool it into giving me access?

OK, this is a very simplified way to put the question, but is that something this paper implies would be possible (in a more sophisticated way)?
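For intuition, here's a minimal sketch of that "evolve a key picture" loop in Python, assuming you had a local surrogate matcher to score candidates against. Everything in it (the surrogate_score function, the fake enrolled template) is a made-up stand-in, not any real product's API, and as the replies below note, a picture that fools a home copy wouldn't necessarily transfer to the real lock.

    import numpy as np

    rng = np.random.default_rng(0)

    def surrogate_score(img):
        # Toy stand-in for a home-trained face matcher: returns a "match
        # confidence" that just rewards similarity to a fixed fake template.
        template = np.linspace(0.0, 1.0, img.size)
        return 1.0 / (1.0 + np.linalg.norm(img - template))

    img = rng.random(64 * 64)            # start from random noise
    score = surrogate_score(img)

    for _ in range(10_000):              # random hill climbing ("evolution")
        candidate = np.clip(img + rng.normal(scale=0.05, size=img.shape), 0.0, 1.0)
        s = surrogate_score(candidate)
        if s > score:                    # keep a mutation only if it scores higher
            img, score = candidate, s

    print(f"final surrogate confidence: {score:.4f}")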



Yes, exactly.

However, you will need the database or training data fed into that lock system. Without that data, you won't be able to reproduce the lock's behavior at home.


You could probably arrange for that person to appear in a photograph (say, ask them to hold a sign reading "I support our war heroes!", or something else everyone would agree to hold if you said you were making a photo album for a non-profit) and work from there.

But in general, this class of problems is why a biometric lock is only useful if accompanied by a guard with a gun stationed next to it.


You may be interested in this talk: https://www.youtube.com/watch?v=tleeC-KlsKA#t=282

The speaker runs you through a hypothetical case study, a pet-door company, and looks at the pitfalls of applying machine learning to it.

I believe the paper is actually focusing on something else: creating images that humans will not be able to classify as a digit, but that the net will gladly give a prediction for. Translated to faces: there may be clouds that look random to our human brains but are detected as faces by nets.
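As a toy illustration of that point, here is a sketch of gradient ascent on one class's score, starting from pure noise. The "net" is a hypothetical random linear softmax model, not the paper's networks; the point is only the shape of the loop: the image never starts to look like a digit, but the model's confidence climbs anyway.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(10, 28 * 28))   # hypothetical "trained" weights

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    x = rng.random(28 * 28)   # random noise -- no human would call this a digit
    target = 3                # the class we want the net to report

    for _ in range(500):
        p = softmax(W @ x)
        grad = W[target] - p @ W      # d log p[target] / dx for a linear softmax
        x = np.clip(x + 0.5 * grad, 0.0, 1.0)

    print(f"net's confidence that the noise is a '{target}': {softmax(W @ x)[target]:.3f}")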

It seems this is an adversarial attack where you need access to the guts of the net (its weights and layers). I'd compare it to taking a hashing algorithm and brute-forcing the input until you find a collision with a target: nearly impossible in real-life situations.
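To make that contrast concrete: unlike a brute-force preimage search, having the weights gives you a gradient that says exactly which way to nudge each pixel. A toy sketch, again with a hypothetical random linear model standing in for the net:

    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.normal(size=(10, 28 * 28))   # hypothetical white-box model weights

    def predict(x):
        return int(np.argmax(W @ x))

    x = rng.random(28 * 28)
    label = predict(x)
    runner_up = int(np.argsort(W @ x)[-2])

    # For a linear model, the gradient of the margin
    # (logit[label] - logit[runner_up]) is just W[label] - W[runner_up];
    # one small step against its sign is enough to flip the prediction.
    x_adv = np.clip(x - 0.1 * np.sign(W[label] - W[runner_up]), 0.0, 1.0)

    print("prediction before:", label, "after:", predict(x_adv))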

You may be able to sign a check using a scribble that the cashier cannot recognize as a digit but that the machine will. There's not much practical gain from an attack like that.


If I'm understanding you correctly, you want to train a second system on the first system's authorized user, then use the resulting 'key' to open the first system.

As someone who actively researches biometrics, I can't say that this is a good method, for a few reasons:

1) Systems often train templates that look very different from the original input, especially if more than one image is involved in training. These templates aren't necessarily going to be recognizable to the first system (even if they can be represented as a 2D image); see the sketch after this list.

2) Many enterprise systems (such as those from Honeywell or whoever) include liveness tests and anti-spoofing measures. Though anecdotally these are not very good, they do check basics such as whether the pupil expands and contracts in response to a burst of light.

3) Most biometric systems that gate access to some place (verification) usually include a third party monitoring said access.
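To illustrate point 1: a rough sketch of what an enrolled template can be, with a hypothetical embed feature extractor (here just a random projection) standing in for a real face pipeline. The stored template is a statistic over embeddings, not any single picture:

    import numpy as np

    rng = np.random.default_rng(0)
    P = rng.normal(size=(128, 64 * 64))   # stand-in for a learned feature extractor

    def embed(img):
        v = P @ img.ravel()
        return v / np.linalg.norm(v)      # unit-length embedding

    # Enroll from five shots; the template averages their embeddings,
    # so it is not any one of the photos.
    enrollment = [rng.random((64, 64)) for _ in range(5)]
    template = np.mean([embed(im) for im in enrollment], axis=0)

    # Verification compares a probe's embedding to the template, not raw
    # pixels, so the template need never look like a recognizable face.
    similarity = float(embed(enrollment[0]) @ template)
    print(f"similarity of one enrollment shot to the template: {similarity:.3f}")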

Suppose you were to do this to, say, someone's home. Depending on the system, you might gain access with a high-definition photo, as many consumer systems are tuned to a higher false accept rate (FAR) to prevent user aggravation. However, if it were set to be very strict (giving a larger false reject rate, FRR), then the best approach would probably be to attack the sensor directly. That is, the system often doesn't care about the surroundings; it's trained for one task (open if it sees the authorized user). The sketch below illustrates the FAR/FRR trade-off.
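Here's a toy illustration of that trade-off; the match-score distributions are synthetic Gaussians, not measurements from any real system:

    import numpy as np

    rng = np.random.default_rng(0)
    genuine = rng.normal(0.8, 0.1, 10_000)    # scores for the authorized user
    impostor = rng.normal(0.4, 0.1, 10_000)   # scores for everyone else

    for threshold in (0.5, 0.6, 0.7):
        far = np.mean(impostor >= threshold)  # impostors wrongly accepted
        frr = np.mean(genuine < threshold)    # owner wrongly rejected
        print(f"threshold {threshold:.1f}: FAR={far:.3f}  FRR={frr:.3f}")

A loose threshold keeps the owner happy (low FRR) but is what makes the high-definition-photo attack plausible (high FAR); a strict one does the reverse.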


Thanks, all, for the answers. I find neural networks and AI such an interesting topic; I wish I had more time to go beyond science journalism and really learn about it...
