There exist algorithms for estimating emotion from facial video. The accuracy is kind of hit or miss, but it's enough for "he's experiencing anxiety during the security search, he must be a terrorist."
One huge issue is that a scared terrorist looks the same as a scared person singled out for screening, and there are many times more of the latter. This is a specific issue Ekman brings up in his work on using FACS for lie detection: FACS is much better at finding out how someone is feeling than at inferring what they're thinking. But you need to know intentions to figure out if someone is a terrorist.
The same tech that turns your face into an alien, complete with facial expressions, can turn your face into a searchable surveillance data point ("Show me all the people who looked unhappy or worried when looking at this propaganda poster").
Still, honestly, relative OSINT laypersons on various social media seem to have little problem identifying terrorists without facial recognition. The fact that the terrorists publicised their own multi-angle, high-resolution video surveillance in real time seems to help.
I’m very skeptical about the veracity and accuracy of facial recognition software to detect emotions.
I went to an affective computing conference in 2019 and was underwhelmed; models couldn't distinguish looking upwards (and raising your eyebrows) from exhibiting surprise. Emotions are complex and personal, and the phrase from the article "facial expression software — which nearly eliminates possible bias" seems absolutely ludicrous to me.
I take results like this with an absolutely massive grain of salt, and don’t expect them to be reproducible.
The danger is that they’re “catchy”, clickbait-y results, that are popular because people like to hypothesize about underlying psychological reasons why those bronze medalists might be happier. But let’s examine the core claim first, and not take facial recognition software as a ground truth for emotional state.
Facial expression recognition. I was talking with one big data analysis vendor who claimed they installed expression recognition software outside toilet stalls in the Dubai airport.
If you exited the khazi with a disgusted expression, the system would recognize it and dispatch a cleanup crew.
They also claimed they could detect if students were bored in class and feed this info back to the teacher. Up next: thoughtcrime.
It reminds me of that Israeli company (Faception) which claims it can identify from your face whether you are a terrorist.
It was shown to be bullshit: they had basically built a smile detector (because of course, in the training set, photos of criminals in prison weren't exactly of people happy to be in prison).
(Also, how ironic for an Israeli company to build such tools; they are literally building the equivalent of the Nazis' nose measurements.)
Then you're dealing with the Heisenberg uncertainty principle of sentiment analysis in face-scanning: a person may have been pretty happy, until they noticed you face-scanning the people ahead of them in line.
I wonder how much data about a person you can squeeze out of a high-definition video, analyzing not only the face but grimaces, tone of voice, rhythm of speech, etc.
That should give you a lot more hints than a still image.
The worst possible uses: filtering out undesirable people applying for jobs or for college, or firing people suspected of belonging to a different political tribe.
Not quite. I did some UI work for a phone company, and their in-store cameras were able to detect a client's mood within a second. They even demo'd it and showed the mood of people walking in. Right now it's only capable of "happy", "neutral", and "not-happy", as well as guessing man/woman. I'm pretty sure the software driving those cameras will only get better. Combine that with facial recognition and it gets creepy.
First, you need reliable training data to begin with. This can't be supplied by third-party labeling of faces, so it would need to be done with a dataset of self-reported emotions, but self-reported emotions also have tons of pitfalls well documented in the literature.
But second, the correlation of individual facial muscle contractions with emotions has been extensively studied, and it's far noisier and more inconsistent than many people assume, if not completely devoid of signal. In academic terms, there's no such thing as a reliable emotional "signature" to be gleaned from facial muscle activation.
So the point is, it appears that the raw data simply isn't there for the AI to detect patterns that humans can't. Detecting emotions requires far more data points outside of facial muscle activation -- such as the ones I listed.
> This can't be supplied by third-party labeling of faces
Yes it can. You get enough labelling and it overcomes the unreliability of detection by any single human.
Get 70 people to label the same face. A random distribution over the 7 Ekman emotions[1] will give 10 each, and any non-random variation from that is a signal. Do that over enough faces and you'll get something to train on.
(Also, no reason why it needs to be face pictures. It could be 2 second video snippets for example).
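The aggregation idea above can be sketched with a chi-square goodness-of-fit test (my choice of statistic, not something the comment specifies): if 70 annotators each pick one of Ekman's 7 basic emotions, a uniform split of ~10 per category is the "no signal" baseline, and a large deviation from uniform marks a face worth keeping as training data. The category names and threshold below are illustrative assumptions.

```python
# Hypothetical sketch of the crowd-labeling idea: 70 annotators each
# assign one of Ekman's 7 basic emotions to the same face. Under pure
# noise we expect ~10 labels per category; a chi-square goodness-of-fit
# statistic measures how far the observed counts deviate from uniform.
from collections import Counter

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "contempt"]

def label_signal(labels, critical=12.592):
    """Return (chi2, has_signal) for a list of per-annotator labels.

    12.592 is the chi-square critical value for df=6 at p=0.05,
    so has_signal means the distribution is unlikely to be random.
    """
    counts = Counter(labels)
    expected = len(labels) / len(EMOTIONS)  # 10 for 70 annotators
    chi2 = sum((counts.get(e, 0) - expected) ** 2 / expected
               for e in EMOTIONS)
    return chi2, chi2 > critical

# 50 of 70 annotators agree on "happiness": a clear non-random signal.
labels = ["happiness"] * 50 + ["surprise"] * 10 + ["fear"] * 10
chi2, signal = label_signal(labels)  # chi2 = 200.0, signal = True

# A perfectly even 10/10/.../10 split gives chi2 = 0: no signal.
_, no_signal = label_signal(EMOTIONS * 10)  # no_signal = False
```

Faces that pass the threshold get the majority label; the rest are discarded as ambiguous, which is how many labelers can compensate for any single unreliable one.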
If we are talking about professional actors trying to trick the tracker, then yes, it should be pretty hard to design software to overcome that. But most people aren't that good, and although they can mislead their friends or colleagues, they still leave clues that give away a faked emotion. If you are interested, Paul Ekman has quite a lot of literature on the topic, e.g. see [1].
Indefinite archival of your face every time you cross? It's ripe for abuse: data mining to claim (via AI, expert witness, whatever) that at the exact time of crossing, your face exhibited behavior consistent with criminals, such as nervousness/anxiety. A border patrol officer can't provide an exact recollection of your face from 10 years ago, so without a camera, if you look nervous at the border this one time, it's difficult to look backwards and retroactively determine that you looked "nervous" or intoxicated the other times too.
A photograph of your face at the pharmacy or post office taken for your passport is pretty much useless as that sort of evidence as it doesn't show your demeanor at the border.
Every time you use your phone your camera is on your face. Facial microexpressions could be automatically detected and analyzed based on the content the user was viewing at the time. This information could provide a detailed emotional picture of the individual as it relates to different stimuli.