Adversarial design printed on a shirt to fool object recognition algorithms (www.vice.com)
133 points by ciccionamente on 2019-11-08 | 76 comments




It only works until those pictures are used to counter-train the AI, right? So is this the high-tech arms race of the future?
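
The counter-training step is usually called adversarial training: generate attacks against the current weights and fold them back into the training batch. A minimal PyTorch sketch, where model, opt, x, and y are assumed to be a classifier, its optimizer, and a labeled batch:

    import torch.nn.functional as F

    def train_step(model, opt, x, y, eps=0.03):
        # Craft adversarial inputs against the current weights
        # (FGSM-style: step along the sign of the loss gradient).
        x_adv = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
        # Train on clean and adversarial examples together, so this
        # particular attack stops working on the updated model.
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        opt.step()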


Lovely. A bit of social-proof hacking could go a long way toward making this kind of adversarial design more common on the streets: hire some actors to go around the city with CV-defeating makeup on, or these T-shirts, or these garments: https://www.vice.com/en_ca/article/qvgpvv/adversarial-fashio... (though I wonder if those designs might be shut down by copyrights on license plate designs?)

(As an aside I got a kick out of reading "some kind of hypebeast Supreme x MIT collab")


Don't forget to have them carry around 3D printed full auto assault turtles.

I'm sure if you avoided trademarked symbols or phrases, using just letters in a rectangle, it wouldn't be a problem. Time to create a few bumper stickers to fool ALPRs.

> shut down by copyrights on license plate designs

AFAIK anything made by the federal government is expressly public domain in the US, but I don't know about state governments. Unless the state buys a design from a private company, at which point my law knowledge ends.


Would be nice if the article had bigger images of the shirt!

Straight out of William Gibson's "Zero History"!


In Zero History, the purpose of the shirt was not to fool the algorithm, but to trip a deep "gentlemen's agreement" between intelligence agencies to make invisible anyone bearing a certain pattern in order to protect the intelligence apparatus.

It will always be difficult to sustainably defeat recognition algorithms and I expect this to be an arms race along the same lines as other counter-surveillance techniques.

Gibson's suggestion that deeply coded and secret exceptions to mass surveillance might be used to protect state actors seems to me a plausible and concerning aspect of these developments.


Yup. And wasn't there a similar thing in Neuromancer, with a gang whose members wore that kind of urban camouflage?

I have always wondered what the minimum amount of makeup needed to be "invisible" to facial recognition would be. In some cyberpunk future I could see people breaking up their features with thin black lines or something to fool cameras.

The cyberpunk future is already here, it's just not evenly distributed

https://www.nytimes.com/2019/07/26/technology/hong-kong-prot...


What you're looking for is CV dazzle or dazzle makeup. Check out https://en.wikipedia.org/wiki/Computer_vision_dazzle and some more practical (?) information at https://cvdazzle.com/

Wow, these are great. Looks like it could be straight off a runway. Very haute.

Do these still work? I expect designs like this to age quickly. Try reverse-image-searching them :)

Interesting: all the authors have Chinese names. I wonder if any of them have relatives in the Xinjiang Uyghur region.

Reminds me of the "hack" that was done to the Samaritan system in the excellent TV series "Person of Interest." Granted, you have to suspend disbelief on many points of AI to enjoy that show, but I never understood why they couldn't work around the bug that was placed in the system that prevented the identification of seven people. In the examples cited, like tricking the AI into thinking that a turtle was a gun, there's an easy fix once the misclassification is noticed. I suspect the "t-shirt of invisibility" will similarly be accounted for in the system, and that people seen wearing it will be targeted for MORE scrutiny, as it could be presumed they are trying to hide in plain sight and might have a nefarious reason for it.

> they couldn't work around the bug that was placed in the system that prevented the identification of seven people

The explanation given was that one server per person would invalidate some portion of the overall profile so the identity would be misclassified (for all main characters)


Where can I buy this merch?


FYI you can buy your own shirt here: https://teespring.com/stores/original-ai-invisible

Colour me skeptical. There are multiple ways to capture features, and the shirt may fool one set of algorithms, but I highly doubt it'll fool them all.

Like any good security protocol, this wouldn't be the only line of defense. A combination of adversarial clothing, makeup, hair style, and accessories would be used, and constantly evolved, making countermeasures harder. Security is always reactive; you can't defend against an attack you've never seen before.

> Security is always reactive; you can't defend against an attack you've never seen before

Yes you can; that's part of the appeal of applying machine learning to security. ML-based systems don't rely on things like signatures or existing heuristics to identify things as malicious.


Machine learning does rely on heuristics, it just builds the heuristics on its own. If it runs into an attack that doesn't use any of the attack vectors it's learned to guard against, it will fail.

Think of it like your body. It learns to identify viruses. Does that mean you're immune from novel viruses or new strains of the flu?


I think it was implied that I meant heuristics that humans have added themselves. The point of it all is to allow models to make generalizations about things they haven't seen before. This can be done with a combination of supervised and unsupervised techniques.

> heuristics that humans have added themselves

I don't think this is a meaningful distinction. Who cares whether the new heuristic is being added by a machine or a human?

You still need to keep feeding the neural network data to learn from, and it will still choke when it sees novel data that doesn't align with the heuristics it developed.

That's the entire reason adversarial AI works. The reason the trippy T-shirt makes you invisible to some current AI systems is that it exploits the heuristics they've built, using inputs these systems are unfamiliar with and haven't learned to process yet. If it were possible to build an AI system that could defend against novel attacks, the trippy T-shirt wouldn't be able to fool it.


If you train your security on parallel lines, and I come in with circles, I've just defeated your security. Machine learning only learns how to categorize things into predetermined categories. If I come in with a novel category it's never seen before, the best it can do is guess, and most likely, it will be worse at guessing than random chance.

Except nobody would train just on parallel lines. They use a wide array of different types of data so the model can make generalizations about things it hasn't seen before.

> Machine learning only learns how to categorize things into predetermined categories.

This is just one type of machine learning, called classification; there are others, like regression and clustering, which can be combined to create more robust models. Look at the technology behind Cylance's product, which identifies files as malicious or not pre-execution. They are not just using classification.
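
As a toy illustration of combining techniques (a made-up sketch, not Cylance's actual approach): pair a plain classifier with an outlier detector, so inputs far from anything seen in training get flagged instead of force-classified:

    import numpy as np
    from sklearn.ensemble import IsolationForest
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))          # the training distribution
    y = (X[:, 0] > 0).astype(int)
    clf = LogisticRegression().fit(X, y)
    novelty = IsolationForest(random_state=0).fit(X)

    x_new = np.array([[8.0, 8.0]])         # nothing like the training data
    if novelty.predict(x_new)[0] == -1:    # scikit-learn marks outliers as -1
        print("novel input: flag for review instead of guessing a class")
    else:
        print("class:", clf.predict(x_new)[0])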


This has pretty broad applicability across a wide range of algorithms. The common failure mode, where the machine fails to recognize an otherwise normal real face and body, indicates that the whole face/skeleton relationship has fallen apart. Defeating this is interesting, as we have enough trouble just trying to recognize faces; adding "yes, these are faces, but also not faces" to the problem is probably going to drive some researchers to drink. At the end of the day this is a common flaw in a lot of deep learning systems: they're very brittle.

Exactly, this is mostly a gimmick. It works in specific situations, but isn't robust and won't stand the test of time.

Yeah, it's a static T-shirt, all these systems are just one network update away from a fix. We need dynamic clothing like the SmartShroud from "The Light of Other Days".

Yup, I have the same problem with DPI circumvention software. We know that it fools the more-or-less widely available open and commercial DPI suites. Does it fool systems made specifically for deployment at ISPs, as mandated by the government? Who the hell knows, aside from those working with those systems. The system may not even shut down your requests, but instead mark you as a weirdo to keep an eye on.

I know about adversarial attacks, but are they widely applicable? I would think an attack that works on one algo might not work on another.

So 100 human sized objects get detected by the algo, and then 1, wearing this t-shirt, that fits most of the parameters doesn't. Very, very easy to adjust the algorithm to account for a t-shirt. This is cute, at best.

It's also then super easy to say that the individual wearing the shirt is likely trying to subvert monitoring. In practice this type of thing will probably make you a bigger target for monitoring, along the lines of "what do you have to hide?".

Not that I agree at all with large-scale monitoring or think anyone should prove that they don't have something to hide. Only that it paints the target on your back.


Ya, I immediately thought of the IlIlIl licence plate xkcd. https://xkcd.com/1105/

> Very, very easy to adjust the algorithm to account for a t-shirt.

The operative point here is not 'a shirt', but a visual pattern that tricks deep learning-style classifiers into wildly misidentifying something. There's no 'very easy' way to counteract that other than retraining on a new dataset or switching entirely away from a deep learning system.


Surely all you have to do is train a "trippy shirt" detector, and run it in parallel?

As I understand it, adversarial designs generally work on one specific recognition system. So working around this attack would be very achievable with three or more recognition systems and a consensus check.

This particular paper is based around attacking YOLOv2.
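
A rough sketch of that consensus check over independently trained, pretrained torchvision detectors (the model choices, score threshold, and two-of-three rule here are arbitrary illustrative picks, not anything from the paper):

    import torch
    from torchvision.models import detection

    # Three independently trained detectors with pretrained COCO weights.
    models = [
        detection.fasterrcnn_resnet50_fpn(pretrained=True).eval(),
        detection.retinanet_resnet50_fpn(pretrained=True).eval(),
        detection.ssd300_vgg16(pretrained=True).eval(),
    ]

    def person_detected(img, score_thresh=0.5):
        # img: float tensor, shape (3, H, W), values in [0, 1].
        votes = 0
        with torch.no_grad():
            for m in models:
                out = m([img])[0]  # dict with "boxes", "labels", "scores"
                hit = ((out["labels"] == 1) &           # COCO label 1 == person
                       (out["scores"] > score_thresh)).any()
                votes += int(hit)
        return votes >= 2  # simple two-of-three majority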


I think these types of adversarial attacks are even easier to foil than that because they're specific to one particular set of weights. Even really really small changes in the training data or model could invalidate the attack if I understand correctly.

I know there has been work in generating adversarial images that work against multiple models. That kind of thing is probably only going to get better, to say nothing of particular sets of weights in a single model.
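
Crudely probing transferability looks like this: craft the attack against one model, then check whether an independently trained model is fooled too. A sketch, where source_model, target_model, x, and y are assumed to be two pretrained classifiers and a labeled batch:

    import torch.nn.functional as F

    def fgsm(model, x, y, eps=0.03):
        # One-step gradient-sign attack (Goodfellow et al.) on `model`.
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), y).backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()

    x_adv = fgsm(source_model, x, y)  # crafted against the source only
    print("source fooled:", (source_model(x_adv).argmax(1) != y).float().mean())
    print("target fooled:", (target_model(x_adv).argmax(1) != y).float().mean())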

I'm confused how this helps beyond body recognition. It seems to me that the focus these days is on facial recognition, where you would train your model to look for facial features rather than whatever is on that shirt. Is this supposed to somehow fool that as well, by tricking it with false facial features or something?

I can imagine a scenario where a system doesn't attempt to look at a face before it determines there's a full human in the frame.

Ultimately if a system is designed to only look at faces then this method would likely not be effective.


This t-shirt defeated two CV models, Faster R-CNN and YOLOv2.

We need better deployed testing suites that can test an adversarial model against many popular classifiers, not just two.

Even so, the paper itself shows that their t-shirt doesn't make the wearer undetectable, only partially undetectable. A security system won't ignore you just because it only saw you 10% of the time you were present (unless it's an Uber self-driving car).


In a world of click-bait, "invisible to AI" is the same as "defeated two models" most of the time.

A wearable captcha!

The arXiv paper contains images of the shirts and methodologies:

https://arxiv.org/pdf/1910.11099.pdf


This reminds me of that recent article about how zebra stripes have been shown to reduce bug bites when painted on cows: https://news.ycombinator.com/item?id=21201807. Probably a similar effect on object detection algorithms.

One of the most common uses of this tech in the US is automatic license plate readers.

Without getting into a debate about expectations of privacy on public roads vs. building a perpetual government database that effectively tracks where every car is at all times of day, another application of this tech would be a bumper decal.

I think most reasonable people would agree that obscuring the license plate on a public road is not the solution (well, with the exception of the Florida Man who racked up a $1MM fine when he was finally caught after a year of doing that through toll booths), but a decal like this wouldn't interfere with a human officer's duties.


It's going to be fun to watch recognition-proof clothing turn into a low-key "war" as states demand better recognition systems that can see through this kind of thing and privacy activists keep developing new ways of fooling AI.

The license plate shirts should be made with State Department diplomatic country codes.

You can fool all the AIs some of the time, and some of the AIs all the time, but you cannot fool all the AIs all the time.

Carl Sandburg said that.

Deep dive into the backstory behind the quote:

http://www.taxhelp.com/lincoln.html


I'm only waiting for the first SWATting incident triggered by an algo "recognizing" a turtle of mass destruction.

Great way to get run over by an Uber self-driving car!

You don't even need this shirt for that to happen.

We also tested fooling YOLOv2 using t-shirts, but as mentioned in the paper, we got mixed results. You can fool the object detection only if the t-shirt gets frontal exposure to the camera without any torsion/rotation/bending, which is pretty hard in real life. It would be interesting to see if you can train adversarial examples robust to multiple angles. We decided to put these t-shirts up for sale for fun and to send a message: #donottrack. https://stealth.cool
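
One way people approach robustness to multiple angles is Expectation over Transformation (Athalye et al.): average the attack loss over randomly sampled views of the patch during optimization. A rough sketch, not the paper's exact TPS-based pipeline; paste_patch, person_confidence, background, and detector are hypothetical stand-ins:

    import torch
    import torchvision.transforms.functional as TF

    patch = torch.rand(3, 100, 100, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=0.01)

    for step in range(1000):
        loss = 0.0
        for _ in range(8):  # average the loss over several random views
            angle = float(torch.empty(1).uniform_(-30, 30))
            view = TF.rotate(patch, angle)  # real work adds scale, warps, etc.
            scene = paste_patch(background, view)             # hypothetical helper
            loss = loss + person_confidence(detector, scene)  # hypothetical score
        opt.zero_grad()
        loss.backward()
        opt.step()
        patch.data.clamp_(0, 1)  # keep the patch a valid (printable) image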

The joke about Juggalo facepaint is both true and funny, but I think there is some actual merit to the idea. Camo clothing (and I don't mean the kind you see everyone wearing at rural Walmarts) goes in and out of fashion every couple of years. Military-style jackets, boots, and caps (think of a stereotypical anarchist style) are also perennially in style with certain crowds. I don't think it's too far-fetched to imagine a future where camo facepaint becomes fashionable enough to be widespread; there's also a lot of artistic potential in non-traditional patterns and colors.

I can't really see a way for AI cameras to get around properly applied facepaint, especially varieties that are IR absorbent or reflective. I hold the human brain in very high regard when it comes to pattern/symbol/shape recognition, and if facepainting techniques are good enough to trick human visual processing, they're going to be good enough to fool any existing AI. For an example of what I mean by proper technique, refer to this video: https://youtu.be/YpzUr3twW4Q

The trick is in getting enough people to adopt such a strategy that you can't be identified through simple exclusion. I think the idea of camo/other facepaint isn't so foreign and unappealing as to never come into common fashion.


> I can't really see a way for AI cameras to get around properly applied facepaint,

In video, people move, and 3D information can be recovered unless their faces are painted with something like Black 2.0. And at that point, why not just wear a mask?


Can a person's gait be used as identifying information?

Yes, and the 'rock in the shoe' countermeasure has been trained against as well. Good luck.

> I can't really see a way for AI cameras to get around properly applied facepaint

Make it illegal to use facepaint.


> Make it illegal to use facepaint.

How do you distinguish this from makeup?


You're not thinking totalitarian enough: send cops to intercept unrecognizables and wipe their faces until the face recognition and mandatory ID match.

> I don't think it's too far-fetched to imagine a future where camo facepaint becomes fashionable enough to be widespread

A lot of the masks people in China wear are referred to as privacy masks (though this seems to be more of an auxiliary use -- especially in HK -- where the primary purpose is filtering air). So I'd say there is evidence of such styles already becoming fashionable.


In Japan the stereotypical get-up a bank robber would wear isn't a ski mask but a medical mask with sunglasses and baseball cap.

> No one's going to start carrying cardboard patches around

Uhhhh... why not? You can put them on hats, backpacks, arm patches, or a lot of things. I get that they are suggesting it would be uncomfortable to have a stiff shirt, but there are easy solutions here.

I'm not trying to undermine the research here (because it is good research), but I think the reporting could be a little better.

As for the research, I wish they had compared it to more accurate models; I think this would greatly help a reader understand the limitations of the work. YOLO and Faster R-CNN are great for "real-time" use but don't have the greatest accuracy; they trade accuracy for speed (more accurate models are pretty slow). While I do think YOLO is more similar to what would be used in a real-life setting, it would be great to know how the design works against more accurate models (this wouldn't require significantly more work either, since you're just testing against pretrained models). If the researchers stumble across this comment, I would love to know if you actually did this and what the results were (or if you see this comment and try it against a more accurate model). (I also want to say to the researchers that I like this work and would love to see more.)


That should be easily solvable using 3D convolutions and processing a short clip (~10 frames) instead of a single picture.
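
The 3D-convolution idea just means the kernel spans time as well as space, so a short clip rather than a single frame feeds the detector. A tiny PyTorch sketch of the shape bookkeeping:

    import torch
    import torch.nn as nn

    clip = torch.rand(1, 3, 10, 224, 224)  # (batch, RGB, ~10 frames, H, W)
    layer = nn.Conv3d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
    features = layer(clip)                  # -> (1, 16, 10, 224, 224)
    print(features.shape)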

welp pack it up boys back to LIDAR

Baidu security gave out something similar at DEF CON Beijing. It was pretty cool conceptually, but it really was just a gimmick.

I wonder what would happen if a self-driving car came across something like this. Would it classify the pedestrian as "Nothing" then run them over?
