Beating Google ReCaptcha and the funCaptcha using AWS Rekognition (bitbucket.org)
190 points by nailer | 2020-08-25 11:12:43 | 128 comments




I wonder how good it is at solving the "hard" (with noise added, e.g. https://i.imgur.com/jbf0Xfy.png) captchas. I went through the "past puzzles" zip, and they look easy compared to the "hard" ones you get if you're on Tor (or an otherwise "shady" network) or if the site owner has turned up the difficulty setting[1].

[1] https://stackoverflow.com/questions/23314528/how-to-reduce-r...


> compared to the "hard" ones you get if you're on Tor (or an otherwise "shady" network)[.]

In my (limited) experience they serve you "hard" ones, and then continue serving you "hard" ones until you give up and go do something else.


Interesting: Google technically has the power to ban whoever they want from any website that uses their captcha.

All they have to do is give out impossible captchas.


Can't you just add the same noise to your training set?
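
Presumably, yes. A minimal sketch of that augmentation idea (Python with numpy and Pillow assumed; the file names are hypothetical):

    # Augment clean captcha images with Gaussian noise so the model
    # also sees something like the "hard" variants during training.
    import numpy as np
    from PIL import Image

    def add_noise(path, sigma=25.0):
        # Load as float so added noise can go below 0 / above 255 before clipping.
        img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
        noisy = img + np.random.normal(0.0, sigma, img.shape)
        return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))

    add_noise("clean_captcha.png").save("noisy_captcha.png")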

Step 1: Use Captcha to get free AI training data.

Step 2: Sell improved AI.

Step 3: Add edge cases to the Captcha that your AI can't handle yet. Go to step 1.


Can someone please explain to me why smart people keep saying that we’ll just build software to detect deepfakes?

When AlphaZero plays against itself it just gets better. Once that spread is so narrow, every deviation is within the range of possible error. And it can’t explain WHY a certain move is better or why the picture is a squirrel. And sometimes it can be wrong, but only on hilarious edge cases; for the rest we just “trust it” because we don’t know one way or the other!

Once deepfakes have covered every possible thing that could give away the deepfake, what remains are dubious arguments as to WHY something is or is not a deepfake. It could be wrong or not; we would just “trust it”?

I am saying that if there is a range past the uncanny valley for adversarial AI, then once the generative network is there, it’s game over. They can quickly generate any amount of speech said by anyone, for example, and we will not know whether they said it or not. All audio and video evidence would be inadmissible without watermarks. And then we would have to trust whoever made the watermark.

In short: mutually distrusting Byzantine consensus or signing will be required for any claim.


Except of course if they are also able to use AI to generate hash collisions in every cryptographically secure hash algorithm, but I highly doubt THAT’ll ever happen :-)

Security by obscurity: the companies detecting deepfakes will certainly not publish their methods/networks.

And what about universities?

There are plenty of publicly published papers detailing detection of deep fakes.

How would anyone know whether their methods worked or they were just making results up, then?

We do have methods to detect deepfakes, and they often work very well (at least for any given generator, perhaps not across multiple generators), but you're right: at some point deepfakes will be indistinguishable from real media. At that point I think the problem is largely outside the domain of computer science, and we will need to start redefining what "trust" looks like.

Anyone else find it mildly(/extremely) distressing that a large part of our industry is dedicated to creating technology that provides little-to-no value to society, but creates enormous opportunities for great societal damage?

We then release the tech to the wild without any proposed way to address the problems, instead saying “the problems this causes are outside the domain of CS, so I won’t bother considering them – besides, if I didn’t create these problems someone else would have”



I hope I didn't make it seem like I was pushing the solution onto others when I said that it will move outside the domain of computer science. My research is in the area of detecting deep fakes.

You're definitely right about how little we attend to the consequences of our advancements, especially in the field of AI/ML. This area is fraught with ethical and moral issues that have taken a backseat to the ideal of progress and I think we are making a mistake there.


If there were a B and C that would mitigate the bad things in a more powerful A, then fine.

But otherwise, what if delaying progress just means that in the end a more powerful A makes any B and C irrelevant anyway? Maybe all you’re doing is kicking the can down the road?


Up until the last few decades, photography was an art, and photos were generally manipulated, sometimes heavily, sometimes to remove people, etc. This is not new; human societies have survived without video evidence since forever.

> photography was an art, and photos were generally manipulated, sometimes heavily, sometimes to remove people, etc.

Except this was time-consuming and couldn't be done automatically in an undetectable way.

> human societies have survived without video evidence since forever

Stupid argument: we've also survived war since forever, except now a war between nuclear powers would be devastating in a way war historically wasn't. It will absolutely not be the same as it was before photos and videos.


And we see what happens when we do have video evidence, because everyone has a camera in their pocket. For decades, law enforcement's version of events was taken as gospel. Now we have evidence that they do lie.

Sci-fi author John Varley covered this in one of his novels.

In a society where anything digital can be manipulated, the veracity of a speech or event depends on a network of trust and people who witnessed the event in real life. If someone made a speech in front of hundreds of live, physically present spectators, these spectators can then verify the authenticity of any recording, and as long as there are people in that group that you trust (directly, through your trusted network, or perhaps as journalists of good standing), you can trust that the recorded speech is what was said.

Conversely, a recording of a world leader saying they recommend Coca-Cola to prevent tooth decay wouldn't have any credibility at all if no one credible was actually there to witness it.


So you'd need a bunch of GPT-3+ stuff to corroborate the deepfake, exciting! Where do you determine the "troll-line" in a trust network? You can build it over time I suppose? Though that kind of leads to similar echo-chamber stuff, right?

As an anecdote, I know real humans who do truly believe deepfaked political videos. They were genuinely surprised to know that Pelosi can talk like a normal human.


That’s a great point. Tons of people already don’t CARE about the facts, and consider fact-checkers to be just as biased as the statements they are checking. Just having 30%-90% of the population react to a deepfaked outrageous video will already do huge damage, even if 10% are stickler skeptics.

“A lie can go halfway around the world before the truth has a chance to get its pants on”


> and consider fact-checkers to be just as biased as the statements they are checking

Aren't they? I can come up with examples off the top of my head, let alone with a search. This is my favourite[1]:

> The fact-checker for the New Yorker who mistook a Marine veteran’s tattoo for a Nazi symbol has resigned, saying the “small mistake” has ruined her life.

"mistook" is being kind. Take a look at her Twitter feed[2] and tell me if you think she's less biased than the stories that would've passed across her desk, because her feed is incredibly partisan. Do you think she managed to remain impartial while she was still employed as a fact checker? I wonder if she intended or wanted to be impartial, let alone managed it. I also wonder how that kind of person comes to be part of a team of fact checkers, which brings into question their impartiality. How was she able to make this “small mistake” and no one else on that team caught it? Perhaps they lack editorial oversight like Snopes[3], the Snopes that employs people who are obviously biased:

> Of particular interest, when pressed about claims by the Daily Mail that at least one Snopes employee has actually run for political office and that this presents at the very least the appearance of potential bias in Snopes’ fact checks, David responded “It's pretty much a given that anyone who has ever run for (or held) a political office did so under some form of party affiliation and said something critical about their opponent(s) and/or other politicians at some point. Does that mean anyone who has ever run for office is manifestly unsuited to be associated with a fact-checking endeavor, in any capacity?”

I'd say it was unbelievable but it's really not[4].

> Snopes appears to be actively engaged in an effort to discredit and deplatform us.

Dropping one's scepticism because someone calls themselves an impartial fact checker strikes me as being in the same realm as following the orders of a complete stranger because they have a uniform and claim authority over you or a situation, i.e. unwise.

[1] https://nypost.com/2018/06/26/new-yorker-staffer-resigns-aft...

[2] https://twitter.com/chick_in_kiev

[3] https://www.forbes.com/sites/kalevleetaru/2016/12/22/the-dai...

[4] https://mailchi.mp/babylonbee/reparations-for-everyone-83635


This doesn't work that well in practice:

- Humans can lie

- Human memory is too fallible to serve as this verification, particularly for every detail

- Small details with big impact can be changed in a verified speech without some people noticing

Of course if someone reliable records the actual speech, any copy can be checked against that. I'm not sure why human memory would be considered superior, as humans have poor memories and can lie.


How can you tell a true copy from a false copy? What makes one recording more true than the other, if you can only compare it against a reality which no longer exists (because it's in the past)?

I have a patent for a simple manufacturing tweak to recorder devices sitting on my desk that would do just that.

What happens when there's a global pandemic and everyone is forced to stay home and not attend or witness any of these speeches, but instead has to watch the broadcast over the internet and/or TV?

Heh

Cool idea for a sci-fi story.

Then we just generate the speeches and let public figures do more work dealing with the pandemic, because it doesn't matter at that point.

Ideally, anyway. Realistically, we probably just argue on the internet about which talks were real (in this case, generated by the person pictured in them).


Maybe someone needs to create a "Witness Coin" ledger, where events happened if enough people say they did.

What's needed is more cryptography. Build keys into sensors to sign media and use something like blockchain to broadcast the signature at time of capture / time when the device returns to the internet.

Won't be perfect, but it would be a hurdle. Also, for things like audio and video if you know the location on the planet then there are certain artifacts that are hard to fake with mere AI. The way sound bounces off walls in a room or shared environmental noise that other devices in the area would detect, like a plane flying by.
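
A minimal sketch of the signing half of that idea (Python; the cryptography package's Ed25519 key stands in for a key that would really live in tamper-resistant hardware inside the sensor):

    # A capture device signs a hash of each recording plus a timestamp;
    # the (record, signature) pair is what gets broadcast to a ledger.
    import hashlib, time
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Stand-in for a key fused into the sensor at manufacture.
    device_key = ed25519.Ed25519PrivateKey.generate()

    def sign_capture(media_bytes):
        # Record = hash of the raw capture plus a capture timestamp.
        record = hashlib.sha256(media_bytes).digest() + int(time.time()).to_bytes(8, "big")
        return record, device_key.sign(record)

    record, sig = sign_capture(b"...raw sensor data...")
    device_key.public_key().verify(sig, record)  # raises InvalidSignature if forged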


The problem with that is that the video can only be watched in its true raw form. Any edits will need to be re-signed with "Edited by X". Now X has to be trusted and not compromised (by money, bias, or hacks).

If X is CNN, then half the population will not believe it. If X is Fox, the other half won't. Bottom line, the world will truly be living in virtual alternative universes, and it won't even matter what the truth is as long as everyone in that world believes it.


Watching and verifying are two separate procedures; if the edited piece has a link to the original source, verification can pass through to that.

This is actually a valid use case of blockchain, where each edit is signed by an entity in cascading fashion.
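
A toy sketch of that cascade (plain Python; the editor signatures are omitted for brevity, and all names are illustrative): each edit record embeds the hash of its parent, so a verifier can walk any published version back to the original capture.

    import hashlib, json

    def make_edit_record(parent_hash, new_media, editor):
        rec = {"parent": parent_hash,
               "media_hash": hashlib.sha256(new_media).hexdigest(),
               "editor": editor}
        # A real system would also have the editor sign this record with their key.
        rec["record_hash"] = hashlib.sha256(
            json.dumps(rec, sort_keys=True).encode()).hexdigest()
        return rec

    original = hashlib.sha256(b"raw signed capture").hexdigest()
    edit = make_edit_record(original, b"cropped version", "X")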

> Build keys into sensors to sign media and use something like blockchain to broadcast the signature at time of capture / time when the device returns to the internet.

That literally accomplishes nothing. It would only be a matter of time before someone just removes the light sensor (the CCD, or something) and builds a custom encoder that can write input data as though the device had actually recorded it live.


Yes, like I said, it's not perfect. There will still be deepfakes, but they won't be around every single nook and cranny. And once you ever invalidate a video, all other works by the same key are now suspect.

Imagine two court cases. In one, there is a video signed with a key from a phone that shows no signs of compromise. In the other, there is a video with no signatures at all. Which is more trustworthy? It's a continuum, not a binary.

Edit:

And also, we can scale trust. There should be more webs-of-trust in the world. I should be able to tap my phone against my buddy's and have it create a link between us. We should have the concept of trust online when we need it.


Audio and video evidence isn't admissible because it's audio or video (and may be inadmissible nonetheless). It's admissible because someone testifies under oath that they have personal knowledge of its provenance. The burden is on the party introducing the evidence to show that it's reliable, and the question of whether or not it is indeed reliable is a factual one for a judge or jury to answer. It's not assumed to be "true" or to accurately reflect reality just because it's a purported photo, video, or audio recording.

It will be inadmissible because of the psychological effects. Imagine having someone claim you murdered someone, and you happened to drive by the park where the person was murdered at the same time.

Now someone wants to frame you, perhaps because you are a celebrity or a person with money to extort from. So they claim that they saw you murder this person.

Normally this wouldn't hold any water and no jury would convict you; we are not in the Middle Ages anymore. There would be no evidence of you around the crime scene because, of course, you weren't there.

Now imagine that they are able to produce audio recordings of a conversation you had with the victim, screaming at them and saying "you are gonna pay for this". And then they also happen to have video evidence from a smartphone camera that happened to capture you stabbing the victim.

Now tell me: if the jury sees all this and it then gets dismissed because of "deepfake" claims or whatever, how will that affect the outcome of the trial?

If you are truly innocent, this might still work out alright, unless you are not white. But imagine there is even the SLIGHTEST connection to the real world. It doesn't have to be completely fake. Maybe you had some fight with that victim earlier, or there are witnesses who testify under oath that you had several heated arguments with them, etc.

Deepfakes can change the entire outcome of trials, by biasing the jury and proceedings against you. That's why any audio and video evidence essentially needs to be rejected without a very very thorough forensic analysis of its authenticity. And even then you would still not know for sure.


In your examples, it would most likely be a mistrial scenario: the jury would be dismissed, and the prosecuting team would have to start over from scratch later, assuming they aren't themselves in trouble for introducing faulty evidence.

I’d be really curious to hear about a case where prosecutorial misconduct was punished. Don’t they have something akin to the “qualified immunity” that the police enjoy?

Deepfakes will change the entire outcome of trials the same way photoshop changed the entire outcome of trials before it.

Also you should see the type of video evidence used in court. It’s grainy 5 FPS surveillance footage that doesn’t even show their face. Deepfakes are not necessary if you want to frame someone.


The jury wouldn’t see it if it got dismissed, typically by a Motion in Limine. Opposing counsel also gets to object to evidence before it’s shown to the jury during trial. If the trial judge allowed it and, on appeal, the appellate court decided it shouldn’t have been shown to the jury, there would be a new trial.

I think it’s because getting to the point where your error bound is that small against nature is incredibly difficult. GANs are a breakthrough, but there are still lots of drawbacks and “tells” that allow you to identify a fake distribution. I actually don’t think we’ll get there for a really long time, and for all we know it might actually be impossible to replicate a natural distribution without gathering some untenable amount of data. Even if you gather that amount of data, you often have no idea how correct your model actually is, because you don’t have access to the natural distribution. You have to have a prior, and picking that prior is a research area in and of itself. That’s why people say that: your premise is really hard to reach.

EDIT: just thought of something else: for videos and pictures, dramatically increasing the resolution will make deep fakes harder to generate. It might be interesting to see how much better cameras get, and I’m sure the amount of compute needed to fake a higher-resolution image will be drastically higher. There are a lot of factors at play.


It's just a different world. Somehow we live with text being like that. Maybe we'll even learn to verify signatures from trusted sources one day.

With deepfakes, I agree in general, but it is a bit like computer security. A fake may be indistinguishable from reality to the eye but still carry marks of the network that generated it. Ways to detect it may come out slowly, like computer bugs, with some entities having incentives to research them heavily and not release them.


I wonder how much more difficult it would be to deepfake a lightfield compared to a regular video.

I'm not sure if we are there already. The Biden speech at the convention was very good, but the video was immediately claimed to be suspicious. The image had unusual softness around his mouth compared to the rest of the image. The speech was pre-recorded, so post-processing could be responsible. But deepfakes are also possible at this point. So who knows.

https://www.vice.com/en_us/article/7kzj8y/why-joe-bidens-vir...


There's a sci-fi book where AGI arises from the ever-escalating battle between spam bots and spam detectors. Sometimes it feels like we're not too far away from that :)

I did collect the images from the puzzles; here are a few: https://drive.google.com/open?id=18b0HxyOsLP6AZMpF1-DNITrGvF...

They mention their own captcha system but don't really reference it anywhere, as far as I can tell. I'd guess it'd be based on something like ARC (the Abstraction and Reasoning Corpus) instead of object recognition, which already has decent solving methods if you can get enough data.

couldn't find anything...

Does it beat ReCaptcha v3?

Apparently yes. V3 is just the system V2 uses to set puzzle difficulty. I scored 0.7 for 40 hours in the Vision API videos. https://www.youtube.com/watch?v=7tPcs06fgbg

Go to the channel for the other 3.


Thank you for the link, I couldn't find it!

reCAPTCHA v3 is supposed to be frictionless, so there would be no use for this, as the user does not need to solve anything.

The user solves the puzzle by behaving like a human.

At some point captchas will be easier to solve with software than by people. Hopefully by then we can retire them completely. The onus of proving that a visitor is not a bot should be on the server side, not on the humans.

Google is already trying to retire them. v3 has no challenges; it just generates a score, completely transparently to the end user.

And yet it can still be bypassed by captcha-solving services.

"Transparent" until Google doesn't trust you for some insane reason.

Now I can be denied access because of an opaque score, with no way to get around it.

Yay, progress!


Hah, yeah, their sales pitch here leaves a bit to be desired. Once a user gets a low score, the reCAPTCHA implementer is expected to build a cell-phone-number collection system and run their own trust system based on that...

Yup. Setting up some Tor privacy extensions in Firefox already locks you out of Google login. Yay!

I'd imagine they may already be. I've been having a -lot- of issues with captchas lately. It's not my eyes getting worse; they seem to be getting both more esoteric and the letters more crammed together. Is that cl, cI, ol, oI, or d? It often takes me multiple attempts.

> The onus for proving that a visitor is not a bot should be on the server side,

Unfortunately that's exactly what is happening with ReCaptcha checking for Google cookies.


I'm not sure I follow. Cookies are stored in the client.

Yes, but they are only set by Google's server if the user has some sort of reputation (logged in on a Google account, visited Google before with cookies enabled, etc).

There are already extensions for that.

The server should never trust client input.

How would it make that determination?


That's not my problem. As a user I should not be inconvenienced for a problem caused by someone else. The offending party should pay the price.

That's not the server's problem. As the server, they should not be inconvenienced for a problem caused by someone else. The users (which includes the bots) should pay the price.

No, the users do not include the bots. You have this backwards.

Bots are most certainly users. How is the server going to tell the difference?

https://continuations.com/post/180985743645/world-after-capi...


Do you want my business? I have gone elsewhere with my money instead of solving for crosswalks before.

And that works very well for every website you pay to use.

What percentage of websites that you use is that, exactly?


I booked a hotel room elsewhere just this weekend.

You know? That's fair. I hadn't considered the kickback model that booking sites seem to use.

That's fair. I think your solution would work for those.


Finally, someone willing to advocate for the voice of the server rather than the obstinate user!

Woah, I've never seen something like this done with Bash and command-line utilities. Thanks for sharing; it made me find xdotool, which is exactly what I needed to replace SikuliX. I can't comment on the project since I didn't test it, but wow, this example alone https://bitbucket.org/Pirates-of-Silicon-Hills/voightkampff/... made me wonder why OP went with Bash instead of a scripting language (Python?). Don't get me wrong, the Bash level is strong with this one :)
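
For anyone else who hadn't met xdotool: it synthesizes X11 input events. A tiny illustrative wrapper (Python here for readability, though the repo drives it straight from Bash; the coordinates are made up):

    # xdotool synthesizes X11 input: move the pointer, then press a button.
    import subprocess

    def click(x, y, button=1):
        subprocess.run(["xdotool", "mousemove", str(x), str(y)], check=True)
        subprocess.run(["xdotool", "click", str(button)], check=True)

    click(640, 480)  # left-click at (640, 480)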

Sikuli has the benefit of being fairly cross-platform, so scripts written for X will also work in Windows and in theory other more esoteric window manager environments.

It also has the ability to identify visual cues, which this doesn't seem to do.


Sikuli is not fast enough. I have known about it since it was developed at MIT, because I was there in 2010.

It was my first time working with Bash. My focus is algorithms and I use C++.

(As a deep learning researcher) I just wanted to let you know that PhDs in AI are done in Python, and that all AI papers are in Python.

The tool is not important

Learning Python well would be helpful for you.

While neat, it's a boon to spammers and other nefarious actors, who can use these techniques to further get around captchas... concerning, to say the least.

If we want to defeat time-wasting and privacy-invading captchas, and the murky ethics aren't a concern, then we should go to every e-commerce site that employs them, with full crap-blocking privacy mode on, load up on products/services, and then abandon our carts at the captcha.

Maybe we can have botnet operators take this idea and run with it, despite the murky ethics.


I am perfectly fine with rendering CAPTCHAs absolutely useless for bot prevention if it expedites websites finally getting rid of the absolute pox that is reCAPTCHA.

Yeah. Text captchas/reCAPTCHA can be solved by spammers. The more advanced the captcha, the more CPU time it burns, but it still does little to stop them. It just makes it super inconvenient for me to solve. Sometimes they aren't even human-readable.

In the worst case, there are people in poor countries solving those captchas for relatively little money. Probably not worth it for everyone, but when operating a botnet at a large scale, that's probably a pretty attractive option as well.

> we should go to every e-commerce site that employs them, with full crap-blocking privacy mode on, load up on products/services, and then abandon our carts at the captcha.

I've been doing that for years [well, not the shopping trolley bit] on websites which throw fullscreen modal announcements in my face. The second it happens, I click off that website. I dream that the webmasters in question will have some advanced stats setup that tells them that, as soon as they digitally shat in my face, I abandoned their site. But unfortunately I know it's an empty gesture.

https://stiobhart.net/2015-05-04-overlays-new-popups/


> abandon our carts at the captcha.

I thought the credit card number was the answer to the captcha on an e-commerce site.


On some yes, but many offer other payment methods.

Are we saying, with this, that Google's image recognition is inferior to Amazon's?

Interesting. Two things I noticed:

1) The IP addresses he uses probably have a really good reputation, because even with rather bad image-recognition ability it still lets him in.

2) Surprisingly enough, Google's captcha apparently doesn't consider mouse movements at all: the bot selects images rapidly, one immediately after another, and always clicks in the same place on the image.


If I were Google, I would let it pass (at least where it's not in my own services, not in crucial places), and use that obviously simple heuristic as a machine-learning label, so that the distinction between human and robot can be learned from the many other available variables.

Well, I wouldn't really do that if I were Google. But I think it could be happening.


Google doesn't pass or fail, it gives a score and leaves it up to the site operator.

s/let it pass/give lower score/

Although I admit my idea looks less probable this way. It's still less ridiculous to me than not using keyboard or mouse data.

But on the other hand they may be afraid of people screaming "Google is sending your keystrokes to their servers" etc.


They only did that so they could shift the blame/responsibility of making the classification to website operators instead of themselves.

It's a library for web developers... Is it so terrible to return a float instead of a bool?

Yes, because it's going to return a small float for anything that Google thinks is "suspicious" (including using Safari or Firefox) and websites will block it as if Google returned "no", except Google is absolved of the criticism that they return bad results for people who value their privacy ("if you don't like it, you should set your confidence lower…")

You’re dealing with two different things.

reCAPTCHA v2 and the badly named Invisible reCAPTCHA¹ assess the user and either let them in, or gate them on puzzle solving. So they do pass or fail you, where failing means being trapped in purgatory indefinitely (though some hold it’s actually hell rather than purgatory).

reCAPTCHA v3 never presents a CAPTCHA for you to solve, but decides a score (in practice, I’ve only seen 0.1, 0.3, 0.7 and 0.9) where higher means it’s feeling more friendly towards you, and it’s up to the site operator to decide what to do with it. (You should provide alternative means, e.g. if you’re doing fraud prevention for signup, fall back to SMS verification. Too many sites provide no recourse for a low score, which is illegal to do in various countries on accessibility grounds. With reCAPTCHA v2 and Invisible reCAPTCHA the site owners can at least blame Google, not that that gets them off the hook.)

Given the use of Rekognition here, I presume this project is breaking reCAPTCHA v2 and possibly Invisible reCAPTCHA, and not reCAPTCHA v3.

———

¹ I call Invisible reCAPTCHA badly named because it’s only invisible on the happy path—all it’s doing is hiding the “I’m not a robot” widget, effectively having the code “click” it when submitting. And given that reCAPTCHA v3 is then invisible, never showing anything to the user… yeah, it’s confusing. Arguably reCAPTCHA v3 isn’t even a CAPTCHA any more either, so… yeah, the names are all a bit of a mess.
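
For concreteness, a minimal server-side sketch of the v3 flow described above, using the documented siteverify endpoint (the requests package, the 0.5 threshold, and the SMS fallback are the site operator's choices, not anything mandated by Google or done by this project):

    import requests

    SITEVERIFY = "https://www.google.com/recaptcha/api/siteverify"

    def assess(token, secret):
        # Returns the v3 score, or 0.0 if token verification failed outright.
        resp = requests.post(SITEVERIFY,
                             data={"secret": secret, "response": token}).json()
        return resp.get("score", 0.0) if resp.get("success") else 0.0

    def handle(token, secret):
        if assess(token, secret) >= 0.5:  # documentation's suggested default threshold
            return "allow"
        return "challenge"  # e.g. fall back to SMS verification, never a dead end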


Your comment is reasonable; using Rekognition for reCAPTCHA v3 wouldn't seem to make sense.

But the linked source code is pretty clearly interpreting scores and referencing reCAPTCHA v3, so I'm not sure what images they're running Rekognition on (unless there's a bunch of stuff going on in this repo and this file is unrelated to the title).

https://bitbucket.org/Pirates-of-Silicon-Hills/voightkampff/...


It seems to be using the reCAPTCHA v3 score to guess how reCAPTCHA v2 will behave, and therefore influence its behaviour so it doesn’t dig itself into a ditch. (c.f. main.sh lines 755–758, notabot.sh lines 47–57.)

They suggest a default 0.5 threshold in their documentation. If you're testing "passing" their captcha, it'd be pretty reasonable to consider anything below that a fail.

> surprisingly enough google's captcha apparently doesn't consider mouse movements at all - the bot selects images rapidly one immediately after another and always clicks in the same place of image

Yep, it's Google-cookie based, not based on behavior on the page. You can successfully complete the captcha using only the tab and enter keys if you have enough Google cookies.


Isn't it ultimately just a series of API calls to solve the captchas? What's to stop someone from hooking the JavaScript, solving the images, and then raising the right event?

I would assume Google would send all mouse events to the server, compare them with the mouse events of all other users who solve reCAPTCHAs, decide on a "humanness rating" of the data internally, and use that alongside all other parameters to decide the probability that the user is or is not a bot. You can't fake that: you can send fake events, but you can't trick Google for long; sooner or later it would figure out a pattern and mark that pattern as a bot. Well, that's if it tracked mouse movements, which it apparently does not. But it could start at any time.
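
Purely as an illustration of the kind of signal such a rating could key on (a toy, not anything Google is known to run), even two simple features separate a naive bot's trace from a human's:

    # Toy features over a mouse trace [(x, y, t_ms), ...] from mousemove events:
    # humans show irregular timing and curved paths; naive bots tend not to.
    import math, statistics

    def humanness_features(trace):
        dts = [b[2] - a[2] for a, b in zip(trace, trace[1:])]
        path = sum(math.dist(a[:2], b[:2]) for a, b in zip(trace, trace[1:]))
        direct = math.dist(trace[0][:2], trace[-1][:2])
        return {
            "timing_jitter": statistics.pstdev(dts),         # ~0 for a fixed-rate bot
            "path_ratio": path / direct if direct else 1.0,  # ~1.0 for straight lines
        }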

Probably Google is already working on it.

But what if the device is a mobile or tablet (or a touch-screen laptop), where captchas are solved without mouse movement? I guess that could be the reason Google has not implemented it yet.


Possibly. They could also track device information like motion sensors, which are available without a prompt by default on Chrome, and use the motion information from the device to decide on the randomness of the movement. In the worst case, they can just show you 5 walls of captchas and pass you.

Did you beat reCAPTCHA v3 also?

See the Vision API videos. Scored over 0.7 for 40 hours.

What we need is an unbeatable captcha.

Pay a microtransaction to perform an action. In the end it's more about cost.

I remember a Yahoo hackday from around 2009 where one of the teams created a captcha that required you to match photos to tags. Another team made a program that could look at a photo and assign tags. Mutually assured destruction.

Lo and behold the wonder that is adversarial AI: 11 years later we can automate this. My guess is that Google is quite happy about open-source reCAPTCHA solvers. Delicious fodder for improving the adversary.

They were both automated 11 years ago using Flickr data.

Media Inquiries: pirates.of.silicon.hills@gmail.com

Is there really no way to passively collect enough data from the user agent to determine the likelihood that an actual human was involved?

I suppose there are a bunch of privacy and corner cases that would mess that up (for instance, composing a reply in a text editor and pasting it would be indistinguishable from a bot).

Has anyone tried? Are there write-ups I could read about?


What is the Project Touch-Captcha mentioned in the readme?

I will announce it in a couple of days.

Does it work with Invisible Captcha? I do not think so.
