My guess is that it's less that there are now better ways to do it, and more that bots are increasingly sophisticated and the value of this signal has decreased over time, while the business costs (explaining to customers why this is not a GDPR violation) have increased.
I do think there is an element of "botting" as you would put it.
There's a lot more going on behind the scenes than most people realise. A lot of data sharing takes place between businesses, and there is resistance from different entities to admit this, but GDPR is slowly prising open those dark pools of data.
It's gotten a lot more challenging than it used to be.
A lot of small things... but basically if you load from an actual browser (headless) and cycle IPs, it's pretty hard for a site to pinpoint you as a bot vs a user.
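To make that concrete, here's a minimal sketch of the "headless browser plus rotating IPs" setup, assuming Playwright and a made-up pool of proxy endpoints (not any particular service or anyone's actual setup):

    # Minimal sketch: fetch a page through a real (headless) browser engine,
    # rotating through a hypothetical pool of proxy endpoints per request.
    import random
    from playwright.sync_api import sync_playwright

    PROXIES = [  # hypothetical proxy endpoints; substitute your own pool
        "http://proxy-a.example:8080",
        "http://proxy-b.example:8080",
    ]

    def fetch(url: str) -> str:
        proxy = random.choice(PROXIES)
        with sync_playwright() as p:
            # A real browser engine runs JS and sends ordinary headers,
            # unlike a bare HTTP client, so it blends in with user traffic.
            browser = p.chromium.launch(headless=True, proxy={"server": proxy})
            page = browser.new_page()
            page.goto(url, wait_until="networkidle")
            html = page.content()
            browser.close()
        return html

    if __name__ == "__main__":
        print(len(fetch("https://example.com")))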
It often costs far more than money. In this case bots were gathering data to better target future attacks. In the common spam case it costs real users' attention.
Sadly most bots today are solutions looking for a problem.
Our customers (thankfully) are typically looking for ways to engage with their existing customer base better, and/or deliver their existing commercial services in a better way.
A lot of the time the purpose is rate limiting rather than disallowing bot access outright. Telling the two apart rests on the premise that humans are a lot slower than bots.
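A rough sketch of what that looks like in practice, assuming a hypothetical per-client token bucket (the rate and burst numbers are made up for illustration):

    # Rough token-bucket sketch: humans rarely sustain more than a few
    # requests per second, so a small per-client budget mostly throttles bots.
    import time
    from collections import defaultdict

    RATE = 2.0    # tokens refilled per second (hypothetical limit)
    BURST = 10.0  # maximum bucket size

    _buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

    def allow(client_id: str) -> bool:
        b = _buckets[client_id]
        now = time.monotonic()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
        b["last"] = now
        if b["tokens"] >= 1.0:
            b["tokens"] -= 1.0
            return True
        return False  # caller would respond with 429 Too Many Requests

Anything draining the bucket faster than it refills (i.e. anything much faster than a human) starts getting 429s, while ordinary browsing is untouched.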
No, they haven’t been trying to solve the problem. They want the bots on there for financial and political reasons. It is easy to solve the problem - require identification to use the site. I mentioned the advantages of social media doing this about 10 years ago and was downvoted on this very website because of “privacy and anonymity on the internet is great!”.
Stupid question: why do companies care so much about bots to the point of degrading the customer experience significantly? I can understand for things like public forums. But like why would an ecommerce website ever put a captcha between you and your order (or a news website)?
One solution to this could simply be improving our ability to handle traffic. Bots aren't doing anything bad (under normal conditions); they just represent a portion of your view traffic that you don't inherently want. An aggressive user, of sorts.
With that said, we have more cycles these days, better web servers, and inherently more traffic capacity than we did 10 years ago.
I have a feeling bot traffic will grow faster than our traffic-handling tech, but regardless, I think this is just going to be the new standard. Especially since bot traffic can itself represent UX requirements for potential users.
This was inevitably going to get more popular after the new API pricing was announced. Unfortunately this also means they're going to start being more aggressive on bot detection.
I do agree with the overall trend the author is observing, but I guess what I was getting at is that this is sort of an old problem extending to the web.
There's a unique social stigma around "bots" that isn't applied the same way to power users of any other system (understandably so, given some are nefarious). I believe this largely gave rise to AI-powered bots, as there's a demand for bots to behave as human-like as possible to 1) minimize obstructions like 403s and 2) maximize the extraction of information.
Maybe if web servers were broadly designed thinking of bots as power users, the web would bifurcate into an "optimized web" for bots (APIs) and a "traditional web" for humans (HTML). Instead, we're getting this mess of bots and humans all trying to squeeze through the same doorway.
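For illustration only, a tiny sketch of what that split could look like at a single endpoint, assuming Flask and a made-up product record: clients that ask for JSON (typically bots and integrations) get the "optimized web", browsers asking for HTML get the traditional one.

    # Illustrative sketch: one URL, two representations, picked via the
    # Accept header. The product data here is made up.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    PRODUCT = {"id": 42, "name": "Widget", "price_cents": 1999}  # hypothetical

    @app.route("/products/42")
    def product():
        best = request.accept_mimetypes.best_match(
            ["application/json", "text/html"], default="text/html"
        )
        if best == "application/json":
            return jsonify(PRODUCT)  # structured data for machine clients
        return f"<h1>{PRODUCT['name']}</h1><p>${PRODUCT['price_cents'] / 100:.2f}</p>"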
It's also a good way to signal to investors and advertising partners how many real humans actively use their services versus how many bots do.