
You also obviously have to allow the standard CDNs and such, but the result is that 90% of the web then "just works", without most of the tracking or problems.



Yes unless the websites are using a CDN with shared IP addresses. I wonder what % of the web that covers nowadays.

There aren't many sites that aren't, including "otherwise healthy websites" hosted without a CDN.

DNS didn't fail, and there's nothing you can do in HTML/CSS/JS if your CDN fails to serve those things.

If you're not using a CDN, you can just enable it for your own site. They explained in the article why they didn't think enough sites would do this to make it worthwhile.

A CDN isn't necessary for a large website to keep running either.

A data-whoring mega-corp isn't forcing anyone to use a CDN, and a CDN doesn't mandate Javascript (and CSS to punish visitors who block that JS).

Why don't other sites bypass the CDN and go directly?

But... it just allows static content, so it's a good candidate to put behind a CDN :D

There's literally no significant extra effort in downloading the files and serving them with your website. If you fetch them from a CDN at a fixed version, you're not getting updates anyway.
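A minimal sketch of that, using a pinned jQuery as a stand-in dependency (the directory layout and file names are illustrative, not a convention):

```shell
# Vendor a pinned copy of a library alongside your own static files instead
# of loading it from a third-party CDN.
mkdir -p static/vendor

# Download the exact pinned version once and commit it, e.g.:
#   curl -sSLo static/vendor/jquery-3.7.1.min.js \
#     https://code.jquery.com/jquery-3.7.1.min.js

# Then reference the local copy in your pages instead of the CDN URL:
cat > snippet.html <<'EOF'
<script src="/static/vendor/jquery-3.7.1.min.js"></script>
EOF
cat snippet.html
```

Since the version is pinned either way, the only ongoing difference is that your visitors' browsers never contact the CDN.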

CDNs are server side trackers. Even when not serving up browser side tracking code that makes yet more requests (which they often do), they're collecting location and other data based on response times and headers. Google's gv2 beacons, newrelic, cloudflare, etc.

This is why tools like Decentraleyes exist. Unfortunately, tools like that actually introduce additional unique identifiers.

Even with JavaScript disabled, web fonts are tracking you. Even with JavaScript and web fonts disabled, images and CSS are tracking you. Even first-party images and CSS track you, even over VPNs and Tor.

First-class web experiences that can't track you are only possible by routing all traffic through a trusted, privacy-by-design remote pre-caching hash table, such as a trusted IPFS node. Then you also need to transact with the non-web service (e.g. server hosting) only over a routeless protocol.

The latter requires the service to support non-custodial routeless protocols for payment and control. This is something dApps do well.

It sounds complicated, but every Web3.0 (not to be confused with Web3.js) service already supports this by necessity, even in outdated browsers, simply by choosing a trusted node and updating your browser or device's DNS and proxy settings. No additional software required.


> CDNs make users much more trackable across sites.

How does a CDN do this in a way that a "regular" web deployment wouldn't?


You can still use a CDN through your own domain. It wouldn't be perfect, but it would be something.
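One way to sketch that, assuming nginx as the front end (hostnames, paths, and cache settings below are illustrative assumptions, not a drop-in config): reverse-proxy the CDN under your own domain so visitors only ever talk to your origin.

```nginx
# Illustrative sketch: serve CDN-hosted assets from your own domain.
proxy_cache_path /var/cache/nginx/assets levels=1:2 keys_zone=assets:10m
                 max_size=1g inactive=7d;

server {
    listen 443 ssl;
    server_name example.com;

    # /assets/... is fetched from the upstream CDN once, cached locally,
    # and the visitor's browser never contacts the CDN directly.
    location /assets/ {
        proxy_pass https://cdn.example.net/;
        proxy_set_header Host cdn.example.net;
        proxy_cache assets;
        proxy_cache_valid 200 7d;
        # Strip cookies so no per-visitor state reaches the upstream.
        proxy_set_header Cookie "";
    }
}
```

The CDN still sees your server's requests, but it no longer sees individual visitors, which is the "something" being traded for perfection here.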

Yes, it would. Actually, many uses of CDNs are nonsensical. I mean, I literally block those requests and sites just work OK. (For my definition of OK: if I don't see an autoloading, autoplaying video, that's a big, big plus.)

Your box running your web server is far less complicated than using a CDN and worrying about countless additional points of failure. Network problems are only a minor risk.

Not really - if one of the CDNs mentioned develops a security issue, my sites are among the 26.6% that are unaffected.

It's unfeasible, but also completely unnecessary. CDNs can give you some speedup, but they don't magically make a site smaller and less resource-intensive to use.

This is what CDNs are designed for: they allow a few companies to get a monopoly on your browsing data. This is why I prefer uMatrix; it blocks all third-party requests. A lot of stuff breaks, but it breaks because they're tracking you.

The browsers aren't going to be disallowing CDNs any time soon.

So your point is that there's no reason not to serve everything from the same origin; it only requires setting up a full-fledged CDN to do so.

I'm sorry but that's simply not an acceptable constraint.

