X forwarding is totally unusable over crowded, throttled public WiFi, and even over good connections with hardly any latency at all. The web is designed for those environments.
You do realize that whatever latency you see on residential connections is magnified on mobile or satellite connections?
If your RTT is measured in seconds, 10 additional roundtrips can cost an entire minute.
These things become very noticeable very quickly when the system runs in a less-than-perfect environment.
Additionally, even with modern browsers it takes far too long to open a website. Ideally it would be 100% instant, under one or two frames (16 or 33ms). That's not possible, since RTT between users and CDN edges is usually around 18ms, but it should at least stay below the threshold of perceivable delay (100ms).
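For a sense of scale, here's a minimal back-of-the-envelope sketch in Python; the roundtrip counts for DNS, TCP, TLS, and the request are assumptions for a cold connection, not measurements:

    # Back-of-the-envelope: roundtrips on the critical path of a cold page load.
    # Roundtrip counts are assumptions (warm resolver, no proxies), not measurements.

    def cold_ttfb_ms(rtt_ms: float, tls13: bool = True, extra_roundtrips: int = 0) -> float:
        """Estimate time to first byte on a fresh connection, ignoring bandwidth."""
        dns = 1                    # one trip to the recursive resolver
        tcp = 1                    # SYN / SYN-ACK
        tls = 1 if tls13 else 2    # TLS 1.3 vs TLS 1.2 handshake
        http = 1                   # request out, first byte back
        return (dns + tcp + tls + http + extra_roundtrips) * rtt_ms

    for rtt in (18, 50, 200, 650):  # CDN edge, decent home link, far/congested link, GEO satellite
        print(f"RTT {rtt:3d} ms -> cold TTFB ~{cold_ttfb_ms(rtt):5.0f} ms "
              f"(budget: 100 ms perceivable, 16 ms one frame)")

Even at 18ms RTT, four serial roundtrips already eat most of the 100ms budget; anything more and you're over it.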
EDIT:
My best websites hover around 281ms to start of transfer, and about 400ms until the page has finished loading.
That's improvable, but most sites out there take literally half a minute to load.
Now go on a 64kbps connection and try again. The handshake takes seconds, the start of transfer comes almost 30 seconds in, and by the time your website arrives your coffee has gone cold (a few minutes for a Google search).
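Rough numbers for the narrowband case, again as a sketch; the RTT, handshake size, and page size below are assumptions, not measurements:

    # Same idea on a 64 kbps link: serialization delay stacks on top of roundtrips.
    # RTT and payload sizes are assumptions, not measurements.

    def transfer_s(num_bytes: int, kbps: float) -> float:
        """Seconds to push num_bytes through a link of the given speed."""
        return (num_bytes * 8) / (kbps * 1000)

    RTT_S = 0.7               # assumed RTT on a congested narrowband link
    LINK_KBPS = 64

    handshake_bytes = 6_000   # TCP + TLS handshake incl. certificate chain (assumed)
    page_bytes = 1_500_000    # a fairly lean modern page (assumed)

    handshake_s = 4 * RTT_S + transfer_s(handshake_bytes, LINK_KBPS)   # DNS + TCP + 2x TLS
    page_s = handshake_s + transfer_s(page_bytes, LINK_KBPS)

    print(f"handshake alone: ~{handshake_s:.1f} s")
    print(f"page complete:   ~{page_s / 60:.1f} min")

With those assumptions the handshake alone is a few seconds and the full page lands after roughly three minutes, which is exactly the coffee-goes-cold territory described above.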
Years ago, Google was usable on dial-up. Now even the handshake takes as long as an entire search used to take. Notice anything?
Latency is bad, throughput is pretty decent. Unfortunately this is a side effect of its routing system (routing through 3 random nodes around the world).
Latency is a lot lower than people generally think it is. I'm currently on a terrible satellite connection (not the popular one), and the latency to ping 1.1.1.1 averages 16.184ms right now, during awful weather, and never goes above 25ms. Probably the worst-case modern scenario you could imagine latency-wise, and significantly worse than what I had ten years ago. In better weather, it's half that.
If you pull a few tricks, it wouldn't be anywhere near impossible to manage sub-20ms latency on your average connection. Now, over a web browser? I couldn't tell you. I wouldn't bet on it. But definitely possible natively.
Bandwidth and latency aren't the same thing! High-latency networks sometimes don't ever load some websites, even if there is reasonable bandwidth. I remember one time when I was in Greenland having to VNC to a computer in the US to do something on some internal HR website that just wouldn't load with the satellite latency.
Latency is the real killer. I used HughesNet satellite internet while travelling for a while and it has a 650ms round trip. Even sites built by talented and well compensated engineers would sometimes be completely unusable. For example, YoutubeTV used to download the video data in tiny little sub 1MB chunks, each from a seemingly separate CDN endpoint. The result was that each chunk needed a DNS roundtrip and then another one or two for SSL handshaking so it would take at a minimum 1.5 - 2 seconds to even start downloading the data. Needless to say, it was not usable.
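A rough model of what that connection-per-chunk pattern costs at that RTT (the chunk count and roundtrip breakdown are my assumptions, not the service's actual figures):

    # Per-chunk setup cost when every chunk comes from a new endpoint.
    # RTT, chunk count, and roundtrip counts are illustrative assumptions.

    RTT_S = 0.65                                   # geostationary-satellite RTT
    CHUNKS = 20                                    # assumed number of sub-1 MB video chunks

    fresh_setup_s = (1 + 1 + 2) * RTT_S            # DNS + TCP + TLS 1.2 per new endpoint
    per_chunk_fresh_s = fresh_setup_s + RTT_S      # plus the request/response roundtrip
    per_chunk_reused_s = RTT_S                     # same warm connection: request only

    print(f"new connection per chunk: {CHUNKS * per_chunk_fresh_s:5.1f} s of pure waiting")
    print(f"one reused connection:    {CHUNKS * per_chunk_reused_s:5.1f} s of pure waiting")

The payload transfer time is the same in both cases; the difference is entirely setup latency.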
Moral of the story - latency and roundtrips are the devil and some users have it extra bad, but if you make it work for them, it's going to be amazing for everyone else too.
Regardless of where on this globe you put your VPS, someone will be accessing it with 1000 ms latency. It doesn't make sense to optimize the browser's page load down to 10 ms while forgetting that it takes 600 ms to fetch the data from Asia.
You say that until you travel somewhere in Eastern Europe, South America, or Southeast Asia, and realize that 90% of the Western Internet is completely and utterly broken. Everything besides Google, Facebook, etc. (because they've invested in solving the problem) is practically unusable.
The problem isn’t just TTFB or latency, as some people are implying, it’s poor interconnectivity at the transit provider level. Those links are frequently congested and experience fiber cuts, and unless you’re peered in that country your application is going to perform poorly. The companies who understand this have a first mover advantage in some of the fastest growing economies in the world.
This is why I find it dreadful that evangelists here are heavily promoting live$whatever technology where every local state change requires at least one server roundtrip, or claiming "browsers support ESM now, bundling is a thing of the past!" etc. You don't need to be in Antarctica to feel the latencies caused by the waterfall of roundtrips, or by a roundtrip on every click, as long as you're a mere 200ms from the server, or in a heavily congested place.
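A sketch of why the unbundled-ESM waterfall hurts: each level of the import graph is only discovered after the parent module arrives, so graph depth turns into serial roundtrips. The depth and RTT here are assumed values:

    # Unbundled ESM: imports are discovered level by level, so graph depth becomes
    # serial roundtrips. Numbers below are illustrative assumptions.

    RTT_MS = 200
    IMPORT_DEPTH = 6      # assumed depth of the module graph (app -> framework -> utils -> ...)

    unbundled_ms = IMPORT_DEPTH * RTT_MS   # one serial roundtrip per level, ignoring parse time
    bundled_ms = 1 * RTT_MS                # one fetch for the whole bundle

    print(f"unbundled ESM waterfall: ~{unbundled_ms} ms before the deepest module even starts loading")
    print(f"single bundle:           ~{bundled_ms} ms")

Preload hints can flatten some of this, but only if you already know the graph up front, which is most of what a bundler does anyway.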
And with higher latency it’s even worse: from Australia, latency to the server was over 350ms (going the long way round, via the USA, for some reason), and it took 80 seconds to load. With HTTP/2, it could have taken as little as two seconds to load, and I’d expect it to take less than eight seconds.
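As a rough illustration of where numbers like that come from (the resource count and connection limit below are assumptions, not measurements of the actual site):

    # Many small resources at ~350 ms RTT: HTTP/1.1 queueing vs HTTP/2 multiplexing.
    # Resource count and connection limit are assumptions, not site measurements.

    RTT_S = 0.35
    RESOURCES = 1400          # e.g. an image-heavy page (assumed)
    H1_CONNECTIONS = 6        # typical browser per-host limit for HTTP/1.1

    # HTTP/1.1 without pipelining: each connection handles one request per roundtrip,
    # so requests queue up behind each other.
    h1_s = (RESOURCES / H1_CONNECTIONS) * RTT_S

    # HTTP/2: requests are multiplexed on one connection, so the cost is roughly the
    # dependency depth (HTML -> CSS/JS -> images) plus some flow-control roundtrips.
    h2_best_s = 5 * RTT_S
    h2_worst_s = 20 * RTT_S

    print(f"HTTP/1.1 over 6 connections: ~{h1_s:.0f} s")
    print(f"HTTP/2 multiplexed:          ~{h2_best_s:.0f}-{h2_worst_s:.0f} s")

The point isn't the exact figures; it's that request count times RTT dominates once requests can't be multiplexed.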