
HTTP/1.1 pipelining was never really useful: it suffers from head-of-line blocking, and almost no browser ever shipped with it enabled by default.



HTTP 1.1 supports pipelining requests but in reality it's disabled everywhere.

HTTP pipelining is busted for a variety of reasons. Support exists in most browsers but it's disabled by default because it makes things worse, on balance.

Clients stopped using HTTP/1.1 pipelining because it just didn't work well enough.

https://en.wikipedia.org/wiki/HTTP_pipelining#Implementation...


Your browser wouldn't even use pipelining if it were supported. The reason HTTP/2 exists is that pipelining ended up not being a solution.

Yeah, HTTP/1.1 pipelining is useless. But the good news is, HTTP/2 multiplexes requests (and doesn't have pipelining's head-of-line blocking problem at the application layer). And HTTP/2 is supported by all modern browsers.

But anyway… HTTP/2, or 1.1 pipelining for that matter, is usually terminated at the reverse proxy level. So it's really not necessary in a web framework! It just makes these unfair benchmark results possible.


Pipelining support in http 1.1 is basically useless even outside of the compatibility issues.

HTTP/1.1 pipelining is fundamentally flawed and largely unused. There isn't some conspiracy not to use it - many, many projects evaluated it and pretty much all came to that same conclusion.

HTTP/1.1 can do this. It's called pipelining. But browsers refuse to implement it.

Is HTTP/1.1 pipelining basically dead too?

The "new" HTTP is clearly targeted at the "approved browser".

For example, this alleged "head-of-line blocking problem" that HTTP/2 purportedly "solves" was never a problem of HTTP outside of a specific program, the graphical web browser, the type of client that tries to pull resources from different domains for a single website. Not all programs that use HTTP need to do that.

For instance I have been using HTTP/1.1 pipelining outside the browser for fast, reliable information retrieval for close to 20 years. It has always been supported by HTTP servers and it works great with the simple clients I use. I still rely on HTTP/1.1 pipelining today, on a daily basis. Never had a problem.

There are uses for pipelining besides the ones envisioned by "tech" companies, web developers and their advertiser customers.


Leaving aside the technical statements about SPDY, the reality of HTTP pipelining is that no-one uses it. According to Wikipedia, Opera is the only major browser that ships with pipelining enabled. Most intermediaries don't support pipelining either.

Pipelining was a well-intentioned feature which didn't solve the core problem: namely, that a big or slow request can block you from doing anything else for a really long time unless you open another TCP connection.
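That blocking effect is easy to see with a toy model (my own sketch, not taken from any implementation): with pipelining the server must deliver responses in request order, so one slow response at the front delays every response queued behind it, whereas HTTP/2-style multiplexing lets each response go out as soon as it's ready.

```python
def pipelined_delivery(ready_times):
    """When each response is fully delivered, given the time at which the
    server has it ready, under pipelining's in-order delivery rule."""
    delivered, clock = [], 0.0
    for t in ready_times:
        clock = max(clock, t)    # can't go out before it's ready,
        delivered.append(clock)  # nor before every earlier response has gone out
    return delivered

def multiplexed_delivery(ready_times):
    """HTTP/2-style multiplexing: each response goes out as soon as it's ready."""
    return list(ready_times)

ready = [5.0, 0.1, 0.1, 0.1]  # one slow request followed by three fast ones
print(pipelined_delivery(ready))    # → [5.0, 5.0, 5.0, 5.0]
print(multiplexed_delivery(ready))  # → [5.0, 0.1, 0.1, 0.1]
```

The slow request costs the three fast ones nothing under multiplexing, but under pipelining they all pay its full five seconds.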


Others have pointed out why HTTP/2 is still better but if you're curious about HTTP pipelining here are the reasons why Firefox and Chrome disabled it after years of testing:

https://bugzilla.mozilla.org/show_bug.cgi?id=264354

https://www.chromium.org/developers/design-documents/network...


Yes, that. I always set Firefox to attempt pipelining, until they removed it in favor of HTTP/2. It worked well.

> No, browsers can pipeline requests (send the requests back-to-back, without first waiting for a response) in HTTP/1.1. The server has to send the responses in order, but it doesn't have to process them in that order if it is willing to buffer the later responses in the case of head-of-line blocking.

Browsers can pipeline requests on http/1.1, but I don't think any of them actually do in today's world, at least that's what MDN says. [1] And from my recollection, very few browsers did pipelining prior to http/2 either -- the chances of running into something broken were much too high.

[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Connection...
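The mechanics the quote describes can be sketched with the Python stdlib (a toy demo of my own, not how any browser does it; the handler and paths are invented): the client writes two GET requests back-to-back on one HTTP/1.1 connection before reading anything, and the server answers them in order off the socket buffer.

```python
import http.server
import socket
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive, so one connection serves both

    def do_GET(self):
        body = self.path.encode()  # echo the request path back as the body
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as sock:
    # Two requests, written back-to-back with no read in between:
    # this is the pipelining part.
    for path in ("/first", "/second"):
        sock.sendall(f"GET {path} HTTP/1.1\r\nHost: localhost\r\n\r\n".encode())
    sock.settimeout(5)
    data = b""
    while b"/second" not in data:  # read until both responses have arrived
        data += sock.recv(4096)

assert data.index(b"/first") < data.index(b"/second")  # in-order responses
server.shutdown()
```

The second request just sits in the kernel's socket buffer until the server finishes the first, which is also why one slow response stalls everything behind it.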


Yes. No current browser implements HTTP/1.1 pipelining, and various proxies and other intermediaries don't support it either, due to head-of-line blocking and proxy errors.

Also, with HTTP/1.1 pipelining, if a client sends multiple requests and one of them results in an error that makes the server close the connection, all of the other in-flight requests are lost.
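A toy model of that failure mode (mine, not from any real client): when one pipelined request errors and the server closes the connection, every request queued behind it goes unanswered, and the client has to re-send that whole tail on a new connection.

```python
def pipeline_once(requests, failing_request):
    """Responses received on one connection; everything after the
    failing request is lost when the server closes the connection."""
    answered = []
    for req in requests:
        if req == failing_request:
            break  # server sends an error and closes the connection
        answered.append(f"200 {req}")
    return answered

pipeline = ["/a", "/bad", "/c", "/d"]
answered = pipeline_once(pipeline, failing_request="/bad")
lost_tail = pipeline[len(answered):]
print(answered)   # → ['200 /a']
print(lost_tail)  # → ['/bad', '/c', '/d'] -- all must be retried elsewhere
```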


HTTP pipelining is turned off by default in most browsers due to concerns with buggy proxies and servers (see https://bugzilla.mozilla.org/show_bug.cgi?id=264354 ). It may work for you and the particular set of servers you visit, but I suspect browser developers would rather have a browser that by default works with the widest possible range of configurations.

Unfortunately, it being turned off by default in most browsers means that most people won't see the benefits from it. Hopefully, the upcoming HTTP/2 standard will fare better (latest draft: https://tools.ietf.org/html/draft-ietf-httpbis-http2-01 ).

Note that HTTP/2 will be based on SPDY (in particular, SPDY/4 with the new header compressor). Hopefully, when the standard is finalized and we have multiple strong implementations, that will allay the concerns you seem to have with SPDY today.

(Disclaimer: I work on SPDY / HTTP/2 for Chromium.)


HTTP has had pipelining since the '90s.

Yes. HTTP/1.1 arranged for this with pipelining 11 years ago.

But there were servers that thought they supported pipelining and had corruption issues, so the clients got scared and wouldn't use it. (Besides, the modems on the edge of the web were the problem, not the latency.) Then the proxy people said "Why bother? No one uses it.", and the clients continued to not implement it, or did but left it off by default with a switch to turn it on in a disused lavatory, behind the "Beware of the Leopard" sign. Meanwhile, the server people having run out of useful and useless features to add to their vast code bases actually got around to making pipelining work correctly.

Welcome to 2010.

• Most popular web servers support pipelining, probably correctly.

• <1% of browsers will try it.

• Proxies (firewall and caching) largely break it.

• If you are using scripts that can change the state of the web server, then your head might explode when you consider what happens with pipelined requests.
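That last bullet can be made concrete with a toy simulation (entirely invented, no real server involved): the server executes every request it has already read, the connection dies before all responses come back, and a naive client retry runs a state-changing request a second time.

```python
state = {"counter": 0}

def serve(requests, responses_delivered):
    """Server executes every request it read off the pipeline, but the
    connection dies after `responses_delivered` responses reach the client."""
    for req in requests:
        if req == "POST /increment":
            state["counter"] += 1  # side effect happens regardless
    return requests[:responses_delivered]  # only these are confirmed to the client

pipeline = ["GET /a", "POST /increment", "GET /b"]
answered = serve(pipeline, responses_delivered=1)        # only GET /a confirmed
unanswered = pipeline[len(answered):]                    # client must guess
serve(unanswered, responses_delivered=len(unanswered))   # naive blind retry
print(state["counter"])  # → 2, the increment ran twice
```

This is why RFC 7230 tells clients not to pipeline non-idempotent methods: after a lost connection the client can't distinguish "executed, response lost" from "never processed".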


It doesn't take a rocket scientist to support http/1.1 pipelining on servers. It's harder to implement in proxies. Much of the creepy middleware on the Internet would break down in nearly impossible to troubleshoot ways when browsers pipelined requests. So they forked the protocol as a bypass.
