Optimizing HTTP: Keep-alive and Pipelining by Ilya Grigorik.
From the post:
The last major update to the HTTP spec dates back to 1999, at which time RFC 2616 standardized HTTP 1.1 and introduced the much-needed keep-alive and pipelining support. Whereas HTTP 1.0 required a strict “single request per connection” model, HTTP 1.1 reversed this behavior: by default, an HTTP 1.1 client and server keep the connection open, unless the client indicates otherwise (via the Connection: close header).
Why bother? Setting up a TCP connection is very expensive! Even in an optimized case, a full one-way route between the client and server can take 10-50ms. Now multiply that three times to complete the TCP handshake, and we’re already looking at a 150ms ceiling! Keep-alive allows us to reuse the same connection between different requests and amortize this cost.
The only problem is that, more often than not, we as developers tend to forget this. Take a look at your own code: how often do you reuse an HTTP connection? The same problem is found in most API wrappers, and even in the standard HTTP libraries of most languages, which disable keep-alive by default.
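To make the point concrete, here is a minimal, self-contained sketch of connection reuse using only Python's standard library. It starts a throwaway local HTTP/1.1 server (a hypothetical `Handler` class, invented for this illustration) and then issues two sequential requests over a single `http.client` connection, which stays open between them thanks to keep-alive:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 keeps the connection open by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        # Content-Length lets the client know when the response ends,
        # so the connection can be kept alive for the next request.
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Throwaway local server on an OS-assigned port.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One connection, reused for two requests: only one TCP handshake is paid.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
statuses = []
for _ in range(2):
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()  # drain the body before reusing the connection
    statuses.append(resp.status)
conn.close()
server.shutdown()
print(statuses)  # → [200, 200]
```

Both requests travel over the same TCP connection; with a fresh connection per request you would pay the handshake latency twice. Higher-level libraries expose the same idea (e.g. a session or pooled-connection object), but the mechanism underneath is exactly this.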
I know, this is way over on the practical side, but some topic maps deliver content outside of NSA pipes, and some things are important enough to bear repeating. This article covers one of those. Enjoy.
[…] his design principles, complains about HTTP being slow. Maybe I should send him a pointer to: Optimizing HTTP: Keep-alive and Pipelining. What do you […]
Pingback by Jasondb « Another Word For It — October 14, 2011 @ 6:25 pm