HTTP Intermediary Layer From Google Could Dramatically Speed Up the Web
grmoc writes "As part of the 'Let's make the web faster' initiative, we (a few engineers — including me! — at Google, and hopefully people all across the community soon!) are experimenting with alternative protocols to help reduce the latency of Web pages. One of these experiments is SPDY (pronounced 'SPeeDY'), an application-layer protocol (essentially a shim between HTTP and the bits on the wire) for transporting content over the web, designed specifically for minimal latency. In addition to a rough specification for the protocol, we have hacked SPDY into the Google Chrome browser (because it's what we're familiar with) and a simple server testbed. Using these hacked-up bits, we compared the performance of many of the top 25 and top 300 websites over both HTTP and SPDY, and have observed those pages load, on average, about twice as fast using SPDY. That's not bad! We hope to engage the open source community to contribute ideas, feedback, code (we've open sourced the protocol, etc!), and test results."
Re:Akamai? (Score:5, Informative)
Not a terribly new concept. (Score:5, Informative)
Re:Akamai? (Score:3, Informative)
Re:How about telling Analytics to take a hike? (Score:3, Informative)
Adsense is embedded where the ads are going to be, Google Maps scripts are embedded where the map is going to be, etc.
This doesn't have to be the case, unless you're still coding per 1997 standards. Even with CSS 1, you can put those DIVs last in the code and still place them wherever you want them to be.
It's what I do with the Google ads (text only ads, FWIW) on one of my personal sites - so the content loads first, and then the ads show up.
HTTP-NG Revisited (ten years later!) (Score:5, Informative)
The good news is that SPDY seems to build on the SMUX ( http://www.w3.org/TR/WD-mux [w3.org] ) and MUX protocols that were designed as part of the HTTP-NG effort, so at least we're not reinventing the wheel. Now we have to decide what color to paint it.
Next up: immediate support in Firefox, WebKit, and Apache -- and deafening silence from IE and IIS.
Re:Just turn off image loading (Score:5, Informative)
Gopher is not installed by default, kiddie...
Gopher is installed by default on most builds of Firefox. Try this in your address bar: gopher://gopher.floodgap.com/1/world [floodgap.com]
Re:Just turn off image loading (Score:5, Informative)
Someone already invented this.
It's called Opera browser
Re:Just turn off image loading (Score:3, Informative)
Mostly in that it handles tables and frames.
http://www.jikos.cz/~mikulas/links/ [jikos.cz]
Re:Application Layer... (Score:3, Informative)
Yes, it means that both sides have to speak the protocol.
That is why we want to engage the community to start to look at this work!
Re:Just turn off image loading (Score:3, Informative)
>>>acoustic-coupled modem
Which was the result of the Bell Telephone monopoly. They refused to let other non-Bell devices connect to their lines, which forced users to buy *only* Bell products. Man I hate monopolies. I despise them like Teddy Roosevelt despised them.
Fortunately somebody came up with the idea of the acoustic modem, which connected *indirectly* via the usage of sound. Very primitive, but they worked, and they didn't break Bell's rules, and more importantly, they opened up the market to other companies.
THEN Bell announced that if you were using a modem, you had to pay an extra surcharge for overuse of the line you'd already paid for. Or else risk disconnection. Sound familiar? (cough Comcast). Most users ignored Bell's surcharge idea.
Re:Cool.... but it's not http (Score:5, Informative)
Right now the plan is to use port 443. We may as well make the web a safer place while we make it faster.
The plan for indicating that a client/server speaks SPDY is still somewhat up in the air. What we have planned right now is:
UPGRADE (ye olde HTTP UPGRADE),
and putting a string into the SSL handshake that allows both sides to advertise which protocols they speak. If both speak SPDY, then it can be used.
The latter is nice because you don't have the additional latency of an extra roundtrip (and that latency can be large!)
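The handshake advertisement boils down to simple intersection logic: each side lists the protocols it speaks, and the first mutually supported one wins, with no extra round trip. A toy sketch of that logic (the function name and protocol strings are hypothetical, not the actual handshake format):

```python
def negotiate(client_protocols, server_protocols):
    """Pick the first protocol the client advertises that the server
    also speaks; fall back to plain HTTP if there is no overlap."""
    for proto in client_protocols:
        if proto in server_protocols:
            return proto
    return "http/1.1"

# Both sides advertise SPDY, so it is selected during the handshake
# itself, costing zero additional round trips.
chosen = negotiate(["spdy/1", "http/1.1"], ["spdy/1", "http/1.1"])
```

A server that has never heard of SPDY simply doesn't advertise it, and the same handshake degrades cleanly to HTTP.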
Re:Just turn off image loading (Score:4, Informative)
>>>Gopher predates HTTP by a fair number of years.
Not correct. Gopher and HTTP were both released in summer 1991, so virtually the same birthdate. However gopher was available on the IBM PC that same year while HTTP was still confined to Unix systems, so that's why people misremember gopher as being first. (HTTP came to IBM PC, Macs, and Amigas in 1993.)
Re:Problems... (Score:3, Informative)
# To make SSL the underlying transport protocol, for better security and compatibility with existing network infrastructure. Although SSL does introduce a latency penalty, we believe that the long-term future of the web depends on a secure network connection. In addition, the use of SSL is necessary to ensure that communication across existing proxies is not broken.
The problem with that is that now everything is encrypted. If it has multiple channels, let one be plaintext for insecure items, and one be ciphered for encrypted ones.
We've had ideas along these lines-- specifically, we need to work on caching! One proposal that we had was that we'd send cryptographic hashes on the secure channel, then send the static data in the clear on a non-encrypted channel.
Alternatively, the data could be signed, and no communication would be necessary on the secure channel.
In any case, there is a lot of work to do on this, and we by no means have the answers right now. We just want to make the experiment public, and get as many people involved as we can so that we all end up with something better.
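The first proposal above can be sketched in a few lines: the hash travels over the encrypted channel, the bulk static data travels in the clear, and the client only accepts data that matches the trusted hash. This is a minimal sketch assuming SHA-256 (the comment does not fix a particular hash), with hypothetical function names:

```python
import hashlib

def hash_for_secure_channel(static_data: bytes) -> str:
    # Sent over the encrypted channel, so the client can trust it.
    return hashlib.sha256(static_data).hexdigest()

def accept_cleartext_data(data: bytes, trusted_hash: str) -> bool:
    # The bulk data arrives unencrypted; accept it only if it matches
    # the hash received over the secure channel.
    return hashlib.sha256(data).hexdigest() == trusted_hash

body = b"<html>a large, cacheable, static resource</html>"
trusted = hash_for_secure_channel(body)
```

An intermediary cache can then serve the cleartext bytes without being able to tamper with them undetected, which is the point of the scheme.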
# To enable the server to initiate communications with the client and push data to the client whenever possible.
Horrible idea, because now popup and ad blockers don't work. Sure, they might not show the content, but the server has already sent it to you and eaten up your bandwidth. What are your options? Send a block-list during negotiation? Not likely, and it still might not be honored. We need to keep the client in control. What should be done is for the server to send the component list, and then the client to return the accepted list back to the server to have it put into the download stream. While this is the correct operation, the problem is that it increases latency.
Well, the fact the server sends the data doesn't mean that the browser has to interpret it or render it. In the protocol, if/when the browser notices the server sending something it doesn't want, the browser can send a FIN (letting the other side know it should stop), and then can simply ignore the rest. It uses up some bandwidth, but it is really not that much worse than today... especially if we find that the real world tests also show it to be 2X faster on average!!
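The cancel-via-FIN behavior described above can be modeled in a few lines. This is a toy client-side sketch, not the real SPDY framing; the class and field names are hypothetical:

```python
class PushStream:
    """One server-initiated stream pushing a resource at the client."""
    def __init__(self, stream_id: int, url: str):
        self.stream_id = stream_id
        self.url = url
        self.cancelled = False  # True once the client has sent a FIN

def handle_pushes(streams, blocklist):
    """Cancel (FIN) any pushed stream whose URL matches the client's
    blocklist and ignore its remaining data; keep everything else."""
    accepted = []
    for s in streams:
        if any(pattern in s.url for pattern in blocklist):
            s.cancelled = True  # stands in for sending a FIN frame
        else:
            accepted.append(s)
    return accepted

streams = [PushStream(2, "https://example.com/style.css"),
           PushStream(4, "https://ads.example.net/banner.js")]
kept = handle_pushes(streams, blocklist=["ads."])
```

Some bandwidth is still wasted on the bytes in flight before the FIN arrives, which is exactly the trade-off the comment acknowledges.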
Add-in not needed (Score:4, Informative)
Re:Just turn off image loading (Score:3, Informative)
Re:How about downsides... (Score:3, Informative)
As a server implementor I can tell you that I'd rather have 1 heavily used connection than 20 (that is a LOW estimate for the number of connections many sites make(!!!!!!!)). Server efficiency was one of my goals for the protocol, in fact!
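The reason one heavily used connection can replace twenty is multiplexing: each request/response gets a stream ID, and frames from many streams interleave on a single TCP connection. A toy framing sketch (4-byte stream ID, 4-byte length, then payload; the real SPDY frame layout is different):

```python
import struct

def encode_frame(stream_id: int, payload: bytes) -> bytes:
    # Network-byte-order stream ID and payload length, then the payload.
    return struct.pack("!II", stream_id, len(payload)) + payload

def decode_frames(buf: bytes):
    # Walk the buffer, slicing out (stream_id, payload) pairs.
    frames = []
    while buf:
        stream_id, length = struct.unpack("!II", buf[:8])
        frames.append((stream_id, buf[8:8 + length]))
        buf = buf[8 + length:]
    return frames

# Two logical requests share one connection's byte stream.
wire = encode_frame(1, b"GET /index.html") + encode_frame(3, b"GET /logo.png")
```

From the server's side, that is one socket, one TLS session, and one congestion-control state instead of twenty.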
When we're talking about requiring compression, we're talking about compression over the headers only.
In any case, as someone who operates servers... I can't tell you how many times I've been angry at having to turn off compression for *EVERYONE* because some browser advertises supporting compression but doesn't actually handle it correctly (which interacts badly with caches, etc.).
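Header-only compression pays off because request headers are highly repetitive text. A minimal sketch using plain zlib (SPDY actually uses zlib with a predefined dictionary, which compresses even better; the header values here are made up):

```python
import zlib

# A typical repetitive set of request headers, as raw bytes.
headers = (
    "Host: example.com\r\n"
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit Chrome\r\n"
    "Accept: text/html,application/xhtml+xml,application/xml\r\n"
    "Accept-Language: en-US,en;q=0.8\r\n"
    "Accept-Encoding: gzip,deflate\r\n"
    "Cookie: session=abcdef0123456789; prefs=dark; tracking=off\r\n"
).encode()

compressed = zlib.compress(headers)
print(len(headers), "->", len(compressed), "bytes")
```

On a slow upstream link (the whitepaper's 375 Kbps DSL case), shrinking every request's headers like this adds up quickly across the dozens of resource requests a page makes.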
Re:Oh that's wonderful (Score:3, Informative)
Slashdot moderation is like the average fuel consumption on your car's trip computer. If you reset it while rolling down a hill you'll get insane MPG figures, but after that it'll fix itself up in the long run and evaluate to a correct value.
Re:Before you click! (Score:4, Informative)
Yes. thedailywtf.com has such stories. I specifically remember one where the delete button for database entries was a GET link on the list page. So Google's little spider went there and crawled the entire list, requesting every single delete-link address on the page. I think the page wasn't even linked from anywhere; the crawler got there by reading the referrer addresses from when the developers came to Google from a link on that site.
And if I remember correctly, it was, of course, a production database with no backups. The only one, in fact. Must have been fun. :)
Re:Before you click! (Score:3, Informative)
Yup. They're pretty big (cookies can be HUGE!)
Take a look here:
http://sites.google.com/a/chromium.org/dev/spdy/spdy-whitepaper
"Header compression resulted in an ~88% reduction in the size of request headers and an ~85% reduction in the size of response headers. On the lower-bandwidth DSL link, in which the upload link is only 375 Kbps, request header compression in particular, led to significant page load time improvements for certain sites (i.e. those that issued large number of resource requests). We found a reduction of 45 - 1142 ms in page load time simply due to header compression."
Re:Before you click! (Score:2, Informative)
Reference: http://www.quasimondo.com/archives/000225.php [quasimondo.com]