The Internet Google Networking IT Technology

HTTP Intermediary Layer From Google Could Dramatically Speed Up the Web

grmoc writes "As part of the 'Let's make the web faster' initiative, we (a few engineers — including me! — at Google, and hopefully people all across the community soon!) are experimenting with alternative protocols to help reduce the latency of Web pages. One of these experiments is SPDY (pronounced 'SPeeDY'), an application-layer protocol (essentially a shim between HTTP and the bits on the wire) for transporting content over the web, designed specifically for minimal latency. In addition to a rough specification for the protocol, we have hacked SPDY into the Google Chrome browser (because it's what we're familiar with) and a simple server testbed. Using these hacked-up bits, we compared the performance of many of the top 25 and top 300 websites over both HTTP and SPDY, and have observed those pages load, on average, about twice as fast using SPDY. That's not bad! We hope to engage the open source community to contribute ideas, feedback, code (we've open-sourced the protocol, etc.!), and test results."
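
    The summary describes SPDY as a shim that multiplexes many HTTP requests over a single connection. As a rough illustration of that multiplexing idea only (not the actual SPDY framing, which has control bits, flags, and compressed header blocks), here is a toy length-prefixed frame format in Python; all names are made up for the sketch.

        import struct

        # Toy frame: 4-byte stream id, 4-byte payload length, then the payload.
        def pack_frame(stream_id: int, payload: bytes) -> bytes:
            return struct.pack("!II", stream_id, len(payload)) + payload

        def unpack_frames(buf: bytes):
            # Yield (stream_id, payload) tuples from a buffer of concatenated frames.
            offset = 0
            while offset + 8 <= len(buf):
                stream_id, length = struct.unpack_from("!II", buf, offset)
                yield stream_id, buf[offset + 8 : offset + 8 + length]
                offset += 8 + length

        # Two requests interleaved on one TCP connection instead of opening two.
        wire = pack_frame(1, b"GET /index.html") + pack_frame(3, b"GET /style.css")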
  • Re:Akamai? (Score:5, Informative)

    by TooMuchToDo ( 882796 ) on Thursday November 12, 2009 @04:14PM (#30078112)
    No. Akamai gives boxes to ISPs that cache Akamai's customers' content closer to the ISP's customers. Akamai then uses logic built into its DNS to redirect requests to the appliance closest to the request.
  • by ranson ( 824789 ) on Thursday November 12, 2009 @04:22PM (#30078210) Homepage Journal
    AOL actually does something similar to this with their TopSpeed technology, and it does work very, very well. It has introduced features like multiplexed persistent connections to the intermediary layer, sending down just object deltas since the last visit (for If-Modified-Since requests), and applying gzip compression to uncompressed objects on the wire. It's one of the best technologies they've introduced. And, in full disclosure, I was proud to be a part of the team that made it all possible. It's too bad all of this is specific to the AOL software, so I'm glad a name like Google is trying to open up these kinds of features to the general internet.
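
    As a sketch of the "apply gzip to uncompressed objects on the wire" feature described above (an illustration under assumptions, not AOL's actual TopSpeed code; the function name and size check are made up), an intermediary might do something like:

        import gzip

        def maybe_compress(headers: dict, body: bytes, client_accepts_gzip: bool):
            # Compress a response body at the intermediary if the origin didn't.
            if not client_accepts_gzip or headers.get("Content-Encoding"):
                return headers, body
            compressed = gzip.compress(body)
            if len(compressed) >= len(body):            # not worth it for tiny payloads
                return headers, body
            headers["Content-Encoding"] = "gzip"
            headers["Content-Length"] = str(len(compressed))
            return headers, compressed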
  • Re:Akamai? (Score:3, Informative)

    by ranson ( 824789 ) on Thursday November 12, 2009 @04:30PM (#30078354) Homepage Journal
    No. Akamai offers many services and features beyond 'giving' boxes to ISPs. For instance, they have their own global CDN, unrelated to any ISP, which you can pay to have your content served across. They'll host it or reverse proxy/cache it. They can also multicast live streaming media, on-demand streaming media, etc. You get the picture. In one sentence, Akamai is a high-availability, high-capacity provider of bandwidth. And they accomplish that in a variety of ways other than just putting boxes in ISPs.
  • by 93 Escort Wagon ( 326346 ) on Thursday November 12, 2009 @04:48PM (#30078624)

    Adsense is embedded where the ads are going to be, Google Maps scripts are embedded where the map is going to be, etc.

    This doesn't have to be the case, unless you're still coding per 1997 standards. Even with CSS 1, you can put those DIVs last in the code and still place them wherever you want them to be.

    It's what I do with the Google ads (text only ads, FWIW) on one of my personal sites - so the content loads first, and then the ads show up.

  • by kriegsman ( 55737 ) on Thursday November 12, 2009 @05:13PM (#30079022) Homepage
    HTTP-NG ( http://www.w3.org/Protocols/HTTP-NG/ [w3.org] ) was researched, designed, and even, yes, implemented to solve the same problems that Google's "new" SPDY is attacking -- in 1999, ten years ago.

    The good news is that SPDY seems to build on the SMUX ( http://www.w3.org/TR/WD-mux [w3.org] ) and MUX protocols that were designed as part of the HTTP-NG effort, so at least we're not reinventing the wheel. Now we have to decide what color to paint it.

    Next up: immediate support in Firefox, WebKit, and Apache -- and deafening silence from IE and IIS.

  • by ribuck ( 943217 ) on Thursday November 12, 2009 @05:17PM (#30079066)

    Gopher is not installed by default, kiddie...

    Gopher is installed by default on most builds of Firefox. Try this in your address bar: gopher://gopher.floodgap.com/1/world [floodgap.com]

  • by commodore64_love ( 1445365 ) on Thursday November 12, 2009 @05:34PM (#30079374) Journal

    Someone already invented this.

    It's called the Opera browser.

  • by mattack2 ( 1165421 ) on Thursday November 12, 2009 @05:47PM (#30079638)

    Mostly in that it handles tables and frames.

    http://www.jikos.cz/~mikulas/links/ [jikos.cz]

  • by grmoc ( 57943 ) on Thursday November 12, 2009 @05:55PM (#30079796)

    Yes, it means that both sides have to speak the protocol.
    That is why we want to engage the community to start to look at this work!

  • by commodore64_love ( 1445365 ) on Thursday November 12, 2009 @05:56PM (#30079824) Journal

    >>>acoustic-coupled modem

    Which was the result of the Bell Telephone monopoly. They refused to let other non-Bell devices connect to their lines, which forced users to buy *only* Bell products. Man I hate monopolies. I despise them like Teddy Roosevelt despised them.

    Fortunately somebody came up with the idea of the acoustic modem, which connected *indirectly* via sound. Very primitive, but they worked, they didn't break Bell's rules, and more importantly, they opened up the market to other companies.

    THEN Bell announced that if you were using a modem, you had to pay an extra surcharge for overuse of the line you paid for, or else risk disconnection. Sound familiar? (cough Comcast). Most users ignored Bell's surcharge idea.

  • by grmoc ( 57943 ) on Thursday November 12, 2009 @05:58PM (#30079840)

    Right now the plan is to use port 443. We may as well make the web a safer place while we make it faster.
    The plan for indicating that a client/server speaks SPDY is still somewhat up in the air. What we have planned right now is:
    UPGRADE (ye olde HTTP UPGRADE),
    and putting a string into the SSL handshake that allows both sides to advertise which protocols they speak. If both speak SPDY, then it can be used.
    This is nice because you don't incur the latency of an additional round trip (and that latency can be large!)
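
    The "advertise protocols inside the SSL handshake" idea eventually became the ALPN TLS extension (SPDY initially used a similar extension called NPN). A rough sketch of the client side using Python's ssl module, with illustrative protocol strings and host name:

        import socket, ssl

        ctx = ssl.create_default_context()
        # Offer a SPDY-style protocol first, with plain HTTP/1.1 as a fallback.
        ctx.set_alpn_protocols(["spdy/2", "http/1.1"])

        with socket.create_connection(("www.example.org", 443)) as raw:
            with ctx.wrap_socket(raw, server_hostname="www.example.org") as tls:
                chosen = tls.selected_alpn_protocol()   # None if nothing was negotiated
                if chosen == "spdy/2":
                    pass  # speak the multiplexed protocol on this connection
                else:
                    pass  # fall back to ordinary HTTP/1.1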

  • by commodore64_love ( 1445365 ) on Thursday November 12, 2009 @06:07PM (#30080008) Journal

    >>>Gopher predates HTTP by a fair number of years.

    Not correct. Gopher and HTTP were both released in summer 1991, so they have virtually the same birthdate. However, Gopher was available on the IBM PC that same year while HTTP was still confined to Unix systems, which is why people misremember Gopher as being first. (HTTP came to IBM PCs, Macs, and Amigas in 1993.)

  • Re:Problems... (Score:3, Informative)

    by grmoc ( 57943 ) on Thursday November 12, 2009 @06:41PM (#30080560)

    # To make SSL the underlying transport protocol, for better security and compatibility with existing network infrastructure. Although SSL does introduce a latency penalty, we believe that the long-term future of the web depends on a secure network connection. In addition, the use of SSL is necessary to ensure that communication across existing proxies is not broken.

    The problem with that is that now everything is encrypted. If it has multiple channels, let one be plaintext for insecure items, and one ciphered for encrypted ones.

    We've had ideas along these lines-- specifically, we need to work on caching! One proposal that we had was that we'd send cryptographic hashes on the secure channel, then send the static data in the clear on a non-encrypted channel.
    Alternatively, the data could be signed, and no communication would be necessary on the secure channel.
    In any case, there is a lot of work to do on this, and we by no means have the answers right now. We just want to make the experiment public, and get as many people involved as we can so that we all end up with something better.
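
    A toy illustration of the "hashes over the secure channel" proposal above, assuming a hypothetical setup where the encrypted channel carries only a SHA-256 digest of a static resource and the bytes themselves arrive (or are already cached) via an unencrypted channel; the function name is made up:

        import hashlib

        def verify_static_resource(expected_sha256_hex: str, cleartext_bytes: bytes) -> bool:
            # Accept bytes fetched in the clear only if they match the digest
            # delivered over the encrypted channel; otherwise discard them.
            return hashlib.sha256(cleartext_bytes).hexdigest() == expected_sha256_hex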

    # To enable the server to initiate communications with the client and push data to the client whenever possible.

    Horrible idea, because now popup and ad blockers don't work. Sure, they might not show it, but the server has already sent it to you and eaten up your bandwidth. What are your options? Send a block list during negotiation? Not likely, and it still might not be honored. We need to keep the client in control. What should be done is for the server to send the component list, and then the client returns the accepted list back to the server to have it put into the download stream. While this is the correct operation, the problem is that it increases latency.

    Well, the fact the server sends the data doesn't mean that the browser has to interpret it or render it. In the protocol, if/when the browser notices the server sending something it doesn't want, the browser can send a FIN (letting the other side know it should stop), and then can simply ignore the rest. It uses up some bandwidth, but it is really not that much worse than today... especially if we find that the real world tests also show it to be 2X faster on average!!
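
    A sketch of the "send a FIN and ignore the rest" behaviour described above, assuming a hypothetical client-side filter for pushed resources; the blocklist, callback, and frame name are made up for illustration (SPDY itself cancels a stream with a RST_STREAM control frame):

        # Illustrative only: decide whether to accept a server-pushed resource.
        BLOCKED_SUBSTRINGS = ("ads.", "/banner/")       # hypothetical blocklist

        def handle_pushed_stream(stream_id: int, url: str, send_control_frame) -> bool:
            # Return True to accept the push; otherwise tell the server to stop
            # and discard any data frames for this stream already in flight.
            if any(s in url for s in BLOCKED_SUBSTRINGS):
                send_control_frame("CANCEL", stream_id)
                return False
            return True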

  • addin not needed (Score:4, Informative)

    by eleuthero ( 812560 ) on Thursday November 12, 2009 @06:46PM (#30080620)
    Most of the features of Fasterfox are found in about:config. There is no sense in installing an addon that will slow the browser down when the browser already has pipelining and prefetching built in (albeit disabled by default).
  • by ProfessionalCookie ( 673314 ) on Thursday November 12, 2009 @06:54PM (#30080736) Journal
    One is the world wide web, the other is a cat.
  • by grmoc ( 57943 ) on Thursday November 12, 2009 @06:56PM (#30080754)

    As a server implementor I can tell you that I'd rather have 1 heavily used connection than 20 (that is a LOW estimate for the number of connections many sites make(!!!!!!!)). Server efficiency was one of my goals for the protocol, in fact!

    When we're talking about requiring compression, we're talking about compression over the headers only.

    In any case, as someone who operates servers... I can't tell you how many times I've been angry at having to turn off compression for *EVERYONE* because some browser advertises supporting compression but doesn't actually handle it (which interacts badly with caches, etc.).
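
    A minimal sketch of header-only compression of the kind mentioned above, using zlib seeded with a preset dictionary (SPDY does seed zlib with a dictionary of common header strings, but the tiny dictionary and header layout below are illustrative, not the spec's):

        import zlib

        # Toy dictionary of strings that appear in almost every request.
        HEADER_DICT = b"accept-encodingaccept-languageuser-agentcontent-typecookiehostget"

        def compress_headers(headers: dict) -> bytes:
            block = "\r\n".join(f"{k}: {v}" for k, v in headers.items()).encode()
            comp = zlib.compressobj(zdict=HEADER_DICT)
            return comp.compress(block) + comp.flush()

        def decompress_headers(data: bytes) -> bytes:
            decomp = zlib.decompressobj(zdict=HEADER_DICT)
            return decomp.decompress(data) + decomp.flush()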

  • by somersault ( 912633 ) on Thursday November 12, 2009 @07:32PM (#30081226) Homepage Journal

    Slashdot moderation is like the average fuel consumption on your car's trip computer. If you reset it while rolling down a hill you'll get insane MPG figures, but after that it'll fix itself up in the long run and evaluate to a correct value.

  • Re:Before you click! (Score:4, Informative)

    by Hurricane78 ( 562437 ) on Thursday November 12, 2009 @07:56PM (#30081476)

    Yes, thedailywtf.com has such stories. I specifically remember one where the delete button for database entries was a GET link on the list page. So Google's little spider went there and crawled the entire list, requesting every single delete-link address on the page. I think the page was not even linked from anywhere; the crawler got there by reading the referrer addresses from when the developers came to Google from a link on that site.

    And if I remember correctly, it was, of course, a production database with no backups. The only one, in fact. Must have been fun. :)

  • Re:Before you click! (Score:3, Informative)

    by grmoc ( 57943 ) on Thursday November 12, 2009 @08:37PM (#30081886)

    Yup. They're pretty big (cookies can be HUGE!)
    Take a look here:
    http://sites.google.com/a/chromium.org/dev/spdy/spdy-whitepaper

    "Header compression resulted in an ~88% reduction in the size of request headers and an ~85% reduction in the size of response headers. On the lower-bandwidth DSL link, in which the upload link is only 375 Kbps, request header compression in particular, led to significant page load time improvements for certain sites (i.e. those that issued large number of resource requests). We found a reduction of 45 - 1142 ms in page load time simply due to header compression."

  • Re:Before you click! (Score:2, Informative)

    by GravityStar ( 1209738 ) on Friday November 13, 2009 @02:56AM (#30084162)

egrep -n '^[a-z].*\(' $ | sort -t':' +2.0
