
HTTP Intermediary Layer From Google Could Dramatically Speed Up the Web

Posted by timothy
from the sufficient-disclosure dept.
grmoc writes "As part of the 'Let's make the web faster' initiative, we (a few engineers — including me! — at Google, and hopefully people all across the community soon!) are experimenting with alternative protocols to help reduce the latency of Web pages. One of these experiments is SPDY (pronounced 'SPeeDY'), an application-layer protocol (essentially a shim between HTTP and the bits on the wire) for transporting content over the web, designed specifically for minimal latency. In addition to a rough specification for the protocol, we have hacked SPDY into the Google Chrome browser (because it's what we're familiar with) and a simple server testbed. Using these hacked-up bits, we compared the performance of many of the top 25 and top 300 websites over both HTTP and SPDY, and have observed those pages load, on average, about twice as fast using SPDY. That's not bad! We hope to engage the open source community to contribute ideas, feedback, code (we've open sourced the protocol, etc!), and test results."


  • by Anonymous Coward on Thursday November 12, 2009 @03:01PM (#30077894)

    Now we can see Uncle Goatse twice as fast.

  • by courteaudotbiz (1191083) on Thursday November 12, 2009 @03:02PM (#30077912) Homepage
    In the future, the content will be loaded before you click! Unfortunately, it's not like that today, so I didn't make the first post...
    • Re: (Score:2, Interesting)

      by oldspewey (1303305)

      content will be loaded before you click!

      Sounds like those "dialup accelerators" from back in the '90s ... the ones that would silently spider every link on the page you're currently viewing in order to build a predictive cache.

      • Re:Before you click! (Score:5, Interesting)

        by wolrahnaes (632574) <seanNO@SPAMseanharlow.info> on Thursday November 12, 2009 @03:41PM (#30078508) Homepage Journal

        Which of course led to quite amusing results when some failure of a web developer made an app that performed actions from GET requests. I've heard anecdotes of entire databases being deleted by a web accelerator in these cases.

        From RFC 2616:

        Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.

        In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered “safe”. This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.

        Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
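        To make the failure mode concrete, here is a minimal sketch (hypothetical routes and handler names, not any real framework) of how a side-effecting GET handler gets its database wiped by a link-following prefetcher or crawler:

```python
# Why GET must be "safe": a prefetcher blindly follows every link it sees,
# so a GET that deletes a record lets the prefetcher delete everything.

database = {1: "alice", 2: "bob", 3: "carol"}

def handle_get(path):
    """A (badly designed) handler that performs deletions via GET links."""
    if path.startswith("/delete/"):
        record_id = int(path.rsplit("/", 1)[1])
        database.pop(record_id, None)   # side effect on a "safe" method!
        return "deleted"
    # The listing page exposes one delete link per record.
    return "page with links: " + " ".join(f"/delete/{i}" for i in database)

# A naive prefetcher crawls every link on the listing page...
links = handle_get("/").split(": ")[1].split()
for link in links:
    handle_get(link)

print(database)  # -> {} : the crawler "clicked" every delete link
```

The fix, per the RFC quoted above, is to put state-changing actions behind POST/DELETE, which crawlers and prefetchers do not issue on their own.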

        • Re:Before you click! (Score:4, Informative)

          by Hurricane78 (562437) <deleted@slashd o t .org> on Thursday November 12, 2009 @06:56PM (#30081476)

          Yes, thedailywtf.com has such stories. I specifically remember one where the delete button for database entries was a GET link on the list page. So Google's little spider went there, crawled the entire list, and requested every single delete-link address on the page. I think the page was not even linked from anywhere; the crawler got there by reading the referrer addresses from when the developers came to Google from a link on that site.

          And if I remember correctly, it was of course a non-backed-up production database. The only one, in fact. Must have been fun. :)

      • by commodore64_love (1445365) on Thursday November 12, 2009 @04:09PM (#30078952) Journal

        >>>Sounds like those "dialup accelerators" from back in the '90s ...

        Hey, I still use one of those, you insensitive clod! It's called Netscape Web Accelerator, and it does more than just prefetch requests - it also compresses all text and images to about 10% of their original size. How else would I watch 90210 streaming videos over my phone line?

        Why I can almost see what looks like a bikini. Man Kelly is hot... ;-)

        • Re: (Score:3, Funny)

          But seriously...

          the accelerator (compression) is really useful, and I couldn't imagine using dialup without it. It makes those slow 28k or 50k hotel connections look as fast as my home DSL hookup. (Except for the blurry images of course.)

      • Re: (Score:3, Funny)

        by grmoc (57943)

        It's not the same, really...
        SPDY could do prefetching (in which case it'd be server push, instead of a new pull), but mainly what it does is let a lot of requests use the same connection, and compress the HTTP headers.
        That's essentially almost all of the current performance advantage (for today).
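        The header-compression point above is easy to see for yourself: HTTP headers are highly repetitive text, so even plain zlib (a rough stand-in here, not the actual SPDY wire format) shrinks them noticeably, and sharing a compression context across all requests on one connection helps further.

```python
# Sketch: why compressing HTTP request headers pays off.
# Assumes nothing about SPDY's real framing; zlib is just an illustration.
import zlib

headers = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64) Chrome/4.0\r\n"
    "Accept: text/html,application/xhtml+xml\r\n"
    "Accept-Encoding: gzip,deflate\r\n"
    "Cookie: session=abc123; prefs=dark\r\n"
    "\r\n"
).encode()

compressed = zlib.compress(headers, 9)
print(len(headers), len(compressed))  # compressed is noticeably smaller
```

On a keep-alive connection the same headers (Host, User-Agent, Cookie...) are re-sent with every request, which is why header compression plus multiplexing over one connection adds up.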

    • by mcgrew (92797) *

      In the future, the content will be loaded before you click!

      Wouldn't you have to have some thiotimoline [wikipedia.org] and water in your mouse for that to work? Thiotimoline ain't cheap, you know.

  • and faster still.. (Score:4, Insightful)

    by Anonymous Coward on Thursday November 12, 2009 @03:04PM (#30077928)

    remove Flash, Java applets, and ads
    20X faster!

    • by amicusNYCL (1538833) on Thursday November 12, 2009 @03:42PM (#30078526)

      You could also remove images, CSS, Javascript, and text, imagine the time savings!

      • by Joe Mucchiello (1030) on Thursday November 12, 2009 @04:11PM (#30078982) Homepage

        Remove the content too. It's all meaningless stuff like this post.

      • by commodore64_love (1445365) on Thursday November 12, 2009 @04:14PM (#30079028) Journal

        Ye are joking, but ye are correct. Take this Slashdot page. I used to be able to participate in a discussion forum with nothing more than a 1200 baud (1 kbit/s) modem. If I tried that today, even with all images turned off, it would take 45 minutes to load this page, mainly due to the enormous CSS files.

        It would be nice if websites made at least *some* attempt to make their files smaller, and therefore load faster.

        • Re: (Score:3, Funny)

          by icebraining (1313345)

          So save the CSS to your HD and put a filter in an extension/proxy/etc to replace the CSS URL with your local file. Wait, isn't that what the cache is for? Hmm...

        • by ProfessionalCookie (673314) on Thursday November 12, 2009 @05:41PM (#30080554) Journal
          Are you kidding? The new Slashdot is way easier to participate on from dialup. The CSS file may look huge, but it's a 29KB one-time download.

          Cache headers are set to one week, so unless you're clearing your cache every page load, it amounts to nothing.

          If anything the scripts are bigger, but again, cached. Besides, AJAX comments were a huge improvement for those of us on dialup - no more loading the whole page every time you did anything.

          CSS and JS, when used correctly, make things faster for users, even (and sometimes especially) for those of us on slow connections.

          • Re: (Score:3, Insightful)

            by edumacator (910819)

            The new slashdot is way easier to participate on from dialup.

            Shhh...If people start thinking /. discussions work, half the people here won't have anything to complain about and will have to go back to spending the day working.

    • Re: (Score:3, Interesting)

      by daem0n1x (748565)

      I think Flash should be made illegal. Yesterday I visited a website made 100% in Flash. I had to wait for it to load, and then none of the links worked. Many Flash sites' links don't work in Firefox; I have no idea why. I suspect incompetent developers.

      I sent a furious email to the company saying I was going to choose one of their competitors just because of the lousy website. I got a reply from their CEO basically saying "go ahead, we don't give a fuck".

      Flash is like cake icing. A little bit tastes and

  • by Anonymous Coward
    How is this different from Web servers that serve up gzipped pages?

    If only the Google engineers could do something about Slashdot's atrociously slow Javascript. Maybe they could remove the sleep() statements.

    What, just because the original poster pulls a "look at me, I did something cool, therefore I must be cool!" doesn't mean I have to go along with it.
  • Suspicious.... (Score:3, Interesting)

    by Anonymous Coward on Thursday November 12, 2009 @03:08PM (#30078010)

    From the link

    We downloaded 25 of the "top 100" websites over simulated home network connections, with 1% packet loss. We ran the downloads 10 times for each site, and calculated the average page load time for each site, and across all sites. The results show a speedup over HTTP of 27% - 60% in page load time over plain TCP (without SSL), and 39% - 55% over SSL.

    1. Look at top 100 websites.
    2. Choose the 25 which give you good numbers and ignore the rest.
    3. PROFIT!

  • by rho (6063) on Thursday November 12, 2009 @03:09PM (#30078024) Homepage Journal

    And all other "add this piece of Javascript to your Web page and make it more awesomer!"

    Yes, yes, they're useful. And you can't fathom a future without them. But in the meantime I'm watching my status bar say, "completed 4 of 5 items", then change to "completed 11 of 27 items", to "completed 18 of 57 items", to "completed... oh screw this, you're downloading the whole Internet, just sit back, relax and watch the blinkenlights".

    Remember when a 768kbps DSL line was whizzo fast? Because all it had to download was some simple HTML, maybe some gifs?

    I want my old Internet back. And a pony.

    • by ramaboo (1290088) on Thursday November 12, 2009 @03:11PM (#30078062)

      And all other "add this piece of Javascript to your Web page and make it more awesomer!"

      Yes, yes, they're useful. And you can't fathom a future without them. But in the meantime I'm watching my status bar say, "completed 4 of 5 items", then change to "completed 11 of 27 items", to "completed 18 of 57 items", to "completed... oh screw this, you're downloading the whole Internet, just sit back, relax and watch the blinkenlights".

      Remember when a 768kbps DSL line was whizzo fast? Because all it had to download was some simple HTML, maybe some gifs?

      I want my old Internet back. And a pony.

      That's why smart web developers put those scripts at the end of the body.

      • Re: (Score:3, Insightful)

        by Zocalo (252965)

        That's why smart web developers put those scripts at the end of the body.

        It's also why smart users filter them outright with something like AdBlock - anything that I see in the browser history that looks like a tracking/stats domain or URL gets blocked on sight. Come to think of it, I could probably clean it up and publish it as an AdBlock filter list if anyone's interested; there's only a few dozen entries on there at the moment, but I'm sure that would grow pretty quickly if it was used by a more general and less paranoid userbase.

        • by causality (777677) on Thursday November 12, 2009 @03:58PM (#30078782)

          That's why smart web developers put those scripts at the end of the body.

          It's also why smart users filter them outright with something like AdBlock - anything that I see in the browser history that looks like a tracking/stats domain or URL gets blocked on sight. Come to think of it, I could probably clean it up and publish it as an AdBlock filter list if anyone's interested; there's only a few dozen entries on there at the moment, but I'm sure that would grow pretty quickly if it was used by a more general and less paranoid userbase.

          What's paranoid about insisting that a company bring a proposal, make me an offer, and sign a contract if they want to derive monetary value from my personal data? Instead, they feel my data is free for the taking and this entitlement mentality is the main reason why I make an effort to block all forms of tracking. I never gave consent to anyone to track anything I do, so why should I honor an agreement in which I did not participate? The "goodness" or "evil-ness" of their intentions doesn't even have to be a consideration. Sorry but referring to that as "paranoid" is either an attempt to demagogue it, or evidence that someone else's attempt to demagogue it was successful on you.

          Are some people quite paranoid? Sure. Does that mean you should throw out all common sense, pretend like there are only paranoid reasons to disallow tracking, and ignore all reasonable concerns? No. Sure, someone who paints with a broad brush might notice that your actions (blocking trackers) superficially resemble some actions taken by paranoid people. Allowing that to affect your decision-making only empowers those who are superficial and quick to assume, because you are kowtowing to them. This is what insecure people do. If the paranoid successfully tarnish the appearance of an otherwise reasonable action because we care too much about what others may think, it can only increase the damage caused by paranoia.

          • Re: (Score:3, Insightful)

            by mattack2 (1165421)

            It's not "free for the taking". It's "free in exchange for free content on the web".

            (Note, I'm not arguing against ad blockers or the like... just as I 30-second-skip through the vast, vast, vast majority of commercials on my TiVos, and FFed through them on my VCR before that.)

          • Re: (Score:3, Interesting)

            What's paranoid about insisting that a company bring a proposal, make me an offer, and sign a contract if they want to derive monetary value from my personal data?

            Because the costs of doing so would outweigh the benefits, leading to no one agreeing to the use of their data, no ad revenue, and ultimately no professional web sites (except those that charge a fee to view). This situation is termed a "market failure", in this case because of high transaction costs. Therefore, society standardizes the agreeme

          • Re: (Score:3, Interesting)

            by krelian (525362)

            Instead, they feel my data is free for the taking and this entitlement mentality is the main reason why I make an effort to block all forms of tracking.

            What about your sense of entitlement to get their content under your conditions?

    • I want my old Internet back. And a pony.

      You forgot to yell at the kids to get off your internet.

    • I want my old Internet back. And a pony.

      If Slashdot does OMG Ponies again will that satisfy your wants and needs?

    • by gstoddart (321705)

      Remember when a 768kbps DSL line was whizzo fast?

      Jeebus. I remember when my 1200 baud modem felt whizzo fast compared to my old 300 baud modem.

      And, yes, I can already see the "get off of my lawn" posts below you, and I'm dating myself. :-P

      Cheers

    • Re: (Score:3, Insightful)

      by value_added (719364)

      I want my old Internet back. And a pony.

      LOL. I'd suggest disabling javascript and calling it a day.

      Alternatively, use a text-based browser. If the webpage has any content worth reading, then a simple lynx -dump in 99% of cases will give you what you want, with the added bonus of re-formatting those mile-wide lines into something readable.

      On the other hand, I suspect most people don't want the "old internet". What was once communicated on usenet or email in a few simple lines, for example, now increasingl

  • by Animats (122034) on Thursday November 12, 2009 @03:10PM (#30078038) Homepage

    The problem isn't pushing the bits across the wire. Major sites that load slowly today (like Slashdot) typically do so because they have advertising code that blocks page display until the ad loads. The ad servers are the bottleneck. Look at the lower left of the Mozilla window and watch the "Waiting for ..." messages.

    Even if you're blocking ad images, there's still the delay while successive "document.write" operations take place.

    Then there are the sites that load massive amounts of canned CSS and Javascript. (Remember how CSS was supposed to make web pages shorter and faster to load? NOT.)

    Then there are the sites that load a skeletal page which then makes multiple requests for XML for the actual content.

    Loading the base page just isn't the problem.

    • by HBI (604924) <kparadine&gmail,com> on Thursday November 12, 2009 @03:12PM (#30078076) Homepage Journal

      IAWTP. With NoScript on and off, the web is a totally different place.

      • by rho (6063)

        With NoScript on and off, the web is a totally different place

        Yes. Quite often completely non-functional, because the site requires Javascript to do anything.

        Usually this is followed by an assertion that the site's developer is a clueless knob--which may be true, but doesn't help at all. This is the Web we deserve, I suppose: 6 megabit cable connections and dual-core 2.5 gigahertz processors that can't render a forum page for Pokemon addicts in under 8 seconds.

    • Re: (Score:3, Funny)

      by BlueBoxSW.com (745855)

      So if Google sped up the non-ad web, they would have more room for their ads?

      SNEAKY!!

    • ?

      No. Neither can I. It will let them *push* adverts at you in parallel though... *before you asked for them*

      Google wanting more efficient advert distribution... No, never...

       

    • Re: (Score:2, Insightful)

      by Yoozer (1055188)

      (Remember how CSS was supposed to make web pages shorter and faster to load? NOT.)

      What, you think after the first load that CSS file isn't cached in any way? Inline styles slow down every page load; an external CSS file only the first. CSS was supposed to make styling elements not completely braindead. You want to change the link colors from red to blue? With inline styles - enjoy your grepping. You're bound to forget some of 'em, too.

      Bitching about ad loading times and huge JS libraries? Sure, go ahead.

    • by mea37 (1201159)

      So... when you try to load slashdot, the requests that fetch the content don't get rolling until the request that fetches the ad finishes... and SPDY allows all of the requests to be processed concurrently so the content doesn't have to wait for the ad...

      How is that solving the wrong problem again?

    • by Cyner (267154)
      Don't forget the servers that are overloaded, or have poorly written code. For an easy example, check out HP's bloated website. Each page has relatively little content compared to the load times. It's all in the backend processing, which must be massive, seeing as it takes half a second to several seconds for the server to process requests for even simple pages.

      As the OP said, they're solving the wrong problem. It's not a transport issue, it's design issues. And many websites are rife with horrible design [worsethanfailure.com].
    • by shentino (1139071) on Thursday November 12, 2009 @03:37PM (#30078434)

      CSS can make things shorter and faster if developers just remember to link to it as a static file.

      You can't cache something that changes; CSS and Javascript that get caught up in the on-the-fly generation of dynamic, uncacheable text, despite actually being static, just clog up the tubes.

      In fact, thanks to Slashdot's no-edits-allowed policy, each comment itself is a static, unchangeable snippet of text. Why not cache those?

      Sending only the stuff that changes is usually a good optimization no matter what you're doing.

      CSS and Javascript themselves aren't bad. Failing to link them out as separate files, and thus make them cacheable, is.

  • Everything plays together nicely for "cloud-gaming" startups. This will solve, at least to some extent, one of their hardest problems, for free. Unless Google itself is after the exact same market. They never mentioned how Chrome OS is supposed to provide gaming to users ...
    • Google has never explicitly mentioned it (at least to my knowledge), but I don't think that it is rocket surgery to infer the likely possibilities.

      For basic casual flash stuff, there will almost certainly be flash support (since Adobe seems to at least be promising to get off their ass about reasonably supporting non-wintel platforms). In the longer term, Google's work on making javascript really fast will, when combined with SVG or WebGL, allow flash-level games to be produced with stock web technologies.
  • Application Layer... (Score:3, Interesting)

    by Monkeedude1212 (1560403) on Thursday November 12, 2009 @03:12PM (#30078068) Journal

    Doesn't that mean that both the client and the server have to be running this new application to see the benefits? Essentially, either one or the other is still going to be using HTTP if you don't set it up on both, and it's only as fast as the slowest piece.

    While a great initiative, it could be a while before it actually takes off. To get the rest of the world running on a new protocol will take some time, and there will no doubt be some kinks to work out.

    But if anyone could do it, it'd be Google.

    • A plugin gets it into something like Firefox. Then, as long as a webserver like Apache can accept both kinds of requests (HTTP or SPDY), it shouldn't be that hard: you aren't storing your web pages (static or dynamic) in a different format, so it shouldn't be that much work to add the [Apache] module once it is written.

    • Re: (Score:3, Informative)

      by grmoc (57943)

      Yes, it means that both sides have to speak the protocol.
      That is why we want to engage the community to start to look at this work!

  • Am I the only one imagining a ventriloquist controlling a snarky dummy that counters all the points in the summary with dubious half-truths?

  • by Colin Smith (2679) on Thursday November 12, 2009 @03:19PM (#30078178)

    So which ports are you planning to use for it?

     

  • by ranson (824789) on Thursday November 12, 2009 @03:22PM (#30078210) Homepage Journal
    AOL actually does something similar to this with their TopSpeed technology, and it does work very, very well. It has introduced features like multiplexed persistent connections to the intermediary layer, sending down just object deltas since the last visit (for if-modified-since requests), and applying gzip compression to uncompressed objects on the wire. It's one of the best technologies they've introduced. And, in full disclosure, I was proud to be a part of the team that made it all possible. It's too bad all of this is specific to the AOL software, so I'm glad a name like Google is trying to open up these kinds of features to the general internet.
    • Re: (Score:3, Insightful)

      by bill_mcgonigle (4333) *

      It may be noble in goal, but AOL's implementation makes things hell on sysadmins trying to load-balance AOL users' connections. In a given session, even a page load, I can expect connections from n number of (published) AOL proxies, *and* the user's home broadband IP. It's not possible to correlate them at layer 3, so nasty layer-7 checks get used instead, and AOL users wind up getting shoved into non-redundant systems.

  • by 51M02 (165179)

    I mean, reinventing the wheel, well, why not; this one is old, and let's say we have done all we could with HTTP...

    But why, WHY would you call it by a stupid name like SPDY?!? It's not even an acronym (or is it?).

    It sounds bad, and it's years (a decade?) before it will be well supported... but why not. Wake me when it's done and ready for production.

    I guess they're starting to get bored at Google if they're trying to rewrite HTTP.

  • by RAMMS+EIN (578166) on Thursday November 12, 2009 @03:30PM (#30078336) Homepage Journal

    While we're at it, let's also make processing web pages faster.

    We have a semantic language (HTML) and a language that describes how to present that (CSS), right? This is good, let's keep it that way.

    But things aren't as good as they could be. On the semantic side, we have many elements in the language that don't really convey any semantic information, and a lot of semantics there isn't an element for. On the presentation side, well, suffice it to say that there are a _lot_ of things that cannot be done, and others that can be done, but only with ugly kludges. Meanwhile, processing and rendering HTML and CSS takes a lot of resources.

    Here is my proposal:

      - For the semantics, let's introduce an extensible language. Imagine it as a sort of programming language, where the standard library has elements for common things like paragraphs, hyperlinks, headings, etc. and there are additional libraries which add more specialized elements, e.g. there could be a library for web fora (or blogs, if you prefer), a library for screenshot galleries, etc.

      - For the presentation, let's introduce something that actually supports the features of the presentation medium. For example, for presentation on desktop operating systems, you would have support for things like buttons and checkboxes, fonts, drawing primitives, and events like keypresses and mouse clicks. Again, this should be a modular system, where you can, for example, have a library to implement the look of your website, which you can then re-use in all your pages.

      - Introduce a standard for the distribution of the various modules, to facilitate re-use (no having to download a huge library on every page load).

      - It could be beneficial to define both a textual, human readable form and a binary form that can be efficiently parsed by computers. Combined with a mapping between the two, you can have the best of both worlds: efficient processing by machine, and readable by humans.

      - There needn't actually be separate languages for semantics, presentation and scripting; it can all be done in a single language, thus simplifying things.

    I'd be working on this if my job didn't take so much time and energy, but, as it is, I'm just throwing these ideas out here.

    • by rabtech (223758) on Thursday November 12, 2009 @05:31PM (#30080394) Homepage

      We have a semantic language (HTML) and a language that describes how to present that (CSS), right? This is good, let's keep it that way.

      But things aren't as good as they could be. On the semantic side, we have many elements in the language that don't really convey any semantic information, and a lot of semantics there isn't an element for. On the presentation side, well, suffice it to say that there are a _lot_ of things that cannot be done, and others that can be done, but only with ugly kludges. Meanwhile, processing and rendering HTML and CSS takes a lot of resources.

      The problem is that worrying about semantic vs presentation is something that almost no one gives a s**t about, because it is an artificial division that makes sense for computer science reasons, not human reasons. I don't sit down to make a web page and completely divorce the content vs the layout; the layout gives context and can be just as important as the content itself in terms of a human brain grasping an attempt at communication.

      I know I shouldn't use tables for presentation but I just don't care. They are so simple and easy to visualize in my head, and using them has never caused a noticeable slowdown in my app, caused maintenance headaches, cost me any money, etc. The only downside is listening to architecture astronauts whine about how incorrect it is while they all sit around and circle-jerk about how their pages pass this-or-that validation test.

      In oh so many ways writing a web app is like stepping back into computer GUI v1.0; so much must be manually re-implemented in a different way for every app. Heck, you can't even reliably get the dimensions of an element or the currently computed styles on an element. Lest you think this is mostly IE-vs-everyone else, no browser can define a content region that automatically scrolls its contents within a defined percentage of the parent element's content region; you've gotta emit javascript to dynamically calculate the size. This is double-stupid because browsers already perform this sort of layout logic for things like a textarea that has content that exceeds its bounds. And guess what? This is one of the #1 reasons people want to use overflow:auto. Don't waste screen real-estate showing scrollbars if they aren't necessary, but don't force me to hard-code height and width because then I can't scale to the user's screen resolution.

      This kind of crap is so frustrating and wastes MILLIONS upon MILLIONS of man-hours year after year, yet we can't even get the major browser vendors to agree to HTMLv5 and what little bits (though very useful) it brings to the table. So please spare me the semantic vs presentation argument. If just a few people gave a s**t and stopped stroking their own egos on these bulls**t committees and actually tried to solve the problems that developers and designers deal with every day then they wouldn't have to worry about forcing everyone to adopt their standard (IPv6), the desire to adopt it would come naturally.

  • A novel idea (Score:4, Interesting)

    by DaveV1.0 (203135) on Thursday November 12, 2009 @03:49PM (#30078640) Journal

    How about we don't use HTTP/HTML for things they were not designed or ever intended to do? You know, that "right tool for the right job" thing.

    • "right tool for the right job"

      Fair enough.

      What's the right tool to deliver to your users rich applications which are

      • accessible from (almost) any computer, anywhere
      • doesn't require the user to install software that isn't already pre-installed on most computers
      • works on all architectures and operating systems
      • can be updated for everybody by the application provider without invading peoples' machines

      I don't know of any tool other than HTTP/HTML. I can imagine something with ssh and X forwarding, but Windows boxes don't come with X preinstalled.

  • by unix1 (1667411) on Thursday November 12, 2009 @04:06PM (#30078898)

    It's not all rosy, as the short documentation page explains. While they are trying to maximize throughput and minimize latency, they are hurting other areas. Two obvious downsides I see are:

    1. The server would now have to keep holding the connection open to the client throughout the client's session, and also keep the associated resources in memory. While this may not be a problem for Google and their seemingly limitless processing power, a Joe Webmaster will see his web server load average increase significantly. HTTP servers usually give you control over this with the keep-alive time and max connections/children settings. If the server is now required to keep connections open, it would spell more hardware for many, if not most, websites;

    2. Requiring compression seems silly to me. It would increase the processing power required on the web server (see above), and also on the client - think underpowered portable devices. It needs to stay optional: if the client and server both support and prefer compression, then they should do it; if not, let them be. Also keep in mind that all images, video and other multimedia are already compressed, so adding compression to those items would increase the server/client load _and_ the payload.
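    The already-compressed-media point above is easy to verify: gzipping data that is already compressed (JPEGs, video, anything random-looking) costs CPU and usually *grows* the payload slightly, while repetitive text shrinks dramatically. A quick sketch (random bytes standing in for an image payload):

```python
# Double-compression check: gzip helps text, hurts already-compressed data.
import gzip
import os

already_compressed = os.urandom(100_000)   # stands in for a JPEG/video payload
text = b"<p>hello world</p>\n" * 5_000     # repetitive HTML-like text

print(len(gzip.compress(already_compressed)))  # slightly LARGER than 100000
print(len(gzip.compress(text)))                # a tiny fraction of the input
```

This is why compression needs to be negotiated per content type rather than required across the board.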

    • Re: (Score:3, Informative)

      by grmoc (57943)

      As a server implementor, I can tell you that I'd rather have 1 heavily used connection than 20 (and that is a LOW estimate for the number of connections many sites open!). Server efficiency was one of my goals for the protocol, in fact!

      When we're talking about requiring compression, we're talking about compression of the headers only.

      In any case, as someone who operates servers... I can't tell you how many times I've been angry at having to turn off compression for *EVERYONE* because some browser advert

  • by kriegsman (55737) on Thursday November 12, 2009 @04:13PM (#30079022) Homepage
    HTTP-NG ( http://www.w3.org/Protocols/HTTP-NG/ [w3.org] ) was researched, designed, and even, yes, implemented to solve the same problems that Google's "new" SPDY is attacking -- in 1999, ten years ago.

    The good news is that SPDY seems to build on the SMUX ( http://www.w3.org/TR/WD-mux [w3.org] ) and MUX protocols that were designed as part of the HTTP-NG effort, so at least we're not reinventing the wheel. Now we have to decide what color to paint it.

    Next up: immediate support in Firefox, WebKit, and Apache -- and deafening silence from IE and IIS.

  • by jddj (1085169) on Thursday November 12, 2009 @04:24PM (#30079198) Journal
    If you got my new Droid to be able to dial hands-free and sync with Outlook, that would help me out a bunch more than faster HTTP. No, really...
  • SPDY (Score:3, Insightful)

    by rgviza (1303161) on Thursday November 12, 2009 @05:09PM (#30080046)

    Cache control 4tw. A lot of the user perception problems SPDY is trying to solve can be solved by utilizing already-existing protocol features and the farms of cache servers at ISPs for your active content.

    The latency difference between a user going all the way to your server to grab your content vs. going to the ISP's cache server to get it can be huge when you consider a separate connection for each part of the page. Coupled with the decreased response time (checking a cache file and responding with a 304 is a lot easier on your server than pulling your content out of a database, formatting it, and sending the entire page), it makes a huge end-user perception difference. It also frees resources on your web server faster, because you are sending 20-30 bytes instead of x KB. The faster your server can get rid of that connection, the better.

    Doing this reduces the load on your server (especially connection utilization) and your bandwidth utilization, speeds up the download of your page (since it avoids the need to leave the ISP for your content download), and generally makes you a better network citizen.

    Of course this requires developers that understand the protocol.

    What I want to know is: will ISP cache servers have this implemented?
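    The 304 mechanics described above can be sketched in a few lines (hypothetical handler and names, not any real framework): the server tags the resource with an ETag, and a revalidation request carrying If-None-Match gets back a handful of header bytes instead of the whole body.

```python
# Sketch of conditional GET: full response on first visit, 304 on revalidation.
import hashlib

CSS_BODY = b"body { margin: 0; } /* ...imagine 29 KB of styles... */"
ETAG = '"%s"' % hashlib.md5(CSS_BODY).hexdigest()

def serve_css(if_none_match=None):
    """Return (status, headers, body) for a static stylesheet."""
    if if_none_match == ETAG:
        # Unchanged: a few dozen bytes on the wire instead of the whole file.
        return 304, {"ETag": ETAG}, b""
    return 200, {"ETag": ETAG, "Cache-Control": "max-age=604800"}, CSS_BODY

status, headers, body = serve_css()               # first visit: full body
status2, _, body2 = serve_css(headers["ETag"])    # revalidation: empty body
print(status, status2, len(body2))  # -> 200 304 0
```

Intermediate caches (like the ISP farms mentioned above) can answer these revalidations without ever touching the origin server, which is where most of the perceived speedup comes from.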

    • Re: (Score:3, Insightful)

      by Casandro (751346)

      Absolutely. The 304 response won't work anymore under that new proposal. And 304 already saves a lot, as most external references are static.

      There is only one exception: advertisements. One can only assume that Google wants this to effectively push advertisements on the user.

  • by Anonymous Coward on Thursday November 12, 2009 @05:16PM (#30080174)

    If they really wanted a faster web, they would have minimized the protocol name. Taking out vowels isn't enough.

    The protocol should be renamed to just 's'.

    That's three fewer bytes per request.

    I can haz goolge internship?
