SPDY Not As Speedy As Hyped?

Freshly Exhumed writes "Akamai's Guy Podjarny reveals after testing: SPDY is different than HTTP in many ways, but its primary value comes from being able to multiplex many requests/responses from client to server over a single (or few) TCP connections. Previous benchmarks tout great benefits, ranging from making pages load 2x faster to making mobile sites 23% faster using SPDY and HTTPS than over clear HTTP. However, when testing real world sites I did not see any such gains. In fact, my tests showed SPDY is only marginally faster than HTTPS and is slower than HTTP."
  • Not so fast...YET (Score:1, Insightful)

    by jampola ( 1994582 )
    You're not going to see the potential of SPDY before we have environments (browsers, CPU and your internet speed) that can take full advantage of it. Only in the most recent version of Firefox did we see SPDY support.

    What's the moral of all this? It's early days yet. Let's talk in a few years when the rest of us catch up.
    • by bakuun ( 976228 ) on Sunday June 17, 2012 @10:17AM (#40351263)

      You're not going to see the potential of SPDY before we have environments (browsers, CPU and your internet speed) that can take full advantage of it. Only in the most recent version of Firefox did we see SPDY support.

      SPDY does not depend at all on CPUs or your "internet speed". It does depend on the browser (with both Firefox and Chrome supporting SPDY now) and, critically, the server. That last point is also why the article author did not see much of a speedup - most content providers don't support SPDY yet. Going to non-SPDY servers and believing that doing so will evaluate SPDY for you is absolutely ridiculous.

      • Re:Not so fast...YET (Score:5, Informative)

        by Anonymous Coward on Sunday June 17, 2012 @10:31AM (#40351317)
        I came here to say something like "read the article, the guy is from Akamai and would know to only use servers that serve SPDY - such as many of the Google properties". But then, I read the fine article (blog) and realized the guy doesn't know enough to do that and just used "the top 500 sites" - which means a very large chunk of them don't know what SPDY is and he only used it between himself and his proxy. Great test that was. So your point is well taken. Bogus test means bogus results.
        • Re:Not so fast...YET (Score:5, Interesting)

          by Anonymous Coward on Sunday June 17, 2012 @10:50AM (#40351397)

          No, he proxied the sites through a SPDY proxy, but that's not really a good way to test. Most sites shard domains to improve performance. Unfortunately, that basically destroys SPDY's pipelining advantage. The author tried to compensate by doing simple sub-domain coalescing (which he admits significantly increased SPDY performance where it applied) but that's just too coarse of an approach, as sharding is rarely ever restricted to subdomains. He also doesn't understand how browser parsing and loading works, and specifically that script execution and resource loads can be deferred (which is another key to keeping SPDY's pipeline full).

          Basically, his tests showed that SPDY isn't magic faster dust. You will need to modify your site a bit if you want to really see advantages from it. And, you may see a minor performance degradation on an HTTP site that's unoptimized or heavily optimized in a way that doesn't play well with SPDY. However, if your site is optimized correctly, SPDY is still a big win over HTTP/HTTPS.

        • by Anonymous Coward

          You are not a very good reader then. He put a proxy in front of the web sites that implements the protocol.

        • From what I read about SPDY it doesn't sound like a big benefit justifying a change in protocols.

          HTTP pipeline support has been around for over a decade now, and while I'm unaware of the extent of its usage, it produced real benefits back when I was using it with Firefox and Apache about a decade ago. SPDY does pipelines; well, so did HTTP: OPTIONALLY.
          I've read arguments about the benefits of pipelines, been there, done that - it is not new. When you have a scalable solution you CAN'T run everything from 1-2 pip

          • by rekoil ( 168689 ) on Sunday June 17, 2012 @01:38PM (#40352679)

            1. HTTP Pipeline support proved very difficult to implement reliably; so much so that Opera was the only major browser to turn it on. It can be enabled in Chrome and Firefox but expect glitches. By all accounts SPDY's framing structure is far easier to reliably implement.

            2. With SPDY, it's not just the content that's compressed but the HTTP headers themselves. When you look at the size of a lot of URLs and cookies that get passed back and forth, that's not an insignificant amount of data. And since it's text, it compresses quite well (a rough sketch at the end of this comment shows the idea).

            3. SSL is required for SPDY because the capability is negotiated in a TLS extension. Many people would argue that if this gets more sites to use SSL by default, that's a Good Thing.

            4. If you're running SPDY, the practice of "spreading" site content across multiple hostnames, which improves performance with normal HTTP sites, actually works against you, since the browser still has to open a new TCP connection for each hostname. This is an implementation issue more than an issue with the protocol itself; I expect web developers to adjust their sites accordingly once client adoption rates increase.

            5. The biggest gains you can get from SPDY, which few have implemented, are the server push and hint capabilities; these allow the server to send an inline resource to a browser before the client knows it needs it (i.e. before the HTML or CSS is processed by the browser).

            But as someone else has pointed out, the author's test isn't really valid, as he didn't test directly against sites that support SPDY natively; he went through a proxy.

            The website I work for is supporting SPDY, and the gains we've seen are pretty close to the ~20-25% benchmarks reported by others. As many have pointed out, this author's methodology is way broken. I'd recommend testing against sites that are known to support SPDY (the best-known are Google and Twitter), with the capability enabled and then disabled (you can set this in Firefox's about:config; Chrome requires a command line launch with --use-spdy=false to do this, though).
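            To give a rough sense of point 2, here is a small Node/TypeScript sketch that gzips a made-up request header block. This is plain gzip, not SPDY's actual zlib-with-shared-dictionary header framing, and it understates the win, since SPDY also avoids re-sending near-identical headers on every request over the connection.

```typescript
// Rough illustration of how well HTTP headers compress (not SPDY's real codec).
// The request header block below is invented for the example.
import { gzipSync } from "zlib";

const headers = [
  "GET /images/logo.png HTTP/1.1",
  "Host: www.example.com",
  "User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5",
  "Accept: image/png,image/*;q=0.8,*/*;q=0.5",
  "Accept-Language: en-us,en;q=0.5",
  "Accept-Encoding: gzip, deflate",
  "Referer: http://www.example.com/index.html",
  "Cookie: session=abcdef0123456789; tracking=ZYXWVUTSRQPONMLKJIHGFEDCBA",
].join("\r\n");

const raw = Buffer.from(headers, "ascii");
const zipped = gzipSync(raw);
console.log(`raw: ${raw.length} bytes, gzipped: ${zipped.length} bytes`);
```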

            • 1. Buggy proxies should not get so much power that we can't progress HTTP

              2. OPTIONAL compressed headers would be nice. One reason HTTP and HTML got big is that they are simple and easy to use -- obviously we could replace them with binary formats for huge technical gains, as competing technologies did. I do not know SPDY, but I would oppose whole-protocol compression if that is what they are doing; most traffic is already compressed images or video. I don't want the extra work of having to encode headers into gzi

          • by Bengie ( 1121981 )
            HTTP/1.1 keep-alive does not allow more than one request to be outstanding at a time: it lets you re-use the connection for the next request, but not issue multiple requests concurrently. Even with pipelining, responses must come back strictly in order, so one slow response blocks everything behind it. (A minimal sketch of the keep-alive case follows this comment.)

            HTTP1.1 is actually quite horrible for modern networks/computers. Choosing between HTTP1.1 and 1.0 is choosing between a giant douche and a turd sandwich. UDP won't happen because of the lack of congestion avoidance, which should be left up to the OS, not the app.

            The biggest issue with HTTP is it does not allow m
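            A minimal Node/TypeScript sketch of that keep-alive behaviour (example.com is just a placeholder host): both requests re-use one TCP connection, but only one request is ever in flight at a time, which is exactly the limitation SPDY's multiplexing removes.

```typescript
// Two sequential GETs over a single kept-alive HTTP/1.1 connection.
// maxSockets: 1 forces re-use of one TCP connection; the second request
// is only sent after the first response has been fully drained.
import * as http from "http";

const agent = new http.Agent({ keepAlive: true, maxSockets: 1 });

function get(path: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    http
      .get({ host: "example.com", path, agent }, (res) => {
        res.resume(); // drain the body so the socket can be re-used
        res.on("end", () => {
          console.log(`${path}: ${res.statusCode} in ${Date.now() - start} ms`);
          resolve();
        });
      })
      .on("error", reject);
  });
}

get("/").then(() => get("/")).then(() => agent.destroy());
```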
        • It is approximately valid. He put a bandwidth simulator between him and his proxy.

          His comment about the average site requiring ~30 different SPDY connections seems excessive though. I suspect this is why he's seeing such bad results. Maybe he is assuming no benefit from removing domain sharding, which providers would likely do if they rolled out SPDY.

      • SPDY does not depend at all on CPUs or your "internet speed".

        CPU is mostly irrelevant, true, but the characteristics of one's network connection are likely very relevant to the sort of speedup one should expect to see from SPDY. My understanding is that the greater the latency between client and server the greater the benefit the client is likely to experience from using SPDY.

        • Mmm, TFA doesn't make it clear whether the latency figures they are using are one-way latency or round-trip time. Either way, though, 200ms is a pretty low value to use as the highest-latency test case. UK to Australia is about 300ms round-trip time, datacenter to datacenter, and a satellite connection or a crappy cellular connection can easily have over a second's latency.

    • Personally I'm a little sceptical about the testing methodology:

      For a proxy, I used the Cotendo CDN (recently acquired by Akamai). Cotendo was one of the early adopters of SPDY, has production-grade SPDY support and high performance servers. Cotendo was used in three modes – HTTP, HTTPS and SPDY (meaning HTTPS+SPDY).

      Surely that means that the proxy would have to download at least some of the pages from non-SPDY servers on demand, rendering this entire thing suspect? He said that he ran 5 replicates, but no attempt is offered at explaining why SPDY should be slower than plain ol' HTTP, only why it might not be faster. I could be wrong, but it looks like [ietf.org] the protocol is more concise on average even for a single-page request. Maybe Cotendo just has a bad impl

      • Re: (Score:2, Informative)

        by Anonymous Coward

        The reason SPDY doesn't help much is that your average web site now requires connections to a dozen other domains (each with only one or two requests), meaning that consolidating the requests to the main domain into a single connection just isn't that beneficial.

        dom

      • by butlerm ( 3112 )

        no attempt is offered at explaining why SPDY should be slower than plain ol' HTTP, only why it might not be faster

        The author was testing SPDY over TLS, which has a significant connection startup overhead. That probably explains all the performance degradation relative to regular HTTP.

        In fact, if SPDY support were ubiquitous tomorrow, I would be surprised to see SPDY+TLS used for third party ad serving for this very reason. SPDY+TLS isn't likely to be used for that without _major_ standards modifications

        • Ah, that explains it. In retrospect it seems rather silly to complain that an encrypted protocol is not more efficient than an unencrypted one.
        • by rekoil ( 168689 )

          In fact, if SPDY support were ubiquitous tomorrow, I would be surprised to see SPDY+TLS used for third party ad serving for this very reason.

          Good news here: Google's DoubleClick and AdSense ads are served over SPDY today. In fact, I'm not aware of any Google properties that don't use SPDY, since they're all routed through the same GFE (Google FrontEnd) proxy farms.

  • SPDY optimizes on a per-domain basis. In an extreme case where every resource is hosted on a different domain, SPDY doesn’t help.

    So the whole CDN thing has to be redone for SPDY to deliver on the promises?

    • by Skinkie ( 815924 )
      Like the CDN had to be redone for the opendns stuff (non-geographic queries). And the HTTPS stuff had to be redone because Google thought FalseStart was a great idea :)
      • Re: (Score:2, Informative)

        by Anonymous Coward

        This is not at all true. There were numerous stories about how Google worked with CDNs to ensure compatibility with OpenDNS. Here's one example from last year:
        http://arstechnica.com/tech-policy/2011/08/opendns-and-google-working-with-cdns-on-dns-speedup/ [arstechnica.com]

        As for SSL False Start, the problem was a handful of SSL terminators that violated the spec. Unfortunately, most of the manufacturers of those devices showed no interest in making a trivial fix to their implementations, and the few that did make the fix didn

    • by dmomo ( 256005 )

      Maybe that work could be given to browsers?

      If a page used these resources:

      http://static1.js.cdn.com/resource1.js [cdn.com]
      http://static2.js.cdn.com/resource2.js [cdn.com]
      http://static3.css.cdn.com/resource3.css [cdn.com]

      perhaps the browser could automatically look for some file, the way it does with favicon:

      http://static1.js.cdn.com/spdy.txt [cdn.com]

      That would pass back a file containing a spdy domain to use?
      spdy.cdn.com

      It would mean more requests on a single page load, I guess. But they could be cached.
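      A purely hypothetical sketch of that idea in TypeScript (no browser or server implements anything like it today; the spdy.txt file name and hosts come from the example above, and global fetch assumes Node 18+ or a browser):

```typescript
// Hypothetical client-side coalescing via a "spdy.txt" discovery file.
// For each asset host, fetch /spdy.txt once; if it names a coalescing
// domain, rewrite asset URLs so they all hit that one SPDY host.
async function coalescingHost(assetHost: string): Promise<string | null> {
  try {
    const res = await fetch(`http://${assetHost}/spdy.txt`);
    if (!res.ok) return null;
    const host = (await res.text()).trim();
    return host.length > 0 ? host : null; // e.g. "spdy.cdn.com"
  } catch {
    return null; // no spdy.txt: keep using the original shard
  }
}

async function rewriteUrl(url: string): Promise<string> {
  const u = new URL(url);
  const target = await coalescingHost(u.host);
  if (target) u.host = target;
  return u.toString();
}

// rewriteUrl("http://static1.js.cdn.com/resource1.js")
//   -> "http://spdy.cdn.com/resource1.js" if static1.js.cdn.com serves spdy.txt
```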

      • by rekoil ( 168689 )

        That only works if all of those hostnames resolve to the same IP addresses. The main optimization in SPDY is the elimination of the need to make multiple TCP connections simultaneously, but all of those resources must live on the same server. If the resources have different hostnames, you might be able to detect hostnames that point to the same IP and then interleave those, but I don't know if the current implementations do that yet.

        Most CDNs, however, return different IPs for nearly every query, and web de
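        For what it's worth, the grouping itself is easy to sketch (placeholder hostnames; not something current browsers are known to do, and, as noted above, CDNs that rotate IPs per query would defeat it):

```typescript
// Group hostnames by their first resolved IPv4 address. Hosts that share an
// address could, in principle, be coalesced onto one multiplexed connection.
import { resolve4 } from "dns/promises";

async function groupByAddress(hosts: string[]): Promise<Map<string, string[]>> {
  const groups = new Map<string, string[]>();
  for (const host of hosts) {
    try {
      const [addr] = await resolve4(host); // first A record only; CDNs may rotate
      const bucket = groups.get(addr) ?? [];
      bucket.push(host);
      groups.set(addr, bucket);
    } catch {
      // unresolvable host: leave it on its own connection
    }
  }
  return groups;
}

groupByAddress(["static1.cdn.example", "static2.cdn.example", "www.example.com"])
  .then((g) => console.log(g));
```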

  • Amazon Silk (Score:5, Interesting)

    by yelvington ( 8169 ) on Sunday June 17, 2012 @10:08AM (#40351221) Homepage

    Amazon's Silk browser, used in the Kindle Fire, implements SPDY and a reverse proxy cache in the Amazon cloud that is supposedly capable of predictive retrieval and caching. While it occasionally is faster than HTTP, on the whole it doesn't seem to mesh well with my browsing habits and I've disabled the so-called "accelerated page loading" on my KF. Judging from comments in the Amazon forums, my experience is not unusual.

  • by Anonymous Coward

    ...is the usual problem you have when a single self-centred company which believes itself to be awash with superstars tries to take over a standard protocol: some ideas are seductive, but much is questionable.

    There are some good ideas in SPDY which have been proposed elsewhere and which could be introduced into the next version of HTTP: not repeating headers, header compression. There are some questionable ideas (consider cost and implementation complexity at both ends - some problems are illustrated by the rules r

    • IMHO, there is a huge issue with this test. On one hand its author claims: "However, when testing real world sites I did not see any such gains. In fact, my tests showed SPDY is only marginally faster than HTTPS and is slower than HTTP." But on the other hand, near the bottom, it states: "This means SPDY doesn’t make a material difference for page load times, and more specifically does not offset the price of switching to SSL." SSL? Wait a minute, doc ... there is something wrong with the flux cap
      • by Anonymous Coward

        SPDY mandates TLS. Go read the spec.

      • by rekoil ( 168689 ) on Sunday June 17, 2012 @01:49PM (#40352749)

        SPDY as implemented requires SSL, since the protocol capability is negotiated by a TLS extension on port 443. There's no spec for negotiating SPDY on a standard HTTP port - it would only work if the capability was assumed on both sides before the connection (for example, URLs that start with spdy:// instead of http://, which would connect to a different TCP port on the server).

  • by MobyDisk ( 75490 ) on Sunday June 17, 2012 @10:16AM (#40351255) Homepage

    The average is meaningless without the raw data. Suppose it averaged 5%: is that because all sites were 5% faster, or because one site was 500% faster and the others were 2% faster? The former would mean that SPDY is mostly useless. The latter would mean that SPDY is immensely useful, but just not in all cases.
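    Toy numbers (invented here) illustrate the point: a single outlier dominates the mean, so the mean alone says little about the typical site.

```typescript
// Two hypothetical distributions of per-site speedup (in percent).
const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
const median = (xs: number[]) =>
  [...xs].sort((a, b) => a - b)[Math.floor(xs.length / 2)];

const uniform = Array(500).fill(5);          // every site 5% faster
const skewed = [500, ...Array(499).fill(2)]; // one site 500% faster, the rest 2%

console.log(mean(uniform), median(uniform)); // 5, 5
console.log(mean(skewed), median(skewed));   // ~3, 2
```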

  • by Skapare ( 16644 ) on Sunday June 17, 2012 @10:25AM (#40351293) Homepage

    Web browsing experiences are being slowed down by advertising. But the issue isn't the images that advertising loads. Instead, it is a combination of the extra time needed to load Javascript from advertisers (whether it is to spy on you or just to rotate ads around), and programming defects in that Javascript (doesn't play well with others). Browsers have to stop and wait for scripts to finish loading before allowing everything to run or even be rendered. You can have a page freeze in a blank state when some advertiser's Javascript request isn't connecting or loading.

    The solution SHOULD be that browsers DISALLOW loading Javascript (or any script as the case may be) from more than one different hostname per page (e.g. the page's own hostname not being counted against the limit of one). This would remain flexible enough to reference scripts from a different server, or even an advertising provider, without allowing it to get excessive. Browsers should also limit the time allowed for loading other scripts, though this may be complicated for scripts calling functions in other scripts. Perhaps the rule should be that, even within a page, scripts are not allowed to call scripts loaded from a different hostname, except that scripts from the page's own hostname may call scripts from the one allowed external hostname.

    I've also noticed quite a few web sites that just get totally stuck and refuse to render anything at all when they can't finish a connection for some script URL. Site scripting programmers need to get a better handle on error conditions. The advertising companies seem to be using too many unqualified programmers.

    • by Anonymous Coward

      If you have a better suggestion on how to finance all the free content out there, I'd like to hear it.

      The current "everything is free" model on the web is only possible because of that advertising.

      • by Anonymous Coward

        If you had actually read the post you'd have seen that he considers advertising acceptable in itself.

      • by Anonymous Coward

        Life was fine before advertising blew up on the web; more of the content was generated by people rather than content farms and shills, so the signal-to-noise ratio was much better.

      • The current "everything is free" model on the web is only possible because of that advertising.

        It's not free, it's paid for by advertising. Hidden costs don't mean "no costs". Here's an idea: I pay my ISP for the downstream bandwidth. What if some of that money went to the owners of the servers I peruse, automatically, AKA a built-in micropayment? Dunno about you, but if that would double the price of my interwebs, it would still be fucking cheap interwebs. It wouldn't be a way to earn money with traffic either

      • Serve all the scripts from your own website rather than from loads of different third party websites? The actual bandwidth requirements of a script file are not that great, most of the time is spent trying to contact all the different websites.

      • by Anonymous Coward

        Web was around before it became the polluted cesspit it is today. We don't need a billion images per page, of which 1 is related to the page you're looking at. We don't need instantly playing media divs for TV adverts as soon as you land on a page. We don't need a billion wanker bloggers and old media portals trying to be l337 and bring in ad dollars. If they all disappeared right now, nothing of value would be lost.

    • The solution SHOULD be that browsers DISALLOW loading Javascript (or any script as the case may be) from more than one different hostname per page (e.g. the page's own hostname not being counted against the limit of one). This would remain flexible enough to reference scripts from a different server, or even an advertising provider, without allowing it to get excessive.

      This would cause problems for lots of sites that use javascript libraries, like jQuery et al, and load them from the canonical library sources. Of course, sites could host their libraries themselves, but that would require them to keep them up to date, and it would keep them from being cached and reused across different sites.

      Plus, your suggestion basically boils down to "let's break sites of clueless web developers so they'll get a clue". I don't object to that on principle, but it's not practical. Wh

      • by jfengel ( 409917 )

        Of course, sites could host their libraries themselves, but that would require them to keep them up to date, and it would keep them from being cached and reused across different sites.

        I haven't done much work with jQuery, but in my experience library updates nearly always require somebody to at least examine the code. Something nearly always breaks.

        It seems to me that maintaining jQuery must be an extraordinary responsibility. Any change to its semantics risks breaking thousands or millions of sites. And the point seems to be being able to make such changes unannounced, without any action required from those who rely on it.

    • by PIBM ( 588930 )

      Actually, a lot of sites refuse to load the content if the advertisement has not yet been displayed, to prevent showing the content for free; they prefer that you reload the page and fetch the advertisement again rather than risk losing that display.

      • by tepples ( 727027 )
        Then why can't they include the advertisement as text in the page, rather than as a script that inserts an Adobe Flash Player object into the page?
        • I just encountered this today, in fact. A blog I read uses Disqus to manage its comments. For some reason, on this particular blog, the comments wouldn't display until I selectively turned off blocking for certain items on the page that are part of the default list in Adblock Plus. Even then I could see the comments but not add my own. I had to create exceptions for a few additional items before the Disqus section would actually let me post.
        • It's probably because Disqus is using web-tracking-like technology (third-party cross-site scripts?) to do the comment display, and some of the Adblock Plus filter subscriptions block those things whenever possible. I had to set Ghostery to have an exception for Disqus if I wanted to read the comments on any page with the Disqus commenting system on it.

          • It must be something added in a newer version of Disqus, then, because other blogs that use it seem to work fine w/ no exceptions. It's only this particular one that's jacked up. volokh.com, btw, is the blog.
    • If you take the time to analyse a modern page, you'll see that the ads usually represent a relatively small chunk.

      What really slows the pages down is over-reliance on javascript frameworks. Pages that use both jQuery and another framework such as scriptaculous are not uncommon. What's worse is that the developers often use these libraries for trivial effects, stuff that can easily be done directly in javascript. Then most pages have scripts for at least 3-4 social media sites (Facebook Like box, Twitter counte

      • For the sites that I've worked on, it's mostly the social media plugin crap that slows them down, with Facebook being one of the worst offenders. As such, I take great pains to load the social media stuff only after the entire page has been rendered, which helps responsiveness greatly. Multiple CSS files = negligible. Multiple javascript frameworks = negligible.

        What's worse is that the developers often use these libraries for trivial effects, stuff that can easily be done directly in javascript.

        Doing it directly in JavaScript would take valuable programmer time to code and test for proper HTML5 browsers and for each of the past few versions of IE.

        Then most pages have scripts for at least 3-4 social media sites (Facebook Like box, Twitter counter, G+ counter, etc.), live tweet box

        How would you recommend accomplishing the visibility that sharing of pages by users of social media sites gives without incurring the delay of loading those scripts?

        5-8 different css files

        One approach is to have one huge CSS file that covers all browsers and all parts of the site, forcing users to download a lot of data that does not pertain to a particular page. Another is to

        • by Anonymous Coward

          How would you recommend accomplishing the visibility that sharing of pages by users of social media sites gives without incurring the delay of loading those scripts?

          Right, because users can't possibly make posts on their favourite faggot site by themselves without the help of some ridiculous time-wasting script.

          • Right, because users can't possibly make posts on their favourite faggot site by themselves without the help of some ridiculous time-wasting script.

            This is correct, as I understand it. Unless users can "make posts on their favourite faggot site" with one click, users won't feel inclined to "make posts on their favourite faggot site" at all, and your web site will not reap the benefit from exposure in "posts on their favourite faggot site".

    • by DavidTC ( 10147 )

      I don't know about your solution, but you're dead right about the problem.

      I work as a fairly-technically-inclined web developer, and I can't count how many times I've seen other developers say 'I can't figure out how to make my Joomla or Wordpress or whatever give out pages faster, perhaps I need some sort of caching or something'. I check, and it's a ten second load time.

      And then I look at it in Firebug and point out their HTML arrives in half a second, and their images arrive in three seconds and are ca

      • There is nothing stopping you from doing that, David. We do that and it works quite well for the off-site content.

    • by fa2k ( 881632 )

      It has gotten a bit better for me over the last 6 months. I used to see "Waiting for google-analytics.com" for 1-2 seconds all the time, but it doesn't happen as much now. It could be because I moved to a different country and Google has more servers there.

    • Instead, it is a combination of the extra time needed to load Javascript from advertisers (whether it is to spy on you or just to rotate ads around), and programming defects in that Javascript (doesn't play well with others). Browsers have to stop and wait for scripts to finish loading before allowing everything to run or even be rendered. You can have a page freeze in a blank state when some advertiser's Javascript request isn't connecting or loading.

      It is also a result of plain stupidity of some (major) ad server operating companies - here in Austria, we've heard things like "we try to disable caching for served media [images, swf etc.]" and we've actually seen images, animations, 200KB+ flash being loaded at every request because of this. Needless to say, the slow document.write() method of displaying ads is also still the norm ... If the W3C had any sense, it would have come up with an <AD> tag that worked like a restricted IFRAME (with mandat

    • Another solution that could (should?) be implemented is: Use a locally-hosted script to load any externally-hosted scripts. Your page will load and render faster because everything required to render the page is loading from your own server, and you still get the 3rd-party-hosted shit^H^H^H^Hscripts on the page, just *after* it has rendered, so your users can, you know, actually start using the damned site.
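      A minimal sketch of that approach (the third-party URL is a placeholder): the page's own assets render first, and the external script is only injected once the window's load event has fired, so a slow third party can no longer block rendering.

```typescript
// Inject third-party scripts only after the page has finished rendering.
function loadDeferred(src: string): void {
  const s = document.createElement("script");
  s.src = src;
  s.async = true; // don't block parsing if this runs earlier than expected
  document.body.appendChild(s);
}

window.addEventListener("load", () => {
  // Hypothetical third-party widget/ad script.
  loadDeferred("https://ads.example.net/widget.js");
});
```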
  • by gstrickler ( 920733 ) on Sunday June 17, 2012 @10:35AM (#40351331)

    I used a Chrome browser as a client, and proxied the sites through Cotendo to control whether SPDY is used. Note that all the tests – whether HTTP, HTTPS or SPDY – were proxied through Cotendo, to ensure we’re comparing apples to apples.

    Since they all ran through the same proxy, it might be the limiting factor. We would need to see tests that bypass the proxy to determine whether his results have any meaning beyond that specific proxy.

    Likewise, you have to ask the question, do all of the 500 sites he tested support SPDY?

  • by Anonymous Coward

    For the umpteenth time, if the main advantage of SPDY is multiplexing multiple streams over a single socket, why not use BEEP http://en.wikipedia.org/wiki/BEEP [wikipedia.org]?

  • SPDY solves *a* problem, but not *the* problem. The root of the problem today is that loading a simple web page requires 20 or more separate connections: images, ad networks, tracking systems, social network links, 3rd party comment systems, javascript libraries, css, etc. Somehow all of that content needs to be coalesced into fewer connections.

    • Somehow all of that content needs to be coalesced into fewer connections.

      We did use fewer connections before CDNs, CDN-hosted JS libraries and cookie-less content domains became the norm and before web developers were advised by PageSpeed and other "authorities" to put content on multiple hosts to facilitate parallel access by browsers. Yes, it was a stupid idea to fix a minor implementation problem of browsers on the web server side.
      Also, what contributes most to web page slowdown is ads and tracking code, both areas dominated by Google - who pretends to be somehow interested

      • by rekoil ( 168689 )

        The good news there is that connections to Google's ad networks DO run over SPDY now, assuming a compatible browser.

    • by ls671 ( 1122017 )

      SPDY solves *a* problem, but not *the* problem. The root of the problem today is that loading a simple web page requires 20 or more separate connections: images, ad networks, tracking systems, social network links, 3rd party comment systems, javascript libraries, css, etc. Somehow all of that content needs to be coalesced into fewer connections.

      You are wrong; use netstat and a modern browser to test it out. With servers configured to do HTTP keep-alive, most browsers open a maximum of 2 connections to one server, keep them up, and send everything through those persistent connections.

      The browser also needs to open at least one connection to any third party server without regards for the protocol used.

      I see an average of 5 to 20 second timeouts on most sites. Then the server waits for other requests on the SAME connection. Use telnet and

      • by ls671 ( 1122017 )

        The SPDY whitepaper suggests a keep-alive timeout of 500ms, while most sites use 5 to 20 seconds?

        No wonder why they think they are going to be so fast ;-)

        Seriously though, I still see the point of this protocol, but it might have been over-hyped a bit. Established things are hard to change unless revolutionary gains are expected.

        http://www.chromium.org/spdy/spdy-whitepaper [chromium.org]

        Single request per connection. Because HTTP can only fetch one resource at a time (HTTP pipelining helps, but still enforces only a FI

  • by Lisandro ( 799651 ) on Sunday June 17, 2012 @11:40AM (#40351697)

    Shouldn't we be working on adopting SCTP [wikipedia.org] instead?

    • It's more effort; that's why people use SPDY.
      Changing all the gear to support SCTP is hard, especially those pesky hp/cisco/juniper/you-name-it proprietary network hardware boxes.

      That's also why Google is sometimes pushing for open source software on top of network hardware IMO.

      • While that's true, a standard (and popular library) for SCTP-over-UDP could be created. At most, you'd need a single well-known UDP port for inbound SCTP-over-UDP (9989 is suggested by the Internet draft [ietf.org] for this). SCTP ports would be used to distinguish between separate SCTP-using services on the server. I'm sure that the existing Linux and BSD SCTP stacks could support this with little effort. Firewalls that only permit HTTP/HTTPS would block this variant, but it would work well enough through NATs, es

    • SCTP's a neat protocol. But by itself, it won't help to accelerate HTTP. SPDY is designed to quickly load all of the different resources you'll need to render a web page with much lower latency. Standard HTTP over SCTP wouldn't have as much of an impact on latency.
  • HTTP and HTTPS are fast enough; it's the web servers / content generation (and ads) that limit the user experience and make web pages load slowly, followed by low bandwidth in some areas. If you really want to fix old protocols that actually need fixing, go look at SMTP first.
  • by _Bunny ( 90075 ) on Sunday June 17, 2012 @01:07PM (#40352481) Homepage

    As someone whose job it is to work on things like this, there are a few things that must be pointed out.

    - SPDY runs over SSL. There isn't an unencrypted version -- note that SPDY was in fact faster than HTTPS.

    - Many of the tricks used today to speed up page delivery, such as domain sharding, actually hurt SPDY's performance. SPDY's main benefit is that it opens up a single TCP connection and channelizes requests for assets inside that connection. Forcing the browser to establish a lot of TCP connections defeats this entirely, and the overhead of spinning up an SSL connection is very high. (And again, it should be noted that SPDY *WAS* faster, even if just a little bit, than standard HTTPS.)

    There are other features in SPDY that today remain largely untapped, such as a server hinting to a client that it knows it'll need some content ahead of time -- giving the client something to do while it'd normally be idle waiting for the server to respond while it's generating the HTML it requested. (Large DB query, or whatever.)

    Web engineers are a clever and smart bunch. While it looks like there's not a lot of gain in rethinking HTTP 1.1 today, given the years of organic growth we've had and the time spent optimizing an older protocol, as new technologies come along that take advantage of the new foundation, this will change. Give it time.

    To the folks complaining that this guy doesn't know what he's doing, uh, he's a Chief Product Architect at Akamai. Yes he does. The folks at Akamai know more about web delivery than just about anyone.

    - Bunny
