Chrome Google Technology

Google Cuts Chrome Page Load Times In Half w/ SPDY

An anonymous reader writes "It appears that Google has quietly implemented its SPDY HTTP replacement not only in Chrome (well, we knew that) but also on its websites, which were recently updated with SPDY features that address some of HTTP's latency issues. The result? Google says page load times were cut roughly in half. SPDY will be open source, so there is some hope that other browser makers will add SPDY support as well."
  • Now you can refresh your Facebook page twice as fast!
  • is the "extend" part. Let's hope they aren't tempted to make it incompatible.

    • Yeah. Because SPDY is a protocol that's worse than HTTP, and not open to boot... :p

    • They're releasing the implementation under a BSD license. And unlike that other giant software company back in the old days, they don't have an overwhelming market share, so they can't just ram new standards down everyone's throats. If they make it incompatible, then nobody will use it. So it looks pretty good so far.

      • And unlike that other giant software company back in the old days, they don't have an overwhelming market share

        Google's position in search is pretty overwhelming, I'd say (except maybe in China). If Chrome is suddenly twice as fast on Google websites as all other browsers, that gives the combination of Chrome + Google websites a huge advantage.

        If they make it incompatible, then nobody will use it.

        It is incompatible with all other websites and all other browsers - it only works with the combination of Chrome+Google's own websites.

        If this were not open source, it would be totally evil. But is it fully open source right now, though, both client and server? There appear

        • It is incompatible with all other websites and all other browsers - it only works with the combination of Chrome+Google's own websites.

          This is flat out wrong. Just this morning I used Chrome to connect to a test server that was running a node.js implementation of SPDY [github.com]. I verified the connection using chrome://net-internals. It worked well.

          There is nothing in chrome that prevents this from working on other domains/websites.

          There is nothing stopping anyone from implementing their own server.

          End of FUD.
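
          For anyone who wants to reproduce that test, here is a minimal sketch (assuming the node-spdy package, whose createServer mirrors Node's built-in https module, and a self-signed key/cert pair generated for local testing):

            // spdy-hello.ts -- minimal SPDY test server (a sketch, not production code)
            import * as fs from "fs";
            import * as spdy from "spdy"; // npm install spdy

            const options = {
              key: fs.readFileSync("server.key"),   // self-signed key for testing
              cert: fs.readFileSync("server.crt"),  // matching certificate
            };

            spdy.createServer(options, (req, res) => {
              res.writeHead(200, { "content-type": "text/plain" });
              res.end("hello over SPDY\n");
            }).listen(8443);

          Point Chrome at https://localhost:8443/ and check chrome://net-internals to confirm the session was negotiated as SPDY rather than plain HTTPS.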

    • Re:And this... (Score:4, Interesting)

      by Zocalo ( 252965 ) on Monday April 11, 2011 @11:27AM (#35781962) Homepage
      SPDY is (according to Google) going to be released as open source, so I'm hopeful that its development will be more akin to Mozilla's tack with its "Do Not Track" header: add support to your own browser, then throw it out there and see if the market is interested. IE9 already supports the "Do Not Track" header and there are also signs of some interest from websites, so that's looking good.

      What would be even better, though, especially given that SPDY is really an extension to HTTP akin to using gzip-compressed data, is if Google were also to write up and submit an RFC, or whatever mechanism the W3C uses to get HTTP extensions added to the standard, such as it is. SPDY seems very much like a win for both content providers and content consumers to me, so once the details are out there I'd like to think we'd see fairly rapid adoption by the browsers over the next several months, followed by backend support from Apache, IIS et al. with their next major releases.
      • Not releasing it to the W3C isn't as big a deal as it used to be; they'll just throw it into HTML and people will have to guess whether or not it's supported.

      • by Lennie ( 16154 )

        Actually, there are people working on submitting it to the IETF as an RFC; it just takes time.

        It just uses an extra header to get it started and switch to the SPDY protocol. Not only that, but the HTTP extension for making the switch has already gone through most of the IETF process.

        I wouldn't be surprised if the biggest hold-up is actually WebSockets, because the new 'HTML5' WebSockets were found to be insecure, at least in combination with transparent caching proxies that didn't implement HTTP properly. Java and Flash

      • Re:And this... (Score:4, Insightful)

        by AmberBlackCat ( 829689 ) on Monday April 11, 2011 @12:24PM (#35782716)
        With Honeycomb, doesn't Google have a history of saying things will be released as open source, and then not releasing the source?
        • With Honeycomb, doesn't Google have a history of saying things will be released as open source, and then not releasing the source?

          Troll much? No such history exists. You made that up.

          Google has repeatedly stated that when Honeycomb is completed (thought to be the merge of the 2.x and 3.x code bases) it will be released to the public for general consumption.

          How does Google saying something entirely different from what you present, and a long, long history of them following through on exactly what they said (with Android itself in complete disagreement with your delusion), imply they have a history of doing what they've never done?

  • by DarkOx ( 621550 ) on Monday April 11, 2011 @11:07AM (#35781694) Journal

    :-)

  • by jonescb ( 1888008 ) on Monday April 11, 2011 @11:08AM (#35781708)

    When can we expect the SPDY protocol to be implemented in HTTP servers like Apache or Nginx? Does anything need to be done? The summary only talks about adding support to the client portions.

  • by fermion ( 181285 ) on Monday April 11, 2011 @11:08AM (#35781710) Homepage Journal
    My pages load plenty fast. What I notice is that when the page goes to Google Analytics, the load process stops while waiting for the server. There was a time when pages would load partial content and then go for the ads. Now, many pull the ads and analytics first. This would be good if the ad servers were fast, but they seem to be getting slower. Since Google serves so many ads, it seems within its power to make the web faster by making the ads faster. Perhaps, like MS, they want the web slow for all other browsers, so Chrome seems that much faster.
    • Re: (Score:3, Informative)

      Comment removed based on user account deletion
    • That's just bad code. The author of the page should at the very least be putting the call to Google Analytics below the footer. Preferably, they'd make it a callback to document.ready().
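
      Or, without jQuery, wait for the window's load event and inject the script afterwards. A rough sketch (the analytics URL here is only illustrative), which guarantees a slow ad or analytics server can never block rendering:

        // defer-analytics.ts -- load a third-party script only after the page has rendered
        window.addEventListener("load", () => {
          const s = document.createElement("script");
          s.async = true;
          s.src = "https://www.google-analytics.com/ga.js"; // illustrative URL
          document.body.appendChild(s);
        });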

    • AdBlock Plus.
      Not sure how you're on Slashdot and don't yet know about it.
    • by pz ( 113803 )

      I also notice much of my latency is due to DNS lookup. I've never understood why DNS lookups aren't locally cached by default. Even a cache with a 10-minute timeout would speed things up a lot (and, really, how often does any web site change their IP address?).

      • You can run a local caching DNS server.

      • by gpuk ( 712102 ) on Monday April 11, 2011 @11:56AM (#35782352)

        Normally, DNS lookups *are* locally cached by default... if you're on Windows, try running ipconfig /displaydns

        The problem might be with your upstream resolver(s). If you use your ISP's resolvers, maybe they are overloaded? Or if you are using a non-ISP upstream cache, maybe it's sparsely populated? Either of these would make initial lookups slow.

        You could give Google's public resolvers a try and see if they improve your lookup times: 8.8.8.8 and 8.8.4.4
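
        If you want to measure before switching, here is a quick sketch using Node's dns.promises Resolver (the resolver addresses and hostname are just examples):

          // dns-bench.ts -- rough comparison of lookup latency against two resolvers
          import { promises as dns } from "dns";

          async function timeLookup(server: string, host: string): Promise<number> {
            const r = new dns.Resolver();
            r.setServers([server]);            // e.g. your ISP's resolver vs. 8.8.8.8
            const start = Date.now();
            await r.resolve4(host);
            return Date.now() - start;
          }

          (async () => {
            for (const server of ["8.8.8.8", "8.8.4.4"]) {
              console.log(server, await timeLookup(server, "slashdot.org"), "ms");
            }
          })();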

      • Win7 already does this. The DNS Client service performs caching of queries.

      • I have noticed the same thing in my house. The DNS server that we get from Qwest can take as much as 10-15 seconds to resolve queries sometimes (not all the time, but when it happens it's a major pain). I have dnsmasq running on my Ubuntu box (which will use OpenDNS to resolve cache misses instead of the Qwest DNS server). This makes cache misses faster than they would have been anyway, and cache hits take 0ms. I have switched over all of the computers in the house (both Windows and Linux boxe

        • by gpuk ( 712102 )

          >Does anyone here know if there is a way to tell it to keep all entries for at least some length of time (e.g. 1 day) before considering the info stale?

          AFAIK, you can only override TTL values if you use a broken or modified resolver. Also, it is generally a bad idea to second-guess the domain owner's intentions (e.g. upping the TTL will probably screw up their load-balancing/maintenance assumptions).
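
          If you really wanted a floor on TTLs, the less-broken place to do it is a small forwarding cache of your own rather than the stub resolver. A conceptual sketch (the names and the one-day floor are made up, and as said above, clamping TTLs upward second-guesses the domain owner):

            // ttl-floor-cache.ts -- in-process DNS cache that keeps entries at least MIN_TTL_MS
            import { promises as dns } from "dns";

            const MIN_TTL_MS = 24 * 60 * 60 * 1000;   // one-day floor (illustrative)
            const cache = new Map<string, { addrs: string[]; expires: number }>();
            const resolver = new dns.Resolver();

            export async function lookup(host: string): Promise<string[]> {
              const hit = cache.get(host);
              if (hit && hit.expires > Date.now()) return hit.addrs;   // cache hit, 0 ms

              const recs = await resolver.resolve4(host, { ttl: true });
              const ttlMs = Math.max(MIN_TTL_MS, Math.min(...recs.map(r => r.ttl)) * 1000);
              const addrs = recs.map(r => r.address);
              cache.set(host, { addrs, expires: Date.now() + ttlMs });
              return addrs;
            }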

      • "I've never understood why DNS lookups aren't locally cached by default."

        As far as I know, they are. Are you using a web proxy? Because if so, unless you are also using a proxy autoconfig ("PAC") javascript file, you are implicitly delegating DNS lookups to that proxy. That may be why you're seeing some DNS lookup latency.

        "(and, really, how often does any web site change their IP address?)."

        Among other things, websites hosted by CDNs (Akamai, etc.) give different IP addresses to different clients (or even t

      • Google "namebench". It's a Google Code project; I think the current version is 1.3.1. Download it, run it, and be amazed. You can speed those DNS lookups up significantly if you only know which servers to use. Don't expect it to be a ten-second operation, though. Plan on spending ten minutes, minimum, with it. More reasonably, give it thirty minutes. Use namebench's recommendation, or modify it as you see fit. My own latency with the default ISP server was out of this world. In fact, I couldn't find a

    • What I notice is that when the page goes to Google Analytics, the load process stops while waiting for the server. There was a time when pages would load partial content and then go for the ads. Now, many pull the ads and analytics first. This would be good if the ad servers were fast, but they seem to be getting slower.

      Right. I've commented on this before. If a page loads slowly today, it's usually for one of three reasons.

      1. Page loading is stalling due to ads, trackers, and web bugs.
      2. The page is pulling in vast amounts of CSS or JavaScript code from some third-party source, and that source is slow.
      3. The primary site is building the pages in a content-management system, and the database is a bottleneck.

      The SPDY approach won't fix any of these problems. Client side last-mile bandwidth usually isn't the bottleneck. It

    • by caluml ( 551744 )
      Doesn't everyone do this?

      echo "127.0.0.2 google-analytics.com www.google-analytics.com" | sudo tee -a /etc/hosts

      My rule is that if a script from an external site slows down my page loading long enough that I can see it saying "Waiting for ..." in the status bar, then that site gets added to my hosts file.

      I'm 120 miles away from my powered off laptop, otherwise I'd post you the worst offenders here.

    • Interesting. I see nothing in the technical documentation that would make it significantly faster than modern HTTP.

      It has exactly the same overhead for establishing connections, since it uses TCP, and HTTP has no additional connection overhead. It compresses HTTP headers, which might help by a few percent on small requests, but not much. It allows multiple requests within the same TCP connection, but then so does HTTP 1.1 with pipelining.

      I only see sources of more bugs. The short version is that SPDY is HT

      • And what we gain is compressed HTTP headers and a requirement to support multiple requests, and multiple streams.

        Actually, you don't even gain compressed headers. Since SPDY (which is a made-up initialism just to be "cute") uses SSL to bypass proxies, you could already compress with deflate or whatever else people want to standardize on.

        I would much rather like to see anything based on SCTP.

        And why hasn't SCTP caught on? Because, like SPDY, it doesn't solve an actual problem people have.

      • by Lennie ( 16154 )

        1. SPDY uses one TCP connection instead of many, which makes it faster. The time to establish a new TCP connection is a lot of overhead when you visit a web page that has many small elements on it. It also has a big advantage when using HTTPS, again because you use just the one TCP connection (no extra SSL connections to establish):

        http://www.chromium.org/spdy/spdy-whitepaper [chromium.org]

        2. It is backwards compatible and doesn't need any operating system changes (like SCTP would) or different firewall settings and so on.

        Yes, I k

        • 1. SPDY uses one TCP connection instead of many, which makes it faster. The time to establish a new TCP connection is a lot of overhead when you visit a web page that has many small elements on it. It also has a big advantage when using HTTPS, again because you use just the one TCP connection (no extra SSL connections to establish):

          This is what is called pipelining in HTTP 1.1. It is not required by HTTP, but even 10 years ago, when I last checked, only Microsoft didn't support it. They do add the ability to

      • You mean pipelining is actually used somewhere? Most browsers (except Chrome) have support for it, but it is disabled by default.

        Compressed headers are a major thing. Have you noticed all the bloat browsers send by default? Heck, I tried to disable them in Firefox -- you can alter but not completely remove the junk; you need something like Privoxy. This includes headers that are harmful (Accept-Language[3]) or utterly useless (Accept-Charset[2], Accept[1]). All that crap has no chance to fit into a single pa
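
        The potential saving is easy to eyeball with a one-off deflate of a typical request header block (a rough sketch; the header values are just representative, and SPDY's shared dictionary would do somewhat better than plain deflate):

          // header-compress.ts -- how well do typical HTTP request headers compress?
          import { deflateSync } from "zlib";

          const headers =
            "GET /index.html HTTP/1.1\r\n" +
            "Host: www.example.com\r\n" +
            "User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:2.0) Gecko/20100101 Firefox/4.0\r\n" +
            "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n" +
            "Accept-Language: en-us,en;q=0.5\r\n" +
            "Accept-Encoding: gzip, deflate\r\n" +
            "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\n" +
            "Connection: keep-alive\r\n\r\n";

          const deflated = deflateSync(Buffer.from(headers));
          console.log(`raw ${headers.length} bytes, deflated ${deflated.length} bytes`);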

        • by Lennie ( 16154 )

          Pipelining isn't used because it is incompatible with some older servers and I think some proxy servers.

          Also, TCP/SSL/TLS handshakes are per connection, not per request. SPDY needs just the one connection, while a modern browser visiting some sites can open 20+ connections. That is a lot of unneeded overhead. Although you only get to 20+ when you use something like 5 different domains, so with SPDY you would still be using 5 connections, one per domain.

          (the numbers are a bit made up, but gets the point acro
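
          To put rough numbers on it, in the same made-up spirit: with a 100 ms RTT, each new HTTPS connection costs about 1 RTT for the TCP handshake plus roughly 2 RTTs for a full TLS handshake before the first byte of the request goes out, i.e. around 300 ms. Twenty connections pay that tax twenty times over (much of it overlaps, so the wall-clock hit is smaller, but none of it is free), while one SPDY connection per domain pays it once per domain.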

      • I can see how they can cut a round-trip with server-push. You request foo.html, and besides giving you that file, it also sends img1.jpg and img2.jpg, that you would likely request anyway.

        And doing this over TCP is quite nice. In the ideal world, SCTP would be better, but NAT routers en masse would need an upgrade, so it probably wouldn't happen until the whole world uses IPv6.

  • Well... creating standards or changing standards is bad. You break things, stuff stops working, people get angry.
    There are always ways to cut corners and optimize everything without touching the standard.
    Then you see that the standard is really inappropriate for the problem, and that a simple change to the standard could unlock freedom and speed.
    So you create a new protocol. But you find that such a protocol breaks a lot of routers and proxies (very old, buggy, crappy, undocumented proxies).
    Then you see

    • Where the hell do you come up with this?

      Standards are always being tweaked/changed, but it's the companies seeking to make those changes doing so in an ethical way that matters most. While a lot of companies do a lot of unethical shit, the standards (and requirements of standards) do tend to speak for themselves. If this were some "Chrome-only" feature they wouldn't document it and/or open source it. If this were truly anti-competitive you would have heard the world shouting about it by now.

    • by Lennie ( 16154 )

      I don't know why you are rambling on about this.

      The people working on SPDY have submitted, or are working on, at least 3 different drafts of backwards-compatible changes to the IETF (SSL/TLS False Start, an HTTP header to upgrade to SPDY, and SPDY itself) that I know of.

  • Whenever someone starts a project with that in mind: it means shit!
    Why wasn't it open source from the start?

    Look what happened to symbian...
    (Well, maybe I should rtfa but I'm already killing precious time by reading slashdot so that wouldn't be nice..)
  • thank you, finally (Score:4, Informative)

    by hesaigo999ca ( 786966 ) on Monday April 11, 2011 @11:22AM (#35781888) Homepage Journal

    Finally a story for us geeks that want geek stories... I guess CmdrTaco is slowly waking up...

  • Let's say, for example, that Microsoft had: 1) Taken an existing web standard and made proprietary changes to it (promising to make the changes open source, "in the future"), and 2) Implemented those changes in IE and MSN/Bing/Live.Com, making those sites faster when using IE. Wouldn't everyone here be screaming "Anti-trust" and demanding an SEC investigation?
    • by h4rr4r ( 612664 )

      Because one company has done things like that before, the other has not. Actions speak louder than words.

    • No. Not everyone. Somebody would be saying "If Google had done that you wouldn't be complaining".

    • That would not at all be what Google did. Google has published full documentation of the current version of the SPDY protocol. (It's linked on the announcement page, and looks like something that will be given to the W3C once it's done.)

      It is however still a draft because the protocol is not finished yet.

    • You'll not find very many organizations on the web that can equal MS's proven track record for evil. Sure, there's SCO, but how many others can you name? To date, all of the evil attributed to Google is so much "what if" and "they could have done better" and "look, they have money, they MUST be evil!"

      Google does alright with "dumb" and "stupid" now and then, but as has already been pointed out by other posters, they just suck at "evil".

  • by erroneus ( 253617 ) on Monday April 11, 2011 @11:39AM (#35782112) Homepage

    I am thinking Google did not learn the lesson from the SCSI acronym. Initially, the creator of SCSI wanted it to be pronounced "Sexy" and we ended up saying "Skuzzy." Obviously, Google wants this to be pronounced as "Speedy" but I can easily see this becoming "Spoddy."

    And I have looked around a bit... I still can't see where SPDY is defined anywhere as to what the letters mean. I can imagine a lot of meanings... except for the Y. (Standard Protocol not aDopted Yet?)

  • SPDY - server push (Score:5, Insightful)

    by ThePhilips ( 752041 ) on Monday April 11, 2011 @11:51AM (#35782272) Homepage Journal

    My favorite part of SPDY is server push: now advertisers can clog my internet channel and hog the browser with ads long before AdBlock kicks in. Or a hacked site could host malware and load it onto potential victims' hard drives in parallel with normal surfing. Imagination is the only limit to how it can go wrong.

    For security reasons, I think SPDY is a bad thing.

    And I'm personally not bothered with 1-2s loading times.

    P.S. The Chrome guys would have done better to invest more time in the bookmarks, to make them useful. They could start by integrating Chrome with Google Bookmarks.

    • by Tridus ( 79566 )

      I'm not sure there's really a security issue here. From the spec [chromium.org], all it's doing is letting the server send back multiple items in response to a request for one item. So if you request X, and the server knows you're going to need Y too, it can send both at once.

      It's not like the server can connect to you out of nowhere and start firing stuff at you. And since a server can already send malicious content back in response to a request, the security aspect isn't really any worse than it already is.

      • by ThePhilips ( 752041 ) on Monday April 11, 2011 @01:49PM (#35783646) Homepage Journal

        I'm not sure there's really a security issue here. From the spec [chromium.org], all it's doing is letting the server send back multiple items in response to a request for one item. So if you request X, and the server knows you're going to need Y too, it can send both at once.

        If the server "knows" that I "need" 2MB of Flash ads to see the 40K HTML page, it will send them to me. IOW the browser has no control over what the server sends; it can only discard content that has already wasted my bandwidth and isn't going to be displayed.

        It's not like the server can connect to you out of nowhere and start firing stuff at you. And since a server can already send malicious content back in response to a request, the security aspect isn't really any worse than it already is.

        With HTTP, there is no way the server can send me something my browser didn't ask for. It can send something bad *instead* of what my browser asked for, but only once and with a user-visible effect. With SPDY, the server can send me loads of junk *silently*, while still appearing to serve legit content.

        For static content, it is even worse: the first time I visit the page it is cached, and after that the cached copy is used. With SPDY, bandwidth is always going to be wasted transferring the static content. Yes, I need it to display the page, but no, I already have a local cached copy.

    • by pavon ( 30274 ) on Monday April 11, 2011 @12:38PM (#35782864)

      Yeah, and by their own testing [chromium.org], the push features only resulted in an additional -1% to +3% improvement (yes it made things slightly slower in one case). The additional complexity added by those features is not justified by their benefits. They should just drop them.

  • SPDY clarifications (Score:5, Informative)

    by mbelshe ( 462412 ) on Monday April 11, 2011 @11:56AM (#35782366)

    Thanks for all the kind words on SPDY; I wish the magazine authors would ask before putting their own results in the titles!

    Regarding standards, we're still experimenting (sorry that protocol changes take so long!). You can't build a new protocol without measuring, and we're doing just that - measuring very carefully.

    Note that we aren't ignoring the standards bodies. We have presented this information to the IETF, and we got a lot of great feedback during the process. When the protocol is ready for an RFC, we'll submit one - but it's not ready yet.

    Here are the IETF presentations on SPDY:
          http://www.ietf.org/proceedings/80/slides/tsvarea-0.pdf [ietf.org]
    and
          https://www.tools.ietf.org/agenda/80/slides/httpbis-7.pdf [ietf.org]

    I've also answered a few similar questions to this here: http://hackerne.ws/item?id=2420201 [hackerne.ws]

    We love help! If you're passionate about protocols and want to lend implementation help, please hop onto spdy-dev@google.com. Several independent implementations have already cropped up, and the feedback continues to be really great.

    • by 0xABADC0DA ( 867955 ) on Monday April 11, 2011 @01:09PM (#35783166)

      Since we've got it direct from the horse's mouth -

      - Why server push? Nobody seems to think it's a good idea and it makes things more complicated for everybody involved, including proxies. What is the rationale for this feature?

      - Why did you name it "SPDY" to show "how compression can help improve speed" when SSL already supports compression?

      - In the performance measurements in the whitepaper, what HTTP client did you use, and what connection-multiplexing method was used, if any? How were the results for HTTP obtained? For instance, the whitepaper says an HTTP client using 4 connections per domain "would take roughly 5 RTs" to fetch 20 resources, implying the numbers come from theory rather than measurement. Were situations like 10 small requests finishing in the time it takes to transfer 1 large request taken into account? (i.e. in practice multiple requests can be made without increasing total page load time)

      - The main supposed benefit seems to be requesting more than one resource at once. Then a request could stall the connection while being processed (i.e. a DoubleClick ad) and hold up everything after it, so then you add multiplexing, priorities, canceling requests, and all that complication. Why not just send a list of resources and have the server respond back in whatever order they are available? This provides the same bandwidth and latency with superior fault handling (if the connection closes, the browser has only one resource partially transferred instead of several).

      - The FAQ kind of reluctantly admits that HTTP pipelining basically has the same benefits in theory as SPDY, except when a resource takes a while and holds up the remaining ones. So what benefit would SPDY have over just fixing pipelining so that the server can respond in whatever order it chooses? The only real problem with HTTP pipelining is fixed ordering and bad implementations (i.e. IIS), correct?

      Barring really good explanations it looks to me like SPDY is just very complicated and increases speed basically as a side-effect of solving other imaginary problems.

      • by grmoc ( 57943 )

        I'm one of the other people who works on SPDY.

        server push: We have some debates about this internally, but it seems the market is deciding that push is important -- e.g. image inlining into the HTML. Server push allows you to accomplish the same thing, but with the benefit of having the pushed items known as individual resources by name, and thus cacheable. I believe it may be particularly beneficial for people on high-RTT devices like mobile. If you look at data just about anywhere, you can see that RTT is the real

    • by fwarren ( 579763 )

      The one thing I appreciate is that you're not selling this as "Chrome makes the web faster" the way Microsoft did back in the '90s, creating their own extensions and trying to sell everybody on how much better IE5 with IIS was than Netscape with Apache.

      You have added it to Chrome and to Google sites. Some may notice a speed difference, some may not. Meanwhile the protocol, such as it is, is free to use and implement without anyone having to reverse engineer it. Which is a pretty decent earnest money down to c

  • by 140Mandak262Jamuna ( 970587 ) on Monday April 11, 2011 @12:00PM (#35782422) Journal

    As part of the "Let's make the web faster" initiative, we are experimenting with alternative protocols to help reduce the latency of web pages.

    This smells very close to the "embrace, extend, and extinguish" technique of Microsoft. Unless Google follows through by keeping the technology open and working to get it certified into the next version of the standards, this could become the first step in Google becoming the next Microsoft.

  • BAD (Score:3, Insightful)

    by improfane ( 855034 ) * on Monday April 11, 2011 @12:02PM (#35782440) Journal

    I cannot be the only person to think this is not a good thing. So now we'll have sites that have to run both technologies, with regular HTTP/TCP as a fallback, and we fragment the web browser ecosystem even more.

    Thanks Google. As much as I want HTTP to be faster, I think this way is a bit degrading to the web... There was no standards process. It will probably now be rushed as a standard.

    Basically it's a fake way of making Google look faster, so you either adopt Google's tech or fall behind. It reeks of a Microsoft strategic move to me. Can't optimize the browser? Change the browser and make an incompatible change! Well done...

    • Re:BAD (Score:5, Informative)

      by mbelshe ( 462412 ) on Monday April 11, 2011 @12:25PM (#35782728)

      Actually, you should read the spec as to how it is implemented. The TLS/NPN mechanism for switching to SPDY has no "fallback".

      And there is no intent to rush - heck - we've been working on it for over a year. You think that's rushed? If you're an engineer, I hope you'll appreciate that protocol changes are hard. You can't use pure lab data (although we started out with lab data, of course). Now we need real world data to really figure it out. We changed it in a way which *nobody noticed* for about 4 months. So, I don't think we hurt the web at all, but we are accomplishing the goals of learning how to build a better protocol.

      Seriously, if you have a better way to figure out new protocols, we'd love to hear them at spdy-dev@google.com, and if you want to lend a hand implementing, that is even better!

    • Amdahl's Argument -- always optimize the slowest part first.

      Browsers and JavaScript have come ridiculously far in the last couple of years. It would seem that they're no longer the bottleneck, but now the protocol is. I actually doubt that you can squeeze much more performance out of the browser at this point. I have no problem with optimizing the protocol now, as long as the results of that process are open.
    • How is it a fake way? SPDY really is faster. Also there is nothing proprietary about this so your Microsoft comparison is wrong too.

    • I see it more as a typical Google thing, with massive open betas.
      But I can't see how this would make the browser incompatible. It still serves HTTP(S), and Google's servers work as they always have (seen from the perspective of a competing browser). And the specs are out there, and documented.

    • When Chrome first came out, a friend of mine jumped on it right away and exclaimed at how fast and awesome it was. I tried it and didn't care for the simplistic UI and so stayed with Firefox. Months later, he looks over my shoulder and exclaims, "WTF! how did you load that (gadget blog website) so fast?" So we compared in real time, his Chrome loading the same page against my Firefox, and it took his Chrome ages to load this page, whereas mine was displayed nearly instantly.

      The answer was simple: NoScrip
    • I cannot be the only person to think this is not a good thing. [...] As much as I want HTTP to be faster, I think this way is a bit degrading to the web... There was no standards process. It will probably now be rushed as a standard

      Actually, this is an example of how standardisation should work. They thought about a good way to fix a problem (including consideration of past problems); chose an approach that was upwardly compatible and harmless to older clients; released a working implementation and source code.

      How could anyone do better? This is the classic path to "rough consensus and running code" [wikipedia.org].

      There are plenty of criticisms appropriate to SPDY and Google in general, but how they have proceeded in this case is not one of them.

  • I notice they're on version 10 since March 27...does that mean this is included in their latest build?

    http://www.srware.net/en/software_srware_iron_download.php [srware.net]

    • Who knows what's included in their build; they only provide outdated code as archive fragments on RapidShare... Whether it's because of malice or ineptitude, I wouldn't run any software those guys produce. Publishing a project on GitHub or a similar service is not rocket science.
      I once tried asking what the deal was on their forums, but my post didn't make it through moderation.

  • I thought BEEP was a great concept that seemed to die on the vine. When I saw "multiplexing," I figured Google had resurrected the protocol but it looks like BEEP just doesn't go far enough.

    As for those 6 socket connections per client connection... wow! Never knew those kinds of resources were being devoured for every network connection.

  • SPDY also allows the server to communicate with a client without a client request

    Could this possibly be used to attack client computers?

    • The client has to initiate the connection with the server; these features just allow the server to send back more data than was requested. If there were a bug in how the client processed the data then it could be exploited as a security risk. However, that is already true today of the data the client is requesting. This extra data will be in the same format as if the client had directly requested it (except with the X-Associated-Content header added), so the same code should be used to parse it.

      Like I mentioned

  • by Millennium ( 2451 ) on Monday April 11, 2011 @12:48PM (#35782956)

    This is an interesting wrapper around MIME, but it is not HTTP in any way. Honestly, it's more like an "embrace-and-extend" in the Microsoft sense. It is backward-incompatible in ways that are inconsistent with its stated philosophy of bandwidth savings (and in ways which break the most basic HTTP semantics), and it throws gratuitous binary into the wrapper while using FUD to justify its presence (notably the specter of "security problems"). This is sad, because it actually does contain some much-needed improvements to HTTP, such as TLS-only, compression-by-default, and header compression. But it's not an extension, because that implies backward-compatibility: it's a replacement, and one with certain other design decisions that are questionable at best.

    Some questions in particular:

    1) Why break the request-line and status-line? This is the major compatibility-killer, and it adds to the bandwidth consumed by the protocol in ways directly counter to the concept of saving bandwidth. To call requests and responses "virtually unchanged" from HTTP is disingenuous when the most basic syntactic requirements for both are completely different: they are in fact completely different, not virtually unchanged; what you've left unchanged is MIME.
    2) Why binary for the wrappers? The specter of security issues via incorrect parsing is real as far as it goes, but by no means insurmountable, and the bandwidth savings are minimal at best. In exchange, the spec becomes considerably harder to debug (and thus to implement) and to extend further as needed by future requirements.
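
    For concreteness, the "binary wrapper" in question is an 8-byte frame header, and a decoder for it is only a few lines. A sketch based on my reading of the draft-2 framing (treat the exact field widths as approximate):

      // spdy-frame-header.ts -- decode the 8-byte SPDY frame header (draft-2 layout sketch)
      interface FrameHeader {
        control: boolean;    // top bit: control frame vs. data frame
        version?: number;    // control frames only (15 bits)
        type?: number;       // control frames only (16 bits)
        streamId?: number;   // data frames only (31 bits)
        flags: number;       // 8 bits
        length: number;      // 24-bit payload length
      }

      function parseFrameHeader(buf: Buffer): FrameHeader {
        const first = buf.readUInt32BE(0);
        const control = (first & 0x80000000) !== 0;
        const flags = buf.readUInt8(4);
        const length = buf.readUIntBE(5, 3);
        if (control) {
          return { control, version: (first >>> 16) & 0x7fff, type: first & 0xffff, flags, length };
        }
        return { control, streamId: first & 0x7fffffff, flags, length };
      }

    Whether that is "considerably harder to debug" than a text protocol is a judgment call, but it does mean you can no longer poke at it with telnet.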
