
S+M Vs. SPDY: Microsoft and Google Battle Over HTTP 2.0

MrSeb writes "HTTP, the protocol that underpins almost every inch of the world wide web, is about to make the jump from version 1.1 to 2.0 after some 13 years of stagnation. For a long time it looked like Google's experimental SPDY protocol would be the only viable option for the Internet Engineering Task Force to ratify as HTTP 2.0, but now out of left field comes a competing proposal from Microsoft. Lumbered with the truly awful name of HTTP Speed+Mobility, or HTTP S+M for short, Microsoft's vision of HTTP 2.0 is mostly very similar to SPDY, but with additional features that cater toward apps and mobile devices. 'The HTTP Speed+Mobility proposal starts from both the Google SPDY protocol and the work the industry has done around WebSockets,' says Jean Paoli from the Microsoft Interoperability team. Basically, the S+M proposal looks like it's less brute-force than SPDY: Where server push, encryption, and compression are all built into SPDY, Microsoft, citing low-powered devices and metered connections, wants them to be optional extensions. Judging by the speed at which the internet (and the internet of things) is developing, I think MS's extensible, flexible solution has its merits."

  • Oh god... (Score:5, Funny)

    by Theophany ( 2519296 ) on Thursday March 29, 2012 @02:53AM (#39506117)
    S&M - really??
  • I wonder if all the options of all the extensions will be part of the spec, or if this is another embrace, extend, extinguish.

    • by Sneeka2 ( 782894 ) on Thursday March 29, 2012 @03:06AM (#39506207)

      I really like that SPDY insists on SSL-secured connections. This is what we should be moving towards, and having it forced upon us in the next HTTP revision is a great step. But of course Microsoft tries to be backwards compatible, as it always does.

      I say SPDY for modern devices, and HTTP 1.1 for the foreseeable future for low-powered devices. It still works fine, you know? And by the time HTTP 1.1 is retired, there will be no more devices so underpowered that they can't establish a SPDY connection. For the love of god, drop legacy when you get the chance!

      • by fsterman ( 519061 ) on Thursday March 29, 2012 @03:31AM (#39506353) Homepage

        Correct me if I am wrong, but encryption prevents caching. That is why Facebook and Google used to encrypt only user/password authentication. Forcing every connection to have encryption would prevent all caching as well...

        • by Sneeka2 ( 782894 ) on Thursday March 29, 2012 @03:43AM (#39506429)

          Correct me if I am wrong, but encryption prevents caching.

          Well, you are wrong. At least as a general statement. :)

          It prevents caching by proxies, but it works fine with regular client/server HTTP caching.

          • by arth1 ( 260657 ) on Thursday March 29, 2012 @05:57AM (#39507103) Homepage Journal

            It prevents caching by proxies, but it works fine with regular client/server HTTP caching.

            The first is a huge problem. Having a transparent caching proxy easily saves a medium-sized company 20-40% of its bandwidth and increases the perceived speed for users.
            Enforced SSL also decreases speed because of the need to encrypt on one end and decrypt on the other. Slow devices pay the heaviest penalty.

            My first test of SPDY showed that it slowed down page load by a factor of 2, and consumed a heck of a lot more resources too. Yes, this was on a slow machine. But guess what? Slower machines haven't been banned from accessing the web, and I don't think they should be.

            I am not against SSL, but against the use of it for the sake of using it. It's the lazy way out.

            No, please let me have HTTP/1.0 and 1.1, also without SSL. Because sometimes the solution creates as many problems as it solves.

            Hopefully Microsoft's suggestion is a bit more sensible. But I doubt it. They want controlled, slow obsolescence, so customers can be forced to buy new versions of Windows Server, Office and what have you.

          • It prevents caching by proxies

            I think that's what he meant; he was just being brief. Encryption prevents shared caches, which are usually the whole point of caching.

        • by chrylis ( 262281 )

          Firefox, Chrome, Opera, and IE will all cache HTTPS content if permitted by Cache-Control headers. Of course, HTTPS does prevent transparent proxy caches.

          • Of course, HTTPS does prevent transparent proxy caches.

            Hooray! My ISP is really shitty at this anyway. Tired of them creating problems for me because they want to save some bandwidth.
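
          A minimal sketch of the Cache-Control point above, using Python's stdlib; the handler and port are illustrative, and the socket would need wrapping with the ssl module to serve actual HTTPS:

            from http.server import BaseHTTPRequestHandler, HTTPServer

            class CachingHandler(BaseHTTPRequestHandler):
                def do_GET(self):
                    body = b"<html>static content</html>"
                    self.send_response(200)
                    # "private" restricts reuse to the browser's own cache (shared
                    # proxies can't see inside TLS anyway); max-age allows reuse
                    # for an hour without revalidation.
                    self.send_header("Cache-Control", "private, max-age=3600")
                    self.send_header("Content-Type", "text/html")
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)

            HTTPServer(("localhost", 8080), CachingHandler).serve_forever()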

        • Correct me if I am wrong, but encryption prevents caching.

          You're wrong. It prevents downstream proxies from caching, but Google is perfectly capable of putting the SSL layer in front of Varnish/Squid and caching there, rather than always hitting the backend.

          It hardly matters these days. So much of the web is dynamically generated that caching hasn't been very useful in a long time, for anything but images.

          • You're wrong. It prevents downstream proxies from caching, but Google is perfectly capable of putting the SSL layer in front of Varnish/Squid and caching there, rather than always hitting the backend.

            Uhm, just because that's also caching doesn't mean it's the caching being discussed, so wtf are you even talking about...

            So much of the web is dynamically generated that caching hasn't been very useful in a long time

            What? Just because something is dynamically generated doesn't mean it magically changes each time it's requested.

          • by arth1 ( 260657 ) on Thursday March 29, 2012 @06:18AM (#39507243) Homepage Journal

            It hardly matters these days. So much of the web is dynamically generated that caching hasn't been very useful in a long time, for anything but images.

            Wrong. A lot of downloads are http. Do you really want all your users to download the same 80 MB updates or 2 GB iso files as separate copies through a shared internet connection, or get them from the cache after the first download?

            And while a lot of content is dynamically generated, the javascript and css files generally aren't. The earlier they can be loaded in a client, the snappier the experience for the user.

            Once you subtract downloads, streaming http video and audio, static pages, javascript, css and images, you'll find that what's left is a small part of the overall bandwidth.

            What hurts with web 2.0 and abloodyjax is the ridiculous number of connections you establish and tear down. Latency kills you. Re-using connections and keeping them more persistent helps, at the cost of maintaining unused connections at both ends. And caching what is cacheable (instead of the web devs taking the lazy cop-out of marking everything as dynamic) helps a lot.

            SPDY is like the stores handing out a huge shopping cart to everyone whether they need it or not, to solve the problem of certain buyers pushing a train of two or more carts. It'll piss off those who just want a bottle of milk. It's a solution looking for a problem.
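
            The connection-reuse point above is easy to see with any keep-alive-capable client; a small sketch with Python's requests library, where the URL and paths are placeholders:

              import requests

              # The TCP (and TLS) setup cost is paid once; the requests below
              # reuse the same persistent connection via keep-alive.
              with requests.Session() as s:
                  for path in ("/app.js", "/style.css", "/logo.png"):
                      s.get("https://example.org" + path)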

      • It could be argued that encryption should be done at the IP level, not the HTTP level, and therefore having mandatory HTTPS is redundant.

      • Drop legacy and force extensions? Sounds like M$/Apple (but in this case it's the opposite). This will lead to: "Oh, I'm sorry, your App X can't connect to the Internet anymore. You know it's already 3.5 years old? Time to buy a newer version!"

        But seriously, in the foreseeable future (let's say 10 years) we won't get to a state where mobile devices can be always online, listening for server pushes, and not drain the battery in 4 hours.

        "You left google.com open in your mobile browser? It serves you right that the battery is dead."
        • Nothing forces an application to be continuously on-line to support server push.

          Server Push is to enable servers to push content to the client that it didn't specifically ask for; for example, /. could push the logo image right after getting a request for the HTML part, so that the client doesn't have to parse the HTML, find the image tag and then do a new request to ask for it.

          Supporting server push actually reduces battery and traffic, since you don't need to send requests or keep the connection open for polling.
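
          A toy illustration of the pull-vs-push difference described above; this is not a real SPDY implementation, and the page table and paths are made up:

            PAGE = {"/index.html": "<img src='/logo.png'>", "/logo.png": "PNG bytes..."}

            def handle_pull(path):
                # Client gets only what it asked for: it must parse the HTML,
                # discover the logo tag, then pay another round trip for it.
                return {path: PAGE[path]}

            def handle_push(path):
                # Server knows the page references the logo, so it sends both
                # in one exchange: no second request, no extra radio wake-up.
                resp = {path: PAGE[path]}
                if path == "/index.html":
                    resp["/logo.png"] = PAGE["/logo.png"]
                return resp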

      • SSL connections are a great thing; however, with them comes a great deal of cost and overhead for the server operator.

        • Think again. Today's server CPUs already have hardware AES support. There is no reason why the next generation can't support RSA/DH for the handshake too, and if you make SSL mandatory, they will. That makes the overhead disappear into a tiny fraction of the transistors on each core, while making everything more secure.

          "There is going to be a transition period" is no excuse for not doing something which has long-term benefits.

        • by jonwil ( 467024 )

          There are statements (I can't find a source right now, but I think they got a mention here a while back) where top technical guys from some big sites that have gone "HTTPS for everyone all the time" said that the overhead of SSL is minimal even when doing millions of page views.

      • by madsen ( 17668 )

        There is of course the problem that using SSL makes it much harder for us to inspect the traffic for malware, bots, and whatnot.

      • by greyc ( 709363 ) on Thursday March 29, 2012 @06:06AM (#39507171)

        [...] SPDY insists on SSL-secured connections.

        Citation Needed.

        Certainly the common server-side implementations right now like [wikipedia.org] to use it with encryption, but I can find no mention of that being mandatory in the SPDY IETF draft [ietf.org].

        In particular, section 2.1 has all of the following to say about upper-level protocols:

        2.1. Session (Connections)
              The SPDY framing layer (or "session") runs atop a reliable transport
              layer such as TCP [RFC0793]. The client is the TCP connection
              initiator. SPDY connections are persistent connections.

        SPDY has protocol elements that are only useful when it's wrapped by TLS/SSL, but then you aren't forced to use those on a given connection, either.

  • by bazmail ( 764941 ) on Thursday March 29, 2012 @02:57AM (#39506147)
    S&M? lol what is it with microsoft and their naming schemes. Turtle phone anyone?
  • by merc ( 115854 ) <slashdot@upt.org> on Thursday March 29, 2012 @02:58AM (#39506153) Homepage

    Let's take a little look at the history of Microsoft and clearly understand what we're getting into before we blindly adopt one of their standards.

    • by bemymonkey ( 1244086 ) on Thursday March 29, 2012 @03:08AM (#39506225)

      Wary.

      But yes, it's always a good idea to take a closer look... although tbh, the same thing applies for Google's alternative ;)

    • Let's take a little look at the history of Microsoft and clearly understand what we're getting into before we blindly adopt one of their standards.

      They even called it HTTP Sadism & Masochism; it's not like we weren't warned.

    • Re: (Score:2, Funny)

      by dido ( 9125 )

      As if the name of the proposed standard wasn't already a dead giveaway. It's obviously another ploy for them to place the world back under their bondage and domination. I think some marketing drone at Microsoft hadn't thought the name through, or perhaps they are making a frank and contemptuous display of their true motives in introducing such a protocol. I hear a song by Rihanna playing in the background...

      • by Thing 1 ( 178996 )
        Microsoft has a history of marketing intelligence. They released the first version of Windows Update as the Critical Update Notification Tool (make the acronym...). They also had a campaign in System Center with the tagline "You're in control!" Spoken, it sounds like "hitting the bowl" (i.e., the same way that "gun control is hitting what you're aiming at", the joke just in case being "you're in" --> "urine").
    • You're just not masochistic enough for Microsoft standards.

    • by DragonWriter ( 970822 ) on Thursday March 29, 2012 @10:59AM (#39510899)

      Let's take a little look at the history of Microsoft and clearly understand what we're getting into before we blindly adopt one of their standards.

      No one -- even Microsoft -- is asking for "blind adoption". The Microsoft proposal offers numerous explicit issues for discussion, and raises and provides recommendations for addressing numerous issues with regard to Google's earlier proposal (both as regards pragmatics and consistency with the HTTP/2.0 charter). It's a discussion draft. It's not intended for blind adoption; it's intended to spur further discussion in the working group.

      Why not address the merits of the proposal?

  • by DrXym ( 126579 ) on Thursday March 29, 2012 @03:17AM (#39506289)
    I like my HTTP protocols to be a little bit kinky.
  • Well, the internet is for porn...

  • SSL needs to be mandatory... there is way too much threat from various governments and even non-governmental bodies that want to see what people are doing on the web.

    I wish somebody would ship an SSL-only browser.

    • SSL needs to be mandatory... there is way too much threat from various governments and even non-governmental bodies that want to see what people are doing on the web.

      Given the centralized nature of SSL certificates, it's downright trivial for a sufficiently interested government to execute a MITM attack on you. All they need to do is force the certificate authority to issue a copy to them.

      • Well that part needs to be changed. I guess I should have said encryption rather than SSL.

      • by Richard_at_work ( 517087 ) on Thursday March 29, 2012 @04:06AM (#39506553)

        They don't even need a copy, or interaction with the same CA - any cert issued for the same domain by any CA will do just fine, which is why the creation of a CA in China recently was a hot topic, as it allows global MITM attacks by China's government.
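
        One mitigation for the rogue-CA attack described above is certificate pinning: trust a known fingerprint instead of the CA hierarchy. A Python sketch, where the host and the pinned digest are placeholders:

          import hashlib, socket, ssl

          PINNED_SHA256 = "0123...placeholder..."  # known-good fingerprint, obtained out of band

          ctx = ssl.create_default_context()
          with socket.create_connection(("example.org", 443)) as sock:
              with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
                  der = tls.getpeercert(binary_form=True)  # the certificate actually served
                  if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
                      raise ssl.SSLError("certificate does not match pin; possible MITM")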

      • Most governments worldwide have their own certificate authority with root certificates in browsers, so they don't need to put pressure on a 3rd party to do it.
      • That is an issue with the CA system, not SSL.

        • by Meneth ( 872868 )
          SSL relies on the CA system to function effectively, which means that without a reliable CA system, SSL is also unreliable.
          • apparently you don't understand the difference between a technological and non-technological problem.

            They can replace the CA system with something that functions more reliably and SSL would not need to change.

      • by jonwil ( 467024 )

        The solution is to eliminate CAs altogether and do 4 things instead:
        1. Stop storing identifying information (individual or company names, etc.) in SSL certificates and only store authenticating information. The only thing web page encryption should be doing is verifying that the web page you think you are talking to is the one you are actually talking to.

        2. Store certificates (or certificate hashes) in DNSSEC-secured DNS records. It's much harder for a malicious attacker to break the encryption on DNSSEC or to compromise the signed records themselves.
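
        A sketch of the check that point 2 implies (essentially the DANE/TLSA idea); the DNS lookup itself is stubbed out, since the point is only the comparison:

          import hashlib

          def cert_matches_dns(der_cert: bytes, published_sha256_hex: str) -> bool:
              # A DNSSEC-signed record publishes sha256(certificate); the client
              # hashes whatever the server actually presented and compares.
              return hashlib.sha256(der_cert).hexdigest() == published_sha256_hex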

  • by MtHuurne ( 602934 ) on Thursday March 29, 2012 @03:42AM (#39506417) Homepage

    For every optional feature, the server will need code to deal with clients that do support it and clients that don't. It's more code to write and more potential for bugs. Of course this doesn't mean that every feature should be mandatory, but compression and encryption are already supported by pretty much every browser and server push would be a significant improvement over polling.

    On metered connections, compression and server push would be improvements and encryption wouldn't make a difference. For power consumption, server push would be an improvement (polling means sending over a wireless link regularly), compression would probably not make much of a difference (assuming we're talking about gzip here) and encryption might tax the battery a bit more. However, if this is an issue, the common encryption algorithms could be hardware accelerated.
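
    For the compression point, a quick sketch of what gzip does to repetitive hypertext (the sample markup is made up):

      import gzip

      html = ("<div class='comment'><p>lorem ipsum dolor sit amet</p></div>\n" * 200).encode()
      packed = gzip.compress(html)
      print(len(html), "->", len(packed))  # markup typically shrinks to a small fraction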

    • You do realize some devices just don't have those hardware accelerators you're speaking of?

      • Today some don't have it, but this is a design for a future standard. Smartphones already have MPEG4 acceleration, and I think hardware AES would take fewer transistors than MPEG4. Also, a software fallback for encryption is possible, so even if it would take a bit more power you'd still be able to use it. And as transistors continue to shrink, computation will become cheaper over time, while the amount of power spent on the display and the radio will probably not decrease a lot.

        I also realize that not all devices will get hardware acceleration.

        • AES was designed with hardware acceleration in mind during the NIST standardization process; it's one of the reasons Rijndael won and became AES.
        • So all older devices would become incapable of connecting to modern websites, even with software upgrades?

      • by Serious Callers Only ( 1022605 ) on Thursday March 29, 2012 @04:37AM (#39506671)

        Those devices can stay on http 1.1 which will be supported for the foreseeable future. That's a much better way to manage backwards compatibility than trying to make certain features optional.

    • Encryption takes more time and data for the handshake and integrity checks, and compression will make a huge difference on hypertext and other text-only data.
      • Related assets being pushed without a request will be a huge deal too... I call up the webpage, I want the webpage, so you might as well give me everything without waiting for me to ask for the next image/JavaScript block, etc.

    • Microsoft, the market leader in mobile phone OSes, is obviously the right way to go rather than the minority player Google... oh sorry, got that the wrong way round...

    • I guess optional isn't good for protocols. If a device might not support it then you can't rely on it, and if you can't rely on it, you might as well not bother trying to use it.

      Of course, if all this happens under the covers of the network stack, then things might be different, but can you really implement push notifications on the server if the client only supports pull?

      I particularly worry that MS will introduce another 'optional' component that is available on Windows Server and Windows Phone, and make it a de facto requirement.

  • Microsoft, with the ridiculous market share of Windows Phone, you are probably not in a position to tell Google how to do this kind of technology.

  • by 93 Escort Wagon ( 326346 ) on Thursday March 29, 2012 @03:50AM (#39506483)

    Microsoft proposes HTTP 2.0 come in the following varieties:

    HTTP Speed+Mobility Starter Edition
    HTTP Speed+Mobility Professional
    HTTP Speed+Mobility Enterprise
    HTTP Speed+Mobility Ultimate

    • Also, their own HTTP Speed+Mobility implementation is completely different from the one they proposed for standardization.
    • Microsoft is getting away from that terminology and simplifying; they are going with:

      HTTP S & M Leather Edition
      HTTP S & M Chain Edition
      and for the Asian market, HTTP S & M Tentacle Edition
    • You can get the Ultimate edition for free if you're a student.
    • by Nimey ( 114278 )

      S&M Ultimate doesn't come with a safeword.

  • by Chris Mattern ( 191822 ) on Thursday March 29, 2012 @05:41AM (#39507003)

    ...who brought you the Critical Update Notification Tool!

  • When any part of a standard is optional, you can't really depend on it. If you can't depend on it, then you can't really use it for any real-life scenarios. Optional features in standards are bad.
