
Google Chrome Will Adopt HTTP/2 In the Coming Weeks, Drop SPDY Support 88

An anonymous reader writes: Google today announced it will add HTTP/2, the second major version of the Hypertext Transfer Protocol (HTTP), to Google Chrome. The company plans to gradually roll out support in the latest version of its browser, Chrome 40, "in the upcoming weeks." At the same time, Google says it will remove support for SPDY in early 2016. SPDY, which is not an acronym but simply a shortened form of "speedy," is a protocol developed primarily at Google to improve browsing by requiring TLS encryption for all sites and speeding up page loads. Chrome will also drop support for the TLS extension NPN in favor of ALPN.
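The NPN-to-ALPN switch matters because ALPN is how a client and server agree on HTTP/2 during the TLS handshake itself. A minimal client-side sketch using Python's standard ssl module (the protocol strings "h2" and "http/1.1" are the IANA-registered identifiers):

```python
# ALPN: the client advertises the application protocols it speaks inside
# the TLS ClientHello; the server picks one in its reply. This is how a
# browser negotiates HTTP/2 without any extra round trips.
import ssl

ctx = ssl.create_default_context()
# Offer HTTP/2 first, with HTTP/1.1 as the fallback.
ctx.set_alpn_protocols(["h2", "http/1.1"])

# After wrapping a socket and handshaking, selected_alpn_protocol() on
# the resulting SSLSocket reports the server's choice; here we only show
# the client-side configuration.
print(ssl.HAS_ALPN)
```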
  • by Anonymous Coward on Monday February 09, 2015 @07:19PM (#49022211)

    Google has a big case of "Invented Here" syndrome.
    If Google started something, you can count on them dropping it.

    • by Anonymous Coward

      HTTPv2 = spdy

      • by _merlin ( 160982 )

        Not really, it has a number of modifications, including dropping mandatory encryption.

        • by epyT-R ( 613989 )

          I don't see that as an improvement.

          • by _merlin ( 160982 )

            It is if you have constraints on server resources. Encryption costs CPU time, and it costs even more if you use forward secrecy, as people tend to do since the Snowden revelations. If you're serving lots of public assets (think 4chan image CDN), requiring encryption would greatly increase the CPU resources you need.

          • by Trongy ( 64652 )

            I agree. Currently Chrome, IE and Firefox only support HTTP/2 over TLS.
            Just because the standard doesn't enforce encryption doesn't mean that servers and clients can't mandate it.

            http://en.wikipedia.org/wiki/H... [wikipedia.org]

    • by AmiMoJo ( 196126 ) * on Tuesday February 10, 2015 @08:05AM (#49024897) Homepage Journal

      They are only dropping it because HTTP/2 is largely based on SPDY but with some improvements. SPDY was always a research project designed to produce something better than HTTP/1.1, and it has. Job done, the replacement is here and an official standard, so why maintain the old SPDY code?

      Moreover, Google seems to be aggressively removing old stuff from Chrome to keep it from bloating too much. Netscape plugins are already gone. Blink dropped all the compatibility stuff in WebKit for old systems and browsers. My bet is that Flash will be removed in a year or two as well.

  • by Qzukk ( 229616 ) on Monday February 09, 2015 @07:23PM (#49022239) Journal

    Will HTTP/2 have a response code that will cause the browser to display the page that is returned from the server AND change the "current url" (for bookmarking, refreshing etc) to an alternate Location? POST to /createnewuser, display the response immediately with the URL of /user/3813812. Refreshing loads /user/3813812, not re-POSTing to /createnewuser.

    Right now, the paradigm of having to either redirect every single POST request to a new URL, or risk users too stupid to know that they really need to not press reload on the page after saving something, is a drain on server resources one way and support resources the other.
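For context, the redirect half of that tradeoff is the classic POST-redirect-GET pattern. A minimal sketch using only the Python standard library, with the hypothetical /createnewuser and /user/&lt;id&gt; paths from the comment above:

```python
# POST-redirect-GET: the POST answers with 303 See Other, so the client
# immediately GETs the new resource's URL; refreshing re-runs the GET,
# never the POST.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

users = {}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/createnewuser":
            user_id = len(users) + 1
            users[user_id] = "new user"
            self.send_response(303)  # See Other: follow up with a GET
            self.send_header("Location", f"/user/{user_id}")
            self.end_headers()

    def do_GET(self):
        user_id = int(self.path.rsplit("/", 1)[1])
        payload = f"user {user_id}: {users[user_id]}".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# urllib follows the 303 automatically, switching the POST to a GET.
resp = urllib.request.urlopen(
    f"http://127.0.0.1:{server.server_port}/createnewuser", data=b"")
final_url = resp.geturl()   # the refresh-safe /user/<id> URL
body = resp.read()
print(final_url, body)
server.shutdown()
```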

    • by Chalnoth ( 1334923 ) on Monday February 09, 2015 @07:37PM (#49022321)
      It's pretty easy to get around this issue with JavaScript, e.g. by using Angular. I think this is less a problem with the HTTP protocol and more a problem with website design.
      • by Qzukk ( 229616 ) on Monday February 09, 2015 @08:23PM (#49022575) Journal

        I think this is less a problem with the HTTP protocol

        Using onload="history.pushState(null, null, '/user/31813812');" certainly works, but now pushing the "back" button is the landmine instead of pushing refresh (not to mention users that turn off JavaScript). Being able to use JavaScript to pretend you're doing what the HTTP protocol should have done does not make it not a problem with the protocol.

        That said, the HTTP/1.1 protocol itself is fine. A user agent ought to handle a 201 Created response exactly like this as a side effect (OK, so the response body is technically not a listing of places you can get the created object from, but it's supposed to be displayed to the user either way), but there are zero browsers implementing the Location part of it. Adding a response code explicitly for the purpose of "here is a response to display to the user right now; if the user wants to reload it, request this URL instead" would hopefully get browser developers to say "oh, I see why we're doing this" and do it. Doubly so when they're writing a new implementation for a new protocol. At this point, I'd argue that the best thing to do would be to add something like "311 Content With Future Redirect" so that browsers that don't implement it continue with 3xx POST-Redirect-GET semantics (losing nothing) and browsers that do understand it will work.
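As a sketch, the wire form of the 201 response the parent describes — a body to render immediately, plus a Location header naming what a reload should fetch — would look like this (the path and body text are hypothetical, taken from the thread's example):

```python
# A 201 Created response carrying both a displayable body and a Location
# header. Per the argument above, a browser could show the body now and
# treat Location as the URL for any subsequent reload.
body = "<p>User 3813812 made</p>"
response = (
    "HTTP/1.1 201 Created\r\n"
    "Location: /user/3813812\r\n"
    "Content-Type: text/html\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"
    + body
).encode("ascii")
print(response.decode("ascii"))
```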

    • You could just only return the data required from the POST request and use client side code to render it, instead of reloading the entire page. Saving bandwidth, server-side processing and rendering time.

    • Erm, but in that instance POSTing then doing a GET makes sense.

      The POST creates the new user.
      The GET retrieves the information for user 3813812.

      How are those two things the same? That is exactly how it is supposed to be done.
      If your server can't handle that 'additional' load very well, then I've got a 486 upgrade I'll donate for free.

      • by Qzukk ( 229616 )

        With modern frameworks and Java level classiness (here, have a mappingfactory class that instantiates the mapping class that maps the data from the form into the object returned from the factory class that produced the object being created) what happens is more like

        The POST creates a hospital, staffs it with receptionists, doctors, nurses, and so on. A pregnant woman (the request) goes in, the receptionist routes her to the OB ward where nurses help her into the stirrups and the doctor catches the baby. T

        • Yep that sounds correct. That is the entire point of GET and POST being different.
          GET retrieves data, never alters it. POST alters data, never retrieves it.

          This is kinda HTTP 101. You might disagree, but then you'd be doing it wrong.
          You are using a perfectly good axe as a door stop then whining that your teaspoon isn't cutting down trees properly.

          • by Qzukk ( 229616 )

            POST alters data, never retrieves it.

            Except for all of the cases where POST returns data, sure. There is absolutely no reason to destroy the result of creating a resource instead of returning the newly created resource with a flag "don't do that again".

            You are using a perfectly good axe as a door stop then whining that your teaspoon isn't cutting down trees properly.

            Meanwhile you're dulling the perfectly good axe to be sure that nobody cuts down your sacred tree.

            • If the 'axe' is perfectly good, then why are you requesting fundamental changes to HTTP web apps and every web browser just to save your 8080 web server one additional hit?

          • Yep that sounds correct. That is the entire point of GET and POST being different.
            GET retrieves data, never alters it. POST alters data, never retrieves it.

            Both assertions are false.

            This is kinda HTTP 101. You might disagree, but then you'd be doing it wrong.

            LOL, if you disagree with me you're doing it wrong. No arguing that.

            You are using a perfectly good axe as a door stop then whining that your teaspoon isn't cutting down trees properly.

            HTTP is a dull rusty blade attached to a termite-infested, rotted, split, splintered handle, yet it is the only axe left in the world and the only way to cut down trees.

            It is unwise to attempt to use it properly as "intended"; you will just break it and/or injure yourself.

            The instructions included with the moldy packaging the axe originally came in are only useful as a reminder of old times before our Alien overlords swooped do

        • I think you're confusing the protocol verbs with a heap of javabollocks on the back end.

          In most systems GET occurs much more often than POST, and if you're returning to a system after logging off, you'll be doing a lot of GETs just to restore your environment (or page view).

          I think a POST+GET optimisation would be nice, but it would be an optimisation, a little like how some DBs have an 'insert, or update if already exists' statement. But you can already return data from a POST; it does break the concept but

      • Erm, but in that instance POSTing then doing a GET makes sense.

        The POST creates the new user.
        The GET retrieves the information for user 3813812.

        Too many people seem to think it's cool to add round trips for some incoherent appeal to logical consistency.

        How are those two things the same? That is exactly how it is supposed to be done.

        Who cares? HTTP verbs are insufficient to express jack or jill and HTTP completely lacks any useful transaction semantics. REST in the abstract is a great idea... only problem is HTTP is shit and when you don't treat shit like shit you end up with shit. HTTP is simply the wrong layer to be toying with any kind of abstraction if you care about useful results.

        If your server can't handle that 'additonal' load very well, then I've got a 486 upgrade I'll donate for free.

        The actual problem is users suffering t

        • by Shados ( 741919 )

          The paradigm and semantics are perfectly clean/correct. The roundtrip is just an implementation detail.

          The protocol could simply return the result of the post with a redirect, as well as the result of the get, in 1 response. Then under the hood even though you do a GET, the get "chunk" of the post result would get rendered. No additional roundtrip.

          This is already used in some context. Imagine I want to show you a server rendered image based on some query string generated in javascript or something (ie: nothin

            The paradigm and semantics are perfectly clean/correct. The roundtrip is just an implementation detail.

            Like the rest of HTTP it is perfectly useless. No coherency nor atomicity nor any way to implement verbs beyond trivial "CRUD".

            The protocol could simply return the result of the post with a redirect, as well as the result of the get, in 1 response.

            Why is returning a pointer to data allowed while returning actual data itself not? This just sounds like bullshit.

            Then under the hood even though you do a GET, the get "chunk" of the post result would get rendered. No additional roundtrip.

            Or you can just return data and stop being silly.

            (yeah, I could request the image and then set the data to the img tag. This was just a roundabout example).

            It always is... I'll leave failure to communicate a coherent use case to speak for itself.

            That technique works today, and for some edge cases, it is actually being used in the real world. Making a post -> redirect -> Get without roundtrip wouldn't be very different from existing paradigms.

            But what is the point?

    • That will cause the browser

      First, browser developers are in charge of browser behavior, not a protocol. An RFC may say "should" or "must", but browser implementers don't have to comply.

      There is a need for this, but your question is whether the standard requests, or requires, the feature. Your other, unasked question, is whether any development team has committed to respect the "required" or "must" statement from the RFC.

  • Looks like all this protocol does is allow the server to send more data than requested by the browser. Since the server "knows" more about the page to be rendered than the browser, it can pack more info and minimize wait times. The browser can still do its slowpoke rendering, but when it wants the next bit of data or a frame, it is already in.

    Apparently this was already being done in a non-standards-compliant manner but supported by all the browsers. The non-standard version, created and promo

  • I have no plans to adopt HTTP/2. The mandatory fauxcryption (as implemented in the browsers) is a dealbreaker. Certificates are nothing but a scam and they certainly aren't trustworthy since the CAs are subject to the whims of cybercriminals and governments. All this does is increase the barrier to entry for having your own webpage.

  • Since IE 11 is end of life, and so is the 2nd most popular browser, IE 8, it will prevent adoption.

    The saga of short-sighted CEOs not wanting to lose customers who run old operating systems kills innovation again, as Windows 7 will continue to exist for a long time, and with it IE 11 and 8 for corps with old apps.

    • by gl4ss ( 559668 )

      doesn't matter.

      as long as Android and iOS support it, the sites will implement it. it's not like it needs that much effort anyway.
