
HTTP/2 Finalized

An anonymous reader writes: Mark Nottingham, chair of the IETF HTTP working group, has announced that the HTTP/2 specification is done. It's on its way to the RFC Editor, along with the HPACK specification, where it'll be cleaned up and published. "The new standard brings a number of benefits to one of the Web's core technologies, such as faster page loads, longer-lived connections, more items arriving sooner and server push. HTTP/2 uses the same HTTP APIs that developers are familiar with, but offers a number of new features they can adopt. One notable change is that HTTP requests will be 'cheaper' to make. ... With HTTP/2, a new multiplexing feature allows lots of requests to be delivered at the same time, so the page load isn't blocked." Here's the HTTP/2 FAQ, and we recently talked about some common criticisms of the spec.
  • by Billly Gates ( 198444 ) on Wednesday February 18, 2015 @09:04AM (#49079267) Journal

    MS will only support this in Spartan on Windows 10. Which means 7 and XP are already out of support.

    • Re: (Score:3, Informative)

      by Anonymous Coward
      Incorrect; HTTP/2 is also supported in the classic IE 11 on Windows 10. In fact, we are having trouble with it already in the current Technical Preview 2 of Windows 10 that does not contain Spartan (some sites don't negotiate it correctly and take over 30 seconds to load until you turn off HTTP/2 in the browser). Of course, the current build is using draft-14 of the spec as are most other implementations today (some are on draft-16).

      However, you are correct that IE 11 on Windows 7 is not going to support HTTP/2.
    • Re: (Score:3, Insightful)

      by DarkOx ( 621550 )

      Which will mean one of three things:

      1) HTTP/2 will not be used in the wild for anything important, due to the sheer number of Win7/8 machines out there with IE versions that don't support it.

      2) Users on Windows will move away from IE. Once people leave they don't typically come back, so IE will eventually become an also-ran.

      3) Microsoft will fear losing too much browser market share, backpedal, and backport Spartan.

      • by QuasiSteve ( 2042606 ) on Wednesday February 18, 2015 @09:27AM (#49079389)

        4) People enable it on their servers and those with browsers that do support it enjoy the benefits (and possibly some of the side-effects) that it brings, while everybody else will either chug along on HTTP/1.1 or even HTTP/1.0, or switch to a browser that does support HTTP/2, and whether or not older versions of IE support it remains a non-issue.

        • by 93 Escort Wagon ( 326346 ) on Wednesday February 18, 2015 @10:20AM (#49079685)

          If people do "switch to a browser that does support http/2", it will be for some other reason unrelated to the protocol - getting http/2 support will be serendipitous. Very few people other than tech heads are going to care about the protocol, one way or the other... they won't even be aware of it.

          • by DarkOx ( 621550 ) on Wednesday February 18, 2015 @12:32PM (#49080761) Journal

            The masses will switch as soon as they see "Did you know Facebook could be faster --> Click Here ---"

            • The masses will switch as soon as they see "Did you know Facebook could be faster --> Click Here ---"

              Somehow I doubt that'll be any more effective than those old website buttons that said "This site is optimized for Netscape Navigator - click here to download".

              • by DarkOx ( 621550 ) on Wednesday February 18, 2015 @04:06PM (#49082195) Journal

                The web is a different place than it used to be. Let me take you back to 199[345].

                There were four kinds of Internet Users:

                Group 1)
                Has just arrived at your GeoCities page with its "optimized for Netscape" banner after following several webring links. They had only recently finished unboxing their Packard Bell and working out the relationship between the mouse and the cursor.

                They were sitting in front of Windows 3.1x, feeling a mix of awe and pride in their AOL dialing skills, and terror that they might somehow break this machine, having just spent nearly a month's salary on it because the kid's teacher said they should get a PC. They were not about to download anything, let alone install it. They still had the shakes from the last time they tried something like that, and continue to wonder who this Gen. Protection Fault is and what he did to their computer.

                Group 2)
                Were practically experts by today's standards. They maybe had a 286 from a few years back and remembered some DOS commands. This, and their command of cutting and pasting into Notepad from "View Source" in Navigator, has enabled them to FTP their very own page to GeoCities, which folks in group 1 are now viewing.

                Group 3)
                Has some professional or academic experience using a platform other than DOS and Netware. They are already frustrated by the lack of development the X11R2 edition of Navigator is seeing. It's fine, though, because all the stuff they think is really worthwhile is still available via BBS, and someone was good enough to install Lynx and an internet gateway in case they do want to look at GeoCities. They had formed their opinion about which browser was good and proper, and nothing was going to make them change, EVER.

                Group 4)
                Mac users. This group was small and mostly ashamed of themselves during this period. They clung to the belief that their shitty platform was in some way superior to Microsoft's shitty platform running on a Packard Bell (it wasn't). They really did not have anything to choose from besides Netscape, no matter what the banners indicated, and they knew it.

                In short, things were nothing like today; well, actually group 3 hasn't changed much. Groups 1 and 2 merged, but the fear is gone. These people will run anything now. Ask them to put their password in so they can run NoIreallyAmATrojanLookingToStealYourOnlineBankingPassword.exe and they probably will, if you promise them some extra Facebook likes on their posts or something.

                Group 4) is all self-assured again. Some group 3 folks are joining them, although they still don't really mix at parties.

        • Problem is, HTTP/2 is for Web services. If you have to support older IE, hence HTTP/1, then you can't do the same things ... or you have to write 2 different versions of your site.

      • by Richard_at_work ( 517087 ) on Wednesday February 18, 2015 @09:35AM (#49079435)

        3) Microsoft will fear losing too much browser market share, backpedal, and backport Spartan.

        Didn't work for DirectX, don't think it will work for this.

    • Windows XP is completely out of support, while Windows 7 is out of mainstream support (as of January 2015), so not supporting them with new features is only right.

    • HTTP/2's strong point is centered around Web Services communication. So this is non-browser to non-browser communication.

    • by rock_climbing_guy ( 630276 ) on Wednesday February 18, 2015 @09:52AM (#49079531) Journal
      As a web developer, I can only imagine how much more pleasant my work would be if I asked my customer, "What versions of IE does this application need to support?" and the response was, "IE? What's IE?"
      • by Qzukk ( 229616 )

        I get that response all the time. Then they tell me it needs to support version 7 of "the internet".

        • I don't think I could stand to hear someone refer to IE7 as "version 7 of the Internet". I would go completely Daffy Duck over that!
  • by Xharlie ( 4015525 ) on Wednesday February 18, 2015 @09:15AM (#49079337)
    The post even has counter-arguments and links to stuff. Are the 90's back again? Do I hear rave music, modem hand-shake tones and browser wars?
  • by jjn1056 ( 85209 ) <jjn1056.yahoo@com> on Wednesday February 18, 2015 @09:20AM (#49079359) Homepage Journal

    I'm really going to miss being able to telnet to a server and troubleshoot using plain text. Feels like a lot of simple has disappeared from the internet

    • by Lennie ( 16154 ) on Wednesday February 18, 2015 @09:57AM (#49079559)

      I'm really going to miss being able to telnet to a server and troubleshoot using plain text. Feels like a lot of simple has disappeared from the internet

      Yes, HTTP/2 is a multiplexing binary framing layer, but it has all the same semantics as HTTP/1.x on top.

      HTTP/2 is 'just' an optimization. So if your webserver supports HTTP/1.x and HTTP/2 then you can still use telnet to check whatever you want and it should give the same result as HTTP/2.
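      For example, against a server that still answers plain HTTP/1.x on port 80, something along these lines should keep working (example.org is just a placeholder host here):
      printf 'HEAD / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n' | nc example.org 80   # example.org is a placeholder; any HTTP/1.x server on port 80 will do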

      But you also have to remember:
      The IAB (the Internet Architecture Board, which oversees the design of the Internet's protocols) made this statement:
      https://www.iab.org/2014/11/14... [iab.org]

      "Newly designed protocols should prefer encryption to cleartext operation."

      The W3C made a similar statement, and there are also drafts with the intention of moving to HTTPS by default.

      So it is all moving to TLS-based protocols like HTTPS, or STARTTLS for SMTP, anyway. Those are clearly not plain-text protocols either.

      So even if you want to interact with the text protocol used inside that TLS-encrypted connection, you'll need to use a tool because netcat or telnet won't cut it.

      Let's look at HTTP, because this is an HTTP/2 article.

      That tool could be openssl s_client, but as you can see it is kind of cumbersome:
      echo -en "HEAD / HTTP/1.1\r\nHost: slashdot.org\r\nConnection: close\r\n\r\n" | openssl s_client -ign_eof -host slashdot.org -port 443 -servername slashdot.org

      But I suggest you just use:
      curl -I https://slashdot.org/ [slashdot.org]

      The main developer of cURL works for Mozilla these days, is one of the people working on the HTTP/2 implementation in Firefox, and is writing a document explaining HTTP/2: http://daniel.haxx.se/http2/ [daniel.haxx.se]
      So, as you would expect, curl supports HTTP/2:
      https://github.com/http2/http2... [github.com]
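      If your curl build has HTTP/2 compiled in (it needs the nghttp2 library), checking a server is a one-liner; the URL below is only a placeholder:
      curl --http2 -I https://example.org/   # placeholder URL; the response status line should show HTTP/2 instead of HTTP/1.1 when it was negotiated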

      Basically every browser includes 'developer tools' which will also let you see the headers and everything else you are used to from HTTP/1.x.

      I would rather see us all move to using encrypted protocols than keep the ability to use telnet.

      • echo -en "HEAD / HTTP/1.1\r\nHost: slashdot.org\r\nConnection: close\r\n\r\n" | openssl s_client -ign_eof -host slashdot.org -port 443 -servername slashdot.org

        Thanks, Lennie. I'm self-learning the whole HTTP (simple) and TLS (not simple) protocols out of interest as I work as an ad-hoc server admin (devops). This one-liner answers a whole slew of questions that I had, and points me in the direction to learn others. Thank you!

    • by higuita ( 129722 )

      How do you test HTTPS with telnet currently? You don't; you use a tool (openssl s_client -connect ip:port), and then you test like you would with telnet.

      With HTTP/2, you will also have to use a tool to connect... then you can do whatever you want...
      (and by the way, Chrome and Firefox will only allow HTTP/2 with TLS, so even if it were plain text, you would still need to use openssl s_client to test)
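      (by the way, if you just want to see whether a server offers HTTP/2 at all, openssl 1.0.2 or later can ask for it via ALPN; example.org below is a placeholder host)
      openssl s_client -alpn h2,http/1.1 -connect example.org:443 -servername example.org < /dev/null 2>/dev/null | grep ALPN   # "ALPN protocol: h2" means the server speaks HTTP/2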

  • by Anonymous Coward on Wednesday February 18, 2015 @09:24AM (#49079379)

    Web pages being slow has nothing to do with HTTP and everything to do with bloat eating every bit of performance that is available, courtesy of the marketers who are in charge of the web now. A binary protocol to save a couple of bytes, in times where every web page loads at least three "analytics" scripts weighing dozens of kbytes each. Marketing rots the brain.

    • by sinij ( 911942 )
      When Internet revolution arrives, they will be first up against the wall.
    • by Dagger2 ( 1177377 ) on Wednesday February 18, 2015 @09:58AM (#49079567)

      The main problem isn't the size of the stuff that gets loaded. What's dozens of kb these days? Even a megabyte takes a tenth of a second to transfer on my connection. The problem is latency: it takes more than 100ms for that megabyte of data to even start flowing, let alone get up to speed. That's what the multiplexing is intended to deal with.

      That said, the root cause of all this is the sheer amount of unnecessary stuff on pages these days. Fancy multiplexing or not, no request can finish faster than the one that's never made.
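      Rough numbers, just to illustrate (a 100 ms round trip and an 80 Mbit/s line are made-up but plausible values):
      transferring 1 MB: 8 Mbit / 80 Mbit/s = 0.1 s
      fetching 20 small resources one round trip at a time: 20 * 100 ms = 2 s
      The second number is the one multiplexing attacks; the first one it can't do anything about.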

      • What's dozens of kb these days? Even a megabyte takes a tenth of a second to transfer on my connection.

        Which means you could burn through a 5 GB/mo cap in less than 9 minutes.[1]

        Cellular ISPs in the United States charge a cent or more for each megabyte. Satellite and rural DSL ISPs tend to charge half a cent per megabyte.[2] This means cost-conscious users will have to either install ad blockers or just do without sites that put this "sheer amount of unnecessary stuff on pages".

        [1] 5000 MB/mo * 0.1 s/MB * 1 min/60 s = 8.3 min/mo
        [2] Sources: major cell carriers' web sites; Exede.com; Slashdot/a [slashdot.org]

      • "The main problem isn't the size of the stuff that gets loaded [...] the root cause of all this is the sheer amount of unnecessary stuff on pages these days"

        So the main problem *is* in fact the size of that stuff, isn't it?

        Oh! By the way, not everybody's connection is like yours, especially over mobile networks.

        • by Dagger2 ( 1177377 ) on Wednesday February 18, 2015 @06:57PM (#49083397)

          No... maybe. It depends.

          Amdahl's law [wikipedia.org] is in full force here. There comes a point where increasing the bandwidth of an internet connection doesn't make pages load faster, because the page load time is dominated by the time spent setting up connections and requests (i.e. the latency). Each TCP connection needs to do a TCP handshake (one round trip), and then each HTTP request adds another round trip. Also, all new connections need to go through TCP window scaling, which means the connection will be slow for a few more round trips. Keep-alive connections help a bit by keeping TCP connections alive, but 74% of HTTP connections only handle a single transaction [blogspot.co.uk], so they don't help a great deal.

          Oh! By the way, not everybody's connection is like yours, especially over mobile networks.

          Mobile networks (and, yes, satellite) tend to have high latency, so round-trips are even more of the problem there. Also... when people shop for internet connections, they tend to concentrate on the megabits, and not give a damn about any other quality metrics. So that's what ISPs tend to concentrate on too. You'll see them announce 5x faster speeds, XYZ megabits!!, yet they don't even monitor latency on their lines. And even if your ISP had 0ms latency, there's still the latency from them to the final server (Amdahl's law rearing its ugly head again).

          Given all that, I think I'm justified in saying that the main problem with page loading times isn't the amount of data but the number of round-trips required to fetch it. Reducing the amount of data is less important than reducing the number of, or impact of, the round-trips involved. And that's the main problem that HTTP/2 is trying to address with its fancy binary multiplexing.

          (Now, if your connection is a 56k modem with 2ms latency, then feel free to ignore me. HTTP/2 isn't going to help you much.)
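          To put rough numbers on the setup cost alone (assuming a 100 ms round trip and a full, old-style TLS handshake on top; adjust for your own line):
          TCP handshake: 1 RTT = 100 ms
          TLS handshake: 2 RTT = 200 ms
          first HTTP request/response: 1 RTT = 100 ms
          total before the first byte of the first response arrives: 4 * 100 ms = 400 ms, no matter how many megabits the line is sold as.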

    • by TopherC ( 412335 )

      I feel like there's something important in the observation that advertising usually funds communication/information technologies. It seems like a kind of economic failure to me. But I can't quite put my finger on exactly what it is.

    • by Bengie ( 1121981 ) on Wednesday February 18, 2015 @11:01AM (#49079991)
      It's a bandwidth vs latency issue. HTTP/1.x is more latency-sensitive; HTTP/2 helps with high latency. You say that pages load slowly because they're so large, yet these large bloated web pages consume nearly no bandwidth. HTTP/2 multiplexing allows async requests and preemptive pushing of data, which should allow better usage of bandwidth. Fewer connections will also allow for quicker TCP receive window ramp-up and reduce the thundering herd issue that many connections over a congested link create.
    • This protocol will enable new types of web applications, things like realtime multiplayer games that need to constantly stream lots of packets at a lower latency. Things that are not done today because the current technology simply can not provide a good enough experience. Plus it will reduce your server load which is nice (marketing scripts and ads are served from third party servers).

    • Indeed. Someone needs to do a study of how harmful to the environment all the horrible marketing Javascript running on web pages is. It's about the only thing that pegs my CPU when my computer is in "normal" use now ( "normal" being "what normal people do with a computer", because "what programmers do with a computer" is often a little more intense.)

      I'm not someone who so far has joined with the ad-blocking movement, on the proviso that websites need *some* way to keep the server running, but I'm getting

    • Marketing rots the brain.

      It also pays the bills.

    • by AFCArchvile ( 221494 ) on Wednesday February 18, 2015 @12:29PM (#49080735)

      Most of that bloat you speak of is delivered via Javascript. A few weeks ago, I finally put my foot down and made my default browser Firefox with NoScript. I have a (very) small list of sites allowed to execute scripts, and most of the time I will browse a website in its broken state, since I can still see all the text and images. It even foils the news websites that use Javascript to "faux-paywall" their content behind canvases, as opposed to only sending part of the story content from the server (I'm still shaking my head as to who on the business side thought this was a good business stance since one can still read most of the articles for free, but that's another discussion entirely). But lo and behold, once I started a staunch NoScript policy, page loads completed much faster, cookie sprawl was reduced, and Firefox's memory usage stayed relatively low. I also started to learn which scripts from which servers were truly allowed for things like externally-served comment systems (disqus, etc.), and also noticed the way that some webpages end up triggering a cascading dependency of server connections due to scripts calling scripts on other servers, etc.

      One of the worst instances of cascading Javascript sprawl that I've seen was a page from The Verge, with 33 domains allowed (including theverge.com) executing 133 scripts. The SBNation "eulogy for RadioShack" article had the "script counter" go over 160. Oh, and that's leaving out web fonts, which NoScript also blocks (which also reveals how often some websites use custom fonts to draw vector-based icons; you can see the Unicode codes for each character since the font isn't loaded). Vox seems to love abusing Javascript in their designs; it's most of the reason why I've abandoned reading Polygon (the other reasons being the banal editorial content, aside from a few notable articles). In comparison, Slashdot is currently asking for 21 scripts, but is running perfectly fine without any Javascript enabled (despite nagging here and there).

      I've ended up moving YouTube browsing to Pale Moon, since it natively supports the HTML5 player without issues. I may end up moving all of my browsing to Pale Moon with NoScript, since it natively supports a status bar.

      The whole situation with the Javascript bloat reminds me of the scene from Spaceballs where Lonestar tells Princess Vespa, "Take ONLY what you NEED to SURVIVE." We're stuck with a bunch of prima donna web designers who want to duct tape more adverspamming and social spying avenues onto their website, without standing back and taking a look at how badly it's impacting the user experience, not to mention the bloat from the hundreds of extra scripts and objects loaded by the browser, as well as the tens of connections to third-party servers.

    • by dave420 ( 699308 )

      Those analytics scripts are usually from an analytics service provider, and so are incredibly well cached, and shared across services. If you are repeatedly downloading the 15.7KB Google Analytics JS file, you are doing something wrong.

      If you'd spent any time looking into HTTP performance, you'd know there is a lot to be desired. HTTP/2 is a step in the right direction. It's not saving a couple of bytes, it's saving a shit-tonne of bytes, using connections more efficiently, and improving encryption support.

  • Not really happy (Score:5, Interesting)

    by Aethedor ( 973725 ) on Wednesday February 18, 2015 @09:29AM (#49079405)

    As the author of an open source webserver [hiawatha-webserver.org], I must say that I'm not really happy with HTTP/2. It adds a lot of extra complexity to the server side of the protocol. And all sorts of ugly and nasty things in HTTP/1 (too much work to go into that right now) have not been fixed.

    What I have experienced is that SPDY (and therefore also HTTP/2) will only offer more speed if you are Google or are like Google. Multiplexing doesn't offer as much of a speed increase as some people would like you to believe. Often, the content of a website is located on multiple systems (pictures, advertisements, etc.), which still requires the browser to use more than one connection, even with HTTP/2. Also, HTTP/1 already allows a browser to send multiple requests without waiting for the response of the previous request. This is called request pipelining, but it is turned off by default in most browsers. What I also often see is that a browser makes a first request (often for a CGI script) and the following requests (for the images, JS, CSS, etc.) are never made due to browser caching. So, to me, HTTP/2 adds a lot of complexity with almost no benefits in return.

    Then why do we have HTTP/2? Well, because it's good for Google. They have all the content for their websites on their own servers. Because the IETF failed to come up with an HTTP/2 proposal, a commercial company (Google in this case) used that to take control. HTTP/2 is in fact a protocol by Google, for Google.

    In my experience, you are far better off with smart caching. With that, you will be able to get far better speed-increase results than HTTP/2 will ever offer. Especially if you use a framework that communicates directly with the webserver about this (like I did with my PHP framework [banshee-php.org]). You will be able to get hundreds to thousands of requests per second for a CGI script instead of a few tens of requests. This is a speed increase that HTTP/2 will never offer.
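    To show the effect from the client side (the URL and ETag value here are made up, and this is the generic conditional-GET pattern, not necessarily exactly what my framework does internally):
    curl -sI https://example.org/page.php | grep -i etag   # suppose this prints: ETag: "abc123" (made-up value)
    curl -s -o /dev/null -w '%{http_code}\n' -H 'If-None-Match: "abc123"' https://example.org/page.php   # 304 means the cached copy is still valid and no body was resent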

    I think this is a failed chance to do it right. HTTP is, just like SMTP and FTP, one of those ancient protocols. In the last 20 years, a lot has changed. HTTP/1 worked fine for those years, but for where the internet is headed, we need something new. Something completely new, not an HTTP/1 patch.

    • Then why do we have HTTP/2? Well, because it's good for Google....HTTP/2 is in fact a protocol by Google, for Google.

      Those are interesting points, but I don't understand why Google would ever be able to hijack a standard like this when there are plenty of other big players such as Microsoft, Facebook, and Amazon to counteract their Machiavellian schemes. I could believe that HTTP/2 might benefit big organizations more than a typical little guy like me who uses shared web hosting (and not even a CDN), but is there something about HTTP/2 that uniquely favors Google over all the other big players?

    • by higuita ( 129722 )

      This is an evolutionary step; it still has to work with sites designed for HTTP/1.1.
      When sites are designed with HTTP/2 in mind, it will be time for HTTP/3, where more things can be changed (and HTTP/1.1 deprecated).

      It's not perfect, but changing too many things leads to the IPv6 problem... it's good, but people don't want to break things and so they don't change anything.

      • Re:Not really happy (Score:5, Informative)

        by higuita ( 129722 ) on Wednesday February 18, 2015 @10:14AM (#49079661) Homepage

        And as for HTTP/1.1 pipelining, it simply doesn't work... 90% of sites can work with it, but others fail, and fail badly... the page takes forever to load, waiting for resources that were requested but never received. All browsers tried to enable it and all had to revert it due to the (few but important) problems found, which are impossible to solve without a huge whitelist/blacklist of servers (impossible to really implement, and a pain for all those important old internal servers).

        So the 2 major issues that HTTP/2 wants to solve are really the TLS slow start and a working pipeline... By announcing HTTP/2, a browser knows that these things do work and are safe to use; no more guessing games and workarounds for bad servers.

        • by ftobin ( 48814 ) *

          By announcing HTTP/2, a browser knows that these things do work and are safe to use; no more guessing games and workarounds for bad servers.

          Didn't browsers announce for HTTP 1.1 as well? Why should expectations be significantly different w.r.t. pipelining?

          • by raxx7 ( 205260 )

            What happened with HTTP/1.1 pipelining is fortunately not a common case: a popular webserver breaks the standard and doesn't get fixed, making the standard de facto useless. While it's not impossible for the same to happen with HTTP/2.0, this type of situation is more the exception than the norm.
            All popular webservers today strive for a good degree of standards compliance.

            But, maybe as importantly, as pointed out before, that was not the only problem with HTTP/1.1 pipelining. You also have the head-of-line blocking.

    • Re:Not really happy (Score:5, Informative)

      by raxx7 ( 205260 ) on Wednesday February 18, 2015 @10:21AM (#49079695) Homepage

      You might be happier if you researched why HTTP pipelining is off in most browsers and what problem it actually wants to solve.

      First, HTTP/1.1 pipelining and HTTP/2.0 are not about increasing the number of requests your server can handle. They are mainly about reducing the effect of round-trip latency on page loading times, which is significant.
      If a browser is fetching a simple page with 10 tiny elements (or even cached elements subject to conditional GET) but the server round trip latency is 100 ms, then it will take over 1 second to load the page.

      HTTP/1.1 pipelining was a first attempt to solve this problem, by allowing the client to send multiple requests over the connection without waiting for the earlier responses to complete.
      If you can pipeline N requests, the round trip latency contribution is divided by N.
      However, HTTP/1.1 pipelining has two issues which led most browsers to disable it by default (it's not because they enjoy implementing features that won't be used):
      - There are or were a number of broken servers which do not handle pipelining correctly.
      - HTTP/1.1 pipelining is subject to head-of-line blocking: the server serves the requests in order, and a small/fast request may have to wait in line behind a larger/slower request.

      Instead, browsers make multiple parallel requests to each server.
      However, because some servers (eg, Apache) have problems with large numbers of requests, browsers use arbitrarily low limits (eg, 4-8 parallel requests per hostname).

      HTTP/2.0 attempts to solve these shortcomings by:
      a) being new, hopefully without broken servers out there
      b) using multiplexing, allowing the server to serve the requests in whatever order. Thus, no head-of-line blocking.

      So. HTTP/2.0 is, fundamentally, HTTP/1.1 pipelining without these shortcomings. We hope.
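      Back-of-the-envelope numbers for that 10-element page with a 100 ms round trip, ignoring transfer time (illustrative figures, not measurements):
      one request at a time on one connection: (1 + 10) * 100 ms = 1.1 s
      6 parallel connections: 1 RTT of handshakes + 2 batches of requests = roughly 300 ms
      pipelining or HTTP/2 multiplexing on a single connection: roughly 200 ms, and HTTP/2 additionally avoids the head-of-line blocking.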

      • by Anonymous Coward

        - There are or were a number of broken servers which do not handle pipelining correctly.

        Wrong, there are so few servers that are broken that neither Mozilla nor Google could determine what software was broken. The "unknown" broken software was possibly malware or antivirus. Today pipelining works fine. It's disabled because Google rushed SPDY out to preempt pipelining from being turned on.

        b) using multiplexing, allowing the server to serve the requests in whatever order. Thus, no head-of-line blocking.

        Also wrong. Using multiple connections is faster when there is packet loss, because only one connection's bandwidth is slowed down by a lost packet and the others proceed normally. Head of line blocking

        • by Bengie ( 1121981 )

          Using multiple connections is faster...

          You, sir, are an idiot. Please stop spouting horrible information. There are only a few situations where multiple connections are faster, and they primarily have to do with load balancing over aggregate links or load balancing across CPU cores. Multiple TCP streams are almost always a bad thing. They may seem faster, but only because other people do it; overall, it's slower.

          Lots of TCP connections between two end points are known to exacerbate issues like bufferbloat and global synchronization. Each additional TCP c

        • Today pipelining works fine. It's disabled because Google rushed SPDY out to preempt pipelining from being turned on.

          I stopped reading your comment right here.

          Pipelining was introduced in HTTP/1.1, standardized in RFC 2616 which was released in 1999. We've had 16 years to get pipelining working well. 13 if you count up to when Google first started seriously experimenting with SPDY.

          (Disclaimer: I work for Google, but that has nothing to do with my opinions on this issue. I thought pipelining was obviously broken long before I started working for Google.)

        • It's disabled because Google rushed SPDY out to preempt pipelining from being turned on.

          The pipelining spec dates back to what ... 1997? SPDY was released in 2012.

          That's a hell of a pre-preemption you have right there. Even IE 8 had pipelining support a whole 4 years before SPDY was announced. Are you telling me Microsoft disabled it by default there too because one of their biggest competitors may release something to replace it 4 years later?

    • by Bengie ( 1121981 )

      Also, HTTP/1 already allows a browser to send multiple requests without waiting for the response of the previous request.

      Incorrect. HTTP/1.1's pipelining is blocking, in that the responses must be returned in the order they were requested. It has a head-of-line blocking issue.

    • What I have experienced is that SPDY (and therefore also HTTP/2) will only offer more speed if you are Google or are like Google.

      This is so worn out it's even in the FAQ [daniel.haxx.se].

      Multiplexing doesn't offer as much of a speed increase as some people would like you to believe.

      It's good for every high-latency connection, like all the small sites have. I run a server on a VM on a generic Xeon box behind a DSL line, four hops from a backbone, for security and privacy reasons. It'll be great for my use case - I can't really

    • by Bengie ( 1121981 )

      Multiplexing doesn't offer as much of a speed increase as some people would like you to believe

      Limit your browser to one connection per server and tell me that with a straight face. TCP has horrible congestion control and should not be used to multiplex web requests.

    • Also, HTTP/1 already allows a browser to send multiple requests without waiting for the response of the previous request.

      But it doesn't have a decent mechanism for sending responses before they're requested [github.io]. With HTTP/2, your server can say "here's the page HTML, and here's a stream for the favicon linked in the HTML headers, and here's another stream for the JavaScript". The quickest request is the one you don't have to make.
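      If you want to actually watch a server do this, the nghttp command-line client from the nghttp2 project will print the frames as they arrive (the URL is a placeholder, and you need a push-enabled server on the other end):
      nghttp -nv https://example.org/   # verbose output includes the PUSH_PROMISE frames for resources pushed alongside the HTML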

  • I like how HTTP/2 and SystemD are bringing binary data formats to replace slow to parse text formats. Just throwing the controversial opinion on the table.
    • by Bengie ( 1121981 )
      Some of what HTTP/2 is trying to do is fix the issues with TCP by multiplexing multiple streams over a single TCP connection instead of over many TCP connections. Anyone with a basic understanding of how TCP, HTTP, HTTPS, and networks interact will understand why HTTP/2 has major benefits. It's more like a band-aid over other fundamental network issues, but it's better than what we've got.
  • From the draft... (Score:3, Interesting)

    by johnnys ( 592333 ) on Wednesday February 18, 2015 @09:52AM (#49079529)

    "HTTP/2... also introduces unsolicited push of representations from servers to clients."

    Seriously? Do we need yet ANOTHER way for a server to push unwanted code and malware onto our client systems? This is the greatest gift we could POSSIBLY give to the cybercriminals who want to break into our systems.

    How about we think of security somewhere in this process, instead of pretending it's someone else's problem?

    • I think you misunderstand what is going on here. Server push is basically a way for sites to pre-populate a browser's cache, by sending it suggested requests for things it doesn't yet know it will need ("push promises"), and the responses to said requests (the pushes themselves). If the server pushes responses to requests that the client never actually ends up making - not even to the extent of pulling the data from its local cache - then the pushed data will never be processed.

      Unless you are sitting on an

    • "HTTP/2... also introduces unsolicited push of representations from servers to clients."

      Seriously? Do we need yet ANOTHER way for a server to push unwanted code and malware onto our client systems?

      Yes, we do.

      What you're missing here is that this is pushing content that the browser was going to request in a few hundred milliseconds anyway. Why was the browser going to request it? Because the web page included a link to it.

      The only way this change could affect security is if you assume a threat model where the attacker is able to modify the web server to get it to push additional content to the browser but is somehow unable to modify the content the server is already pushing anyway. If the attacker

      • "HTTP/2... also introduces unsolicited push of representations from servers to clients."

        Seriously? Do we need yet ANOTHER way for a server to push unwanted code and malware onto our client systems?

        Yes, we do.

        What you're missing here is that this is pushing content that the browser was going to request in a few hundred milliseconds anyway. Why was the browser going to request it? Because the web page included a link to it.

        Well, that sounds to me like my ad-blocking software won't work as well, because stuff that it would have stopped my browser from downloading will be pushed to me whether I request it or not :(

        • by raxx7 ( 205260 )

          Note that server push applies to content the browser could request _from the same server_.
          Usually ad content is served from third party servers.

          But ultimately yes, this can lead to wasting some of your bandwidth on content your browser would otherwise not have asked for.

  • Slashdot (Score:5, Interesting)

    by ledow ( 319597 ) on Wednesday February 18, 2015 @10:09AM (#49079635) Homepage

    While we're discussing HTTP, what's wrong with Slashdot lately?

    I keep getting timeouts, signed out, nothing but the front page, everything else loaded from a CDN (that just gives me browser security warnings because it's not coming up as slashdot.org), etc.

    Are you guys under attack, or just unable to keep a site with ~100 comments per article up?

  • by master_kaos ( 1027308 ) on Wednesday February 18, 2015 @10:35AM (#49079781)

    So you don't end up having something like HTTP_REFERER again

    • That still bites me every now and then after all these years, and continues to introduce discussions as to at what point in backend code the typo should be fixed.

    • It's by design. Datacompresion eats consecutively ocuring and pointles leters which in turn means a faster and more eficient protocol.

  • by codealot ( 140672 ) on Wednesday February 18, 2015 @11:11AM (#49080079)

    Existing standards that are "good enough" tend to be hard to replace.

    • by ADRA ( 37398 )

      IPv6 needs 100% buy-in from all participants, or else you need to run/pay for bridging services that convert between the two. HTTP/2 is backwards compatible, meaning participants will transparently fall back to HTTP/1.1 if it's not supported. Plus, there are far fewer vendors of HTTP servers/clients than there are of IPv4-based software and hardware products.

      • That isn't at all true. My laptop has both IPv6 and IPv4 addresses. When I make a request to google.com, I'm using IPv6; when reaching sites that don't have IPv6, I fall back to IPv4. As a user I don't even notice this.

        Similarly, HTTP/2 has to be implemented on clients and servers before it will be functional, else both endpoints need to agree to fall back on HTTP/1.1.

        There's some additional network configuration needed before IPv6 is useful, but no need to convert anything.

    • There are fewer parties - a few servers and a few clients, both of which are updated fairly frequently (the servers because admins, the clients because auto-update) - and they are the only ones that matter. Google already supports HTTP/2 (née SPDY), so a huge percentage of internet traffic is already set up to use it as soon as browsers update (Chrome has had it as SPDY for years; Firefox has it or will soon).

      The v6 slowness was always the ISP (both on the client and server) and the CPE. Now that most of the big US ISPs h

    • Existing standards that are NOT good enough, but whose replacements are not backwards compatible, are also hard to replace.

      This is very different from IPv6 where both the server and the client have an automatic fallback ability. It's more like USB2.0 vs USB1.0 or SSL3 vs SSL2 (security issues with these aside). HTTP/2 can be implemented by any server regardless of the client (like Apache's mod_SPDY). It can be implemented by any client regardless of the server (Chrome already implements SPDY). All transparent to the
