HTTP/2 - the IETF Is Phoning It In

An anonymous reader writes: HTTP/2 is back in the spotlight again. After drawing significant ire over a proposal for officially sanctioned snooping, the IETF is drawing criticism for plowing ahead with its plans for HTTP/2 on an unrealistically short schedule and with an insufficiently clear charter. A few days ago the IETF announced Last Call for comments on the HTTP/2 protocol.

Poul-Henning Kamp writes, "Some will expect a major update to the world's most popular protocol to be a technical masterpiece and textbook example for future students of protocol design. Some will expect that a protocol designed during the Snowden revelations will improve their privacy. Others will more cynically suspect the opposite. There may be a general assumption of 'faster.' Many will probably also assume it is 'greener.' And some of us are jaded enough to see the "2.0" and mutter 'Uh-oh, Second Systems Syndrome.' The cheat sheet answers are: no, no, probably not, maybe, no and yes."

"Given this rather mediocre grade-sheet, you may be wondering why HTTP/2.0 is even being considered as a standard in the first place. The Answer is Politics. Google came up with the SPDY protocol, and since they have their own browser, they could play around as they choose to, optimizing the protocol for their particular needs. SPDY was a very good prototype which showed clearly that there was potential for improvement in a new version of the HTTP protocol. Kudos to Google for that. But SPDY also started to smell a lot like a 'walled garden'."

"The IETF, obviously fearing irrelevance, hastily 'discovered' that the HTTP/1.1 protocol needed an update, and tasked a working group with preparing it on an unrealistically short schedule. This ruled out any basis for the new HTTP/2.0 other than the SPDY protocol. With only the most hideous of SPDY's warts removed, and all other attempts at improvement rejected as 'not in scope,' 'too late,' or 'no consensus,' the IETF can now claim relevance and victory by conceding practically every principle ever held dear in return for the privilege of rubber-stamping Google's initiative."
  • by Anonymous Coward

    If the protocol sucks, it'll go mostly unadopted.

    See also: xhtml and arguably ipv6

    • by Anrego ( 830717 ) *

      What made xhtml suck? As a non-web guy who just occasionally dabbles, xhtml seemed like a good idea. Unclosed tags in html always looked ugly, and as far as I can tell, that's really the most notable difference between xhtml and html.

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        The only "bad" thing about XHTML is that it forced web developers to do things properly.

        Web developers are generally total amateurs, or the worst of the worst of the so-called "professionals".

        So expecting them to put even the slightest amount of care into their work is just too much. That includes properly closing markup tags.

        XHTML was admired and used very effectively by the very small proportion of real software developers who get stuck dealing with web development now and then.

        But a small number of people

        • Re:Shrug (Score:4, Insightful)

          by mwvdlee ( 775178 ) on Friday January 09, 2015 @08:39AM (#48774063) Homepage

          The problem with XHTML is that it would never be able to stand on its own.
          Even if all web developers started creating perfect XHTML code, we'd still have a huge legacy that would require all the browser kludges XHTML was supposed to fix.
          XHTML is best described as such: http://xkcd.com/927/ [xkcd.com]

          • by allo ( 1728082 )

            You can't use the same cartoon for every issue. This is not a problem of a competing standard.

            • by mwvdlee ( 775178 )

              It's a problem of trying to solve the problem of bad standards by introducing yet another standard, which ends up not replacing any of the old standards.

          • Re:Shrug (Score:4, Interesting)

            by CastrTroy ( 595695 ) on Friday January 09, 2015 @09:44AM (#48774513)
            The "problem" with HTML and browsers is that they have always worked with, and will always be expected to work with invalid code. Feed invalid code into a C compiler, a Java compiler, or XML interpreter, and if the syntax is incorrect, it will return an error and refuse to process anything. Browsers on the other hand are supposed to take invalid HTML and try to do something useful with it. If browser developers didn't have to spend so much time trying to make their code interpret invalid syntax, they could probably fix a lot of the other bugs that actually affect valid code.
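            A minimal illustration of that contrast, sketched in Python with standard-library parsers (no real browser engine involved): the strict XML parser refuses the malformed markup outright, while the lenient HTML parser still does something useful with it.

                from xml.etree import ElementTree
                from html.parser import HTMLParser

                bad_markup = "<p>unclosed paragraph<br><b>bold text</p>"

                # Strict XML parsing: refuses to process malformed input at all.
                try:
                    ElementTree.fromstring(bad_markup)
                except ElementTree.ParseError as err:
                    print("XML parser rejected it:", err)

                # Lenient HTML parsing: does "something useful" with the same input.
                class TagLogger(HTMLParser):
                    def handle_starttag(self, tag, attrs):
                        print("HTML parser saw start tag:", tag)

                TagLogger().feed(bad_markup)  # no exception; p, br and b are still reported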
            • by mwvdlee ( 775178 )

              OTOH, A typical web developer has to spend a lot of time trying to create a single HTML codebase that works across all browsers.
              Restrictive syntax validation in browsers wouldn't be as big a problem if all browsers could agree on what that syntax actually does.

            • Browsers on the other hand are supposed to take invalid HTML and try to do something useful with it. If browser developers didn't have to spend so much time trying to make their code interpret invalid syntax, they could probably fix a lot of the other bugs that actually affect valid code.

              While it may well be more difficult to write a lenient HTML parser, that effort is an insignificant rounding error within the context of the effort needed to produce a modern browser stack.

          • XHTML serves a purpose. It adds the eXtensibility so that XHTML can be incorporated into other XML documents and vice versa, and it allows you to parse, generate and manipulate it with XML tools. The fact that browsers still have to deal with non-XML HTML doesn't take away from its advantages.

            If you're generating HTML, there's no reason to not generate XHTML -- it's only the code that consumes it that has to deal with HTML. And what, besides a browser, consumes HTML? (Whatever it is, it's probably doing it

        • by Lennie ( 16154 )

          Let's not kid ourselves.

          We all make mistakes.

          Especially when we start to generate HTML based on different sources.

          With XHTML, one mistake meant the visitor got to see an error instead of most of the page.

          XHTML was just too complicated, too strict and not flexible enough.

          Could that also be the reason JSON is now much more popular than XML?

          • by neoform ( 551705 )

            JSON is very simple, but it's also very strict. A single typo will result in no data being readable.
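            That strictness is easy to demonstrate with Python's standard json module; the sample strings below are made up, and one stray comma makes the whole document unreadable:

                import json

                good = '{"name": "example", "tags": ["a", "b"]}'
                bad = '{"name": "example", "tags": ["a", "b"],}'   # one trailing comma

                print(json.loads(good))               # parses fine
                try:
                    json.loads(bad)
                except json.JSONDecodeError as err:
                    print("whole document rejected:", err)   # no partial data comes back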

      • Re:Shrug (Score:5, Interesting)

        by omnichad ( 1198475 ) on Friday January 09, 2015 @09:01AM (#48774213) Homepage

        As a web guy, I still use primarily XHTML. I may call it HTML5 when I need to use an HTML5 tag, but for a true developer/designer who doesn't use a GUI, properly nesting tags is a must. And with HTML5 being so loose, most XHTML documents are also valid HTML5 documents.

        Also - being able to load another web site's XML-compliant DOM to scrape data (for personal use)? Priceless.
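        Roughly the convenience being described, sketched with Python's standard XML tooling against a made-up XHTML fragment (a fetched page body would go where the literal string is):

            from xml.etree import ElementTree

            # A well-formed XHTML fragment stands in for a fetched page here.
            xhtml_page = """<html xmlns="http://www.w3.org/1999/xhtml">
              <body>
                <a href="/first">First link</a>
                <a href="/second">Second link</a>
              </body>
            </html>"""

            ns = {"x": "http://www.w3.org/1999/xhtml"}
            root = ElementTree.fromstring(xhtml_page)
            for link in root.findall(".//x:a", ns):   # plain XML queries, no HTML fixups needed
                print(link.get("href"), "->", link.text)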

      • XHTML sucks because websites are not well-defined as content; they're a mix of content, software and layout (unfortunately), and therefore a single XML file doesn't describe well what it needs to contain. If HTML were replaced by a whole new content system, it could be well-defined and described by XML, but HTML isn't it.

    • Re:Shrug (Score:4, Insightful)

      by mwvdlee ( 775178 ) on Friday January 09, 2015 @08:36AM (#48774053) Homepage

      IPv6 doesn't suck. We're just not feeling the pain of IPv4 enough to care.

      • Re:Shrug (Score:5, Interesting)

        by petermgreen ( 876956 ) <plugwash@p[ ]ink.net ['10l' in gap]> on Friday January 09, 2015 @09:30AM (#48774395) Homepage

        ways in which IPv6 sucks or sucked.

        1: mechanisms for interoperability were bolted on later, not included as core features that every client and router should support and enable by default. The result is that relays for the transition mechanisms are in seriously short supply on the internet and often cause traffic to be routed significantly out of its way.
        2: the designers were massively anti-NAT; as a result we don't have any interoperability mechanisms that go with the flow of NAT. Instead we have two incompatible interoperability mechanisms, one of which doesn't work with NAT at all and the other of which makes itself unnecessarily fragile by fighting the NAT rather than going with it. The company behind the latter mechanism also disabled it by default for machines on "managed networks"*, presumably because they were afraid of annoying corporate network admins.
        3: there was lots of dicking around with trying to solve other problems at the same time rather than focusing on the core problem of address shortage. For example, for a long time it was not possible to get IPv6 PI space because of pressure from people who wanted to reduce routing table size. Stateless autoconfiguration and the elimination of NAT seemed like good things at the time, but they raised privacy issues and added considerable complexity to home/small business deployments.
        4: there was little incentive to support it and so the time when you can use an IPv6 only system as a general internet client or server without resorting to transition mechanisms seems as far off as ever.

        * Defined as any network with something windows thinks is a domain controller.

        • by hab136 ( 30884 )

          You forgot DHCPv6 being rejected because stateless autoconfig/RAs would be enough - except you couldn't get DNS or PXE boot info that way because it's not part of routing, so couldn't be included in router advertisements (politics, not technical). So, DHCPv6 was bolted on after.

          • I recall reading about DHCPv6 right from the start. Problem was that autoconfig/RAs were thought to be magic bullets, w/o considering that admins may like to have control over how addresses were assigned.

            Autoconfig - particularly EUI-64, is fine for link local addresses and maybe even for site local (fd00::/8) addresses. It's a bad idea to use w/ global unique addresses, which would be better off managed by DHCP6.

            However, any IPv6 node not just can, but will have multiple IPv6 addresses. So it can h

          • by jandrese ( 485 )
            This was the dumbest thing. Not including the basic DNS functionality in the router advertisement--on a protocol utterly dependent on DNS because the addresses are so ugly--was a colossal blunder. Even then, stateless autoconfig has no mechanism to notify the DNS of the address it chose, so good luck populating a DNS server. Sure you don't need a hostname if you're purely a client, but that's far too narrow a view for how people actually use networks. I like a lot of what IPv6 does and think that in 10-
        • by allo ( 1728082 )

          It all boils down to the same thing: when you need to break a protocol anyway (and you do, because of the bigger addresses), take the opportunity to fix all the other mistakes as well.

        • 1. Was it? IPv6 was defined w/ what was thought would be transition mechanisms - IPv4 compatible addresses, and IPv4 mapped addresses. However, they weren't what worked for transitions, which is why they came up with techniques like tunneling, LSNAT, Dual Stack, DS-lite and so on.

          2. They did write a NAPT standard for those who just have to have NAT. It differs from NAT44 in that it's a 1:1 mapping b/w public and private addresses (I'm being colloquial here) rather than the many:1 that you have in NAT44. So

        • by Bengie ( 1121981 )
          NAT is a crutch with no valid use case other than "I can't do it correctly, so I'll fudge the data flow". It's an evil required to handle the limits of IPv4, but is being abused for many other reasons; I can't wait for it to die in a holy fire.
          • Aside from the address conservation issue, which is a problem, NAT does have a few things that are attractive to network admins:

            1. It abstracts networks behind the firewall so that an external malicious app wouldn't know what to look for

            2. In the absence of PI addresses, it enables a network to have a permanent internal address topology that would remain unchanged even if global unicast addresses changed due to changing an ISP, or an ISP itself changing addresses

            3. It enables load balancing

            This is why the I

        • 1: mechanisms for interoperability were bolted on later, not included as core features that every client and router should support and enable by default. The result is that relays for the transition mechanisms are in seriously short supply on the internet and often cause traffic to be routed significantly out of its way.

          The Internet is a production network. You either deploy IPv6 fully in a production-quality manner or don't do it at all. The mistake was in developing transition mechanisms in the first place, which have done nothing but get in the way of adoption.

          there was lots of dicking around with trying to solve other problems at the same time rather than focusing on the core problem of address shortage. For example for a long time it was not possible to get IPv6 PI space because of pressure from people who wanted to reduce routing table size.

          Not everyone in the world has access to the same buying power enjoyed by rich western states. *Someone* ultimately has to pay for PI, rinky-dink multi-homing and lazy TE shenanigans. It is a political calculation who that should be.

          Stateless autoconfiguration and the elimination of NAT seemed like good things at the time, but they raised privacy issues and added considerable complexity to home/small business deployments.

          Reality is IPv6 privacy exte

        • by sjames ( 1099 )

          For years, I had 6to4 working through a NAT just fine.

          Teredo was disabled by default on managed networks because it was effectively bypassing the firewall. It was a significant security risk.

          3 was a problem external to the scope of the protocol spec driven by a bunch of anemic routers. 3a (stateless autoconfig, no NAT) work quite well if you have a router that supports v6. They greatly simplify home and SOHO configurations (do nothing and it works). The privacy issues have been fully addressed. Filtering ra

    • If the protocol sucks, it'll go mostly unadopted.

      See also: xhtml and arguably ipv6

      I'll bite. While xhtml can be ignored rather safely, IPv6 not so much. IPv6 adoption is like the Y2K problem, but with no clear cut-off date. We know we will run out of IPv4 addresses, but when depends on who you speak to or what your analysis is based on. As someone who takes care of infrastructure, I would rather start addressing the IPv4 exhaustion problem with something other than double or triple NATting, and provide a solution that is already working when others are screaming for lack of foresight.

      To the

      • by unixisc ( 2429386 ) on Friday January 09, 2015 @10:32PM (#48779267)

        I'd argue that IPv6 is a variable availability problem, unlike Y2K. Y2K had a single cutoff date for everybody - 1/1/2000 - as opposed to the various dates on which the different RIRs run out. Which is why Asia and Europe were already there, and ARIN just got there last year.

        It is a good idea to start designing in IPv6 networks and introducing them in organizations now before running out of IPv4 addresses. That way, services can make use of IPv6 addresses, while the IPv4 addresses can be just transition addresses b/w IPv6 and IPv4 points.

        Even NAT, or more precisely, NAPT, is now available for IPv6 if people must have it: it eliminates many:1 mapping which was the main issue w/ IPv4, but has all the other advantages that NAT does.

  • by Anonymous Coward on Friday January 09, 2015 @08:15AM (#48773949)

    A typical "modern" web site loads untold numbers of scripts and other files from dozens of domains, mostly for tracking, A/B testing and other things that the user doesn't want or need. That's what makes the web slow. I don't think HTTP is a particularly nice protocol, but HTTP/2 is taking a bad protocol and making it worse by "optimizing" it, while the real bottleneck is obviously somewhere else.

    • by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Friday January 09, 2015 @08:55AM (#48774177) Homepage

      Whenever I bring a new computer up, I'm shocked all over again at just how slow browsers are before ad blocking is enabled. On most sites, all of the real content is there long before all of the ad and tracking content arrives. Today, nothing speeds up a slow computer and connection like Adblock Plus.

      • I think the major ad networks have some serious bottlenecks. Always the last thing to load. You would think that fixing that would be a huge priority considering it leads to people installing Adblock. That's the primary reason I've installed it for people - not because the sites they go to have ads in intrusive locations.

        • by allo ( 1728082 )

          Maybe they are the last thing to load because sane web developers let the page load its important parts before it loads the ads?

          • If the content is on another domain, it loads simultaneously in a separate HTTP request. It just takes longer to complete that smaller (but slower) request than it takes to load the entire rest of the web site.

            And some web sites don't define a fixed width/height for the ad, meaning the page (or big chunks of the page) doesn't actually render until the ad finishes loading.

          • To a lot of content providers, the ads are the important parts.

            Ads these days seem to have a lot of dynamically generated content that isn't fully known until after the initial page is downloaded and some Javascript is run, perhaps even inspecting the local computer and its cookies. When that happens, it's impossible for loading the ads to happen concurrently with the main content. You're guaranteed a whole second round trip before the ad content is available.

    • Part of the reason for dozens of (sub)domains is that even modern browsers still have a connection limit per host. And there's a lot of overhead in establishing an HTTP connection. If you're loading lots of tiny files, it makes sense to download them all through one HTTP connection. HTTP/1.1 already has pipelining [wikipedia.org], but almost no server is set up to use it.

      • by Bengie ( 1121981 )
        HTTP/1.1 pipelining requires responses to be returned in the same order the requests were made. This means every response is dependent on the ones ahead of it, and one long-running response can block all the other responses behind it. HTTP/2.0 "fixes" this.
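        A minimal sketch of that in-order constraint, assuming a plain HTTP/1.1 server (example.com is used here) that honours keep-alive; both requests are written up front, but the responses can only come back in the order the requests were sent:

            import socket

            HOST = "example.com"   # assumed: any HTTP/1.1 server that allows keep-alive
            requests = (
                b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
                b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
            )

            reply = b""
            with socket.create_connection((HOST, 80), timeout=5) as sock:
                sock.sendall(requests)          # both requests pipelined up front
                try:
                    while chunk := sock.recv(4096):
                        reply += chunk          # responses arrive strictly in request order
                except socket.timeout:
                    pass                        # a server may simply ignore the pipelined request

            print(reply.decode(errors="replace")[:300])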
        • When I said lots of tiny files I was referring to generally static content. But having all of these on one subdomain and the dynamic content on the primary domain name will allow this to happen asynchronously. Pipelining would still fix this as long as we don't let go of all our other current tricks.

      • Part of the reason for dozens of (sub)domains is that even modern browsers still have a connection limit per host. And there's a lot of overhead in establishing an HTTP connection. If you're loading lots of tiny files, it makes sense to download them all through one HTTP connection. HTTP/1.1 already has pipelining, but almost no server is set up to use it.

        Completely disagree. RFC 7413 is already an RFC, unlike SPDY, and already solves the problem of overhead for new requests using stateless cookies, without keeping session state (i.e. tied-up resources) open speculatively in anticipation of future reuse.

        Multiplexing multiple streams within a single stream = Head of Line blocking. A problem that does not exist when multiple independent streams are employed.

        The same concept applied to TLS currently in the pipeline allows for requests to be processed by the

        • The only reason you've given for HTTP/2.0 being worse is that it's not already an RFC. SPDY and by extension HTTP/2.0 does not have head of line blocking issues. The requests are multiplexed, but tagged, and requests can be answered out of order.

          Head of line blocking is really only an issue for dynamic content. Pipelining all of your static resources through a single connection to a single subdomain is more efficient than multiple requests. And nothing is going to be stopping you from using your bandwid
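            The tagging idea can be sketched independently of the real HTTP/2 wire format: frames carry a stream ID, so each response is reassembled on its own no matter how the chunks are interleaved. The frame data below is invented purely for illustration:

                from collections import defaultdict

                # (stream id, payload chunk) pairs, arriving interleaved and out of request order
                frames = [
                    (3, b"<slow dyn"), (5, b"fast.css"), (7, b"logo.png"),
                    (3, b"amic page>"),
                ]

                responses = defaultdict(bytes)
                for stream_id, chunk in frames:
                    responses[stream_id] += chunk    # each stream reassembled independently

                for stream_id, body in sorted(responses.items()):
                    print(stream_id, body)           # stream 3 finishing last never blocked 5 or 7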

          • The only reason you've given for HTTP/2.0 being worse is that it's not already an RFC.

            It is worse because it is HOL'd and requires additional resources to manage state persistence for idle TCP channels. The other solutions leverage stateless cookies without the speculative tradeoffs inherent in sitting on idle sessions. This is a BFD when you're servicing thousands of concurrent requests.

            SPDY and by extension HTTP/2.0 does not have head of line blocking issues. The requests are multiplexed, but tagged, and requests can be answered out of order.

            *Everything* implemented over TCP has head of line blocking issues. This property is inherent in the definition of a stream which is what TCP implements. The only way around it is multiple independent stream

            • There's a different type of HOL blocking specific to multiplexed HTTP pipelining (at the next highest protocol layer). If one resource is slow to load because of being dynamic, it can hold up the entire queue. That's not an issue with SPDY or HTTP/2.0.

              My understanding is that your browser cookies and user agent string would be re-sent with every request using RFC7413. That's not small. And it can't handle POST requests safely, meaning fragmented protocols.

              • There's a different type of HOL blocking specific to multiplexed HTTP pipelining (at the next highest protocol layer). If one resource is slow to load because of being dynamic, it can hold up the entire queue.

                This makes little sense. HTTP/1.1 pipelining is only even possible if the size of the content is known a priori. Outside of limited cases, it's hard to know the size in advance of taking the time to generate it.

                I do agree there are multiple instances at multiple layers that can have the effect of stalling the pipeline.

                My understanding is that your browser cookies and user agent string would be re-sent with every request using RFC7413. That's not small.

                It's insignificant; what matters for senders is latency.

                And it can't handle POST requests safely, meaning fragmented protocols.

                I hope you're kidding; there are no useful transaction semantics defined for POST requests or any other HTTP verbs. Any assumption this

                • Multiplexing multiple streams within a single stream = Head of Line blocking. A problem that does not exist when multiple independent streams are employed.

                  SPDY will allow later requests to be answered before the first one. You seem to be focusing on the aspect of re-using old stale connections. I'm talking about the many dozens of connections needed on the initial visit to a web site right now.

                  I hope you're kidding; there are no useful transaction semantics defined for POST requests or any other HTTP verbs. Any assumption this is somehow safe today is wrong. It can only be made safe by application layer detection.

                  The RFC itself says that it's vulnerable to replay attacks. Even more so than what's currently in use.

                  • SPDY will allow later requests to be answered before the first one. You seem to be focusing on the aspect of re-using old stale connections. I'm talking about the many dozens of connections needed on the initial visit to a web site right now.

                    When I mention head of line blocking I am referring to the transmission of the overall stream of data transported via TCP. Whatever structure comprises SPDY, the stream itself is subject to head-of-line blocking. Multiple unrelated assets within a shared stream are at the mercy of the properties of that stream. Multiple unrelated parallel streams are able to operate *independently* of each other.

                    The problem occurs normally (bad luck, ICW) and especially with lossy networks such as a high latency wireless

    • by mellon ( 7048 )

      Personally I have no opinion about HTTP/2, but I have to say that this anonymous hit piece looks a lot like some IETF participant who didn't like how the process came out trying to create the appearance of consensus against it by pumping up the anger of the interwebs without actually saying what's wrong with the spec. When I see people making statements not supported by explanations as to why we might want to consider them correct, my tendency is to assume that it's hot air trying to bypass the consensus process.

      • Personally I have no opinion about HTTP/2, but I have to say that this anonymous hit piece looks a lot like some IETF participant who didn't like how the process came out trying to create the appearance of consensus against it by pumping up the anger of the interwebs without actually saying what's wrong with the spec. When I see people making statements not supported by explanations as to why we might want to consider them correct, my tendency is to assume that it's hot air trying to bypass the consensus process.

        It's also a bit annoying to see the IETF accused of having published a document advocating snooping when in fact someone floated that idea in the IETF and it was shot down in flames, and what we actually published was a document stating that snooping is to be considered an attack and addressed in all new IETF protocol specifications (RFC 7258).

        What "anonymous hit piece"? Second link in the fine summary [acm.org] has a clear byline, Poul-Henning Kamp [freebsd.org].

        From the article:

        HTTP/2.0 is not a technical masterpiece. It has layering violations, inconsistencies, needless complexity, bad compromises, misses a lot of ripe opportunities, etc. I would flunk students in my (hypothetical) protocol design class if they submitted it. HTTP/2.0 also does not improve your privacy.

        I too would like more details, but I doubt he's just blowing smoke here.

    • by Afty0r ( 263037 )

      loads untold numbers of scripts and other files from dozens of domains, mostly for tracking, A/B testing and other things that the user doesn't want or need

      I know this is a popular meme around here, and on the tracking side I am kinda with you (though it is nice to have ads which are more contextually relevant to me and this can help), but on A/B testing users DEFINITELY want and need this... it's a fantastic tool in making web sites better over time - meaning all users benefit from continued usage.

      Arguing against A

    • by Qzukk ( 229616 )

      As a PHP developer I can tell you exactly where one huge bottleneck is: POST-Redirect-GET. The current paradigm for handling POST requests requires that I initialize my framework, load all my objects, create a new object, save it in the database then set fire to the whole thing and tell the browser to redirect to another page where I initialize my framework, load all the objects, recreate the object I just burned down and plug it into the relevant view.

      Not being able to respond to a POST request with a vie
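      For illustration, the POST-Redirect-GET pattern being described, sketched with Flask (assuming it is installed) rather than PHP; the route names and the in-memory store are made up:

          from flask import Flask, redirect, request, url_for

          app = Flask(__name__)
          items = {}   # stands in for the database

          @app.route("/items", methods=["POST"])
          def create_item():
              # First request: the framework boots, the object is created and saved...
              item_id = len(items) + 1
              items[item_id] = request.form.get("name", "")
              # ...then everything is thrown away and the browser is told to GET the view.
              return redirect(url_for("show_item", item_id=item_id))

          @app.route("/items/<int:item_id>")
          def show_item(item_id):
              # Second request: the framework boots again and reloads what was just saved.
              return f"Item {item_id}: {items[item_id]}"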

    • This is precisely why header compression is so useful.

      Loading this page, for example, I see 93 separate requests, dozens of which are less than few kilobytes. And while there are a number of different domains, there are quite a few requests that share the same domain. I imagine that having only one connection per domain, instead of one connection per request, would reduce the number of connections by a factor of five or more (I'm not taking the time to look through and count nearly a hundred requests).

      So
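      A back-of-the-envelope estimate with assumed header sizes (the per-request byte counts below are guesses, not measurements) shows why repeated headers add up across roughly 93 requests:

          requests_per_page = 93
          request_header_bytes = 700    # assumed: cookies + user agent + accept headers per request
          response_header_bytes = 350   # assumed average response header size

          total = requests_per_page * (request_header_bytes + response_header_bytes)
          print(f"~{total / 1024:.0f} KiB of headers per page load")   # ~95 KiB with these guesses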

  • Alarmist much? (Score:2, Interesting)

    by Anonymous Coward

    I think the submitter doesn't like Google.

    First, 'SPDY was a very good prototype' followed by 'the most hideous of SPDY's warts removed' was summed up with 'the IETF can now claim relevance and victory by conceding practically every principle ever held dear in return for the privilege of rubber-stamping Google's initiative.'.

    Adopting and modifying a demonstrably working improvement for a standard is no cause for ire. Besides, this is the IETF we're talking about, be glad this is a modicum of improvement an

    • Given how much of HTML and other standards Internet Explorer is responsible for, we should be glad that Google is having an influence.

      • no, at this point (can't believe I'm saying this) - I trust MS more than I trust google.

        I know what MS is up to and I can deal with them. They are not into wholesale spying and being *everywhere* on the web (you can't avoid a Google tracking site; everyone uses them now, sadly).

        but google has depths of evil that we have not even seen yet. they have shown their true colors. they don't sell things TO US, they SELL US. that's a world of difference.

        • You can if you install Ghostery https://www.ghostery.com/en/ [ghostery.com]

          I also recently switched to Duck Duck Go, since I'm sick to death of all the spying and tracking my internet habits attract.

          Include AdBlock of some sort, and you've just made the internet a much nicer place again (assuming you stay off Facebook and Twitter ;-> ).

    • by mbkennel ( 97636 )
      | First, 'SPDY was a very good prototype' followed by 'the most hideous of SPDY's warts removed' was summed up with 'the IETF can now claim relevance and victory by conceding practically every principle ever held dear

      aka "not worshipping doe-eyed at my political whinging"

      | in return for the privilege of rubber-stamping Google's initiative.'.

      aka "they wanted to ship something that works, now"

      Sure, it should be called 1.2 and not 2.0, but that's marketing BS anyway.
  • by Anonymous Coward

    I see no hidden agendas possible in this article. No, really..

    • Re: (Score:2, Funny)

      by Chrisq ( 894406 )

      I see no hidden agendas possible in this article. No, really..

      That's because you viewed it with an HTTP 2.0 compliant browser

    • Er....the guy who wrote it is a heavy contributor to FreeBSD and developed Varnish.

      Although he may have an agenda it's not because he's a shill for Microsoft.

    • Given that IE11 already supports SPDY, and the pre-release version of IE12 in Windows 10 supports the current HTTP/2.0 draft, that would be a very strange way to protest, and with no clear reason.

    • Oh, and apparently it's an article by Poul-Henning Kamp. You know, the guy who wrote like half of FreeBSD.

  • With IPv6 the IETF has shown that they're on a long path toward oblivion. Too many cooks in the kitchen.

    • by bigpat ( 158134 )

      With IPv6 the IETF has shown that they're on a long path toward oblivion. Too many cooks in the kitchen.

      We are all on the long path toward oblivion...the trick is to try and keep up.

  • by Anonymous Coward

    This was written by Poul-Henning Kamp, and published in the ACM Queue [acm.org].

    Intosi

    • That article is actually linked to in the post. It's just one of the worst submissions I've seen for a while in making it hard to find the article it is referencing.

  • by mysidia ( 191772 ) on Friday January 09, 2015 @08:55AM (#48774175)

    Criticisms belong in the IETF discussion forum, but as long as the protocol is an improvement over HTTP/1, then this is progress. Sorry, PHK, about the Not Invented Here.

    Yes, if the improvement to be made is great and Google or a 3rd party has already done enough work to have good results, then the standardization process should be expeditious, and if the IETF wishes to stay relevant, they should work to provide technologically better standards at a reasonable pace.

    • PHK is a great man most of the time, but when he is wrong or feels overlooked, he is indistinguishable from a troll.

      I like the fact IETF keeps this revision simple. The last thing we need is something overengineered that will never be implemented fully.

  • SPDY is a protocol by Google, for Google. Unless you are doing more or less the same as Google does, SPDY is not very relevant for you. Having multiple HTTP requests via a single connection via multiplexing is only relevant if all website content is located at one and the same server. This is not the case for many websites on the internet. Images, especially for advertisements, are often located at a different webserver. I've read about real-life scenarios where SPDY only gave up to a 4% speed increase. And f

    • by Nemyst ( 1383049 )
      If SPDY is indeed focused on single-server sources like Google, then I'd say the opposite: SPDY is very relevant to the overwhelming majority of internet users. The vast majority of traffic is done on a select few sites such as Google, YouTube and Netflix. All of them benefit from that new protocol.

      It won't necessarily help small websites and hosts, but let's not forget that a protocol isn't strictly about them.
  • by Lennie ( 16154 ) on Friday January 09, 2015 @09:08AM (#48774253)

    The Tao of IETF still mentions:
    "We reject kings, presidents and voting. We believe in rough consensus and running code"
    http://www.ietf.org/tao.html [ietf.org]

    Maybe it's just me, but might it apply here ?

    Before the httpbis working group started looking at proposals for HTTP/2.0, SPDY was already implemented and deployed in the field by multiple browser vendors, library builders for servers and several large websites. A bunch of research documents were written. And a protocol specification document draft existed. SPDY wasn't created in the open per se, but it was iterated on with the help of the community.

    So the IETF WG let people suggest proposals:
    http://trac.tools.ietf.org/wg/... [ietf.org]

    And then they voted.

    SPDY got selected.

    Also the SPDY draft was used as a basis for writing the new HTTP/2.0 draft.

    Is anyone surprised ?

    There might be fundamental parts of the protocol that would have turned out differently if they had gone through an open collaborative process.

    But at first glance it doesn't look that bad.

    I can see the appeal of rubberstamping what already exists.

    • by dackroyd ( 468778 ) on Friday January 09, 2015 @12:42PM (#48776205) Homepage

      But at first glance it doesn't look that bad.

      I can see the appeal of rubberstamping what already exists.

      That's the real problem with the proposed protocol; it solves today's problems for today's computers. It doesn't attempt to look ahead and solve problems that should be solved over the next ten years.

      Seeing as it's going to take a few years and a huge amount of effort before HTTP 2 is widely adopted, we're going to need to start working on a replacement before it's even finished its rollout.

      Poul-Henning has written his thoughts on the problems that actually should be solved in the next version of HTTP: http://phk.freebsd.dk/words/ht... [freebsd.dk]

      The fact that the IETF has decided to ignore those problems so that HTTP 2 can be pushed out the door is what makes the situation such a joke. Almost the only entities that will benefit from having HTTP 2 in the next 5 years are companies that have a web presence on the same scale as Google, Facebook, Twitter etc., which will save a small amount of money through reduced bandwidth costs.

      For everyone else, rolling out HTTP 2 will be a massive initial and ongoing technical burden, with almost no benefit.

  • by raymorris ( 2726007 ) on Friday January 09, 2015 @09:35AM (#48774427) Journal

    HTTP/2, like Java, was written with the time frame in mind, and it was decided that it's better to release a good specification soon than insist on a perfect specification that's never finished and deployed. There is a reason for that - a number of reasons, actually, but the #1 reason is IPv6.

    On April Fool's day 2002 I announced that the backbones, root name servers, and other core infrastructure would be doing a cutover to IPv6 and we expected a few hours of downtime for the internet as a whole. The story was believable because IPv6 had been in the works for a couple of years and switchover at that point seemed logical, if the reader wasn't a network engineer.

    Thirteen years later, 95% of internet traffic is still IPv4. Ten or twenty years from now, do we want to be using a better version of HTTP, or still be using HTTP/1.1 and talking about HTTP/2?

    • by bigpat ( 158134 )

      I think you could argue that IPv6 is a counter example to your argument. The parallel with HTTP/2.0 could be that, no matter what the new features are, HTTP/1.1 might be good enough for a very long time.

      IPv6 perhaps came out 15 years too early because IPv4 deficiencies had quicker and easier workarounds than the switch to IPv6, and even some deficiencies like the limitation on address space were turned into a perceived benefit as more and more security concerns meant the boxes doing network address translation were n

        • Was it really the 'internet of things' that was required to drive the urgency? IPv4 has a max of not 4 billion but actually 3.2 billion usable addresses (if one does the math), and even w/ NAT, there were limitations, since after the available addresses had been split among the ISPs' Class A/B/C customers, one would still need multiple layers of NAT. So IPv6 was always gonna be needed. I'd imagine that Mobile IP is what has driven the need for IPv6, since having several correspondent and mobile nodes was gonna be needed
        • by bigpat ( 158134 )

          Looking at IPv4 and a population growth chart: in 1981, when IPv4 and RFC 791 were issued, it would have been almost sufficient to give an IP address to every man, woman and child on the planet. This at a time when rotary phones were still common and the idea of every man, woman and child having their own computer was pretty extreme. Also, the idea of keeping all those extra bytes in the routing tables would have been pretty wasteful considering it was a long-term theoretical problem. Wasteful and expens

    • Thirteen years later, 95% of internet traffic is still IPv4. Ten or twenty years from now, do we want to be using a better version of HTTP, or still be using HTTP/1.1 and talking about HTTP/2?

      I don't care if we're still using HTTP/1.0 a hundred years from now. IPv6 is actually needed to solve an actual problem and offers real benefit to users needing to directly communicate with their peers - especially those currently stuck behind carrier NATs lacking a global address of their own.

      HTTP/2 isn't going to make anyone's online experience any better or faster. Even today, with our quad-core multi-GHz CPUs, GPUs, several GB of RAM and dozens of Mbits of bandwidth, sites still take forever to load... the only t

  • by allo ( 1728082 ) on Friday January 09, 2015 @09:40AM (#48774473)

    Or at least something backward compatible, no stinking binary protocols.

    Compression? Bandwidth is bigger and cheaper than ever. So why?
    SPDY's first draft had the nice feature of requiring TLS, but that was dropped too. So not even this advantage remains for SPDY/HTTP2.

    • by wonkey_monkey ( 2592601 ) on Friday January 09, 2015 @10:11AM (#48774709) Homepage

      Compression? Bandwidth is bigger and cheaper than ever. So why?

      Because it's faster whether or not you've saturated your bandwidth, for a start. Secondly, just because you're not paying for your internet traffic by the megabyte, doesn't mean no-one else is along the way.

      Lastly - why not?

    • by DarkOx ( 621550 ) on Friday January 09, 2015 @10:22AM (#48774801) Journal

      Bandwidth is bigger and cheaper than ever. So why?

      In places where it's being delivered by cable to the edge, or damn near to the edge, yes. The rest of the world, not so much.

      Think about it. Anywhere dense enough AND stable enough is pretty well covered for high-speed Internet access.

      The problem is that everywhere else is being more and more covered by cellular. There is only so much spectrum, and there are laws of physics that place caps on just how much information can be sent over it.

      So you have trouble on both ends. You have very high-population places, which us westerners might think of as slums, where people want to run lots of cellular radios. You can only get so far with micro cells and wifi. After all, the micro cell or wifi has to connect to something. If the cell is too small, you could have just pulled the cable or fiber in.

      Ditto for sparsely populated areas. You again get lots of people on one tower there as well (but using more TX power), again because it's only economical and practical to put in so much density. Satellites have essentially the same limitations.

      We are approaching the point where many of the high-bandwidth have-nots are likely to remain have-nots pretty much no matter what policy well-meaning pols come up with, because at some point basic economic reality slaps you in the face. Yes, there is still plenty of the USA to cover; we got just about everyone power and phone 60+ years ago, and fast Internet will get there too, but it will take time.

      I don't think we should let keeping bandwidth requirements to a minimum stand in the way of solving real problems and doing new things, but I also don't think it's a good or fair idea to just completely say "fuck it" with regard to something like protocol overhead.

      • by 0123456 ( 636235 )

        There's a much, much easier way to reduce mobile bandwidth usage.

        Block ads.
        Block tracking Javascript.
        Block all the crap, other than the actual content you're actually trying to download.

      • Most of the bandwidth for modern web sites goes to content, not the HTTP headers. That's even with content compression, which is already part of HTTP/1.1. Reducing overhead by going to binary in the headers isn't going to reduce the bandwidth requirements by enough to notice, and comes at the cost of not being able to use very simple tools to do diagnosis and debugging (I've lost count of the number of times I was able to use telnet or openssl and copy-and-paste to show exactly what the problem with a serve
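        The kind of copy-and-paste debugging being described, done here as a Python one-off against example.com instead of telnet or openssl s_client; with a binary framing layer the same trick would need a decoder in the middle:

            import socket
            import ssl

            ctx = ssl.create_default_context()
            with socket.create_connection(("example.com", 443), timeout=5) as raw:
                with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
                    # The request is literally the text you could paste into an openssl s_client session.
                    tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
                    print(tls.recv(4096).decode(errors="replace"))   # response headers come back human-readable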

        • by DarkOx ( 621550 )

          modern web sites goes to content

          Sites maybe. Applications on the other hand stand to gain considerably. Watch some "modern" application sit there and make 100's of ajax requests for 14 lines of JSON and get back to me. For something web based trying to show any real-time information those headers can be 30% of the traffic.

          As far as tooling I am not that worried, there will be plenty of accepted widely used tools available to dump web headers if the protocol went binary. I have never had anyone question if tcpdump is decoding ether f

    • by ADRA ( 37398 )

      There's no reason to hate a protocol because it's binary. That's just retarded, since any protocol analyzer will be able to represent the data in a concise way for anyone who cares to know. The benefit of the new tech as I see it is this: you hit home page X, and analytics has proven that 95% of users on page X go to page Y. Why not start batching out page pieces from Y early, so that when the user navigates to Y, it'll be there significantly faster? Seems like a win for me, as long as there's some semblance of s

      • Because none of that requires a new protocol? You can do that in HTTP/1.0, it's entirely a matter of client programming. And yes a protocol analyzer can decode a binary protocol for you, but it takes a bit of work to set them up to display one and only one request stream. A text-based protocol, meanwhile, can be dumped trivially at either end just by dumping the raw data to the console or a log file. Decoding and formatting a binary protocol takes quite a bit more code and adds work. As for bandwidth, the H

        • by ADRA ( 37398 )

          Well, to my understanding, it isn't as simple as client programming alone. Even if you do open an out of band background streamer for backend pages, you still have a round-trip per resource, where you could, say pump images 1-100 in one push instead of a 'request-response * 100' loop along a persistent HTTP stream. Any more client-side processing, and you'd have to change the contract and let javascript parse and insert individual resulting blocks into the cache individually, which I don't believe is the ca

    • by Bengie ( 1121981 )
      Many web servers disable compression because it leaks data and has been a security threat for a while now. It has been mentioned before that IPsec implements compression, VPNs implement compression, HTTP does, and so do the files transferred over HTTP, like images. How many times does data need to be compressed? Once. We have too many layers recompressing the same freaking data. Find a layer, do it all there.
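      The "compress once" point is easy to demonstrate: gzipping data a second time gains nothing and adds a little overhead. The sample payload below is made up:

          import gzip

          data = b"GET /style.css HTTP/1.1\r\nHost: example.com\r\n" * 200   # repetitive, compresses well

          once = gzip.compress(data)
          twice = gzip.compress(once)   # compressing the already-compressed stream

          print(len(data), len(once), len(twice))   # the second pass is slightly larger, not smaller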
  • by codealot ( 140672 ) on Friday January 09, 2015 @10:35AM (#48774923)

    Two remarkable things about HTTP/1.1.

    One, it remained a relatively simple protocol. Yes there are a lot of nuances around content negotiation, transfer encodings and such but at its core it is a simple, flexible and effective protocol to use, and can be implemented quite efficiently via persistent connections and pipelining. It was designed for response caching as well, and the CDN infrastructure is in place to make use of caching whenever possible.

    Two, despite the simplicity of HTTP/1.1, a shocking number of implementations get it wrong or don't use it efficiently. Pipelining is disabled in many implementations due to compatibility concerns, and few applications can use it effectively. Many applications make excessive and unnecessary use of POST requests which are inherently not cacheable and result in many synchronous requests performed over high-latency connections. (SOAP was notorious for that.)

    I'm skeptical that any protocol revision can improve on HTTP/1.1 sufficiently without making it harder to implement correctly than it already is.

    If there were a broad initiative to begin to use the features of HTTP/1.1 properly, as they were designed, most of the shortcomings would vanish without the need for a new protocol.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      As an implementer of HTTP/1.1 and a participant in the HTTP WG, I can say that HTTP/1.1 has many defects; the worst one is that people *think* they get it right without reading the docs. When you read the docs, you realize how many corner cases you can accidentally get into. Most of them are caused by the inheritance of HTTP/1.0, where content-length was not needed. Other important issues concern the need to differentiate idempotent vs non-idempotent requests when you want to reuse an existing connection. It go

      • Can't argue, and thank you for the interesting examples. I don't think HTTP is perfect, I was wondering out loud whether it is merely good enough.

        Seems to me though that most of those problems arise from sloppy implementations (like you said, did they read the docs??) which supports my 2nd point. A perfect specification isn't going to prevent poor implementations.

  • This entire summary is devoid of content. It's just a long ranting insult with no valuable technical information at all. It could be talking about anything. This does not belong on Slashdot. With Slashdot these days I just want to downvote entire articles, or be able to edit the summary or something. HTTP 2.0 is probably a good topic to discuss, but not with a summary like this one.

    Some will expect McDonald's new french fries to be a masterpiece, while others expect it to be a great example of design. Others will be cynical. There may be an assumption it is 'tastier.' Others will think it is 'greener.' Well the truth is yes, no, yes, yes, maybe, only sometimes, and definitely not.

    Instead, how about something more like:

    The IETF is preparing to ratify HTTP 2.0. This is the first significant update to the most widely-used protocol blah blah blah... However, the proposal is very polarizing because of ...

  • What, precisely, is wrong with it?

    "Because Google." Is not an answer.

  • by Anonymous Coward

    What PHK is not telling you is that he fought, successfully, against mandatory encryption in HTTP/2.

  • The IETF, like the UN, is nothing more than a meeting space for those with power to negotiate when it's in their best interests to do so.

    I think it is unfortunate that the principles the IETF claims to stand for (technical merit over BS) are allowed to so easily be silenced by hand waving and procedural BS. Specifically, it is laughable that no appeal by anyone for any reason has ever succeeded within the IETF structure.

    I've heard rumors this is not true yet having been subscribed to IETF announce for 10+ years and thumbing
