
PHK: HTTP 2.0 Should Be Scrapped

Posted by Unknown Lamer
from the just-give-up dept.
Via the HTTP working group list comes a post from Poul-Henning Kamp proposing that HTTP 2.0 (as it exists now) never be released, after the plan to adopt Google's SPDY protocol with minor changes revealed flaws that SPDY/HTTP 2.0 will not address. Quoting: "The WG took the prototype SPDY was, before even completing its previous assignment, and wasted a lot of time and effort trying to goldplate over the warts and mistakes in it. And rather than 'ohh, we get HTTP/2.0 almost for free', we found out that there are numerous hard problems that SPDY doesn't even get close to solving, and that we will need to make some simplifications in the evolved HTTP concept if we ever want to solve them. ... Wouldn't we get a better result from taking a much deeper look at the current cryptographic and privacy situation, rather than publish a protocol with a cryptographic band-aid which doesn't solve the problems and gets in the way in many applications? ... Isn't publishing HTTP/2.0 as a 'place-holder' just a waste of everybody's time, and a needless code churn, leading to increased risk of security exposures and failure for no significant gains?"
  • Encryption (Score:5, Insightful)

    by neokushan (932374) on Monday May 26, 2014 @08:31PM (#47096009)

    I hope that whatever HTTP 2.0 ends up being enforces encryption by default.

  • by Anonymous Coward on Monday May 26, 2014 @08:48PM (#47096107)

    Dang, I'm sad Linus Torvalds, John Carmack, et al. are "too self-important" because someone else made a Wikipedia page about them. Or maybe programming, especially concerning the next standard for what most of the internet would ideally run, is too important for fucking hipsters to get involved.

  • by l0ungeb0y (442022) on Monday May 26, 2014 @09:01PM (#47096165) Homepage Journal

    And why shouldn't we have a moratorium and review, ESPECIALLY in regard to what has come to light about how fucked the internet is in just the last year?

    Why proceed blindly with a protocol that comes from Google, who gladly works hand in hand with the NSA and is a corporation whose core focus is to track and monitor every single person and thing online?

    What? Just proceed with something that addresses NONE of the present mass surveillance issues, and possibly could make us less secure than we are now just because we don't have a fall back lined up?

    Or how about we take this time to step back and reevaluate what HTTP2.0 needs to be -- such as changing to a focus on security and privacy.

  • Re:Encryption (Score:4, Insightful)

    by gweihir (88907) on Monday May 26, 2014 @09:05PM (#47096193)

    That is stupid. First, encryption essentially belongs on a different layer, which means combining it with HTTP is always going to be a compromise that will not work out well in quite a number of situations. Hence, at the very least, you should be able to leave it out and either do without or use a different mechanism on a different layer. SSL (well, actually TLS) would have worked if it had solved the trust-in-certificates problem, which it spectacularly failed at due to naive trust models that I now suspect were actively encouraged by various Three Letter Agencies at the time. In fact, if you control the certificates on both ends, TLS works pretty well and does not actually need a replacement.

    That said, putting in security for specific, limited (but widely used) scenarios can be beneficial, but always remember that it makes the protocol less flexible, as it needs to do more things right. And there has to be a supporting infrastructure that actually works in establishing identity and trust. In order for the security to have any benefit at all, it must be done right, must be free from fundamental flaws and must give the assurances it was designed to give. That is exceedingly hard to do and very unlikely to be done right on the first attempt.
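
    A minimal Python sketch (file names and host are hypothetical) of the "control the certificates on both ends" case described above: both sides trust only a private CA, so the public CA trust model never comes into play.

    import socket
    import ssl

    def pinned_client(host="internal.example", port=8443):
        # Trust *only* our own CA, not the system certificate bundle.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.load_verify_locations(cafile="my-private-ca.pem")
        # Present a client certificate so the server can authenticate us too.
        ctx.load_cert_chain(certfile="client-cert.pem", keyfile="client-key.pem")
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                tls.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n\r\n")
                return tls.recv(4096)

    def pinned_server(port=8443):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(certfile="server-cert.pem", keyfile="server-key.pem")
        # Require a client certificate signed by the same private CA.
        ctx.load_verify_locations(cafile="my-private-ca.pem")
        ctx.verify_mode = ssl.CERT_REQUIRED
        with socket.create_server(("", port)) as srv:
            conn, _ = srv.accept()
            with ctx.wrap_socket(conn, server_side=True) as tls:
                print(tls.recv(1024))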

  • Moving goal posts (Score:5, Insightful)

    by abhi_beckert (785219) on Monday May 26, 2014 @09:13PM (#47096243)

    I don't think HTTP has any problems with security. All the real world problems with HTTP security are caused by:

      * the dismally slow rollout of DNSSEC. It should have been finished years ago, but it has barely even started.
      * the high price of acquiring an SSL certificate (it's just bits!).
      * the slow rollout of IPv6 (SSL certificates generally require a unique IP, and we don't have enough to give every domain name a unique IP).
      * arguments in the industry about how to revoke a compromised SSL certificate, which have led to revocation being almost useless.
      * SSL doesn't really work when there are thousands of certificate authorities, so some changes are needed to cope with the current situation (e.g., DNSSEC could be used to prevent two certificate authorities from signing certificates for the same domain name).
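
    A hedged Python sketch (using the third-party dnspython package; host and record layout are illustrative) of how the DNSSEC idea in the last bullet could work: the domain owner publishes a hash of the certificate in a DNSSEC-signed TLSA record, so a certificate issued by a second authority for the same name simply would not match.

    import hashlib
    import ssl
    import dns.resolver  # pip install dnspython

    def dane_matches(host, port=443):
        # DANE/TLSA records live at _<port>._tcp.<host>.
        answers = dns.resolver.resolve(f"_{port}._tcp.{host}", "TLSA")

        # Hash the certificate the server actually presents.
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        cert_sha256 = hashlib.sha256(der).digest()

        for rr in answers:
            # Only the common "full certificate, SHA-256" combination is handled
            # here (selector 0, matching type 1); a real validator also needs the
            # resolver to verify the DNSSEC signatures.
            if rr.selector == 0 and rr.mtype == 1 and rr.cert == cert_sha256:
                return True
        return False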

  • Death by Committee (Score:5, Insightful)

    by bill_mcgonigle (4333) * on Monday May 26, 2014 @09:19PM (#47096283) Homepage Journal

    HTTP/1.1 is roughly seventeen years old now - technically HTTP/1.0 came out seven years before that, but in terms of mass adoption, NSFNet fizzled in '94 and then people really started to pay attention to the web - I had my first webpage about six months before that (at college), and there were maybe a dozen people in the whole school who had heard of it previously. Argue for seven years if you'd like, but I'll say that HTTP/1.0 got seriously revised after three years of significant broad usage. SSLv3, still considered almost usable today, was released the year before. TLSv1.2, considered good, has been a standard for over five years, and it's still poorly supported even though it's now critically necessary for some security surfaces.

    After this burst of innovation, somebody dreamt up the W3C and we got various levels of baroque standards, all while everybody else solved the same problems over and over again. IETF used to be pretty efficient, but it seems like they're at the same point now.

    I won't argue for SPDY becoming HTTP/2.0 but I will admire it as an effort to freaking do something. Some guys at Google said, "look, screw you guys, we're going to try to fix this mess," and they did something. While imperfect, they still did enough that the HTTP/2.0 committee looked at it and said (paraphrasing), "hrm, since we haven't done anything useful for 15 years, let's take SPDY and tweak it and call it a day's work well done."

    The part Google got most right was the "screw you guys" part - central-planning the web is not working. I'm not positive what the right organizational structure looks like, but it's not the W3C and IETF. We need to figure out what went right back in the mid-90's and do that again, but now with more experience under our belts. This talk of "one protocol to rule them all for 20 years" is undeniably a toxic approach. HTTP/1.1 should have been deprecated by 2005 and we should be on to the third iteration beyond it by now. Yeah, more core stuff for the devs to do - it used to be we had people who could start a company and write a whole new web browser in a year - half the time it takes to change the color of tabs these days.

    And don't start with this "but this old browser on ... " crap either - we rapidly iterated before and can do it again. Are there people who fear change? Sure - and nobody is going to stop HTTP/1.1 from working 50 years from now, but by golly nobody should want to use it by then either.

  • Re:Encryption (Score:5, Insightful)

    by AuMatar (183847) on Monday May 26, 2014 @09:43PM (#47096383)

    It doesn't need to be perfect. If cracking it still takes some time, it drains their resources. And it can still be unbreakable for attackers with fewer resources at their disposal.

  • Re:Encryption (Score:5, Insightful)

    by jmv (93421) on Monday May 26, 2014 @09:47PM (#47096403) Homepage

    Nothing is NSA-proof, therefore we should just scrap TLS and transmit everything in plaintext, right? The whole point here is not to make the system undefeatable, just to increase the cost of breaking it, just like your door lock isn't perfect, but still useful. If HTTP was always encrypted, even with no authentication, it would require the NSA to man-in-the-middle every single connection if it wants to keep its pervasive monitoring. This would not only make the cost skyrocket, but also make it trivial to detect.
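
    A minimal Python sketch of the "always encrypted, even with no authentication" point above (the host name is just an example): the client performs TLS but skips certificate validation entirely, so a passive eavesdropper sees only ciphertext and the attacker is forced to actively man-in-the-middle every connection.

    import socket
    import ssl

    def opportunistic_get(host="example.org", port=443):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False        # no identity check at all...
        ctx.verify_mode = ssl.CERT_NONE   # ...and no certificate validation
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                tls.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() +
                            b"\r\nConnection: close\r\n\r\n")
                return tls.recv(4096)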

  • Re:Encryption (Score:5, Insightful)

    by gweihir (88907) on Monday May 26, 2014 @10:14PM (#47096499)

    Unfortunately, breaking the crypto directly is _not_ what they are going to do. Protocol flaws usually allow very low-cost attacks; it just takes some brain-power to figure them out. The NSA has a lot of that available.

  • Re:Encryption (Score:2, Insightful)

    by gweihir (88907) on Monday May 26, 2014 @10:21PM (#47096529)

    You are confused. The modern crypto we have _is_ NSA-proof (the NSA itself made sure of that). The protocols using it are a very different matter. These have the unfortunate property that they are either secure or cheap to attack (protocols do not have a lot of state and hence cannot put up a lot of resistance to brute-forcing). Hence getting the protocols right, and more importantly, designing them so that they have several effective layers of security and can be fixed if something is wrong, is critical. Unfortunately, that involves making very conservative choices, while most IT folks are starry-eyed suckers for the new and flashy.

  • Re:Encryption (Score:5, Insightful)

    by swillden (191260) <shawn-ds@willden.org> on Monday May 26, 2014 @10:25PM (#47096551) Homepage Journal

    In order for the security to have any benefit at all, it must be done right, must be free from fundamental flaws and must give the assurances it was designed to give. That is exceedingly hard to do and very unlikely to be done right on the first attempt.

    SPDY's security component is TLS. SPDY is essentially just some minor restrictions (not changes) in the TLS negotiation protocol, plus a sophisticated HTTP acceleration protocol tunneled inside. So this really isn't a "first attempt", by any means. Not to mention the fact that Google has been using SPDY extensively for years now and has a great deal of real-world experience with it. Your argument might hold water when applied to QUIC, but not so much to SPDY.
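
    For illustration, a small Python sketch of the negotiation piece: SPDY rode on a TLS extension (originally NPN, today its successor ALPN) that lets client and server agree on the application protocol during the handshake. "spdy/3.1" is a historical ALPN token; "h2" is HTTP/2.

    import socket
    import ssl

    def negotiated_protocol(host, port=443):
        ctx = ssl.create_default_context()
        # Our preference order; the server picks one it also supports.
        ctx.set_alpn_protocols(["h2", "spdy/3.1", "http/1.1"])
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                # None if the server doesn't speak ALPN at all.
                return tls.selected_alpn_protocol()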

    It really helps to read the thread and get a sense of what the actual dispute is about. In a nutshell, Kamp is bothered less that HTTP/2 is complex than that it doesn't achieve enough. In particular, it doesn't address problems that HTTP/1.1 has with being used as a large file (multi-GB) transfer protocol, and it doesn't eliminate cookies. Not many committee members seem to agree that these are important problems for HTTP/2, though most do agree that it would be nice some day to address those issues, in some standard.

    What many do agree on is that there is some dangerous complexity in one part of the proposal, a header compression algorithm called HPACK. The reason for using HPACK is the CRIME attack, which exploits Gzip compression of headers to deduce cookies and other sensitive header values. It does this even though the compressed data is encrypted. HPACK is designed to be resistant to this sort of attack, but it's complex. Several committee members are arguing that it would be reasonable to proceed without header compression at all, thus greatly simplifying the proposal. Others are arguing that they can specify HPACK, but make header compression negotiable and allow clients and servers to choose to use nothing rather than HPACK, if they prefer (or something better when it comes along).
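
    A toy Python sketch of the leak HPACK is designed to avoid (cookie value and header layout are made up): when headers are compressed with DEFLATE together with attacker-influenced data, the length of the compressed output reveals whether a guess matches the secret, even though the bytes on the wire are encrypted.

    import zlib

    SECRET_HEADERS = b"Cookie: session=s3cr3ttoken\r\n"  # added by the browser

    def compressed_size(attacker_guess: bytes) -> int:
        # Attacker-influenced data lands in the same compression context as the
        # real headers; only the final length is visible on the wire.
        data = SECRET_HEADERS + b"Cookie: session=" + attacker_guess
        return len(zlib.compress(data, 9))

    print(compressed_size(b"s3cr3ttoken"))  # correct guess: redundant, compresses well
    print(compressed_size(b"wrongwrong!"))  # wrong guess of equal length: several bytes larger

    # A real CRIME attack refines this length oracle to recover the cookie byte by
    # byte; HPACK sidesteps it by not running headers through a shared DEFLATE stream.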

    Bottom line: What we have here is one committee member who has been annoyed that his wishes to deeply rethink HTTP have been ignored. He is therefore latching onto a real issue that the rest of the committee is grappling with and using it to argue that they should just throw the whole thing out and get to work on what he wanted to do. And he made his arguments with enough flair and eloquence to get attention beyond the committee. All in all, just normal standards committee politics which has (abnormally) escaped the committee mailing list and made it to the front page of /.

  • by WaffleMonster (969671) on Monday May 26, 2014 @10:55PM (#47096711)

    I think the following demonstrates the reality that participants in standards organizations are constrained by the market, and that while they do wield some power, it must be exercised with extreme care and creativity to have any effect past L7.

    As much as many people would like to get rid of Cookies -- something you've proposed many times -- doing it in this effort would be counter-productive.

    Counter-productive for *who* Mark ?

    Counter-productive for Facebook, Google, Microsoft, NSA and the other mastodons who use cookies and other mistakes in HTTP (i.e. user-agent) to deconstruct our personal identities, across the entire web?

    Even with "SSL/TLS everywhere", all those small blue 'f' icons will still tell FaceBook all about what websites you have visited.

    The "don't track" fiasco has shown conclusively, that there is never going to be a good-faith attempt by these mastodons to improve personal privacy: It's against their business model.

    And because this WG is 100% beholden to the privacy abusers and gives not a single shit for the privacy abused, fixing the problems would be "counter-productive".

    If we cared about human rights, and privacy, being "counter-productive" for the privacy-abusing mastodons would be one of our primary goals.

    It is impossible for me to disagree with this. I have several dozen tracking/market-intelligence/stat-gathering firms blackholed in DNS, so that creative uses of DNS to implement tracking cookies do not work. I count on the fact that they are all much too lazy to care about a few people screwing with DNS or running browser privacy plugins.

    I'm personally creeped out by hordes of stalkers following me everywhere I go... yet I see the same mistakes play out again and again... people looking to solve problems without considering the second-order effects of their solutions.

    You could technically do something about that army of stalker creeps... yet this may just force them underground, pulling the same data through backchannels established directly with the site - rather than a cut-and-paste JavaScript job, it would likely turn into a module loaded into the backend stack, with no visibility to the end user and no ability to control it.

    While this would certainly work wonders for site performance and bandwidth usage... the limited feedback channels we did have for the stalked to watch the stalkers would be denied. On the flip side of the ledger, not collecting direct proof of access could disrupt some stalker creeps' business models.

    I think an emotional, half-assed reaction to an NSA with an established ability to "QUANTUM INSERT" ultimately encourages a locally optimal solution that has the effect of affording no actual safety or privacy to anyone.

    Not only does opportunistic encryption provide a false sense of security to the vast majority of people, who simply do not understand the relationship between encryption and trust; such deceptions also effectively relieve pressure on the need for a real solution... which I assume looks more like DANE and the associated implosion of the SSL CA market.

    My own opinion: HTTP 2.0 is only a marginal improvement with no particular pressing need... I think they should think hard and add something cool to it... make me want to care... as is, I'm not impressed.

  • by jandrese (485) <kensama@vt.edu> on Monday May 26, 2014 @11:29PM (#47096877) Homepage Journal
    My impression is that the IETF was doing a pretty good job until businesses started taking the internet seriously; instead of being a group of engineers trying to make the best protocols, it became a bunch of business interests trying to push their preferred solutions because they give them an advantage in the market. Get a few of those in the same room and deadlock is the result.
  • Re:Encryption (Score:4, Insightful)

    by fractoid (1076465) on Tuesday May 27, 2014 @12:06AM (#47097047) Homepage
    Even lower cost is simply subpoenaing one end of the transaction. There's no point bothering with a cryptographic or man-in-the-middle attack when you control the man-at-the-other-end.
  • by MatthiasF (1853064) on Tuesday May 27, 2014 @12:34AM (#47097151)
    Bullshit. BULLSHIT!

    Google has derailed so much of the web's evolution in an attempt to control it that neither they nor any Google lover have the right to suggest they get to take the web's standards away from the committees. From the "development" trees in Chrome, to WebRT and WebM, they have splintered the internet numerous times with no advantage to the greater good.

    The committee was strong-armed into considering SPDY simply because they knew Google could force it down everyone's throats with their monopoly powers across numerous industries (search, advertising, email, hosting, Android, etc.). HTTP/1.1 has worked well for the web. The internet has not had any issues in the last 22 years except when assholes like Google and Microsoft decided to deviate from a centralized standard.

    There is no way we should let Google set ANY standard after the numerous abuses they have committed over the last 8 years, nor should any shills like you be allowed to suggest they should be the ones calling the shots.

    So, kindly go to hell.
  • by Aethedor (973725) on Tuesday May 27, 2014 @02:33AM (#47097495)

    The biggest problem with SPDY is that it's a protocol by Google, for Google. Unless you are doing the same things as Google, you won't benefit from it. In my free time, I'm writing an open source webserver [hiawatha-webserver.org] and by doing so, I've encountered several bad things in the HTTP and CGI standards. Things can be made much simpler, and thus faster, if we, for example, agree to let go of this ridiculous pathinfo, agree that requests within the same connection are always for the same host, and make the CGI headers work better with HTTP.
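
    For readers who have not run into the pathinfo mechanism being complained about here, a minimal CGI script in Python (the URL and values are examples) showing how RFC 3875 splits a single request URL such as GET /cgi-bin/app.py/reports/2014?format=csv across several environment variables:

    import os

    def cgi_request_parts():
        return {
            "SCRIPT_NAME": os.environ.get("SCRIPT_NAME"),    # "/cgi-bin/app.py"
            "PATH_INFO": os.environ.get("PATH_INFO"),        # "/reports/2014"
            "QUERY_STRING": os.environ.get("QUERY_STRING"),  # "format=csv"
            "HTTP_HOST": os.environ.get("HTTP_HOST"),        # client's Host header
        }

    if __name__ == "__main__":
        # A CGI script replies by writing headers, a blank line, then the body.
        print("Content-Type: text/plain")
        print()
        print(cgi_request_parts())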

    You want things to be faster? Start by making things simpler. Just take a look at all the modules for Apache. The amount of crap many web developers want to put into their websites can't be fixed by a new HTTP protocol.

    We don't need HTTP/2.0. HTTP/1.3, with some things removed or fixed and at least some of the vague things specified more clearly, would be more than enough for 95% of all websites.

  • Tunnelling (Score:5, Insightful)

    by msobkow (48369) on Tuesday May 27, 2014 @02:55AM (#47097547) Homepage Journal

    Maybe if we weren't trying to tunnel every god damned protocol and transport known to mankind through HTTP it wouldn't be such a massive problem to re-engineer and fix.

    Seriously: The idea of TCP was to have multiple protocol ports, not to tunnel everything over :80.

  • Re:Tunnelling (Score:4, Insightful)

    by Aethedor (973725) on Tuesday May 27, 2014 @03:12AM (#47097585)
    Yeah, tell that to the firewall administrator who thinks that opening an extra port is far more insecure than tunneling that extra connection via HTTP. The IT world is defined by a massive number of incompetent administrators and developers.
  • Re:Encryption (Score:4, Insightful)

    by gweihir (88907) on Tuesday May 27, 2014 @03:34AM (#47097627)

    I think the only thing we know is that they would like to be able to break modern crypto directly. There is no indication that they can. Of course, they can brute-force DES or 512-bit RSA keys, but that is not going to help them a lot, and that capability does not scale to, say, 128-bit symmetric or 2048-bit asymmetric keys. There are also indications that they _may_ be able to break some non-compromised ECC crypto, and they likely know of ways to compromise elliptic curves in a way that allows them to cheaply (!) attack some ECC crypto. All of which is not a problem when algorithm selection is done carefully.

    Note that sabotaging crypto is not the same as breaking it. Breaking crypto means successfully attacking non-sabotaged crypto. (Crypto lingo deviates a bit from common use here.)
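
    Back-of-the-envelope Python for the brute-force point above (the guessing rate is an assumption, chosen only to show the scaling):

    SECONDS_PER_YEAR = 365 * 24 * 3600
    GUESSES_PER_SECOND = 10**15  # a hypothetical, absurdly fast brute-force rig

    def exhaustive_search_time(key_bits: int) -> float:
        """Seconds needed to try every key of the given size."""
        return (2 ** key_bits) / GUESSES_PER_SECOND

    print(exhaustive_search_time(56))                      # DES: about 72 seconds
    print(exhaustive_search_time(128) / SECONDS_PER_YEAR)  # 128-bit: ~1e16 years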
