Google To Propose QUIC As IETF Standard

As reported by TechCrunch, "Google says it plans to propose HTTP2-over-QUIC to the IETF as a new Internet standard in the future," having disclosed a few days ago that about half of the traffic from Chrome browsers is using QUIC already. From the article: The name "QUIC" stands for Quick UDP Internet Connection. UDP's (and QUIC's) counterpart in the protocol world is basically TCP (which in combination with the Internet Protocol (IP) makes up the core communication language of the Internet). UDP is significantly more lightweight than TCP, but in return, it features far fewer error correction services than TCP. ... That's why UDP is great for gaming services. For these services, you want low overhead to reduce latency and if the server didn't receive your latest mouse movement, there's no need to spend a second or two to fix that because the action has already moved on. You wouldn't want to use it to request a website, though, because you couldn't guarantee that all the data would make it. With QUIC, Google aims to combine some of the best features of UDP and TCP with modern security tools.
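The UDP/TCP contrast the summary describes can be sketched with plain sockets. This is a loopback-only illustration of the two transport models, not QUIC itself; all names and addresses here are local to the example:

```python
import socket

# UDP: connectionless. A datagram is just sent -- no handshake, and no
# retransmission if it is lost (fine for a stale mouse movement).
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"mouse-move", rx.getsockname())
datagram, _ = rx.recvfrom(1024)

# TCP: connection-oriented. connect() runs the SYN/SYN-ACK/ACK handshake
# before any payload moves, and lost segments are retransmitted.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())   # one full round trip before any data
conn, _ = srv.accept()
cli.sendall(b"GET / HTTP/1.1\r\n\r\n")
request = conn.recv(1024)
for s in (rx, tx, cli, conn, srv):
    s.close()
```

QUIC's pitch is to keep UDP's cheap per-packet model underneath while reimplementing reliability, streams, and encryption above it.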
This discussion has been archived. No new comments can be posted.

  • by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Saturday April 18, 2015 @10:50PM (#49503031) Homepage Journal

    These are well-established, well-tested, well-designed protocols with no suspect commercial interests involved. QUIC solves nothing that hasn't already been solved.

    If pseudo-open proprietary standards are de rigueur, then adopt the Scheduled Transfer Protocol and Delay Tolerant Protocol. Hell, bring back TUBA, SKIP and any other obscure protocol nobody is likely to use. It's not like anyone cares any more.

    • by Lally Singh ( 3427 ) on Saturday April 18, 2015 @11:25PM (#49503125) Journal

      SCTP, for one, doesn't have any encryption. QUIC integrates a TLS layer into it, in a way that avoids a lot of connection setup time. The best you could do in SCTP is to put it under DTLS, which won't be as fast. Second, SCTP has horrible fragmentation behavior -- NDATA was supposed to help, but didn't make it in. It uses TCP's congestion window system over the entire association, while QUIC also has pacing. And looking at RFC2960, you'll see the names: Motorola, Cisco, Siemens, Nortel, Ericsson, and Telcordia. Generally someone has to pay engineers to make the standards.

      As for the article, the UDP vs TCP discussion is a red herring. AFAICT, QUIC's use of UDP is for compatibility with existing IP infrastructure.

      • by WaffleMonster ( 969671 ) on Sunday April 19, 2015 @12:09AM (#49503239)

        SCTP, for one, doesn't have any encryption.

        Good, there is no reason to bind encryption to the transport layer except to improve reliability of the channel in the face of active denial (e.g. a TCP RST attack), a feature QUIC does not provide.

        Managing transport and encryption in a single protocol makes the resulting system more brittle and complex. Improvements to TCP help everything layered on top of it. Improvements to TLS help everything layered on top of it.

        Not having stupid unnecessary dependencies means I can benefit from TLS improvements even if I elect to use something other than IP to provide an ordered stream, or use TCP without encryption and not have to pay for something I don't need.

        QUIC integrates a TLS layer into it, in a way that avoids a lot of connection setup time.

        TCP+TFO + TLS extensions provide the same zero RTT opportunity as QUIC without reinventing wheels.

        I have yet to hear a coherent architectural justification for QUIC that makes sense... The reason Google pushes it is entirely *POLITICAL*: it is the path of least resistance granting them full access to the TCP stack and congestion algorithms without having to work to build consensus with any other stakeholder.

        • by Anonymous Coward

          No they don't.
          If TLS frames are unaligned with the transport framing, then a packet loss delays the interpretation of any data in that TLS frame until the packet is recovered. This means, practically, that TLS on TCP is a lot slower when there is any packet loss. And packet loss is fairly common, at between 1.5 and 3% on "good" networks, and far worse in places like India.
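The stall described above is head-of-line blocking. A toy model (illustrative only; real TCP reassembly and TLS record handling are more involved) shows how one lost segment holds back everything behind it:

```python
def readable_segments(arrived, next_seq=0):
    """How many in-order segments the application (and any TLS layer
    above it) can consume, given the set of sequence numbers that have
    arrived. A byte-stream transport must stall at the first gap."""
    n = 0
    while next_seq + n in arrived:
        n += 1
    return n

# Segments 0-5 were sent; segment 2 was lost in flight.
arrived = {0, 1, 3, 4, 5}
# Only 0 and 1 are deliverable -- 3, 4, 5 sit in the kernel buffer until
# 2 is retransmitted, stalling every TLS record behind the gap.
stalled = len(arrived) - readable_segments(arrived)
```

QUIC avoids this across streams by making loss on one stream not block delivery on the others.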

          • [...] TLS on TCP is lots slower when there is any packet loss.

            And how is an (almost) stateless protocol like QUIC supposed to handle the packet loss any better?

            The previous write-ups about Google's protocols all read as if based on the premise that packet loss is a very, very rare occurrence. That's why they effectively use a stateless transport: because they assume that errors are rare. In other words, they are very bad at handling it.

            Coming from the old days of IPX vs TCP debates, I remember how the IPX proponents were going abruptly silent in the face of a

        • by swillden ( 191260 ) <shawn-ds@willden.org> on Sunday April 19, 2015 @02:49AM (#49503565) Journal

          SCTP, for one, doesn't have any encryption.

          Good, there is no reason to bind encryption to transport layer except to improve reliability of the channel in the face of active denial (e.g. TCP RST attack).

          I disagree. To me there's at least one really compelling reason: To push universal encryption. One of my favorite features of QUIC is that encryption is baked so deeply into it that it cannot really be removed. Google tried to eliminate unencrypted connections with SPDY, but the IETF insisted on allowing unencrypted operation for HTTP2. I don't think that will happen with QUIC.

          But there are other reasons as well, quite well-described in the documentation. The most significant one is performance. QUIC achieves new connection setup with less than one round trip on average, and restart with none... just send data.

          Improvements to TCP helps everything layered on top of it.

          True, but TCP is very hard to change. Even with wholehearted support from all of the major OS vendors, we'd have lots of TCP stacks without the new features for a decade, at least. That would not only slow adoption, it would also mean a whole lot of additional design complexity forced by backward compatibility requirements. QUIC, on the other hand, will be rolled out in applications, and it doesn't have to be backward compatible with anything other than previous versions of itself. It will make its way into the OS stacks, but systems that don't have it built in will continue using it as an app library.

          Not having stupid unnecessary dependencies means I can benefit from TLS improvements even if I elect to use something other than IP to provide an ordered stream or I can use TCP without encryption and not have to pay for something I don't need.

          So improve and use those protocols. You may even want to look to QUIC's design for inspiration. Then you can figure out how to integrate your new ideas carefully into the old protocols without breaking compatibility, and then you can fight your way through the standards bodies, closely scrutinized by every player that has an existing TLS or TCP implementation. To make this possible, you'll need to keep your changes small and incremental, and well-justified at every increment. Oh, but they'll also have to be compelling enough to get implementers to bother. With hard work you can succeed at this, but your timescale will be measured in decades.

          In the meantime, QUIC will be widely deployed, making your work irrelevant.

          As for using TCP without encryption so you don't have to pay for something you don't need, I think you're both overestimating the cost of encryption and underestimating its value. A decision that a particular data stream doesn't have enough value to warrant encrypting it is guaranteed to be wrong if your application/protocol is successful. Stuff always gets repurposed, and sufficient re-evaluation of security requirements is rare (even assuming the initial evaluation wasn't just wrong).

          TCP+TFO + TLS extensions provide the same zero RTT opportunity as QUIC without reinventing wheels.

          Only for restarts. For new connections you still have all the TCP three-way handshake overhead, followed by all of the TLS session establishment. QUIC does it in one round trip, in the worst case, and zero in most cases.
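Back-of-envelope arithmetic for the round-trip claims above, assuming a 50 ms RTT and classic TLS 1.2-style session establishment (the numbers are illustrative, not measurements):

```python
RTT_MS = 50  # assumed round-trip time to the server

# TCP's three-way handshake costs one RTT before data can flow;
# a full TLS 1.2 handshake adds two more on top of it.
tcp_tls_new_conn = (1 + 2) * RTT_MS   # 150 ms before the first request byte
quic_new_conn    = 1 * RTT_MS         # QUIC worst case: one round trip
quic_resumed     = 0 * RTT_MS         # cached server config: just send data

saved_on_resume = tcp_tls_new_conn - quic_resumed   # 150 ms per connection
```

On high-latency links (mobile, intercontinental) the absolute savings scale with the RTT.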

          There was much valid (IMO) criticism of SPDY, that it really only helped well-optimized sites -- like Google's -- perform significantly better. Typical sites aren't any slower with SPDY, but aren't much faster either, because they are so inefficient in other areas that request bottlenecks aren't their problem, so fixing those bottlenecks doesn't help. But QUIC will generally cut between two and four RTTs out of every web browser connection. And, of course, it also includes all of the improvements SPDY brought, plus new congestion management mechanisms which are significantly better.

          • I disagree. To me there's at least one really compelling reason: To push universal encryption.

            Too bad the goal was not pushing universal security instead.

            One of my favorite features of QUIC is that encryption is baked so deeply into it that it cannot really be removed. Google tried to eliminate unencrypted connections with SPDY, but the IETF insisted on allowing unencrypted operation for HTTP2. I don't think that will happen with QUIC.

            What we need are systems that are actually secure not ones that pretend to be. Hard to get excited about a "feature" that is worthless against most threats.

            True, but TCP is very hard to change. Even with wholehearted support from all of the major OS vendors, we'd have lots of TCP stacks without the new features for a decade, at least.

            Is there a technical barrier with respect to TCP and/or TLS that makes addressing issues unrealistic? The Linux kernel has had TFO support for years, and it didn't take decades for SYN cookies or SACKs to hit all but the long tail. There must be at least a dozen or two TLS extensions by now. How

        • by epiphani ( 254981 ) <epiphani@dal . n et> on Sunday April 19, 2015 @03:04AM (#49503599)

          I have yet to hear a coherent architectural justification for QUIC that makes sense... The reason Google pushes it is entirely *POLITICAL* this is the path of least resistance granting them full access to the TCP stack and congestion algorithms without having to work to build consensus with any other stakeholder.

          Many years ago, in an earlier life, I tried to make changes through the IETF to an existing protocol. I was responsible for one of the major IRC servers, and still am, though IRC is effectively in maintenance only. IRC is a shit protocol - really, embarrassingly bad. So I set up a conversation - grabbed the developers of all of the major clients and servers, and got us all on a mailing list to try to do something about it. We ALL knew it was bad, and we all knew it needed serious overhaul - if not a complete scrapping. We'd even fantasized about a non-TCP multipathing protocol that would be more appropriate for IRC. But like hell that was gonna happen.

          This was a group of people that, for the most part, didn't make money from IRC. It was a hobby. We had no corporate agendas, no major impacts to our livelihoods, and the only constraint to implementation was our own time. In the six months we spent, we managed to publish one draft to the IETF. It expired and we effectively gave up. Building consensus is hard, time consuming, and quite honestly not worth the effort when you're talking about this kind of thing.

          Google is in a position to just do it, and honestly, I'm fine with that. Otherwise everyone would pop up with an opinion, and nobody would get anywhere. That's why we haven't seen anything come up to rival TCP, even though TCP is pretty bad for a lot of applications.

          The only point at which I'd have a problem is if their QUIC protocol isn't completely open and free, and totally unencumbered by intellectual property constraints (patents, etc). Otherwise, go for it - and give me a protocol api/sdk in C so I can give it a shot.

      • by mveloso ( 325617 )

        Did SCTP have horrible behavior, or the tested implementation? The QUIC doc says nothing about that. QUIC vs SCTP is on page 8.

        https://docs.google.com/docume... [google.com]

    • by Jack9 ( 11421 )

      > QUIC solves nothing that hasn't already been solved.

      Creating an IETF standard based on a working implementation isn't relevant to what problems it can service. While Google makes strategic and implementation mistakes, their technical research and solutions are usually quite good. The IETF is for this kind of documentation, i.e. producing high-quality, relevant technical documents that influence the way people design. The fact that someone might reject it as "NotInventedHere" is not a compelling reason

      • by Anonymous Coward

        hey. we reject kings, etc etc etc

        write a spec, get three interoperable implementations developed independently, then
        let's go to town

        100,000 lines of indigestible C++ doesn't quite look the same

    • by Anonymous Coward

      WHY NOT USE SCTP over DTLS?
      One recurring question is: Why not use SCTP (Stream Control Transmission Protocol) over DTLS (Datagram Transport Layer Security)? SCTP provides (among other things) stream multiplexing, and DTLS provides SSL-quality encryption and authentication over a UDP stream. In addition to the fact that both of these protocols are at various stages of standardization, this combination is currently on a standards track, and described in this Network Working Group Internet Draft.

      The largest v

    • These are well-established, well-tested, well-designed protocols with no suspect commercial interests involved. QUIC solves nothing that hasn't already been solved.

      Yeah, but it's from Google, and whatever Google wants, Google gets. They've already done this with SPDY, rammed through the IETF with unseemly haste [acm.org] as "HTTP 2.0", with any objections either ignored or declared out of scope. I don't see how QUIC will be any different, the IETF will rename it to give the impression they had some input into the process, but that'll be all.

      • by osu-neko ( 2604 )
        ...and good for that. Bad standards arise from committees sitting around spit-balling ideas. Good standards come from committees blessing existing practices already proven in the field. Maybe you smooth out a rough spot or two, but ultimately it ought to look for the most part like what's already out there working well. "Not in scope" was precisely the right response for most of the junk people wanted to throw into HTTP/2.0. Alas, it does give people who didn't get their favorite feature thrown in ample op
    • NIH syndrome, a Google specialty.

    • So, whatever happened to RUDP?

      Everyone keeps turning UDP into some pseudo-TCP without all the extras... that's what RUDP was built for.

  • by koan ( 80826 )

    It sounds a bit like a coding cluster fuck.

    Side note: Anyone using Comcast? I notice that all my Google services are extremely slow at least some of the day, every day.
    When I turn on VPN there is no issue. I'd like to say it's routing... but at this point don't we all know better?

    • The official explanation is that there is insufficient peering 'twixt Comcast (or $OTHER_ISP) and Google, and that's the congested link.

      Of course, other Google services have no such problems at such times, which makes me suspect it's bullshit. But that's still the story.

  • Google just wants dominion over congestion algorithms for their own benefit.

  • Oh good! (Score:5, Insightful)

    by 93 Escort Wagon ( 326346 ) on Sunday April 19, 2015 @12:31AM (#49503279)

    I bet QUIC will make those DART-driven VP9 video services really SPDY.

  • The second link in the story ("half of the traffic from Chrome browsers is using QUIC already") is broken - it's an empty <a> tag, with no href. That also means we can't work around it, since we don't have any hints about the destination.

  • QUIC is designed so that if a client has talked to a given server before, it can start sending data without any round trips, which makes web pages load faster.

    So, if I have three clients behind a NAT or proxy, do they all share the same TLS key? Does that mean my encryption is compromised in a WiFi environment?

    ~~

  • Stupid NAT. (Score:4, Interesting)

    by SuricouRaven ( 1897204 ) on Sunday April 19, 2015 @03:29AM (#49503629)

    It's impossible to introduce any new transport-layer protocols now, because the vast majority of connected devices are behind at least one layer of NAT, and that means transport protocols can only work if the routers support them. We're stuck with TCP and UDP, and no chance of deploying any potentially better alternatives.

    Bring on IPv6! Coming soon since 1998.

    • by ledow ( 319597 )

      So when everyone goes IPv6, when ISPs literally say "That single IPv4 address you used to have? This is the replacement IPv6, starting with XXXX:XXXX:XXXX:0001":

      Do you think they are going to pull up all their existing systems, renumber every internal machine, make them all publicly accessible, give each a unique IP from the range allocated, etc.?

      Or do you think they'll buy an IPv6 compatible router, slap it into the network as the same gateway on IPv4, and have it pick up the first IPv6 address offered from

      • Do you think they are going to pull up all their existing systems renumber every internal machine, make them all publicly accessible, give each a unique IP from the range allocated, etc.

        Wow, it's almost like you're completely ignorant of how networking actually works, and yet still posted on slashdot anyway!

        UDP works JUST FINE with NAT, if you haven't noticed.

        Yes, as long as the firewall is stateful. Otherwise you need protocol support to receive responses.

        People have this thing about NAT being evil but it's not.

        No, certain protocols are evil, like SIP, and NAT exacerbates that evil. As such, it's not evil, just a massive PITA.
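The "stateful firewall" point in this exchange can be sketched with a toy NAT model (the class, addresses, and port numbers here are invented for illustration; real NATs also time out and rewrite checksums):

```python
# Toy stateful NAT for UDP: an outbound datagram installs a mapping, and
# inbound packets are admitted only if they match one. Without such
# state, UDP replies need protocol-specific (ALG-style) help to get in.
class ToyNat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.mappings = {}   # (inside_ip, inside_port) -> public_port
        self.reverse = {}    # public_port -> (inside_ip, inside_port)

    def outbound(self, src, dst):
        """Translate an outgoing (src, dst) pair, creating state."""
        if src not in self.mappings:
            self.mappings[src] = self.next_port
            self.reverse[self.next_port] = src
            self.next_port += 1
        return (self.public_ip, self.mappings[src]), dst

    def inbound(self, dst_port):
        """Return the inside endpoint for a reply, or None (dropped)."""
        return self.reverse.get(dst_port)

nat = ToyNat("203.0.113.7")
mapped, _ = nat.outbound(("10.0.0.5", 5000), ("8.8.8.8", 53))
reply_target = nat.inbound(mapped[1])   # DNS reply matches the state
blocked = nat.inbound(12345)            # unsolicited packet: no state
```

This is also why protocols like SIP, which embed inside addresses in their payload, fight with NAT: the payload addresses never pass through the translation table.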

      • "I can name precisely one ISP in the UK that I know offers IPv6 connectivity"

        Would you mind letting us know who that might be?

        I'm considering moving ISP and want one where I can have a static IP address; looking to the future, it seems to me that it's worth having IPv6 support now rather than a second upheaval in a few years' time.

    • It is coming, finally [google.com]. In 2010, 0.1% of the connections to Google's services were native IPv6, and about the same used 6to4. Now, about 6% of the connections are native IPv6, while 6to4 is almost completely gone. 6% is enough that it's actually starting to matter. The fraction currently seems to be growing by 2.5 percentage points per year, though it might still be accelerating. So perhaps we will finally be free from the curse of NAT in a few more decades.
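A naive linear projection of the figures in this comment (assuming growth stays flat at 2.5 points per year, which the comment itself doubts):

```python
# Starting point per the comment: ~6% native IPv6 as of 2015.
share, year = 6.0, 2015
while share < 50.0:
    share += 2.5   # assumed constant annual growth, no acceleration
    year += 1
# Under this crude model, native IPv6 first passes 50% in 2033,
# roughly consistent with the comment's "a few more decades" tone.
```

Any acceleration in adoption would pull that crossover year in substantially.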

    • by Lumpy ( 12016 )

      I send custom transport protocols, as well as the "wrong" protocols, over port 80, and they traverse NAT just fine.

      Routers don't care about the content, just the port; the firewall may freak out, but that has nothing to do with routing.

    • https://en.wikipedia.org/wiki/... [wikipedia.org] The U stands for UDP.
  • From the summary:

    UDP is significantly more lightweight than TCP, but in return, it features far fewer error correction services than TCP. ... That's why UDP is great for gaming services.

    Stateful vs. stateless, connection-oriented vs. connectionless. These distinctions are at the core of TCP/IP, specifically of TCP vs. UDP and the reliability contract each protocol provides. It's not that "one's better than the other at error correction."

    • by Anonymous Coward

      One actually does error detection and re-transmits.

  • Does this mean all web pages are going to partially reload for 5 minutes, toggling between 2 resolutions, then give up with a static-filled error message?
