QUIC: Google's New Secure UDP-Based Protocol

New submitter jshurst1 writes "Google has announced QUIC, a stream multiplexing protocol running over UDP with a new variation of TLS. The new protocol offers connectivity with a reduced number of round trips, strong security, and pluggable congestion control. QUIC is currently being tested with Chrome dev and canary users connecting to Google websites."
  • How do you stop a denial of service attack if both sides aren't required to maintain the overhead of the connection? TCP uses the overhead caused by ACK packets as a rate limiter on clients.

    There are obviously high-bandwidth frameworks where you're already putting a strain on systems just by using them, where low latency is also critical, and UDP is appropriate; video chat comes to mind. But outside of that very limited purview, what good would encrypting UDP actually do?

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      How do you stop a denial of service attack if both sides aren't required to maintain the overhead of the connection?

      How do you stop it if someone does not bother to respect the rate limiter? You are assuming that someone doing something bad is going to play by the rules.

    • by Anonymous Coward

      This is mostly covered either in the QUIC FAQ or the design doc.

      But, attempting to answer your questions given material in the article and some sprinkling of industry knowledge: TCP is subject to DoS attacks with SYNs. There are mitigation techniques in there, but.. look.. you've received the packet already and have to do some processing on it to figure out if you should discard or not. This will be true of *any* protocol, TCP, UDP, SCTP, whatever.

      The purpose of the encryption is twofold:
      1) it makes it less

    • by Wesley Felter ( 138342 ) <wesley@felter.org> on Friday June 28, 2013 @03:53PM (#44136501) Homepage

      QUIC uses an equivalent of SYN cookies to prevent some kinds of DoS. It also uses packet reception proofs to prevent some ACK spoofing attacks that TCP is vulnerable to. Overall it looks even better than TCP.

      As for encryption, Google gives two reasons. They intend to run HTTP over QUIC and Google services are encrypted by default; it's more efficient for QUIC itself to implement encryption than to layer HTTP over TLS over QUIC. The other reason is that middleboxes do so much packet mangling that encryption is the only way to avoid/detect it.
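
      A minimal sketch of that stateless-cookie idea in Python (illustrative only; the names, the 60-second window, and the HMAC construction are assumptions, not QUIC's actual wire format). The server derives a token from the client's address and a coarse timestamp using a secret key, hands it out, and can later verify it without having kept any per-connection state:

      import hashlib
      import hmac
      import os
      import time

      SERVER_SECRET = os.urandom(32)   # would be rotated periodically in practice
      COOKIE_WINDOW = 60               # seconds a cookie stays valid (assumed)

      def make_cookie(client_ip, client_port, now=None):
          ts = int((now if now is not None else time.time()) // COOKIE_WINDOW)
          msg = "{}:{}:{}".format(client_ip, client_port, ts).encode()
          return hmac.new(SERVER_SECRET, msg, hashlib.sha256).digest()

      def check_cookie(cookie, client_ip, client_port):
          # Accept cookies minted in the current or the previous time window,
          # so a legitimate client isn't rejected right at a window boundary.
          now = time.time()
          return any(
              hmac.compare_digest(cookie, make_cookie(client_ip, client_port, t))
              for t in (now, now - COOKIE_WINDOW)
          )

      cookie = make_cookie("192.0.2.7", 54321)
      assert check_cookie(cookie, "192.0.2.7", 54321)         # same client: accepted
      assert not check_cookie(cookie, "198.51.100.9", 54321)  # spoofed source: rejected

      Until a client echoes back a valid cookie, the server has spent only one HMAC computation on it, which is the property that blunts SYN-flood-style state exhaustion.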

    • I take it you've never heard of tarpits. Depending on the type of DoS or DDoS, you can burn through an incredible amount of processing power on the part of the attacker without straining your server, but it really depends on the type of attack and the specifics of your setup.

    • Games, document sharing, haptics, real-time text chat.

      • Games, document sharing, haptics, real-time text chat.

        Please, if you can't use SSL+TCP for text chat and keep it real time, you've got horrendous software. Moreover, the potentially lossy nature of UDP is really bad for text. You can outright lose data. Your packets can arrive out of order. It's okay with video data where a hiccup only makes a few missing pixels, but with text, that's a terrible idea.

        • by raymorris ( 2726007 ) on Friday June 28, 2013 @05:14PM (#44137455) Journal
          > Please, if you can't use SSL+TCP for text chat and keep it real time

          They could have, but QUIC is "better" for their use cases. In many ways, it's like an improved version of TCP. It runs on top of UDP simply
          because routers, firewalls, etc. often only speak TCP and UDP. From the FAQ:

          > it is unlikely to see significant adoption of client-side TCP changes in less than 5-15 years. QUIC allows us to test and experiment with new ideas,
          > and to get results sooner. We are hopeful that QUIC features will migrate into TCP and TLS if they prove effective.

          > You can outright lose data. Your packets can arrive out of order. It's okay with video data where a hiccup only makes a few missing pixels,
          > but with text, that's a terrible idea.

          Unless of course the protocol you're running over UDP handles that stuff, just like TCP handles that stuff.
          Normally, it's a bad idea to use UDP to run a protocol that has in-order packets, guaranteed delivery, etc. because TCP already gives you that.
          Why re-invent TCP? Unless you're going to spend a few million dollars on R&D to make your UDP-based protocol actually be better than TCP,
          you should just use TCP.

          That "unless you're going to spend a few million dollars on R&D" is the key here. Google DID make the investment, so the protocol actually does
          work better for the particular use than TCP does.
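
          As a toy illustration of "the protocol you're running over UDP handles that stuff": stamp every datagram with a sequence number and let the receiver reorder, deduplicate, and detect gaps itself, which is exactly the bookkeeping TCP normally does for you. A sketch in Python (names and framing are made up for the example, not QUIC's):

          class InOrderReceiver:
              def __init__(self):
                  self.next_seq = 0   # next sequence number the application expects
                  self.buffer = {}    # out-of-order datagrams waiting for a gap to fill

              def receive(self, seq, payload):
                  """Accept one datagram; return whatever is now deliverable in order."""
                  if seq < self.next_seq:
                      return []       # duplicate of something already delivered
                  self.buffer[seq] = payload
                  delivered = []
                  while self.next_seq in self.buffer:
                      delivered.append(self.buffer.pop(self.next_seq))
                      self.next_seq += 1
                  return delivered

              def missing(self):
                  """Sequence numbers we'd ask the sender to retransmit (a NACK list)."""
                  if not self.buffer:
                      return []
                  return [s for s in range(self.next_seq, max(self.buffer))
                          if s not in self.buffer]

          rx = InOrderReceiver()
          print(rx.receive(0, b"hel"))  # [b'hel']          delivered immediately
          print(rx.receive(2, b"wor"))  # []                 held back, packet 1 missing
          print(rx.missing())           # [1]                the gap to NACK
          print(rx.receive(1, b"lo "))  # [b'lo ', b'wor']   gap filled, both delivered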
        • There's nothing stopping you from implementing your own flow control protocol in the data you send by UDP. TCP sends a periodic sequence acknowledgement of every set of packets it receives. If you implement your own flow control in UDP, you could have it only send back a message when it detects that some data was lost. Likewise, TCP connections maintain a little bit of state on each side. UDP does not, so the networking software in the client and server operating system has less work to do - just hand
    • How do you stop a denial of service attack if both sides aren't required to maintain the overhead of the connection? TCP uses the overhead caused by ACK packets as a rate limiter on clients.

      The "zero" RTT is like TLS session resumption or session tickets in that it only works by assuming a set of initial parameters. If it fails then you fallback to additional rounds to hello/ handshake TLS parameters.

  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Friday June 28, 2013 @03:44PM (#44136351) Homepage Journal

    I have no objection to protocol experiments that are 100% Open Source implementations. I wouldn't trust one that was not, and an Open Standard is just instructions for people who make implementations.

    But it seems that a lot of this might belong in a system facility rather than the browser and server. I don't know if it makes sense to put all of TLS in the kernel, but at least it could go in its own process. Using UDP is fine for an experiment, but having 200 different ad-hoc network stacks each tied into their own application and all of them using UDP is not.

    • I think Google intends to put it in the kernel once they have finished actually designing and standardizing it. Since it would take 10-15 years to get QUIC into the Windows kernel, they're putting it in Chrome as a stopgap.

    • Well, hopefully a library at least. That's how some OS's are handling DTLS, which is similar.

      That initial question of mine is addressed (partially) in the FAQ:

      Why didn't you use existing standards such as SCTP over DTLS? QUIC incorporates many techniques in an effort to reduce latency. SCTP and DTLS were not designed to minimize latency, and this is significantly apparent even during the connection establishment phases. Several of the techniques that QUIC is experimenting with would be difficult

  • The point of this is to improve performance for tiny HTTP transactions. The need for all those tiny transactions comes from ads and tracking data and their associated tiny bits of Javascript and CSS. The useful content is usually one big TCP send.

    Blocking of all known tracking systems [mozilla.org] is a simpler solution.

    • by grmoc ( 57943 )

      You should open up the perf tab of your browser and look at this page to see if it supports your conclusions.

    • by Bengie ( 1121981 )
      You should see HTTPS, which has a 12-way handshake, over a 200ms cell-phone link. This is one of the reasons why we need something other than TCP+HTTPS: fewer handshakes in exchange for more CPU usage, and we have tons of idle CPU time.
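
      Rough arithmetic behind that complaint, assuming HTTP over TLS 1.2 over TCP on a 200 ms round-trip mobile link (illustrative numbers, not a measurement):

      RTT_MS = 200

      tcp_handshake = 1   # SYN / SYN-ACK round trip before any data can flow
      tls_full      = 2   # TLS 1.2 full handshake costs two more round trips
      http_request  = 1   # finally the GET itself and its response

      first_byte_ms = (tcp_handshake + tls_full + http_request) * RTT_MS
      print(first_byte_ms)   # 800 ms before the first byte of the page arrives

      # A 0-RTT setup for repeat visits (QUIC's goal, or TCP Fast Open plus
      # TLS session resumption) collapses the setup round trips:
      print(1 * RTT_MS)      # ~200 ms: the request rides in the first flight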
  • So... we're probably going to see new connection flood DOS attacks like the ones that prompted SYN cookies a couple of decades ago. Application stacks will need to handle their own congestion control, and applications that do so poorly will negatively impact carrier networks. And, yay, a new variant of TLS when there are already several versions that aren't widely implemented, let alone deployed.

    Oh, and all of it lives in the application, so each of those problems can be addressed over and over. Yay!

  • Here we are again.

    After VP8 and Protocol Buffers, Google is at it again, providing a free replacement for an existing standard (DTLS here: http://www.rfc-editor.org/rfc/rfc4347.txt [rfc-editor.org]).

    But of course, Google's people know better and have more money. And the list can go on: Dart as a replacement for JavaScript, Protocol Buffers as a replacement for ASN.1, SPDY to replace HTTP. With Jingle, Google tried to replace the SIP protocol as well; at least there they extended an existing standard, but they dropped the support when stopping

    • by Anonymous Coward

      Google, frankly, doesn't care what other people create. They suffer from the world's worst case of "not invented here" syndrome, and it's starting to seriously hinder the web in general.

      They need to step back and stop trying to reinvent every wheel on their own. It's cringe-worthy to see them do this kind of self-centered stuff while hiding behind the facade of open specifications.

      And yes, I mean hiding. It doesn't matter if this is an open source and open spec, because by the time people start relying on i

      • Of course they're open! Look at the public API for Google Plus, and their pledge to support XMPP forever.

        Oh, wait...
  • Reducing RTT for connection setup and encryption is a noble goal, yet I'm not clear on why, technically, this can't be solved without reinventing TCP over UDP.

    TCP Fast Open coupled with TLS session tickets/Snap Start offers essentially the same possibility: actual transmission of an encrypted request before completion of the first round trip (a rough sketch of the Fast Open half appears at the end of this comment).

    Multiple concurrent TCP streams normally end up having much the same properties as multiplexed UDP in the aggregate, so I don't buy the head-of-line-blocking Kool-Aid either.

    What I
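
    A sketch of the TCP Fast Open half of that suggestion, using the Linux socket API from Python. The constants are guarded with fallbacks because not every Python build exposes them, the kernel must have net.ipv4.tcp_fastopen enabled, and the TLS-session-ticket half would sit on top and isn't shown:

    import socket

    TCP_FASTOPEN = getattr(socket, "TCP_FASTOPEN", 23)          # Linux option number
    MSG_FASTOPEN = getattr(socket, "MSG_FASTOPEN", 0x20000000)  # Linux sendto() flag

    def fastopen_server(port=8443):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.setsockopt(socket.IPPROTO_TCP, TCP_FASTOPEN, 16)  # queue of pending TFO connects
        srv.bind(("0.0.0.0", port))
        srv.listen(128)
        return srv

    def fastopen_request(host, port, payload):
        cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # sendto() with MSG_FASTOPEN connects and sends in one call; on a repeat
        # visit the kernel carries the payload in the SYN, so the request costs
        # no extra round trips beyond the data exchange itself.
        cli.sendto(payload, MSG_FASTOPEN, (host, port))
        return cli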

  • What I don't understand is why over UDP? They are building a transport protocol, which logically should be another alternative to TCP, UDP, SCTP, etc. Wouldn't this be both more efficient and architecturally cleaner?
    • UDP provides a mechanism (source ports) for multiple client applications on the same host to coexist. Furthermore through the mangling of source ports NATs can allow multiple hosts running UDP applications to coexist and communicate with the same servers from behind one public IP.

      If you created a new IP protocol then you'd have to implement your own mechanism for multiple client applications on the same host to exist. Furthermore your system would likely break if two users behind the same NAT tried to acces
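
      A small demonstration of that demultiplexing in Python: two UDP sockets on the same host get distinct ephemeral source ports, and those ports are what the remote server and any NAT on the path key on.

      import socket

      a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      a.bind(("127.0.0.1", 0))   # port 0: let the OS pick an ephemeral port
      b.bind(("127.0.0.1", 0))

      print(a.getsockname())     # e.g. ('127.0.0.1', 52344)
      print(b.getsockname())     # e.g. ('127.0.0.1', 60127) -- a different source port

      A brand-new transport with its own IP protocol number would have to reinvent this port mechanism, and most home NATs and middleboxes would simply drop the unfamiliar packets, which is the practical argument for tunneling QUIC over UDP.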

  • What's with the naming convention? With Google releasing SPDY, Snappy and QUIC, I'm guessing they will run out of synonyms for 'fast' sooner than Apple will run out of cats...
