QUIC: Google's New Secure UDP-Based Protocol
New submitter jshurst1 writes "Google has announced QUIC, a stream multiplexing protocol running over a new variation of TLS, as well as UDP. The new protocol offers connectivity with a reduced number of round trips, strong security, and pluggable congestion control. QUIC is now running as an experiment for Chrome Dev and Canary users connecting to Google websites."
Re:Don't trust 'em (Score:5, Informative)
I have serious doubts in ANY new network tech not having backdoors of some sort.
Oh, come on. This is a network protocol. Sure, protocols *can* have flaws, but it's a very long stretch from being forced to run an unknown binary. Just implement it on your own if you're paranoid enough.
Re: (Score:2, Informative)
I have serious doubts in ANY new network tech not having backdoors of some sort.
Oh, come on. This is a network protocol. Sure, protocols *can* have flaws, but it's a very long stretch from being forced to run an unknown binary. Just implement it on your own if you're paranoid enough.
The problem with trying to implement a new protocol over tcp/ip (the internet) like Tim Berners-Lee did with the web, is that the mythical 'Open Internet' has been degraded. QUIC and WebRTC reek to me of some Orwellian attempt to make the lies about home servers being less worthy of net-neutrality protections than Skype's servers make sense, i.e. allowing 'client to client' file transfers and video chats. All this is because of the conspiracy to deprive residential internet users of the power to serve. Now
Re: (Score:2)
in case people aren't familiar with my non-drunken-but-close-enough debate style: scratch the tcp/ from the first sentence. And obviously I was just flailing against WebRTC and this QUIC thing, which smells like it is also trying to address the more general problem space of "well, since we don't really have an open ipv6 internet that works for everyone, let's waste our lives engineering this big mess over here..."
Re: (Score:3, Insightful)
Re: (Score:2)
Re:Don't trust 'em (Score:5, Insightful)
Re:Don't trust 'em (Score:5, Informative)
Maybe if you would RTFA instead of pontificating, you would have found that the reference QUIC implementation is already open source, the specification is open, the wire specification is open, the whole thing is open. If you don't trust Google's implementation then roll your own.
Re: (Score:1)
the specification is open, the wire specification is open, the whole thing is open. If you don't trust Google's implementation then roll your own.
While I appreciate the sentiment, I think you are missing an important point - the specification itself could be deliberately flawed. Crypto is hard, and not just the math itself but all the infrastructure details. The number of people able to recognize a weak design (deliberate or not) is quite small. Probably a couple of orders of magnitude smaller than the number of people able to re-implement a network protocol from specs.
Re:Don't trust 'em (Score:5, Informative)
They are not doing their own crypto.... they are using TLS. Again, please read the actual documents.
Re: (Score:1)
Are you a cryptography expert? Because there are a _lot_ of ways to attack cryptography schemes. Length of transferred data can often be inferred very easily, for instance. As another example, sophisticated replay attacks can often significantly weaken encryption strength, perhaps in some yet-unknown way. A layperson's "actual read" of the documents cannot and should not provide any reassurance to anyone. It is their availability to the cryptography community as a whole, who will then inspect them, which will.
Re: (Score:1)
I think you might be the one that needs to do some reading. Their info page says that they're using something "similar to TLS" and then, in the docs, mention "the analog of" when referring to features of TLS. It doesn't sound like they're using stock TLS or DTLS, so there would be ample opportunity to make small mistakes (whether intentional or not).
Re: (Score:2)
They are not doing their own crypto.... they are using TLS. Again, please read the actual documents.
Come on, man, that is barely relevant to what I said. I can't believe you got +5 informative for that glib drivel. There is more to the infrastructure than just TLS. If TLS was all there is to it, then they wouldn't be doing anything new, would they?
Re:Don't trust 'em (Score:4, Insightful)
It's "like" TLS, as in "its none dairy but it tastes just like milk".
Google's reason for doing this is to lower their costs associated with better security. This creates a 3 way instead of a 5 way exchange for the security protocol setup. Fewer connections less load on their stuff and less stuff they have to buy.
The security landscape is littered with security implementations which tried improve existing protocols. Just type in the terms WAP and security for a story on how to take a secure starting point SSL and bugger it.
Another is Microsoft's introduction of PKINIT for keberos, kerberos is a proveably security protocol which is limitied by the entropy in a users password, MS "fixed" this with PKINIT however they initroduced replay attach vectors precisely because they wanted fewer exchanges. BTW google seems to have done a better job in this regard +1 for google, -1 for MS.
Re: (Score:2)
Re: (Score:2)
Google's reason for doing this is to lower their costs associated with better security. This creates a 3-way instead of a 5-way exchange for the security protocol setup. Fewer connections, less load on their stuff, and less stuff they have to buy.
IMO It's not just direct financial costs.
Google is now into mobile in a pretty big way. A "GSM based" smartphone would typically move between:
GPRS: encryption exists, but it's an old design and has security flaws that can't really be fixed due to compatibility with legacy equipment. Fortunately the equipment needed to exploit things is expensive enough to keep most people out.
3G: better encryption systems, but they can be subverted by forcing the phone to drop back to GPRS.
Public wifi networks: Either no encr
Re: (Score:2)
And yet there always seems to be somebody out there that's capable of finding the flaws that exist.
Yes, there are a relatively small number of people able to find those flaws, but it's still a large enough number of people that the flaws will be found at some point. And at any rate, the crypto has already been done; they're reusing TLS for the crypto.
Re: (Score:1)
The always-present question for UDP (Score:2, Insightful)
How do you stop a denial of service attack if both sides aren't required to maintain the overhead of the connection? TCP uses the overhead caused by ACK packets as a rate limiter on clients.
There are obviously high-bandwidth frameworks where you're already putting a strain on systems just by using them, where low latency is also critical, and UDP is appropriate; video chat comes to mind. But outside of that very limited purview, what good would encrypting UDP actually do?
Re: (Score:3, Insightful)
How do you stop a denial of service attack if both sides aren't required to maintain the overhead of the connection?
How do you stop it if someone does not bother to respect the rate limiter? You are assuming that someone doing something bad is going to play by the rules.
Re: (Score:1)
This is mostly covered either in the QUIC FAQ or the design doc.
But, attempting to answer your questions given material in the article and some sprinkling of industry knowledge: TCP is subject to DoS attacks with SYNs. There are mitigation techniques in there, but.. look.. you've received the packet already and have to do some processing on it to figure out if you should discard or not. This will be true of *any* protocol, TCP, UDP, SCTP, whatever.
The purpose of the encryption is twofold:
1) it makes it less
Re:The always-present question for UDP (Score:4, Interesting)
QUIC uses an equivalent of SYN cookies to prevent some kinds of DoS. It also uses packet reception proofs to prevent some ACK spoofing attacks that TCP is vulnerable to. Overall it looks even better than TCP.
As for encryption, Google gives two reasons. They intend to run HTTP over QUIC and Google services are encrypted by default; it's more efficient for QUIC itself to implement encryption than to layer HTTP over TLS over QUIC. The other reason is that middleboxes do so much packet mangling that encryption is the only way to avoid/detect it.
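For a feel of how a stateless "SYN cookie equivalent" can work, here is a rough Python sketch; the key handling and token format are invented for illustration and are not QUIC's actual wire format. The server mints a self-authenticating token instead of allocating per-client state, and only commits resources once the client echoes it back:

import hmac, hashlib, struct, time

SERVER_SECRET = b"rotate-me-regularly"   # hypothetical server-side key

def make_token(client_addr, now=None):
    # Bind the token to the claimed source address and a coarse timestamp;
    # the server stores nothing per client at this point.
    ts = struct.pack("!I", int(now if now is not None else time.time()) // 60)
    mac = hmac.new(SERVER_SECRET, ts + client_addr.encode(), hashlib.sha256)
    return ts + mac.digest()[:16]

def check_token(client_addr, token):
    # A client echoing a fresh, valid token has proven it can receive
    # packets at that address, so it's now worth allocating real state.
    ts, mac = token[:4], token[4:]
    expected = hmac.new(SERVER_SECRET, ts + client_addr.encode(),
                        hashlib.sha256).digest()[:16]
    age = int(time.time()) // 60 - struct.unpack("!I", ts)[0]
    return 0 <= age <= 2 and hmac.compare_digest(mac, expected)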
Re: (Score:2)
I take it you've never heard of tarpits. Depending upon the type of DoS or DDoS, you can burn through an incredible amount of the attacker's processing power without straining your server, but it really depends upon the type of attack and the specifics of your setup.
Re: (Score:2)
Games, document sharing, haptics, real time text chat.
Re: (Score:1)
Games, document sharing, haptics, real time text chat.
Please, if you can't use SSL+TCP for text chat and keep it real time, you've got horrendous software. Moreover, the potentially lossy nature of UDP is really bad for text. You can outright lose data. Your packets can arrive out of order. It's okay with video data where a hiccup only makes a few missing pixels, but with text, that's a terrible idea.
QUIC is more like TCP in these ways, exception to (Score:5, Insightful)
They could have, but QUIC is "better" for their use cases. In many ways, it's like an improved version of TCP. It runs on top of UDP simply
because routers, firewalls, etc. often only speak TCP and UDP. From the FAQ:
> it is unlikely to see significant adoption of client-side TCP changes in less than 5-15 years. QUIC allows us to test and experiment with new ideas,
> and to get results sooner. We are hopeful that QUIC features will migrate into TCP and TLS if they prove effective.
> You can outright lose data. Your packets can arrive out of order. It's okay with video data where a hiccup only makes a few missing pixels,
> but with text, that's a terrible idea.
Unless of course the protocol you're running over UDP handles that stuff, just like TCP handles that stuff.
Normally, it's a bad idea to use UDP to run a protocol that has in-order packets, guaranteed delivery, etc. because TCP already gives you that.
Why re-invent TCP? Unless you're going to spend a few million dollars on R&D to make your UDP-based protocol actually be better than TCP,
you should just use TCP.
That "unless you're going to spend a few million dollars on R&D" is the key here. Google DID make the investment, so the protocol actually does
work better for the particular use than TCP does.
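To make concrete what "handling that stuff" on top of UDP means, here is a toy stop-and-wait sketch in Python; it is purely illustrative, and QUIC's real loss recovery is far more elaborate than a single retransmission timer:

import socket, struct

def reliable_send(sock, dest, payload, seq, timeout=0.5, max_tries=5):
    # Prefix a sequence number and retransmit until the peer ACKs it:
    # the same basic guarantees TCP gives, rebuilt above UDP.
    sock.settimeout(timeout)
    packet = struct.pack("!I", seq) + payload
    for _ in range(max_tries):
        sock.sendto(packet, dest)
        try:
            ack, _ = sock.recvfrom(4)            # wait for a 4-byte ACK
            if struct.unpack("!I", ack)[0] == seq:
                return                           # receiver confirmed this seq
        except socket.timeout:
            continue                             # packet or ACK lost: resend
    raise TimeoutError("no ACK for seq %d" % seq)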
Re: (Score:2)
Re: (Score:2)
How do you stop a denial of service attack if both sides aren't required to maintain the overhead of the connection? TCP uses the overhead caused by ACK packets as a rate limiter on clients.
The "zero" RTT is like TLS session resumption or session tickets in that it only works by assuming a set of initial parameters. If it fails then you fallback to additional rounds to hello/ handshake TLS parameters.
Re: (Score:2)
While the inner workings of protocols are something I've not looked at and suspect are over my head, this does sound interesting, and, whatever else Google is or isn't doing, I'm glad they're continuing to do some interesting research and fooling around with things.
Not necessarily the right place (Score:3, Insightful)
I have no objection to protocol experiments that are 100% Open Source implementations. I wouldn't trust one that was not, and an Open Standard is just instructions for people who make implementations.
But it seems that a lot of this might belong in a system facility rather than the browser and server. I don't know if it makes sense to put all of TLS in the kernel, but at least it could go in its own process. Using UDP is fine for an experiment, but having 200 different ad-hoc network stacks each tied into their own application and all of them using UDP is not.
Re: (Score:3)
That's why we have Linux. You can get a real OS implementation into users' hands immediately. You only need these poor half measures for the Microsoft version.
Re: (Score:2)
Immediately you say? Android users might disagree.
Re: (Score:2)
I wasn't saying that all of the Red Hat Enterprise Linux users would install it immediately in their mission critical systems on Wall Street, either.
But we can give you a significant number of users of a real kernel for your experiment.
Re: (Score:2)
"That's why we have Linux."
That's an absurd and meaningless statement. It may be valuable, but it's not "why", and you don't speak for everyone. There are others whose opinions are far more central to that question than yours.
"I have no objection to protocol experiments..."
What a relief! Google can go ahead on now that it has your blessing.
"But we can give you a significant number of users..."
Because they are yours to give?
If there's one thing you make clear here, Bruce Perens, it's conceit.
Re: (Score:1)
Those who ignore history are bound to make really big fools of themselves on Slashdot.
Go away, troll.
Re: (Score:3)
I think Google intends to put it in the kernel once they have finished actually designing and standardizing it. Since it would take 10-15 years to get QUIC into the Windows kernel, they're putting it in Chrome as a stopgap.
Re: (Score:3)
Well, hopefully a library at least. That's how some OS's are handling DTLS, which is similar.
That initial question of mine is addressed (partially) in the FAQ:
Re: (Score:1)
Yes, I am not going to lose sleep over communicating with GMail via QUIC. I already assume anything I store there is in NSA vaults.
It's all about ads and tracking (Score:1)
The point of this is to improve performance for tiny HTTP transactions. The need for all those tiny transactions comes from ads and tracking data and their associated tiny bits of Javascript and CSS. The useful content is usually one big TCP send.
Blocking of all known tracking systems [mozilla.org] is a simpler solution.
Re: (Score:3)
You should open up the perf tab of your browser and look at this page to see if it supports your conclusions.
Re: (Score:2)
Lol, and sadly, it does. But it isn't true for many other sites. :)
Re: (Score:2)
Re: (Score:3)
QUIC has congestion control. (I suppose your brain would explode if you saw uTP, which runs over UDP yet is even less aggressive than TCP.)
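For reference, uTP's congestion controller (LEDBAT) backs off when one-way queueing delay rises, before any loss happens, which is why it yields to TCP. Here is a rough sketch of the window update in Python, with illustrative constants rather than the exact RFC 6817 values:

TARGET_DELAY = 0.100    # seconds of queueing delay we tolerate
GAIN = 1.0

def update_cwnd(cwnd, base_delay, current_delay, bytes_acked, mss=1448):
    # Estimated queue buildup = current delay minus the observed minimum.
    queuing = current_delay - base_delay
    off_target = (TARGET_DELAY - queuing) / TARGET_DELAY
    # Grow while under target, shrink as the queue fills; scaled so one
    # window's worth of ACKs moves cwnd by at most GAIN * mss bytes.
    cwnd += GAIN * off_target * bytes_acked * mss / cwnd
    return max(cwnd, 2 * mss)            # keep a sane floor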
Re: (Score:2)
Wrong. QUIC is better at congestion control than TCP, and is fair when used alongside TCP. QUIC monitors both packet loss and latency, which gives it more information than TCP for flow control.
BS, there are a number of congestion control algorithms for TCP that use latency.
The ACKs also include proof of received packets, so an invalid ACK attack to cause a server to flood a network (which works with TCP) does not work with QUIC
An ACK attack requires guessing sequence numbers or being able to spy on the data path, which severely limits its usefulness vs. much, much lower-hanging fruit (DNS/chargen amplification).
QUIC also optionally (when beneficial) includes FEC to recover lost packets, so it can still detect congestion via packet loss
Yeah, "FEC" as in sending duplicate packets, from what I've read.
but without the retransmission delay TCP gets.
I don't understand this shit. If there is no cost, then how can there be meaningful congestion avoidance? TCP has fast retransmit; why is that not enough?
Also, the multiple multiplexed streams over QUIC get to work together to collect congestion information, which further provides an advantage for congestion control over TCP.
Huh? What prevents the OS vendor from using d
Re: (Score:2)
Nope. Suppose you have 1% packet loss. By sending 1/50 of your data as parity packets, you can avoid most retransmission delays. Sending duplicate packets also works, but is a naive trivial case. There are much more sophisticated approaches as well (LDPC, Hamming codes, etc.). Regardless, there are many cases where a round trip delay is way worse than a small increase in data size, so selectively doing FEC where it is beneficial can be very useful. QUIC has the info to know when it's beneficial.
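A toy version of the single-parity-packet idea in Python (real schemes also carry each packet's length so the receiver can strip the zero padding):

def make_parity(packets):
    # XOR all packets together, zero-padding to the longest one.
    parity = bytearray(max(len(p) for p in packets))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    # XOR the parity with every packet that did arrive; what remains
    # is the one lost packet, with no round trip spent on retransmission.
    return make_parity(received + [parity])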
Read the design document again.. they say key packets in session setup can be proactively duplicated.. later they explicitly state that simple packet duplication counts as "FEC".. FEC is normally implemented within or below the packetization layer within the link, where it makes sense and can scale to replace individual corrupted symbols in the transmission stream.. when you get to the packet layer you're severely constrained; if you don't fill the MTU you are wasting resources. Correction codes will either con
Re: (Score:2)
The main reason to not use TCP is that you can roll your own handshake, flow control, and congestion detection, without relying on the baked-in static implementation your OS has.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Not to be as conspiratorial as some other posters here;
There's nothing conspiratorial about it. It's just the usual stupid-trying-to-look-clever attitude of some to immediately assume that any announcement of a new experiment or idea is doomed to failure.
Google aren't claiming this is going to be the next thing, yet. They're experimenting, and it's interesting that they're doing so, so let's watch this space instead of pissing on their very small, low-key parade.
Re:Because TCP is broken? (Score:5, Informative)
There are a lot of write-ups on issues with TCP, but one of the big issues that is starting to become a problem as speeds increase but latency stays fixed is the congestion control. Because TCP starts off slow and ramps up, it tends not to make use of available bandwidth. Unused bandwidth is bad. The other issue is that current TCP uses packet loss to decide when to back off. The issue this creates is that packet loss tends to affect a lot of connections at the same time. You get this synchronization where lots of computers experience packet loss all at the same time, so they all back off at the same time. Suddenly the route is under-utilized. All of the connections start building up again until the route is over-utilized, then they all back off at the same time.
This issue alone could possibly cause large portions of the Internet to fail. It has happened in the past and the variables are getting to be similar again. Essentially you're left with a large portion of the Internet routes in a constant violent swing between over-utilized and under-utilized.
You get this issue where the average utilization is low, but packet loss and jitter are high, relatively speaking.
There is a lot of theory on how to fight these issues, but the only real way to figure this out is to actually test these theories at large scale. A protocol that rides on top of UDP and runs in the application is the perfect place to test this. If something goes wrong, you just disable it. You can't do that with most OSes' TCP stacks.
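A quick back-of-the-envelope in Python shows the ramp-up problem; the MSS and initial window here are assumptions, not universal values:

import math

def rtts_to_fill(link_bps, rtt_s, mss=1448, init_cwnd_pkts=10):
    # Round trips of doubling needed before slow start fills the pipe.
    bdp_packets = link_bps * rtt_s / (8 * mss)   # bandwidth-delay product
    return max(0, math.ceil(math.log2(bdp_packets / init_cwnd_pkts)))

# At a fixed 50 ms RTT: ~9 round trips (~450 ms) to fill 1 Gb/s, and
# ~13 to fill 10 Gb/s; the latency cost grows even though RTT never moved.
print(rtts_to_fill(1e9, 0.050), rtts_to_fill(10e9, 0.050))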
Re: (Score:2)
Mod parent up, please!
Re: (Score:2)
TCP has some major issues with congestion control that aren't playing well with bufferbloat.
Nothing plays well with bufferbloat; that's why you fix bufferbloat.
The Internet is bursty in nature. TCP takes too long to ramp up.
This is what TCP Quick-Start is for.
but one of the big issues that is starting to become a problem as speeds increase but latency stays fixed is the congestion control.
Because TCP starts off slow and ramps up, it tends not to make use of available bandwidth. Unused bandwidth is bad.
Path oversubscription is far worse.
The other issue is that current TCP uses packet loss to decide when to back off.
What else would it use?
The issue this creates is that packet loss tends to affect a lot of connections at the same time. You get this synchronization where lots of computers experience packet loss all at the same time, so they all back off at the same time. Suddenly the route is under-utilized. All of the connections start building up again until the route is over-utilized, then they all back off at the same time.
Hence the jitter parameter in the retransmit timer computation.
This issue alone could possibly cause large portions of the Internet to fail. It has happened in the past and the variables are getting to be similar again. Essentially you're left with a large portion of the Internet routes in a constant violent swing between over-utilized and under-utilized.
The historical congestive collapses occurred because nobody was using any congestion control.
There is a lot of theory on how to fight these issues, but the only real way to figure this out is
And RFCs and working code even. The year is 2013...please adjust your chronometer accordingly.
Re: (Score:2)
Just like in my nightmares (Score:2)
So... we're probably going to see new connection-flood DoS attacks like the ones that prompted SYN cookies a couple of decades ago. Application stacks will need to handle their own congestion control, and applications that do so poorly will negatively impact carrier networks. And, yay, a new variant of TLS when there are already several versions that aren't widely implemented, let alone deployed.
Oh, and in the application so that each of those problems can be addressed over and over. Yay!
Re: (Score:2)
Fed up with google "standards" (Score:1, Flamebait)
Here we are again.
After VP8 and protocol buffers, Google is at it again, providing a free replacement for an existing standard (DTLS here http://www.rfc-editor.org/rfc/rfc4347.txt [rfc-editor.org]).
But of course, Google's people know better and have more money. And the list goes on: Dart as a replacement for JavaScript, protocol buffers as a replacement for ASN.1, SPDY to replace HTTP. With Jingle, Google tried to replace the SIP protocol as well; at least there they extended an existing standard, but they dropped the support when stopping
Re: (Score:1)
Google, frankly, doesn't care what other people create. They suffer from the world's worst case of "not invented here" syndrome, and it's starting to seriously hinder the web in general.
They need to step back and stop trying to reinvent every wheel on their own. It's cringe-worthy to see them do this kind of self-centered stuff while hiding behind the facade of open specifications.
And yes, I mean hiding. It doesn't matter if this is open source with an open spec, because by the time people start relying on i
Re: (Score:2)
Oh, wait...
Re: (Score:1)
Skip your next physical, your kneejerk reflexes are working. The doc is actually an interesting read, and goes into their reasoning as to why they aren't recreating TCP, etc. I guess some people just find that kind of reading interesting, and some would rather be +5 awesome.
Why I question Google's motives (Score:2)
Reducing RTT for connection setup and encryption is a noble goal, yet I'm not clear on why, technically, this can't be solved without the reinvention of TCP over UDP.
TCP Fast Open coupled with TLS session tickets/Snap Start offers essentially the same possibility for actual transmission of an encrypted request before completion of the first round trip.
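For example, on Linux the client side of TCP Fast Open is a single flag; this sketch assumes a kernel and Python build that expose MSG_FASTOPEN, and a server that has already handed out a Fast Open cookie:

import socket

def tfo_request(host, port, payload):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        # The payload rides in the SYN itself, so no separate connect()
        # round trip is paid once the client holds a valid TFO cookie.
        s.sendto(payload, socket.MSG_FASTOPEN, (host, port))
        return s.recv(4096)
    finally:
        s.close()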
Multiple concurrent TCP streams normally end up having much the same properties as multiplexed UDP in the aggregate, so I don't buy the head-of-line-blocking koolaid either.
What I
QUIC; We were first! (Score:2)
http://www.cs.utexas.edu/users/sustik/QUIC [utexas.edu]
Why over UDP? (Score:2)
Re: (Score:3)
UDP provides a mechanism (source ports) for multiple client applications on the same host to coexist. Furthermore, through the mangling of source ports, NATs can allow multiple hosts running UDP applications to coexist and communicate with the same servers from behind one public IP.
If you created a new IP protocol, then you'd have to implement your own mechanism for multiple client applications on the same host to coexist. Furthermore, your system would likely break if two users behind the same NAT tried to acces
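The source-port point is easy to see on any host; two sockets get two distinct ephemeral ports, which a NAT can then rewrite independently:

import socket

a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("0.0.0.0", 0))    # port 0: let the OS pick an unused source port
b.bind(("0.0.0.0", 0))
print(a.getsockname()[1], b.getsockname()[1])   # two different ports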
Re: (Score:2)
SPDY Snappy QUIC Go Dart (Score:1)
What's with the naming convention? With Google releasing SPDY, Snappy and QUIC, I'm guessing they will run out of synonyms for 'fast' sooner than Apple will run out of cats...
Re: (Score:2)
Haven't you heard? Apple already ran out of cats. 10.9 is going to be called Mavericks [wikipedia.org].