Google To Propose QUIC As IETF Standard 84
As reported by TechCrunch, "Google says it plans to propose HTTP2-over-QUIC to the IETF as a new Internet standard in the future," having disclosed a few days ago that about half of the traffic from Chrome browsers is using QUIC already. From the article: The name "QUIC" stands for Quick UDP Internet Connection. UDP's (and QUIC's) counterpart in the protocol world is basically TCP (which in combination with the Internet Protocol (IP) makes up the core communication language of the Internet). UDP is significantly more lightweight than TCP, but in return, it features far fewer error correction services than TCP. ... That's why UDP is great for gaming services. For these services, you want low overhead to reduce latency and if the server didn't receive your latest mouse movement, there's no need to spend a second or two to fix that because the action has already moved on. You wouldn't want to use it to request a website, though, because you couldn't guarantee that all the data would make it.
With QUIC, Google aims to combine some of the best features of UDP and TCP with modern security tools.
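The delivery contract the summary describes can be sketched as a toy simulation — no real sockets, just the tradeoff: UDP drops and moves on, TCP retransmits until everything arrives. Loss rate, packet count, and function names here are illustrative, not taken from either protocol's actual mechanics:

```python
import random

def udp_send(packets, loss_rate, rng):
    """Fire-and-forget: lost packets are simply gone, but nothing waits."""
    return [p for p in packets if rng.random() >= loss_rate]

def tcp_send(packets, loss_rate, rng):
    """Retransmit until acknowledged: everything arrives, at the cost of
    extra round trips for every loss."""
    delivered, round_trips = [], 0
    for p in packets:
        while True:
            round_trips += 1
            if rng.random() >= loss_rate:
                delivered.append(p)
                break
    return delivered, round_trips

rng = random.Random(42)
data = list(range(1000))
udp_got = udp_send(data, 0.02, rng)          # some packets missing
tcp_got, rtts = tcp_send(data, 0.02, rng)    # complete, but extra round trips
```

This is why the summary says UDP suits gaming (a lost mouse movement is stale anyway) while a web page needs the TCP-style guarantee.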
What is wrong with SCTP and DCCP? (Score:5, Interesting)
These are well-established, well-tested, well-designed protocols with no suspect commercial interests involved. QUIC solves nothing that hasn't already been solved.
If pseudo-open proprietary standards are de rigueur, then adopt the Scheduled Transfer Protocol and Delay Tolerant Protocol. Hell, bring back TUBA, SKIP and any other obscure protocol nobody is likely to use. It's not like anyone cares any more.
Re: (Score:1)
you _do_ know that SCTP was by IBM, right?
Re: (Score:2)
Having seen the result of design-by-committee (i.e. design by politics instead of designing to fit a functional need), I can say that it doesn't work.
The outcome is almost always better when the protocol has actually been implemented, the kinks worked out, and then you ask others to use it. ...You know, useful is a necessary component of reusable...
But, if you're interested in FUD and a lack of progress instead of something which actually works, by all means do design-by-committee and get nearly useless pro
Re: (Score:2)
Dude, the code is open. Then a spec is written, potentially with modifications, if it proves useful.
You *don't* have to use it.
Re: (Score:3)
Do you know what the acronym "RFC" fucking means?!
If you have no intention of accepting "Comments" suggesting changes to your protocol, then WTF is the point of submitting a "Request For" them?
Re: (Score:2)
RFCs no longer mean request-for-comments, at least in the IETF context.
Re: (Score:2)
How about discussing the technical differences and pros & cons instead of the source then? A post below does that and is way more informative than just listing off other protocols and saying nothing about them.
Nooooooo!! It should be decided by a house vote. Face-painters rule!
Re:What is wrong with SCTP and DCCP? (Score:5, Informative)
SCTP, for one, doesn't have any encryption. QUIC integrates a TLS layer into it, in a way that avoids a lot of connection setup time. The best you could do in SCTP is to put it under DTLS, which won't be as fast. Second, SCTP has horrible fragmentation behavior -- NDATA was supposed to help, but didn't make it in. It uses TCP's congestion window system over the entire association, while QUIC also has pacing. And looking at RFC 2960, you'll see the names: Motorola, Cisco, Siemens, Nortel, Ericsson, and Telcordia. Generally someone has to pay engineers to make the standards.
As for the article, the UDP vs TCP discussion is a red herring. AFAICT, QUIC's use of UDP is for compatibility with existing IP infrastructure.
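The pacing point above can be illustrated with a sketch: a classic windowed sender bursts a whole congestion window back to back, while a paced sender spreads those packets across the round-trip time to avoid queue spikes. The function names and numbers are illustrative, not from the QUIC implementation:

```python
def burst_send_times(cwnd_packets):
    """Classic window burst: the whole cwnd goes out back to back at t=0."""
    return [0.0] * cwnd_packets

def paced_send_times(cwnd_packets, rtt_s):
    """Pacing: spread the cwnd evenly across one RTT."""
    gap = rtt_s / cwnd_packets
    return [i * gap for i in range(cwnd_packets)]

# e.g. 10 packets over a 100 ms RTT leave ~10 ms apart instead of all at once
times = paced_send_times(10, 0.1)
```

The burst variant is what a plain congestion-window sender does; the paced variant is gentler on router buffers along the path.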
Re:What is wrong with SCTP and DCCP? (Score:5, Interesting)
SCTP, for one, doesn't have any encryption.
Good, there is no reason to bind encryption to transport layer except to improve reliability of the channel in the face of active denial (e.g. TCP RST attack). A feature QUIC does not provide.
Managing transport and encryption in a single protocol makes the resulting system more brittle and complex. Improvements to TCP helps everything layered on top of it. Improvements to TLS helps everything layered on top of it.
Not having stupid unnecessary dependencies means I can benefit from TLS improvements even if I elect to use something other than IP to provide an ordered stream or I can use TCP without encryption and not have to pay for something I don't need.
QUIC integrates a TLS layer into it, in a way that avoids a lot of connection setup time.
TCP+TFO + TLS extensions provide the same zero RTT opportunity as QUIC without reinventing wheels.
I have yet to hear a coherent architectural justification for QUIC that makes sense... The reason Google pushes it is entirely *POLITICAL*: this is the path of least resistance, granting them full access to the TCP stack and congestion algorithms without having to work to build consensus with any other stakeholder.
Re: What is wrong with SCTP and DCCP? (Score:2, Interesting)
No, they don't... And packet loss is fairly common, at between 1.5 and 3% on "good" networks, and far worse in places like India.
If TLS frames are unaligned with the transport framing, then a packet loss delays the interpretation of any data in that TLS frame until the packet is recovered. That means, practically, that TLS on TCP is lots slower when there is any packet loss.
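The alignment point can be sketched with a toy model: if one TLS record spans several TCP segments, none of its bytes can be decrypted until the last segment has arrived, so a single retransmitted segment stalls the whole record. The arrival times below are hypothetical (1 ms spacing, one segment arriving a 100 ms retransmission late):

```python
def record_ready_time(segment_arrivals):
    """A record spanning several segments is decryptable only when the
    last of them has arrived; one late retransmission stalls it all."""
    return max(segment_arrivals)

# Three segments of one TLS record:
no_loss = record_ready_time([0.0, 0.001, 0.002])    # ready at 2 ms
one_loss = record_ready_time([0.0, 0.101, 0.002])   # middle segment lost,
                                                     # retransmitted ~100 ms late
```

QUIC sidesteps this by framing and encrypting per packet, so a loss only delays the data in that one packet.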
Re: (Score:2)
[...] TLS on TCP is lots slower when there is any packet loss.
And how is an (almost) stateless protocol like QUIC supposed to handle packet loss any better?
The previous write-ups about the Google protocols all read as if they were based on the premise that packet loss is a very, very rare occurrence. That's why they use what is effectively a stateless transport: because they assume that errors are rare. In other words, they are very bad at handling loss.
Coming from the old days of IPX vs TCP debates, I remember how the IPX proponents were going abruptly silent in the face of a
Re:What is wrong with SCTP and DCCP? (Score:5, Interesting)
SCTP, for one, doesn't have any encryption.
Good, there is no reason to bind encryption to transport layer except to improve reliability of the channel in the face of active denial (e.g. TCP RST attack).
I disagree. To me there's at least one really compelling reason: To push universal encryption. One of my favorite features of QUIC is that encryption is baked so deeply into it that it cannot really be removed. Google tried to eliminate unencrypted connections with SPDY, but the IETF insisted on allowing unencrypted operation for HTTP2. I don't think that will happen with QUIC.
But there are other reasons as well, quite well-described in the documentation. The most significant one is performance. QUIC achieves new connection setup with less than one round trip on average, and restart with none... just send data.
Improvements to TCP helps everything layered on top of it.
True, but TCP is very hard to change. Even with wholehearted support from all of the major OS vendors, we'd have lots of TCP stacks without the new features for a decade, at least. That would not only slow adoption, it would also mean a whole lot of additional design complexity forced by backward compatibility requirements. QUIC, on the other hand, will be rolled out in applications, and it doesn't have to be backward compatible with anything other than previous versions of itself. It will make its way into the OS stacks, but systems that don't have it built in will continue using it as an app library.
Not having stupid unnecessary dependencies means I can benefit from TLS improvements even if I elect to use something other than IP to provide an ordered stream or I can use TCP without encryption and not have to pay for something I don't need.
So improve and use those protocols. You may even want to look to QUIC's design for inspiration. Then you can figure out how to integrate your new ideas carefully into the old protocols without breaking compatibility, and then you can fight your way through the standards bodies, closely scrutinized by every player that has an existing TLS or TCP implementation. To make this possible, you'll need to keep your changes small and incremental, and well-justified at every increment. Oh, but they'll also have to be compelling enough to get implementers to bother. With hard work you can succeed at this, but your timescale will be measured in decades.
In the meantime, QUIC will be widely deployed, making your work irrelevant.
As for using TCP without encryption so you don't have to pay for something you don't need, I think you're both overestimating the cost of encryption and underestimating its value. A decision that a particular data stream doesn't have enough value to warrant encrypting it is guaranteed to be wrong if your application/protocol is successful. Stuff always gets repurposed, and sufficient re-evaluation of security requirements is rare (even assuming the initial evaluation wasn't just wrong).
TCP+TFO + TLS extensions provide the same zero RTT opportunity as QUIC without reinventing wheels.
Only for restarts. For new connections you still have all the TCP three-way handshake overhead, followed by all of the TLS session establishment. QUIC does it in one round trip, in the worst case, and zero in most cases.
There was much valid (IMO) criticism of SPDY, that it really only helped really well-optimized sites -- like Google's -- to perform significantly better. Typical sites aren't any slower with SPDY, but aren't much faster, either, because they are so inefficient in other areas that request bottlenecks aren't their problem, so fixing those bottlenecks doesn't help. But QUIC will generally cut between two and four RTTs out of every web browser connection. And, of course, it also includes all of the improvements SPDY brought, plus new congestion management mechanisms which are significantly bette
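The round-trip counts being argued over in this subthread can be tabulated. These are idealized numbers matching the discussion (TLS 1.2-era full handshake, no overlap between TCP and TLS setup); real stacks vary, and the function name is illustrative:

```python
def rtts_before_data(protocol, resumed=False):
    """Idealized round trips before application data can flow."""
    if protocol == "tcp+tls":
        # 1 RTT for the TCP 3-way handshake, then 2 (full handshake)
        # or 1 (session resumption) for TLS.
        return 1 + (1 if resumed else 2)
    if protocol == "tfo+tls":
        # TCP Fast Open lets the SYN carry data, but only on repeat
        # connections to a server that issued a TFO cookie earlier.
        return 1 if resumed else 3
    if protocol == "quic":
        # 1 RTT for a brand-new peer, 0 when resuming.
        return 0 if resumed else 1
    raise ValueError(protocol)
```

Under this accounting, the grandparent's point holds: TFO+TLS tickets only close the gap for restarts, while QUIC saves round trips on first contact too.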
Re: (Score:2)
I disagree. To me there's at least one really compelling reason: To push universal encryption.
Too bad the goal was not pushing universal security instead.
One of my favorite features of QUIC is that encryption is baked so deeply into it that it cannot really be removed. Google tried to eliminate unencrypted connections with SPDY, but the IETF insisted on allowing unencrypted operation for HTTP2. I don't think that will happen with QUIC.
What we need are systems that are actually secure not ones that pretend to be. Hard to get excited about a "feature" that is worthless against most threats.
True, but TCP is very hard to change. Even with wholehearted support from all of the major OS vendors, we'd have lots of TCP stacks without the new features for a decade, at least.
Is there a technical barrier with respect to TCP and/or TLS that makes addressing issues unrealistic? The Linux kernel has had TFO support for years, and it didn't take decades for SYN cookies or SACKs to hit all but the long tail. There must be at least a dozen or two TLS extensions by now. How
Re:What is wrong with SCTP and DCCP? (Score:5, Interesting)
I have yet to hear a coherent architectural justification for QUIC that makes sense... The reason Google pushes it is entirely *POLITICAL*: this is the path of least resistance, granting them full access to the TCP stack and congestion algorithms without having to work to build consensus with any other stakeholder.
Many years ago, in an earlier life, I tried to make changes through the IETF to an existing protocol. I was responsible for one of the major IRC servers, and still am, though IRC is effectively in maintenance only. IRC is a shit protocol - really, embarrassingly bad. So I set up a conversation - grabbed the developers of all of the major clients and servers, and got us all on a mailing list to try to do something about it. We ALL knew it was bad, and we all knew it needed a serious overhaul - if not a complete scrapping. We'd even fantasized about a non-TCP multipathing protocol that would be more appropriate for IRC. But like hell that was gonna happen.
This was a group of people that, for the most part, didn't make money from IRC. It was a hobby. We had no corporate agendas, no major impacts to our livelihoods, and the only constraint to implementation was our own time. In the six months we spent, we managed to publish one draft to the IETF. It expired and we effectively gave up. Building consensus is hard, time consuming, and quite honestly not worth the effort when you're talking about this kind of thing.
Google is in a position to just do it, and honestly, I'm fine with that. Otherwise everyone would pop up with an opinion, and nobody would get anywhere. That's why we haven't seen anything come up to rival TCP, even though TCP is pretty bad for a lot of applications.
The only point at which I'd have a problem is if their QUIC protocol isn't completely open and free, and totally unencumbered by intellectual property constraints (patents, etc). Otherwise, go for it - and give me a protocol api/sdk in C so I can give it a shot.
Re: (Score:1)
libquic: "sources and dependencies extracted from Chromium's QUIC Implementation with a few modifications and patches to minimize dependencies needed to build QUIC library." [github.com]
Re: (Score:3)
Working code speaks volumes in the standards process, and that is okay. You take on the risk that nobody will be interested in what you have built, or you may discover political opposition that you never counted on; if the resistance is strong enough, you get left holding the bag, having spent time and treasure on something that will never see wide use.
On the other hand if you start out with a large open consensus building process as you say its very likely you don't get anywhere, or end up with a bastardized d
Re: (Score:2)
Did SCTP have horrible behavior, or the tested implementation? The QUIC doc says nothing about that. QUIC vs SCTP is on page 8.
https://docs.google.com/docume... [google.com]
Re: (Score:3)
> QUIC solves nothing that hasn't already been solved.
Creating an IETF standard based on a working implementation isn't relevant to what problems it can service. While Google makes strategic and implementation mistakes, their technical research and solutions are usually quite good. The IETF is for this kind of documentation, i.e. producing high-quality, relevant technical documents that influence the way people design. The fact that someone might reject it as "NotInventedHere" is not a compelling reas
Re: (Score:1)
hey. we reject kings, etc etc etc
write a spec, get three interoperable implementations developed independently, then
let's go to town
100,000 lines of indigestible C++ doesn't quite look the same
Re: (Score:1)
WHY NOT USE SCTP over DTLS?
One recurring question is: Why not use SCTP (Stream Control Transmission Protocol) over DTLS (Datagram Transport Layer Security)? SCTP provides (among other things) stream multiplexing, and DTLS provides SSL-quality encryption and authentication over UDP. In addition to the fact that both of these protocols are at various stages of standardization, this combination is currently on a standards track, and described in a Network Working Group Internet Draft.
The largest v
Re: (Score:1)
These are well-established, well-tested, well-designed protocols with no suspect commercial interests involved. QUIC solves nothing that hasn't already been solved.
Yeah, but it's from Google, and whatever Google wants, Google gets. They've already done this with SPDY, rammed through the IETF with unseemly haste [acm.org] as "HTTP 2.0", with any objections either ignored or declared out of scope. I don't see how QUIC will be any different, the IETF will rename it to give the impression they had some input into the process, but that'll be all.
Re: (Score:1)
Re: (Score:1)
NIH syndrome, a Google specialty.
Re: (Score:2)
"NIH syndrome, a Google specialty."
What does the National Institutes of Health have to do with it?
(Although I'm guessing Rick Perry will abolish it if he becomes President)
Re: (Score:1)
http://lmgtfy.com/?q=nih+syndr... [lmgtfy.com]
Re: (Score:2)
Hence, whatever happened to RUDP?
Everyone keeps turning UDP into some pseudo TCP w/o all the extras....that's what RUDP was built for.
Well... (Score:1)
It sounds a bit like a coding cluster fuck.
Side note: Anyone using Comcast? I notice that all my Google services are extremely slow at least some of the day, everyday.
When I turn on VPN there is no issue. I'd like to say it's routing... but at this point don't we all know better?
Re: (Score:3)
The official explanation is that there is insufficient peering 'twixt Comcast (or $OTHER_ISP) and Google, and that's the congested link.
Of course, other Google services have no such problems at such times, which makes me suspect it's bullshit. But that's still the story.
Re: (Score:1)
Ahhh haven't heard that one....
Re: (Score:2)
The main difference I'd see is that it's much harder, if not impossible, to spoof an IP address in a TCP connection, considering that it takes a completed handshake before any meaningful traffic (read: lots of bits) can take place. I could, for example, see this making upstream filtering of DDoS attacks more difficult.
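The handshake-vs-spoofing point can be sketched: the server's randomly chosen initial sequence number travels back to the claimed source address in the SYN-ACK, so an off-path spoofer who never sees that packet can only guess what to acknowledge. This toy model ignores sequence-number prediction attacks and uses an illustrative function name:

```python
import random

def handshake_completes(server_isn, ack_value):
    """The server accepts the connection only if the final ACK
    acknowledges its initial sequence number + 1."""
    return ack_value == (server_isn + 1) % 2**32

rng = random.Random(7)
isn = rng.getrandbits(32)            # server picks a random 32-bit ISN

# A genuine (on-path) client saw the SYN-ACK and echoes ISN+1:
on_path = handshake_completes(isn, (isn + 1) % 2**32)

# An off-path spoofer never sees the SYN-ACK and must guess (1 in 2^32):
guess = rng.getrandbits(32)
off_path = handshake_completes(isn, (guess + 1) % 2**32)
```

This is why a completed TCP handshake is weak proof that the peer can at least receive traffic at its claimed address, which UDP-based protocols have to rebuild by other means (QUIC uses address-validation tokens).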
Just say no (Score:1)
Google just wants dominion over congestion algorithms for their own benefit.
Re: (Score:2)
And WaffleMonster just wants to bash Google for his/her own benefit (i.e. paid by MS).
I was in the room for the delivery of one-sided Google presentations touting the benefits of aggressive congestion schemes and ICW settings during TCPM meetings. I honestly fear the ease with which clients and servers can cheat and tinker for competitive advantage with completely user-land stacks embedded in applications.
Wording was probably a little unfair and oversimplified yet I believe in the general theme. I don't work for a large corporation and have little interest in this space other than promotion of an open dece
Oh good! (Score:5, Insightful)
I bet QUIC will make those DART-driven VP9 video services really SPDY.
Second link is empty A tag (Score:2)
The second link in the story ("half of the traffic from Chrome browsers is using QUIC already") is broken - it's an empty <a> tag, with no href. That also means we can't work around it, since we don't have any hints about the destination.
Re: (Score:3)
Though the first link's article does mention the destination, https://blog.chromium.org/2015... [chromium.org]
Proxies share key? (Score:2)
QUIC is designed so that if a client has talked to a given server before, it can start sending data without any round trips, which makes web pages load faster.
So, if I have three clients behind a NAT or proxy, do they all share the same TLS key? Does that mean my encryption is compromised in a WiFi environment?
Re: (Score:3)
No, and no more than usual.
Stupid NAT. (Score:4, Interesting)
It's impossible to introduce any new transport-layer protocols now, because the vast majority of connected devices are behind at least one layer of NAT, and that means transport protocols can only work if the routers support them. We're stuck with TCP and UDP, with no chance of deploying any potentially better alternatives.
Bring on IPv6! Coming soon since 1998.
Re: (Score:1)
So when everyone goes IPv6, when ISPs literally say "That single IPv4 address you used to have? This is the replacement, IPv6 starting with XXXX:XXXX:XXXX:0001"?
Do you think they are going to pull up all their existing systems, renumber every internal machine, make them all publicly accessible, give each a unique IP from the range allocated, etc.?
Or do you think they'll buy an IPv6 compatible router, slap it into the network as the same gateway on IPv4, and have it pick up the first IPv6 address offered from
Re: (Score:2)
Do you think they are going to pull up all their existing systems, renumber every internal machine, make them all publicly accessible, give each a unique IP from the range allocated, etc.?
Wow, it's almost like you're completely ignorant of how networking actually works, and yet still posted on slashdot anyway!
UDP works JUST FINE with NAT, if you haven't noticed.
Yes, as long as the firewall is stateful. Otherwise you need protocol support to receive responses.
People have this thing about NAT being evil but it's not.
No, certain protocols are evil, like SIP, and NAT exacerbates that evil. As such, it's not evil, just a massive PITA.
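The "stateful firewall" point above can be sketched: a NAT keeps a mapping from (internal address, port) to a public port, created by outbound traffic; inbound UDP is deliverable only if matching state already exists. The class and port numbers below are illustrative, not any real NAT implementation:

```python
class UdpNat:
    """Minimal sketch of the per-flow state a NAT keeps for UDP."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.mappings = {}   # (internal_ip, internal_port) -> public_port
        self.reverse = {}    # public_port -> (internal_ip, internal_port)

    def outbound(self, src):
        """Rewrite the source of an outgoing datagram, creating state."""
        if src not in self.mappings:
            port = self.next_port
            self.next_port += 1
            self.mappings[src] = port
            self.reverse[port] = src
        return (self.public_ip, self.mappings[src])

    def inbound(self, dst_port):
        """A reply is deliverable only if matching state exists; else None."""
        return self.reverse.get(dst_port)

nat = UdpNat("203.0.113.1")
pub = nat.outbound(("192.168.1.2", 5000))   # outbound packet creates state
reply_ok = nat.inbound(pub[1])              # reply matches -> delivered
unsolicited = nat.inbound(41000)            # no state -> dropped (None)
```

This is why QUIC-over-UDP traverses existing NATs (clients always send first), while a genuinely new IP protocol number would need every middlebox on the path to understand it.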
Re: (Score:2)
"I can name precisely one ISP in the UK that I know offers IPv6 connectivity"
Would you mind letting us know who that might be?
I'm considering moving ISP and want one where I can have a static IP address; looking to the future it seems to me that it's worth having IPv6 support now rather than a second upheaval in a few years time
Re: (Score:2)
Andrews & Arnold.
Re: (Score:2)
Thanks
I've heard good things about A & A and they're high on my list of potential new suppliers
Re: (Score:3)
It is coming, finally [google.com]. In 2010 0.1% of the connections to Google's services were native ipv6, and about the same used 6to4. Now, about 6% of the connections are native ipv6, while 6to4 is almost completely gone. 6% is enough that it's actually starting to matter. The fraction currently seems to be growing by 2.5 percentage points per year, though it might still be accelerating. So perhaps we will finally be free from the curse of NAT in a few more decades.
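The arithmetic behind "a few more decades" can be made explicit with a naive linear extrapolation of the figures quoted above (6% native IPv6 now, growing ~2.5 percentage points per year). Real adoption curves are typically S-shaped, so treat this as a back-of-the-envelope estimate only:

```python
def years_until(target_pct, current_pct=6.0, growth_pp_per_year=2.5):
    """Linear extrapolation of native IPv6 share, in years from now."""
    return (target_pct - current_pct) / growth_pp_per_year

half = years_until(50)        # years until half of connections are IPv6
everyone = years_until(100)   # years until (nominal) 100%
```

That gives roughly 18 years to the halfway point and closer to 38 to full coverage at a constant rate, which is consistent with the comment's "a few more decades" (and optimistic if growth stalls, pessimistic if it keeps accelerating).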
Re: (Score:2)
I send custom transport protocols, as well as the "wrong" protocols, over port 80, and they traverse NAT just fine.
Routers don't care about the content, just the port; the firewall may freak out, but that has nothing to do with routing.
Re: (Score:1)
shit is it that bad already? (Score:2)
From the summary:
UDP is significantly more lightweight than TCP, but in return, it features far fewer error correction services than TCP. ... That's why UDP is great for gaming services.
Stateful vs. Stateless, Connection Oriented vs Connectionless. These are all at the core of TCP/IP and specifically TCP vs. UDP and the reliability contract that each protocol provides. It's not that "one's better than the other at error correction."
Re: (Score:1)
One actually does error detection and re-transmits.
Given Google's past protocols (Score:2)
Does this mean all web pages are going to partially reload for 5 minutes, toggling between 2 resolutions, then give up with a static filled error message?