Taking Google's QUIC For a Test Drive 141
agizis writes "Google presented their new QUIC (Quick UDP Internet Connections) protocol to the IETF yesterday as a future replacement for TCP. It was discussed here when it was originally announced, but now there's real working code. How fast is it really? We wanted to know, so we dug in and benchmarked QUIC at different bandwidths, latencies and reliability levels (test code included, of course), and ran our results by the QUIC team."
Fuck you, site. (Score:5, Informative)
Javascript required to view *static* content?
No.
Learn how to write a webpage properly.
Re: (Score:2)
The content is actually on the page; they went to extra trouble to add a redirect on top of it, just to make their site fail without Javascript.
I guess browsers need "pay attention to refresh" to become an opt-in option.
Re:Fuck you, site. (Score:5, Interesting)
Fun trick: Copy the address into your URL bar, hit enter and then very quickly hit Escape.
Javascript isn't technically required to view the page (as shitty as that would be). They're just being dicks.
Re: (Score:3)
Heh. I was trying to read this from work 'cause, well, it is exactly the sort of work-relevant stuff that is worth checking out.
Stupid firewall here was stupid and blocked their site (as instant messaging or some other stupidity). No prob, I switch to ssh + tmux + w3m (where I have like 30 tabs open) and open it. Aaaaand hit their lame redirect. Luckily, hitting back was sufficient to solve that in w3m.
Re: (Score:2)
Says the ad agency shill?
It's not broken... (Score:2)
...but Google said something.
Let's fix it!
Re: (Score:3)
It is broken. Google just made a bad solution. Doesn't mean the problem doesn't exist.
Re: (Score:2)
To be clear, the solution isn't even done being implemented yet-- the project is working towards achieving correctness still, and hasn't gotten there yet. After that part is done, the work on optimization begins.
As it turns out, unsurprisingly, implementing a transport protocol which works reliably over the internet in all conditions isn't trivial!
Re: (Score:1, Interesting)
TCP is far from unbroken. TCP for modern uses is hacks upon hacks at best. Everyone knows this.
The problem is coming to an agreement as to what is the best way to get away from this to optimize things best.
SPDY worked fairly well, god knows what is happening there. It helped fix a lot of lag and could cut down some requests by more than half with very little effort on part of the developers or delivery mechanisms.
This... yeah not sure what is happening there either.
TCP is far from useful. It is terrib
Re: (Score:2)
I think most of the people screaming "TCP is broken!" are those with lots of bandwidth who have very specific uses. TCP seems to work quite well for almost everything I have thrown at it. I have a low-latency 500kbps down / 64kbps up Internet connection and do mostly SSH and HTTP. I am able to saturate my link quite well. I don't know if the QUIC guys are thinking about a significant portion of the population wh
Re: (Score:2)
TCP has limitations even on links like yours. In fact many of the limitations of TCP are worse on lossy, low-bandwidth links than on faster, more reliable ones. And the fact that you can saturate your link is not evidence that TCP isn't slowing you down -- given enough ACKs I can fill any pipe, but that doesn't mean I'm transferring data efficiently.
Also be careful how you characterize "wasteful of bandwidth". For example, a typical TCP over a typical DSL connection would be unusable without the FEC correct
Re: (Score:2)
You are entirely correct, but you seem to have missed the point. FEC at layer 3 is intended to handle the fact that some layer 2 networks are broken or unreliable and neither we users nor Google or any other content provider have any control over that.
Sometimes you're on a 25,000' above-ground DSL line that stretches in the heat of the sun and sends the SNR to hell. Sometimes the WiFi AP is just a bit further away than you'd want. Sometimes you're on a cell phone or other mobile data connection traveling
Re: (Score:2)
Part of the focus is on mobile devices, which often achieve fairly poor throughput, with large jitter and moderate to large RTTs... so, yes, there is attention to low-bandwidth scenarios.
Surprisingly, QUIC can be more efficient given how it packs stuff together, though this wasn't a primary goal.
Think about second-order effects:
Given current numbers, if FEC is implemented, it is likely that it would reduce the number of bytes actually fed to the network, since you end up sending fewer retransmitted packe
Re: (Score:2)
I think that you're forgetting that packet loss on a TCP stream incurs a retransmit.
So, when there is 33% loss, you end up sending rexmits with an overhead of 50% (33% of the first 33% lost would also be lost, etc., so the sum of the series p^i, where p = 1/3 and i goes from 1 -> infinity, converges at 50%).
In any case, you end up with an overhead of packets/bytes on the wire with rexmits as well.
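The arithmetic above can be checked with a few lines of Python (illustrative only; the 1/3 loss rate is the parent's hypothetical):

```python
# Retransmit overhead under p = 1/3 packet loss: each retransmit is
# itself lost with probability p, giving the series p + p^2 + p^3 + ...
p = 1 / 3
overhead = sum(p**i for i in range(1, 200))  # truncated geometric series
# converges to p / (1 - p) = 0.5, i.e. 50% extra packets on the wire
```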
With XOR based FEC, it takes one FEC packet at MTU size to recreate any one lost packet in the range of packets co
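A toy sketch of the XOR-based FEC idea described above (hypothetical code, not QUIC's actual implementation): the FEC packet is the XOR of every data packet in a group, so any single lost packet can be rebuilt from the FEC packet and the survivors.

```python
from functools import reduce

def xor_bytes(a, b):
    # XOR two equal-length byte strings
    return bytes(x ^ y for x, y in zip(a, b))

packets = [b"AAAA", b"BBBB", b"CCCC"]
fec = reduce(xor_bytes, packets)  # the single FEC packet for the group

# packet 1 is lost; recover it from the FEC packet and the other packets
recovered = reduce(xor_bytes, [fec, packets[0], packets[2]])
assert recovered == packets[1]
```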
Re: (Score:2)
Actually non-Google studies suggest the SPDY is only marginally helpful in decreasing page load times unless there's aggressive server push of dependent documents AND favorable parameters and network conditions for the underlying TCP connection. For example, SPDY does very poorly on lossy connections, particularly with the default TCP recovery settings. And even server push has problems -- in addition to requiring configuration it bypasses the client cache mechanism, and on low-bandwidth connections the add
Re: (Score:2)
I'd be curious to see that study (or those studies) -- can you tell me which one(s), so I can go read 'em and fix stuff?
I thought I was aware of most of the SPDY/HTTP2 studies, but that is becoming more and more difficult these days!
Better on Paper, Worse In Reality (Score:1)
I understand the limitations of TCP, and although QUIC may look good on paper, the benchmarks provided in the link provided show that in every test QUIC failed miserably and was far worse than TCP. So the real-world benefits of QUIC would be what then? Once Google has a protocol that actually out-performs the tried and true on every front then bring it to the party, otherwise just stahp already.
Re: (Score:1)
The private sector always does a better job, you fucking heathen.
Re: (Score:2)
Yeah. Google is breaking the internet, selling off its users, and generally being a Facebook parody, and YouTube co-founder Jawed Karim had something (however brief) to say about it [theguardian.com]. It's a case study in why selling off your internet startup that happens to fulfill your life dreams and customer needs should be a worst-case scenario, not a bloody business model.
Re: (Score:1)
alpha is, if your pages are all 10MB single files (Score:5, Informative)
As I understand it, QUIC is largely about multiplexing - downloading all 35 files needed for a page concurrently. The test was the opposite of what QUIC is designed for
TCP handles one file at a time* - first download the html, then the logo, then the background, then the first navigation button ....
QUIC gets all of those page elements at the same time, over a single connection. The problem with TCP and the strength of QUIC is exactly what TFA chose NOT to test. By using a single 10 MB file, their test is the opposite of web browsing and doesn't test the innovations in QUIC.
* browsers can negotiate multiple TCP connections, which is a slow way to retrieve many small files.
Re: (Score:3, Informative)
Thanks. What were web page results? (Score:2)
Thank for that info, and for making your test scripts available on Github.
I'm curious* what were the results of web page tests? Obviously a typical web page with CSS files, Javascript files, images, etc. is much different from a monolithic 10 MB file.
* curious, but not curious enough to run the tests for myself.
Re:Thanks. What were web page results? (Score:4, Informative)
Re: (Score:3)
The benchmark looked well constructed, and as such is a fair test for what it is testing: unfinished-userspace-QUIC vs kernel-TCP
It will be awesome to follow along with future runs of the benchmark (and further analysis) as the QUIC code improves.
It is awesome to see people playing with it, and working to keep everyone honest!
Re: (Score:3, Informative)
As I understand it, QUIC is largely about multiplexing - downloading all 35 files needed for a page concurrently. The test was the opposite of what QUIC is designed for
TCP handles one file at a time* - first download the html, then the logo, then the background, then the first navigation button ....
QUIC gets all of those page elements at the same time, over a single connection. The problem with TCP and the strength of QUIC is exactly what TFA chose NOT to test. By using a single 10 MB file, their test is the opposite of web browsing and doesn't test the innovations in QUIC.
* browsers can negotiate multiple TCP connections, which is a slow way to retrieve many small files.
What the hell are you talking about? You're conflating HTTP with TCP. TCP has no such limitation. TCP doesn't deal in files at all.
Read up on QUIC. if (tcp && http) stream== (Score:2)
> You're conflating HTTP with TCP.
I'm discussing how HTTP over TCP works, in contrast to how it works over QUIC.
TCP provides one stream, which when used with HTTP means one file.
QUIC provides multiple concurrent streams specifically so that http can retrieve multiple concurrent files.
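A rough sketch of the stream-multiplexing idea (hypothetical framing, nothing like QUIC's real wire format): each frame carries a stream ID plus a chunk of payload, so several resources interleave over one connection.

```python
def frames(files):
    """Yield (stream_id, chunk) frames, round-robin across streams."""
    chunks = {sid: [data[i:i + 4] for i in range(0, len(data), 4)]
              for sid, data in files.items()}
    while any(chunks.values()):
        for sid, parts in chunks.items():
            if parts:
                yield sid, parts.pop(0)

def reassemble(frame_iter):
    """Rebuild each stream from its interleaved frames."""
    out = {}
    for sid, chunk in frame_iter:
        out[sid] = out.get(sid, b"") + chunk
    return out

files = {1: b"<html>...</html>", 2: b"body { }", 3: b"\x89PNG data"}
result = reassemble(frames(files))
assert result == files  # all three "files" arrive intact, interleaved
```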
Re: (Score:2)
Actually not quite.
IP provides a bunch of packets: there is no port number.
TCP and UDP provide multiplexing over IP by introducing the port number concept.
Re: (Score:3)
Sadly ports are dead, and we're watching them get re-invented. We've gone from the web being a service built on the internet, to the internet being a service built on the web (well, on port 80/443). Security idiots who mistook port-based firewalling for something useful have killed the port concept, and now that we're converging on all-443-all-the-time, we have to re-invent several wheels.
Re: (Score:2)
Yup; True
yes you rewrite http and while you're at it (Score:2)
Yes, you just need to redo http. While you're redoing http, you make several improvements.
As you improve http, you realize the biggest performance issues for http come from the fact that it's limited by sending and receiving via an ancient protocol that wasn't designed to carry http. Http doesn't run atop TCP because it's a good fit - it runs atop TCP because that's what was available.
I think it's more like designing automobiles 3.0, designed to go 300 MPH, and realizing that if you want
Re: (Score:2, Informative)
I haven't RTFA and I don't know much about QUIC, but if it's what you suggest...
As I understand it, QUIC is largely about multiplexing - downloading all 35 files needed for a page concurrently. The test was the opposite of what QUIC is designed for
TCP handles one file at a time* - first download the html, then the logo, then the background, then the first navigation button ....
...then it sounds like a really horrible idea. If I click on a link will I have to wait for 20MB of images to finish downloading before I can read the content of a webpage, only to find out it wasn't what I was looking for anyway?
Re: (Score:3)
AC, don't worry.
TCP is simply a reliable, in-order stream transport.
HTTP on TCP is what was described, and, yes, not the best idea in today's web (though keep in mind that most browsers open up 6 connections per hostname), but that is also why HTTP2 is working on being standardized today.
well yeah, html then css, js, images concurrently (Score:2)
Yeah, I assume that's obvious. Send the html first, then css, js, and images concurrently.
Also, I hope you don't browse too many pages with 20MB of images. :)
Re: (Score:2)
One does wonder though how it would compare vs. an extension to keepalive that truly multiplexes OR that allows queueing requests in the event that the web server might be able to do something useful if it knows what it will be sending next.
There may be cases where FEC is useful at the protocol layer, but it seems like the link layer is more likely to be the right place for it. The link knows what sort of link it is and how likely it might be to lose data. That would also mean that if there are multiple lo
Re: (Score:2)
Right now QUIC is unfinished, so I hesitate to draw conclusions about it. :)
What I mean by unfinished is that the code does not yet implement the design; the big question right now is how the results will look after correctness has been achieved and a few rounds of correctness/optimization iterations have finished.
Only one thing. (Score:5, Funny)
Paragraph 1 of RFC:
User SHALL register for Google Plus
Re: (Score:2)
MUST
And free ddos (Score:5, Informative)
The current problem with UDP is that many border routers do not check whether outgoing UDP packets are from within their network. This is the basis for DNS-based DDoS attacks. They are very difficult to mitigate at the server level without creating openings for Joe job attacks instead... Standardizing on UDP for other protocols will exacerbate this problem.
Re: (Score:2)
How would this be worse than a SYN flood attack today?
Re: (Score:2)
The current problem with UDP is that many border routers do not check whether outgoing UDP packets are from within their network. This is the basis for DNS-based DDoS attacks. They are very difficult to mitigate at the server level without creating openings for Joe job attacks instead... Standardizing on UDP for other protocols will exacerbate this problem.
This is incorrect. Ingress filtering is a global IP layer problem.
TCP handles the problem with SYN packets and SYN cookie extensions to prevent local resource exhaustion by one sided SYNs from an attacker.
A well designed UDP protocol would be no more vulnerable to this form of attack than TCP using the same proven mechanisms of TCP and other better designed UDP protocols (DTLS).
DNS can also be fixed the same way using cookies but seems people are content to make the problem worse by implementing DNSSEC and
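The SYN-cookie-style mechanism mentioned above can be sketched like this (simplified and hypothetical; real SYN cookies also encode timestamps and MSS values): the server keeps no state, and only commits resources once the client echoes back a token proving it can receive at its claimed address.

```python
import hashlib
import hmac
import os

SECRET = os.urandom(16)  # per-server secret key

def make_cookie(client_addr: str) -> bytes:
    """MAC over the claimed source address; nothing stored server-side."""
    return hmac.new(SECRET, client_addr.encode(), hashlib.sha256).digest()

def verify_cookie(client_addr: str, cookie: bytes) -> bool:
    """Only a client that actually received the cookie can echo it."""
    return hmac.compare_digest(make_cookie(client_addr), cookie)

c = make_cookie("203.0.113.7:443")
assert verify_cookie("203.0.113.7:443", c)        # legitimate echo
assert not verify_cookie("198.51.100.9:443", c)   # spoofed source fails
```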
Re: (Score:2)
It's in-effect correct because there are lots of UDP protocols designed before the general concept of "do not amplify unauthenticated solicitations with a larger reply" finally sunk in. (Or at least, sunk in among more serious protocol designers/implementers.)
Re: (Score:2)
It's in-effect correct because there are lots of UDP protocols designed before the general concept of "do not amplify unauthenticated solicitations with a larger reply" finally sunk in. (Or at least, sunk in among more serious protocol designers/implementers.)
The parent was making a point against QUIC because it used UDP. It is a false statement. QUIC has appropriate mechanisms to prevent unsolicited mischief.
What DNS, SNMP, NTP and god knows what else did way back then have nothing at all to do with the topic at hand.
Re: (Score:3)
Because the ISPs can barely manage to tape the BGP infrastructure together in a stable fashion; there are numerous problems encountered when asking an L3 router to perform at the speeds demanded at peering locations, and keeping a full trust mesh of ASNs and IP prefixes is beyond the state of the art (you have to not only know whose advertisements you can trust, but whose readvertisements of whose advertisements you can trust, etc., etc.) Both strict and loose reverse-path filtering are rare to find in use
Thank you (Score:3)
Re:Thank you (Score:4, Informative)
It has existed for decades. It's called TCP.
Did you RTFA? This new protocol appears to have little to no advantages over TCP and significant disadvantages under some circumstances.
Re: (Score:1)
Re: (Score:2)
bprodoehl is absolutely correct-- the code is unfinished, and while the scenario is certainly one that is worried about, it isn't the focus of attention at the moment. The focus at the moment is getting the protocol working reliably and in all corner cases... Some of the bugs here can cause interesting performance degradations, even when the data gets transferred successfully.
I hope to see the benchmarking continue!
Re: (Score:2)
There's nothing fundamental to the TCP transport to make "several files in parallel" faster than "several files serially" between two endpoints. It's frankly bizarre that you're addressing that problem by discarding TCP.
And anyone who does invent a better protocol and doesn't work "TRUCK" into the acronym gets no respect from me!
Re: (Score:2)
Wait, what? :)
Where was that claimed?!
In any case:
TCP's implementations are almost without fail doing per-flow congestion control, instead of per-session congestion control/per-ip-ip-tuple congestion control. This implies that, if loss on path is mostly independent (and that is what data seems to show), per-flow congestion control backs off at a constant factor (N where N == number of parallel connections) more slowly than a single stream between the same endpoints would.
So, indeed, sending several files in
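A toy model of the parent's point (idealized AIMD-style arithmetic, not real TCP dynamics): when loss hits one of N parallel flows, only that flow halves its rate, so the aggregate backs off ever more gently as N grows.

```python
def aggregate_after_loss(n_flows, total_rate=100.0):
    """Aggregate rate after one flow sees loss and halves its share."""
    per_flow = total_rate / n_flows
    # the lossy flow halves; the other n_flows - 1 keep their full share
    return per_flow / 2 + per_flow * (n_flows - 1)

assert aggregate_after_loss(1) == 50.0   # one flow: aggregate halves
assert aggregate_after_loss(4) == 87.5   # four flows: only a 12.5% dip
```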
Re: (Score:2)
TCP is a stream protocol. UDP is a message protocol. They have different limitations and features and aren't always suitable for the same purposes. How do you expect to participate in a discussion about the limitations of TCP if you can't be bothered to learn even the basics of the existing protocols?
Re:Thank you (Score:5, Insightful)
You want a reliable *unreliable* datagram protocol protocol?
Sounds like something guaranteed to fail.
Everyone tries to reinvent TCP. Almost always they make something significantly worse. This is no exception.
Re: (Score:2)
1) Avoid the usually-doomed-from-the-start approach of starting at the standards committee level. Frame up the project, and let some really smart engineers go nuts. Take what works to a standards committee, and save the time that you would've spent arguing over stuff that might or might not have worked.
2) Make it work with the existing Internet infrastructure. Making everyone adopt a new TCP stack
Re: (Score:2)
Making everyone adopt a new TCP stack is probably not going to happen.
Neither is it likely to happen that a true multipath helper will be built into core routers (e.g. something that uses TTL counters to determine when to take the second/third/etc preferred route in their route table and leaves the job of computing preferable paths to the end systems.) Which means what really needs to happen... won't. We've reached a technological glaciation point where the existing install base is dictating the direction
Re: (Score:2)
Everyone tries to reinvent TCP. Almost always they make something significantly worse. This is no exception.
No, you misunderstand. The GP wants a reliable protocol like TCP but with datagram boundaries preserved like UDP. That's not a particularly unreasonable thing to want, since the packet boundaries do exist and it's a pain to put them into a stream.
In fact some systems provide exactly such a protocol, the Bluetooth L2CAP protocol, for example. It's quite appropriate for the ATT protocol, for example.
Re:Thank you - THIS (Score:3)
> reliable UDP protocol You want a reliable *unreliable* datagram protocol protocol? Sounds like something guaranteed to fail. Everyone tries to reinvent TCP. Almost always they make something significantly worse. This is no exception.
I once worked at a company that made Parking Meters - and accepted credit cards at them. They sent their data over https, and had random issues with timeouts.
It turns out they would format their data in (very descriptive) XML, and discovered that an excessively large file combined with an SSL handshake over a crappy 2G connection took too long to transfer the data (it didn't help that the programmers 'forgot' they hardcoded a timeout, so if the comms were just slow, it would throw a generic error and they blamed Apach
Re: (Score:3)
No, you really don't understand what this is useful for. They aren't "reinventing TCP" because they think they can do it better: they have a different problem domain and can do better than TCP for their specific problem.
TCP insists on strong ordering of data: it provides reliability AND ordering. Sometimes you don't want both of these, and giving up one or the other can get you big benefits.
For example, there are many classes of problems where you want reliability but are willing to lessen the ordering r
Morons (Score:2, Insightful)
UDP is for messages (eg. DNS) and real time data. TCP is far superior for bulk data because of the backoff mechanism, something they want to work around with that UDP crap.
QoS works perfectly with TCP because of TCP backoff.
So much wrong with this idea it makes my head hurt. It is OK to run game servers with UDP. It is OK for RT voice or even video to use UDP. It is not OK to abuse the network to run bulk, time insensitive transfers over UDP, competing with RT traffic.
What is the problem? Too many connectio
Re: (Score:1)
Re: (Score:2)
The back off mechanism is one of the problems they're trying to fix. Internet protocols need some way to control bandwidth usage, but there are a lot of limitations with the existing options in TCP. And if you RTFA you'd see they intend to provide alternative mechanisms to regulate bandwidth, addressing both the continuing need to avoid flooding and the limitations of TCP's back off mechanisms.
Plus stream protocols are inefficient when transferring multiple items (which is the typical use case for HTTP) an
Re: (Score:2)
The back off mechanism is one of the problems they're trying to fix. Internet protocols need some way to control bandwidth usage, but there are a lot of limitations with the existing options in TCP.
Like? Please be specific. This thread is getting old quick with people saying "TCP sucks" going on and on about how it just sucks without ever citing any technical justifications why that is so.
There are tons of congestion algorithms
http://en.wikipedia.org/wiki/TCP_congestion-avoidance_algorithm [wikipedia.org]
and extensions
http://en.wikipedia.org/wiki/TCP_tuning [wikipedia.org]
for TCP.
Re: (Score:3)
TCP doesn't suck.
TCP is, however, a bottleneck, and not optimal for all uses.
Part of the issue there is the API-- TCP has all kinds of cool, well-thought-out machinery which simply isn't exposed to the application in a useful way.
As an example, when SPDY or HTTP2 is layered on TCP, when there is a single packet lost near the beginning of the TCP connection, it will block delivery of all other successfully received packets, even when that lost packet would affect only one resource and would not affect the fr
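The head-of-line blocking described above can be modeled in a few lines (simplified; real TCP tracks byte ranges, not segment numbers): received segments sit undeliverable until the gap at the front of the sequence is filled.

```python
def deliverable(received, next_expected=0):
    """Return segments that can be handed to the application, in order."""
    out = []
    while next_expected in received:
        out.append(next_expected)
        next_expected += 1
    return out

# segment 0 was lost; 1..5 arrived but nothing can be delivered yet
assert deliverable({1, 2, 3, 4, 5}) == []
# once 0 is retransmitted, everything flushes at once
assert deliverable({0, 1, 2, 3, 4, 5}) == [0, 1, 2, 3, 4, 5]
```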
Re: (Score:2)
TCP is, however, a bottleneck, and not optimal for all uses.
I think this is obvious to everyone. Hammers are not optimal for all uses. TCP provides an ordered reliable byte stream. Not all applications require or benefit from these properties. Using the right tool for the right job tends to provide the best results.
TCP has all kinds of cool, well-thought-out machinery which simply isn't exposed to the application in a useful way.
As an example, when SPDY or HTTP2 is layered on TCP, when there is a single packet lost near the beginning of the TCP connection, it will block delivery of all other successfully received packets
What are you referring to specifically? Is this before or after established state? Are you talking about some kind of SYN cross where data sent in the same round as SYN where SYN is
Re: (Score:2)
TCP implementations are very mature. As implementors, we've fixed most/many of the bugs, both correctness and performance related. TCP offers reliable delivery, and, excepting some particular cases of tail-loss/tail-drop, knows the difference between packets that are received but not delivered and packets that are neither received nor delivered.
TCP has congestion control in a variety of different flavors.
TCP has various cool extensions, e.g. MPTCP, TCP-secure (not an RFC, but a working implementation), TFO, et
Re: (Score:2)
And sorry if I sound frustrated about it... I am *really* frustrated by the current state of the world w.r.t. parallel connections. It makes my life such a pain in the butt!
Re: (Score:2)
Please go read TFA. Congestion control is part of the proposed protocol & not all transport needs the dual three way handshake and round trip latency that TCP imposes - sequence numbers & congestion control look to be more than enough for some.
Big mistake (Score:3, Funny)
Re:Big mistake (Score:4, Funny)
And if they do, they won't acknowledge it.
Benchmarking premature; QUIC isn't even 100% coded (Score:5, Informative)
As someone working with the project.
The benchmarking here is premature.
The code does not yet implement the design; it is just barely working at all.
Again, they're not (yet) testing QUIC-- they're testing the very first partial implementation of QUIC!
That being said, it is great to see that others are interested and playing with it.
Re: (Score:2)
Re: (Score:2)
Nah; it is valuable for many people to be doing this benchmarking even with the current state of code... it just requires careful explanation of what the benchmark entails!
Concluding that buggy-unfinished-QUIC is slower than TCP is absolutely valid, for instance.
That isn't the same as QUIC being slower than TCP (at least, not yet!)
Re: (Score:2)
The benchmarking itself is awesome-- it is good to have people playing with it.
Concluding things right now, when the implementation is working towards correctness instead of optimality, however, is potentially misleading.
UDT (Score:1)
Hmmm, several posts, yet no mention of UDT [sourceforge.net] so far. It would be nice if the benchmark included it.
wherever you go, there you aren't (Score:5, Informative)
Those fuckers at www.connectify.me redirected my connection attempt to
http://www.connectify.me/no-javascript/ [connectify.me] so that even after I authorized Javascript for their site I was unable to navigate to my intended destination (whatever shit they pulled did not even leave a history item for the originally requested URL).
This sucks because I middle-click many URLs into tabs I might not visit until ten minutes later. If I had a bunch of these tabs open I wouldn't even have been able to recollect where I had originally been. In this case, I knew to come back here.
Those fuckers at www.connectify.me need to procure themselves an Internet clue stick PDQ.
Re: (Score:2)
You're browsing it wrong.
Nerfing congestion avoidance for increased profits (Score:3)
May I be so bold as to suggest that graphs citing only x performance improvement for protocol y are insufficient, harmful and useless measures of usable efficiency. We know how to make faster protocols... the challenge is faster while preserving generally meaningful congestion avoidance. This part is what makes the problem space non-trivial.
Look at TFA and the Connectify links: it is all performance talk, with total silence on addressing or simulating the congestion characteristics of the protocol.
Having sat in on a few tcpm meetings, it is always the same with Google... they show data supporting that by doing x there will be y improvement, but never as much enthusiasm for consideration of the secondary repercussions of the proposed increased network aggression.
My personal view RTT reductions can be achieved thru extension mechanisms to existing protocols without wholesale replacement. TCP fast open and TLS extensions enabling 0 RTT requests thru the layered stack...experimental things for which "code" exists today can provide the same round trip benefits as QUIC.
What Google is doing here is taking ownership of the network stack and congestion algorithms away from the current chorus of stakeholders and granting themselves the power to do whatever they please. No need to have a difficult technical discussion or get anyone's opinions or signoff before dropping in a new profit-enhancing congestion algorithm, which could very well be tuned to prefer Google traffic globally at the expense of everyone else's... they control the clients and the servers... done deal.
There are two fundamental improvements I would like to see regarding TCP.
1. Session establishment in the face of in-band adversaries adding noise to the channel. Currently TCP connections can be trivially reset by an in-band attacker. I think resilience to this, which necessarily binds security to the network channel, can be a marginally useful property in some environments, yet is mostly worthless in the real world as in-band adversaries have plenty of other tools to make life difficult.
2. Efficient Multi-stream/message passing. Something with the capabilities of ZeroMQ as an IP layer protocol would be incredibly awesome.
Re: (Score:2)
I tend to agree. I am glad that someone is trying to create a better TCP. If they fail, we validate that TCP is a good idea. If they succeed, then we can have a better protocol.
If the QUIC exercise is successful, then the IETF should consider extending TCP to support the relevant features. For example, their point about multiple streams is a good one. Perhaps TCP should define an option to open N simultaneous connections with a single 3-way handshake. Existing implementations would ignore the new optio
Re: (Score:2)
If the QUIC exercise is successful, then the IETF should consider extending TCP to support the relevant features. For example, their point about multiple streams is a good one. Perhaps TCP should define an option to open N simultaneous connections with a single 3-way handshake. Existing implementations would ignore the new option bytes in the header so nothing would break.
While TCP is ancient there has been continuous work to improve it over the years. I think most people throwing stones here have not taken the time to look around and understand the current landscape. Indeed many ideas in QUIC are good ones yet not a single one of them are something new or something that had not been implemented or discussed in various WGs.
Regarding multiple streams what effectively is the difference between this and fast open? I send a message the message arrives and is processed immed
Re: (Score:2)
Doesn't IPSEC protect against this?
SCTP (Score:2)
And have they taken various security considerations into account, e.g. SYN floods?
Re: (Score:2)
It is similar in some ways, and dissimilar in other ways.
One of the hoped-for outcomes of the QUIC work is that the lessons learned get incorporated into other protocols like TCP or SCTP.
QUIC absolutely takes security into account, including SYN floods, magnification attacks, etc.
Pacing, Bufferbloat (Score:5, Interesting)
The slides refer to a feature called "pacing" where it doesn't send packets as fast as it can, but spaces them out. Can someone explain why this would help? If the sliding window is full, and an acknowledgement for N packets comes in, why would it help to send the next N packets with a delay, rather than send them as fast as possible?
I wonder if this is really "buffer bloat compensation" where some router along the line is accepting packets even though it will never send them. By spacing the packets out, you avoid getting into that router's bloated buffer.
From the linked slides:
Does Packet Pacing really reduce Packet Loss?
* Yes!!! Pacing seems to help a lot
* Experiments show notable loss when rapidly sending (unpaced) packets
* Example: Look at 21st rapidly sent packet
- 8-13% lost when unpaced
- 1% lost with pacing
Re: (Score:2)
The slides refer to a feature called "pacing" where it doesn't send packets as fast as it can, but spaces them out. Can someone explain why this would help? If the sliding window is full, and an acknowledgement for N packets comes in, why would it help to send the next N packets with a delay, rather than send them as fast as possible?
I wonder if this is really "buffer bloat compensation" where some router along the line is accepting packets even though it will never send them. By spacing the packets out, you avoid getting into that router's bloated buffer.
Yes, essentially it is a hedge against the probability of future packet loss. I don't know about QUIC, but with TCP, packet loss statistically tends to occur toward the end of a window rather than the start, and is therefore normally more expensive to correct.
Re: (Score:2)
Re:Pacing, Bufferbloat (Score:4, Interesting)
What seems likely is that when you generate a large burst of back-to-back packets, you are much more likely to overflow a buffer, causing packet loss.
Pacing makes it less likely that you overflow the router buffers, and so reduces the chance of packet loss.
TCP does actually do pacing, though it is what is called "ack-clocked". For every ACK one receives, one can send more packets out. Since the ACKs traverse the network and get spread out in time as they go through bottlenecks, you end up with pacing.... but ONLY when bytes are continually flowing. TCP doesn't end up doing well in terms of pacing out packets when the bytes start flowing and stop and restart, as often happens with web browsing.
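The burst-vs-paced distinction above can be sketched in a few lines. This is an illustrative toy, not code from QUIC or any TCP stack; the function name and parameters are my own. It just computes when each packet in one window departs: unpaced, everything leaves at once; paced, sends are spread evenly across the RTT so no single burst slams a bottleneck buffer.

```python
# Toy model of paced vs. unpaced sending (illustrative only, not QUIC code).
def departure_times(cwnd_packets, rtt_s, paced):
    """Return the send time (seconds) of each packet in one window."""
    if not paced:
        return [0.0] * cwnd_packets       # one back-to-back burst
    interval = rtt_s / cwnd_packets       # spread sends across the RTT
    return [i * interval for i in range(cwnd_packets)]

# A 10-packet window over a 100 ms RTT:
burst = departure_times(10, 0.1, paced=False)   # all leave at t=0
paced = departure_times(10, 0.1, paced=True)    # one every 10 ms
```

The ack-clocking described above produces the `paced` schedule for free on a steady flow; the point of explicit pacing is to get it even when a flow starts cold or restarts after an idle period.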
Re: (Score:3)
Reducing packet loss is not always a good thing. Packet loss is the mechanism an IP network uses to indicate a lack of capacity somewhere in the system. Bufferbloat is one attempt to eliminate packet loss, with very bad consequences: never throw packets away, just queue them for a very long time. Pacing can be the opposite side of that coin: send packets so slowly that loss never occurs, but that also means the transfer happens very slowly.
When many TCP connections are multiplexed onto a single link the maxim
Re: (Score:2)
The slides refer to a feature called "pacing" where it doesn't send packets as fast as it can, but spaces them out. Can someone explain why this would help? If the sliding window is full, and an acknowledgement for N packets comes in, why would it help to send the next N packets with a delay, rather than send them as fast as possible?
I wonder if this is really "buffer bloat compensation" where some router along the line is accepting packets even though it will never send them. By spacing the packets out, you avoid getting into that router's bloated buffer.
From the linked slides:
Does Packet Pacing really reduce Packet Loss?
* Yes!!! Pacing seems to help a lot
* Experiments show notable loss when rapidly sending (unpaced) packets
* Example: Look at 21st rapidly sent packet
- 8-13% lost when unpaced
- 1% lost with pacing
Well, if you're sending UDP, and your server is connected to a gig link, and the next link between you and the server is 1 Mbit, and the buffer depth of the device is 25 ms...
Sending data over 25k, you might as well set packets 18+ on fire, because they sure aren't making it to the destination unless you delay them accordingly.
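The arithmetic behind the parent's point is easy to check. This is a back-of-the-envelope sketch with assumed numbers (MTU-size packets, the 25 ms buffer depth from the parent): a buffer sized in drain time holds very few full packets at low link rates, so any burst beyond that capacity is simply dropped.

```python
# Back-of-the-envelope: how many full-size packets fit in a buffer whose
# depth is given as drain time at the bottleneck rate. (Assumed numbers.)
def buffer_capacity_packets(link_bps, buffer_s, mtu_bytes=1500):
    """Packets that fit in a buffer of `buffer_s` seconds at `link_bps`."""
    buffer_bytes = link_bps / 8 * buffer_s
    return int(buffer_bytes // mtu_bytes)

slow = buffer_capacity_packets(1_000_000, 0.025)       # 1 Mbit link: 2 packets
fast = buffer_capacity_packets(1_000_000_000, 0.025)   # gig link: 2083 packets
```

So a burst arriving at gig speed into a 1 Mbit bottleneck overflows almost immediately; pacing the sends down to the bottleneck rate is the only way the later packets survive.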
Handled at layer 7 (Score:2)
Are you friggin' nuts? This seems to imply that any filtering at the kernel level will need to unwrap all the application-specific jibber-jabber in this protocol to determine wtf it's supposed to do with it. That would be quite costly in terms of performance. No, I don't trust applications to handle the security of incoming packet data. Especially when some entity wants to bury support for the protocol in their own web browser. This just smells like all kinds of Hell naw.
Re: (Score:3)
There are definitely people and opinions on both sides of the fence on this.
Unfortunately, though performance might improve with access to the hardware, wide and consistent deployment of anything in the kernel/OS (how many WinXP boxes are there still??) takes orders of magnitude more time than putting something into the application.
So... we have a problem: we want to try out a new protocol and learn and iterate (because, trust me, it isn't right the first time out!), however we can't afford to wait long period
Re: (Score:2)
No, I don't trust applications to handle the security of packet data coming in.
In fact you do, unless you're running a pretty top-notch L7 DPI box. And even then...
Besides, these days everything that's an app must be considered good and trustworthy. How else can we expect you to turn over all your data to criminals, much less corporations?
Re:first impression (Score:4, Informative)
Re: (Score:2)
Is Google's focus on making serving up the traffic more efficient? Obviously if it improves the client experience it's a win, but I would imagine they'd be more invested in a way to pump 2,000 QUIC streams out of a box that can only handle 1,000 TCP/HTTP streams today.
Re: (Score:2)
That is a complicated question :)
Hopefully this mostly answers it:
The goal is to not be worse than TCP in any way. Whether or not we'll achieve that is something we won't know until we've achieved correctness and had time to optimize, and then time to analyze where and why it performs badly, then iterate a few times!
Re: (Score:3)
I know a shipping solution, commonly deployed, that already meets the goal of not being worse than TCP in any way. You can probably guess what it is.
Re: (Score:2)
Re: (Score:2)
Hey, he didn't say his goal was "better". Honestly, when someone says "my PowerPoint is better than your shipping solution" I sort of tune out. You need something fundamentally better to be worth talking about for a huge change in deployed tech, and even in the case of e.g. IPv6 vs batshit crazy NATting it's an uphill battle.
Re: (Score:3)
As a newish developer who knows only the minimum I need to about the TCP/IP protocols, I was surprised that this, and a number of common things (apparently games, streaming video [stackoverflow.com]), use UDP at all. I thought it was basically just used for ping.
Out of curiosity, can anyone point out good books for learning more about how to implement applications that use TCP/IP, including UDP, in ways other than the common ssh/http/ftp connections?
ICMP is used for ping, friend. I recommend the Comer [purdue.edu] books. I'd also recommend that you read the IP [ietf.org], UDP [ietf.org] and TCP [ietf.org] specs.
Re:UDP vs TCP (Score:4, Informative)
The general gist is that UDP and TCP each have kind of an ideal milieu. UDP is great for small packets that you want delivered with a minimum of overhead, and where a late, lost, or out-of-order packet won't kill anything.
TCP is great if you are sending large amounts of data at once, between a pair of systems, in situations where it's important not to lose packets or get them out of order, and where you don't care that much if this takes a little extra time (occasionally perhaps a lot of extra time) to accomplish. It's also good in situations where you'd like to know when your partner on the other side goes away for some reason.
Most applications are going to be in-between somewhere, so you have to make a decision. For example, if your packets are small and need to be delivered quickly, but you also need reliability, you might go with TCP just to get that reliability. Alternatively, if you can get away with it, you might instead go with UDP, but use dedicated links between the systems and a handshaking protocol at your application layer to prevent collisions.
Alternatively, you might do what Google is doing, and try to reimplement TCP's reliability in your application layer on top of UDP. The thing about UDP is that you can always reimplement any parts of TCP you need on top of it.
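The simplest version of "reimplementing TCP's reliability on top of UDP" is stop-and-wait: number each message and retransmit until the peer acknowledges that number. The sketch below is illustrative only (it is not QUIC's mechanism, and `send`/`recv_ack` stand in for real socket calls so the retry logic is easy to see in isolation):

```python
# Stop-and-wait reliability sketch (illustrative, not QUIC's design).
def reliable_send(messages, send, recv_ack, max_tries=5):
    """Send each message with a sequence number, retrying until acked."""
    for seq, payload in enumerate(messages):
        for _attempt in range(max_tries):
            send(seq, payload)
            if recv_ack() == seq:     # matching ACK arrived: next message
                break
        else:
            raise TimeoutError(f"no ACK for seq {seq}")

# A fake lossy channel for demonstration: "loses" every other transmission.
class LossyChannel:
    def __init__(self):
        self.sent = 0
        self.last_delivered = None
    def send(self, seq, payload):
        self.sent += 1
        if self.sent % 2:             # odd-numbered transmissions are dropped
            return
        self.last_delivered = seq     # delivered: receiver will ack this seq
    def recv_ack(self):
        return self.last_delivered

ch = LossyChannel()
reliable_send(["a", "b", "c"], ch.send, ch.recv_ack)
```

With every other datagram lost, each of the three messages takes exactly two transmissions here. Real protocols replace this with sliding windows, timers, and selective acknowledgment, but the core idea, sequence numbers plus retransmission, is the same.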
Re: (Score:2)
UDP isn't used for ping either. You're thinking of ICMP -- a lower-level protocol than TCP or UDP. ICMP is *almost* part of the IP layer; technically it uses IP for data transfer, but IP also depends on it for control messaging.
Re:UDP vs TCP (Score:5, Informative)
UDP and TCP have different uses; one isn't better than the other, they just do different things.
UDP is message-based. When I send a UDP message the remote end either gets the whole message or none of it. This can make parsing in applications a lot easier; rather than putting delimiters into a stream and trying to pick apart the data as it comes in in chunks, I can be sure that I'm always working with a complete message, and the messages can be of different types/sizes/etc. as dictated by my application-layer needs. But there's a maximum size for messages, and if I need to send more data than fits in a single message it's up to my application to ensure the pieces get put into the right order when they are received.
UDP is unreliable, in that if a UDP packet gets dropped the message is lost and no notification is made to the sender. Often this is bad, but in certain instances it is valuable. One such instance is data with a short lifetime, such as games or streaming media. If I'm in the middle of a game and a packet gets dropped, it doesn't do me any good to get that packet 2 seconds later -- the game has moved on. This unreliable nature also makes UDP simpler; there's no need to set up a "connection" to send UDP data -- you just slap an IP address on the packet and send it along, and the other end will get it or not and use it or not, and you don't have to care. So if you're writing a server that will handle billions of clients, UDP has a lot less overhead, as it doesn't have to keep track of billions of "connections" (or have billions of ports available).
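The connectionless model described above is visible directly in the socket API. A minimal loopback demo (the port is chosen by the OS, nothing here is application-specific): no handshake, no connection object, just address the datagram and send, and it arrives as one complete message.

```python
# Minimal UDP demo over loopback: connectionless, message-based.
import socket

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))               # OS picks a free port
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# No connect() needed: just slap an address on the datagram and go.
send.sendto(b"whole message or nothing", ("127.0.0.1", port))

data, addr = recv.recvfrom(2048)          # delivered as one whole message
send.close()
recv.close()
```

Over loopback nothing gets lost, but over a real network this same call sequence gives no delivery guarantee at all; that is the trade the parent describes.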
TCP is a streaming protocol. You put data in on one end and it pops out in the same order on the other. This is great if you're sending a single file -- you can be sure the other end will get all the bits in the right order. But it also means if you have something important to say you have to wait in line until all of the preceding data has been transmitted, possibly including things that will be expired by the time they are received. It also means your application-layer protocol has to have some method to separate messages if you send more than one thing over a single connection.
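The "method to separate messages" mentioned above usually means framing. One common generic technique (not anything QUIC-specific) is a length prefix: write each message's size before its bytes, and on the receiving side split the stream back into messages.

```python
# Length-prefix framing for a TCP byte stream (generic technique).
import struct

def frame(msg: bytes) -> bytes:
    """Prefix a message with its 4-byte big-endian length."""
    return struct.pack("!I", len(msg)) + msg

def unframe(stream: bytes):
    """Split a concatenation of framed messages back into a list."""
    msgs, offset = [], 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from("!I", stream, offset)
        msgs.append(stream[offset + 4 : offset + 4 + length])
        offset += 4 + length
    return msgs

wire = frame(b"hello") + frame(b"world!")   # what actually crosses the wire
messages = unframe(wire)                    # [b"hello", b"world!"]
```

With UDP this bookkeeping disappears, since each datagram already is one message; that is the convenience the message-based model buys you.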
TCP has reliable delivery. Often this is a good thing, as the sender can be sure the receiver got all the data (and got it in the right order). But in order to make the protocol reliable the receiver must acknowledge the sender's packets, and the sender and receiver must store information about each other so they can keep track of this ongoing bi-directional connection. So there are round-trip exchanges necessary to set up a TCP connection, and when you connect to a server it must keep state for each and every client it's connected to, and have enough memory to track all of the connections.
QUIC (and several other new-ish protocols) proposes a sort of compromise: a protocol that's both message-based and reliable, and that frequently allows messages of any size. Such protocols provide the delivery assurances of TCP without the waiting-in-line issues the streaming model can produce, and they reduce setup overhead by allowing clients to open a single connection to the server and fetch many different things over it.