
Taking Google's QUIC For a Test Drive

agizis writes "Google presented their new QUIC (Quick UDP Internet Connections) protocol to the IETF yesterday as a future replacement for TCP. It was discussed here when it was originally announced, but now there's real working code. How fast is it really? We wanted to know, so we dug in and benchmarked QUIC at different bandwidths, latencies and reliability levels (test code included, of course), and ran our results by the QUIC team."
  • Fuck you, site. (Score:5, Informative)

    by Anonymous Coward on Friday November 08, 2013 @01:51PM (#45370817)

    Javascript required to view *static* content?

    No.

    Learn how to write a webpage properly.

    • It looks like they know how to do it, and are just actively hostile:

      <noscript>
      <meta http-equiv="refresh" content="0; url=/no-javascript/" />
      </noscript>

      The content is really on the page; they just added a little more to it (went to the extra trouble!) to make their site fail without Javascript.

      I guess browsers need "pay attention to refresh" to become an opt-in option.

  • ...but Google said something.

    Let's fix it!

    • It is broken. Google just made a bad solution. That doesn't mean the problem doesn't exist.

      • by grmoc ( 57943 )

        To be clear, the solution isn't even done being implemented yet-- the project is still working towards correctness and hasn't gotten there yet. After that part is done, the work on optimization begins.

        As it turns out, unsurprisingly, implementing a transport protocol which works reliably over the internet in all conditions isn't trivial!

    • Re: (Score:1, Interesting)

      by Anonymous Coward

      TCP is far from unbroken. TCP for modern uses is hacks upon hacks at best. Everyone knows this.

      The problem is coming to an agreement on the best way to get away from this and optimize things.
      SPDY worked fairly well; god knows what is happening there. It helped fix a lot of lag and could cut down some requests by more than half with very little effort on the part of the developers or delivery mechanisms.
      This... yeah, not sure what is happening there either.

      TCP is far from useful. It is terrib

      • TCP is far from unbroken. TCP for modern uses is hacks upon hacks at best. Everyone knows this.

        I think most of the people screaming "TCP is broken!" are those with lots of bandwidth who have very specific uses. TCP seems to work quite well for almost everything I have thrown at it. I have a low-latency 500kbps down / 64kbps up Internet connection and do mostly SSH and HTTP. I am able to saturate my link quite well. I don't know if the QUIC guys are thinking about a significant portion of the population wh

        • TCP has limitations even on links like yours. In fact many of the limitations of TCP are worse on lossy, low-bandwidth links than on faster, more reliable ones. And the fact that you can saturate your link is not evidence that TCP isn't slowing you down -- given enough ACKs I can fill any pipe, but that doesn't mean I'm transferring data efficiently.

          Also be careful how you characterize "wasteful of bandwidth". For example, a typical TCP session over a typical DSL connection would be unusable without the FEC correct

        • by grmoc ( 57943 )

          Part of the focus is on mobile devices, which often achieve fairly poor throughput, with large jitter and moderate to large RTTs... so, yes, there is attention to low-bandwidth scenarios.
          Surprisingly, QUIC can be more efficient given how it packs stuff together, though this wasn't a primary goal.
          Think about second-order effects:
          Given current numbers, if FEC is implemented, it is likely that it would reduce the number of bytes actually fed to the network, since you end up sending fewer retransmitted packe

      • Actually, non-Google studies suggest that SPDY is only marginally helpful in decreasing page load times unless there's aggressive server push of dependent documents AND favorable parameters and network conditions for the underlying TCP connection. For example, SPDY does very poorly on lossy connections, particularly with the default TCP recovery settings. And even server push has problems -- in addition to requiring configuration it bypasses the client cache mechanism, and on low-bandwidth connections the add

        • by grmoc ( 57943 )

          I'd be curious to see that study (or those studies) -- can you tell me which one(s) they are (so I can go read 'em and fix stuff)?
          I thought I was aware of most of the SPDY/HTTP2 studies, but that is becoming more and more difficult these days!

  • by Anonymous Coward

    I understand the limitations of TCP, and although QUIC may look good on paper, the benchmarks in the linked article show that in every test QUIC failed miserably and was far worse than TCP. So what would the real-world benefits of QUIC be, then? Once Google has a protocol that actually out-performs the tried and true on every front, then bring it to the party; otherwise just stahp already.

    • The private sector always does a better job, you fucking heathen.

    • Yeah. Google is breaking the internet, selling off its users, and generally being a Facebook parody, and YouTube co-founder Jawed Karim had something (however brief) to say about it [theguardian.com]. It's a case study in why selling off your internet startup that happens to fulfill your life dreams and customer needs should be a worst-case scenario, not a bloody business model.

    • It's pretty nice that the code is actually in a state where anyone can download it, build it, and benchmark things they care about, and the stuff presented in the IETF slides (in TFA) about how they can use Chrome to A/B test protocol parameters, to see which actually work out, is really interesting. Presumably that's just for folks who hop into Chrome internals and enable QUIC, but who hasn't done that already? ;)
    • by raymorris ( 2726007 ) on Friday November 08, 2013 @02:33PM (#45371299) Journal

      As I understand it, QUIC is largely about multiplexing - downloading all 35 files needed for a page concurrently. The test was the opposite of what QUIC is designed for.

          TCP handles one file at a time* - first download the HTML, then the logo, then the background, then the first navigation button ....

      QUIC gets all of those page elements at the same time, over a single connection. The problem with TCP and the strength of QUIC is exactly what TFA chose NOT to test. By using a single 10 MB file, their test is the opposite of web browsing and doesn't test the innovations in QUIC.

      * browsers can negotiate multiple TCP connections, which is a slow way to retrieve many small files.
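      As a rough illustration of the contrast being described (illustrative only; the host and paths below are made up, and this shows today's workaround of multiple TCP connections rather than QUIC itself):

          # Sketch: serial fetches on one TCP connection vs. parallel fetches on many.
          # QUIC's pitch is getting the parallel behaviour over a single connection.
          import http.client
          from concurrent.futures import ThreadPoolExecutor

          HOST = "www.example.com"                      # hypothetical server
          PATHS = ["/index.html", "/logo.png", "/bg.png", "/nav.png"]

          def fetch_serially():
              # HTTP/1.1 without pipelining: each response must be read fully
              # before the next request goes out on the same connection.
              conn = http.client.HTTPConnection(HOST)
              for path in PATHS:
                  conn.request("GET", path)
                  conn.getresponse().read()
              conn.close()

          def fetch_in_parallel():
              # What browsers do instead: open several TCP connections
              # (typically ~6 per hostname) and fetch concurrently.
              def one(path):
                  conn = http.client.HTTPConnection(HOST)
                  conn.request("GET", path)
                  body = conn.getresponse().read()
                  conn.close()
                  return body
              with ThreadPoolExecutor(max_workers=6) as pool:
                  list(pool.map(one, PATHS))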

      • Re: (Score:3, Informative)

        by bprodoehl ( 1730254 )
        Yes, you're absolutely right that this left out stream multiplexing, but it did test QUIC's ability to handle packet loss. Seeing as how QUIC is aiming, at a high level, to fix problems in TCP and make the Internet faster, I think the test is fair, and I'm excited to see how things improve. There are other scenarios in the tests on GitHub, including some sample webpages with multiple images and such, if anyone is interested.
        • Thanks for that info, and for making your test scripts available on GitHub.
          I'm curious*: what were the results of the web page tests? Obviously a typical web page with CSS files, Javascript files, images, etc. is much different from a monolithic 10 MB file.

          * curious, but not curious enough to run the tests for myself.

          • by bprodoehl ( 1730254 ) on Friday November 08, 2013 @03:16PM (#45371741)
            In some of the web page scenario tests, HTTP over QUIC was about 4x faster than HTTP over TCP, and in others, QUIC was about 3x worse. I'll probably look into that next. The QUIC demo client appears to take about 200ms to get warmed up for a transfer, so testing with small files over fast connections isn't fair. After that 200ms, it seemed to perform as one would expect, so tests that take longer than a couple seconds are a pretty fair judge of current performance.
        • by grmoc ( 57943 )

          The benchmark looked well constructed, and as such is a fair test for what it is testing: unfinished-userspace-QUIC vs kernel-TCP.

          It will be awesome to follow along with future runs of the benchmark (and further analysis) as the QUIC code improves.
          It is awesome to see people playing with it, and working to keep everyone honest!

      • Re: (Score:3, Informative)

        As I understand it, QUIC is largely about multiplexing - downloading all 35 files needed for a page concurrently. The test was the opposite of what QUIC is designed for.

        TCP handles one file at a time* - first download the HTML, then the logo, then the background, then the first navigation button ....

        QUIC gets all of those page elements at the same time, over a single connection. The problem with TCP and the strength of QUIC is exactly what TFA chose NOT to test. By using a single 10 MB file, their test is the opposite of web browsing and doesn't test the innovations in QUIC.

        * browsers can negotiate multiple TCP connections, which is a slow way to retrieve many small files.

        What the hell are you talking about? You're conflating HTTP with TCP. TCP has no such limitation. TCP doesn't deal in files at all.

        • > You're conflating HTTP with TCP.

          I'm discussing how HTTP over TCP works, in contrast to how it works over QUIC.
          TCP provides one stream, which when used with HTTP means one file.

          QUIC provides multiple concurrent streams specifically so that HTTP can retrieve multiple files concurrently.

          • Actually, not quite.

            IP provides a bunch of packets: there is no port number.

            TCP and UDP provide multiplexing over IP by introducing the port number concept.

            • by lgw ( 121541 )

              Sadly ports are dead, and we're watching them get re-invented. We've gone from the web being a service built on the internet, to the internet being a service built on the web (well, on port 80/443). Security idiots who mistook port-based firewalling for something useful have killed the port concept, and now that we're converging on all-443-all-the-time, we have to re-invent several wheels.

      • Re: (Score:2, Informative)

        by Anonymous Coward

        I haven't RTFA and I don't know much about QUIC, but if it's what you suggest...

        As I understand it, QUIC is largely about multiplexing - downloading all 35 files needed for a page concurrently. The test was the opposite of what QUIC is designed for.

        TCP handles one file at a time* - first download the HTML, then the logo, then the background, then the first navigation button ....

        ...then it sounds like a really horrible idea. If I click on a link, will I have to wait for 20MB of images to finish downloading before I can read the content of a webpage, only to find out it wasn't what I was looking for anyway?

        • by grmoc ( 57943 )

          AC, don't worry.
          TCP is simply a reliable, in-order stream transport.
          HTTP on TCP is what was described, and, yes, that's not the best idea on today's web (though keep in mind that most browsers open up 6 connections per hostname), but that is also why HTTP2 is being standardized today.

        • Yeah, I assume that's obvious. Send the html first, then css, js, and images concurrently.

          Also, I hope you don't browse too many pages with 20MB of images. :)

      • by sjames ( 1099 )

        One does wonder, though, how it would compare vs. an extension to keepalive that truly multiplexes, OR one that allows queueing requests in case the web server might be able to do something useful if it knows what it will be sending next.

        There may be cases where FEC is useful at the protocol layer, but it seems like the link layer is a more likely place for it. The link knows what sort of link it is and how likely it might be to lose data. That would also mean that if there are multiple lo

    • by grmoc ( 57943 )

      Right now QUIC is unfinished, so I hesitate to draw conclusions about it. :)

      What I mean by unfinished is that the code does not yet implement the design; the big question right now is how the results will look after correctness has been achieved and a few rounds of correctness/optimization iterations have finished.

  • by Richy_T ( 111409 ) on Friday November 08, 2013 @02:08PM (#45371033) Homepage

    Paragraph 1 of RFC:

    User SHALL register for Google Plus

  • And free ddos (Score:5, Informative)

    by Ubi_NL ( 313657 ) <joris.benschopNO@SPAMgmail.com> on Friday November 08, 2013 @02:10PM (#45371055) Journal

    The current problem with UDP is that many border routers do not check whether outgoing UDP packets are from within their network. This is the basis for DNS-based DDoS attacks. They are very difficult to mitigate at the server level without creating openings for Joe job attacks instead... Standardizing on UDP for other protocols will exacerbate this problem.
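    To put numbers on the amplification worry (illustrative figures, not from the post):

        # A spoofed query that elicits a much larger response aims the difference
        # at the victim; the sizes below are assumed, ballpark values.
        query_bytes = 64          # small spoofed DNS query
        response_bytes = 3000     # large response, e.g. ANY/DNSSEC-style
        print(f"amplification ~{response_bytes / query_bytes:.0f}x")   # ~47x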

    • by plover ( 150551 )

      How would this be worse than a SYN flood attack today?

    • The current problem with UDP is that many border routers do not check whether outgoing UDP packets are from within their network. This is the basis for DNS-based DDoS attacks. They are very difficult to mitigate at the server level without creating openings for Joe job attacks instead... Standardizing on UDP for other protocols will exacerbate this problem.

      This is incorrect. Ingress filtering is a global IP layer problem.

      TCP handles the problem with SYN packets via SYN cookie extensions, to prevent local resource exhaustion by one-sided SYNs from an attacker.

      A well-designed UDP protocol, using the same proven mechanisms as TCP and other better-designed UDP protocols (e.g. DTLS), would be no more vulnerable to this form of attack than TCP.

      DNS can also be fixed the same way using cookies, but it seems people are content to make the problem worse by implementing DNSSEC and
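      For the curious, here is a minimal sketch of the stateless "cookie" idea being referred to (in the spirit of SYN cookies and DTLS HelloVerifyRequest); the token layout and helper names are hypothetical, not QUIC's actual wire format:

          # Stateless source-address validation: the server hands out an HMAC-based
          # token and only commits resources once the client echoes it back,
          # proving it can receive traffic at the claimed source address.
          import hmac, hashlib, os, time

          SECRET = os.urandom(32)            # server-side secret, rotated periodically

          def make_token(client_ip: str) -> bytes:
              ts = int(time.time()).to_bytes(8, "big")
              mac = hmac.new(SECRET, ts + client_ip.encode(), hashlib.sha256).digest()
              return ts + mac

          def check_token(client_ip: str, token: bytes, max_age: int = 60) -> bool:
              ts, mac = token[:8], token[8:]
              expected = hmac.new(SECRET, ts + client_ip.encode(), hashlib.sha256).digest()
              fresh = time.time() - int.from_bytes(ts, "big") <= max_age
              return fresh and hmac.compare_digest(mac, expected)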

      • by skids ( 119237 )

        It's in-effect correct because there are lots of UDP protocols designed before the general concept of "do not amplify unauthenticated solicitations with a larger reply" finally sunk in. (Or at least, sunk in among more serious protocol designers/implementers.)

        • It's in-effect correct because there are lots of UDP protocols designed before the general concept of "do not amplify unauthenticated solicitations with a larger reply" finally sunk in. (Or at least, sunk in among more serious protocol designers/implementers.)

          The parent was making a point against QUIC because it uses UDP. That is a false statement: QUIC has appropriate mechanisms to prevent unsolicited mischief.

          What DNS, SNMP, NTP and god knows what else did way back then has nothing at all to do with the topic at hand.

  • by TFoo ( 678732 ) on Friday November 08, 2013 @02:15PM (#45371113)
    The Internet has needed a standardized reliable UDP protocol for many years. There have been many attempts, but hopefully this one will stick.
    • Re:Thank you (Score:4, Informative)

      by whoever57 ( 658626 ) on Friday November 08, 2013 @02:24PM (#45371205) Journal

      The Internet has needed a standardized reliable UDP protocol for many years. There have been many attempts, but hopefully this one will stick

      It has existed for decades. It's called TCP.

      Did you RTFA? This new protocol appears to have little to no advantages over TCP and significant disadvantages under some circumstances.

      • Definitely a work in progress, and the scenario tested is outside of Google's main focus. I suspect that things will get a lot better in the weeks and months to come.
        • by grmoc ( 57943 )

          bprodoehl is absolutely correct -- the code is unfinished, and while the scenario is certainly one we worry about, it isn't the focus of attention at the moment. The focus at the moment is getting the protocol working reliably and in all corner cases... Some of the bugs here can cause interesting performance degradations, even when the data gets transferred successfully.

          I hope to see the benchmarking continue!

          • by lgw ( 121541 )

            There's nothing fundamental to the TCP transport to make "several files in parallel" faster than "several files serially" between two endpoints. It's frankly bizarre that you're addressing that problem by discarding TCP.

            And anyone who does invent a better protocol and doesn't work "TRUCK" into the acronym gets no respect from me!

            • by grmoc ( 57943 )

              Wait, what? :)
              Where was that claimed?!

              In any case:
              TCP implementations almost without fail do per-flow congestion control, instead of per-session (per IP-to-IP tuple) congestion control. This implies that, if loss on the path is mostly independent (and that is what the data seems to show), per-flow congestion control backs off by a constant factor (N, where N == the number of parallel connections) more slowly than a single stream between the same endpoints would.

              So, indeed, sending several files in
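              Back-of-the-envelope for that point (toy numbers, not measurements): one loss event halves a single connection's entire window, but with N parallel connections it halves only the one affected flow, so the aggregate barely slows down.

                  # Fraction of the aggregate congestion window remaining after one loss event.
                  def aggregate_after_loss(n_flows, cwnd_each=10):
                      total = n_flows * cwnd_each
                      total_after = (n_flows - 1) * cwnd_each + cwnd_each / 2
                      return total_after / total

                  print(aggregate_after_loss(1))   # 0.50  -> classic multiplicative decrease
                  print(aggregate_after_loss(6))   # ~0.92 -> six parallel connections barely back off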

      • TCP is a stream protocol. UDP is a message protocol. They have different limitations and features and aren't always suitable for the same purposes. How do you expect to participate in a discussion about the limitations of TCP if you can't be bothered to learn even the basics of the existing protocols?

    • Re:Thank you (Score:5, Insightful)

      by fatphil ( 181876 ) on Friday November 08, 2013 @02:47PM (#45371441) Homepage
      > reliable UDP protocol

      You want a reliable *unreliable* datagram protocol protocol?
      Sounds like something guaranteed to fail.

      Everyone tries to reinvent TCP. Almost always they make something significantly worse. This is no exception.
      • Maybe, but this still looks really promising. They've made a few really smart decisions, in my opinion.

        1) Avoid the usually-doomed-from-the-start approach of starting at the standards committee level. Frame up the project, and let some really smart engineers go nuts. Take what works to a standards committee, and save the time that you would've spent arguing over stuff that might or might not have worked.
        2) Make it work with the existing Internet infrastructure. Making everyone adopt a new TCP stack
        • by skids ( 119237 )

          Making everyone adopt a new TCP stack is probably not going to happen.

          Neither is it likely to happen that a true multipath helper will be built into core routers (e.g. something that uses TTL counters to determine when to take the second/third/etc preferred route in their route table and leaves the job of computing preferable paths to the end systems.) Which means what really needs to happen... won't. We've reached a technological glaciation point where the existing install base is dictating the direction

      • Everyone tries to reinvent TCP. Almost always they make something significantly worse. This is no exception.

        No, you misunderstand. The GP wants a reliable protocol like TCP but with datagram boundaries preserved like UDP. That's not a particularly unreasonable thing to want, since the packet boundaries do exist and it's a pain to put them into a stream.

        In fact some systems provide exactly such a protocol -- Bluetooth's L2CAP, for example. It's quite appropriate for the ATT protocol.

      • > reliable UDP protocol You want a reliable *unreliable* datagram protocol protocol? Sounds like something guaranteed to fail. Everyone tries to reinvent TCP. Almost always they make something significantly worse. This is no exception.

        I once worked at a company that made parking meters - ones that accepted credit cards. They sent their data over HTTPS, and had random issues with timeouts.
        It turns out they would format their data in (very descriptive) XML, and discovered that an excessively large file combined with an SSL handshake over a crappy 2G connection took too long to transfer (it didn't help that the programmers 'forgot' they had hardcoded a timeout, so if the comms were just slow, it would throw a generic error and they blamed Apach

      • by TFoo ( 678732 )

        No, you really don't understand what this is useful for. They aren't "reinventing TCP" because they think they can do it better: they have a different problem domain and can do better than TCP for their specific problem.

        TCP insists on strong ordering of data: it provides reliability AND ordering. Sometimes you don't want both of these, and giving up one or the other can get you big benefits.

        For example, there are many classes of problems where you want reliability but are willing to lessen the ordering r

  • Morons (Score:2, Insightful)

    by Anonymous Coward

    UDP is for messages (e.g. DNS) and real-time data. TCP is far superior for bulk data because of the backoff mechanism, something they want to work around with that UDP crap.

    QoS works perfectly with TCP because of TCP backoff.

    So much wrong with this idea it makes my head hurt. It is OK to run game servers with UDP. It is OK for RT voice or even video to use UDP. It is not OK to abuse the network to run bulk, time-insensitive transfers over UDP, competing with RT traffic.

    What is the problem? Too many connectio

    • Oh, you should definitely read up on the docs. Yes, they care about backoff mechanisms and generally not breaking the Internet, a LOT. By moving to UDP, they can work on schemes for packet pacing and backoff that do a better job of not overwhelming routers. And they can get away from all the round-trips you need to set up an SSL session over TCP, without sacrificing security.
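      Rough round-trip accounting for that last point (textbook numbers for a classic TLS 1.2 full handshake, not measurements from TFA):

          # Round trips spent before the first byte of application data flows.
          rtts_tcp_handshake = 1   # SYN / SYN-ACK
          rtts_tls_handshake = 2   # ClientHello ... Finished (no False Start or resumption)
          print("HTTPS over TCP:", rtts_tcp_handshake + rtts_tls_handshake, "RTTs")   # 3
          print("QUIC repeat connection (cached server config): 0 RTTs (design goal)")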
    • The backoff mechanism is one of the problems they're trying to fix. Internet protocols need some way to control bandwidth usage, but there are a lot of limitations with the existing options in TCP. And if you RTFA you'd see they intend to provide alternative mechanisms to regulate bandwidth, addressing both the continuing need to avoid flooding and the limitations of TCP's backoff mechanisms.

      Plus stream protocols are inefficient when transferring multiple items (which is the typical use case for HTTP) an

      • The backoff mechanism is one of the problems they're trying to fix. Internet protocols need some way to control bandwidth usage, but there are a lot of limitations with the existing options in TCP.

        Like? Please be specific. This thread is getting old quickly, with people saying "TCP sucks" and going on and on about how it just sucks without ever citing any technical justification for why that is so.

        There are tons of congestion algorithms
        http://en.wikipedia.org/wiki/TCP_congestion-avoidance_algorithm [wikipedia.org]

        and extensions
        http://en.wikipedia.org/wiki/TCP_tuning [wikipedia.org]

        for TCP.

        • by grmoc ( 57943 )

          TCP doesn't suck.

          TCP is, however, a bottleneck, and not optimal for all uses.

          Part of the issue there is the API-- TCP has all kinds of cool, well-thought-out machinery which simply isn't exposed to the application in a useful way.

          As an example, when SPDY or HTTP2 is layered on TCP, when there is a single packet lost near the beginning of the TCP connection, it will block delivery of all other successfully received packets, even when that lost packet would affect only one resource and would not affect the fr
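          A toy model of the head-of-line blocking being described (not real TCP code): later segments are received and buffered, but nothing can be handed to the application until the gap is filled.

              # Segment 1 is lost; segments 2..9 arrive but sit undelivered,
              # even though they may belong to entirely different SPDY/HTTP2 streams.
              received = {seq: f"data-{seq}" for seq in range(2, 10)}   # 1 was lost
              delivered, next_needed = [], 1
              while next_needed in received:
                  delivered.append(received.pop(next_needed))
                  next_needed += 1
              print(delivered)          # [] -- nothing delivered yet
              received[1] = "data-1"    # the retransmission finally arrives
              while next_needed in received:
                  delivered.append(received.pop(next_needed))
                  next_needed += 1
              print(len(delivered))     # 9 -- everything flushes at once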

          • TCP is, however, a bottleneck, and not optimal for all uses.

            I think this is obvious to everyone. Hammers are not optimal for all uses. TCP provides an ordered reliable byte stream. Not all applications require or benefit from these properties. Using the right tool for the right job tends to provide the best results.

            TCP has all kinds of cool, well-thought-out machinery which simply isn't exposed to the application in a useful way.

            .. more general statements where specifics would be most helpful ..

            As an example, when SPDY or HTTP2 is layered on TCP, when there is a single packet lost near the beginning of the TCP connection, it will block delivery of all other successfully received packets

            What are you referring to specifically? Is this before or after established state? Are you talking about some kind of SYN cross where data sent in the same round as SYN where SYN is

            • by grmoc ( 57943 )

              TCP implementations are very mature. As implementors, we've fixed most/many of the bugs, both correctness- and performance-related. TCP offers reliable delivery, and, excepting some particular cases of tail-loss/tail-drop, knows the difference between packets that are received but not delivered and packets that are neither received nor delivered.
              TCP has congestion control in a variety of different flavors.
              TCP has various cool extensions, e.g. MPTCP, TCP-secure (not an RFC, but a working implementation), TFO, et

              • by grmoc ( 57943 )

                And sorry if I sound frustrated about it... I am *really* frustrated by the current state of the world w.r.t. parallel connections. It makes my life such a pain in the butt!

        • by phayes ( 202222 )

          Please go read TFA. Congestion control is part of the proposed protocol, & not every transport needs the dual three-way handshake and round-trip latency that TCP imposes - sequence numbers & congestion control look to be more than enough for some.

  • Big mistake (Score:3, Funny)

    by gmuslera ( 3436 ) on Friday November 08, 2013 @02:18PM (#45371149) Homepage Journal
    The problem with that UDP proposal is that the IETF may not get it.
  • by grmoc ( 57943 ) on Friday November 08, 2013 @02:19PM (#45371167)

    As someone working on the project:

    The benchmarking here is premature.
    The code does not yet implement the design; it is just barely working at all.

    Again, they're not (yet) testing QUIC-- they're testing the very first partial implementation of QUIC!

    That being said, it is great to see that others are interested and playing with it.

  • Hmmm, several posts, yet no mention of UDT [sourceforge.net] so far. It would be nice if the benchmark included it.

  • by epine ( 68316 ) on Friday November 08, 2013 @02:56PM (#45371545)

    Those fuckers at www.connectify.me redirected my connection attempt to
    http://www.connectify.me/no-javascript/ [connectify.me] so that even after I authorized Javascript for their site I was unable to navigate to my intended destination (whatever shit they pulled did not even leave a history item for the originally requested URL).

    This sucks because I middle-click many URLs into tabs I might not visit until ten minutes later. If I had a bunch of these tabs open I wouldn't even have been able to recollect where I had originally been. In this case, I knew to come back here.

    Those fuckers at www.connectify.me need to procure themselves an Internet clue stick PDQ.

  • May I be so bold as to suggest that graphs citing only an x performance improvement for protocol y are insufficient, harmful, and useless measures of usable efficiency. We know how to make faster protocols.. the challenge is making them faster while preserving generally meaningful congestion avoidance. This part is what makes the problem space non-trivial.

    Look at TFA and the Connectify links: it is all performance talk, with total silence on addressing or simulating the congestion characteristics of the protocol.

    Having sat in on a few tcpm meetings, it is always the same with Google... they show data supporting the claim that by doing x there will be y improvement, but never as much enthusiasm for considering the secondary repercussions of the proposed increase in network aggression.

    My personal view is that RTT reductions can be achieved through extension mechanisms to existing protocols without wholesale replacement. TCP Fast Open and TLS extensions enabling 0-RTT requests through the layered stack... experimental things for which "code" exists today can provide the same round-trip benefits as QUIC.

    What Google is doing here is taking ownership of the network stack and congestion algorithms away from the current chorus of stakeholders and granting themselves the power to do whatever they please. No need to have a difficult technical discussion or get anyone's opinion or signoff before dropping in a new profit-enhancing congestion algorithm, which could very well be tuned to prefer Google traffic globally at the expense of everyone else's... they control the clients and the servers... done deal.

    There are two fundamental improvements I would like to see regarding TCP.

    1. Session establishment in the face of in-band adversaries adding noise to the channel. Currently TCP connections can be trivially reset by an in-band attacker. I think resilience to this (which necessarily binds security to the network channel) can be a marginally useful property in some environments, yet is mostly worthless in the real world as in-band adversaries have plenty of other tools to make life difficult.

    2. Efficient Multi-stream/message passing. Something with the capabilities of ZeroMQ as an IP layer protocol would be incredibly awesome.

    • by MobyDisk ( 75490 )

      I tend to agree. I am glad that someone is trying to create a better TCP. If they fail, we validate that TCP is a good idea. If they succeed, then we can have a better protocol.

      If the QUIC exercise is successful, then the IETF should consider extending TCP to support the relevant features. For example, their point about multiple streams is a good one. Perhaps TCP should define an option to open N simultaneous connections with a single 3-way handshake. Existing implementations would ignore the new optio

      • If the QUIC exercise is successful, then the IETF should consider extending TCP to support the relevant features. For example, their point about multiple streams is a good one. Perhaps TCP should define an option to open N simultaneous connections with a single 3-way handshake. Existing implementations would ignore the new option bytes in the header so nothing would break.

        While TCP is ancient, there has been continuous work to improve it over the years. I think most people throwing stones here have not taken the time to look around and understand the current landscape. Indeed, many ideas in QUIC are good ones, yet not a single one of them is something new or something that has not been implemented or discussed in various WGs.

        Regarding multiple streams, what effectively is the difference between this and Fast Open? I send a message, the message arrives and is processed immed

    • by Agripa ( 139780 )

      1. Session establishment in the face of in-band adversaries adding noise to the channel. Currently TCP connections can be trivially reset by an in-band attacker. I think resilience to this (which necessarily binds security to the network channel) can be a marginally useful property in some environments, yet is mostly worthless in the real world as in-band adversaries have plenty of other tools to make life difficult.

      Doesn't IPSEC protect against this?

      2. Efficient Multi-stream/message passing. Something with the ca

  • I'm no networking expert, but does anyone know how this compares to SCTP [wikipedia.org]?
    And have they taken various security considerations into account, e.g. SYN floods?
    • by grmoc ( 57943 )

      It is similar in some ways, and dissimilar in other ways.
      One outcome of the QUIC work that would be considered a good one is that the lessons learned get incorporated into other protocols like TCP or SCTP.

      QUIC absolutely takes security into account, including SYN floods, amplification attacks, etc.

  • Pacing, Bufferbloat (Score:5, Interesting)

    by MobyDisk ( 75490 ) on Friday November 08, 2013 @03:32PM (#45371933) Homepage

    The slides refer to a feature called "pacing" where it doesn't send packets as fast as it can, but spaces them out. Can someone explain why this would help? If the sliding window is full, and an acknowledgement for N packets comes in, why would it help to send the next N packets with a delay, rather than send them as fast as possible?

    I wonder if this is really "buffer bloat compensation" where some router along the line is accepting packets even though it will never send them. By spacing the packets out, you avoid getting into that router's bloated buffer.

    From the linked slides:

    Does Packet Pacing really reduce Packet Loss?
    * Yes!!! Pacing seems to help a lot
    * Experiments show notable loss when rapidly sending (unpaced) packets
    * Example: Look at 21st rapidly sent packet
            - 8-13% lost when unpaced
            - 1% lost with pacing

    • The slides refer to a feature called "pacing" where it doesn't send packets as fast as it can, but spaces them out. Can someone explain why this would help? If the sliding window is full, and an acknowledgement for N packets comes in, why would it help to send the next N packets with a delay, rather than send them as fast as possible?

      I wonder if this is really "buffer bloat compensation" where some router along the line is accepting packets even though it will never send them. By spacing the packets out, you avoid getting into that router's bloated buffer.

      Yes, essentially it is a hedge against the future probability of packet loss. I don't know about QUIC, but with TCP, statistically, more packet loss tends to present toward the end of a window than at the start, and it is therefore normally more expensive to correct.

        • The problem with packet loss on the last packets is that TCP/IP only resends packets after a timeout due to lack of an ACK, or after earlier packets are acknowledged multiple times, signifying a gap. So if you send 8 packets of data for a typical web page, it's way worse if you lose the 8th packet than the 3rd packet. (Linux defaults to an initial window size of 10 packets now; Windows defaults to 8192 bytes, Linux to 14600 bytes. So in Linux that would arrive in one round trip time for the data, one round tr
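          A rough worked comparison of the two recovery paths described (typical ballpark defaults, not measurements; actual values vary by stack and RTT):

              rtt = 0.05        # 50 ms round trip (assumed)
              rto_min = 0.2     # common minimum retransmission timeout, ~200 ms (assumed)
              # Middle loss (e.g. packet 3 of 8): later packets generate duplicate ACKs,
              # so fast retransmit recovers in roughly one extra round trip.
              middle_loss_penalty = rtt
              # Tail loss (packet 8 of 8): nothing follows it, so no dup ACKs --
              # the sender just waits out the retransmission timer.
              tail_loss_penalty = max(rto_min, 2 * rtt)
              print(middle_loss_penalty, tail_loss_penalty)   # 0.05 vs 0.2 seconds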
    • by grmoc ( 57943 ) on Friday November 08, 2013 @04:16PM (#45372435)

      What seems likely is that when you generate a large burst of back-to-back packets, you are much more likely to overflow a buffer, causing packet loss.
      Pacing makes it less likely that you overflow the router buffers, and so reduces the chance of packet loss.

      TCP does actually do pacing, though it is what is called "ack-clocked". For every ACK one receives, one can send more packets out. Since the ACKs traverse the network and get spread out in time as they go through bottlenecks, you end up with pacing.... but ONLY when bytes are continually flowing. TCP doesn't end up doing well in terms of pacing out packets when the bytes start flowing and stop and restart, as often happens with web browsing.
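      A minimal sketch of what pacing means in practice (a toy rate limiter, not QUIC's actual scheduler): spread the current window over the estimated RTT instead of blasting it out back-to-back.

          import time

          def paced_send(packets, cwnd, rtt_estimate, send_fn):
              # Space packets roughly rtt/cwnd apart instead of back-to-back.
              interval = rtt_estimate / max(cwnd, 1)
              for pkt in packets:
                  send_fn(pkt)
                  time.sleep(interval)   # real stacks use timers, not sleep()

          # usage sketch (names hypothetical):
          # paced_send(window_of_packets, cwnd=10, rtt_estimate=0.05, send_fn=sock.send)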

    • by Above ( 100351 )

      Reducing packet loss is not always a good thing. Packet loss is the mechanism an IP network uses to indicate a lack of capacity somewhere in the system. Bufferbloat is one attempt to eliminate packet loss, with very bad consequences: never throw packets away, just queue them for a very long time. Pacing can be the opposite side of that coin: send packets so slowly that loss never occurs, but that also means the transfer happens very slowly.

      When many TCP connections are multiplexed onto a single link the maxim

    • The slides refer to a feature called "pacing" where it doesn't send packets as fast as it can, but spaces them out. Can someone explain why this would help? If the sliding window is full, and an acknowledgement for N packets comes in, why would it help to send the next N packets with a delay, rather than send them as fast as possible?

      I wonder if this is really "buffer bloat compensation" where some router along the line is accepting packets even though it will never send them. By spacing the packets out, you avoid getting into that router's bloated buffer.

      From the linked slides:

      Does Packet Pacing really reduce Packet Loss?
      * Yes!!! Pacing seems to help a lot
      * Experiments show notable loss when rapidly sending (unpaced) packets
      * Example: Look at 21st rapidly sent packet
              - 8-13% lost when unpaced
              - 1% lost with pacing

      Well, if you're sending UDP, and your server is connected to a gig link, and the next link between you and the server is 1Mbps, and the buffer size of the device is 25ms...

      Sending more than 25k of data, you might as well set packets 18+ on fire, because they sure aren't making it to the destination unless you delay them accordingly.
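      Back-of-the-envelope for that scenario (interpreting the "25k" as roughly a 25 KB bottleneck buffer and assuming 1500-byte packets; the numbers above are approximate):

          # An unpaced burst arrives at ~1 Gbps into a ~1 Mbps bottleneck: the buffer
          # fills almost instantly, so only what fits in it survives the burst.
          buffer_bytes = 25_000
          packet_bytes = 1_500
          print(buffer_bytes // packet_bytes)   # ~16 packets survive; the rest are dropped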

  • Are you friggin nuts? This seems to imply that any filtering at the kernel level will need to unwrap all the application-specific jibber-jabber in this protocol to determine wtf it's supposed to do with it. That would be quite costly in terms of performance. No, I don't trust applications to handle the security of packet data coming in. Especially when some entity wants to bury support for the protocol in their own web browser. This just smells like all kinds of Hell naw.

    • by grmoc ( 57943 )

      There are definitely people and opinions on both sides of the fence on this.

      Unfortunately, though performance might improve with access to the hardware, wide and consistent deployment of anything in the kernel/OS (how many WinXP boxes are there still??) takes orders of magnitude more time than putting something into the application.

      So.. we have a problem: we want to try out a new protocol and learn and iterate (because, trust me, it isn't right the first time out!), however we can't afford to wait long period

    • by skids ( 119237 )

      No, I don't trust applications to handle the security of packet data coming in.

      In fact you do, unless you're running a pretty top-notch L7 DPI box. And even then...

      Besides, these days everything that's an app must be considered good and trustworthy. How else can we expect you to turn over all your data to criminals, much less corporations?
