Quic Gives the Internet's Data Transmission Foundation a Needed Speedup (cnet.com)

One of the internet's foundations just got an upgrade. From a report: Quic, a protocol for transmitting data between computers, improves speed and security on the internet and can replace Transmission Control Protocol, or TCP, a standard that dates back to Ye Olde Internet of 1974. Last week, the Internet Engineering Task Force, which sets many standards for the global network, published Quic as a standard. Web browsers and online services have been testing the technology for years, but the IETF's imprimatur is a sign the standard is mature enough to embrace fully.

It's extremely hard to improve the internet at the fundamental level of data transmission. Countless devices, programs and services are built to use the earlier infrastructure, which has lasted decades. Quic has been in public development for nearly eight years, since Google first announced it in 2013 as an experimental addition to its Chrome browser. But upgrades to the internet's foundations are crucial to keep the world-spanning communication and commerce backbone humming. That's why engineers spend so much effort on titanic transitions like Quic, HTTPS for secure website communications, post-quantum cryptography to protect data from future quantum computers, and IPv6 for accommodating vastly more devices on the internet.

  • by oldgraybeard ( 2939809 ) on Monday May 31, 2021 @11:15AM (#61439804)
    "Google first announced Quic in 2013 as an experimental addition to its Chrome browser"
    • Once I saw that, I had to wonder --- how much data is being slurped up by this protocol? Or how does it make Google's data vacuuming easier? There's no other reason I can see for Google to promote this.
      • by Improv ( 2467 )

        If you phrase it as "slurped up", I'm not sure you understand how network protocols work.

        • I understand how protocols work. I used "slurped up" in the context of making it easier for Google, either directly or as a by-product of the protocol.
      • by arglebargle_xiv ( 2212710 ) on Monday May 31, 2021 @12:08PM (#61439926)

        There's no data being slurped up, but it's yet another protocol hacked up by the Google children to make Google's life easier. Specifically, it makes it easier and more efficient for Google to push content out to clients (as a side-effect, it also helps other vendors like Fecebook push out content). For everyone else, it does nothing.

        It's also a sign of how much Google now dominates any IETF WG they need to. They can pretty much get anything through the IETF with no hope of anyone else putting the brakes on.

    • Yep, Google is the showstopper. The name is pure poison. They are salting the earth.

  • and does it address any of the really important things like DDoS attacks, or possibly even cause the opposite, like this article says? https://blog.nexusguard.com/co... [nexusguard.com]
    • by Entrope ( 68843 ) on Monday May 31, 2021 @11:33AM (#61439838) Homepage

      What do you define as a "real 'standard'" as opposed to one on paper? Usually people write down standards on paper to document them. In the case of QUIC, those documents are RFC 9000, and to a lesser extent RFC 8999.

      The article you linked is essentially a sales pitch: "Give us money because we know how to avoid turning QUIC into a DDoS amplifier". Good server implementations should include countermeasures. For example, section 21.1.1.1 of RFC 9000 discusses how address validation (an entire separate section of the RFC) protects against that attack in general.

      Obviously, if a DDoS attack is simply to flood the network connection, it doesn't much matter what transport protocol gets used. But if the attack is based on exhausting on-server resources, cutting the number of round trips needed to establish a connection helps against DDoS -- the server can keep enormously less state for not-yet-verified clients.
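
      A minimal sketch of the stateless idea behind that protection, with a hypothetical token format (RFC 9000 leaves the token contents implementation-defined): the server hands out an HMAC bound to the client's claimed source address and keeps nothing in memory until the token comes back.

      ```python
      # Sketch of address validation in the spirit of RFC 9000's Retry mechanism.
      # The token layout is invented for illustration; the point is that the
      # server keeps no per-client state for not-yet-verified clients.
      import hashlib, hmac, os, time

      SECRET = os.urandom(32)  # server-side key, rotated periodically

      def make_token(client_addr: str) -> bytes:
          """Issue an unforgeable token bound to the client's address and time."""
          ts = int(time.time()).to_bytes(8, "big")
          mac = hmac.new(SECRET, ts + client_addr.encode(), hashlib.sha256).digest()
          return ts + mac

      def validate_token(client_addr: str, token: bytes, max_age: float = 30.0) -> bool:
          """Verify with zero stored state: recompute the MAC and compare."""
          ts, mac = token[:8], token[8:]
          if time.time() - int.from_bytes(ts, "big") > max_age:
              return False
          expected = hmac.new(SECRET, ts + client_addr.encode(), hashlib.sha256).digest()
          return hmac.compare_digest(mac, expected)

      token = make_token("203.0.113.7")
      assert validate_token("203.0.113.7", token)       # genuine client
      assert not validate_token("198.51.100.9", token)  # spoofed source address
      ```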

      • Obviously, if a DDoS attack is simply to flood the network connection, it doesn't much matter what transport protocol gets used. But if the attack is based on exhausting on-server resources, cutting the number of round trips needed to establish a connection helps against DDoS -- the server can keep enormously less state for not-yet-verified clients.

        There is more truth in the opposite. Both TCP and QUIC have stateless mechanisms for rejecting spoofed connection requests.

        The aggressive no-round-trip strategies are worthless in practice. They're too unreliable for most uses, and susceptibility to BS like replay attacks only increases DoS risk, because they cut through the transport, security and application stacks.

        • What are you talking about? TCP has no such mechanism. When you're being SYN flooded, it takes traffic analysis to conclude that, indeed, you're being attacked, and another service outside of the regular network stack to intervene and FIN those connections.

          • What are you talking about? TCP has no such mechanism.

            SYN cookies.

            When you're being SYN flooded, it takes traffic analysis to conclude that, indeed, you're being attacked, and another service outside of the regular network stack to intervene and FIN those connections.

            It has been automatically handled in the kernel of every major OS for decades.
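
            For anyone unfamiliar, here is the SYN-cookie trick as a toy (the bit layout below is invented for clarity; Linux packs the fields differently): the server folds a keyed hash of the connection 4-tuple into the initial sequence number, so a returning ACK proves the claimed address without keeping any half-open state.

            ```python
            # Toy SYN cookie: 24 bits of keyed hash + 5 bits of coarse timestamp
            # + 3 bits of MSS index packed into a 32-bit initial sequence number.
            import hashlib, struct, time

            KEY = b"per-boot-secret"

            def _cookie_for(src_ip: str, src_port: int, dst_port: int,
                            mss_idx: int, t: int) -> int:
                h = hashlib.sha256(KEY + src_ip.encode() +
                                   struct.pack("!HHI", src_port, dst_port, t)).digest()
                return (int.from_bytes(h[:3], "big") << 8) | ((t & 0x1F) << 3) | mss_idx

            def syn_cookie(src_ip: str, src_port: int, dst_port: int, mss_idx: int) -> int:
                """Encode everything into the ISN; the server stores nothing."""
                return _cookie_for(src_ip, src_port, dst_port, mss_idx,
                                   int(time.time()) >> 6)  # 64-second time slots

            def check_cookie(cookie: int, src_ip: str, src_port: int, dst_port: int) -> bool:
                """Validate a returning ACK statelessly, tolerating one slot of drift."""
                mss_idx = cookie & 0x7
                now = int(time.time()) >> 6
                return any(_cookie_for(src_ip, src_port, dst_port, mss_idx, t) == cookie
                           for t in (now, now - 1))

            c = syn_cookie("198.51.100.2", 40000, 443, mss_idx=2)
            assert check_cookie(c, "198.51.100.2", 40000, 443)
            ```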

      • by Anonymous Coward

        What do you define as a "real 'standard'" as opposed to one on paper? Usually people write down standards on paper to document them. In the case of QUIC, those documents are RFC 9000, and to a lesser extent RFC 8999.

        When I was in grad school there was an operating systems class where students implemented the TCP standard. Then they tested each pair of students to see if their stacks could communicate. I heard it was brutal. It was possible to correctly implement the standard and still not be able to talk to another different, but also correct, implementation. I was told the way professionals wrote a new stack was to test it against dozens of existing stacks. The "standard" wasn't enough.

  • by xack ( 5304745 ) on Monday May 31, 2021 @11:26AM (#61439826)
    And after we stopped using IE6 in our critical ASP.NET COBOL front ends.
  • Privacy? (Score:5, Insightful)

    by QuietLagoon ( 813062 ) on Monday May 31, 2021 @11:33AM (#61439840)
    Google QUIC-ly left privacy behind in its quest for a speedier internet, boffins find ( https://www.theregister.com/20... [theregister.com] )
    • Good. I don't want some theoretical and entirely irrelevant (as voted by Facebook users everywhere) privacy improvement hampering advancement. If you want that privacy then take responsibility for it yourself.

      We're being fingerprinted and tracked all over the internet, and you know what... we're doing just fine.

  • Google is trying to grab a larger market share and maybe a percent in cost reduction (if even that). All in all, this is an exceptionally bad idea and there is zero need for it. I think I will block this in my firewall.

  • by Improv ( 2467 ) <pgunn01@gmail.com> on Monday May 31, 2021 @11:49AM (#61439876) Homepage Journal

    I don't know if QUIC is really going anywhere, but the same concern remains from when I first heard of it -- is a modest reduction in bandwidth and transmission time worth all the headaches of adding another distinct protocol above the IP layer? So much network hardware and software is designed to think primarily about TCP, UDP, and ICMP packets. Do we really need another verb? Not every inefficiency needs a specific solution, and a world that builds a lot of specific solutions ends up more complex than one that relies on general ones (like TCP).

  • ...at least to get it adopted.

  • No protocol can move data faster than line speed, and TCP/IP already does a good job at attaining that speed.
    The point here is to centralize the web even more by increasing the complexity needed in both browsers and web servers.

    Just like with HTTP/2 Google will just claim that it's "better" and rank sites using it higher.

    • by DDumitru ( 692803 ) <doug@nosPaM.easyco.com> on Monday May 31, 2021 @01:11PM (#61440072) Homepage

      TCP will hit "line speed" with a bunch of IFs. Mostly, if you are moving a lot of data.

      QUIC mostly helps the very small connections that are encrypted, because it moves data early more efficiently. If you are Google, a lot of your traffic is advertising HTTPS hits that are short. Pull up any web page and see how many network events are needed before it is done. This is why Google wants this. It makes their advertising more efficient. It also makes your user experience "better" because the ads don't take as long to load. The "contrarians" will just say that this will increase the number of ads (and maybe it will), but we are already way past saturation, so I don't think this will matter.

      If you are a network admin, you will have another metric to monitor. You don't have to do anything, at least right away. If you are "spying" this might make your job harder. Because the crypto is not a separate step, there will be less overhead to use crypto so more traffic will be encrypted. The more traffic is encrypted, the less a bad actor can try to capture and analyze.

      All in all, the internet is a very different place than it was 30+ years ago. Data rates are a lot higher. Latencies "can" be a lot lower. The TCP window and stream compromises never were ideal, but they are even further from today's reality than they were then. Who knows, streaming video conferencing might actually be "better" over QUIC because the congestion retry logic is better suited to "today's" internet.

    • by Captain Segfault ( 686912 ) on Monday May 31, 2021 @01:12PM (#61440080) Homepage Journal

      The largest issues here involve head of line blocking, both for QUIC and for HTTP/2.

      Yes, TCP does a great job of achieving line rate, when properly tuned. If all you want to do is download/upload a large file, TCP will saturate your 10 gigabit network connection just fine once you replace congestion control with something a little bit less loss sensitive like BBR.

      The problem is that for use cases like downloading a web page, you have two head of line blocking issues:
      1. In HTTP/1.1, pipelining lets you have multiple requests on a single connection, but the server has to respond to them in order. That means a server can't respond to the second or third request promptly if the first is large or takes a long time to process at the server. HTTP/2 fixes this. That ultimately makes things easier for both web clients and servers that care about performance, given a robust HTTP/2 implementation. The alternative is to have a bunch of parallel connections, which is both more resource intensive and less friendly with respect to congestion control, and unless you never pipeline at all, you can still get unlucky.
      2. TCP being a single stream is more sensitive to single packet drops, which is inefficient when you logically have several streams. If your higher-level abstraction is multiple independent streams (as with HTTP/2), switching from TCP to a multi-stream transport protocol like QUIC (or e.g. SCTP) means packet loss has less latency impact, because it only affects the streams where the loss occurs rather than blocking all streams.

      Note that neither of these makes a difference if all you're trying to do is download a single large file. They *do* make a difference if you're downloading a 2 GB file and then decide, on the same connection, that you want to fetch a 1 KB piece of metadata.
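
      A toy model of point 2 (numbers invented, not a benchmark): a single retransmission RTT stalls every stream when they share one TCP bytestream, but only the affected stream on a multi-stream transport.

      ```python
      # Three logical streams share one path; one packet on stream 0 is lost and
      # takes an extra RTT to retransmit. Over a single ordered bytestream (TCP)
      # every stream waits for the retransmission; over per-stream delivery
      # (QUIC, SCTP) only the stream that lost the packet pays.
      RTT = 0.050                          # seconds
      base_finish = [0.100, 0.100, 0.100]  # per-stream delivery time with no loss
      lost_stream = 0

      tcp_finish = [t + RTT for t in base_finish]         # head of line: all blocked
      quic_finish = [t + RTT if i == lost_stream else t   # only stream 0 blocked
                     for i, t in enumerate(base_finish)]

      print("TCP  (one bytestream):", tcp_finish)   # [0.15, 0.15, 0.15]
      print("QUIC (multi-stream):  ", quic_finish)  # [0.15, 0.1, 0.1]
      ```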

      Lastly, I don't think the centralization argument holds a lot of water. There are plenty of sources of complexity in implementing a web browser or web server. If protocol complexity is significant, presumably you use one of a number of existing well tested implementations rather than rolling it yourself.

    • TCP (and any other protocol which relies on acknowledgments for data integrity) is fundamentally limited by round trip time (in addition to the speed of the slowest hop). On a local network, this isn't a big deal, but if you are going to the other side of the country with a 50 ms round trip time and, say, a 65 KB window size, your maximum throughput is about 10 Mbit/s (even though the entire link is probably 10 Gb/s fiber or better) -- the throughput is window/RTT. Throughput can be improved by increasing the window size.

      • Yeah, but those problems were solved decades ago. You can now transmit a lot more than 10 Mbit/s over 50 ms latency links.

        • Not with TCP -- unless you are using a 62.5 MB window size. There is a basic performance equation for any protocol that does acknowledgement:

          speed = window / RTT

          or

          window = speed * RTT

          For 10 Gbit/s at 50 ms: 10e9 / 8 * 0.050 = 62,500,000 bytes, i.e. 62.5 MB.

          Streaming protocols could, as long as the link wasn't flaky.
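
          The same bandwidth-delay arithmetic as a two-line calculator, using the figures from this thread:

          ```python
          # Bandwidth-delay product: the window an ACK-clocked protocol must keep
          # in flight to fill the pipe, and the ceiling a given window imposes.
          def max_throughput_bps(window_bytes: float, rtt_s: float) -> float:
              return window_bytes * 8 / rtt_s

          def required_window_bytes(speed_bps: float, rtt_s: float) -> float:
              return speed_bps / 8 * rtt_s

          print(max_throughput_bps(65_535, 0.050))   # classic 64 KB window: ~10.5 Mbit/s
          print(required_window_bytes(10e9, 0.050))  # 10 Gbit/s at 50 ms: 62,500,000 bytes
          ```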

  • by WaffleMonster ( 969671 ) on Monday May 31, 2021 @12:01PM (#61439912)

    The innovation QUIC provides over modern TCP implementations is complete control over congestion algorithms in user space so that Google has carte blanche to do whatever maximizes Google's profit. Google web servers, Google browsers and Google transport completing the triad.

    Imagine a bandwidth constrained channel in which there are 20 users all downloading via TCP and one user downloading via QUIC from unconstrained sites with infinite bandwidth.

    That one QUIC channel's data rate operates at twice the sum total of all 20 users combined. This is not because of better algorithms, or because TCP was leaving unutilized capacity on the table; it's because they simply tweaked CUBIC parameters to make it more aggressive.

    From 0-RTT to TLP and CWND hysteresis, there is not a single useful feature that has not been implemented in TCP. The benefit of QUIC is complete control over the stack: the ability for a single vendor to develop and deploy whatever congestion scheme they feel like without having to bother with a consensus process among other stakeholders.
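
    To make the claim concrete, here is a toy bottleneck game with invented parameters (an illustration of the mechanism, not a measurement of Google's or anyone's actual stack): two AIMD flows share a link, and the only difference is how gently each backs off on loss.

    ```python
    # Two AIMD flows share a 100-unit bottleneck. Both add 1 unit per round;
    # on a shared loss event the "standard" flow halves (beta = 0.5) while the
    # "aggressive" flow only backs off to 80% (beta = 0.8). That one knob is
    # enough to win a roughly 3:1 average share of the link.
    CAP = 100.0
    std = agg = 1.0
    tot_std = tot_agg = 0.0
    ROUNDS = 10_000

    for _ in range(ROUNDS):
        std += 1.0
        agg += 1.0
        if std + agg > CAP:   # bottleneck overflows: both flows see loss
            std *= 0.5
            agg *= 0.8
        tot_std += std
        tot_agg += agg

    print(f"average share  standard: {tot_std / ROUNDS:.1f}  "
          f"aggressive: {tot_agg / ROUNDS:.1f}")
    ```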

    • by thegarbz ( 1787294 ) on Monday May 31, 2021 @12:51PM (#61440016)

      so that Google has carte blanche

      So to be clear, you're saying that Google somehow has power over me because of some completely open RFC, being implemented in my open source non-Google Firefox browser, while I communicate with my open source non-Google nginx webserver, over a non-Google L3 internet?

      Your ideas are intriguing to me, and I wish to subscribe to your newsletter.

      • What if Google just wanted you to use their search engine no matter what protocols are used?

      • Companies gain power through the mindshare of paranoid people. This poster is now going to go home and go to bed thinking of all the ways Google is going to take over his brain.

        So to be clear, you're saying that Google somehow has power over me because of some completely open RFC, being implemented in my open source non-Google Firefox browser, while I communicate with my open source non-Google nginx webserver, over a non-Google L3 internet?

        The direct power Google has is over the triad of servers, clients and transports. They own a plurality of the world's clients (and by extension QUIC transports) and the most popular destinations (Google, YouTube...).

        The consequences of this are in fact global and extend even to people who don't use Google servers or clients because Google's traffic is effectively being prioritized over the traffic of others by means of aggressive congestion algorithms.

        Your ideas are intriguing to me, and I wish to subscribe to your newsletter.

        Fuck it. I'm just going to write my own transport called "snow...

    • by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Monday May 31, 2021 @01:21PM (#61440090) Homepage

      It's not controlled by Google -- the working group is pretty diverse. There's no conspiracy to make QUIC take unfair priority over other traffic. QUIC has some great real-world benefits. The biggest ones off the top of my head are:

      • It bakes in the TLS handshake, removing one round trip compared to TCP+TLS. So you'll have better latency when connecting to brand new servers.
      • Streams within a connection are decoupled, so packet loss on one stream won't freeze the entire connection (this was a problem with HTTP/2 on less than perfect networks).
      • Connections are identified separate from their IP/port, and can migrate. It lets you seamlessly hop connections between wifi and 5G, and can help load balancing on the server side.
      • It's easier to extend. For instance, datagrams and forward error correction can improve real-time apps like video streaming and games.

      (Caveat: I wrote .NET's HTTP/3 client implementation and am one of the primary developers working on its QUIC APIs)
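
      Back-of-envelope arithmetic for the first point (counting network round trips only and ignoring crypto computation; TCP Fast Open and TLS 0-RTT complicate the picture but are rarer in practice):

      ```python
      # Time to first application byte as a function of handshake round trips:
      # TCP + TLS 1.3 to a new server needs 2 RTTs of handshake before the
      # request, QUIC folds transport and TLS into 1, and 0-RTT resumption
      # to a known server needs none.
      def time_to_first_byte(rtt_s: float, handshake_rtts: int) -> float:
          return rtt_s * (handshake_rtts + 1)  # +1 RTT for request/response itself

      rtt = 0.100  # a transatlantic-ish path
      print("TCP+TLS 1.3, new server  :", time_to_first_byte(rtt, 2))  # 0.3 s
      print("QUIC,        new server  :", time_to_first_byte(rtt, 1))  # 0.2 s
      print("QUIC 0-RTT,  known server:", time_to_first_byte(rtt, 0))  # 0.1 s
      ```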

      • It's not controlled by Google -- the working group is pretty diverse.

        My remarks were about real-world power, not WG control. I recall Google threatening to abandon TLS entirely and go their own route with QUIC encryption if they didn't get their way with regard to certain TLS 1.3 features.

        There's no conspiracy to make QUIC take unfair priority over other traffic

        Yet both simulation and real world results demonstrate this is exactly what is happening.

        It bakes in the TLS handshake, removing one round trip compared to TCP+TLS. So you'll have better latency when connecting to brand new servers.

        Certainly a benefit.

        Streams within a connection are decoupled, so packet loss on one stream won't freeze the entire connection (this was a problem with HTTP/2 on less than perfect networks).

        Open more connections.

        Connections are identified separate from their IP/port, and can migrate. It lets you seamlessly hop connections between wifi and 5G

        I certainly don't want this.

        It's easier to extend. For instance, datagrams and forward error correction can improve real-time apps like video streaming and games.

        Would I rather have FEC implemented within a codec or by an out-of-touch transport? In a video game, would I rather send more upda...

    • by Captain Segfault ( 686912 ) on Monday May 31, 2021 @01:30PM (#61440118) Homepage Journal

      TCP already has your 20:1 style problem even without Google doing anything special. Cubic, along with most other congestion control algorithms, provides fairness in the form of competing flows having equal-sized windows. That means that a flow with 1/20 the RTT across a bottleneck will get 20x the throughput.

      Who has 1/20 the RTT? The companies, like Google, serving lots of traffic behind large CDNs, compared to someone just trying to download from a faraway server.

      Meanwhile the head of line blocking issues that HTTP/2 and QUIC target? The usual workaround for them is to have multiple connections, and multiple connections means that you have N flows that in combination compete at N times the weight of a single connection.
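
      The 20:1 point in numbers, assuming window-fair congestion control and illustrative RTTs:

      ```python
      # Window fairness gives competing flows equal cwnd at the bottleneck, but
      # throughput = cwnd / RTT, so the short-RTT CDN flow wins proportionally.
      cwnd = 1_000_000                   # bytes, equal for both flows
      paths = {"CDN (5 ms)": 0.005, "faraway server (100 ms)": 0.100}

      for name, rtt in paths.items():
          print(f"{name:24s} {cwnd / rtt * 8 / 1e6:7.0f} Mbit/s")
      # CDN gets 1600 Mbit/s vs 80 Mbit/s: a 20:1 split from RTT alone
      ```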

      • TCP already has your 20:1 style problem even without Google doing anything special. Cubic, along with most other congestion control algorithms, provides fairness in the form of competing flows having equal-sized windows. That means that a flow with 1/20 the RTT across a bottleneck will get 20x the throughput.

        This is flat wrong. You are probably thinking of the original Reno schemes. Window is explicitly not RTT dependent in CUBIC.
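
        For reference, a minimal transcription of RFC 8312's growth function (ignoring the TCP-friendly region, which is where RTT re-enters the picture): the window is a function of time since the last loss, not of RTT.

        ```python
        # CUBIC window growth per RFC 8312: W(t) = C*(t - K)^3 + W_max, where t
        # is the time since the last congestion event. No RTT term appears.
        def w_cubic(t: float, w_max: float, beta: float = 0.7, c: float = 0.4) -> float:
            k = ((w_max * (1 - beta)) / c) ** (1 / 3)  # time to climb back to w_max
            return c * (t - k) ** 3 + w_max

        w_max = 100.0  # segments at the last loss event
        for t in (0.0, 1.0, 2.0, 4.0):
            print(f"t={t:.0f}s  W={w_cubic(t, w_max):6.1f}")  # 70.0, 86.7, 95.6, 100.0
        # the same curve whether the path RTT is 5 ms or 500 ms
        ```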

  • by kackle ( 910159 ) on Monday May 31, 2021 @12:05PM (#61439922)
    The quote at the bottom of this web page right now:

    "If it ain't broke, don't fix it." - Bert Lantz
  • Totally off topic, but I know there are smarter people than me out there. So I go to distrowatch.com to read their weekly news and I got ERR_CONNECTION_REFUSED five times. It didn't work in Firefox either. I went to isitdownrightnow.com and they say there's no problem. Every other site seems to work. I'm stymied.
  • TCP vs UDP (QUIC) (Score:4, Interesting)

    by FeelGood314 ( 2516288 ) on Monday May 31, 2021 @12:36PM (#61439990)
    TCP's main advantage over the last 40 years has been that it handles congestion by backing off. Resending missing packets at the TCP level has always been lazy and/or inefficient. If I'm on voice or video, a packet lost in transmission should just be dropped, because I would rather have a tiny hiccup in my sound than a pause while I wait for the resend. There are different types of loss that can be handled differently. For a file transfer I can accept packets out of order; for a noisy connection I can send redundant packets of XORs of every N packets, and if one in N packets is lost, recreate that packet from the others. Since QUIC knows what data it will be carrying, it can be optimized for that task.
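
    The XOR scheme described above in a few lines (simple parity FEC over groups of N equal-sized packets; it recovers any single loss per group):

    ```python
    # One parity packet per group of N data packets: parity = p0 ^ p1 ^ ... ^ pN-1.
    # XORing the parity with the N-1 survivors reproduces the missing packet.
    from functools import reduce

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    group = [b"pkt0....", b"pkt1....", b"pkt2....", b"pkt3...."]  # N = 4, equal sizes
    parity = reduce(xor_bytes, group)

    lost = 2
    survivors = [p for i, p in enumerate(group) if i != lost]
    rebuilt = reduce(xor_bytes, survivors, parity)
    assert rebuilt == group[lost]
    print("recovered:", rebuilt)
    ```
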
    • Resending missing packets at the TCP level has always been lazy and/or inefficient.

      No it hasn't.

      If I'm on voice or video, a packet lost in transmission should just be dropped, because I would rather have a tiny hiccup in my sound than a pause while I wait for the resend.

      This is why stream transports (e.g. TCP) are not used for realtime communications.

      There are different types of loss that can be handled differently. For a file transfer I can accept packets out of order; for a noisy connection I can send redundant packets of XORs of every N packets, and if one in N packets is lost, recreate that packet from the others.

      Even QUIC dropped FEC because it is just as dumb as it sounds.

      As for the rest, appropriate windowing and selective acknowledgement allow out-of-order delivery and retransmission while hiding the associated latency. Where they can't, extensions exist to address tail loss.

  • I checked a somewhat recent Linux kernel and it had no CONFIG options for "quic". There were options for "QUICC", but that appears to be for a SoC.

    So what about SCTP?

    While SCTP was not designed as a replacement for TCP, it does solve multiple problems. Like:
    - Out of order packet reception
    - multiple streams to the same host (so you can multiplex requests for many different web page elements over the same SCTP connection)
    - 32 bit checksum
    - multiple checksum algorithms

    SCTP even has network path fail-over
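
    For the curious, a minimal sketch of talking SCTP from userspace on Linux (the host and port are placeholders; the plain socket module only exposes a TCP-style one-to-one association, so per-stream send/receive needs a helper library such as pysctp):

    ```python
    # Requires a kernel with the sctp module loaded; socket.IPPROTO_SCTP is
    # only defined on platforms that support it.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
    try:
        sock.connect(("192.0.2.10", 5000))  # placeholder address
        sock.sendall(b"hello over SCTP")
    finally:
        sock.close()
    ```
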
    • QUICC was the Motorola 68360 communications chip and successors. This evolved to PowerQUICC.
    • by mishehu ( 712452 )
      SCTP has a 4-step handshake, IIRC. I think that one of the goals of QUIC is to reduce the time needed to set up the connection in the first place. Weirdly enough, I still don't see many carrier-grade interconnects for VoIP traffic using SCTP.
      • Yes, SCTP has a 4-step handshake. It was supposed to improve security and reliability, to prevent things like the TCP half-open. Obviously similar things could happen with SCTP. But, if I remember correctly, there are defined timeouts to clear out failed sessions. Unlike TCP, which originally did not have comprehensive failure mitigation, like what to do on half-open.

        As for QUIC trying to reduce setup time, it seems silly for one goal to be reducing the initial connection time by introducing a whole new protoc...
        • by mishehu ( 712452 )

          I don't know all the ins-and-outs about QUIC, but if my understanding is correct, part of it is to help with sourcing data from multiple locations at once. As for the handshake issue, let's say two systems are 125 ms of latency apart. That can mean 1 second delay to complete the handshake alone. Multiply that over multiple connections, and that can add up.

          I work in VoIP and have contributed in the past to FreeSWITCH. The vast bulk of VoIP signalling traffic that traverses the internet at least to-from c...

    • by amorsen ( 7485 )

      QUIC is built on UDP. Currently there is no QUIC offload option on network cards, so there is nothing to support in the kernel.

      While you can argue that the UDP header is a bit useless for QUIC, it provides a convenient way to run multiple QUIC connections on the same IP address, and it only costs 8 bytes per packet. Wasting those 8 bytes means that QUIC can traverse most firewalls, unlike SCTP, which has approximately zero deployment.

  • by theskipper ( 461997 ) on Monday May 31, 2021 @01:05PM (#61440058)

    Go to chrome://flags and set "Experimental QUIC protocol" to disabled. Doesn't mean much now but at least it's a tiny gesture to stick it to the man.

    Oh but be sure to turn it back on once Google completely perfects fingerprinting. Don't want your TMZ Kardashian news slowed to a crawl because you're a selfish dickwad that cares about privacy. Rock on.

    • Gone? I doubt it. They'll bring it back just when we aren't looking.

      Google owns the internet. Didn't you get the memo?

  • Firewalls (Score:5, Interesting)

    by Myria ( 562655 ) on Monday May 31, 2021 @01:55PM (#61440210)

    Firewalls handle TCP much better because they have a well-defined concept of a connection beginning and ending that the firewall can see. QUIC is over UDP because it has its own flow control. Firewalls will not do well with QUIC.

    Sure, firewalls could be reprogrammed to understand QUIC traffic, but the inertia is so high that we can't even get IPv6 into the mainstream, and IPv6 has been around since 1998.

    • Does Charter count as "mainstream"? And since they offer free modems, that makes it easier to roll out upgrades on both ends of the connection. The one gap is customer-owned routers, though the newer ones have auto-update.

    • Many large organisations channel all HTTP(S) traffic through proxies. I wonder how long it will be before the major proxy platforms can accommodate a new protocol.
      • A few months to a couple of years, depending on the platform. They'll start paying more attention once HTTP/3 is approved. Until then, virtually every site will accept older versions of HTTP and the proxies will work just fine.

    • Firewalls and proxies will adapt. They caught up to HTTP/2, they'll catch up to QUIC and the upcoming HTTP/3.

      I think you also underestimate IPv6. Every firewall knows how to handle it, and it makes up a third of overall internet traffic. This is highly variable and based on the ISP, but several major US ISPs use it (Comcast and Spectrum, for example). Mobile providers in the US also make widespread use of it (Verizon, AT&T, and T-Mobile all provide native IPv6 addresses to mobile devices). After that, s...

    • by amorsen ( 7485 )

      Firewalls that don't adapt will just see a standard UDP stream. They may time the QUIC connection out (because most firewalls are pretty aggressive about connectionless sessions), but as long as the client initiates after the pause, this does not matter. The firewall will think that the client is making a new connection unrelated to the previous one, and SNAT will assign a new source port, but QUIC has its own connection IDs so it won't care.

      The only problem is if a client opens a connection, goes o...
