Google's Network Congestion Algorithm Isn't Fair, Researchers Say (vice.com) 20

Researchers at Carnegie Mellon University say a Google algorithm designed to help reduce internet congestion is mathematically unfair, resulting in network management systems that may disadvantage some traffic relative to others. From a report: Several years back, Google began work on a new open source congestion control algorithm (CCA) designed to improve the way the internet functions. The result was BBR, short for Bottleneck Bandwidth and RTT (Round-Trip Time). The goal of the project: to improve how network packets travel between servers to mitigate congestion on the internet. CCAs have long been used to help manage congestion -- ideally while treating all traffic equally. But in a study unveiled last week at the Internet Measurement Conference in Amsterdam, researchers revealed that BBR doesn't actually do a very good job of that last part. In fact, they found that during periods of congestion, BBR connections would take up 40 percent of the available bandwidth, leaving the remaining 60 percent to be fought over by the remaining users on the network.
This discussion has been archived. No new comments can be posted.
  • by dgatwood ( 11270 ) on Thursday October 31, 2019 @03:22PM (#59367580) Homepage Journal

    Everybody knows QoS isn't "fair", if by "fair" you mean that all traffic is treated equally. The whole point of QoS is to ensure that latency-sensitive traffic never gets delayed or dropped, so that bulk, non-latency-sensitive traffic doesn't break the user experience. If demand exceeds the size of the pipe, that cannot possibly be fair: guaranteeing delivery for some traffic necessarily means other traffic gets handled later. This goes without saying.

    What "fair" means in this context is that no traffic is unduly penalized up to the point where the traffic exceeds capacity, to the maximum extent practical.

    What makes this problematic, of course, is that if you guarantee the throughput, people will use it. For example, if you guarantee Skype 100 Mbps and its packets never get dropped or delayed, how would its in-app algorithms know that the link is congested, and that they're delaying other traffic by using that bandwidth? I'm not sure how to solve that problem, but that's a different problem. :-)

    • BBR is not QoS (Score:5, Interesting)

      by thesupraman ( 179040 ) on Thursday October 31, 2019 @03:28PM (#59367618)

      You would almost have a point, if this was a QoS system. It's not; it's a congestion management tool.

      The problem here appears to be that BBR-enabled connections collectively get a fixed share of a congested bottleneck, roughly a 2:3 ratio, so whenever BBR connections account for less than 40% of the traffic on the link, they get an unfair advantage.

      So this is acting a little like QoS, giving BBR a boost if its traffic is not very common... it is a subtle but important problem.
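
      To make the ratio concrete, here's a quick back-of-the-envelope sketch (the 40% figure is from the study; the flow counts are made-up examples):

          # One BBR flow competing with nine loss-based flows at one bottleneck.
          # Per the CMU measurements, BBR flows collectively grab ~40 percent
          # of the bandwidth during congestion, however many of them there are.
          def shares(n_bbr, n_other):
              fair_share = 1.0 / (n_bbr + n_other)  # what equal sharing gives
              bbr_each = 0.40 / n_bbr               # 40% split among BBR flows
              other_each = 0.60 / n_other           # the rest fight over 60%
              return bbr_each, other_each, fair_share

          bbr, other, fair = shares(1, 9)
          # bbr = 0.40, other ~= 0.067, fair = 0.10:
          # the lone BBR flow gets four times its fair share.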

      • TFA is not news (Score:5, Informative)

        by complete loony ( 663508 ) <Jeremy.Lakeman@nOSpaM.gmail.com> on Thursday October 31, 2019 @09:12PM (#59368562)

        BBR tries to measure the baseline RTT (round trip time) of the network path. Comparing that to the current RTT allows the detection of congestion without needing to fill network buffers and trigger packet loss.

        The available bandwidth might increase, so sometimes you want to deliberately send more than your current estimate allows. If the RTT goes up, there was no unused bandwidth.

        You might be sending too much data, causing congestion you haven't detected because you've never seen the path's real RTT. So sometimes you want to send less data and see if the RTT goes down. But you don't want to do this very often, since you're leaving bandwidth unused, and you want to synchronise it across all flows through the same bottleneck.

        During the startup phase, you want to ramp up throughput and deliberately overshoot the limit of the network path and create congestion. Otherwise it would take forever to detect the network path's actual capacity.

        If the network path is mostly full of long-lived streams, they should end up with a reasonably fair share of bandwidth.

        If the network path is full of short-lived streams, they're never going to back off and see the real RTT. This is not a surprise; it is described in the original BBR spec.

        Because of the lack of information from the network, I'm pretty sure there's no general solution that can meet all goals you might have. IMHO BBR does fairly well though.
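
        To make the RTT-baseline idea concrete, here is a minimal sketch of delay-based congestion detection (my illustration, not Google's code; the 1.25x threshold and 10-second probe interval are assumptions):

            # Track the lowest RTT ever seen as the path's baseline; sustained
            # inflation above that baseline means packets are queueing somewhere.
            class DelayBasedDetector:
                def __init__(self, probe_interval=10.0):
                    self.min_rtt = float("inf")   # baseline round-trip time
                    self.last_probe = 0.0
                    self.probe_interval = probe_interval

                def on_rtt_sample(self, rtt, now):
                    self.min_rtt = min(self.min_rtt, rtt)
                    # RTT well above the baseline => congestion, detected without
                    # filling buffers or waiting for packet loss.
                    congested = rtt > 1.25 * self.min_rtt
                    # Occasionally send less, so queues drain and the true
                    # baseline becomes observable again; rarely, because the
                    # probe leaves bandwidth unused.
                    should_probe = (now - self.last_probe) > self.probe_interval
                    if should_probe:
                        self.last_probe = now
                    return congested, should_probe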

    • Everybody knows QoS isn't

      BBR is a congestion algorithm for TCP, not a QoS algorithm.

      If demand exceeds the size of the pipe, that cannot possibly be fair: guaranteeing delivery for some traffic necessarily means other traffic gets handled later. This goes without saying.

      Not applicable.

      What "fair" means in this context is that no traffic is unduly penalized up to the point where the traffic exceeds capacity, to the maximum extent practical.

      What fair means is not drowning out other competing congestion-aware streams.

      • by dgatwood ( 11270 )

        BBR is a congestion algorithm for TCP, not a QoS algorithm.

        Sorry, yes, I hadn't dug into the details of the algorithm (and I still haven't dug very deeply, but I at least skimmed a short article). Knowing that this is a sender-side congestion management algorithm makes the story even more of a head-scratcher.

        What fair means is not drowning out other competing congestion-aware streams.

        I kind of wonder if that is even possible. Somebody else's algorithm could easily back off too aggressively, resulting in your streams taking more than a fair share anyway.

        • by Bengie ( 1121981 ) on Friday November 01, 2019 @12:09AM (#59368812)
          BBR cares more about latency and pacing than actual packet loss. Transient packet loss is effectively ignored, because it looks like normal noisy loss rather than congestion-based loss. If loss is followed by an increase in latency, then BBR cares. Common congestion control algorithms react to any loss and cut the window in half. Because of these two features, BBR responds less to sporadic loss while the others respond a lot, so BBR ends up with more bandwidth. BBR 2.0 fixes most of these issues, along with other fixes and general improvements.

          It's been known for a long time that BBR tends to hog bandwidth, but in doing so, latency and loss go down, because BBR is better about both. I see it as still an overall win. But that was known from empirical measurement; I think what this story is really about is that they were able to model it mathematically.

          And an entirely different aspect is completely ignored. Most of the other CC algorithms that only care about loss tend to synchronize and fill buffers until they are full, well beyond the route's capacity. While BBR may "hog" bandwidth, it may actually be increasing effective bandwidth for those other CC algorithms by reducing overall loss. I've seen common CC algorithms with upwards of 50% retransmissions. They might get more bandwidth, but at the cost of increased latency and bursty loss, which can reduce effective bandwidth via retransmissions.
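
          A toy sketch of the two reactions being contrasted here (the threshold and back-off factor are illustrative, not taken from any real stack):

              # Loss-based CC treats every loss as congestion: halve the window.
              def reno_on_loss(cwnd):
                  return cwnd / 2

              # A BBR-like sender mostly disregards isolated loss; it slows down
              # only when RTT inflation shows a queue is actually building.
              def bbr_like_on_loss(pacing_rate, rtt, min_rtt):
                  if rtt > 1.25 * min_rtt:       # sustained latency growth
                      return pacing_rate * 0.9   # modest back-off
                  return pacing_rate             # sporadic loss: hold the rate

          Against a link with occasional random (non-congestion) loss, the loss-based sender keeps halving while the BBR-like one holds its rate, which is exactly why BBR ends up with the larger share.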
    • by Bengie ( 1121981 ) on Friday November 01, 2019 @09:27AM (#59369510)
      Any decent queuing algorithm can guarantee latency in the face of congestion for all streams. There are knobless algorithms that, in constant memory, isolate flows from each other using a combination of fair queuing and punishing standing queues. If a flow's queue lives longer than a single packet, buffering is occurring because the flow is pushing data faster than it can be sent. The algorithm gives these flows the lowest priority until their queues have drained; so long as a flow's queue does not drain, all other flows get to go first. If a packet in a greedy flow's standing queue sits there longer than 5ms, the packet gets discarded. This effectively caps the per-hop latency at about 5ms.

      Sparse flows that never cause buffering never feel the delay. VoIP, games, etc. are generally low bandwidth, relatively speaking. Their packets are effectively prioritized because they don't cause a standing queue; in practice, what you see is latency measured in microseconds. It's awesome. Greedy flows, like fast TCP downloads, will burst several packets, which either all get drained within 5ms or the algorithm starts dropping packets, but only for the offending flows. The set of flows that are all causing standing queues get prioritized by a deficit round robin, where the flow that has sent the most data goes last and the flow that has sent the least goes first.

      What you're left with is that non-greedy flows never see loss or latency, while greedy flows are allowed short bursts with no penalty and perpetually greedy flows are punished progressively harder until they back off, all within roughly 5ms, which means the sender responds to congestion very quickly. This actually results in less loss for greedy flows, stable transmission rates, and very stable bandwidth.

      For the sender to respond quickly to congestion, it must be notified quickly about congestion. The problem with current popular TCP algorithms coupled with dumb FIFO queues is that they fill the queue, which can be very large, and the loss happens at the tail, so the sender isn't notified about the congestion until AFTER the queue has been drained, which can take thousands of milliseconds. These more modern queuing algorithms cap that at about 5ms.

      Some interesting research that came out of investigating congestion found that, regardless of link bandwidth and the number of flows, the number of greedy flows at any given instant is about 100, and those 100 represent about 80% of the congestion. It doesn't matter if you have 1,000 flows or 100,000,000 flows; it will always be about 100. The reason these algorithms can run in constant memory regardless of the number of flows is this quirk. Instead of uniquely identifying a flow, you can hash it into a bucket. You can either have enough buckets to keep the chance of collision low, or you can uniquely identify a flow within a bucket, aka an "n-way hash bucket". Some research found that a 128-entry hash table with 8-way buckets, 1024 total entries, NEVER experienced a collision in any testing. While it has not been mathematically proven that a collision can never happen, in practice the chance is small enough to be a non-issue. It doesn't matter how fast the link is or how many flows are crossing it; 1024 entries is enough to track all of the greedy flows.
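
      A compressed sketch of that scheme (structured after the description in this comment, not any particular qdisc's source; the bucket count and 5ms cap are the figures cited above):

          import time
          from collections import deque

          NUM_BUCKETS = 1024     # fixed table: constant memory for any flow count
          MAX_SOJOURN = 0.005    # 5 ms cap on how long a packet may sit queued

          buckets = [deque() for _ in range(NUM_BUCKETS)]
          new_flows, old_flows = deque(), deque()   # sparse flows served first

          def enqueue(flow_id, packet):
              b = hash(flow_id) % NUM_BUCKETS
              if not buckets[b]:
                  new_flows.append(b)    # no standing queue => treat as sparse
              buckets[b].append((time.monotonic(), packet))

          def dequeue():
              while new_flows or old_flows:
                  # Sparse (new) flows always go before backlogged (old) ones.
                  src = new_flows if new_flows else old_flows
                  b = src.popleft()
                  q = buckets[b]
                  while q:
                      enq_time, packet = q.popleft()
                      if time.monotonic() - enq_time > MAX_SOJOURN:
                          continue       # stood too long: drop, punish the flow
                      if q:
                          old_flows.append(b)   # still backlogged => now "old"
                      return packet
              return None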
      • I'd like to note that fq_codel, now the default on Apple iOS, macOS, and Linux, and the default in many QoS systems (over a billion boxes), works just fine at wrestling BBR under control. BBRv1, however, has a slow response to packet loss, which BBRv2 fixes (BBRv2 has other problems of its own). If you want a fairer internet for yourself, not one dominated by a single CGN or provider, or badly managed by your ISP, run fq_codel on your gateways. http://blog.cerowrt.org/post/n... [cerowrt.org]
  • by BeerFartMoron ( 624900 ) on Thursday October 31, 2019 @03:25PM (#59367594)
    Any packet that is unfairly prioritized by BBR MUST have the Evil Bit set.
  • Maybe they should author and submit an Internet RFC for a "better BBR" than BBR.
  • by WaffleMonster ( 969671 ) on Thursday October 31, 2019 @04:10PM (#59367780)

    Researchers at Carnegie Mellon University say a Google algorithm designed to help reduce internet congestion

    Point was never about reducing congestion. The point is pushing aggression to benefit Google.

    Ware told me that the flaws in Google's algorithm weren't intentional, and that the company has been highly receptive to criticism. She said work had already begun at Google on version 2 of the algorithm, though testing will need to be done to confirm whether the problem was fixed.

    Given they intentionally tweaked CUBIC parameters in QUIC to twice the aggression of TCP sessions, I'll believe this when hell freezes over. I've been in the room at a few IETF TCPM meetings and it's always the same shit of Google pushing aggression every way they can, with little consideration of the consequences.

    "Lots of companies deploy new congestion control algorithms, but most of them are proprietary," she said.

    No they don't. Shit like FastTCP is an outlier.

    For example, content delivery network and cloud service provider Akamai uses an algorithm dubbed FastTCP whose implementation is entirely secret.

    Yep, senders tweaking their shit and keeping it proprietary to keep other people from doing the same for their own benefit. While Google's efforts are open, they are in a way different position than Akamai. They own the whole stack: the OS (Android), the browser (Chrome), and the content (Google, YouTube, etc.). This gives them the power to unilaterally push whatever is in their own best interests regardless of the impact on others.

    She also noted that Microsoft uses its own non-transparent congestion control algorithm for Skype.

    Because lossy realtime audio and video algorithms have something meaningful to do with reliable stream transports.

    "It's a classically challenging problem to get these algorithms to play nice with each other, and therefore we worry that there is a lot more unfairness going on on the Internet than we know about," she said.

    This is all PR spin. Look at all these other people screwing with congestion control, all the unfairness everywhere... we are just another cog in the system, don't pay any attention to our outlier behavior.

    The reality is when Google made QUIC unfair they knew exactly what they were doing. It was a willful, deliberate intentional act.

    Many older algorithms (like Reno and CUBIC) only responded to congestion after congestion was detected.

    You can't make this shit up.

    Newer CCAs like BBR are being developed that can respond to internet congestion in real time, and will be important for improving network performance.

    In reality there is little to improve upon as is... Once you throw in tail loss and some hysteresis on initial windows, there are only rounding-error scraps left. The only possible "improvements" come from being more aggressive than competing streams.

    Or, in PR speak, our amazing algorithm predicts the future before it happens.

    Equally important: transparency that allows researchers to look under the hood to make sure the end result is an internet that works equally well for everybody.

    Researchers have been raising these issues for years and nothing has changed. But hey, at least we all "transparently" know what is happening, for whatever that's worth (absolutely nothing).

    Even with a black box you can still simulate outcomes... Knowing the underlying algorithm is only marginally useful. The only substantive outcomes are simulation and real world results.

    • Many older algorithms (like Reno and CUBIC) only responded to congestion after network buffers were filled and packets were dropped.

      Yikes. Sure, Reno and CUBIC congestion detection is pretty bad, but that wording was terrible.

    • by Bengie ( 1121981 ) on Friday November 01, 2019 @08:45AM (#59369442)
      Have you ever sniffed a TCP BitTorrent stream of a popular torrent? I have a dedicated connection with guaranteed bandwidth and very low latency to pretty much everywhere in the world. I was downloading a popular Linux ISO that had been out for a few weeks when I noticed some packet loss, and BitTorrent was only getting about half of my bandwidth. I thought something was wrong and was planning to call the ISP. But then I checked my firewall: it was showing a 100% rate ingressing the WAN, yet only half that rate egressing the LAN.

      I decided to sniff the WAN side. Lo and behold, TCP streams with nearly 50% dup rates and retransmissions everywhere. This was odd, because I should have zero loss on my dedicated connection. I picked several IPs from the offending streams and did a traceroute. Near-perfect latency along the entire route, except the last hop, where the latency went from 30ms to 2,000ms. Bufferbloat. The senders were cable modem users seeding the torrent, and their bufferbloat was enormous, but Reno/CUBIC only respond to packet loss. There was very little actual loss, but the senders treat a missing ACK after about 1,500ms as a loss and retransmit immediately. Since the RTT was 2,000ms, every f'n packet was being retransmitted. I was being DDoS'd. It turned out I was ingressing about 120Mb/s on my 100Mb link. I was being DDoS'd by f'n shitty CC algorithms that don't care about latency.

      BBR does not have this problem. BBR cares about latency and mostly disregards stochastic loss; Reno/CUBIC, on the other hand, care almost exclusively about loss. This means BBR will back off well before the link becomes "congested", even in the case of extreme bufferbloat. Reno/CUBIC will do one of two extremes: continue to fill the buffer until a burst of loss occurs because it's full, or back off prematurely because of stochastic loss. That makes BBR look "greedy" when really it's just making sure the route is not congested. Anyway, however you try to spin BBR as the "bad guy" here, Reno/CUBIC were DDoSing me under extreme bufferbloat, which is the norm for residential connections. Reno/CUBIC are fundamentally broken on modern networks.
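
      The arithmetic of that failure mode is simple; here's a sketch using the parent's numbers (the 1.5-second timeout is the figure cited above, not a protocol constant):

          # If the retransmit timer is shorter than the bloated path RTT,
          # every ACK arrives "late" and every packet gets sent again.
          RTO = 1.5        # seconds: sender's retransmission timeout (as described)
          PATH_RTT = 2.0   # seconds: round-trip time once the last-hop buffer is full

          if PATH_RTT > RTO:
              # The ACK for each packet arrives after its timer has already fired,
              # so the sender retransmits everything: roughly half the bytes on
              # the wire are duplicates, matching the ~50% dup rate observed.
              print("every packet is spuriously retransmitted")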

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...