
Network 'Jitters' Confuse Packet-Routing Algorithms, Make Unfair Congestion Inevitable (ieee.org) 51

IEEE Spectrum reports that a new study finds that many key algorithms designed to control "congestion" delays on computer networks "may prove deeply unfair, letting some users hog all the bandwidth while others get essentially nothing." [A]lthough hundreds of congestion control algorithms have been proposed in the last roughly 40 years, "there is no clear winner," says study lead author Venkat Arun, a computer scientist at MIT. "I was frustrated by how little we knew about where these algorithms would and would not work. This motivated me to create a mathematical model that could make more systematic predictions...."

Their new study finds that given the real-world complexity of network paths, there will always be a scenario where a problem known as "starvation" cannot be avoided — where at least one sender on a network receives almost no bandwidth compared to other users.... Congestion control algorithms rely on packet losses and delays as signals to infer congestion and decide how fast to send data. However, packets can get lost and delayed for reasons other than network congestion. For example, data may be held up and then released in a burst with other packets, or a receiver's acknowledgement that it received packets might get delayed. The researchers called delays that do not result from congestion "jitter."

Congestion control algorithms cannot distinguish between delays caused by congestion and delays caused by jitter. This can lead to problems, as delays caused by jitter are unpredictable. This ambiguity confuses senders, which can make them each estimate delay differently and send packets at unequal rates. The researchers found this eventually leads to situations where starvation occurs and some users get shut out completely. In the new study, the researchers analyzed whether every congestion control algorithm of which they were aware, as well as some new ones they devised, could avoid starvation.

The scientists were surprised to find there were always scenarios with each algorithm where some people got all the bandwidth, and at least one person got basically nothing.... "Extreme unfairness happens even when everybody cooperates, and it is nobody's fault." Although existing approaches toward congestion control may not be able to avoid starvation, the aim now is to develop a new strategy that does, Arun says.
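
To make the ambiguity concrete, here is a toy Python sketch (an illustration of the idea, not the paper's formal model): two hypothetical delay-based senders share a bottleneck, but one path adds more non-congestive jitter, which that sender misreads as congestion. The rates, thresholds, and the crude "shared queue" proxy are all made up for illustration.

```python
import random

def delay_based_sender(base_rtt_ms, jitter_ms, congestion_ms, rate,
                       target_queue_ms=5.0):
    """One control step of a hypothetical delay-based sender.

    The sender only observes total delay = base RTT + queueing delay + jitter.
    It cannot tell jitter from congestion, so it backs off whenever the
    measured delay exceeds the base RTT by more than its target."""
    measured = base_rtt_ms + congestion_ms + random.uniform(0.0, jitter_ms)
    if measured - base_rtt_ms > target_queue_ms:
        return rate * 0.9        # treat extra delay as congestion: back off
    return rate + 0.1            # otherwise probe for more bandwidth

# Two senders share a link; sender B's path adds far more non-congestive
# jitter (e.g. ACK aggregation), so B backs off even when the link is idle.
rate_a, rate_b = 1.0, 1.0
for _ in range(10_000):
    congestion = max(0.0, rate_a + rate_b - 10.0)   # crude shared-queue proxy
    rate_a = delay_based_sender(50.0, 1.0, congestion, rate_a)
    rate_b = delay_based_sender(50.0, 20.0, congestion, rate_b)
print(f"A: {rate_a:.2f}, B: {rate_b:.2f}")  # B typically ends up near zero
```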

  • by bubblyceiling ( 7940768 ) on Monday August 22, 2022 @02:05AM (#62810219)
    One solution could be adding ”amt of time delayed” to the packet. Like if it was delayed due to bursting or maybe QoS or some other reason that is not latency.
    • by wangi ( 16741 )

      One solution could be adding ”amt of time delayed” to the packet. Like if it was delayed due to bursting or maybe QoS or some other reason that is not latency.

      Great, set that on my packets and give them an advantage.

      • The sender SHOULD also set the Evil Bit to 1.

      • by vivian ( 156520 )

        Is this what some game cheaters do to get an unfair advantage in online games? I seem to remember hearing something about them jittering their connection for an advantage somehow, but I could never understand how that would help - seems to me it would also make their own actions and reactions laggy.

    • by AmiMoJo ( 196126 ) on Monday August 22, 2022 @07:45AM (#62810559) Homepage Journal

      The problem is trust. Anyone could tag their packets as heavily delayed and in need of urgent prioritization. An ISP could do it.

      It would also benefit providers who under-invest.

      • But that will only make it worse for the "malicious actor". If, say, you set the delay to a high value and that packet ends up actually being delayed due to congestion, the algorithm will only end up favoring the congested route.

        Which only makes things worse. The only thing this annotation would do is help the algorithm get an accurate measure of the time spent due to congestion.

        It is only in your interest to set this truthfully.
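
As an aside, here is a hedged sketch of the idea being debated in this sub-thread: a hypothetical packet field that self-reports non-congestive delay, which the receiver subtracts before estimating congestion. No real IP or TCP header carries such a field; the names (`AnnotatedPacket`, `reported_jitter_ms`) are invented for illustration.

```python
from dataclasses import dataclass, field
import time

@dataclass
class AnnotatedPacket:
    """Hypothetical packet that self-reports non-congestive delay.

    `reported_jitter_ms` is the field proposed in this thread: time the
    packet was held for reasons other than queueing (burst release,
    scheduling, etc.). Nothing in real IP/TCP carries this."""
    payload: bytes
    sent_at: float = field(default_factory=time.monotonic)
    reported_jitter_ms: float = 0.0

def congestion_delay_estimate(pkt: AnnotatedPacket, base_rtt_ms: float) -> float:
    """Receiver-side estimate: subtract the self-reported jitter from the
    measured delay before feeding it to the congestion controller.

    As the thread argues, over-reporting jitter makes the sender treat a
    genuinely congested path as clean, so it keeps pushing into the queue
    and mostly hurts itself -- provided the field is never used for priority."""
    measured_ms = (time.monotonic() - pkt.sent_at) * 1000.0
    return max(0.0, measured_ms - base_rtt_ms - pkt.reported_jitter_ms)
```
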
  • by Anonymous Coward

    For a good introduction as to what the problem is, see the Briscoe paper. [stanford.edu]

    These guys do reference it, but so off-hand as to be grudging.

  • If you have congestion, you need to increase your bandwidth.

    • by samjam ( 256347 )

      but where on the multihop route do you need to increase bandwidth?

      • by Craefter ( 71540 )

        Last mile. The consumer connection is always the limiting factor. Unless of course you have a crap ISP.

        • Bahah. You've obviously never worked in a NOC troubleshooting backbone links going across an old Qwest fibre backbone. I've seen their 40 Gbit links suddenly trickle down to 500 kbit/s, and without working at CenturyLink/Qwest, there's nothing you can do but try to reroute all connectivity around it. Sometimes that's the only link, though, and you sit on their backbone support for hours while they troubleshoot the "issue of the night".
      • You need to upgrade the congested links. How is that not obvious?
      • Any professionally operated network should still work when one of the redundant links fails, which means that typically no link should ever have more than 50% utilization.

        The last mile is a bit special, as there it typically is under the control of the user. You can choose which packets your router will throw away. You can choose at what utilization level you operate your last mile. So congestion is a problem you can solve there.

    • Or you need to implement real QoS instead of what IP has.
      • QoS, CoS, and other priority control mechanisms shouldn't really do anything in the absence of network contention.
        • Agreed, QoS is a band-aid for an undersized network.

          • It's useful for dealing with short periods of contention in limited scenarios. For example inside a datacenter where there's a call switch, it's kind of important for 911 calls to keep working. A lot of people don't really understand that priority mechanisms are really only suited for local networks.
            • People that have used circuit-switched networks with end-to-end flow control and QoS, like ATM, don't have this confusion.
    • You will always have congestion (RFC 970) due to the nature of TCP. Most TCPs will grab all the bandwidth [youtube.com] until they get a drop (funny video). What the paper shows is that delay-based TCPs can starve under more circumstances than previously understood.

      What's the answer? FQ (especially RFC 8290), and AQM at every bottleneck on the internet.
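
As a rough illustration of the AQM part of that recommendation, here is a heavily simplified CoDel-style drop decision in Python, loosely following the control law described in RFC 8289 (which the fq_codel scheduler of RFC 8290 builds on). Real CoDel keeps more state and supports ECN marking; treat this purely as a sketch.

```python
import math

class SimplifiedCoDel:
    """Heavily reduced CoDel-style AQM: start dropping once packets have
    spent more than `target` in the queue for at least `interval`, then
    drop more and more often until the standing queue drains. A sketch
    loosely after RFC 8289, not a faithful implementation."""

    def __init__(self, target_ms=5.0, interval_ms=100.0):
        self.target = target_ms
        self.interval = interval_ms
        self.first_above = None    # when sojourn time first exceeded target
        self.dropping = False
        self.drop_count = 0
        self.next_drop = 0.0

    def should_drop(self, sojourn_ms, now_ms):
        """Decide whether to drop the packet currently being dequeued."""
        if sojourn_ms < self.target:
            # Queue delay is fine again: leave the dropping state.
            self.first_above = None
            self.dropping = False
            self.drop_count = 0
            return False
        if self.first_above is None:
            self.first_above = now_ms
            return False
        if not self.dropping and now_ms - self.first_above >= self.interval:
            # A standing queue persisted for a whole interval: start dropping.
            self.dropping = True
            self.drop_count = 1
            self.next_drop = now_ms + self.interval
            return True
        if self.dropping and now_ms >= self.next_drop:
            # Control law: the longer the queue persists, the shorter the
            # spacing between drops (interval / sqrt(count)).
            self.drop_count += 1
            self.next_drop = now_ms + self.interval / math.sqrt(self.drop_count)
            return True
        return False
```
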
  • (Score:1, Offtopic)

    by Bimloaded ( 10104922 )
    Could this be real
  • Bandwidth is cheap and works. QoS is not and does not.
    • by Anonymous Coward

      You don't need much hardware support to make QoS work, but you need more than ethernet gives you. Since just about everything is ethernet these days, indeed, QoS doesn't work. But that's ethernet's fault, not QoS's. And it still can't give us infinite bandwidth everywhere, so congestion will remain with us.

      • The telecommunications industry always pushes complicated network management systems. It's in their nature. But bandwidth delivers more bang for the buck, and that's why IP eats the competition alive. You don't need infinite bandwidth, just enough to statistically avoid regular congestion. If you operate networks, you know where the bottlenecks are and when congestion occurs. The cheapest and best fix is more bandwidth at those points so that congestion is almost always avoided.
      • We tried to deprecate the term QoS in favor of something that does work, SQM [bufferbloat.net]. Even then, if you are running at the line rate, just something like fq-codel or cake suffices (without a shaper). They are very lightweight, yet massive improvements over a FIFO, and solve the problems outlined in this paper (were they at the bottleneck).

        There's been something of a quiet revolution here. I suspect the vast majority of slashdot readers are running fq_codel today, as it's the default in linux, osx, and ios, all t
  • >"The scientists were surprised to find there were always scenarios with each algorithm where some people got all the bandwidth, and at least one person got basically nothing...."

    Then maybe introducing a random element in one of the steps somewhere might help? Or using the randomness to switch between algorithms on the fly, assuming that it is not always the same device(s) that end(s) up starved with each one.

  • If each algorithm tends to be unfair to different users, they could cycle through the group of algorithms. This would spread out the inevitable unfairness "fairly." I'm assuming that the overhead of changing algorithms is negligible.
  • The one algorithm that never gives all the bandwidth to one party is round robin, but it also doesn't provide any QoS. But wouldn't multiple classifying steps, where one of them is round robin or similar, solve this problem? For example, having multiple bins that are all serviced at different rates, with one of them being simple round robin so that everyone always gets something?
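
A minimal sketch of that "everyone always gets something" idea, in the spirit of deficit round robin (the kind of per-flow scheduling that fq_codel refines): each backlogged flow gets a byte quantum per round, so a flooding flow cannot starve a quiet one. The class and flow names are illustrative, not any kernel's API.

```python
from collections import deque

class DeficitRoundRobin:
    """Minimal deficit-round-robin scheduler: every backlogged flow gets
    at least `quantum` bytes of service per round, so no flow can be
    starved by a more aggressive sender. A sketch, not RFC 8290."""

    def __init__(self, quantum=1500):
        self.quantum = quantum
        self.queues = {}     # flow id -> deque of (packet, size) tuples
        self.deficit = {}    # flow id -> accumulated byte credit

    def enqueue(self, flow, packet, size):
        self.queues.setdefault(flow, deque()).append((packet, size))
        self.deficit.setdefault(flow, 0)

    def dequeue_round(self):
        """Serve each backlogged flow once; return the packets sent."""
        sent = []
        for flow, q in list(self.queues.items()):
            if not q:
                continue
            self.deficit[flow] += self.quantum
            while q and q[0][1] <= self.deficit[flow]:
                packet, size = q.popleft()
                self.deficit[flow] -= size
                sent.append((flow, packet))
            if not q:
                self.deficit[flow] = 0   # empty queues don't bank credit
        return sent

sched = DeficitRoundRobin()
for i in range(5):
    sched.enqueue("greedy", f"g{i}", 1500)   # one flow floods the link...
sched.enqueue("quiet", "q0", 200)            # ...another sends a single packet
print(sched.dequeue_round())  # the quiet flow still gets served this round
```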

  • by davecb ( 6526 ) <davecb@spamcop.net> on Monday August 22, 2022 @08:09AM (#62810617) Homepage Journal

    The lead author, Venkat Arun, noted that fair queuing (FQ), not congestion control, is what one needs. And in the Linux kernel, that's part of both fq-codel and CAKE.

    From the bloat list: https://lists.bufferbloat.net/... [bufferbloat.net] Starvation in e2e congestion control

    Hi Dave,

    Yes definitely, when fair-queuing is present, the onus is no longer on the congestion control algorithm to ensure fairness. In fact, if buffer-sharing is implemented correctly, FQ can even stand against adversarial congestion control algorithms. We have good reason to believe that end-to-end congestion control algorithms can provably work (i.e. achieve high utilization, bounded delay and fairness) in the presence of FQ.

    Cheers,
    Venkat

    I expected better from the IEEE, who previously avoided writing click-bait articles...

    By the way, the Dave he's writing to is Dave Taht, a /. member and an expert on network performance
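
For anyone who wants to check or change this on a Linux box, these are the usual tc invocations (wrapped in Python purely to keep the example runnable); "eth0" and the 100 Mbit shaper rate are placeholders for your own interface and link, and the commands need root.

```python
import subprocess

# Put fq_codel (RFC 8290) at the root of an interface; many distros
# already do this by default.
subprocess.run(["tc", "qdisc", "replace", "dev", "eth0", "root", "fq_codel"],
               check=True)

# Or, for a shaped last-mile link, CAKE with an explicit bandwidth:
# subprocess.run(["tc", "qdisc", "replace", "dev", "eth0", "root",
#                 "cake", "bandwidth", "100mbit"], check=True)

# Show which qdisc is installed and its statistics.
subprocess.run(["tc", "-s", "qdisc", "show", "dev", "eth0"], check=True)
```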

  • This reminds me of a parenting discussion I had recently.

    Who decided "fair" was the ultimate goal here? What about "fairness" is inherently superior?

    Let's take a simple example, traffic and emergency vehicles. Is it fair that emergency vehicles get to ignore all traffic laws and bypass any traffic they encounter? Hell no it's not fair! But it's also the correct decision.

    Priority matters more than fairness. This is a very misguided "study".

    • by Z80a ( 971949 )

      There's no priority on the internet

    • by mtaht ( 603670 )
      "Not having fairness as a goal" is an old debate and has been essentially the position of bob briscoe for ages, who is highly influential in the design of things like L4S.

      To tackle your analogy head on: Packets are not cars. Right there the analogy falls apart. Flows are not ambulances. Also... the paper was about the prospect of flow starvation under various conditions. However... being stuck with analogies...

      I wish we didn't use the terms fairness or fair queuing as much as we do, because the real
    • by jvkjvk ( 102057 )

      >Is it fair that emergency vehicles get to ignore all traffic laws and bypass any traffic they encounter? Hell no it's not fair! But it's also the correct decision.

      Yes, it is fair, I would say. They are going to or coming from some emergency, possibly with a life or more on the line. By pulling over, or stopping on a green, you are giving the people the same respect that is due you should it be *your* life on the line, next time. Seems pretty fair to me.

    • You are assuming a very naive and simplistic definition of fairness.

      I was in second or third grade when I came up with the postulate that fairness requires, at least, that the ambulance coming to help you gets the same priority as the ambulance coming to help me, assuming our injuries are comparable.

      Fair doesn't mean everyone gets the same results (i.e., travel time). That's the definition of what they call "equity", being very loud and clear that it's NOT fairness. Fair means the rules (which are followed) do

  • Presuming streaming is the biggest user - how about quantizing the start times of the most popular shows and then using broadcasting to reduce duplication?
    Or use proxying to localize the traffic?

    This is akin to using trucks or trains for mass deliveries instead of building more and more roads.

    • Yeah, a lot of content distribution problems could be solved by a broadcast model, the last time that was really attempted was VBI on TV channels, which unfortunately never really caught on.

      You could probably do some time cycled delivery of popular content on shared last-mile cable media, but that would require some major upgrades to user owned storage and of course the software to go with it. This also unfortunately won't help at all for the biggest problem area which is 20+yo last mile phone lines that c

  • To do realtime interactive video streaming, such as video games

  • I have a somewhat related question, if someone wouldn't mind pointing me in the right direction.

    In 2003 or so, I remember reading a slashdot article claiming that something about packet routing required mathematical computation, and that there was a way to form packets in a special way so as to get other computers to perform arbitrary mathematical calculations. Does this sound familiar to anyone?

    • by mtaht ( 603670 )
      Yes. A current term for this is in-network computing. The idea of doing computation with packets seems to get revisited every 10 years.
