Network 'Jitters' Confuse Packet-Routing Algorithms, Make Unfair Congestion Inevitable (ieee.org) 51
IEEE Spectrum reports that a new study finds that many key algorithms designed to control "congestion" delays on computer networks "may prove deeply unfair, letting some users hog all the bandwidth while others get essentially nothing."
[A]lthough hundreds of congestion control algorithms have been proposed in the last roughly 40 years, "there is no clear winner," says study lead author Venkat Arun, a computer scientist at MIT. "I was frustrated by how little we knew about where these algorithms would and would not work. This motivated me to create a mathematical model that could make more systematic predictions...."
Their new study finds that given the real-world complexity of network paths, there will always be a scenario where a problem known as "starvation" cannot be avoided — where at least one sender on a network receives almost no bandwidth compared to other users.... Congestion control algorithms rely on packet losses and delays as signals to infer congestion and decide how fast to send data. However, packets can get lost and delayed for reasons other than network congestion. For example, data may be held up and then released in a burst with other packets, or a receiver's acknowledgement that it received packets might get delayed. The researchers called delays that do not result from congestion "jitter."
Congestion control algorithms cannot distinguish between delays caused by congestion and delays caused by jitter. This can lead to problems, as delays caused by jitter are unpredictable. This ambiguity confuses senders, which can make them each estimate delay differently and send packets at unequal rates. The researchers found this eventually leads to situations where starvation occurs and some users get shut out completely. In the new study, the researchers analyzed whether every congestion control algorithm they were aware of, as well as some new ones they devised, could avoid starvation.
The scientists were surprised to find there were always scenarios with each algorithm where some people got all the bandwidth, and at least one person got basically nothing.... "Extreme unfairness happens even when everybody cooperates, and it is nobody's fault." Although existing approaches toward congestion control may not be able to avoid starvation, the aim now is to develop a new strategy that does, Arun says.
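The failure mode described above can be sketched in a few lines. The following toy Python model (all parameter names and values are invented for illustration; this is not the paper's model) shows a delay-based AIMD sender that halves its rate whenever measured RTT crosses a threshold. Because it cannot tell jitter from queueing delay, a sender on a jittery path backs off constantly and ends up with far less bandwidth than a sender on a calm path:

```python
import random

def run_sender(jitter, base_rtt=50.0, threshold=60.0, rounds=1000, seed=0):
    """Return the final send rate after `rounds` of AIMD updates.

    `jitter` is the worst-case extra delay (ms) NOT caused by congestion;
    the sender cannot distinguish it from queueing delay.
    """
    rng = random.Random(seed)
    rate = 1.0
    for _ in range(rounds):
        measured = base_rtt + rng.uniform(0.0, jitter)  # jitter looks like queueing
        if measured > threshold:
            rate /= 2.0   # multiplicative decrease: "congestion" inferred
        else:
            rate += 1.0   # additive increase
        rate = max(rate, 0.01)
    return rate

calm = run_sender(jitter=5.0)    # jitter never crosses the threshold
noisy = run_sender(jitter=30.0)  # jitter frequently crosses it
print(calm, noisy)  # the sender on the jittery path ends up far slower
```

Both senders follow the same rules and neither is misbehaving, which is the article's point: the unfairness arises from the ambiguity of the signal, not from anyone cheating.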
Add “ amt of time delayed” to the pack (Score:4, Interesting)
Re: (Score:2)
One solution could be adding ”amt of time delayed” to the packet. Like if it was delayed due to bursting or maybe QoS or some other reason that is not latency.
Great, set that on my packets and give them an advantage.
Re: (Score:3)
The sender SHOULD also set the Evil Bit to 1.
Re: (Score:1)
Is this what some game cheaters do to get an unfair advantage in online games? I seem to remember hearing something about them jittering their connection for an advantage somehow, but I could never understand how that would help - seems to me it would also make their own actions and reactions laggy.
Re:Add “ amt of time delayed” to the p (Score:5, Insightful)
The problem is trust. Anyone could tag their packets as heavily delayed and in need of urgent prioritization. An ISP could do it.
It would also benefit providers who underinvest.
Re: Add “ amt of time delayed” to the (Score:2)
Which only makes things worse. The only thing this will do is help the algo get an accurate time spent due to congestion.
It is only in your interest to set this truthfully.
Fairness 101 (Score:1)
For a good introduction as to what the problem is, see the Briscoe paper. [stanford.edu]
These guys do reference it, but so off-hand as to be grudging.
bandwidth (Score:2)
If you have congestion, you need to increase your bandwidth.
Re: (Score:3)
but where on the multihop route do you need to increase bandwidth?
Re: (Score:2)
Last mile. The consumer connection is always the limiting factor. Unless of course you have a crap ISP.
Re: bandwidth (Score:3)
Re: (Score:2)
where the utilization is above 50% (Score:2)
Any professionally operated network should still work when one of the redundant links fails, which means that typically no link should ever have more than 50% utilization.
The last mile is a bit special, as there it typically is under the control of the user. You can choose which packets your router will throw away. You can choose at what utilization level you operate your last mile. So congestion is a problem you can solve there.
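As a side note, the "no more than 50%" figure above is just the two-link case of a more general rule: leave enough headroom that traffic still fits after one redundant link fails. A tiny illustrative calculation (my own, not from the comment):

```python
def safe_utilization(n_links):
    """Max per-link utilization such that total traffic still fits
    on the remaining links after one of n redundant links fails."""
    return (n_links - 1) / n_links

print(safe_utilization(2))  # 0.5: the "no more than 50%" rule
print(safe_utilization(3))  # higher: more parallel links allow more headroom use
```

With only two redundant links, the survivor must be able to absorb the other's full load, hence 50%; with three links sharing the load, each can safely run closer to two-thirds.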
Re: (Score:2)
Re: bandwidth (Score:2)
Re: (Score:2)
Agreed, QoS is a band-aid for an undersized network.
Re: bandwidth (Score:2)
Re: (Score:2)
rfc7567 - if you have congestion apply aqm and fq (Score:2)
What's the answer? FQ (especially rfc8290), and AQM at every bottleneck on the internet.
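For readers unfamiliar with FQ, here is a toy Python sketch (my own simplification, not rfc8290's actual algorithm) of deficit round robin, the per-flow scheduling core that schemes like fq_codel build on. Each flow gets its own queue; the scheduler visits the queues in turn and lets each send up to a fixed quantum of bytes per round, so a heavy flow cannot starve a light one:

```python
from collections import deque

def drr(flows, quantum=500):
    """flows: dict name -> deque of packet sizes (bytes).
    Yields (flow, size) in the order packets are serviced."""
    deficits = {name: 0 for name in flows}
    while any(flows.values()):
        for name, q in flows.items():
            if not q:
                continue
            deficits[name] += quantum          # earn credit each round
            while q and q[0] <= deficits[name]:
                size = q.popleft()             # spend credit to send a packet
                deficits[name] -= size
                yield name, size

flows = {
    "bulk":  deque([1500] * 4),  # a heavy sender with big packets
    "small": deque([100] * 4),   # a light sender with small packets
}
order = list(drr(flows))
print(order)  # the light flow's packets are serviced promptly, not last
```

The light flow drains quickly because its packets fit within the quantum, while the bulk flow must accumulate credit across rounds; neither can shut the other out.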
Re: (Score:2)
Oh yeah thanks.
Don't manage congestion, avoid it (Score:2)
That doesn't solve the problem (Score:1)
You don't need much hardware support to make QoS work, but you need more than ethernet gives you. Since just about everything is ethernet these days, indeed, QoS doesn't work. But that's ethernet's fault, not QoS's. And it still can't give us infinite bandwidth everywhere, so congestion will remain with us.
It does. The fix is not free, but QoS costs more. (Score:3)
Re: (Score:2)
OK, but aren't the most congested links typically WAN links, and aren't the WAN links the least likely to use Ethernet? (I know some of them do.)
Re: (Score:2)
Token Ring [...] ARCnet
Strong "gramps talks about the war" vibes there... Look, the point is very easy to understand once you can see past the marketing: QoS is a great name for a crap concept. It means instead of not having congested networks, which is cheaper, you try to live with congestion by rationing bandwidth fairly, which is more expensive. More bandwidth is better, unless you're trying to sell QoS hardware.
Re: (Score:2)
sqm - smart queue management - does work (Score:3)
There's been something of a quiet revolution here. I suspect the vast majority of slashdot readers are running fq_codel today, as it's the default in linux, osx, and ios, all t
Randomness (Score:2)
>"The scientists were surprised to find there were always scenarios with each algorithm where some people got all the bandwidth, and at least one person got basically nothing...."
Then maybe introducing a random element in one of the steps somewhere might help? Or using the randomness to switch between algorithms on the fly, assuming that it is not always the same device(s) that end(s) up starved with each one.
Cycle through the Algorithms (Score:1)
Why would one algorithm solve the problem? (Score:2)
The one algorithm that never gives all the bandwidth to one party is round robin, but it also doesn't provide any QoS. But doesn't doing multiple classifying steps where one of them is round robin or similar solve this problem? For example, having multiple bins that are all serviced at different rates, and one of them being simple round robin so that everyone always gets something?
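The commenter's idea can be sketched roughly as follows (a toy Python model with invented names, not anything from the paper): a strict-priority scheduler that reserves every Nth service slot for a plain round-robin pass over all flows, so even a flow that loses every priority decision still gets served occasionally and cannot starve outright:

```python
import itertools
from collections import deque

def schedule(flows, priorities, reserve_every=4, slots=20):
    """flows: dict name -> deque of pending items.
    priorities: dict name -> int (higher wins the normal slots).
    Returns the list of flow names in service order."""
    rr = itertools.cycle(list(flows))  # round-robin cursor over flow names
    served = []
    for slot in range(slots):
        active = [n for n in flows if flows[n]]
        if not active:
            break
        if slot % reserve_every == 0:
            # reserved slot: advance round-robin until a non-empty flow is found
            name = next(n for n in rr if flows[n])
        else:
            # normal slot: strict priority, which on its own could starve "low"
            name = max(active, key=lambda n: priorities[n])
        served.append(name)
        flows[name].popleft()
    return served

flows = {"high": deque(range(15)), "low": deque(range(5))}
out = schedule(flows, {"high": 2, "low": 1})
print(out)  # "low" appears periodically instead of only after "high" drains
```

This guarantees a minimum service rate for every flow, though (as the parent comment notes) the round-robin component itself provides no QoS beyond that floor.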
Re: (Score:3)
Read the paper, not the blurb (;-)) (Score:5, Interesting)
The lead author, Venkat Arun, noted that fair queuing (FQ), not congestion control, is what one needs. And in the Linux kernel, that's part of both fq_codel and CAKE.
I expected better from the IEEE, who previously avoided writing click-bait articles...
By the way, the Dave he's writing to is Dave Taht, a /. member and an expert on network performance
Re: (Score:3)
Why is "fairness" the goal? (Score:2)
This reminds me of a parenting discussion I had recently.
Who decided "fair" was the ultimate goal here? What about "fairness" is inherently superior?
Let's take a simple example, traffic and emergency vehicles. Is it fair that emergency vehicles get to ignore all traffic laws and bypass any traffic they encounter? Hell no it's not fair! But it's also the correct decision.
Priority matters more than fairness. This is a very misguided "study".
Re: (Score:2)
There's no priority on the internet
Re: Why is "fairness" the goal? (Score:2)
Re: (Score:2)
To tackle your analogy head on: Packets are not cars. Right there the analogy falls apart. Flows are not ambulances. Also... the paper was about the prospect of flow starvation under various conditions. However... being stuck with analogies...
I wish we didn't use the terms fairness or fair queuing as much as we do, because the real
Re: (Score:2)
>Is it fair that emergency vehicles get to ignore all traffic laws and bypass any traffic they encounter? Hell no it's not fair! But it's also the correct decision.
Yes, it is fair, I would say. They are going to or coming from some emergency, possibly with a life or more on the line. By pulling over, or stopping on a green, you are giving the people the same respect that is due you should it be *your* life on the line, next time. Seems pretty fair to me.
That's equity, not fairness (Score:2)
You are assuming a very naive and simplistic definition of fairness.
I was in second or third grade when I came up with the postulate that fairness requires, at least, that the ambulance coming to help you gets the same priority as the ambulance coming to help me, assuming our injuries are comparable.
Fair doesn't mean everyone gets the same results (i.e. travel time). That's the definition for what they call "equity", being very loud and clear that it's NOT fairness. Fair means the rules (which are followed) do
Would quantizing video streaming start time help? (Score:2)
Presuming streaming is the biggest user - how about quantizing the start time of the most popular shows then use broadcasting to reduce duplication?
Or use proxying to localize the traffic?
This is akin to using trucks or trains for mass deliveries instead of building more and more roads.
Re: (Score:2)
Yeah, a lot of content distribution problems could be solved by a broadcast model; the last time that was really attempted was VBI on TV channels, which unfortunately never really caught on.
You could probably do some time cycled delivery of popular content on shared last-mile cable media, but that would require some major upgrades to user owned storage and of course the software to go with it. This also unfortunately won't help at all for the biggest problem area which is 20+yo last mile phone lines that c
Ah the perfect place... (Score:2)
To do realtime interactive video streaming, such as video games
Some related question... (Score:1)
I have a somewhat related question, if someone wouldn't mind pointing me in the right direction.
In 2003 or so, I remember reading a slashdot article claiming that something about packet routing required mathematical computation, and that there was a way to form packets in a special way so as to get other computers to perform arbitrary mathematical calculations. Does this sound familiar to anyone?
Re: (Score:2)