Fixing the Unfairness of TCP Congestion Control
duncan99 writes "George Ou, Technical Director of ZDNet, has an analysis today of an engineering proposal to address congestion issues on the internet. It's an interesting read, with sections such as "The politicization of an engineering problem" and "Dismantling the dogma of flow rate fairness". Short and long term answers are suggested, along with some examples of what incentives it might take to get this to work. Whichever side of the neutrality debate you're on, this is worth consideration."
A New Way to Look at Networking (Score:5, Informative)
Re:Not all sessions experience the same congestion (Score:3, Informative)
Sadly, no, upgrading doesn't help... (Score:4, Informative)
Re:So right, yet so wrong (Score:3, Informative)
Most people should be encrypting a large chunk of what goes across the Internet. Anything which sends a password or a session cookie should be encrypted. That's going to be fairly hard on traffic shapers.
Re:Not all sessions experience the same congestion (Score:4, Informative)
Right. The article seems to be written on the assumption that the bandwidth bottleneck is always in the first few hops, within the ISP. And in many cases for home users this is probably reasonably true; ISPs have been selling cheap packages with 'unlimited' and fast connections on the assumption that people would use a fraction of the possible bandwidth. More fool the ISPs that people found a use [plus.net] for all that bandwidth they were promised.
Obviously AIMD isn't going to fix this situation - it's not designed to. Similarly, expecting all computers to be updated in any reasonable timeframe won't happen (especially as a P2P user may have little motivation to 'upgrade' to receive slower downloads). Still, since we're assuming the bottleneck is in the first hops, it follows that the congestion is in the ISP's managed network. I don't see why the ISP can't therefore tag and shape traffic so that their routers equally divide available bandwidth between each user, not each TCP stream. In fact, most ISPs will give each home subscriber only 1 IP address at any point in time, so it should be easy to relate a TCP stream (or an IP packet type) to a subscriber. While elements of the physical network are always shared [plus.net], each user can still be given a logical connection with guaranteed bandwidth dimensions. This isn't a new concept either; it's just multiplexing using a suitable scheduler, such as rate-monotonic (you get some predefined amount) or round-robin (you get some fraction of the available amount).
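The per-user round-robin idea can be sketched in a few lines. This is a toy model, not real router code: the user IDs, packet sizes, and the drain-one-interval interface are all made up for illustration.

```python
from collections import deque

def round_robin_drain(queues, link_capacity):
    """Drain per-user packet queues one packet per user per round,
    until link_capacity bytes have been sent or all queues are empty.
    queues maps user id -> deque of packet sizes (bytes).
    Returns bytes sent per user."""
    sent = {user: 0 for user in queues}
    budget = link_capacity
    active = deque(queues)  # rotation of user ids
    while budget > 0 and active:
        user = active.popleft()
        q = queues[user]
        if not q:
            continue  # nothing queued; user drops out of the rotation
        pkt = q.popleft()
        if pkt > budget:
            q.appendleft(pkt)  # doesn't fit in this interval; stop (toy model)
            break
        sent[user] += pkt
        budget -= pkt
        active.append(user)  # back of the rotation
    return sent

# User A has 10 packets queued (many TCP streams), user B only 2,
# yet the 6000-byte interval is split evenly between the two users:
queues = {"A": deque([1500] * 10), "B": deque([1500] * 2)}
print(round_robin_drain(queues, 6000))  # {'A': 3000, 'B': 3000}
```

The point of the demo: opening more streams fills A's queue deeper but doesn't change A's share, which is exactly the per-user (rather than per-flow) fairness the comment argues for.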
Such 'technology' could be rolled out by ISPs according to their own roadmaps (although here in the UK it may require convincing BT Wholesale to update some of their infrastructure) and without requiring all users to upgrade their software or make any changes. However, I suspect this is where the "politicization of an engineering problem" occurs, because ISPs would rather do anything but admit they made a mistake in previous marketing of their services, raise subscriber prices, or make the investment to correctly prioritise traffic on a per-user basis, basically knocking contention rates right down to 1:1. It's much easier to simply ban or throttle P2P applications wholesale and blame high-bandwidth applications.
I have little sympathy for ISPs right now; the solution should be within their grasp.
One way to implement this... (Score:3, Informative)
I'd think that a simple fix to Jacobson's algorithm could help a lot. Instead of resetting the transmission rate on just one connection when a packet is dropped, reset all of them. This would have no effect on anyone using a single stream, and would eliminate problems when the source of the congestion is nearby. Variations on this theme would include resetting all connections for a single process or process group, which would throttle my P2P without affecting my browser. This alone would be more than enough incentive for me to adopt the patch: instead of having to schedule different bandwidth limits during the day, I could just let everything flow at full speed 24x7. And by putting the patch into the kernel, you'd have less to worry about from individual applications and/or users deciding whether to adopt it.
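That variation can be modeled in a few lines. This is a hypothetical sketch of the shared back-off idea (class and method names are invented, and windows are counted in segments), not a real kernel patch:

```python
class SharedCongestionGroup:
    """Toy model of the proposed tweak: all TCP connections in a group
    (e.g. one process) share a congestion response.  A loss on any one
    connection backs off every connection's window, so opening many
    streams no longer multiplies the group's share of the bottleneck."""

    def __init__(self, initial_cwnd=8):
        self.cwnd = {}              # per-connection congestion windows (segments)
        self.initial = initial_cwnd

    def open(self, conn_id):
        self.cwnd[conn_id] = self.initial

    def on_ack(self, conn_id):
        self.cwnd[conn_id] += 1     # additive increase, per connection as usual

    def on_loss(self, conn_id):
        # Standard TCP would halve only cwnd[conn_id]; the proposed
        # variation backs off every connection in the group.
        for c in self.cwnd:
            self.cwnd[c] = max(1, self.cwnd[c] // 2)

group = SharedCongestionGroup(initial_cwnd=8)
for c in ("a", "b", "c"):
    group.open(c)
group.on_loss("a")      # one drop halves all three windows: 8 -> 4 each
print(group.cwnd)       # {'a': 4, 'b': 4, 'c': 4}
```

A real implementation would live where the kernel already tracks per-socket state, which is why the comment's point about doing it in the kernel (rather than per application) matters.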
Bandwidth still isn't free. (Score:4, Informative)
You've got three options,
#1 Have an uncapped uncontended link for the £20/month you pay - you'll get about 250kbps.
#2 Have a fast link with a low bandwidth cap - think 8Mbit with a 50GB cap and chargeable bandwidth after that at around 50p-£1/GB
#3 Deal with an ISP who's selling bandwidth they don't have and expect them to try as hard as possible to make #1 look like #2 with no overage charges.
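Taking those figures at face value (and the 50p low end of the overage range, which is my choice of example), the two honest options work out as:

```python
# Back-of-envelope comparison of options #1 and #2 above.
base_price = 20.00          # GBP per month, both options
uncontended_kbps = 250      # option 1: what GBP 20 buys uncapped and uncontended
cap_gb = 50                 # option 2: included transfer
overage_per_gb = 0.50       # option 2: low end of the quoted 50p-GBP1/GB range

# Option 1 implies roughly GBP 80 per uncontended Mbps per month:
price_per_mbps = base_price / (uncontended_kbps / 1000)

# Option 2, for an (assumed) user transferring 100 GB in a month:
usage_gb = 100
bill = base_price + max(0, usage_gb - cap_gb) * overage_per_gb

print(price_per_mbps)   # 80.0  (GBP per uncontended Mbps per month)
print(bill)             # 45.0  (GBP for the month)
```

Which is the commenter's point in numbers: dedicated bandwidth is expensive, so any flat-rate "unlimited" price has to assume you won't actually use it.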
If you want a reliable fast internet connection, you want to go with a company that advertises #2. If you can't afford #2, you can spend your time working against the techs at ISP #3, but expect them to go out of their way to make your life shit until you take your service elsewhere, because you cost them money.
A single slow connection changes your TCP window (Score:4, Informative)
You can change them on the fly by echoing the name into your procfs, IIRC. Also, if you have the stomach for it, and two connections to the internet, you can load balance and/or stripe them using Linux Advanced Routing & Traffic Control [lartc.org] (mostly the ip(1) command). Very cool stuff if you want to route around a slow node or two (check out the multiple path stuff) at your ISP(s).
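For example, a multipath default route over two uplinks looks something like this (the interface names and gateway addresses are placeholders; new flows are balanced across the nexthops by weight):

```shell
# Two uplinks: eth0 via 192.0.2.1 and eth1 via 198.51.100.1 (example addresses).
# One multipath default route spreads outgoing flows across both:
ip route add default scope global \
    nexthop via 192.0.2.1 dev eth0 weight 1 \
    nexthop via 198.51.100.1 dev eth1 weight 1
```

Note this balances per route lookup, not per packet, so a single TCP stream still takes one path; striping one download needs the multiple-connection tricks the LARTC howto covers.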
Driving Miss Internet (Score:4, Informative)
It occurred to me this morning that driving on public roadways and surfing the public networks are identical experiences for the vast majority of people. That experience being: "mine, mine, ALL MINE!....hahahaha!" AKA "screw you...it's all about me!"
Now, I have the joy of managing a global network with links to 150 countries AND a 30 mile one way commute. So, I get to see, in microcosm, how the average citizen behaves in both instances.
From a network perspective, charge by usage...period. Fairness only works in FAIRy tales.
We do very good traffic shaping and management across the world. QoS policies are very well designed and work. The end user locations do get charged an allocation for their network costs. So, you'd think the WAN would run nicely and fairly. After all, if the POS systems are impacted, we don't make money and that affects everyone, right?
Hardly. While we block obvious stuff like YouTube and Myspace, we have "smart" users who abuse the privilege. So, when we get a ticket about "poor network performance", we go back to a point before the problem report and look at the flows. 99 out of 100 times, it's one or more users hogging the pipe with their own agenda. Now, the branch manager gets a detailed report of what the employees were doing and how much it cost them. Of course, porn surfers get fired immediately. Abusers of the privilege just get to wonder what year they'll see a merit increase, if at all.
So, even with very robust network tuning and traffic shaping, the "me, me" crowd will still screw everybody else...and be proud that they did. Die a miserable death in prison you ignorant pieces of shit.
Likewise the flaming assholes I compete with on the concrete and asphalt network link between home and office every day. This morning, some idiot in a subcompact stuck herself about 2 feet from my rear bumper...at 70mph. If I apply ANY braking for ANY reason, this woman will collide with me. So, I tapped the brakes so she'd back off. She backed off with an upraised hand that seemed to say "yeah, I know I was in the wrong and being unsafe". She then performed 9 lane changes, all without signaling once, and managed to gain....wait for it.... a whole SEVEN SECONDS of time over 10 miles of driving.
I see it every day. People driving with little regard for anyone else and raising the costs for the rest of us. On the network, or on the highway, same deal. And they feel like they did something worthwhile. I've talked to many users at work and the VAST majority are not only unapologetic, but actually SMUG. Many times, I'll get the "I do this at home, so it must be okay at work". To which I say, "well you cannot beat your wife and molest your kids at the office, now can you?"
My tolerance of, and faith in, my fellow man to "do the right thing" are at zero.
A technical solution (to TCP Congestion Control, etc.) is like teaching a pig to sing: horrible results. Charge the thieving, spamming bastards through the nose AND constrain their traffic. That'll get better results than any pollyanna crap about "fairness".
Re:FUD (Score:3, Informative)
You're making the same mistake as the author of that article. What you fail to realise is precisely why the single connection did not operate as fast: because your kernel was slowing it down incorrectly. You are not fighting other users by opening more connections, you are fighting your own TCP implementation.
Yes, that bandwidth came from somewhere - but it's probably bandwidth that wasn't in use anyway, and your TCP implementation was just failing to get at it. For a change that dramatic, I bet it was the Windows implementation (which is known to suck).
All of this has NOTHING TO DO with congestion control on the internet. This is the ad-hoc mode used between equal peers on brainless bus systems like unmanaged switches and hubs. On the internet, congestion control is performed by QoS on real routers. ISPs track the bandwidth load by source address or whatever, and distribute traffic fairly between them (some penny-ante ISPs may run without QoS, but you shouldn't be using them). You are not "gaming the system" by working around the limitations of your own TCP implementation, because that isn't the system.
The article is pure gibberish. And it's wrong.
Re:This is a good proposal (Score:2, Informative)
Since when is it forced? I keep it turned off and download patches from filefront.
Re:Why this is a non-issue now (Score:3, Informative)
Fairness is not the problem. Fairness is the wedge-issue that CATV-ISPs are trying to use to justify their behavior.
I personally like the rudimentary aspects of the weighted fair queuing proposal -- so let's imagine that we had it. Would Comcast still have a problem with too many upload bytes from too many homes competing for the upload path back to the CMTS? Yes.
The real problem is that CATV ISPs are at their upper limits and FIOS is currently superior. Most CATV nets are DOCSIS 1.1, with neighborhoods of 400-500 homes sharing 9-10 Mbps back to the CMTS. Meanwhile, they have to compete with FIOS advertising 15/15, 15-20/5, 20/20, etc. Mbps. Due to TCP and higher-layer protocol return-path overhead, CATV ISPs can only offer a download speed if they can reserve roughly 5% of it in the upload pipe -- so their download is limited by their upload. For example, downloading at 8 Mbps from NNTP requires around 200 Kbps of upload. Their upload was 256 Kbps. What happened next: to increase their download speed offering, they pushed out configuration files allowing uploads of 384 Kbps! Cost: $0.00 -- no new equipment, no neighborhood splits -- just "let's pretend that we have the bandwidth." After all, customers don't care about upload speed, they just want to download.
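That ~200 Kbps figure is roughly what pure ACK overhead predicts if the receiver ACKs every full-size segment. The 1500-byte segment and 40-byte ACK sizes below are my assumptions, not from the comment:

```python
# Rough check of the ACK-overhead figure: pulling 8 Mbps in 1500-byte
# segments, with one ~40-byte ACK (IP + TCP headers, no options) per segment.
download_bps = 8_000_000
segment_bytes = 1500
ack_bytes = 40

segments_per_sec = download_bps / (segment_bytes * 8)   # ~667 segments/s
ack_bps = segments_per_sec * ack_bytes * 8              # ~213 kbps of upload

print(round(ack_bps / 1000))        # ~213, close to the ~200 Kbps quoted
print(round(ack_bps / 2 / 1000))    # ~107 with delayed ACKs (one per 2 segments)
```

Either way, a 256 Kbps upload channel leaves almost nothing for the subscriber's own upstream traffic once an 8 Mbps download is running, which is the asymmetry the comment is describing.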
Heh.