Networking

Fixing the Unfairness of TCP Congestion Control

duncan99 writes "George Ou, Technical Director of ZDNet, has an analysis today of an engineering proposal to address congestion issues on the internet. It's an interesting read, with sections such as "The politicization of an engineering problem" and "Dismantling the dogma of flow rate fairness". Short and long term answers are suggested, along with some examples of what incentives it might take to get this to work. Whichever side of the neutrality debate you're on, this is worth consideration."
  • by StCredZero ( 169093 ) on Monday March 24, 2008 @10:50AM (#22844968)
    A New Way to Look at Networking [google.com] is a Google Tech Talk [google.com]. It's about an hour long, but there's a lot of very good and fascinating historical information, which lays the groundwork for his proposal. Van Jacobson was around in the early days, when TCP/IP was being invented. He's proposing a new protocol layered on top of TCP/IP that could turn the Internet into a true broadcast medium -- one even more proof against censorship than the current one!
  • by Kjella ( 173770 ) on Monday March 24, 2008 @11:05AM (#22845082) Homepage
    Well, I don't know about your Internet connection, but the only place I notice congestion is on the first few hops (and possibly the last few, if we're talking about a single host and not P2P). Beyond that, on the big backbone links, I at least don't notice it, though I suppose the computer might.
  • by nweaver ( 113078 ) on Monday March 24, 2008 @11:29AM (#22845328) Homepage
    There have been plenty of lessons, Japan most recently, that upping the available capacity simply ups the amount of bulk-data P2P, without helping the other flows nearly as much.

  • by Sancho ( 17056 ) on Monday March 24, 2008 @11:45AM (#22845482) Homepage
    The problem with traffic shaping is that eventually, once everyone starts encrypting their data and using recognized ports (like 443) to pass non-standard traffic, you've got to start shaping just about everything. Shaping only works as long as you can recognize and classify the data.

    Most people should be encrypting a large chunk of what goes across the Internet. Anything which sends a password or a session cookie should be encrypted. That's going to be fairly hard on traffic shapers.
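
    Once the payload is opaque, a shaper's classifier can't tell tunneled P2P from ordinary secure web traffic. A toy sketch of the problem (the port table and labels here are made up):

        KNOWN_PORTS = {80: "http", 25: "smtp", 6881: "bittorrent"}

        def classify(dst_port, payload):
            if dst_port == 443:
                # TLS payloads are opaque: P2P tunneled over 443 looks
                # exactly like ordinary secure web traffic.
                return "https-or-anything-tunneled-over-443"
            if payload.startswith(b"\x13BitTorrent protocol"):
                return "bittorrent"  # plaintext handshake is recognizable
            return KNOWN_PORTS.get(dst_port, "unknown")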
  • by Mike McTernan ( 260224 ) on Monday March 24, 2008 @12:12PM (#22845810)

    Right. The article seems to be written on the assumption that the bandwidth bottleneck is always in the first few hops, within the ISP. And for many home users this is probably true; ISPs have been selling cheap packages with 'unlimited', fast connections on the assumption that people would use only a fraction of the possible bandwidth. More fool the ISPs when people found a use [plus.net] for all that bandwidth they were promised.

    Obviously AIMD isn't going to fix this situation - it's not designed to. Similarly, expecting all computers to be updated in any reasonable timeframe is unrealistic (especially as a P2P user may have little motivation to 'upgrade' in order to receive slower downloads). Still, if we assume the bottleneck is in the first hops, it follows that the congestion is in the ISP's managed network. I don't see why the ISP can't therefore tag and shape traffic so that its routers divide the available bandwidth equally between users, not TCP streams. In fact, most ISPs give each home subscriber only one IP address at any point in time, so it should be easy to relate a TCP stream (or any IP packet) to a subscriber. While elements of the physical network are always shared [plus.net], each user can still be given a logical connection with guaranteed bandwidth. This isn't a new concept either; it's just multiplexing with a suitable scheduler, such as rate-monotonic (you get some predefined amount) or round-robin (you get some fraction of the available amount). There's a rough sketch of the round-robin case at the end of this comment.

    Such 'technology' could be rolled out by ISPs according to their roadmaps (although here in the UK it may require convincing BT Wholesale to update some of their infrastructure) and without requiring users to upgrade their software or make any changes. However, I suspect this is where "The politicization of an engineering problem" comes in: ISPs would rather do anything but admit they made a mistake in the previous marketing of their services, raise subscriber prices, or make the investment to prioritise traffic correctly on a per-user basis, knocking contention ratios right down to 1:1. It's much easier to simply ban or throttle P2P applications wholesale and blame high-bandwidth applications.

    I have little sympathy for ISPs right now; the solution should be within their grasp.
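
    Here's roughly what I mean by dividing bandwidth per user rather than per stream: a toy deficit-round-robin sketch in Python. The names and the 1500-byte quantum are made up for illustration; this is not code from any real router.

        from collections import defaultdict, deque

        class PerUserRoundRobin:
            def __init__(self, quantum_bytes=1500):
                self.queues = defaultdict(deque)  # subscriber IP -> queued packets
                self.deficit = defaultdict(int)   # per-subscriber byte allowance
                self.quantum = quantum_bytes      # bytes granted per round

            def enqueue(self, src_ip, packet):
                # All flows from one subscriber share one queue, so opening
                # 100 TCP streams buys no extra share of the link.
                self.queues[src_ip].append(packet)

            def dequeue_round(self):
                # One scheduling round: every active subscriber gets one quantum.
                sent = []
                for ip in list(self.queues):
                    queue = self.queues[ip]
                    self.deficit[ip] += self.quantum
                    while queue and len(queue[0]) <= self.deficit[ip]:
                        packet = queue.popleft()
                        self.deficit[ip] -= len(packet)
                        sent.append((ip, packet))
                    if not queue:
                        del self.queues[ip]
                        self.deficit[ip] = 0
                return sent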

  • by vrmlguy ( 120854 ) <samwyse&gmail,com> on Monday March 24, 2008 @12:13PM (#22845818) Homepage Journal

    Simply by opening up 10 to 100 TCP streams, P2P applications can grab 10 to 100 times more bandwidth than a traditional single-stream application under a congested Internet link. [...] The other major loophole in Jacobson's algorithm is the persistence advantage of P2P applications where P2P applications can get another order of magnitude advantage by continuously using the network 24×7.
    I agree with the first point, but not with the second. One of the whole points of having a computer is that it can do things unattended. Fortunately, the proposal seems to fix only the first issue.

    I'd think that a simple fix to Jacobson's algorithm could help a lot. Instead of resetting the transmission rate on just one connection after a dropped packet, reset all of them. This would have no effect on anyone using a single stream, and would eliminate problems when the source of the congestion is nearby. Variations on this theme would include resetting all connections for a single process or process group, which would throttle my P2P without affecting my browser. This alone would be more than enough incentive for me to adopt the patch: instead of having to schedule different bandwidth limits during the day, I could just let everything flow at full speed 24x7. And by putting the patch into the kernel, you'd have less to worry about from individual applications and/or users deciding whether to adopt it.
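
    Roughly, the per-process-group variation could look like the following toy sketch (user-space Python with made-up names; a real version would live in the kernel's TCP stack):

        class Connection:
            def __init__(self):
                self.cwnd = 0                  # congestion window, in segments

        class SharedCongestionGroup:
            # One shared congestion response per process group instead of
            # one per connection.
            def __init__(self, initial_cwnd=10):
                self.initial_cwnd = initial_cwnd
                self.connections = []

            def add(self, conn):
                conn.cwnd = self.initial_cwnd
                self.connections.append(conn)

            def on_ack(self, conn):
                conn.cwnd += 1                 # simplistic additive increase

            def on_loss(self, conn):
                # The key change: a loss on ANY connection backs off ALL of
                # them, so 100 streams gain nothing over 1 under congestion.
                for c in self.connections:
                    c.cwnd = max(1, c.cwnd // 2)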
  • by clare-ents ( 153285 ) on Monday March 24, 2008 @12:35PM (#22846128) Homepage
    In the UK, bandwidth out of BT's ADSL network costs ~£70/Mbit/month wholesale. Consumer DSL costs ~£20/month.

    You've got three options,

    #1 Have an uncapped uncontended link for the £20/month you pay - you'll get about 250kbps.

    #2 Have a fast link with a low bandwidth cap - think 8Mbits with a 50GB cap and chargeable bandwidth after that at around ~ 50p-£1/GB

    #3 Deal with an ISP who's selling bandwidth they don't have and expect them to try as hard as possible to make #1 look like #2 with no overage charges.

    If you want a reliable, fast Internet connection, go with a company that advertises #2. If you can't afford #2, you can spend your time working against the techs at ISP #3, but expect them to go out of their way to make your life shit until you take your service elsewhere, because you cost them money.
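
    The arithmetic behind #1, for anyone checking (round numbers from the figures above):

        wholesale = 70.0  # GBP per Mbit/s per month out of BT's network
        retail = 20.0     # GBP per month for consumer DSL

        uncontended_mbit = retail / wholesale
        print(f"{uncontended_mbit * 1000:.0f} kbps uncontended")         # ~286 kbps
        print(f"8 Mbit needs ~{8 / uncontended_mbit:.0f}:1 contention")  # ~28:1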

  • If you're using Linux, which TCP congestion control algorithm are you using? Reno isn't very fair; if a single connection hits congestion beyond the first hop, you'll slow down the rest of your connections as its window shrinks. Have you tried BIC, CUBIC, Veno, or any of the other 9 or 10 congestion algorithms?

    You can change them on the fly by echoing the name into procfs, IIRC. Also, if you have the stomach for it, and two connections to the Internet, you can load-balance and/or stripe them using Linux Advanced Routing & Traffic Control [lartc.org] (mostly the ip(1) command). Very cool stuff if you want to route around a slow node or two at your ISP(s) (check out the multiple-path stuff).
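
    Something like this rough Python sketch should do it (the procfs paths are the standard Linux ones, but run it as root and expect the available list to vary with your kernel and loaded modules):

        AVAILABLE = "/proc/sys/net/ipv4/tcp_available_congestion_control"
        CURRENT = "/proc/sys/net/ipv4/tcp_congestion_control"

        def algorithms():
            with open(AVAILABLE) as f:
                return f.read().split()

        def set_algorithm(name):
            # Equivalent to: echo name > /proc/sys/net/ipv4/tcp_congestion_control
            if name not in algorithms():
                raise ValueError(f"{name} not loaded; try: modprobe tcp_{name}")
            with open(CURRENT, "w") as f:  # needs root
                f.write(name)

        set_algorithm("veno")  # or "cubic", "westwood", ...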
  • by UttBuggly ( 871776 ) on Monday March 24, 2008 @12:45PM (#22846280)
    WARNING! Core dump follows.

    It occurred to me this morning that driving on public roadways and surfing public networks are identical experiences for the vast majority of people. That experience being: "mine, mine, ALL MINE! ...hahahaha!" AKA "screw you, it's all about me!"

    Now, I have the joy of managing a global network with links to 150 countries AND a 30 mile one way commute. So, I get to see, in microcosm, how the average citizen behaves in both instances.

    From a network perspective, charge by usage...period. Fairness only works in FAIRy tales.

    We do very good traffic shaping and management across the world. QoS policies are very well designed and work. The end user locations do get charged an allocation for their network costs. So, you'd think the WAN would run nicely and fairly. After all, if the POS systems are impacted, we don't make money and that affects everyone, right?

    Hardly. While we block obvious stuff like YouTube and Myspace, we have "smart" users who abuse the privilege. So, when we get a ticket about "poor network performance", we go back to a point before the problem report and look at the flows. 99 out of 100 times, it's one or more users hogging the pipe with their own agenda. Now, the branch manager gets a detailed report of what the employees were doing and how much it cost them. Of course, porn surfers get fired immediately. Abusers of the privilege just get to wonder what year they'll see a merit increase, if at all.

    So, even with very robust network tuning and traffic shaping, the "me, me" crowd will still screw everybody else...and be proud that they did. Die a miserable death in prison you ignorant pieces of shit.

    Likewise the flaming assholes I compete with on the concrete-and-asphalt network link between home and office every day. This morning, some idiot in a subcompact parked herself about 2 feet from my rear bumper... at 70mph. If I applied ANY braking for ANY reason, this woman would collide with me. So, I tapped the brakes so she'd back off. She backed off with an upraised hand that seemed to say "yeah, I know I was in the wrong and being unsafe." She then performed 9 lane changes, all without signaling once, and managed to gain... wait for it... a whole SEVEN SECONDS over 10 miles of driving.

    I see it every day. People driving with little regard for anyone else and raising the costs for the rest of us. On the network, or on the highway, same deal. And they feel like they did something worthwhile. I've talked to many users at work and the VAST majority are not only unapologetic, but actually SMUG. Many times, I'll get the "I do this at home, so it must be okay at work". To which I say, "well you cannot beat your wife and molest your kids at the office, now can you?"

    My tolerance of, and faith in, my fellow man to "do the right thing" are at zero.

    A technical solution (to TCP congestion control, etc.) is like teaching a pig to sing: horrible results. Charge the thieving, spamming bastards through the nose AND constrain their traffic. That'll get better results than any pollyanna crap about "fairness".

  • Re:FUD (Score:3, Informative)

    by asuffield ( 111848 ) <asuffield@suffields.me.uk> on Monday March 24, 2008 @01:15PM (#22846790)

    What about download accelerators? On a congested server, I've seen a near-linear increase in bandwidth from opening multiple streams (which many servers now limit, but that's not really the point). When I go from 25kb/s to 100kb/s, I took that bandwidth from someone.


    You're making the same mistake as the author of that article. What you fail to realise is precisely why the single connection did not operate as fast: because your kernel was slowing it down incorrectly. You are not fighting other users by opening more connections, you are fighting your own TCP implementation.

    Yes, that bandwidth came from somewhere - but it's probably bandwidth that wasn't in use anyway, and your TCP implementation was just failing to get at it. For a change that dramatic, I bet it was the Windows implementation (which is known to suck).

    All of this has NOTHING TO DO with congestion control on the internet. This is the ad-hoc mode used between equal peers on brainless bus systems like unmanaged switches and hubs. On the internet, congestion control is performed by QoS on real routers. ISPs track the bandwidth load by source address or whatever, and distribute traffic fairly between them (some penny-ante ISPs may run without QoS, but you shouldn't be using them). You are not "gaming the system" by working around the limitations of your own TCP implementation, because that isn't the system.

    The article is pure gibberish. And it's wrong.
  • by Alarindris ( 1253418 ) on Monday March 24, 2008 @01:51PM (#22847382)

    2. Don't certain companies like Blizzard force P2P patches onto their customers?

    Since when is it forced? I keep it turned off and download patches from filefront.
  • by funchords ( 937529 ) <robb@funchords.com> on Monday March 24, 2008 @05:08PM (#22850070) Homepage
    John,

    Fairness is not the problem. Fairness is the wedge-issue that CATV-ISPs are trying to use to justify their behavior.

    I personally like the rudimentary aspects of the weighted fair queuing proposal -- so let's imagine we had it. Would Comcast still have a problem with too many upload bytes from too many homes competing for the upload path back to the CMTS? Yes.

    The real problem is that CATV-ISPs are at their upper limits and FIOS is currently superior. Most CATV nets are DOCSIS 1.1: neighborhoods of 400-500 homes sharing 9-10 Mbps back to the CMTS. Meanwhile, they have to compete with FIOS advertising 15/15, 15-20/5, 20/20 Mbps, and so on. Due to TCP and higher-layer return-path overhead, CATV ISPs can only offer a download speed if they can reserve about 5% of it in the upload pipe -- so their download is limited by their upload. For example, downloading at 8 Mbps from NNTP requires around 200 Kbps of upload. Their upload was 256 Kbps. What happened next: to increase their download speed offering, they pushed out configuration files allowing uploads of 384 Kbps! Cost: $0.00 -- no new equipment, no neighborhood splits -- just "let's pretend that we have the bandwidth." After all, customers don't care about upload speed, they just want to download.

    Heh.
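
    For reference, the upload-overhead arithmetic goes roughly like this (idealized: 1500-byte segments, 40-byte header-only ACKs, one ACK per segment; real stacks vary):

        download_bps = 8_000_000   # 8 Mbps download stream
        segment_bits = 1500 * 8    # one full-size TCP segment on the wire
        ack_bits = 40 * 8          # header-only ACK

        acks_per_sec = download_bps / segment_bits   # ~667 ACKs/s
        ack_bps = acks_per_sec * ack_bits
        print(f"{ack_bps / 1000:.0f} kbps of upload just for ACKs")  # ~213 kbps

        # With delayed ACKs (one per two segments) it's still ~107 kbps,
        # so a 256 kbps upload pipe caps the download speed you can sustain.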
