Networking

Fixing the Unfairness of TCP Congestion Control 238

duncan99 writes "George Ou, Technical Director of ZDNet, has an analysis today of an engineering proposal to address congestion issues on the internet. It's an interesting read, with sections such as "The politicization of an engineering problem" and "Dismantling the dogma of flow rate fairness". Short and long term answers are suggested, along with some examples of what incentives it might take to get this to work. Whichever side of the neutrality debate you're on, this is worth consideration."
This discussion has been archived. No new comments can be posted.

  • Neutrality debate? (Score:2, Insightful)

    by Anonymous Coward on Monday March 24, 2008 @10:51AM (#22844978)

    Whichever side of the neutrality debate you're on, this is worth consideration.

    There is a debate? I thought it was more like a few monied interests decided "there is a recognized correct way to handle this issue; I just make more profit and have more control if I ignore that." That's not the same thing as a debate.
  • by spydum ( 828400 ) on Monday March 24, 2008 @10:56AM (#22845018)
    For what it's worth, Net Neutrality IS a political fight; P2P is not the cause, just the straw that broke the camel's back. Fixing the fairness problem of TCP flow control will not make Net Neutrality go away. Nice fix though, too bad getting people to adopt it would be a nightmare. Where was this suggestion 15 years ago?
  • by Chris Snook ( 872473 ) on Monday March 24, 2008 @10:58AM (#22845024)
    Weighted TCP is a great idea. That doesn't change the fact that net neutrality is a good thing, or that traffic shaping is a better fix for network congestion than forging RST packets.

    The author of this article is clearly exploiting the novelty of a technological idea to promote his slightly related political agenda, and that's deplorable.
  • by Anonymous Coward on Monday March 24, 2008 @10:58AM (#22845032)
    Just because someone comes up with a high-bandwidth protocol or service does not mean that it can be supported or should be supported with our current network capacity - especially at the expense of other protocols. Nor does it mean network providers (and ultimately users) should bear the expense of every new protocol someone on the network edge dreams up. Throttling disruptive protocols may be the least reactive solution. Blocking such protocols may be equally valid. I don't see this as a fairness issue.
  • Wag their fingers? (Score:2, Insightful)

    by rastilin ( 752802 ) on Monday March 24, 2008 @11:04AM (#22845074)
    How do they get off saying THAT line? By their own admission, the P2P apps simply TRANSFER MORE DATA; it's not an issue of congestion control if one guy uploads 500KB/day and another uses 500MB in the same period. Hell, you could up-prioritize all HTTP uploads and most P2P uploaders wouldn't care or notice. The issue with Comcast is that instead of prioritizing HTTP, they're dropping BitTorrent. There's a big difference between taking a small speed bump on non-critical protocols for the benefit of the network and having those protocols disabled entirely.

    Between the data transfer amount and THAT line, this reads like a puff piece. It's not as if the P2P applications were the first to come up with multiple connections, either; I'm pretty sure download managers like "GetRight" did it first.
  • because the Internet is a group of autonomous systems (hence the identifier "ASN") agreeing to exchange traffic for as long as it makes sense for them to do so. There is no central Internet "authority" (despite what Dept of Commerce, NetSol, Congress and others keep trying to assert) - your rules end at the edge of my network. Your choices are to exchange traffic with me, or not, but you don't get to tell me how to run things (modulo the usual civil and criminal codes regarding the four horsemen of the information apocalypse). Advocates of network neutrality legislation would clearly like to have some add'l regulatory framework in place to provide a stronger encouragement to "good behavior" (as set out in the RFCs and in the early history of internetworks and the hacking community) than the market provides in some cases. It remains to be seen whether the benefits provided by that framework would at all outweigh the inevitable loopholes, unintended consequences and general heavy-handed cluelessness that's been the hallmark of any federal technology legislation.

    Those networks that show consistently boorish behavior to other networks eventually find themselves isolated or losing customers (e.g. Cogent, although somehow they still manage to retain some business - doubtless due to the fact that they're the cheapest transit you can scrape by with in most cases, although anybody who relies on them is inevitably sorry).

    The Internet will be a democracy when every part of the network is funded, built and maintained by the general public. Until then, it's a loose confederation of independent networks who cooperate when it makes sense to do so. Fortunately, the exceedingly wise folks that wrote the protocols that made these networks possible did so in a manner that encourages interconnectivity (and most large networks tend to be operated by folks with similar clue - when they're not, see the previous paragraph).

    Not everything can be (or even should be) a democracy. Now get off my lawn, you damn hippies.
  • by Anonymous Coward on Monday March 24, 2008 @11:11AM (#22845130)
    Throttling disruptive protocols may be the least reactive solution. Blocking such protocols may be equally valid. I don't see this as a fairness issue.

    There is no such thing as a "disruptive protocol", it's not like there's some kid with tourettes screaming obscenities at the back of the classroom keeping other people from learning. No protocol goes out and kneecaps other packets on its own. If the ISP wants to sell a megabit of bandwidth, it has plenty of tools available to make sure I don't take more than my megabit of bandwidth that don't rely on specifically targeting protocols like VoIP or iTMS that compete with products they sell.
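
    A per-subscriber cap of the kind described above is essentially a token bucket; here is a minimal Python sketch, where the rate and burst values are illustrative assumptions rather than any real ISP's configuration:

    import time

    class TokenBucket:
        """Per-subscriber rate limiter: tokens refill at `rate` bytes/sec,
        bursts are capped at `burst` bytes; packets that would overdraw
        the bucket are dropped (a real device might queue or mark them)."""

        def __init__(self, rate_bytes_per_sec, burst_bytes):
            self.rate = rate_bytes_per_sec
            self.burst = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_len):
            now = time.monotonic()
            # Refill tokens for the time elapsed since the last packet.
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_len <= self.tokens:
                self.tokens -= packet_len
                return True   # forward the packet
            return False      # over the subscriber's cap: drop it

    # Illustrative: a "1 megabit" subscriber (125,000 bytes/sec) with a 64 KB burst.
    bucket = TokenBucket(rate_bytes_per_sec=125_000, burst_bytes=64_000)

    Enforced per subscriber at the access link, a limiter like this never needs to know whether the traffic is BitTorrent, VoIP or HTTP, which is exactly the point being made above.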
  • by Sir.Cracked ( 140212 ) on Monday March 24, 2008 @11:18AM (#22845200) Homepage
    This article is all well and good, but it fails to recognize that there are two types of packet discrimination being kicked around: protocol filtering/prioritization, and source/destination filtering/prioritization. There are certainly good and bad ways of doing the former, and some of the bad ways are really bad (for a "for instance", see Comcast). However, the basic concept, that network bandwidth is finite over a set period of time and that finite resource must be utilized efficiently, is not one most geek types will disagree with you on. Smart treatment of packets is something few object to.

    What brings a large objection is the source/destination filtering. I'm going to downgrade service on packets coming from Google Video because they haven't paid our "upgrade" tax, and coincidentally, we're invested in YouTube. This is an entirely different issue, and is not an engineering issue at all. It is entirely political. We know it is technologically possible. People block sites today, for parental censorship reasons, among others. It would be little challenge, as an engineer, to set an arbitrary set of packets from a source address to a VERY low priority. This, however, violates what the internet is for, and in the end, if my ISP is doing this, am I really connected to the "Internet", or just a dispersed corporate net, similar to the old AOL?

    This is, and will be, a political question, and if it goes the wrong way, it will destroy what is fundamentally interesting about the net: the ability to, with one connection, talk to anyone else, anywhere in the world, no different than if they were in the next town over.

  • Confusing... (Score:4, Insightful)

    by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Monday March 24, 2008 @11:26AM (#22845294) Journal

    I need coffee before I'll really understand this, but here's a first attempt:

    Despite the undeniable truth that Jacobson's TCP congestion avoidance algorithm is fundamentally broken, many academics and now Net Neutrality activists along with their lawyers cling to it as if it were somehow holy and sacred. Groups like the Free Press and Vuze (a company that relies on P2P) file FCC complaints against ISPs (Internet Service Providers) like Comcast that try to mitigate the damage caused by bandwidth hogging P2P applications by throttling P2P.

    Ok, first of all, that isn't about TCP congestion avoidance, at least not directly. (Doesn't Skype use UDP, anyway?)

    But the problem here, I think, is that George Ou is assuming that Comcast is deliberately targeting P2P, and moreover, that they have no choice but to deliberately target P2P. I'd assumed that they were simply targeting any application that uses too many TCP connections -- thus, BitTorrent can still work, and still be reasonably fast, by decreasing the number of connections. Make too many connections and Comcast starts dropping them, no matter what the protocol.

    They tell us that P2P isn't really a bandwidth hog and that P2P users are merely operating within their contracted peak bitrates. Never mind the fact that no network can ever support continuous peak throughput for anyone and that resources are always shared; they tell us to just throw more money and bandwidth at the problem.

    Well, where is our money going each month?

    But more importantly, the trick here is that no ISP guarantees any peak bitrate, or average bitrate. Very few ISPs even tell you how much bandwidth you are allowed to use, but most reserve the right to terminate service for any reason, including "too much" bandwidth. Comcast tells you how much bandwidth you may use, in units of songs, videos, etc, rather than bits or bytes -- kind of insulting, isn't it?

    I would be much happier if ISPs were required to disclose, straight up, how much total bandwidth they have (up and down), distributed among how many customers. Or, at least, full disclosure of how much bandwidth I may use as a customer. Otherwise, I'm going to continue to assume that I may use as much bandwidth as I want.

    But despite all the political rhetoric, the reality is that the ISPs are merely using the cheapest and most practical tools available to them to achieve a little more fairness and that this is really an engineering problem.

    Yes, it is a tricky engineering problem. But it's also a political one, as any engineering solution would have to benefit everyone, and not single out individual users or protocols. Most solutions I've seen that accomplish this also create a central point of control, which makes them suspect -- who gets to choose what protocols and usage patterns are "fair"?

    Under a weighted TCP implementation, both users get the same amount of bandwidth regardless of how many TCP streams each user opens. This is accomplished by the single-stream application tagging its TCP stream at a higher weight than a multi-stream application. TCP streams with higher weight values won't be slowed as much by the weighted TCP stack, whereas TCP streams with smaller weight values will be slowed more drastically.

    Alright. But as I understand it, this is a client-side implementation. How do you enforce it?

    At first glance, one might wonder what might prompt a P2P user to unilaterally and voluntarily disarm his or her multi-stream and persistence cheat advantage by installing a newer TCP implementation.

    Nope. What I wonder is why a P2P user might want to do that, rather than install a different TCP implementation -- one which tags every single TCP connection as "weighted".

    Oh, and who gets to tag a connection -- the source, or the destination? Remember that on average, some half of th
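
    To make the quoted mechanism concrete, here is a toy Python model of weight-aware congestion avoidance; the weight values, window sizes and backoff rule are assumptions for illustration, not the proposal's actual algorithm or any real TCP stack:

    # Toy AIMD model: on a loss event, a flow's congestion window is cut in
    # proportion to its weight, so low-weight (e.g. bulk P2P) flows back off
    # harder than high-weight interactive flows. Numbers are illustrative.

    class WeightedFlow:
        def __init__(self, name, weight, cwnd=10.0):
            self.name = name
            self.weight = weight    # 0 < weight <= 1; weight 1.0 backs off like classic TCP
            self.cwnd = cwnd        # congestion window, in segments

        def on_ack(self):
            # Additive increase, unchanged from classic congestion avoidance.
            self.cwnd += 1.0 / self.cwnd

        def on_loss(self):
            # Classic TCP halves cwnd; here the cut deepens as the weight shrinks.
            self.cwnd = max(1.0, self.cwnd * (self.weight / 2.0))

    flows = [WeightedFlow("interactive", weight=1.0),
             WeightedFlow("bulk-p2p", weight=0.25)]

    for rtt in range(50):
        for f in flows:
            f.on_ack()
        if rtt % 10 == 9:          # a shared loss event every ten round trips
            for f in flows:
                f.on_loss()

    for f in flows:
        print(f"{f.name}: cwnd is now about {f.cwnd:.1f} segments")

    Which restates the objection above: the weight lives entirely in the sender's stack, so nothing stops a client from tagging every one of its streams with the maximum weight.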

  • FUD (Score:5, Insightful)

    by Detritus ( 11846 ) on Monday March 24, 2008 @11:27AM (#22845298) Homepage
    The whole article is disingenuous. What he is describing are not "loopholes" being cynically exploited by those evil, and soon to be illegal, P2P applications. They are the intended behavior of the protocol stack. Are P2P applications gaming the system by opening multiple streams between each pair of endpoints? No. While we could have a legitimate debate on what is fair behavior, he poisons the whole issue by using it as a vehicle for his anti-P2P agenda.
  • by Anonymous Coward on Monday March 24, 2008 @11:27AM (#22845304)
    3/4 of this article is basically an argument against net neutrality and P2P. It also seems to misrepresent the way ISPs currently work for users in order to make its point. The article says that if a user opens up multiple streams (not just P2P, but anything: FTP, HTTP downloads, BitTorrent), they're somehow "hogging" 10x the bandwidth of other users on the network.

    But any idiot knows this isn't true: opening multiple streams only hurts you locally (causing everyone else in your house major slowdown and latency). The maximum download rate is the same, governed by your modem speed (1.5Mbit, 5Mbit, etc.). I'm not suddenly downloading at 100Mbit and hogging the shared bandwidth of the ISP. Also, if your ISP's TOS has no clause relating to bandwidth usage or limitations, you have the right to use all your available bandwidth 24/7 within reason. You pay for it. If that's not enough, then charge more. I've actually called my ISP on this before and specifically asked them, "So it's OK if I am downloading at max speed 24 hours a day all month?" And they unequivocally stated yes.

    Also doesn't anyone else find it funny that the author seems to think everyone should be limited to ONE stream? "Only big corporations need more..." WTF?
  • by vertinox ( 846076 ) on Monday March 24, 2008 @11:34AM (#22845368)
    But it's ridiculous that when I'm spending 30 seconds downloading CNN.com during a high-demand period, some asshat is using twenty times my bandwidth downloading some file that could just as easily be sent at any time of day.

    1. Could that possibly be due to the processor demand on the CNN servers at peak times?
    2. Don't certain companies like Blizzard force P2P patches onto their customers?
    3. Is your 30-second download just as important as a technician using torrents to download a Linux distro to put on a server used for business that they need up and running ASAP?
    4. And lastly... someone using a torrent shouldn't soak up an ISP's entire bandwidth... unless someone at CNN is using the web server to host torrents, but that's nothing you or your ISP can control.
  • by Percy_Blakeney ( 542178 ) on Monday March 24, 2008 @12:32PM (#22846090) Homepage

    Here is the glaring flaw in his proposal:

    That means the client side implementation of TCP that hasn't fundamentally changed since 1987 will have to be changed again and users will need to update their TCP stack.

    So he wants everyone, especially P2P users, to voluntarily update their TCP stack? Why in the world would a P2P user do that, when they know that (a) older stacks would be supported forever, and (b) a new stack would slow down their transfer rates? He does mention this problem:

    At first glance, one might wonder what might prompt a P2P user to unilaterally and voluntarily disarm his or her multi-stream and persistence "cheat" advantage by installing a newer TCP implementation... I could imagine a fairly simple solution where an ISP would cut the broadband connection rate eight times for any P2P user using the older TCP stack to exploit the multi-stream or persistence loophole.

    There are two issues with this solution:

    1. How would the ISP distinguish between a network running NAT and a single user running P2P?
    2. If you can reliably detect "cheaters", why do you need to update the users' TCP stacks? You would just throttle the cheaters and be done with it.

    It's nice that he wants to find a solution to the P2P bandwidth problem, but this is not it.

  • by gweihir ( 88907 ) on Monday March 24, 2008 @12:47PM (#22846314)
    Every year or so somebody else proposes to "fix TCP". It never happens. Why?

    1) TCP works well.
    2) TCP is in a lot of code and cannot easily be replaced
    3) If you need something else, alternatives are there, e.g. UDP, RTSP and others.

    Especially 3) is the killer. Applications that need it are already using other protocols. This article, like so many similar ones before it, is just hot air from somebody who either did not do their homework or wants attention without deserving it.
  • by Alarindris ( 1253418 ) on Monday March 24, 2008 @01:21PM (#22846896)
    I would like to interject that just because cellphones have ridiculous plans doesn't mean the internet should. On my land line I get unlimited usage + unlimited long distance for a flat rate every month.

    It's just not a good comparison.
  • by electrictroy ( 912290 ) on Monday March 24, 2008 @01:31PM (#22847038)
    And I bet you pay a lot more for those "unlimited minutes" than the $5 a month I pay.

    Which is my point in a nutshell: people who want unlimited gigabytes should be paying a lot more than what I'm paying for my limited service ($15 a month). That's entirely and completely fair. Take more; pay more.

    Just like electrical service, cell service, water service, et cetera, et cetera.

  • by electrictroy ( 912290 ) on Monday March 24, 2008 @01:34PM (#22847068)
    P.S.

    There are some persons who think they should be able to download 1000 or even 10,000 times more data than what I download, and yet still pay the exact same amount of money.

    That's greed.

    If you want more, then you should pay more than what other people pay.
  • by dgatwood ( 11270 ) on Monday March 24, 2008 @02:05PM (#22847654) Homepage Journal

    But then they couldn't advertise that they are 10x the speed of dialup because they'd all probably be slower if they had to assume more than a few percent utilization.... :-)

  • by Animats ( 122034 ) on Monday March 24, 2008 @02:11PM (#22847764) Homepage

    As the one who devised much of this congestion control strategy (see my RFC 896 and RFC 970, years before Van Jacobson), I suppose I should say something.

    The way this was supposed to work is that TCP needs to be well-behaved because it is to the advantage of the endpoint to be well-behaved. What makes this work is enforcement of fair queuing at the first router entering the network. Fair queuing balances load by IP address, not TCP connection, and "weighted fair queueing" allows quality of service controls to be imposed at the entry router.

    The problem now is that the DOCSIS approach to cable modems, at least in its earlier versions, doesn't impose fair queuing at entry to the network from the subscriber side. So congestion occurs further upstream, near the cable headend, in the "middle" of the network. By then, there are too many flows through the routers to do anything intelligent on a per-flow basis.

    We still don't know how to handle congestion in the middle of an IP network. The best we have is "random early drop", but that's a hack. The whole Internet depends on stopping congestion near the entry point of the network. The cable guys didn't get this right in the upstream direction, and now they're hurting.

    I'd argue for weighted fair queuing and QOS in the cable box. Try hard to push the congestion control out to the first router. DOCSIS 3 is a step in the right direction, if configured properly. But DOCSIS 3 is a huge collection of tuning parameters in search of a policy, and is likely to be grossly misconfigured.

    The trick with quality of service is to offer either high-bandwidth or low latency service, but not both together. If you request low latency, your packets go into a per-IP queue with a high priority but a low queue length. Send too much and you lose packets. Send a little, and they get through fast. If you request high bandwidth, you get lower priority but a longer queue length, so you can fill up the pipe and wait for an ACK.

    But I have no idea what to do about streaming video on demand, other than heavy buffering. Multicast works for broadcast (non-on-demand) video, but other than for sports fans who want to watch in real time, it doesn't help much. (I've previously suggested, sort of as a joke, that when a stream runs low on buffered content, the player should insert a pre-stored commercial while allowing the stream to catch up. Someone will probably try that.)

    John Nagle
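
    A rough sketch of the two-class, per-IP queueing described above (low latency gets priority but only a short queue, bulk gets a long queue at lower priority); the class names and queue depths below are invented for illustration, not DOCSIS or any vendor's actual scheduler:

    from collections import deque

    # Two service classes per subscriber IP, as described above:
    #   "low_latency": served first, but the queue is short, so a sender that
    #                  floods it loses packets quickly.
    #   "bulk":        served only when low-latency queues are empty, with a
    #                  long queue so the sender can fill the pipe and wait for ACKs.
    CLASS_LIMITS = {"low_latency": 8, "bulk": 256}   # illustrative queue depths

    class EntryRouter:
        def __init__(self):
            self.queues = {}   # (src_ip, service_class) -> deque of packets

        def enqueue(self, src_ip, service_class, packet):
            key = (src_ip, service_class)
            q = self.queues.setdefault(key, deque())
            if len(q) >= CLASS_LIMITS[service_class]:
                return False          # queue full: tail-drop the packet
            q.append(packet)
            return True

        def dequeue(self):
            # Serve all low-latency queues before any bulk queue, walking the
            # per-IP queues in turn; a production router would round-robin or
            # weight them, but fairness is balanced per IP, not per TCP flow.
            for service_class in ("low_latency", "bulk"):
                for (src_ip, cls), q in self.queues.items():
                    if cls == service_class and q:
                        return src_ip, q.popleft()
            return None

    The key property is the one the comment stresses: opening more TCP connections doesn't buy a subscriber more capacity, because queuing is keyed on the IP address at the first router into the network.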

  • by Shakrai ( 717556 ) on Monday March 24, 2008 @02:28PM (#22848074) Journal

    That's entirely and completely fair. Take more; pay more.

    Claiming that something is "entirely and completely fair" while using the cellular industry as your example strains credibility just a tad.

    There is nothing fair about the billing system used by the wireless industry. It's a holdover from the early 90s, when spectrum was limited and the underlying technology (AMPS) was grossly inefficient with its use of said spectrum. Modern technology is drastically more efficient at cramming more calls into the same amount of spectrum, and the carriers have much more spectrum now than at any point in the past.

    Do you think charging $0.15 for a 160-byte text message is "fair"? Do you think $0.40/min overages are "fair"? Why are the first 450 minutes worth 8.8 cents each ($39.99 / 450) but minute #451 is worth four and a half times as much ($0.40)? That's your model of fairness?

    There is nothing fair about the way the wireless industry operates, least of all its billing practices. This is seriously the model you want to see adopted for the internet? Charges for individual services way above the actual cost (*cough* SMS *cough*) and overages that bear no relation to the actual cost to the carrier and exist solely to pad the bottom line?

  • by John Sokol ( 109591 ) on Monday March 24, 2008 @02:38PM (#22848256) Homepage Journal
    Back in 1994 to 1997 I was in many debates on just this subject.

    We were buying T1 and T3 lines for use with video streaming, and the ISPs were getting upset that we were using 90% of the capacity they sold us. Apparently they had specced out their costs based on office use doing web surfing, and based their models on older telco traffic models, where they needed 100 lines of outbound capacity for every 10,000+ phone lines to support 95% of the peak throughput.

    But we concluded that if you are selling us 1.5Mbps, I damn well better be able to use 1.5Mbps; don't blame me when I use what was sold to me.

    Well, I see this as the same problem. If Comcast or Verizon sells me internet at a data rate, then I expect to be able to use all of it. There is nothing unfair about me using what I was sold. If they don't like it, then they need to change their contractual agreements with me and change their hardware to match!

    Same goes for the internal infrastructure, backbones and exchange points. If you can't support it, don't sell it! Don't attack the P2P users; they are using what they PAID FOR and what was sold to them!!! If they are not getting it, they should file a class action suit.
    It's no different than if your local cable company decided that 4 hours of TV was your limit and started to degrade your reception if you watched more, though this wasn't in the contract you signed up for.

    On the other side, P2P should be given the means to hug the edges of the network. By this I mean communication between two cable modem or DSL users running off the same upstream routers (fewer hops) should be preferable and more efficient, not clogging up the more costly backbones. Currently P2P doesn't take any of that into consideration. Maybe ISPs could consider some technical solution to that rather than trying to deny customers the very access they are contractually bound to provide...
  • by AuMatar ( 183847 ) on Monday March 24, 2008 @03:05PM (#22848682)
    No, it's expecting the ISP to live up to its side of the contract. If the contract is pay per gig, then the high downloaders will pay more. If the ISP sells an unlimited plan, it should be unlimited. Either way is fine, but they have to follow their agreement.
  • Re:Confusing... (Score:3, Insightful)

    by Alsee ( 515537 ) on Monday March 24, 2008 @03:06PM (#22848694) Homepage
    But the problem here, I think, is that George Ou is assuming that Comcast is deliberately targeting P2P, and moreover, that they have no choice but to deliberately target P2P. I'd assumed that they were simply targeting any application that uses too many TCP connections

    No, Comcast is specifically examining your data and is specifically forging packets to kill P2P connections.

    (a) George Ou is a corporate shill; and
    (b) George Ou considers BitTorrent and all P2P teh evilz of teh piratez.

    So his position is that Comcast should be doing exactly what they are doing, spying on your data and killing your connection whenever you use teh evilz P2P.

    His position is that ISPs should continue to sell flat rate "all you can eat" internet, but that they should spy on your traffic and be free to block anything they don't like. Of course, somehow in George Ou fantasyland, examining the CONTENT of your data to see if it is P2P or not, and filtering P2P content, is somehow magically not content based, and people on the Net Neutrality side calling it content-based filtering are teh evilz liars.

    His position is that the EFF and other Net Neutrality defenders are teh evilz for suggesting any sort of content-neutral but usage-aware internet plans, because in Australia they have a variety of plans that limit usage or charge for high usage, and they are all obscenely expensive. He rants that the EFF and Net Neutrality advocates are going to saddle everyone with crazy high internet bills. Of course, he is being deliberately STUPID in neglecting to notice that the obscene Australian ISP fees have absolutely nothing to do with usage-aware plans; Australian prices are crazy high because there are few Australians, sparsely spread across a continent on the opposite side of the planet from everyone else, splitting the cost of an undersea pipe to the rest of the world.

    -
  • by dgatwood ( 11270 ) on Monday March 24, 2008 @04:50PM (#22849874) Homepage Journal

    Losing access is completely unacceptable. Reduced bandwidth above a certain tier would be tolerable, but barely. You are, however, assuming that it is reasonable for total usage to be capped at all, and in so doing, are buying into the B.S. arguments of the sleazier ISPs.... There's no good reason to do either one. If the ISPs are not irresponsibly overselling their capacity, neither a cap nor a usage-metered rate should be necessary.

    That said, if I had to choose something that I would consider acceptable, it would be this: every few minutes, each user is lumped into one of three categories: somebody doing intense downloading, casual browsing, or "offline" (only a trickle of bandwidth usage). The cutover point between casual browsing and intense downloading should depend in part on what percentage of the time a user has been "offline" and in part on the time of day/overall network utilization. A few services like VoIP are prioritized through the usual QoS techniques. All other network connections are prioritized based on which group the user fell into during the preceding five minute period. Users in the "offline" state get top priority for their trickle of bandwidth. Users in the casual browsing category get the next highest priority. Whatever bandwidth remains from the ISP is divided equally among people doing heavy downloading. This categorization should be reevaluated every few minutes so that a user is not penalized for an entire month or whatever because they downloaded a movie off of iTunes the night before.

    With such a scheme, there's no reason whatsoever to do metered or capped usage. Downloaders still get almost as much bandwidth as they otherwise would, but casual users would see reliably fast performance. This should be doable without any hardware infrastructure enhancements. Frankly, there's really no good reason to do metered usage. It is pretty clear that the only people who truly want that are the Telcos, and their reasons have nothing to do with user satisfaction or network performance and everything to do with being envious of the mobile phone operators and their ability to screw^H^H^H^H^Hnickel-and-dime their customers....
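
    The three-bucket idea above could be prototyped roughly as follows; the five-minute window, the byte thresholds and the priority numbers are all invented for illustration, not anything an ISP actually deploys:

    # Classify each subscriber by recent usage, then map the class to a
    # scheduling priority for the QoS layer to consume. All thresholds
    # and priorities below are illustrative assumptions.

    WINDOW_SECONDS = 300            # re-evaluate every five minutes
    OFFLINE_BYTES = 1_000_000       # under ~1 MB in the window: "offline" trickle
    CASUAL_BYTES = 100_000_000      # under ~100 MB in the window: casual browsing

    PRIORITY = {"offline": 0, "casual": 1, "heavy": 2}   # lower number = served first

    def classify(bytes_in_window, offline_fraction, peak_hours):
        """Return the user's class for the next window.

        offline_fraction is the share of recent windows the user spent "offline",
        and peak_hours says whether the network is currently busy; both nudge
        the casual/heavy cutover, as the comment above suggests."""
        casual_cutoff = CASUAL_BYTES * (1.0 + offline_fraction)  # reward mostly-idle users
        if peak_hours:
            casual_cutoff *= 0.5                                 # tighten the cutover when busy
        if bytes_in_window < OFFLINE_BYTES:
            return "offline"
        if bytes_in_window < casual_cutoff:
            return "casual"
        return "heavy"

    # Example: 40 MB in the last window, user idle 60% of recent windows, off-peak.
    print(classify(40_000_000, offline_fraction=0.6, peak_hours=False))   # -> "casual"

    Everything in a scheme like this runs on the ISP's side of the access link, so it needs no cooperation from user TCP stacks and no inspection of what protocol the traffic happens to be.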
