Networking

Fixing the Unfairness of TCP Congestion Control

duncan99 writes "George Ou, Technical Director of ZDNet, has an analysis today of an engineering proposal to address congestion issues on the internet. It's an interesting read, with sections such as "The politicization of an engineering problem" and "Dismantling the dogma of flow rate fairness". Short and long term answers are suggested, along with some examples of what incentives it might take to get this to work. Whichever side of the neutrality debate you're on, this is worth consideration."
  • by thehickcoder ( 620326 ) * on Monday March 24, 2008 @10:44AM (#22844896) Homepage
    The author of this analysis seems to have missed the fact that each TCP session in a P2P application is communicating with a different network user and may not be experiencing the same congestion as other sessions. In most cases (those where the congestion is not on the first hop), it doesn't make sense to throttle all connections when one is affected by congestion.
  • by Anonymous Coward on Monday March 24, 2008 @10:50AM (#22844950)
    The point isn't to kill p2p. It's simply to make sure that everyone plays by the same rules... no more exploitive cheating and bandwidth hogging by the few. When there really is leftover bandwidth, p2p filesharers can use as much as they like. But it's ridiculous that when I'm spending 30 seconds downloading CNN.com during a high-demand period, some asshat is using twenty times my bandwidth downloading some file that could just as easily be sent at any time of day.

    It's like taking a sofa on to the subway... if you're going to do it, pick a time when everyone else isn't trying to get to work.
  • by esocid ( 946821 ) on Monday March 24, 2008 @10:50AM (#22844956) Journal

    Under a weighted TCP implementation, both users get the same amount of bandwidth regardless of how many TCP streams each user opens...Background P2P applications like BitTorrent will experience a more drastic but shorter-duration cut in throughput but the overall time it takes to complete the transfer is unchanged.
    I am all for a change in the protocols as long as it helps everybody. The ISPs win, and so do the customers. As long as the ISPs stop complaining and forging packets to BT users, I would see an upgrade to the TCP protocol as a solution to what is going on with the neutrality issues, along with an upgrade to fiber optic networks so the US is on par with everyone else.
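A rough way to see the distinction the quoted proposal is drawing: per-flow fairness splits the link by TCP stream, so a user who opens more streams gets more bandwidth, while per-user (weighted) fairness splits the link by user first. A minimal sketch with invented numbers:

```python
# Illustrative only: per-flow vs. per-user fairness on one congested link.
# The users, flow counts, and link speed below are invented for the example.
LINK_MBPS = 10.0
flows_per_user = {"web_user": 1, "p2p_user": 20}

# Per-flow fairness: every TCP stream gets an equal slice, so whoever
# opens more streams gets more of the link.
total_flows = sum(flows_per_user.values())
per_flow = {u: LINK_MBPS * n / total_flows for u, n in flows_per_user.items()}

# Per-user (weighted) fairness: the link is split per user first; each user
# then divides their own share among however many streams they opened.
per_user = {u: LINK_MBPS / len(flows_per_user) for u in flows_per_user}

print("per-flow fairness:", per_flow)   # p2p_user ends up with ~9.5 Mb/s
print("per-user fairness:", per_user)   # both users get 5.0 Mb/s
```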
  • by Ancient_Hacker ( 751168 ) on Monday March 24, 2008 @11:22AM (#22845244)
    Lots of WTFs in TFA:
    • Expecting the client end to back off is a losing strategy. I can write over NETSOCK.DLL, you know.
    • Results of a straw poll at an IETF confab are not particularly convincing.
    • Expecting ISPs to do anything rational is a bit optimistic.
    • It's not a technical or a political problem, it's an economic one. If users paid per packet, the problem would go away overnight.
  • by cromar ( 1103585 ) on Monday March 24, 2008 @11:25AM (#22845274)
    I have to agree with you. There is ever more traffic on the internet and we are going to have to look for ways to let everyone have a fair share of the bandwidth (and get a hella lot more of the stuff). Also, this sort of approach to bandwidth control would probably make it more feasible to get really good speeds at off-peak times. If the ISPs would do this, they could conceivably raise the overall amount of bandwidth and not worry about one user hogging it all if others need it.

    On the internet as a democracy: would ISPs get more votes because they own more addresses? The users could band together as a union and use their votes to decide the fate of the net. Haha, but I'm rambling.
  • by smallfries ( 601545 ) on Monday March 24, 2008 @11:25AM (#22845282) Homepage
    Even if that is true, the congestion won't be correlated between your streams if it occurs on the final hops (and hence on different final networks). There is a more basic problem than the lack of correlation between congestion on separate streams - the ZDNet editor and the author of the proposal have no grasp of reality.

    Here's an alternative (but equally effective) way of reducing congestion - ask p2p users to download less. Because that is what this proposal amounts to. A voluntary measure to hammer your own bandwidth for the greater good of the network will not succeed. The idea that applications should have "fair" slices of the available bandwidth is ludicrous. What is fair about squeezing email and p2p into the same bandwidth profile?

    This seems to be a highly political issue in the US. Every ISP that I've used in the UK has used the same approach - traffic shaping using QoS on the routers. Web, Email, VoIP and almost everything else are "high priority". p2p is low priority. This doesn't break p2p connections, or reset them in the way that Verizon has done. But it means that streams belonging to p2p traffic will back off more because there is a higher rate of failure. It "solves" the problem without a crappy user-applied bandaid.

    It doesn't stop the problem that people will use as much bandwidth for p2p apps as they can get away with. This is not a technological problem and there will never be a technological solution. The article has an implicit bias when it talks about users "exploiting congestion control" and "hogging network resources". Well duh! That's why they have network connections in the first place. Why is the assumption that a good network is an empty network?

    All ISPs should be forced to sell their connections based on target utilisations. I.e. here is a 10Mb/s connection, at 100:1 contention, we expect you to use 0.1Mb/s on average, or about 32GB a month. If you are below that then fine; if you go above it then you get hit with per-GB charges. The final point is the numbers: 10Mb/s is slow for the next-gen connections now being sold (24Mb/s in the UK in some areas), and 100:1 is a large contention ratio. So why shouldn't someone use their full 32GB of traffic on that connection every month?
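For what it's worth, the monthly figure follows directly from the average rate; a quick back-of-the-envelope check (assuming a 30-day month and decimal units):

```python
# Back-of-the-envelope check of the contention arithmetic above.
# Assumes a 30-day month and decimal (SI) units throughout.
link_mbps = 10.0                 # advertised line rate, in megabits/s
contention = 100                 # 100:1 contention ratio
avg_mbps = link_mbps / contention             # 0.1 Mb/s expected average

seconds_per_month = 30 * 24 * 3600
megabits_per_month = avg_mbps * seconds_per_month     # ~259,200 Mb
gigabytes_per_month = megabits_per_month / 8 / 1000   # ~32 GB

print(f"{avg_mbps} Mb/s average works out to about {gigabytes_per_month:.0f} GB/month")
```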
  • by Anonymous Coward on Monday March 24, 2008 @11:57AM (#22845618)
    Fuck no! Nothing beyond the IP header should ever matter in a routing decision. TCP is just a payload (and invisible in IPSec!)

    Fairness is when all IP packets are treated equally and any congestion is resolved purely by throwing virtual coins. It is up to the communication endpoints to negotiate stream bandwidth and throttle their output accordingly. If your network is congested to the point that it becomes unusable while all your customers are within their contractually acceptable usage patterns, you have to upgrade your network or lose customers.
  • by TheRaven64 ( 641858 ) on Monday March 24, 2008 @12:09PM (#22845764) Journal
    Depends on how the ISP charges you. IP packets have 3 relevant flags: low delay, high throughput and high reliability. I'm not really sure what high reliability means, since protocols that need reliability tend to implement retransmission higher up the protocol stack, so it can probably be ignored. There are very few things that need high throughput and low latency, so an ISP could place quite a low cap on the amount of data you were allowed to send with these flags set. If you exceeded this, then one or both of the flags would be cleared.

    This then lets the user put each packet into one of three buckets:

    • Low delay.
    • High throughput.
    • Don't care.
    A packet with the low delay flag set would go into a high priority queue, but only a limited fraction of the customer's allotted bandwidth could be used for these. Any more would either be dropped or have the low delay flag cleared. These would be suitable for VoIP use and would have low latency and (ideally) low jitter.

    Those with the high throughput flag set would have no guaranteed minimum latency. They would go into a low-priority, very wide queue. If you're doing a big download, you set this flag - you'll get the whole file faster, but you might get a lot of jitter and latency.

    Perhaps the high reliability flag could be used to indicate which packets should not have the flags cleared if the quota was exceeded (and other packets without the high reliability flag set were available for demotion).

    Of course, Microsoft's TCP/IP stack sets all of these flags by default, so most traffic would simply be placed into the default queue until they fixed it.
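The "flags" being discussed are the legacy IP Type-of-Service bits, which an application can request per socket; whether any router on the path honours them is up to the network. A minimal sketch of how a sender might mark its own traffic, assuming a typical Linux build of Python where socket.IP_TOS is available:

```python
# Minimal sketch: marking outgoing traffic with the legacy IP ToS bits
# described above. Routers and ISPs are free to ignore or rewrite them.
import socket

IPTOS_LOWDELAY = 0x10      # "low delay" bucket: VoIP, interactive SSH
IPTOS_THROUGHPUT = 0x08    # "high throughput" bucket: bulk downloads
IPTOS_RELIABILITY = 0x04   # rarely meaningful in practice, as noted above

def open_marked_socket(tos):
    """Create a TCP socket whose packets carry the given ToS value."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    return s

voip_like = open_marked_socket(IPTOS_LOWDELAY)   # low-delay queue
bulk = open_marked_socket(IPTOS_THROUGHPUT)      # wide, low-priority queue
best_effort = open_marked_socket(0)              # "don't care" default queue
```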

  • by irc.goatse.cx troll ( 593289 ) on Monday March 24, 2008 @12:28PM (#22846024) Journal

    Shaping only works as long as you can recognize and classify the data.


    Not entirely true. It works better the more you know about your data, but even knowing nothing you can get good results with a simple rule of prioritizing small packets.

    My original QoS setup was just a simple rule of anything small gets priority over anything large. This is enough to make (most) VoIP, games, SSH, and anything else that is lots of small real-time packets get through ahead of lots of full-size queued packets (transfers).

    Admittedly BitTorrent was what hurt my original setup, as you end up with a lot of slow peers each trickling transfers in slowly. You could get around this with a hard limit on overall packet rate, or with connection tracking and limiting the number of IPs you hold a connection with per second (and then block things like UDP and ICMP).

    Yeah, it's an ugly solution, but we're all the ISP's bitch anyway, so they can do what they want.
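The small-packets-first rule described above can be sketched as a toy two-queue scheduler. Real setups do this in the router or kernel (e.g. with tc on Linux); this is only a model of the decision logic, and the size threshold is an arbitrary example value:

```python
# Toy model of the "small packets jump the queue" shaping rule described
# above. The 128-byte threshold is an arbitrary example value.
from collections import deque

SMALL_PACKET_BYTES = 128

high = deque()   # small packets: VoIP, game updates, SSH keystrokes, ACKs
low = deque()    # everything else: full-size bulk-transfer segments

def enqueue(packet):
    (high if len(packet) <= SMALL_PACKET_BYTES else low).append(packet)

def dequeue():
    """Always drain the small-packet queue before touching bulk traffic."""
    if high:
        return high.popleft()
    if low:
        return low.popleft()
    return None

enqueue(b"x" * 1400)    # a full-size transfer segment arrives first
enqueue(b"keystroke")   # then a small interactive packet
assert dequeue() == b"keystroke"   # the interactive packet still goes out first
```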
  • by jd ( 1658 ) <[moc.oohay] [ta] [kapimi]> on Monday March 24, 2008 @01:19PM (#22846860) Homepage Journal
    Fortunately, there are plenty of software mechanisms already around to solve part of the problem. Unfortunately, very few have been tested outside of small labs or notebooks. We have no practical means of knowing what the different QoS strategies would mean in a real-world network. The sooner Linux and the *BSDs can include those not already provided, the better. We can then - and only then - get an idea of what it is that needs fixing. (Linux has a multitude of TCP congestion control algorithms, plus WEB100 for automatic tuning, so it follows that if there's a real problem, then it's not really there.)

    I know that only a handful of these have been implemented for Linux or *BSD, even fewer for both. Instead of Summer of Code producing stuff nobody ever sees, how about one of the big players invest in students producing some of these meaty chunks of code?

    Schemes for reducing packet loss by active queue management: REM, RED, GRED, WRED, SRED, Adaptive RED, RED-Worcester, Self-Configuring RED, Exponential RED, BLUE, SFB, GREEN, BLACK, PURPLE, WHITE

    Schemes for adjusting packet queues: CBQ, Enhanced CBQ, HFSC, CSFQ, CSPFQ, PrFQ, Local Flow Separation

    Schemes for scheduling traffic: Gaussian, Least Attained Service, ABE, CSDPS

    Schemes for shaping traffic flows: DSS, Constant Bit Rate

    Schemes for bandwidth allocation: RSVP, YESSIR, M-YESSIR

    Schemes for active flow control: ECN, Mark Front ECN

    Schemes for managing queues: Adaptive Virtual Queue, PRIO
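Of the queue-management schemes listed, RED is probably the easiest to sketch: the router keeps a moving average of queue length and starts dropping (or ECN-marking) packets probabilistically once that average crosses a threshold, nudging TCP senders to back off before the queue overflows. A simplified sketch; it omits the inter-drop count correction of the full algorithm, and the parameter values are examples only:

```python
# Simplified sketch of the RED (Random Early Detection) drop decision.
# Omits the inter-drop count correction of the full algorithm; the
# parameter values below are examples, not recommendations.
import random

MIN_TH = 5       # below this average queue length, never drop
MAX_TH = 15      # at or above this, drop every arriving packet
MAX_P = 0.1      # drop probability as the average approaches MAX_TH
WEIGHT = 0.002   # EWMA weight for the average queue estimate

avg_queue = 0.0

def should_drop(current_queue_len):
    """Update the queue-length average and decide whether to drop this packet."""
    global avg_queue
    avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * current_queue_len
    if avg_queue < MIN_TH:
        return False
    if avg_queue >= MAX_TH:
        return True
    drop_p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < drop_p
```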

  • Solution (Score:2, Interesting)

    by shentino ( 1139071 ) <shentino@gmail.com> on Monday March 24, 2008 @01:24PM (#22846934)
    Personally, I think they should move to a supply and demand based system, where you are charged per packet or per megabyte, and per-unit prices rise during periods of peak demand.

    There are a few power companies who announce 24 hours in advance how much they're going to charge per kWh in any given hour, and their customers can time their usage to take advantage of slack capacity, since the prices are based on demand.

    If we do the same thing with internet service *both in and out*, a real bandwidth hog is going to wind up paying a shitload of money for his service, especially if he tries to tie up the net during peak hours. However, a casual user won't get burned.

    And, coincidentally, it would solve the nasty "the RIAA's making me block BitTorrent" excuse from Comcast, or at least make it much harder for them to hide behind such a statement.

    One particular property shared by almost ALL multimedia is that it is friggin HUGE. A movie can easily run into multiple gigabytes.

    So start charging per-unit fees, and you'll put a massive leash on filesharing of media files. Suddenly, all those shared movies are costing major beaucoup to get, and they start going away.
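As a worked example of the pricing idea (all the per-GB rates here are invented), shifting a multi-gigabyte download off-peak is what changes the bill, while light browsing at peak stays cheap:

```python
# Invented per-GB rates, purely to illustrate demand-based pricing.
PEAK_RATE = 0.50        # $/GB during high-demand hours (made-up number)
OFF_PEAK_RATE = 0.05    # $/GB overnight (made-up number)

movie_gb = 4.0          # a multi-gigabyte movie download
browsing_gb = 0.2       # a casual evening of web browsing

print(f"movie at peak:    ${movie_gb * PEAK_RATE:.2f}")
print(f"movie off-peak:   ${movie_gb * OFF_PEAK_RATE:.2f}")
print(f"browsing at peak: ${browsing_gb * PEAK_RATE:.2f}")
```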
  • by Crypto Gnome ( 651401 ) on Monday March 24, 2008 @04:30PM (#22849618) Homepage Journal
    Yet another clueless wannabe pontificating about something they clearly do not understand. Some days I wish they'd firewall the RFC list and prevent retards like this from commenting on stuff.
    1. many P2P protocols use UDP (Skype, anyone?)

    2. proposes a client/application side change in behaviour, while they're whining about a failure of the protocol
      - (think my car needs a grease-and-oil change, so I'll go walk the dog - the proposed solution bears no relationship to the problem)

    3. enforcement proposal ignores how the interweb works: there's NO difference (at the IP level) between a user multi-streaming a TCP download of a single file, and a user opening multiple TCP connections to a webserver to simultaneously download *all* the crappy bits-n-shits that make up a web page (i.e. parallel non-pipelined HTTP requests) rather than one-at-a-time
      - yet the first would be argued as an "unfair use" while the second is perfectly normal and acceptable behaviour
    I could go on for hours.
    • If 'the protocol' is broken, then 'the protocol' needs to change, recommending an app-level change only opens up further opportunity for abuse
      - after all, if the app developers were genuinely interested in playing nicely in the sandbox, they would already

    • recommending *external* enforcement will never work; that costs time and money, and who is gonna pay me to implement it?
      - TCP congestion control "works" (i.e. as engineered) because it's inherent in the protocol implementation and does not require "enforcement" by the ISP

    • P2P users are initiating "sessions" (assuming they're still using TCP) to different endpoints, so you don't have a beautiful and neat bundle of parallel-tubes as described in the metaphor
      - ie most of your assumptions about "how this works" are wrong.
    Of course, the entire article starts out from a baseless assumption (that users should get 'fair' access to the interweb).

    Anyone read their ISP's Ts&Cs? Ever?

    IP is a *best effort* protocol: we will punt your packet upstream and hope it gets there - have a nice day.

    There is *no* guarantee of *anything*.

    Now, as far as anything approaching a "solution" to the supposed "problem" goes, as long as we're talking about *application level* tweaks:

    What about all the P2P developers marking their "data transmission" packets (whatever the protocol) with the lowest-of-the-low QoS markings? (There's a sketch of this at the end of this comment.)
    --> "if you need to manage congestion, I am exceedingly eligible for shaping"

    That would work nicely.

    In fact, if YouTube (and friends) did the same, it would actually *encourage* ISPs to enable proper QoS processing throughout their entire networks.

    If applications (and protocols) inherently played nicely in the sandbox, your ISP would bend-over-backwards to guarantee a near-perfect service. (mainly because it'd thusly be near-trivial to do)

    And yes, I realise this raises the spectre of "Net Neutrality" - but seriously folks, how is that argument any different than "because of the terrorists" or "think of the children"?

    ISPs applying QoS to traffic in order to guarantee quality is not inherently bad. The *badness* comes about because they will (yes, I said WILL, not MIGHT or COULD) use said QoS practices to push their own services/enforce their own policies (we hate P2P, ignore client QoS markings, etc, etc, etc).

    All those people who're frothing-at-the-mouth because QoS is BAD need a RABIES shot.

    In an ideal world, we'd never need QoS. QoS is a congestion management mechanism. If you have no congestion, then you don't need to apply QoS techniques.

    But until the day when we all have quantum-entangled communications processors with near-infinite effective bandwidth we're going to need QoS, somewhere.
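The lowest-of-the-low marking suggested a few paragraphs up already exists: DSCP class selector 1 (CS1) is conventionally treated as scavenger/background traffic, and a P2P client could volunteer its bulk sockets for shaping by setting it. A hedged sketch, using the same socket option as the ToS example earlier in the thread; whether the ISP honours the marking is entirely up to the ISP:

```python
# Sketch: a P2P client marking its bulk-transfer sockets as background
# traffic (DSCP CS1, the conventional "scavenger" class). Nothing obliges
# the network to honour the marking.
import socket

DSCP_CS1 = 0x08               # class selector 1: background/scavenger
TOS_VALUE = DSCP_CS1 << 2     # DSCP occupies the top six bits of the ToS byte

def make_background_socket():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
    return s
```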
  • by Mike McTernan ( 260224 ) on Monday March 24, 2008 @05:04PM (#22850030)

    I'd argue for weighted fair queuing and QOS in the cable box.

    Seems to me that for ADSL it would be ideally placed in the DSLAM, where there is already a per-subscriber connection (in any case, most home users will only get 1 IP address, hence making a 1:1 mapping from subscriber to IP - nothing need be per IP connection as the original article assumes). In fact, the wikipedia page on DSLAMs [wikipedia.org] says QoS is already an additional feature, mentioning priority queues.

    So I'm left wondering why bandwidth hogs are still a problem for ADSL. You say that this is a "huge collection of tuning parameters", and I accept that correctly configuring this stuff may be complex, but this is surely the job of the ISPs. Maybe I'm overestimating the capabilities of the installed DSLAMs, in which case I wonder if BT's 21CN [btplc.com] will help.

    Certainly, though, none of the ISPs seem to be talking about QoS per subscriber. Instead they prefer to differentiate services, ranking P2P and streaming lower than other uses on the subscriber's behalf. PlusNet (a prominent UK ISP) have a pizza analogy [plus.net] to illustrate how sharing works - using their analogy, PlusNet would give you lots of Margherita slices, but make you wait for a Hawaiian even if you aren't eating anything else. Quite why they think this is acceptable is unknown to me; they should be able to enforce how many slices I get at the DSLAM, but still allow me to select the flavours at my house (maybe I get my local router to apply QoS policies when it takes packets from the LAN to the slower ADSL, or mark streams using the TOS bits in IPv4, or the much better IPv6 QoS features, to assist the shaping deeper into the network).
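Per-subscriber weighted fair queuing of the kind being described, whether in the DSLAM or a home router, is often approximated with deficit round robin: each subscriber's queue gets a byte budget per round, so one heavy user cannot starve the others. A simplified sketch; the class name and quantum value are invented for the example:

```python
# Simplified deficit-round-robin scheduler: one queue per subscriber,
# each topped up with QUANTUM bytes of credit per scheduling round.
from collections import deque

QUANTUM = 1500  # bytes of credit granted to each subscriber per round

class DRRScheduler:
    def __init__(self, subscribers):
        self.queues = {s: deque() for s in subscribers}
        self.deficit = {s: 0 for s in subscribers}

    def enqueue(self, subscriber, packet):
        self.queues[subscriber].append(packet)

    def round(self):
        """Run one scheduling round and return the packets sent, in order."""
        sent = []
        for sub, q in self.queues.items():
            if not q:
                self.deficit[sub] = 0   # idle queues do not bank credit
                continue
            self.deficit[sub] += QUANTUM
            while q and len(q[0]) <= self.deficit[sub]:
                pkt = q.popleft()
                self.deficit[sub] -= len(pkt)
                sent.append((sub, pkt))
        return sent

sched = DRRScheduler(["light_user", "p2p_user"])
sched.enqueue("light_user", b"x" * 1200)
for _ in range(20):                        # the p2p user floods the link
    sched.enqueue("p2p_user", b"y" * 1500)
print(sched.round())   # each user gets roughly one packet's worth per round
```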

  • by Percy_Blakeney ( 542178 ) on Monday March 24, 2008 @07:50PM (#22851456) Homepage

    No, it's expecting the ISP to live up to its side of the contract... either way is fine, but they have to follow their agreement.

    Are you saying that your ISP isn't living up to its contract with you? You don't need anything fancy to fix that -- just file a lawsuit. If they truly promised you unlimited bandwidth (as you interpret it), then you should easily win.

    On the other hand, you might not completely understand your contract, and thus would take a serious beating in court. Either way, you need to accept the harsh reality that any ISP that offers broadband service (1+ Mbps) without transfer caps will go out of business within 2 years.

  • Not just Freenet (Score:3, Interesting)

    by StCredZero ( 169093 ) on Monday March 24, 2008 @08:59PM (#22852030)
    What he's describing is not just Freenet. There's also a little bit of Bittorrent in there as well, and some more ingredients. Freenet is about distribution to prevent censorship. What he's proposing is to decentralize to turn the *entire Internet* into a huge broadcast cache. This will also have the effect of making censorship difficult, but that's only a byproduct.
