Fixing the Unfairness of TCP Congestion Control
duncan99 writes "George Ou, Technical Director of ZDNet, has an analysis today of an engineering proposal to address congestion issues on the internet. It's an interesting read, with sections such as "The politicization of an engineering problem" and "Dismantling the dogma of flow rate fairness". Short and long term answers are suggested, along with some examples of what incentives it might take to get this to work. Whichever side of the neutrality debate you're on, this is worth consideration."
Not all sessions experience the same congestion (Score:5, Interesting)
This is a good proposal (Score:1, Interesting)
It's like taking a sofa onto the subway... if you're going to do it, pick a time when everyone else isn't trying to get to work.
Weighted TCP solution (Score:5, Interesting)
Congestion shaping at client end? WTF? (Score:3, Interesting)
Re:Weighted TCP solution (Score:3, Interesting)
On the internet as a democracy: would ISPs get more votes because they own more addresses? The users could band together as a union and use our votes to decide the fate of the net. Haha, but I'm rambling.
Re:Not all sessions experience the same congestion (Score:5, Interesting)
Here's an alternative (but equally effective) way of reducing congestion - ask p2p users to download less. Because that is what this proposal amounts to. A voluntary measure to hammer your own bandwidth for the greater good of the network will not succeed. The idea that applications should have "fair" slices of the available bandwidth is ludicrous. What is fair about squeezing email and p2p into the same bandwidth profile?
This seems to be a highly political issue in the US. Every ISP that I've used in the UK has used the same approach - traffic shaping using QoS on the routers. Web, Email, VoIP and almost everything else are "high priority". p2p is low priority. This doesn't break p2p connections, or reset them in the way that Verizon has done. But it means that streams belonging to p2p traffic will back off more because there is a higher rate of failure. It "solves" the problem without a crappy user-applied bandaid.
It doesn't stop the problem that people will use as much bandwidth for p2p apps as they can get away with. This is not a technological problem and there will never be a technological solution. The article has an implicit bias when it talks about users "exploiting congestion control" and "hogging network resources". Well duh! That's why they have network connections in the first place. Why is the assumption that a good network is an empty network?
All ISPs should be forced to sell their connections based on target utilisations. I.e., here is a 10Mb/s connection at 100:1 contention; we expect you to use 0.1Mb/s on average, or 240GB a month. If you are below that, fine; if you go above it, you get hit with per-GB charges. A final point on the numbers: 10Mb/s is slow for the next-gen connections now being sold (24Mb/s in the UK in some areas), and 100:1 is a large contention ratio. So why shouldn't someone use 240GB of traffic on that connection every month?
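The contention arithmetic above is easy to sanity-check (a sketch assuming a 30-day month; note that 0.1 Mb/s works out to roughly the quoted 240 GB only if "Mb" is read as megabytes — read as megabits it is about 32 GB):

```python
# Monthly volume implied by an average-rate cap (figures from the comment above).
def monthly_volume_gb(avg_rate_mbit_s: float, days: float = 30.0) -> float:
    seconds = days * 24 * 3600
    bits = avg_rate_mbit_s * 1e6 * seconds
    return bits / 8 / 1e9  # convert bits to gigabytes

link_mbit_s = 10.0
contention = 100.0
avg = link_mbit_s / contention             # 0.1 Mb/s average share

print(monthly_volume_gb(avg))              # ~32 GB at the contended average rate
print(monthly_volume_gb(link_mbit_s))      # 3240.0 GB if the link ran flat out
```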
Re:Not all protocols should be supported equally (Score:1, Interesting)
Fairness is when all IP packets are treated equally and any congestion is resolved purely by throwing virtual coins. It is up to the communication endpoints to negotiate stream bandwidth and throttle their output accordingly. If your network is congested to the point that it becomes unusable while all your customers are within their contractually acceptable usage patterns, you have to upgrade your network or lose customers.
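The "virtual coin toss" above is essentially random drop: when the queue overflows, evict a uniformly random occupant rather than always the newcomer, so no flow is systematically favoured. A minimal sketch (the queue capacity and list representation are my assumptions):

```python
import random

def enqueue_random_drop(queue: list, packet, capacity: int, rng=random) -> list:
    """Add a packet; on overflow, drop one uniformly at random (newcomer included)."""
    queue.append(packet)
    if len(queue) > capacity:
        # Every packet in the full queue is equally likely to be the victim.
        queue.pop(rng.randrange(len(queue)))
    return queue

q = []
for i in range(10):
    enqueue_random_drop(q, i, capacity=4)
print(len(q))  # 4: the queue never exceeds capacity, and survivors are random
```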
Re:This is a good proposal (Score:3, Interesting)
This then lets the user put each packet into one of three buckets:
Those with the high throughput flag set would have no guaranteed minimum latency. They would go into a low-priority, very wide queue. If you're doing a big download, you set this flag - you'll get the whole file faster, but you might get a lot of jitter and latency.
Perhaps the high reliability flag could be used to indicate which packets should not have the flags cleared if the quota was exceeded (and other packets without the high reliability flag set were available for demotion).
Of course, Microsoft's TCP/IP stack sets all of these flags by default, so most traffic would simply be placed into the default queue until they fixed it.
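The classification and quota-demotion rules described above can be sketched as follows. The flag names and bucket policy are illustrative, not from any real TCP/IP stack, and only the high-throughput and high-reliability buckets described in the comment are modelled:

```python
# Hypothetical per-packet flags, loosely modelled on TOS bits.
HIGH_THROUGHPUT = 0x1   # wide, low-priority queue: bandwidth over latency
HIGH_RELIABILITY = 0x2  # protects the packet from quota demotion

def bucket(flags: int, over_quota: bool = False) -> str:
    """Pick a queue for a packet, demoting unprotected packets over quota."""
    if over_quota and not flags & HIGH_RELIABILITY:
        flags &= ~HIGH_THROUGHPUT  # clear the flag, per the demotion rule above
    return "bulk" if flags & HIGH_THROUGHPUT else "default"

print(bucket(HIGH_THROUGHPUT))                                       # bulk
print(bucket(HIGH_THROUGHPUT, over_quota=True))                      # default
print(bucket(HIGH_THROUGHPUT | HIGH_RELIABILITY, over_quota=True))   # bulk
```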
Re:So right, yet so wrong (Score:4, Interesting)
Not entirely true. It works better the more you know about your data, but even knowing nothing you can get good results with a simple rule of prioritizing small packets.
My original QoS setup was just a simple rule of anything small gets priority over anything large. This is enough to make (most) VoIP, games, SSH, and anything else that is lots of small real time packets all get through over lots of full queued packets (transfers).
Admittedly BitTorrent was what hurt my original setup, as you end up with a lot of slow peers each trickling transfers in slowly. You could get around this with a hard limit on overall packet rate, or with connection tracking and limiting the number of IPs you hold a connection with per second (and then block things like UDP and ICMP).
Yeah, it's an ugly solution, but we're all the ISP's bitch anyway, so they can do what they want.
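The small-packets-first rule above can be sketched as a two-class strict-priority queue (the 128-byte threshold and class names are my choice; the comment gives no numbers):

```python
import heapq

SMALL_BYTES = 128  # threshold is a guess; VoIP/game/SSH packets are typically under this

class SmallFirstQueue:
    """Strict priority: any small packet is dequeued before any large one."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # preserves FIFO order within a class
    def push(self, packet: bytes):
        prio = 0 if len(packet) <= SMALL_BYTES else 1
        heapq.heappush(self._heap, (prio, self._seq, packet))
        self._seq += 1
    def pop(self) -> bytes:
        return heapq.heappop(self._heap)[2]

q = SmallFirstQueue()
q.push(b"x" * 1400)  # full-sized bulk transfer segment
q.push(b"y" * 60)    # VoIP/game-sized packet
print(len(q.pop()))  # 60: the small packet jumps ahead of the earlier large one
```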
I agree, but it's hard. (Score:3, Interesting)
I know that only a handful of these have been implemented for Linux or *BSD, even fewer for both. Instead of Summer of Code producing stuff nobody ever sees, how about one of the big players invest in students producing some of these meaty chunks of code?
Schemes for reducing packet loss by active queue management: REM, RED, GRED, WRED, SRED, Adaptive RED, RED-Worcester, Self-Configuring RED, Exponential RED, BLUE, SFB, GREEN, BLACK, PURPLE, WHITE
Schemes for adjusting packet queues: CBQ, Enhanced CBQ, HFSC, CSFQ, CSPFQ, PrFQ, Local Flow Separation
Schemes for scheduling traffic: Gaussian, Least Attained Service, ABE, CSDPS
Schemes for shaping traffic flows: DSS, Constant Bit Rate
Schemes for bandwidth allocation: RSVP, YESSIR, M-YESSIR
Schemes for active flow control: ECN, Mark Front ECN
Schemes for managing queues: Adaptive Virtual Queue, PRIO
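For anyone wanting a starting point, classic RED is among the simplest schemes on the list: drop arriving packets with a probability that rises linearly as the averaged queue depth grows. A stripped-down sketch (the thresholds and EWMA weight are toy values chosen so the behaviour shows up within a few packets, not tuned production settings):

```python
import random

class RedQueue:
    """Minimal Random Early Detection: probabilistic drop as the average queue grows."""
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight
        self.avg = 0.0     # exponentially weighted moving average of queue length
        self.queue = []
    def enqueue(self, packet, rng=random) -> bool:
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg >= self.max_th:
            return False   # forced drop: average queue too long
        if self.avg >= self.min_th:
            # Drop probability ramps linearly from 0 to max_p between the thresholds.
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            if rng.random() < p:
                return False
        self.queue.append(packet)
        return True
```

Early drops signal TCP senders to back off before the queue is actually full, which is the whole point of active queue management: loss arrives gradually rather than in bursts at overflow.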
Solution (Score:2, Interesting)
There are a few power companies who announce 24 hours in advance how much they're going to charge per kWh in any given hour, and their customers can time their usage to take advantage of slack space, since the prices are based on demand.
If we do the same thing with internet service *both in and out*, a real bandwidth hog is going to wind up paying a shitload of money for his service, especially if he tries to tie up the net during peak hours. However, a casual user won't get burned.
And, coincidentally, it would solve the nasty "RIAA's making me block bittorrent" by comcast, or at least make it much harder for them to hide behind such a statement.
One particular property shared by almost ALL multimedia is that it is friggin HUGE. A movie can easily run into multiple gigabytes.
So start charging per-unit fees, and you'll put a massive leash on filesharing of media files. Suddenly, all those shared movies are costing major beaucoup to get, and they start going away.
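The time-of-day pricing idea amounts to a simple lookup and multiplication. A sketch with made-up numbers (all rates, hours, and the per-GB unit are hypothetical):

```python
# Hypothetical demand-based tariff, announced 24 hours ahead (cents per GB, by hour).
PEAK = {h: 12 for h in range(18, 23)}            # evening peak: 12 c/GB
tariff = {h: PEAK.get(h, 2) for h in range(24)}  # off-peak: 2 c/GB

def cost_cents(gb: float, hour: int) -> float:
    """Cost of transferring `gb` gigabytes during the given hour of day."""
    return gb * tariff[hour]

movie_gb = 4.0
print(cost_cents(movie_gb, 20))  # 48.0 cents at the evening peak
print(cost_cents(movie_gb, 3))   # 8.0 cents at 3 a.m.
```

The six-fold spread is what pushes the bulk transfers into the slack hours, exactly as with the power companies.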
Another Clueless Moron with an Opinion from ZDNET (Score:3, Interesting)
- (think my car needs a grease-and-oilchange, so I'll go walk the dog - proposed solution bears no relationship to the problem)
- yet the first would be argued as an "unfair use" while the second is perfectly normal and acceptable behaviour
- after all, if the app developers were genuinely interested in playing nicely in the sandbox, they would already
- TCP congestion control "works" (ie as engineered) because it's inherent in the protocol implementation, does not require "enforcement" by the ISP
- ie most of your assumptions about "how this works" are wrong.
Anyone read their ISP Ts&Cs? Ever?
IP is a *best effort* protocol.... we will punt your packet upstream and hope it gets there - have a nice day.
There is *no* guarantee of *anything*.
Now, as for anything approaching a "solution" to the supposed "problem":
What about all the P2P developers marking their "data transmission" packets (whatever the protocol) with the lowest-of-the-low QoS markings.
--> "if you need to manage congestion, I am exceedingly eligible for shaping"
That would work nicely.
In fact, if YouTube (and friends) did the same, it would actually *encourage* ISPs to enable proper QoS processing throughout their entire networks.
If applications (and protocols) inherently played nicely in the sandbox, your ISP would bend-over-backwards to guarantee a near-perfect service. (mainly because it'd thusly be near-trivial to do)
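An application really can self-mark this way today: the "lowest-of-the-low" marking is roughly the scavenger / Lower Effort class, settable per-socket via the IP TOS byte. A sketch (0x20 is the TOS byte for DSCP CS1, the traditional scavenger marking; routers are free to ignore or rewrite it):

```python
import socket

# Mark a socket's traffic as scavenger class (DSCP CS1 -> TOS byte 0x20).
# Networks that honour the marking will shape this flow first under congestion.
SCAVENGER_TOS = 0x20

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, SCAVENGER_TOS)
print(hex(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))  # 0x20
s.close()
```

A P2P client that did this on its data connections would be volunteering exactly the "exceedingly eligible for shaping" signal described above, at the cost of one setsockopt call.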
And yes, I realise this raises the spectre of "Net Neutrality" - but seriously, folks, how is that argument any different from "because of the terrorists" or "think of the children"?
ISPs applying QoS to traffic in order to guarantee quality is not inherently bad. The *bad*ness comes about because they will (yes, I said WILL, not MIGHT or COULD) use said QoS practices to push their own services and enforce their own policies (we hate P2P, ignore client QoS markings, etc., etc., etc.).
All those people who're frothing-at-the-mouth because QoS is BAD need a RABIES shot.
In an ideal world, we'd never need QoS. QoS is a congestion management mechanism. If you have no congestion, then you don't need to apply QoS techniques.
But until the day when we all have quantum-entangled communications processors with near-infinite effective bandwidth we're going to need QoS, somewhere.
Re:Why this is an issue now (Score:4, Interesting)
Seems to me that for ADSL it would be ideally placed in the DSLAM, where there is already a per-subscriber connection (in any case, most home users only get one IP address, making a 1:1 mapping from subscriber to IP; nothing need be per IP connection as the original article assumes). In fact, the Wikipedia page on DSLAMs [wikipedia.org] says QoS is already an additional feature, mentioning priority queues.
So I'm left wondering why bandwidth hogs are still a problem for ADSL. You say that this is a "huge collection of tuning parameters", and I accept that correctly configuring this stuff may be complex, but this is surely the job of the ISPs. Maybe I'm overestimating the capabilities of the installed DSLAMs, in which case I wonder if BT's 21CN [btplc.com] will help.
Certainly, though, none of the ISPs seem to be talking about QoS per subscriber. Instead they prefer to differentiate services, ranking P2P and streaming lower than other uses on the subscriber's behalf. PlusNet (a prominent UK ISP) have a pizza analogy [plus.net] to illustrate how sharing works: using their analogy, PlusNet would give you lots of Margherita slices, but make you wait for a Hawaiian even if you aren't eating anything else. Quite why they think this is acceptable is unknown to me; they should be able to enforce how many slices I get at the DSLAM, but still allow me to select the flavours at my house (maybe my local router applies QoS policies as it takes packets from the LAN onto the slower ADSL, or marks streams using the TOS bits in IPv4, or the much better IPv6 QoS features, to assist shaping deeper into the network).
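The split argued for above — the network enforces how much you get, your own router chooses what spends it — is exactly a per-subscriber token bucket at the DSLAM plus local prioritisation at home. A sketch of the enforcement half (the rates are illustrative):

```python
# Per-subscriber rate enforcement ("how many slices") as a token bucket; which
# packets spend the tokens ("which flavours") stays the subscriber's choice.
class TokenBucket:
    def __init__(self, rate_bytes_s: float, burst_bytes: float):
        self.rate, self.burst = rate_bytes_s, burst_bytes
        self.tokens, self.last = burst_bytes, 0.0
    def allow(self, size: int, now: float) -> bool:
        """Admit a packet of `size` bytes at time `now` if tokens remain."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False  # over the subscriber's share: queue or drop locally

tb = TokenBucket(rate_bytes_s=1_000_000, burst_bytes=10_000)
print(tb.allow(8_000, now=0.0))   # True: within the burst allowance
print(tb.allow(8_000, now=0.0))   # False: bucket drained
print(tb.allow(8_000, now=0.01))  # True: 10 ms at 1 MB/s refills 10 kB
```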
Re:Not all sessions experience the same congestion (Score:4, Interesting)
Are you saying that your ISP isn't living up to its contract with you? You don't need anything fancy to fix that -- just file a lawsuit. If they truly promised you unlimited bandwidth (as you interpret it), then you should easily win.
On the other hand, you might not completely understand your contract, and thus would take a serious beating in court. Either way, you need to accept the harsh reality that any ISP that offers broadband service (1+ Mbps) without transfer caps will go out of business within 2 years.
Not just Freenet (Score:3, Interesting)