Fixing the Unfairness of TCP Congestion Control 238
duncan99 writes "George Ou, Technical Director of ZDNet, has an analysis today of an engineering proposal to address congestion issues on the internet. It's an interesting read, with sections such as "The politicization of an engineering problem" and "Dismantling the dogma of flow rate fairness". Short and long term answers are suggested, along with some examples of what incentives it might take to get this to work. Whichever side of the neutrality debate you're on, this is worth consideration."
Neutrality debate? (Score:2, Insightful)
There is a debate? I thought it was more like a few monied interests decided "there is a recognized correct way to handle this issue; I just make more profit and have more control if I ignore that." That's not the same thing as a debate.
Good luck with that.. (Score:4, Insightful)
So right, yet so wrong (Score:5, Insightful)
The author of this article is clearly exploiting the novelty of a technological idea to promote his slightly related political agenda, and that's deplorable.
Not all protocols should be supported equally (Score:0, Insightful)
Wag their fingers? (Score:2, Insightful)
Between the data transfer amount and THAT line, this reads like a puff piece. It's not as if P2P applications were the first to come up with multiple connections, either; I'm pretty sure download managers like "GetRight" did it first.
ATTN CmdrTaco: it's not a democracy because ... (Score:3, Insightful)
Those networks that show consistently boorish behavior toward other networks eventually find themselves isolated or losing customers (e.g. Cogent, which somehow still manages to retain some business, doubtless because they're the cheapest transit you can scrape by with in most cases; anybody who relies on them is inevitably sorry).
The Internet will be a democracy when every part of the network is funded, built and maintained by the general public. Until then, it's a loose confederation of independent networks who cooperate when it makes sense to do so. Fortunately, the exceedingly wise folks that wrote the protocols that made these networks possible did so in a manner that encourages interconnectivity (and most large networks tend to be operated by folks with similar clue - when they're not, see the previous paragraph).
Not everything can be (or even should be) a democracy. Now get off my lawn, you damn hippies.
Re:Not all protocols should be supported equally (Score:1, Insightful)
There is no such thing as a "disruptive protocol"; it's not as if there's some kid with Tourette's screaming obscenities at the back of the classroom, keeping other people from learning. No protocol goes out and kneecaps other packets on its own. If the ISP wants to sell a megabit of bandwidth, it has plenty of tools available to make sure I don't take more than my megabit, tools that don't rely on specifically targeting protocols like VoIP or iTMS that compete with products the ISP sells.
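The protocol-agnostic tools the comment alludes to really do exist: a plain per-subscriber token bucket caps throughput without ever inspecting ports or payloads. A minimal sketch in Python (the class, the rates, and the burst size are illustrative, not any vendor's API):

```python
import time

class TokenBucket:
    """Per-subscriber rate limiter: forwards packets only while tokens remain."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.capacity = burst_bytes     # maximum burst size in bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True      # forward the packet
        return False         # drop (or queue) the packet

# One bucket per subscriber: a 1 Mbit/s plan with a 64 KB burst allowance.
bucket = TokenBucket(rate_bps=1_000_000, burst_bytes=64_000)
```

Because the limiter never looks inside the packet, VoIP, iTMS, and BitTorrent are all treated identically: the subscriber simply cannot exceed the megabit they bought.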
Protocol filtering != Source/Destination filtering (Score:4, Insightful)
What draws the large objection is source/destination filtering: "I'm going to downgrade service on packets coming from Google Video, because they haven't paid our 'upgrade' tax, and, coincidentally, we're invested in YouTube." This is an entirely different issue, and is not an engineering issue at all; it is entirely political. We know it is technologically possible. People block sites today, for parental censorship reasons among others. It would be little challenge, as an engineer, to set an arbitrary set of packets from a source address to a VERY low priority. This, however, violates what the internet is for, and in the end, if my ISP is doing this, am I really connected to the "Internet", or just to a dispersed corporate net, similar to the old AOL?
This is, and will be, a political question, and if it goes the wrong way, it will destroy what is fundamentally interesting about the net: the ability, with one connection, to talk to anyone else, anywhere in the world, no different than if they were in the next town over.
Confusing... (Score:4, Insightful)
I need coffee before I'll really understand this, but here's a first attempt:
Ok, first of all, that isn't about TCP congestion avoidance, at least not directly. (Doesn't Skype use UDP, anyway?)
But the problem here, I think, is that George Ou is assuming that Comcast is deliberately targeting P2P, and moreover, that they have no choice but to deliberately target P2P. I'd assumed that they were simply targeting any application that uses too many TCP connections -- thus, BitTorrent can still work, and still be reasonably fast, by decreasing the number of connections. Make too many connections and Comcast starts dropping them, no matter what the protocol.
Well, where is our money going each month?
But more importantly, the trick here is that no ISP guarantees any peak bitrate, or even an average bitrate. Very few ISPs even tell you how much bandwidth you are allowed to use, but most reserve the right to terminate service for any reason, including "too much" bandwidth. Comcast tells you how much bandwidth you may use in units of songs, videos, etc., rather than bits or bytes -- kind of insulting, isn't it?
I would be much happier if ISPs were required to disclose, straight up, how much total bandwidth they have (up and down), distributed among how many customers. Or, at least, full disclosure of how much bandwidth I may use as a customer. Otherwise, I'm going to continue to assume that I may use as much bandwidth as I want.
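The disclosure asked for above is just division: total uplink over subscriber count gives the floor each customer could actually rely on, and advertised rate against that floor gives the oversubscription ratio. A back-of-the-envelope sketch (every figure here is hypothetical):

```python
# Hypothetical figures: a 10 Gbit/s uplink shared by 20,000 subscribers,
# each sold a "6 Mbit/s" plan.
uplink_bps = 10_000_000_000
subscribers = 20_000
advertised_bps = 6_000_000

# Worst case: everyone transmits at once, the uplink splits evenly.
guaranteed_floor = uplink_bps / subscribers

# How many times over the ISP has sold its actual capacity.
oversubscription = advertised_bps * subscribers / uplink_bps

print(f"Guaranteed floor per customer: {guaranteed_floor / 1e6:.1f} Mbit/s")
print(f"Oversubscription ratio: {oversubscription:.0f}:1")
```

With these made-up numbers the "6 Mbit/s" plan carries a 0.5 Mbit/s guarantee and a 12:1 oversell, which is exactly the kind of figure the comment says customers ought to see up front.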
Yes, it is a tricky engineering problem. But it's also a political one, as any engineering solution would have to benefit everyone, and not single out individual users or protocols. Most solutions I've seen that accomplish this also create a central point of control, which makes them suspect -- who gets to choose what protocols and usage patterns are "fair"?
Alright. But as I understand it, this is a client-side implementation. How do you enforce it?
Nope. What I wonder is why a P2P user might want to do that, rather than install a different TCP implementation -- one which tags every single TCP connection as "weighted".
Oh, and who gets to tag a connection -- the source, or the destination? Remember that on average, some half of th
FUD (Score:5, Insightful)
Biased and poorly written (Score:1, Insightful)
But any idiot knows this isn't true: opening multiple streams only hurts you locally (causing everyone else in your house major slowdown and latency). The maximum download rate is the same, governed by your modem speed (1.5mbit, 5mbit, etc.). I'm not suddenly downloading at 100mbit and hogging the ISP's shared bandwidth. Also, if your ISP's TOS has no clause relating to bandwidth usage or limitations, you have the right to use all your available bandwidth 24/7, within reason. You pay for it. If that's not enough, then charge more. I've actually called my ISP on this before and specifically asked, "So it's OK if I am downloading at max speed 24 hours a day, all month?" And they unequivocally stated yes.
Also, doesn't anyone else find it funny that the author seems to think everyone should be limited to ONE stream? "Only big corporations need more..." WTF?
Re:This is a good proposal (Score:5, Insightful)
1. Could that possibly be due to the processor demand on the CNN servers at peak times?
2. Don't certain companies like Blizzard force P2P patches onto their customers?
3. Is your 30-second video file just as important as a technician using torrents to download a Linux distro for a business server they need up and running ASAP?
4. And lastly... someone using a torrent shouldn't soak up an ISP's entire bandwidth... unless someone at CNN is using the web server to host torrents, but that's nothing you or your ISP can control.
It has an Achilles heel (Score:3, Insightful)
Here is the glaring flaw in his proposal:
So he wants everyone, especially P2P users, to voluntarily update their TCP stack? Why in the world would a P2P user do that, when they know that (a) older stacks would be supported forever, and (b) a new stack would slow down their transfer rates? He does mention this problem:
There are two issues with this solution:
It's nice that he wants to find a solution to the P2P bandwidth problem, but this is not it.
Doesn't stand a chance (Score:5, Insightful)
1) TCP works well.
2) TCP is in a lot of code and cannot easily be replaced.
3) If you need something else, alternatives are there, e.g. UDP, RTSP and others.
Especially 3) is the killer. Applications that need something else are already using other protocols. This article, like so many similar ones before it, is just hot air from somebody who either didn't do their homework or wants attention without deserving it.
Re:Not all sessions experience the same congestion (Score:2, Insightful)
It's just not a good comparison.
Re:Not all sessions experience the same congestion (Score:5, Insightful)
Which is my point in a nutshell: people who want unlimited gigabytes should be paying a lot more than what I'm paying for my limited service ($15 a month). That's entirely and completely fair. Take more; pay more.
Just like electrical service, cell service, water service, et cetera, et cetera.
Re:Not all sessions experience the same congestion (Score:4, Insightful)
There are some people who think they should be able to download 1,000 or even 10,000 times more data than I download, yet still pay the exact same amount of money.
That's greed.
If you want more, then you should pay more than what other people pay.
Re:Not all sessions experience the same congestion (Score:3, Insightful)
But then they couldn't advertise that they are 10x the speed of dialup because they'd all probably be slower if they had to assume more than a few percent utilization.... :-)
Why this is an issue now (Score:5, Insightful)
As the one who devised much of this congestion control strategy (see my RFC 896 and RFC 970, years before Van Jacobson), I suppose I should say something.
The way this was supposed to work is that TCP needs to be well-behaved because it is to the advantage of the endpoint to be well-behaved. What makes this work is enforcement of fair queuing at the first router entering the network. Fair queuing balances load by IP address, not TCP connection, and "weighted fair queuing" allows quality of service controls to be imposed at the entry router.
The problem now is that the DOCSIS approach to cable modems, at least in its earlier versions, doesn't impose fair queuing at entry to the network from the subscriber side. So congestion occurs further upstream, near the cable headend, in the "middle" of the network. By then, there are too many flows through the routers to do anything intelligent on a per-flow basis.
We still don't know how to handle congestion in the middle of an IP network. The best we have is "random early drop", but that's a hack. The whole Internet depends on stopping congestion near the entry point of the network. The cable guys didn't get this right in the upstream direction, and now they're hurting.
I'd argue for weighted fair queuing and QOS in the cable box. Try hard to push the congestion control out to the first router. DOCSIS 3 is a step in the right direction, if configured properly. But DOCSIS 3 is a huge collection of tuning parameters in search of a policy, and is likely to be grossly misconfigured.
The trick with quality of service is to offer either high-bandwidth or low latency service, but not both together. If you request low latency, your packets go into a per-IP queue with a high priority but a low queue length. Send too much and you lose packets. Send a little, and they get through fast. If you request high bandwidth, you get lower priority but a longer queue length, so you can fill up the pipe and wait for an ACK.
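The either/or trade-off described above can be modeled as two per-IP queues: a short, high-priority queue that drops quickly when you send too much, and a long, low-priority queue that lets bulk traffic fill the pipe. A toy sketch in Python (queue depths and class names are made up for illustration; this is not DOCSIS configuration):

```python
from collections import deque

class TwoClassQueue:
    """Per-IP QoS: low latency OR high bandwidth, never both at once."""
    LATENCY_DEPTH = 4     # short queue: send too much and you lose packets
    BULK_DEPTH = 256      # long queue: fill the pipe and wait for ACKs

    def __init__(self):
        self.latency_q = deque()
        self.bulk_q = deque()

    def enqueue(self, packet, low_latency):
        q, depth = ((self.latency_q, self.LATENCY_DEPTH) if low_latency
                    else (self.bulk_q, self.BULK_DEPTH))
        if len(q) < depth:
            q.append(packet)
            return True
        return False          # queue full: packet dropped

    def dequeue(self):
        # Low-latency traffic is always served first; bulk gets the leftover.
        if self.latency_q:
            return self.latency_q.popleft()
        if self.bulk_q:
            return self.bulk_q.popleft()
        return None

port = TwoClassQueue()
port.enqueue("voip-frame", low_latency=True)
port.enqueue("torrent-chunk", low_latency=False)
# The VoIP frame is dequeued before the torrent chunk.
```

The short depth is what enforces honesty: a flow that requests the fast lane and then blasts traffic simply loses packets, so there is no incentive to mislabel bulk transfers as low-latency.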
But I have no idea what to do about streaming video on demand, other than heavy buffering. Multicast works for broadcast (non-on-demand) video, but other than for sports fans who want to watch in real time, it doesn't help much. (I've previously suggested, sort of as a joke, that when a stream runs low on buffered content, the player should insert a pre-stored commercial while allowing the stream to catch up. Someone will probably try that.)
John Nagle
Re:Not all sessions experience the same congestion (Score:3, Insightful)
Claiming that something is "entirely and completely fair" while using the cellular industry as your example strains credibility just a tad.
There is nothing fair about the billing system used by the wireless industry. It's a holdover from the early '90s, when spectrum was limited and the underlying technology (AMPS) was grossly inefficient in its use of said spectrum. Modern technology is drastically more efficient at cramming calls into the same amount of spectrum, and the carriers have much more spectrum now than at any point in the past.
Do you think charging $0.15 for a 160-byte text message is "fair"? Do you think $0.40/min overages are "fair"? Why are the first 450 minutes worth 8.8 cents each ($39.99 / 450) while minute #451 is worth four and a half times as much ($0.40)? That's your model of fairness?
There is nothing fair about the way the wireless industry operates, least of all its billing practices. Is this seriously the model you want to see adopted for the internet? Charges for individual services way above the actual cost (*cough* SMS *cough*), and overages that bear no relation to the actual cost to the carrier and exist solely to pad the bottom line?
If you can't support it don't sell it! (Score:5, Insightful)
We were buying T1 and T3 lines for video streaming, and the ISPs were getting upset that we were using 90% of the capacity they sold us. Apparently they had specced out their costs based on office web surfing, and on older telco traffic models, where 100 lines of outbound capacity for every 10,000+ phone lines was enough to support 95% of peak throughput.
But we concluded that if you sell us 1.5Mbps, we damn well better be able to use 1.5Mbps; don't blame us for using what was sold to us.
Well, I see this as the same problem. If Comcast or Verizon sells me internet at a data rate, then I expect to be able to use all of it. There is nothing unfair about me using what I was sold. If they don't like it, then they need to change their contractual agreements with me and change their hardware to match!
Same goes for the internal infrastructure, backbones, and exchange points. If you can't support it, don't sell it! Don't attack the P2P users; they are using what they PAID FOR and what was sold to them!!! If they are not getting it, they should file a class action suit.
It's no different than if your local cable company decided that 4 hours of TV was your limit and started degrading your reception if you watched more, even though this wasn't in the contract you signed up for.
On the other side, P2P should be given the means to hug the edges of the network. By this I mean that communication between two cable modem or DSL users off the same upstream router (fewer hops) should be preferred and more efficient, not clogging up the more costly backbones. Currently, P2P doesn't take any of that into consideration. Maybe ISPs could consider a technical solution to that, rather than trying to deny customers the very access they are contractually bound to provide...
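A crude way to bias a P2P client toward the network edge is to prefer peers whose addresses share a longer bit prefix with yours, a rough proxy for "same ISP, same region, fewer hops." A sketch assuming IPv4 peers (the addresses and function names are invented for illustration; real clients would want topology data, not just prefixes):

```python
import ipaddress

def prefix_match(a, b):
    """Length of the common leading bit prefix of two IPv4 addresses,
    used here as a crude proxy for network proximity."""
    diff = int(ipaddress.IPv4Address(a)) ^ int(ipaddress.IPv4Address(b))
    return 32 - diff.bit_length()

def prefer_local(my_ip, candidates):
    # Sort candidate peers so the topologically closest come first.
    return sorted(candidates, key=lambda p: prefix_match(my_ip, p),
                  reverse=True)

peers = ["203.0.113.40", "198.51.100.7", "203.0.113.200"]
print(prefer_local("203.0.113.17", peers))
```

A client that fills its connection slots from the front of this list would, other things being equal, keep more traffic off the backbone, which is exactly the incentive the comment says is currently missing.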
Re:Not all sessions experience the same congestion (Score:3, Insightful)
Re:Confusing... (Score:3, Insightful)
No, Comcast is specifically examining your data and is specifically forging packets to kill P2P connections.
(a) George Ou is a corporate shill; and
(b) George Ou considers BitTorrent and all P2P teh evilz of teh piratez.
So his position is that Comcast should be doing exactly what they are doing, spying on your data and killing your connection whenever you use teh evilz P2P.
His position is that ISPs should continue to sell flat-rate "all you can eat" internet, but that they should spy on your traffic and be free to block anything they don't like. Of course, somehow in George Ou fantasyland, examining the CONTENT of your data to see whether it is P2P, and filtering P2P content, is somehow magically not content-based, and people on the Net Neutrality side calling it content-based filtering are teh evilz liars.
His position is that the EFF and other Net Neutrality defenders are teh evilz for suggesting any sort of content-neutral but usage-aware internet plans, because in Australia there is a variety of plans that limit usage or charge for high usage, and they are all obscenely expensive. He rants that the EFF and Net Neutrality advocates are going to saddle everyone with crazy-high internet bills. Of course, he is being deliberately STUPID in neglecting the fact that obscene Australian ISP fees have absolutely nothing to do with usage-aware plans: Australian prices are crazy high because relatively few Australians, sparsely spread across a continent on the opposite side of the planet from everyone else, are splitting the cost of an undersea pipe to the rest of the world.
Re:Not all sessions experience the same congestion (Score:3, Insightful)
Losing access is completely unacceptable. Reduced bandwidth above a certain tier would be tolerable, but barely. You are, however, assuming that it is reasonable for total usage to be capped at all, and in so doing are buying into the B.S. arguments of the sleazier ISPs.... There's no good reason to do either one. If the ISPs are not irresponsibly overselling their capacity, neither a cap nor a usage-metered rate should be necessary.
That said, if I had to choose something that I would consider acceptable, it would be this: every few minutes, each user is lumped into one of three categories: somebody doing intense downloading, casual browsing, or "offline" (only a trickle of bandwidth usage). The cutover point between casual browsing and intense downloading should depend in part on what percentage of the time a user has been "offline" and in part on the time of day/overall network utilization. A few services like VoIP are prioritized through the usual QoS techniques. All other network connections are prioritized based on which group the user fell into during the preceding five minute period. Users in the "offline" state get top priority for their trickle of bandwidth. Users in the casual browsing category get the next highest priority. Whatever bandwidth remains from the ISP is divided equally among people doing heavy downloading. This categorization should be reevaluated every few minutes so that a user is not penalized for an entire month or whatever because they downloaded a movie off of iTunes the night before.
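The three-bucket scheme above is easy to prototype: classify each user by bytes moved in the most recent window, then map the bucket to a scheduling priority that is recomputed every window. A sketch with entirely made-up thresholds (the window length and byte limits are the tunable knobs the comment leaves open):

```python
# Hypothetical thresholds for one five-minute window, in bytes.
OFFLINE_MAX = 1_000_000       # a trickle: keepalives, mail checks
CASUAL_MAX = 50_000_000       # ordinary browsing and streaming bursts

def classify(bytes_last_window):
    """Bucket a user by recent usage, re-evaluated every window."""
    if bytes_last_window <= OFFLINE_MAX:
        return "offline"
    if bytes_last_window <= CASUAL_MAX:
        return "casual"
    return "heavy"

# Smaller number = served first; "heavy" users split the leftover equally.
PRIORITY = {"offline": 0, "casual": 1, "heavy": 2}

def priority(bytes_last_window):
    return PRIORITY[classify(bytes_last_window)]
```

Because the bucket is recomputed every window, last night's iTunes movie costs a user priority for only a few minutes rather than the rest of the month, which is the property the comment is arguing for.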
With such a scheme, there's no reason whatsoever to do metered or capped usage. Downloaders still get almost as much bandwidth as they otherwise would, but casual users would see reliably fast performance. This should be doable without any hardware infrastructure enhancements. Frankly, there's really no good reason to do metered usage. It is pretty clear that the only people who truly want that are the Telcos, and their reasons have nothing to do with user satisfaction or network performance and everything to do with being envious of the mobile phone operators and their ability to screw^H^H^H^H^Hnickel-and-dime their customers....