Fixing the Unfairness of TCP Congestion Control 238
duncan99 writes "George Ou, Technical Director of ZDNet, has an analysis today of an engineering proposal to address congestion issues on the internet. It's an interesting read, with sections such as "The politicization of an engineering problem" and "Dismantling the dogma of flow rate fairness". Short and long term answers are suggested, along with some examples of what incentives it might take to get this to work. Whichever side of the neutrality debate you're on, this is worth consideration."
Not all sessions experience the same congestion (Score:5, Interesting)
Re: (Score:3, Informative)
Re:Not all sessions experience the same congestion (Score:5, Interesting)
Here's an alternative (but equally effective) way of reducing congestion - ask p2p users to download less. Because that is what this proposal amounts to. A voluntary measure to hammer your own bandwidth for the greater good of the network will not succeed. The idea that applications should have "fair" slices of the available bandwidth is ludicrous. What is fair about squeezing email and p2p into the same bandwidth profile?
This seems to be a highly political issue in the US. Every ISP that I've used in the UK has used the same approach - traffic shaping using QoS on the routers. Web, email, VoIP and almost everything else are "high priority"; p2p is low priority. This doesn't break p2p connections, or reset them the way Comcast has done. But it means that streams belonging to p2p traffic will back off more because there is a higher rate of failure. It "solves" the problem without a crappy user-applied bandaid.
It doesn't stop the problem that people will use as much bandwidth for p2p apps as they can get away with. This is not a technological problem and there will never be a technological solution. The article has an implicit bias when it talks about users "exploiting congestion control" and "hogging network resources". Well duh! That's why they have network connections in the first place. Why is the assumption that a good network is an empty network?
All ISPs should be forced to sell their connections based on target utilisations. I.e. here is a 10Mb/s connection, at 100:1 contention, we expect you to use 0.1Mb/s on average, or roughly 32GB a month. If you are below that then fine; if you go above it then you get hit with per-GB charges. The final point is the numbers: 10Mb/s is slow for the next-gen connections now being sold (24Mb/s in the UK in some areas), and 100:1 is a large contention ratio. So why shouldn't someone use 32GB of traffic on that connection every month?
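(A quick sanity check of that arithmetic, as a sketch in Python; it assumes a 30-day month and decimal gigabytes:)

    # Back-of-the-envelope check of the contention math above.
    link_mbps = 10                     # advertised line rate, Mb/s
    contention = 100                   # 100:1 contention ratio
    seconds_per_month = 30 * 24 * 3600

    avg_mbps = link_mbps / contention                        # 0.1 Mb/s fair share
    gb_per_month = avg_mbps * seconds_per_month / 8 / 1000   # Mb -> MB -> GB
    print(f"{avg_mbps} Mb/s average ~= {gb_per_month:.0f} GB/month")  # ~32 GB/month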
Re: (Score:2)
Re: (Score:2)
15,000 versus 250 megabytes.
You SHOULD pay more.
Just like a cellphone.
Some fools pay $90 a month in overage charges. I pay $5 a month and use my minutes sparingly. People who use more minutes/gigabytes should pay more than the rest of us pay. That's entirely fair and e
Re: (Score:2, Insightful)
It's just not a good comparison.
Re:Not all sessions experience the same congestion (Score:5, Insightful)
Which is my point in a nutshell: people who want unlimited gigabytes should be paying a lot more than what I'm paying for my limited service ($15 a month). That's entirely and completely fair. Take more; pay more.
Just like electrical service, cell service, water service, et cetera, et cetera.
Re: (Score:3, Insightful)
Re:Not all sessions experience the same congestion (Score:4, Insightful)
There are some persons who think they should be able to download 1000 or even 10,000 times more data than what I download, and yet still pay the exact same amount of money.
That's greed.
If you want more, then you should pay more than what other people pay.
Re: (Score:3, Insightful)
Re:Not all sessions experience the same congestion (Score:4, Interesting)
Are you saying that your ISP isn't living up to its contract with you? You don't need anything fancy to fix that -- just file a lawsuit. If they truly promised you unlimited bandwidth (as you interpret it), then you should easily win.
On the other hand, you might not completely understand your contract, and thus would take a serious beating in court. Either way, you need to accept the harsh reality that any ISP that offers broadband service (1+ Mbps) without transfer caps will go out of business within 2 years.
Re:Not all sessions experience the same congestion (Score:5, Funny)
The author of the article, George Ou, explains why he thinks you are stupid and evil for suggesting such a thing. [zdnet.com] Well, he doesn't actually use the word "stupid" and I don't think he actually uses the word "evil", but yeah, that is pretty much what he says.
You see, in Australia they have a variety of internet plans like that. And the one thing that all of the plans have in common is that they are crazy expensive. Obscenely expensive.
So George Ou is right and you are wrong and stupid and evil, and the EFF is wrong and stupid and evil, and all network neutrality advocates are wrong and stupid and evil, and you are all going to screw everyone over and force everyone to pay obscene ISP bills. If people don't side with George Ou, the enemy is going to make you get hit with a huge ISP bill.
Ahhhh... except the reason Australian ISP bills are obscene might have something to do with the fact that there are a fairly small number of Australians spread out across an entire continent on the bumfuck other side of the planet from everyone else.
Which might, just possibly MIGHT, mean that the crazy high Australian ISP rates kinda sorta have absolutely no valid connection to those sorts of usage-relevant ISP offerings.
So that is why George Ou is right and why you are wrong and stupid and evil and why no one should listen to your stupid evil alternative. Listen to George Ou and vote No on network neutrality or else the Network Neutrality Nazis are gonna make you pay crazy high for internet access.
Re: (Score:2)
Re: (Score:2)
Metered access is a bad idea. It's just like cell phone charges. You use more than some limit one month and suddenly your $45 phone bill just went up to $200. That's the last thing I would want in an ISP, and indeed, I'm considering moving my home phone to VoIP so I don't have to pay overages for phone bills, either.
I'm not someone who runs BitTorrent constantly; that's not why I'm opposed to metered access. I just want to know that my bill at the end of the month will be a certain amount, and I would
Re: (Score:3, Insightful)
Losing access is completely unacceptable. Reduced bandwidth above a certain tier would be tolerable, but barely. You are, however, assuming that it is reasonable for total usage to be capped at all, and in so doing, are buying into the B.S. arguments of the sleazier ISPs.... There's no good reason to do either one. If the ISP is not irresponsibly overselling their capacity, neither a cap nor a usage-metered rate should be necessary.
That said, if I had to choose something that I would consider accepta
Ass end (Score:2)
I believe that the prime-ministerial term is "ass-end of the world". A proud moment for all Australians =), although, Colin Carpenter made us prouder when he said that Melbourne was the Paris end of the ass-end of the world.
Re: (Score:2)
Re: (Score:2)
What I don't understand is why this concerns TCP at all. An ISP's job is surely to send and receive IP datagrams on be
Re: (Score:2)
I agree very strongly with this. You are correct, this amounts to users volunteering to throttle their own bandwidth, and it will never work.
Another proposal would be for backbones and network interconnects to apply some sort of fairness discipline to traffic coming from the various networks. This would give ISPs incentive to throttle and prioritize appropriately. ISPs also need to modify their TOS to make it explicit that you have a burst bandwidth and a continuous bandwidth and that you cannot constan
A single slow connection changes your TCP window (Score:4, Informative)
You can change them on the fly by echoing the name into your procfs, IIRC. Also, if you have the stomach for it, and two connections to the internet, you can load balance and/or stripe them using Linux Advanced Routing & Traffic Control [lartc.org] (mostly the ip(1) command). Very cool stuff if you want to route around a slow node or two (check out the multiple path stuff) at your ISP(s).
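For reference, a minimal sketch of the procfs approach on Linux (these sysctl paths are standard; writing requires root, and "cubic" is just an example algorithm name):

    # Inspect and change the system-wide TCP congestion control algorithm via procfs.
    with open("/proc/sys/net/ipv4/tcp_available_congestion_control") as f:
        print("available:", f.read().strip())
    with open("/proc/sys/net/ipv4/tcp_congestion_control") as f:
        print("current:", f.read().strip())
    # Equivalent to: echo cubic > /proc/sys/net/ipv4/tcp_congestion_control
    with open("/proc/sys/net/ipv4/tcp_congestion_control", "w") as f:
        f.write("cubic")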
Re: (Score:2)
If ISPs would just build their networks to handle the speeds they sell instead of running around with their hands in the air over the fact the 'net has finally evolved to the point where there are reasons for an individu
Re: (Score:2)
Or better yet, advertise the connection realistically. i.e. If your network can't handle half your users doing 10 megabit video downloads, then sell them as 1 megabit lines instead. Downsize the marketing to reflect actual performance capability.
Re: (Score:3, Insightful)
But then they couldn't advertise that they are 10x the speed of dialup because they'd all probably be slower if they had to assume more than a few percent utilization.... :-)
Re:Not all sessions experience the same congestion (Score:4, Informative)
Right. The article seems to be written on the assumption that the bandwidth bottleneck is always in the first few hops, within the ISP. And in many cases for home users this is probably reasonably true; ISPs have been selling cheap packages with 'unlimited' and fast connections on the assumption that people would use a fraction of the possible bandwidth. More fool the ISPs that people found a use [plus.net] for all that bandwidth they were promised.
Obviously AIMD isn't going to fix this situation - it's not designed to. Similarly, expecting all computers to be updated in any reasonable timeframe won't happen (especially as a P2P user may have little motivation to 'upgrade' to receive slower downloads). Still, since we're assuming the bottleneck is in the first hops, it follows that the congestion is in the ISP's managed network. I don't see why the ISP can't therefore tag and shape traffic so that their routers equally divide available bandwidth between each user, not each TCP stream. In fact, most ISPs will give each home subscriber only 1 IP address at any point in time, so it should be easy to relate a TCP stream (or an IP packet type) to a subscriber. While elements of the physical network are always shared [plus.net], each user can still be given a logical connection with guaranteed bandwidth dimensions. This isn't a new concept either; it's just multiplexing using a suitable scheduler, such as rate-monotonic (you get some predefined amount) or round-robin (you get some fraction of the available amount).
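To illustrate, a toy sketch of that per-user round-robin scheduling in Python (hypothetical queues keyed by subscriber IP; real routers do this in hardware):

    from collections import deque

    # Toy round-robin scheduler: one queue per subscriber IP, one packet per turn,
    # so 100 TCP streams from one user can't crowd out a single stream from another.
    queues = {}  # subscriber IP -> deque of packets

    def enqueue(src_ip, packet):
        queues.setdefault(src_ip, deque()).append(packet)

    def dequeue_round_robin():
        """Yield packets, taking at most one per subscriber per pass."""
        while any(queues.values()):
            for ip, q in list(queues.items()):
                if q:
                    yield ip, q.popleft()

    enqueue("10.0.0.1", "p2p-1"); enqueue("10.0.0.1", "p2p-2"); enqueue("10.0.0.1", "p2p-3")
    enqueue("10.0.0.2", "web-1")
    print(list(dequeue_round_robin()))
    # [('10.0.0.1', 'p2p-1'), ('10.0.0.2', 'web-1'), ('10.0.0.1', 'p2p-2'), ('10.0.0.1', 'p2p-3')]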
Such 'technology' could be rolled out by ISPs according to their roadmaps (although here in the UK it may require convincing BT Wholesale to update some of their infrastructure) and without requiring all users to upgrade their software or make any changes. However, I suspect the "politicization of an engineering problem" occurs because ISPs would rather do anything but admit they made a mistake in previous marketing of their services, raise subscriber prices, or make the investment to correctly prioritise traffic on a per-user basis, basically knocking contention rates right down to 1:1. It's much easier to simply ban or throttle P2P applications wholesale and blame high bandwidth applications.
I have little sympathy for ISPs right now; the solution should be within their grasp.
How amazingly appropriate. Re:Goatse (Score:2)
That ass is what broadcasters and the people attacking net neutrality would like to shovel on everyone. The issue is free speech and the broadcaster goal is to eliminate competition so we are all forced to keep watching their usual shit.
I don't know why anyone would listen to Ou but his core arguments are easy to dismantle. This is the same ass who savagely attacked researcher Peter Gutmann [cypherpunks.to] only to whine later when Vista crapped out for him [zdnet.com]. The core argument so insultingly put forth is that selective b
Re: (Score:2)
What Ou seems to be saying in his own spitting rant is that P2P is blocking legitimate web traffic and this isn't fair (pout pout)!
Seems to me that if 99% of web traffic is P2P (to pull a number out of my hat, or maybe goatse's orifice), that is what users want, and that makes it legitimate. Another way to put it is that maybe the non-P2P world needs to catch up. I don't h
Weighted TCP solution (Score:5, Interesting)
Re: (Score:3, Interesting)
On the internet as
Sadly, no, upgrading doesn't help... (Score:4, Informative)
Re: (Score:2)
Also, see the appendix in
http://www.ietf.org/internet-drafts/draft-briscoe-tsvwg-relax-fairness-00.txt [ietf.org]
Re: (Score:2)
I agree,but it's hard. (Score:3, Interesting)
A New Way to Look at Networking (Score:5, Informative)
Not just Freenet (Score:3, Interesting)
Neutrality debate? (Score:2, Insightful)
There is a debate? I thought it was more like a few monied interests decided "there is a recognized correct way to handle this issue; I just make more profit and have more control if I ignore that." That's not the same thing as a debate.
Good luck with that.. (Score:4, Insightful)
So right, yet so wrong (Score:5, Insightful)
The author of this article is clearly exploiting the novelty of a technological idea to promote his slightly related political agenda, and that's deplorable.
Re: (Score:3, Informative)
Most people should be encrypting a large chunk of what goes across the Internet. Anything which sends a password or a session cookie should be encrypted. That's going to be fairly hard on traffic shapers.
Re:So right, yet so wrong (Score:4, Interesting)
Not entirely true. It works better the more you know about your data, but even knowing nothing you can get good results with a simple rule of prioritizing small packets.
My original QoS setup was just a simple rule: anything small gets priority over anything large. This is enough to make (most) VoIP, games, SSH, and anything else that's lots of small real-time packets get through ahead of queues full of large packets (bulk transfers).
Admittedly BitTorrent was what hurt my original setup, as you end up with a lot of slow peers each trickling transfers in slowly. You could get around this with a hard limit on overall packet rate, or with connection tracking and limiting the number of IPs you hold a connection with per second (and then block things like UDP and ICMP).
Yeah, it's an ugly solution, but we're all the ISP's bitch anyway, so they can do what they want.
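For what it's worth, the small-packets-first rule fits in a few lines; a rough sketch in Python (the 128-byte threshold is made up; tune it to your link):

    import heapq

    # Rough sketch of "small packets first": two-level priority keyed on size.
    SMALL = 128   # bytes; arbitrary threshold separating ACKs/VoIP from bulk data
    _seq = 0
    pq = []       # (priority, arrival order, packet)

    def enqueue(packet: bytes):
        global _seq
        prio = 0 if len(packet) <= SMALL else 1   # 0 = high priority
        heapq.heappush(pq, (prio, _seq, packet))
        _seq += 1

    def dequeue() -> bytes:
        return heapq.heappop(pq)[2]

    enqueue(b"x" * 1500)   # full-size bulk transfer segment
    enqueue(b"ack")        # tiny ACK / VoIP-sized packet
    print(dequeue())       # b'ack' -- the small packet jumps the queue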
Re: (Score:2)
GP is not a troll (Score:2)
There's a graph that shows a BitTorrent user as the highest bandwidth user over a day, and then puts a YouTube surfer and a websurfer on the same bandwidth level as an Xbox gamer and things of that nature. Those are so far off from each other that it is despicable.
Every one of the ones I mentioned in the previous paragrap
Upstream != downstream (Score:2)
Re: (Score:2)
Yes I recognize downstream is far greater and didn't mean to misrepresent what I was stating. Thank you for clarifying.
Is Xbox Live Silver "online gaming"? (Score:2)
I did mix them up for web browsing, but gaming and web surfing are very much not in the same category.
You're right. For online games with real-time interaction, Page 2 of the article [zdnet.com] has the table. Perhaps by "online gaming", someone meant playing Flash/Java/JS games, activating Steam games, or playing on Xbox Live Silver (XBLA games, achievements, etc). Those have a similar bandwidth profile to HTTP transactions.
Re: (Score:2)
Duh, higher bandwidth applications take more bandwidth. Expecting parity between low bandwidth and high bandwidth applications is fundamentally biased against high bandwidth applications. If I'm an IRC user, and you
Re: (Score:2)
Yes, there's an engineering problem to solve. No, you aren't allowed to violate the Terms of Service to solve it.
Wag their fingers? (Score:2, Insightful)
ATTN CmdrTaco: it's not a democracy because ... (Score:3, Insightful)
Those networks that show consistently boorish behavior to other networks eventually find themselves isolated or losing customers (e.g. Cogent, although somehow they still manage to retain some business - doubtless due to the fact that they're the cheapest transit you can scrape by with in most cases, although anybody who relies on them is inevitably sorry).
The Internet will be a democracy when every part of the network is funded, built and maintained by the general public. Until then, it's a loose confederation of independent networks who cooperate when it makes sense to do so. Fortunately, the exceedingly wise folks that wrote the protocols that made these networks possible did so in a manner that encourages interconnectivity (and most large networks tend to be operated by folks with similar clue - when they're not, see the previous paragraph).
Not everything can be (or even should be) a democracy. Now get off my lawn, you damn hippies.
Re: (Score:2)
Dad? Is that you?
Re: (Score:2)
The government's own actions to help secure that monopoly are part of the problem. Cable providers don't have to share their lines with competitors, despite havin
Re: (Score:2)
* gov't regulation (or lack thereof), combined with a woeful lack of due diligence in ensuring taxpayer investment sees a decent return (the POTS system was almost entirely subsidized by taxpayer dollars, and we're still paying for that initial investment in the form of surcharges and taxes on copper laid a hundred years ago in some cases, with further technological deployments (e.g. FTTP) coming late or not at all, and always with grudging complai
Re: (Score:2)
I think the feds have been entirely too chummy with Ma Bell (and the cablecos, and BigCorp in general) for the last several decades. However, I'm very skeptical that the answer to poor federal legislation is additional federal legislation.
I think that the answer to poor federal legislation would be good federal legislation, but you're right that that's probably wishful thinking these days.
I guess the cure that I'd like to see is requirements that the line owners share their lines with competitors. In this way, at least competition has a shot at fixing the problem. We could examine alternatives later on, if that failed.
Right, but... (Score:2)
The problem, however, is that fairness is an externality. You COULD build a BitTorrent-type client which monitors congestion and applies AIMD-style fairness across all flows when it is clear that the congestion is common to the streams rather than on the other side.
But there is no incentive to do so! Unless everyone else did, your "fair" P2P protocol gets stomped on like any
Leaving it up to applications? (Score:2)
I don't think I could walk to the kitchen and get a beer faster than it would take P2P authors to exploit that.
Protocol filtering != Source/Destination filtering (Score:4, Insightful)
What brings a large objection is the Source/Destination filtering. I'm going to downgrade service on packets coming from Google Video, because they haven't paid our "upgrade" tax, and coincidentally, we're invested in YouTube. This is an entirely different issue, and is not an engineering issue at all. It is entirely political. We know it is technologically possible. People block sites today, for parental censorship reasons, among others. It would be little challenge, as an engineer, to set an arbitrary set of packets from a source address to a VERY low priority. This however violates what the internet is for, and in the end, if my ISP is doing this, am I really connected to the "Internet", or just a dispersed corporate net, similar to the old AOL?
This is, and will be, a political question, and if it goes the wrong way, it will destroy what is fundamentally interesting about the net: the ability to, with one connection, talk to anyone else, anywhere in the world, no different than if they were in the next town over.
Re:Protocol filtering != Source/Destination filter (Score:2)
Re:Protocol filtering != Source/Destination filter (Score:2)
Not in This World (Score:2)
Specifically for TCP, one can just hack into the OS kernel and force TCP to ignore all the congestion notifications etc., and thus hog all the bandwidth... (it's not that difficult)
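You don't even need a kernel hack to meddle with congestion behaviour per socket. On Linux the algorithm is selectable via a socket option; a sketch (this only switches between algorithms the kernel already has loaded, it doesn't let you ignore congestion signals outright):

    import socket

    # Per-socket congestion control selection on Linux (Python 3.6+).
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"reno")
    print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))  # b'reno...'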
Congestion shaping at client end? WTF? (Score:3, Interesting)
QoS is not Net neutrality (Score:2)
Adding latency to only $foocorp (where $foocorp != $isp) so $isp can get more money violates net neutrality. This is a very bad idea, and of borderline legality since the customer has already paid.
Confusing... (Score:4, Insightful)
I need coffee before I'll really understand this, but here's a first attempt:
Ok, first of all, that isn't about TCP congestion avoidance, at least not directly. (Doesn't Skype use UDP, anyway?)
But the problem here, I think, is that George Ou is assuming that Comcast is deliberately targeting P2P, and moreover, that they have no choice but to deliberately target P2P. I'd assumed that they were simply targeting any application that uses too many TCP connections -- thus, BitTorrent can still work, and still be reasonably fast, by decreasing the number of connections. Make too many connections and Comcast starts dropping them, no matter what the protocol.
Well, where is our money going each month?
But more importantly, the trick here is that no ISP guarantees any peak bitrate, or average bitrate. Very few ISPs even tell you how much bandwidth you are allowed to use, but most reserve the right to terminate service for any reason, including "too much" bandwidth. Comcast tells you how much bandwidth you may use, in units of songs, videos, etc, rather than bits or bytes -- kind of insulting, isn't it?
I would be much happier if ISPs were required to disclose, straight up, how much total bandwidth they have (up and down), distributed among how many customers. Or, at least, full disclosure of how much bandwidth I may use as a customer. Otherwise, I'm going to continue to assume that I may use as much bandwidth as I want.
Yes, it is a tricky engineering problem. But it's also a political one, as any engineering solution would have to benefit everyone, and not single out individual users or protocols. Most solutions I've seen that accomplish this also create a central point of control, which makes them suspect -- who gets to choose what protocols and usage patterns are "fair"?
Alright. But as I understand it, this is a client-side implementation. How do you enforce it?
Nope. What I wonder is why a P2P user might want to do that, rather than install a different TCP implementation -- one which tags every single TCP connection as "weighted".
Oh, and who gets to tag a connection -- the source, or the destination? Remember that on average, some half of th
Re: (Score:3, Insightful)
No, Comcast is specifically examining your data and is specifically forging packets to kill P2P connections.
(a) George Ou is a corporate shill; and
(b) George Ou considers BitTorrent and all P2P teh evilz of teh piratez.
So his position is that
Re: (Score:2)
While I do agree that Comcast is incompetent at best -- seriously, WTF are they doing telling us our limits in units of songs, emails, or photos? -- I'm just trying to get the facts straight.
FUD (Score:5, Insightful)
Re: (Score:2)
Are P2P applications gaming the system by opening multiple streams between each pair of endpoints? No.
What about download accelerators? On a congested server, I've seen a near-linear increase in bandwidth by opening multiple streams (which many servers now limit, but that's not really the point). When I go from 25kb/s to 100kb/s, I took that bandwidth from someone. Same with some slow international connections where there's plenty on both ends but crap in the middle. I would honestly say I'm gaming the system then. P2P has a "natural" large number of streams because it has so many peers, but there's no de
Re: (Score:3, Informative)
You're making the same mistake as the author of that article. What you fail to realise is precisely why the single connection did not operate as fast: because your kernel was slowing it down incorrectly. You are not fighting other users by opening
Re: (Score:2)
What's so wrong about this idea (Score:2)
Engineering Solution (Score:2)
Figure out the real committable bandwidth (available bandwidth / customer connections). Then, tag that amount of customer information coming into the network with a priority tag. Customers may prioritize what they want, and it will be respected up to the limit.
Example: 1000k connection shared between 100 people who each have 100k pipes. They get a committed 10k. The first 10k of packets in per second that are unmarked are marked "priority." Packets marked "low" are passed as low. Packets marked "high" or
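A rough sketch of that marking rule (illustrative only; the 10k committed rate is the figure from the example, and the one-second window is an assumption):

    import time

    # Sketch: the first `committed` bits of unmarked traffic per second get tagged
    # "priority"; everything beyond that, or anything marked "low", passes as low.
    class CommittedRateMarker:
        def __init__(self, committed_bits_per_sec=10_000):
            self.committed = committed_bits_per_sec
            self.window_start = time.monotonic()
            self.bits_this_window = 0

        def mark(self, packet_bits, user_mark=None):
            now = time.monotonic()
            if now - self.window_start >= 1.0:      # start a new one-second window
                self.window_start, self.bits_this_window = now, 0
            if user_mark == "low":                  # respect the customer's own marking
                return "low"
            if self.bits_this_window + packet_bits <= self.committed:
                self.bits_this_window += packet_bits
                return "priority"
            return "low"                            # over the committed rate

    m = CommittedRateMarker()
    print(m.mark(8_000), m.mark(8_000))   # priority low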
One way to implement this... (Score:3, Informative)
I'd think that a simple fix to Jacobson's algorithm could help a lot. Instead of resetting the transmission rate on just one connection on a dropped packet, reset all of them. This would have no effect on anyone using a single stream, and would eliminate problems when the source of the congestion is nearby. Variations on this theme would include resetting all connections for a single process or process group, which would throttle my P2P without affecting my browser. This alone would be more than enough incentive for me to adopt the patch: instead of having to schedule different bandwidth limits during the day, I could just let everything flow at full speed 24x7. And by putting the patch into the kernel, you'd have to worry less about individual applications and/or users deciding whether to adopt it.
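As a sketch, the per-group variant might look like this (toy cwnd bookkeeping only; a real implementation would live in the kernel's TCP stack):

    # Toy sketch: on a loss in any flow of a group, back off *every* flow in the
    # group (multiplicative decrease), not just the one that dropped a packet.
    class FlowGroup:
        def __init__(self, flows, init_cwnd=10):
            self.cwnd = {f: init_cwnd for f in flows}

        def on_ack(self, flow):
            self.cwnd[flow] += 1                # additive increase (simplified)

        def on_loss(self, flow):
            for f in self.cwnd:                 # group-wide multiplicative decrease
                self.cwnd[f] = max(1, self.cwnd[f] // 2)

    g = FlowGroup(["peer-a", "peer-b", "peer-c"])
    g.on_ack("peer-a")
    g.on_loss("peer-b")
    print(g.cwnd)   # {'peer-a': 5, 'peer-b': 5, 'peer-c': 5}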
low vs high priority marking (Score:2)
That is, instead of marking packets you really care about (VoIP packets, say) high priority, you mark the ones you don't care that much (bittorrent downloads) about as low priority?
I recall reading about low priority marks having interesting advantages over high priority marks. It had to do with the high priority marks relying on perverse incentives (almost all routers would have to play by the rules and the more th
It has an Achilles heel (Score:3, Insightful)
Here is the glaring flaw in his proposal:
So he wants everyone, especially P2P users, to voluntarily update their TCP stack? Why in the world would a P2P user do that, when they know that (a) older stacks would be supported forever, and (b) a new stack would slow down their transfer rates? He does mention this problem:
There are two issues with this solution:
It's nice that he wants to find a solution to the P2P bandwidth problem, but this is not it.
Re: (Score:2)
So he wants everyone, especially P2P users, to voluntarily update their TCP stack? Why in the world would a P2P user do that, when they know that (a) older stacks would be supported forever, and (b) a new stack would slow down their transfer rates?
I'm sure that if Microsoft pushed an update, it would handle more than half of the P2P community. Over time, when the successor to Vista arrived, you wouldn't have an older stack to fall back upon. Linux might be a bit harder, since old stacks could still float around forever, but there's nothing stopping anyone today from running a stack that has the Jacobson code disabled.
Instead of throttling based on per-host, though, I'd do it per process or process group. Right now, every P2P app that
Bandwidth still isn't free. (Score:4, Informative)
You've got three options,
#1 Have an uncapped uncontended link for the £20/month you pay - you'll get about 250kbps.
#2 Have a fast link with a low bandwidth cap - think 8Mbits with a 50GB cap and chargeable bandwidth after that at around ~ 50p-£1/GB
#3 Deal with an ISP who's selling bandwidth they don't have and expect them to try as hard as possible to make #1 look like #2 with no overage charges.
If you want a reliable fast internet connection you want to go with a company that advertises #2. If you can't afford #2, you can spend your time working against the techs at ISP #3, but expect them to go out of their way to make your life shit until you take your service elsewhere, because you cost them money.
I love my unfair kernel. (Score:2)
Driving Miss Internet (Score:4, Informative)
It occurred to me this morning that driving on public roadways and surfing the public networks were identical experiences for the vast majority of people. That experience being: "mine, mine, ALL MINE!... hahahaha!" AKA "screw you... it's all about me!"
Now, I have the joy of managing a global network with links to 150 countries AND a 30 mile one way commute. So, I get to see, in microcosm, how the average citizen behaves in both instances.
From a network perspective, charge by usage...period. Fairness only works in FAIRy tales.
We do very good traffic shaping and management across the world. QoS policies are very well designed and work. The end user locations do get charged an allocation for their network costs. So, you'd think the WAN would run nicely and fairly. After all, if the POS systems are impacted, we don't make money and that affects everyone, right?
Hardly. While we block obvious stuff like YouTube and Myspace, we have "smart" users who abuse the privilege. So, when we get a ticket about "poor network performance", we go back to a point before the problem report and look at the flows. 99 out of 100 times, it's one or more users hogging the pipe with their own agenda. Now, the branch manager gets a detailed report of what the employees were doing and how much it cost them. Of course, porn surfers get fired immediately. Abusers of the privilege just get to wonder what year they'll see a merit increase, if at all.
So, even with very robust network tuning and traffic shaping, the "me, me" crowd will still screw everybody else...and be proud that they did. Die a miserable death in prison you ignorant pieces of shit.
Likewise the flaming assholes I compete with on the concrete and asphalt network link between home and office every day. This morning, some idiot in a subcompact stuck herself about 2 feet from my rear bumper... at 70mph. If I apply ANY braking for ANY reason, this woman will collide with me. So, I tapped the brakes so she'd back off. She backed off with an upraised hand that seemed to say "yeah, I know I was in the wrong and being unsafe". She then performed 9 lane changes, all without signaling once, and managed to gain... wait for it... a whole SEVEN SECONDS of time over 10 miles of driving.
I see it every day. People driving with little regard for anyone else and raising the costs for the rest of us. On the network, or on the highway, same deal. And they feel like they did something worthwhile. I've talked to many users at work and the VAST majority are not only unapologetic, but actually SMUG. Many times, I'll get the "I do this at home, so it must be okay at work". To which I say, "well you cannot beat your wife and molest your kids at the office, now can you?"
My tolerance of, and faith in, my fellow man to "do the right thing" are at zero.
A technical solution (to TCP congestion control, etc.) is like teaching a pig to sing; horrible results. Charge the thieving, spamming bastards through the nose AND constrain their traffic. That'll get better results than any Pollyanna crap about "fair".
Re: (Score:2)
Economists call this the Tragedy of the Commons [sciencemag.org], and it's the reason driving in traffic sucks, and also the reason public toilets are filthy.
The Internet is fundamentally a shared infrastructure. BitTorrent and other protocols intentionally utilize that infrastructure unfairly. A BitTorrent swarm is like a pack of hundreds of cars driving 90 mph, both directions, in every lane including the shoulder. They cut y
Doesn't stand a chance (Score:5, Insightful)
1) TCP works well.
2) TCP is in a lot of code and cannot easily be replaced
3) If you need something else, alternatives are there, e.g. UDP, RTSP and others.
Especially 3) is the killer. Applications that need it are already using other protocols. This article, like so many similar ones before it, is just hot air by somebody who either did not do their homework or wants attention without deserving it.
Solution (Score:2, Interesting)
There are a few power companies who announce 24 hours in advance how much they're going to charge per kWh in any given hour, and their customers can time their usage to take advantage of slack capacity, since the prices are based on demand.
If we do the same thing with internet service *both in and out*, a real bandwidth hog is going t
Why this is an issue now (Score:5, Insightful)
As the one who devised much of this congestion control strategy (see my RFC 896 and RFC 970, years before Van Jacobson), I suppose I should say something.
The way this was supposed to work is that TCP needs to be well-behaved because it is to the advantage of the endpoint to be well-behaved. What makes this work is enforcement of fair queuing at the first router entering the network. Fair queuing balances load by IP address, not TCP connection, and "weighted fair queuing" allows quality of service controls to be imposed at the entry router.
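(To make that concrete, a toy model of per-IP fair queuing in the deficit-round-robin style; the quantum and weights here are illustrative, not a statement of what entry routers actually ship:)

    from collections import deque

    # Toy weighted fair queuing keyed by source IP: each host gets a byte quantum
    # per round in proportion to its weight, no matter how many TCP flows it opens.
    queues = {"10.0.0.1": deque([b"x" * 1500] * 4),   # bulk uploader, many flows
              "10.0.0.2": deque([b"y" * 200] * 2)}    # light interactive user
    weights = {"10.0.0.1": 1, "10.0.0.2": 1}
    deficit = {ip: 0 for ip in queues}
    QUANTUM = 1500   # bytes of credit per weight unit per round

    def schedule():
        while any(queues.values()):
            for ip, q in queues.items():
                deficit[ip] += QUANTUM * weights[ip]
                while q and len(q[0]) <= deficit[ip]:
                    deficit[ip] -= len(q[0])
                    yield ip, q.popleft()

    for ip, pkt in schedule():
        print(ip, len(pkt))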
The problem now is that the DOCSIS approach to cable modems, at least in its earlier versions, doesn't impose fair queuing at entry to the network from the subscriber side. So congestion occurs further upstream, near the cable headend, in the "middle" of the network. By then, there are too many flows through the routers to do anything intelligent on a per-flow basis.
We still don't know how to handle congestion in the middle of an IP network. The best we have is "random early drop", but that's a hack. The whole Internet depends on stopping congestion near the entry point of the network. The cable guys didn't get this right in the upstream direction, and now they're hurting.
I'd argue for weighted fair queuing and QOS in the cable box. Try hard to push the congestion control out to the first router. DOCSIS 3 is a step in the right direction, if configured properly. But DOCSIS 3 is a huge collection of tuning parameters in search of a policy, and is likely to be grossly misconfigured.
The trick with quality of service is to offer either high-bandwidth or low latency service, but not both together. If you request low latency, your packets go into a per-IP queue with a high priority but a low queue length. Send too much and you lose packets. Send a little, and they get through fast. If you request high bandwidth, you get lower priority but a longer queue length, so you can fill up the pipe and wait for an ACK.
But I have no idea what to do about streaming video on demand, other than heavy buffering. Multicast works for broadcast (non-on-demand) video, but other than for sports fans who want to watch in real time, it doesn't help much. (I've previously suggested, sort of as a joke, that when a stream runs low on buffered content, the player should insert a pre-stored commercial while allowing the stream to catch up. Someone will probably try that.)
John Nagle
Re:Why this is an issue now (Score:4, Interesting)
Seems to me that for ADSL it would be ideally placed in the DSLAM, where there is already a per-subscriber connection (in any case, most home users will only get 1 IP address, hence making a 1:1 mapping from subscriber to IP; nothing need be per TCP connection as the original article assumes). In fact, the wikipedia page on DSLAMs [wikipedia.org] says QoS is already an additional feature, mentioning priority queues.
So I'm left wondering why bandwidth hogs are still a problem for ADSL. You say that this is a "huge collection of tuning parameters", and I accept that correctly configuring this stuff may be complex, but this is surely the job of the ISPs. Maybe I'm overestimating the capabilities of the installed DSLAMs, in which case I wonder if BT's 21CN [btplc.com] will help.
Certainly though, none of the ISPs seem to be talking about QoS per subscriber. Instead they prefer to differentiate services, ranking P2P and streaming lower than other uses on the subscriber's behalf. PlusNet (a prominent UK ISP) have a pizza analogy [plus.net] to illustrate how sharing works - using their analogy, PlusNet would give you lots of Margherita slices, but make you wait for a Hawaiian even if you aren't eating anything else. Quite why they think this is acceptable is unknown to me; they should be able to enforce how many slices I get at the DSLAM, but still allow me to select the flavours at my house (maybe I get my local router to apply QoS policies when it takes packets from the LAN to the slower ADSL, or mark streams using TOS bits in IPv4, or the much better IPv6 QoS features, to assist the shaping deeper into the network).
Re:Why this is a non-issue now (Score:3, Informative)
Fairness is not the problem. Fairness is the wedge-issue that CATV-ISPs are trying to use to justify their behavior.
I personally like the rudimentary aspects of the weighted fair queuing proposal -- so let's imagine that we had it. Would Comcast still have a problem with too many upload bytes from too many homes competing for the upload path back to the CMTS? Yes.
The real problem is that CATV-ISPs are at their upper limits and FiOS is currently superior. Most CATV nets are DOCSIS 1.1, neighborhoods
If you can't support it don't sell it! (Score:5, Insightful)
We were buying T1 and T3 lines for use with video streaming, and the ISPs were getting upset that we were using 90% of the capacity they sold us. Apparently they specced out their cost based on office use doing web surfing, and based their models on older telco traffic models, where they needed 100 lines of outbound capacity for every 10,000+ phone lines, based on supporting 95% of the peak throughput.
But we concluded that if you are selling us 1.5Mbps, I damn well better be able to use 1.5Mbps; don't blame me when I use what was sold to me.
Well, I see this as the same problem. If Comcast or Verizon sells me internet at a data rate, then I expect to be able to use all of it. There is nothing unfair about me using what I was sold. If they don't like it then they need to change their contractual agreements with me and change their hardware to match!
Same goes with the internal infrastructure, backbones and exchange point. If you can't support it don't sell it! Don't attack the P2P users, they are using what they PAID FOR and what was sold to them!!! If they are not getting it, they should file a class action suit.
No more than if your local cable company decided that 4 hours of TV was your limit and they would start to degrade your reception if you watched more, though this wasn't in the contract you signed up for.
On the other side, P2P should be given the means to hug the edges of the network. By this I mean communication between 2 cable modem or DSL users running off the same upstream routers (fewer hops) should be preferable and more efficient, not clogging up the more costly backbones. Currently P2P doesn't take any of that into consideration. Maybe ISPs could consider some technical solution to that rather than trying to deny customers the very access they are contractually bound to provide...
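A crude sketch of what locality-aware peer selection could look like (longest-prefix match as a cheap stand-in for real topology data; efforts like P4P use ISP-provided hints instead):

    import ipaddress

    # Crude heuristic: prefer peers whose address shares the longest prefix with
    # ours, as a proxy for "fewer hops / same upstream router".
    def prefix_match_bits(a, b):
        x = int(ipaddress.ip_address(a)) ^ int(ipaddress.ip_address(b))
        return 32 - x.bit_length()

    def rank_peers(my_ip, peers):
        return sorted(peers, key=lambda p: prefix_match_bits(my_ip, p), reverse=True)

    print(rank_peers("203.0.113.7", ["198.51.100.9", "203.0.113.42", "203.0.112.1"]))
    # ['203.0.113.42', '203.0.112.1', '198.51.100.9'] -- nearest prefix first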
Another Clueless Moron with an Opinion from ZDNET (Score:3, Interesting)
- (think my car needs a grease-and-oilchange, so I'll go walk the dog - proposed solution bears no relationship to the problem)
- yet the first would be argued as an "unfair use" while the second is perfectly normal and acceptable behaviour
- after all, if the app developers were genuinely interested in playing nicely in the sandbox, they would already
- TCP congestion control "works" (ie as engineered) because it's inherent in the protocol implementation, does not require "enforcement" by the ISP
- ie most of your assumptions about "how this works" are wrong.
Anyone read their ISP Ts&Cs ? Ever?
IP is a *best effort* protocol.... we will punt your packet upstream and hope it gets there - have a nice day.
There is *no* guarantee of *anything*.
Now, as far as anything approaching a "solution" to the supposed "problem".
What about all the P2P developers marking their "data transmission" packets (whatever the protocol) with the lowest-of-the-low QoS markings.
--> "if you need to manage congestion, I am exceedingly eligible for shaping"
That would work nicely.
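From the application side that marking is a one-liner; a sketch (0x20 is the CS1 "scavenger"/lower-effort codepoint expressed as a TOS byte, and nothing obliges any router to honour it):

    import socket

    # Sketch: a P2P app volunteering its bulk-transfer socket for deprioritisation.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x20)   # CS1 / "low effort"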
In fact, if YouTube (and friends) did the same, it would actually *encourage* ISPs to enable proper QoS processing throughout their entire networks.
If applications (and protocols) inherently played nicely in the sandbox, your ISP would bend over backwards to guarantee a near-perfect service (mainly because it'd then be near-trivial to do).
And yes, I realise this raises the spectre of "Net Neutrality" - but seriously folks, how is that argument any different from "because of the terrorists" or "think of the children"?
ISPs applying QoS to traffic in order to guarantee quality is not inherently bad. The *bad*ness comes about because they will (yes, I said WILL, not MIGHT or COULD) use said QoS practices to push their own services and enforce their own policies (we hate P2P, ignore client QoS markings, etc., etc., etc.).
All those people who're frothing-at-the-mouth because QoS is BAD need a RABIES shot.
In an ideal world, we'd never need QoS. QoS is a congestion management mechanism. If you have no congestion, then you don't need to apply QoS techniques.
But until the day when we all have quantum-entangled communications processors with near-infinite effective bandwidth we're going to need QoS, somewhere.
Re: (Score:2)
Re:This is a good proposal (Score:5, Insightful)
1. Could that possibly be due to the processor demand on the CNN servers at peak times?
2. Don't certain companies like Blizzard force P2P patches onto their customers?
3. Is your 30 second video file just as important as a technician using torrents to download a Linux Distro to put on a server used for business they need up and running ASAP?
4. And lastly... Someone using a torrent shouldn't soak up an ISP's entire bandwidth... unless someone at CNN is using the web server to host torrents, but that's nothing you or your ISP can control.
Re: (Score:2, Informative)
Since when is it forced? I keep it turned off and download patches from filefront.
Re: (Score:2)
The proposal seems to be relying on the clients to mark their traffic appropriately.
So p2p apps will just start marking their own traffic as high weight and we're back to square one.
I don't think any proposal that involves trusting the end clients is going to work on the internet. There are just too many untrustworthy people around.
Re: (Score:3, Interesting)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
You ignore the fact that ISPs pay for transit of your traffic to other networks in some fashion. Even if your ISP is a Tier-1, they pay to maintain peering arrangements with other Tier-1 ISPs. Those arrangements are based on traffic balance. So even if you're running torrents in the middle of the night when the peering links are mostly unutilized, you're still potentially costing your ISP a lot of money.
The problem is economic, not technological. Nobody has figured out a way to fairly distribute the infrastr
Re: (Score:2)
This is because many bandwidth sharing methods out there are on a "per connection" basis. So if someone makes 20 connections they get 20x more than someone who makes one, everything else being the same.
The fair case would be everyone gets a fair share of bandwidth based on a per user basis. Now in _most_ cases this can be mapped to a "per IP" basis.
So even if some p2p person makes 100 connections, he only has one IP, and you have one IP,
Re:Flow bandwidth per connection, per host, per ap (Score:2)
Frankly, I think this article is a dirty trick. The author is talking about making the internet more "fair", but the ramification of his change is that ISPs will be able to charge more for "better" service if they want. In an attempt to make the network more fair, he could make it inherently unfair.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
TFA is about TCP congestion control, not about priority, and suffers from a fundamental misunderstanding of the effect that congestion control has on the network: None whatsoever. It is an algorithm which strives to improve the TCP link quality (!) for the communication endpoints of this connection. The reason why it appears that a P2P application can hog bandwidth is entirely on the local system. If an application sends more packets, more pa
Re: (Score:2)
Re: (Score:2)