Networking

Fixing the Unfairness of TCP Congestion Control

duncan99 writes "George Ou, Technical Director of ZDNet, has an analysis today of an engineering proposal to address congestion issues on the internet. It's an interesting read, with sections such as "The politicization of an engineering problem" and "Dismantling the dogma of flow rate fairness". Short and long term answers are suggested, along with some examples of what incentives it might take to get this to work. Whichever side of the neutrality debate you're on, this is worth consideration."
  • by thehickcoder ( 620326 ) * on Monday March 24, 2008 @10:44AM (#22844896) Homepage
    The author of this analysis seems to have missed the fact that each TCP session in a P2P application is communicating with a different network user and may not be experiencing the same congestion as other sessions. In most cases (those where the congestion is not on the first hop), it doesn't make sense to throttle all connections when one is affected by congestion.
    • Re: (Score:3, Informative)

      by Kjella ( 173770 )
      Well, I don't know about your Internet connection, but the only place I notice congestion is on the first few hops (and possibly the last few hops if we're talking about a single host and not P2P). Beyond that, on the big backbone links, I at least don't notice it, though I suppose it could be different at the remote end.
      • by smallfries ( 601545 ) on Monday March 24, 2008 @11:25AM (#22845282) Homepage
        Even if that is true, the congestion won't be correlated between your streams if it occurred on the final hops (and hence different final networks). But there is a more basic problem than the lack of correlation between congestion on separate streams: the ZDNet editor and the author of the proposal have no grasp of reality.

        Here's an alternative (but equally effective) way of reducing congestion: ask P2P users to download less. Because that is what this proposal amounts to. A voluntary measure to hammer your own bandwidth for the greater good of the network will not succeed. The idea that applications should have "fair" slices of the available bandwidth is ludicrous. What is fair about squeezing email and P2P into the same bandwidth profile?

        This seems to be a highly political issue in the US. Every ISP that I've used in the UK has taken the same approach: traffic shaping using QoS on the routers. Web, email, VoIP and almost everything else are "high priority"; P2P is low priority. This doesn't break P2P connections or reset them in the way that Verizon has done. But it means that streams belonging to P2P traffic will back off more because there is a higher rate of failure. It "solves" the problem without a crappy user-applied band-aid.

        It doesn't stop the problem that people will use as much bandwidth for P2P apps as they can get away with. This is not a technological problem, and there will never be a technological solution. The article has an implicit bias when it talks about users "exploiting congestion control" and "hogging network resources". Well, duh! That's why they have network connections in the first place. Why is the assumption that a good network is an empty network?

        All ISPs should be forced to sell their connections based on target utilisations. I.e., here is a 10Mb/s connection at 100:1 contention; we expect you to use 0.1Mb/s on average, or roughly 32GB a month. If you are below that then fine; if you go above it then you get hit with per-GB charges. The final point is the numbers: 10Mb/s is slow for the next-gen connections now being sold (24Mb/s in the UK in some areas), and 100:1 is a large contention ratio. So why shouldn't someone use that 32GB of traffic on that connection every month?
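A quick back-of-the-envelope check of that contention arithmetic (a toy calculation of my own, not from the article; the 50:1 ratio in the second example is just an assumed value):

```python
# Toy calculation: the monthly transfer implied by selling a link at a given
# contention ratio. Illustrative only.

def monthly_allowance_gb(link_mbps, contention, days=30):
    """Average rate under contention, integrated over a month, in gigabytes."""
    avg_mbps = link_mbps / contention        # e.g. 10 Mb/s at 100:1 -> 0.1 Mb/s
    seconds = days * 24 * 3600
    return avg_mbps * seconds / 8 / 1000     # megabits -> megabytes -> gigabytes

print(f"{monthly_allowance_gb(10, 100):.1f} GB")  # ~32.4 GB for 10 Mb/s at 100:1
print(f"{monthly_allowance_gb(24, 50):.1f} GB")   # ~155.5 GB for 24 Mb/s at an assumed 50:1
```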
        • by Shakrai ( 717556 )

          the ZDNet editor and the author of the proposal have no grasp of reality.

          Indeed. Here was my favorite bit: "They tell us that reining in bandwidth hogs is actually the ISP's way of killing the video distribution competition"

          And it's not? Recall the recent news about Time Warner's announcement -- 40GB as the highest tier they plan on offering. How could a tier so low have any other purpose besides killing online video distribution? 40GB in one month is almost achievable with ISDN -- technology that's 20 years old. Can we really not do any better than that in 2008?

          I.e., here is a 10Mb/s connection at 100:1 contention; we expect you to use 0.1Mb/s on average, or roughly 32GB a month. If you are below that then fine; if you go above it then you get hit with per-GB charges

          Sho

          • I have no objections to overusage fees (the same way cellphone plans work). After all, if you insist upon downloading 15,000-megabyte HD DVD or Blu-ray movies, why shouldn't you pay more than what I pay (as the type who prefers 250-meg Xvid rips)?

            15,000 versus 250 megabytes.
            You SHOULD pay more.
            Just like a cellphone.

            Some fools pay $90 a month in overage charges. I pay $5 a month and use my minutes sparingly. People who use more minutes/gigabytes should pay more than the rest of us pay. That's entirely fair and e
            • Re: (Score:2, Insightful)

              by Alarindris ( 1253418 )
              I would like to interject that just because cellphones have ridiculous plans doesn't mean the Internet should. On my land line I get unlimited usage plus unlimited long distance for a flat rate every month.

              It's just not a good comparison.
              • by electrictroy ( 912290 ) on Monday March 24, 2008 @01:31PM (#22847038)
                And I bet you pay a lot more for those "unlimited minutes" than the $5 a month I pay.

                Which is my point in a nutshell: people who want unlimited gigabytes should be paying a lot more than what I'm paying for my limited service ($15 a month). That's entirely and completely fair. Take more; pay more.

                Just like electrical service, cell service, water service, et cetera, et cetera.

                • Re: (Score:3, Insightful)

                  by Shakrai ( 717556 )

                  That's entirely and completely fair. Take more; pay more.

                  Claiming that something is "entirely and completely fair" while using the cellular industry as your example strains credibility just a tad.

                  There is nothing fair about the billing system used by the wireless industry. It's a holdover from the early 90s, when spectrum was limited and the underlying technology (AMPS) was grossly inefficient in its use of said spectrum. Modern technology is drastically more efficient at cramming more calls into the same amount of spectrum and the carriers have much more

              • by electrictroy ( 912290 ) on Monday March 24, 2008 @01:34PM (#22847068)
                P.S.

                There are some persons who think they should be able to download 1000 or even 10,000 times more data than what I download, and yet still pay the exact same amount of money.

                That's greed.

                If you want more, then you should pay more than what other people pay.
                • Re: (Score:3, Insightful)

                  by AuMatar ( 183847 )
                  No, it's expecting the ISP to live up to its side of the contract. If the contract is pay-per-gig, then the heavy downloaders will pay more. If the ISP sells an unlimited plan, it should be unlimited. Either way is fine, but they have to follow their agreement.
                  • by Percy_Blakeney ( 542178 ) on Monday March 24, 2008 @07:50PM (#22851456) Homepage

                    No, it's expecting the ISP to live up to its side of the contract... either way is fine, but they have to follow their agreement.

                    Are you saying that your ISP isn't living up to its contract with you? You don't need anything fancy to fix that -- just file a lawsuit. If they truly promised you unlimited bandwidth (as you interpret it), then you should easily win.

                    On the other hand, you might not completely understand your contract, and thus would take a serious beating in court. Either way, you need to accept the harsh reality that any ISP that offers broadband service (1+ Mbps) without transfer caps will go out of business within 2 years.

        • by Alsee ( 515537 ) on Monday March 24, 2008 @12:24PM (#22845974) Homepage
          All ISPs should be forced to sell their connections based on target utilisations. I.e., here is a 10Mb/s connection at 100:1 contention; we expect you to use 0.1Mb/s on average, or roughly 32GB a month. If you are below that then fine; if you go above it then you get hit with per-GB charges.

          The author of the article, George Ou, explains why he thinks you are stupid and evil for suggesting such a thing. [zdnet.com] Well, he doesn't actually use the word "stupid", and I don't think he actually uses the word "evil", but yeah, that is pretty much what he says.

          You see in Australia they have a variety of internet plans like that. And the one thing that all of the plans have in common is that they are crazy expensive. Obscenely expensive.

          So George Ou is right and you are wrong and stupid and evil, and the EFF is wrong and stupid and evil, and all network neutrality advocates are wrong and stupid and evil; you are all going to screw everyone over and force everyone to pay obscene ISP bills. If people don't side with George Ou, the enemy is going to make you get hit with a huge ISP bill.

          Ahhhh... except the reason Australian ISP bills are obscene might have something to do with the fact that there are a fairly small number of Australians spread out across an entire continent on the bumfuck other side of the planet from everyone else.

          Which might, just possibly MIGHT, mean that the crazy high Australian ISP rates kinda sorta have absolutely no valid connection to those sorts of usage-relevant ISP offerings.

          So that is why George Ou is right and why you are wrong and stupid and evil and why no one should listen to your stupid evil alternative. Listen to George Ou and vote No on network neutrality or else the Network Neutrality Nazis are gonna make you pay crazy high for internet access.

          -
          • Nice, you made me smile. I saw his earlier article on ZDNet when I clicked through his history to establish how fucked in the head he was. He came out high on the scale. The basic problem with his rant about metered access is that it's complete bollocks. A metered plan doesn't mean that you have an allowance of 0 bytes a month with a per-byte cost. Instead it can be a basic allowance with a price to exceed that. This is how all mobile phone contracts in the UK work to price the access resource. We also have th
            • by dgatwood ( 11270 )

              Metered access is a bad idea. It's just like cell phone charges: you use more than some limit one month and suddenly your $45 phone bill just went up to $200. That's the last thing I would want in an ISP, and indeed, I'm considering moving my home phone to VoIP so I don't have to pay overages on phone bills, either.

              I'm not someone who runs BitTorrent constantly; that's not why I'm opposed to metered access. I just want to know that my bill at the end of the month will be a certain amount, and I would

          • on the bumfuck other side of the planet from everyone else

            I believe that the prime-ministerial term is "ass-end of the world". A proud moment for all Australians =), although, Colin Carpenter made us prouder when he said that Melbourne was the Paris end of the ass-end of the world.
            • Colin Carpenter made us prouder when he said that Melbourne was the Paris end of the ass-end of the world.
              Would that be the Paris Hilton ass-end?
        • by Ed Avis ( 5917 )
          Indeed, on the face of it the proposal reminds me of range voting. If only each user would voluntarily agree to take less bandwidth than they're able to get, then the net would run more smoothly. But no P2Per would install the upgrade from Microsoft or from his Linux distribution to replace the fast, greedy TCP stack with a more ethical, caring-sharing one that makes his downloads slower.

          What I don't understand is why this concerns TCP at all. An ISP's job is surely to send and receive IP datagrams on be
        • I agree very strongly with this. You are correct, this amounts to users volunteering to throttle their own bandwidth, and it will never work.

          Another proposal would be for backbones and network interconnects to apply some sort of fairness discipline to traffic coming from the various networks. This would give ISPs incentive to throttle and prioritize appropriately. ISPs also need to modify their TOS to make it explicit that you have a burst bandwidth and a continuous bandwidth and that you cannot constan

      • If you're using Linux, which TCP congestion control algorithm are you using? Reno isn't very fair; if a single connection is congested beyond the first hop, you'll slow down the rest of your connections when the window slides to smaller units. Have you tried BIC, CUBIC, Veno, or any of the other nine or ten congestion control algorithms?

        You can change them on the fly by echoing the name into procfs, IIRC. Also, if you have the stomach for it, and two connections to the internet, you can load balance and/or stripe them using Linux Advanced Routing & Traffic Control [lartc.org] (mostly the ip(1) command). Very cool stuff if you want to route around a slow node or two at your ISP(s) (check out the multiple-path stuff).
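For reference, here is a small sketch of how one might inspect and switch congestion control algorithms on a recent Linux kernel. The sysctl paths and the TCP_CONGESTION socket option are standard on modern kernels; which algorithms actually appear depends on the modules your distribution loads.

```python
# Linux-only sketch: list the available congestion control algorithms and pick
# one for a single socket. Assumes a reasonably modern kernel and Python 3.6+.
import socket

with open("/proc/sys/net/ipv4/tcp_available_congestion_control") as f:
    print("available:", f.read().split())
with open("/proc/sys/net/ipv4/tcp_congestion_control") as f:
    print("system default:", f.read().strip())

# Per-socket override; no special privileges are needed for algorithms the
# kernel already exposes to unprivileged users.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")
algo = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
print("this socket now uses:", algo.rstrip(b"\x00").decode())
s.close()
```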
    • He also ignores the fact that a throttling mechanism is already built into every DSL/cable modem out there: the speed it's provisioned at. (Incidentally, that's also the only place to implement any sort of effective dynamic throttling; anywhere else and users will find a way around it.)

      If ISPs would just build their networks to handle the speeds they sell instead of running around with their hands in the air over the fact that the 'net has finally evolved to the point where there are reasons for an individu
      • >>>"If ISP's would just build their networks to handle the speeds they sell"

        Or better yet, advertise the connection realistically. I.e., if your network can't handle half your users doing 10-megabit video downloads, then sell them 1-megabit lines instead. Downsize the marketing to reflect actual performance capability.

        • Re: (Score:3, Insightful)

          by dgatwood ( 11270 )

          But then they couldn't advertise that they are 10x the speed of dialup because they'd all probably be slower if they had to assume more than a few percent utilization.... :-)

    • by Mike McTernan ( 260224 ) on Monday March 24, 2008 @12:12PM (#22845810)

      Right. The article seems to be written on the assumption that the bandwidth bottleneck is always in the first few hops, within the ISP. And in many cases for home users this is probably reasonably true; ISPs have been selling cheap packages with 'unlimited' and fast connections on the assumption that people would use a fraction of the possible bandwidth. More fool the ISPs that people found a use [plus.net] for all that bandwidth they were promised.

      Obviously AIMD isn't going to fix this situation - it's not designed to. Similarly, all computers won't be updated in any reasonable timeframe (especially as a P2P user may have little motivation to 'upgrade' to receive slower downloads). Still, since we're assuming the bottleneck is in the first hops, it follows that the congestion is in the ISP's managed network. I don't see why the ISP can't therefore tag and shape traffic so that their routers equally divide available bandwidth between each user, not each TCP stream. In fact, most ISPs will give each home subscriber only one IP address at any point in time, so it should be easy to relate a TCP stream (or an IP packet type) to a subscriber. While elements of the physical network are always shared [plus.net], each user can still be given a logical connection with guaranteed bandwidth dimensions. This isn't a new concept either; it's just multiplexing with a suitable scheduler, such as rate-monotonic (you get some predefined amount) or round-robin (you get some fraction of the available amount; a toy round-robin sketch follows at the end of this comment).

      Such 'technology' could be rolled out by ISPs according to their roadmaps (although here in the UK it may require convincing BT Wholesale to update some of their infrastructure), and without requiring all users to upgrade their software or make any changes. However, I suspect this is where "the politicization of an engineering problem" occurs, because ISPs would rather do anything but admit they made a mistake in their previous marketing, raise subscriber prices, or make the investment to correctly prioritise traffic on a per-user basis, basically knocking contention rates right down to 1:1. It's much easier to simply ban or throttle P2P applications wholesale and blame high-bandwidth applications.

      I have little sympathy for ISPs right now; the solution should be within their grasp.
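As a rough illustration of the per-subscriber scheduling suggested in the comment above, here is a toy deficit-round-robin sketch (my own; the class name, quantum, and addresses are invented, and a real shaper would live in the ISP's router or DSLAM, not in Python):

```python
# Toy deficit round-robin scheduler that divides link capacity per subscriber,
# not per TCP stream. Purely illustrative.
from collections import deque

class DRRScheduler:
    def __init__(self, quantum_bytes=1500):
        self.quantum = quantum_bytes
        self.queues = {}    # subscriber_ip -> deque of packet sizes (bytes)
        self.deficit = {}   # subscriber_ip -> accumulated byte credit

    def enqueue(self, subscriber_ip, packet_len):
        self.queues.setdefault(subscriber_ip, deque()).append(packet_len)
        self.deficit.setdefault(subscriber_ip, 0)

    def dequeue_round(self):
        """One round: each backlogged subscriber may send up to its quantum."""
        sent = []
        for ip, q in self.queues.items():
            if not q:
                continue
            self.deficit[ip] += self.quantum
            while q and q[0] <= self.deficit[ip]:
                pkt = q.popleft()
                self.deficit[ip] -= pkt
                sent.append((ip, pkt))
        return sent

sched = DRRScheduler()
for _ in range(50):                 # subscriber A queues 50 streams' worth of packets
    sched.enqueue("10.0.0.1", 1500)
sched.enqueue("10.0.0.2", 1500)     # subscriber B queues a single packet
print(sched.dequeue_round())        # each backlogged subscriber drains one quantum per round
```

However many packets one subscriber queues, each backlogged subscriber drains at most a quantum per round, so the link divides per user rather than per TCP stream.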

  • by esocid ( 946821 ) on Monday March 24, 2008 @10:50AM (#22844956) Journal

    Under a weighted TCP implementation, both users get the same amount of bandwidth regardless of how many TCP streams each user opens...Background P2P applications like BitTorrent will experience a more drastic but shorter-duration cut in throughput but the overall time it takes to complete the transfer is unchanged.
    I am all for a change in the protocols as long as it helps everybody. The ISPs win, and so do the customers. As long as the ISPs don't continue to complain and forge packets to BT users, I would see an upgrade to TCP as a solution to what is going on with the neutrality issues, along with an upgrade to fiber-optic networks so the US is on par with everyone else.
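To make the quoted idea concrete, here is a toy weighted-AIMD model. This is my own sketch, not the algorithm from the actual proposal; the capacity, weights, and the way the weight scales the backoff are all illustrative assumptions.

```python
# Toy model of "weighted TCP": every flow still does AIMD, but the
# multiplicative decrease is scaled by 1/weight, so one high-weight stream is
# no longer drowned out by another user's many streams. Illustrative only.

def simulate(flows, capacity=100.0, rounds=5000):
    """flows: list of (user, weight); returns time-averaged bandwidth per user."""
    cwnd = [1.0] * len(flows)
    share = {}
    for _ in range(rounds):
        if sum(cwnd) > capacity:                       # congestion event: back off
            cwnd = [c * (1 - 0.5 / w) for c, (_, w) in zip(cwnd, flows)]
        else:                                          # additive increase
            cwnd = [c + 1.0 for c in cwnd]
        for c, (user, _) in zip(cwnd, flows):
            share[user] = share.get(user, 0.0) + c / rounds
    return share

plain    = [("interactive", 1)]  + [("p2p", 1)] * 10   # everyone weight 1
weighted = [("interactive", 10)] + [("p2p", 1)] * 10   # single stream carries weight 10
print(simulate(plain))      # ten streams grab roughly ten times the share
print(simulate(weighted))   # per-user aggregates come out roughly comparable
```

With plain TCP the ten-stream user ends up with roughly ten times the single stream's aggregate; with the weighted backoff the two users come out roughly comparable (the exact split depends on how the weight enters the control law).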
    • Re: (Score:3, Interesting)

      by cromar ( 1103585 )
      I have to agree with you. There is ever more traffic on the internet and we are going to have to look for ways to let everyone have a fair share of the bandwidth (and get a hella lot more of the stuff). Also, this sort of approach to bandwidth control would probably make it more feasible to get really good speeds at off-peak times. If the ISPs would do this, they could conceivably raise the overall amount of bandwidth and not worry about one user hogging it all when others need it.

      On the internet as
    • by nweaver ( 113078 ) on Monday March 24, 2008 @11:29AM (#22845328) Homepage
      There have been plenty of lessons, Japan most recently, that upping the available capacity simply ups the amount of bulk-data P2P, without helping the other flows nearly as much.

    • by Sancho ( 17056 )
      Don't ISPs tend to pay by the amount of traffic (rather than just a connection fee, as most of their users pay)? This solution seems to be looking at the problem from the perspective that P2P users are harming bandwidth for casual users, instead of simply costing the ISPs more money due to the increased amount of data they're pushing through their pipes.
    • by jd ( 1658 )
      Fortunately, there are plenty of software mechanisms already around to solve part of the problem. Unfortunately, very few have been tested outside of small labs or notebooks. We have no practical means of knowing what the different QoS strategies would mean in a real-world network. The sooner Linux and the *BSDs can include those not already provided, the better. We can then - and only then - get an idea of what it is that needs fixing. (Linux has a multitude of TCP congestion control algorithms, plus WEB10
  • by StCredZero ( 169093 ) on Monday March 24, 2008 @10:50AM (#22844968)
    A New Way to Look at Networking [google.com] is a Google Tech Talk [google.com]. It's about an hour long, but there's a lot of very good and fascinating historical information, which sets the groundwork for this guy's proposal. Van Jacobson was around in the early days when TCP/IP was being invented. He's proposing a new protocol layered on top of TCP/IP that can turn the Internet into a true broadcast medium -- one that is even more proof against censorship than the current one!
  • Neutrality debate? (Score:2, Insightful)

    by Anonymous Coward

    Whichever side of the neutrality debate you're on, this is worth consideration.

    There is a debate? I thought it was more like a few monied interests decided "there is a recognized correct way to handle this issue; I just make more profit and have more control if I ignore that." That's not the same thing as a debate.
  • by spydum ( 828400 ) on Monday March 24, 2008 @10:56AM (#22845018)
    For what it's worth, Net Neutrality IS a political fight; P2P is not the cause, just the straw that broke the camel's back. Fixing the fairness problem of TCP flow control will not make Net Neutrality go away. Nice fix, though; too bad getting people to adopt it would be a nightmare. Where was this suggestion 15 years ago?
  • by Chris Snook ( 872473 ) on Monday March 24, 2008 @10:58AM (#22845024)
    Weighted TCP is a great idea. That doesn't change the fact that net neutrality is a good thing, or that traffic shaping is a better fix for network congestion than forging RST packets.

    The author of this article is clearly exploiting the novelty of a technological idea to promote his slightly related political agenda, and that's deplorable.
    • Re: (Score:3, Informative)

      by Sancho ( 17056 )
      The problem with traffic shaping is that eventually, once everyone starts encrypting their data and using recognized ports (like 443) to pass non-standard traffic, you've got to start shaping just about everything. Shaping only works as long as you can recognize and classify the data.

      Most people should be encrypting a large chunk of what goes across the Internet. Anything which sends a password or a session cookie should be encrypted. That's going to be fairly hard on traffic shapers.
      • by irc.goatse.cx troll ( 593289 ) on Monday March 24, 2008 @12:28PM (#22846024) Journal

        Shaping only works as long as you can recognize and classify the data.


        Not entirely true. It works better the more you know about your data, but even knowing nothing you can get good results with a simple rule of prioritizing small packets.

        My original QoS setup was just a simple rule that anything small gets priority over anything large. This is enough to make (most) VoIP, games, SSH, and anything else that consists of lots of small real-time packets get through ahead of queues full of large packets (bulk transfers). (A minimal sketch of this rule appears after this comment.)

        Admittedly BitTorrent was what hurt my original setup, as you end up with a lot of slow peers each trickling transfers in slowly. You could get around this with a hard limit on overall packet rate, or with connection tracking and limiting the number of IPs you hold a connection with per second (and then blocking things like UDP and ICMP).

        Yeah, it's an ugly solution, but we're all the ISP's bitch anyway, so they can do what they want.
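A minimal sketch of the small-packets-first rule described in this comment; the 256-byte threshold and the class names are my own guesses, not anything the poster specified:

```python
# Two-band priority queue: always drain small (interactive-looking) packets
# before large ones. Illustrative only.
from collections import deque

SMALL = 256  # bytes; VoIP, game, and SSH packets tend to sit well under this

class TwoBandQueue:
    def __init__(self):
        self.bands = {"small": deque(), "bulk": deque()}

    def enqueue(self, packet_len, payload):
        band = "small" if packet_len <= SMALL else "bulk"
        self.bands[band].append((packet_len, payload))

    def dequeue(self):
        for band in ("small", "bulk"):        # strict priority: small first
            if self.bands[band]:
                return self.bands[band].popleft()
        return None

q = TwoBandQueue()
q.enqueue(1500, "bittorrent chunk")
q.enqueue(80, "voip frame")
print(q.dequeue())   # (80, 'voip frame') leaves first despite arriving second
```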
        • by Sancho ( 17056 )
          That's a fair point. Of course, the small-packet rule is not likely to help ISPs trying to reduce P2P, but there could be other solutions. And in those cases, you run a high risk of unintended consequences.
    • The poster above is not a troll on this matter. The issue is that the article points at TCP being "exploited" by BitTorrent, and people have failed to look at how biased and full of false information the graphs are.

      There's a graph that shows a BitTorrent user as the highest-bandwidth user over a day and then puts a YouTube surfer and a web surfer on the same bandwidth level as an Xbox gamer, and things of that nature. Those are so far off from each other that it is despicable.

      Every one of the ones I mentioned in the previous paragrap
      • There is no way in the world that even web surfing alone can use only 0.1 kbps of upstream, as that would be 40 times slower than dialup.
        Are you confusing upstream and downstream? The upstream for an HTTP GET request consists of HTTP headers and TCP ACKs. It's also bursty, meaning that a bunch of it happens as the HTML finishes loading and the images start loading, but then nothing until the user navigates away from the page he is reading.
        • I did mix them up for web browsing, but gaming and web surfing are very much not in the same category. Take, for example, Counter-Strike, which averages about 20-40KB/s upstream, up to about 60-80KB/s. Does that equal 0.1?

          Yes I recognize downstream is far greater and didn't mean to misrepresent what I was stating. Thank you for clarifying.
          • I did mix them up for web browsing, but gaming and web surfing are very much not in the same category.

            You're right. For online games with real-time interaction, Page 2 of the article [zdnet.com] has the table. Perhaps by "online gaming", someone meant playing Flash/Java/JS games, activating Steam games, or playing on Xbox Live Silver (XBLA games, achievements, etc). Those have a similar bandwidth profile to HTTP transactions.

    • by Hatta ( 162192 )
      The parent is no troll; the author's political motivation is obvious from statements like:

      While the network isn't completely melting down, it's completely unfair because fewer than 10% of all Internet users using P2P hogs roughly 75% of all network traffic at the expense of all other Internet users.

      Duh, higher bandwidth applications take more bandwidth. Expecting parity between low bandwidth and high bandwidth applications is fundamentally biased against high bandwidth applications. If I'm an IRC user, and you

    • I have to concur. The article is laced with derisive comments against the EFF and the like for coming down on Comcast so hard for its throttling practices. There's something inherently defective in the TCP standard; I believe this now, after reading the article. However, that doesn't mean that forging packets is _fair practice_, or an acceptable engineering solution.
      Yes, there's an engineering problem to solve. No, you aren't allowed to violate the Terms of Service to solve it.
  • Wag their fingers? (Score:2, Insightful)

    by rastilin ( 752802 )
    How do they get off saying THAT line? By their own admission, the P2P apps simply TRANSFER MORE DATA; it's not an issue of congestion control if one guy uploads 500KB/day and another uses 500MB in the same period. Hell, you could up-prioritize all HTTP uploads and most P2P uploaders wouldn't care or notice. The issue with Comcast is that instead of prioritizing HTTP, they're dropping BitTorrent. There's a big difference between taking a small speed bump on non-critical protocols for the benefit of the network and
  • because the Internet is a group of autonomous systems (hence the identifier "ASN") agreeing to exchange traffic for as long as it makes sense for them to do so. There is no central Internet "authority" (despite what Dept of Commerce, NetSol, Congress and others keep trying to assert) - your rules end at the edge of my network. Your choices are to exchange traffic with me, or not, but you don't get to tell me how to run things (modulo the usual civil and criminal codes regarding the four horsemen of the information apocalypse). Advocates of network neutrality legislation would clearly like to have some add'l regulatory framework in place to provide a stronger encouragement to "good behavior" (as set out in the RFCs and in the early history of internetworks and the hacking community) than the market provides in some cases. It remains to be seen whether the benefits provided by that framework would at all outweigh the inevitable loopholes, unintended consequences and general heavy-handed cluelessness that's been the hallmark of any federal technology legislation.

    Those networks that show consistently boorish behavior to other networks eventually find themselves isolated or losing customers (e.g. Cogent, although somehow they still manage to retain some business - doubtless due to the fact that they're the cheapest transit you can scrape by with in most cases, although anybody who relies on them is inevitably sorry).

    The Internet will be a democracy when every part of the network is funded, built and maintained by the general public. Until then, it's a loose confederation of independent networks who cooperate when it makes sense to do so. Fortunately, the exceedingly wise folks that wrote the protocols that made these networks possible did so in a manner that encourages interconnectivity (and most large networks tend to be operated by folks with similar clue - when they're not, see the previous paragraph).

    Not everything can be (or even should be) a democracy. Now get off my lawn, you damn hippies.
    • Not everything can be (or even should be) a democracy. Now get off my lawn, you damn hippies.


      Dad? Is that you?

    • by Sancho ( 17056 )
      All of that is perfectly reasonable as long as there are alternatives that customers can choose. When it's a content provider, it's not hard to switch to a new ISP. When it's an end-user, it can be quite hard to switch to a new ISP (in some cases, there just aren't other choices--there are plenty of areas where there is a monopoly on broadband.)

      The government's own actions to help secure that monopoly are part of the problem. Cable providers don't have to share their lines with competitors, despite havin
      • by darkuncle ( 4925 )
        for the record, I completely agree with the points you made:

        * gov't regulation (or lack thereof), combined with a woeful lack of due diligence in ensuring taxpayer investment sees a decent return (the POTS system was almost entirely subsidized by taxpayer dollars, and we're still paying for that initial investment in the form of surcharges and taxes on copper laid a hundred years ago in some cases, with further technological deployments (e.g. FTTP) coming late or not at all, and always with grudging complai
        • by Sancho ( 17056 )

          I think the feds have been entirely too chummy with Ma Bell (and the cablecos, and BigCorp in general) for the last several decades. However, I'm very skeptical that the answer to poor federal legislation is additional federal legislation.

          I think that the answer to poor federal legislation would be good federal legislation, but you're right that that's probably wishful thinking these days.

          I guess the cure that I'd like to see is a requirement that the line owners share their lines with competitors. In this way, at least competition has a shot at fixing the problem. We could examine alternatives later on, if that failed.

  • TCP's fairness attempt (it's not perfect, even so) is fairness among flows. But what people desire is fairness among users.

    The problem, however, is that the fairness is an externality. You COULD build a BitTorrent-type client which monitors congestion and applies AIMD-style fairness across all of its flows when it is clear that the congestion is common to the streams rather than at the far ends.

    But there is no incentive to do so! Unless everyone else did, your "fair" P2P protocol gets stomped on like any
  • When you get to his actual proposal, he says that it's up to the application to send a message to the new TCP stack that says "Hey, I'm a good app, gimme bandwidth"? At least, that's how I read it.

    I don't think I could walk to the kitchen and get a beer faster than it would take P2P authors to exploit that.
  • by Sir.Cracked ( 140212 ) on Monday March 24, 2008 @11:18AM (#22845200) Homepage
    This article is all well and good, but it fails to recognize that there are two types of packet discrimination being kicked around: protocol filtering/prioritization, and source/destination filtering/prioritization. There are certainly good and bad ways of doing the former, and some of the bad ways are really bad (for a "for instance", see Comcast). However, the basic concept, that network bandwidth is finite over a set period of time and that this finite resource must be utilized efficiently, is not one most geek types will disagree with you on. Smart treatment of packets is something few object to.

    What draws the larger objection is source/destination filtering: I'm going to downgrade service on packets coming from Google Video because they haven't paid our "upgrade" tax, and coincidentally, we're invested in YouTube. This is an entirely different issue, and it is not an engineering issue at all. It is entirely political. We know it is technologically possible. People block sites today, for parental-censorship reasons among others. It would be little challenge, as an engineer, to set an arbitrary set of packets from a source address to a VERY low priority. This, however, violates what the internet is for, and in the end, if my ISP is doing this, am I really connected to the "Internet", or just to a dispersed corporate net, similar to the old AOL?

    This is, and will be, a political question, and if it goes the wrong way, it will destroy what is fundamentally interesting about the net: the ability, with one connection, to talk to anyone else, anywhere in the world, no differently than if they were in the next town over.

  • The unfairness problem of transport protocols cannot be fixed in today's Internet, because it requires cooperation from the other nodes that forward your data, and those nodes could be anywhere in the world.
    Specifically for TCP, one can just hack the OS kernel and force TCP to ignore all the congestion notifications, thus hogging all the bandwidth... (it's not that difficult).
  • by Ancient_Hacker ( 751168 ) on Monday March 24, 2008 @11:22AM (#22845244)
    Lots of WTF's in TFA:
    • Expecting the client end to back off is a losing strategy. I can write over NETSOCK.DLL, you know.
    • The results of a straw poll at an IETF confab are not particularly convincing.
    • Expecting ISPs to do anything rational is a bit optimistic.
    • It's not a technical or a political problem; it's an economic one. If users paid per packet, the problem would go away overnight.
  • Removing latency from VoIP packets at the expense of FTP is QoS. In general it's quite a good idea, and it improves service.

    Adding latency only to $foocorp (where $foocorp != $isp) so $isp can get more money violates net neutrality. This is a very bad idea, and of borderline legality, since the customer has already paid.
  • Confusing... (Score:4, Insightful)

    by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Monday March 24, 2008 @11:26AM (#22845294) Journal

    I need coffee before I'll really understand this, but here's a first attempt:

    Despite the undeniable truth that Jacobson's TCP congestion avoidance algorithm is fundamentally broken, many academics and now Net Neutrality activists along with their lawyers cling to it as if it were somehow holy and sacred. Groups like the Free Press and Vuze (a company that relies on P2P) file FCC complaints against ISPs (Internet Service Providers) like Comcast that try to mitigate the damage caused by bandwidth-hogging P2P applications by throttling P2P.

    Ok, first of all, that isn't about TCP congestion avoidance, at least not directly. (Doesn't Skype use UDP, anyway?)

    But the problem here, I think, is that George Ou is assuming that Comcast is deliberately targeting P2P, and moreover, that they have no choice but to deliberately target P2P. I'd assumed that they were simply targeting any application that uses too many TCP connections -- thus, BitTorrent can still work, and still be reasonably fast, by decreasing the number of connections. Make too many connections and Comcast starts dropping them, no matter what the protocol.

    They tell us that P2P isn't really a bandwidth hog and that P2P users are merely operating within their contracted peak bitrates. Never mind the fact that no network can ever support continuous peak throughput for anyone and that resources are always shared, they tell us to just throw more money and bandwidth at the problem.

    Well, where is our money going each month?

    But more importantly, the trick here is that no ISP guarantees any peak bitrate, or average bitrate. Very few ISPs even tell you how much bandwidth you are allowed to use, but most reserve the right to terminate service for any reason, including "too much" bandwidth. Comcast tells you how much bandwidth you may use, in units of songs, videos, etc, rather than bits or bytes -- kind of insulting, isn't it?

    I would be much happier if ISPs were required to disclose, straight up, how much total bandwidth they have (up and down), distributed among how many customers. Or, at least, full disclosure of how much bandwidth I may use as a customer. Otherwise, I'm going to continue to assume that I may use as much bandwidth as I want.

    But despite all the political rhetoric, the reality is that the ISPs are merely using the cheapest and most practical tools available to them to achieve a little more fairness and that this is really an engineering problem.

    Yes, it is a tricky engineering problem. But it's also a political one, as any engineering solution would have to benefit everyone, and not single out individual users or protocols. Most solutions I've seen that accomplish this also create a central point of control, which makes them suspect -- who gets to choose what protocols and usage patterns are "fair"?

    Under a weighted TCP implementation, both users get the same amount of bandwidth regardless of how many TCP streams each user opens. This is accomplished by the single-stream application tagging its TCP stream at a higher weight than a multi-stream application. TCP streams with higher weight values won't be slowed as much by the weighted TCP stack, whereas TCP streams with smaller weight values will be slowed more drastically.

    Alright. But as I understand it, this is a client-side implementation. How do you enforce it?

    At first glance, one might wonder what might prompt a P2P user to unilaterally and voluntarily disarm his or her multi-stream and persistence cheat advantage by installing a newer TCP implementation.

    Nope. What I wonder is why a P2P user might want to do that, rather than install a different TCP implementation -- one which tags every single TCP connection as "weighted".

    Oh, and who gets to tag a connection -- the source, or the destination? Remember that on average, some half of th

    • Re: (Score:3, Insightful)

      by Alsee ( 515537 )
      But the problem here, I think, is that George Ou is assuming that Comcast is deliberately targeting P2P, and moreover, that they have no choice but to deliberately target P2P. I'd assumed that they were simply targeting any application that uses too many TCP connections

      No, Comcast is specifically examining your data and is specifically forging packets to kill P2P connections.

      (a) George Ou is a corporate shill; and
      (b) George Ou considers BitTorrent and all P2P teh evilz of teh piratez.

      So his position is that
  • FUD (Score:5, Insightful)

    by Detritus ( 11846 ) on Monday March 24, 2008 @11:27AM (#22845298) Homepage
    The whole article is disingenuous. What he is describing are not "loopholes" being cynically exploited by those evil, and soon to be illegal, P2P applications. They are the intended behavior of the protocol stack. Are P2P applications gaming the system by opening multiple streams between each pair of endpoints? No. While we could have a legitimate debate on what is fair behavior, he poisons the whole issue by using it as a vehicle for his anti-P2P agenda.
    • by Kjella ( 173770 )

      Are P2P applications gaming the system by opening multiple streams between each pair of endpoints? No.

      What about download accelerators? On a congested server, I've seen a near-linear increase in bandwidth by opening multiple streams (which many servers now limit, but that's not really the point). When I go from 25kb/s to 100kb/s, I took that bandwidth from someone. Same with some slow international connections where there's plenty on both ends but crap in the middle. I would honestly say I'm gaming the system then. P2P has a "natural" large number of streams because it has so many peers, but there's no de

      • Re: (Score:3, Informative)

        by asuffield ( 111848 )

        What about download accelerators? On a congested server, I've seen a near-linear increase in bandwidth by opening multiple streams (which many servers now limit, but that's not really the point). When I go from 25kb/s to 100kb/s, I took that bandwidth from someone.

        You're making the same mistake as the author of that article. What you fail to realise is precisely why the single connection did not operate as fast: because your kernel was slowing it down incorrectly. You are not fighting other users by opening

    • And this is exactly the problem we're going to keep running into: people like this want the Internet to return to a simplistic, centrally controlled few-producers/many-consumers model, rather than the distributed P2P model it's rapidly moving towards. P2P might be mostly about questionably legal content distribution now, but the technology is going to be used for more and more "legitimate" purposes in years to come... if ISPs and "old media" advocates don't manage to kill it first.
  • ...is that it expects every client to play nice. Ideas like "I could imagine a fairly simple solution where an ISP would cut the broadband connection rate eight times for any P2P user using the older TCP stack to exploit the multi-stream or persistence loophole" are such a major WTF that I can't begin to describe it. If they wanted to control it, there should be a congestion control scheme where packets are tagged with a custom ID set by the incoming port on the ISP's router. So that if you have 5 TCP streams coming in
  • Figure out the real committable bandwidth (available bandwidth / customer connections). Then, tag that amount of customer information coming into the network with a priority tag. Customers may prioritize what they want, and it will be respected up to the limit.

    Example: a 1000k connection shared between 100 people who each have 100k pipes. They get a committed 10k. The first 10k of packets per second that are unmarked are marked "priority." Packets marked "low" are passed as low. Packets marked "high" or
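A rough sketch of that committed-rate marking scheme. The numbers follow the example above; the one-second refill window and the class and field names are my own assumptions.

```python
# Mark each customer's first committed_kbps worth of traffic per second as
# "priority"; everything beyond that is best-effort. Illustrative only.
import time

class CommittedRateMarker:
    def __init__(self, link_kbps=1000, customers=100):
        self.committed_kbps = link_kbps / customers     # e.g. 10 kb/s per customer
        self.tokens = {}                                # customer -> bits left this second
        self.window_start = time.monotonic()

    def mark(self, customer, packet_bits):
        now = time.monotonic()
        if now - self.window_start >= 1.0:              # refill every second
            self.tokens.clear()
            self.window_start = now
        left = self.tokens.get(customer, self.committed_kbps * 1000)
        if packet_bits <= left:
            self.tokens[customer] = left - packet_bits
            return "priority"
        return "best-effort"

marker = CommittedRateMarker()
print(marker.mark("cust-1", 8000))    # within the committed 10 kb this second -> "priority"
print(marker.mark("cust-1", 8000))    # exceeds it -> "best-effort"
```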

  • by vrmlguy ( 120854 ) <samwyse@@@gmail...com> on Monday March 24, 2008 @12:13PM (#22845818) Homepage Journal

    Simply by opening up 10 to 100 TCP streams, P2P applications can grab 10 to 100 times more bandwidth than a traditional single-stream application under a congested Internet link. [...] The other major loophole in Jacobson's algorithm is the persistence advantage of P2P applications where P2P applications can get another order of magnitude advantage by continuously using the network 24×7.
    I agree with the first point, but not with the second. One of the whole points of having a computer is that it can do things unattended. Fortunately, the proposal seems to only fix the first issue.

    I'd think that a simple fix to Jacobson's algorithm could help a lot. Instead of resetting the transmission rate on just one connection when a packet is dropped, reset all of them. This would have no effect on anyone using a single stream, and would eliminate problems when the source of the congestion is nearby. Variations on this theme would include resetting all connections for a single process or process group, which would throttle my P2P without affecting my browser. This alone would be more than enough incentive for me to adopt the patch: instead of having to schedule different bandwidth limits during the day, I could just let everything flow at full speed 24x7. And by putting the patch into the kernel, you'd worry less about whether individual applications and/or users decide to adopt it.
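A sketch of what that coupled backoff might look like if flows were grouped, say, per process group. This is my own illustration; a real version would sit inside the kernel's TCP stack, not in application code.

```python
# All flows in one group share congestion signals: a loss on any one of them
# backs off every flow in the group, so opening more flows buys no extra share.

class CoupledGroup:
    def __init__(self):
        self.cwnd = {}                      # flow_id -> congestion window (segments)

    def open_flow(self, flow_id, initial=10.0):
        self.cwnd[flow_id] = initial

    def on_ack(self, flow_id):
        self.cwnd[flow_id] += 1.0 / self.cwnd[flow_id]   # rough congestion-avoidance growth

    def on_loss(self, flow_id):
        # Classic TCP would halve only flow_id; here the whole group backs off.
        for fid in self.cwnd:
            self.cwnd[fid] *= 0.5

p2p = CoupledGroup()
for i in range(5):
    p2p.open_flow(f"peer-{i}")
p2p.on_loss("peer-3")
print(p2p.cwnd)     # every peer connection halved, not just peer-3
```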
  • Does anyone remember reading about a scheme for turning the usual QoS technique upside down?

    That is, instead of marking the packets you really care about (VoIP packets, say) as high priority, you mark the ones you don't care that much about (BitTorrent downloads) as low priority?

    I recall reading about low priority marks having interesting advantages over high priority marks. It had to do with the high priority marks relying on perverse incentives (almost all routers would have to play by the rules and the more th
  • by Percy_Blakeney ( 542178 ) on Monday March 24, 2008 @12:32PM (#22846090) Homepage

    Here is the glaring flaw in his proposal:

    That means the client side implementation of TCP that hasn't fundamentally changed since 1987 will have to be changed again and users will need to update their TCP stack.

    So he wants everyone, especially P2P users, to voluntarily update their TCP stack? Why in the world would a P2P user do that, when they know that (a) older stacks would be supported forever, and (b) a new stack would slow down their transfer rates? He does mention this problem:

    At first glance, one might wonder what might prompt a P2P user to unilaterally and voluntarily disarm his or her multi-stream and persistence "cheat" advantage by installing a newer TCP implementation... I could imagine a fairly simple solution where an ISP would cut the broadband connection rate eight times for any P2P user using the older TCP stack to exploit the multi-stream or persistence loophole.

    There are two issues with this solution:

    1. How would the ISP distinguish between a network running NAT and a single user running P2P?
    2. If you can reliably detect "cheaters", why do you need to update the users' TCP stacks? You would just throttle the cheaters and be done with it.

    It's nice that he wants to find a solution to the P2P bandwidth problem, but this is not it.

    • by vrmlguy ( 120854 )

      So he wants everyone, especially P2P users, to voluntarily update their TCP stack? Why in the world would a P2P user do that, when they know that (a) older stacks would be supported forever, and (b) a new stack would slow down their transfer rates?

      I'm sure that if Microsoft pushed an update, it would handle more than half of the P2P community. Over time, when the successor to Vista arrived, you wouldn't have an older stack to fall back upon. Linux might be a bit harder, since old stacks could still float around forever, but there's nothing stopping anyone today from running a stack that has the Jacobson code disabled.

      Instead of throttling based on per-host, though, I'd do it per process or process group. Right now, every P2P app that

  • by clare-ents ( 153285 ) on Monday March 24, 2008 @12:35PM (#22846128) Homepage
    In the UK, bandwidth out of BT's ADSL network costs ~ £70/Mbit/month wholesale. Consumer DSL costs ~ £20/month.

    You've got three options,

    #1 Have an uncapped uncontended link for the £20/month you pay - you'll get about 250kbps.

    #2 Have a fast link with a low bandwidth cap - think 8Mbits with a 50GB cap and chargeable bandwidth after that at around ~ 50p-£1/GB

    #3 Deal with an ISP who's selling bandwidth they don't have and expect them to try as hard as possible to make #1 look like #2 with no overage charges.

    If you want a reliable, fast internet connection, you want to go with a company that advertises #2. If you can't afford #2, you can spend your time working against the techs at ISP #3, but expect them to go out of their way to make your life shit until you take your service elsewhere, because you cost them money.

  • If you have a basic understanding of TCP and reasonable C skills, it is not at all hard to make your kernel play unfair, and it can really make a big difference to your transmission rates, assuming you have a reliable connection. I sometimes wonder how many people out there have an unfair kernel like I do.
  • by UttBuggly ( 871776 ) on Monday March 24, 2008 @12:45PM (#22846280)
    WARNING ~! Core dump follows.

    It occurred to me this morning that driving on public roadways and surfing the public networks were identical experiences for the vast majority of people. That experience being; "mine, mine, ALL MINE!....hahahaha!" AKA "screw you...it's all about me!"

    Now, I have the joy of managing a global network with links to 150 countries AND a 30 mile one way commute. So, I get to see, in microcosm, how the average citizen behaves in both instances.

    From a network perspective, charge by usage...period. Fairness only works in FAIRy tales.

    We do very good traffic shaping and management across the world. QoS policies are very well designed and work. The end user locations do get charged an allocation for their network costs. So, you'd think the WAN would run nicely and fairly. After all, if the POS systems are impacted, we don't make money and that affects everyone, right?

    Hardly. While we block obvious stuff like YouTube and Myspace, we have "smart" users who abuse the privilege. So, when we get a ticket about "poor network performance", we go back to a point before the problem report and look at the flows. 99 out of 100 times, it's one or more users hogging the pipe with their own agenda. Now, the branch manager gets a detailed report of what the employees were doing and how much it cost them. Of course, porn surfers get fired immediately. Abusers of the privilege just get to wonder what year they'll see a merit increase, if at all.

    So, even with very robust network tuning and traffic shaping, the "me, me" crowd will still screw everybody else...and be proud that they did. Die a miserable death in prison you ignorant pieces of shit.

    Likewise the flaming assholes I compete with on the concrete-and-asphalt network link between home and office every day. This morning, some idiot in a subcompact stuck herself about 2 feet from my rear bumper...at 70mph. If I apply ANY braking for ANY reason, this woman will collide with me. So, I tapped the brakes so she'd back off. She backed off with an upraised hand that seemed to say "yeah, I know I was in the wrong and being unsafe." She then performed 9 lane changes, all without signaling once, and managed to gain....wait for it.... a whole SEVEN SECONDS over 10 miles of driving.

    I see it every day. People driving with little regard for anyone else and raising the costs for the rest of us. On the network, or on the highway, same deal. And they feel like they did something worthwhile. I've talked to many users at work and the VAST majority are not only unapologetic, but actually SMUG. Many times, I'll get the "I do this at home, so it must be okay at work". To which I say, "well you cannot beat your wife and molest your kids at the office, now can you?"

    My tolerance of, and faith in, my fellow man to "do the right thing" are at zero.

    A technical solution (to TCP congestion control, etc.) is like teaching a pig to sing: horrible results. Charge the thieving, spamming bastards through the nose AND constrain their traffic. That'll get better results than any pollyanna crap about "fair".

    • That experience being; "mine, mine, ALL MINE!....hahahaha!" AKA "screw you...it's all about me!"

      Economists call this the Tragedy of the Commons [sciencemag.org], and it's the reason driving in traffic sucks, and also the reason public toilets are filthy.

      The Internet is fundamentally a shared infrastructure. BitTorrent and other protocols intentionally utilize that infrastructure unfairly. A BitTorrent swarm is like a pack of hundreds of cars driving 90 mph, both directions, in every lane including the shoulder. They cut y

  • by gweihir ( 88907 ) on Monday March 24, 2008 @12:47PM (#22846314)
    Every year or so somebody else proposes to "fix TCP". It never happens. Why?

    1) TCP works well.
    2) TCP is in a lot of code and cannot easily be replaced
    3) If you need something else, alternatives are there, e.g. UDP, RTSP and others.

    Especially 3) is the killer. Applications that need something else are already using other protocols. This article, like so many similar ones before it, is just hot air from somebody who either did not do their homework or wants attention without deserving it.
  • Solution (Score:2, Interesting)

    by shentino ( 1139071 )
    Personally, I think they should move to a supply and demand based system, where you are charged per packet or per megabyte, and per-unit prices rise during periods of peak demand.

    There are a few power companies that announce 24 hours in advance how much they're going to charge per kWh in any given hour, and their customers can time their usage to take advantage of slack capacity, since the prices are based on demand.

    If we do the same thing with internet service *both in and out*, a real bandwidth hog is going t
  • by Animats ( 122034 ) on Monday March 24, 2008 @02:11PM (#22847764) Homepage

    As the one who devised much of this congestion control strategy (see my RFC 896 and RFC 970, years before Van Jacobson), I suppose I should say something.

    The way this was supposed to work is that TCP needs to be well-behaved because it is to the advantage of the endpoint to be well-behaved. What makes this work is enforcement of fair queuing at the first router entering the network. Fair queuing balances load by IP address, not TCP connection, and "weighted fair queueing" allows quality of service controls to be imposed at the entry router.

    The problem now is that the DOCSIS approach to cable modems, at least in its earlier versions, doesn't impose fair queuing at entry to the network from the subscriber side. So congestion occurs further upstream, near the cable headend, in the "middle" of the network. By then, there are too many flows through the routers to do anything intelligent on a per-flow basis.

    We still don't know how to handle congestion in the middle of an IP network. The best we have is "random early drop", but that's a hack. The whole Internet depends on stopping congestion near the entry point of the network. The cable guys didn't get this right in the upstream direction, and now they're hurting.

    I'd argue for weighted fair queuing and QOS in the cable box. Try hard to push the congestion control out to the first router. DOCSIS 3 is a step in the right direction, if configured properly. But DOCSIS 3 is a huge collection of tuning parameters in search of a policy, and is likely to be grossly misconfigured.

    The trick with quality of service is to offer either high-bandwidth or low-latency service, but not both together. If you request low latency, your packets go into a per-IP queue with high priority but a short queue length. Send too much and you lose packets. Send a little, and they get through fast. If you request high bandwidth, you get lower priority but a longer queue length, so you can fill up the pipe and wait for an ACK. (A toy sketch of this two-class queue follows at the end of this comment.)

    But I have no idea what to do about streaming video on demand, other than heavy buffering. Multicast works for broadcast (non-on-demand) video, but other than for sports fans who want to watch in real time, it doesn't help much. (I've previously suggested, sort of as a joke, that when a stream runs low on buffered content, the player should insert a pre-stored commercial while allowing the stream to catch up. Someone will probably try that.)

    John Nagle
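(The following is my own toy sketch of the two-class, per-IP queueing described in the comment above, not code from the poster; the queue depths are invented numbers.)

```python
# Per-IP pair of queues: the low-latency class is served first but is short
# (excess is dropped); the bulk class is lower priority but deep.
from collections import deque

class PerIPQoS:
    def __init__(self, latency_depth=8, bulk_depth=256):
        self.latency, self.latency_depth = deque(), latency_depth
        self.bulk, self.bulk_depth = deque(), bulk_depth

    def enqueue(self, pkt, low_latency=False):
        q, depth = (self.latency, self.latency_depth) if low_latency \
                   else (self.bulk, self.bulk_depth)
        if len(q) >= depth:
            return False                 # send too much low-latency traffic and you lose packets
        q.append(pkt)
        return True

    def dequeue(self):
        if self.latency:                 # strict priority for the low-latency class
            return self.latency.popleft()
        if self.bulk:
            return self.bulk.popleft()
        return None

q = PerIPQoS()
print(all(q.enqueue(f"voip-{i}", low_latency=True) for i in range(8)))   # True
print(q.enqueue("voip-9", low_latency=True))                             # False: short queue is full
```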

    • by Mike McTernan ( 260224 ) on Monday March 24, 2008 @05:04PM (#22850030)

      I'd argue for weighted fair queuing and QOS in the cable box.

      Seems to me that for ADSL it would be ideally placed in the DSLAM, where there is already a per-subscriber connection (in any case, most home users only get one IP address, making a 1:1 mapping from subscriber to IP; nothing needs to be per-connection, as the original article assumes). In fact, the Wikipedia page on DSLAMs [wikipedia.org] says QoS is already an additional feature, mentioning priority queues.

      So I'm left wondering why bandwidth hogs are still a problem for ADSL. You say that this is a "huge collection of tuning parameters", and I accept that correctly configuring this stuff may be complex, but that is surely the job of the ISPs. Maybe I'm overestimating the capabilities of the installed DSLAMs, in which case I wonder if BT's 21CN [btplc.com] will help.

      Certainly, though, none of the ISPs seem to be talking about QoS per subscriber. Instead they prefer to differentiate services, ranking P2P and streaming lower on the subscriber's behalf. PlusNet (a prominent UK ISP) have a pizza analogy [plus.net] to illustrate how sharing works: in their terms, PlusNet will give you plenty of Margherita slices but make you wait for a Hawaiian, even if you aren't eating anything else. Quite why they think this is acceptable is beyond me; they should enforce how many slices I get at the DSLAM, but let me choose the flavours at my house (say, by having my local router apply QoS policies as it moves packets from the LAN onto the slower ADSL uplink, or by marking streams with the TOS bits in IPv4, or the much better IPv6 QoS features, to assist shaping deeper in the network).
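
      As a rough sketch of that last idea: the home router reads the DSCP bits of each outgoing packet and files it into a priority band before it hits the slow ADSL uplink, draining higher bands first. The band assignments are my own illustrative choices, not anything PlusNet or BT actually do:

        from collections import deque

        def band_for_tos(tos_byte):
            dscp = tos_byte >> 2
            if dscp == 46:   # EF: VoIP and similar latency-sensitive traffic
                return 0
            if dscp == 8:    # CS1 / lower-effort: P2P, backups
                return 2
            return 1         # everything else

        class HomeRouterShaper:
            def __init__(self):
                self.bands = [deque(), deque(), deque()]

            def enqueue(self, packet, tos_byte):
                self.bands[band_for_tos(tos_byte)].append(packet)

            def dequeue(self):
                for band in self.bands:   # strict priority: band 0 drains first
                    if band:
                        return band.popleft()
                return None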

    • John,

      Fairness is not the problem. Fairness is the wedge-issue that CATV-ISPs are trying to use to justify their behavior.

      I personally like the rudimentary aspects of the weighted fair queuing proposal -- so let's imagine that we had it. Would Comcast still have a problem with too many upload bytes from too many homes competing for the upload path back to the CMTS? Yes.

      The real problem is that CATV-ISPs are at their upper limits and FiOS is currently superior. Most CATV nets are DOCSIS 1.1, with whole neighborhoods sharing the same narrow upstream channel.
  • by John Sokol ( 109591 ) on Monday March 24, 2008 @02:38PM (#22848256) Homepage Journal
    Back in 1994 to 1997 I was in many debates on just this subject.

    We were buying T1 and T3 lines for use with video streaming, and the ISPs were getting upset that we were using 90% of the capacity they sold us. Apparently they had specced out their costs based on office use doing web surfing, and on older telco traffic models where roughly 100 lines of outbound capacity served every 10,000+ phone lines, sized to support 95% of peak throughput.

    But we concluded that if you're selling us 1.5Mbps, I damn well better be able to use 1.5Mbps; don't blame me when I use what was sold to me.

    Well, I see this as the same problem. If Comcast or Verizon sells me internet at a given data rate, then I expect to be able to use all of it. There is nothing unfair about me using what I was sold. If they don't like it, then they need to change their contractual agreements with me and change their hardware to match!

    Same goes for the internal infrastructure, backbones and exchange points. If you can't support it, don't sell it! Don't attack the P2P users; they are using what they PAID FOR and what was sold to them!!! If they are not getting it, they should file a class action suit.
    It's no different than if your local cable company decided that 4 hours of TV was your limit and started degrading your reception if you watched more, even though that wasn't in the contract you signed up for.

    On the other side, P2P should be given the means to hug the edges of the network. By this I mean that communication between two cable modem or DSL users hanging off the same upstream routers (fewer hops) should be preferred and more efficient, not clogging up the more costly backbones. Currently P2P doesn't take any of that into consideration. Maybe ISPs could consider a technical solution to that rather than trying to deny customers the very access they are contractually bound to provide...
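
    As a rough sketch of what "hugging the edges" might look like in a client, peers could be ranked by shared address prefix as a crude stand-in for topological closeness (a real client would rather use measured RTT or ISP-published hints; the addresses below are made up):

      import ipaddress

      def shared_prefix_len(ip_a, ip_b):
          a = int(ipaddress.IPv4Address(ip_a))
          b = int(ipaddress.IPv4Address(ip_b))
          return 32 - (a ^ b).bit_length()   # 32 means identical addresses

      def rank_peers(my_ip, peer_ips):
          # Closest-looking peers first.
          return sorted(peer_ips, key=lambda p: shared_prefix_len(my_ip, p), reverse=True)

      print(rank_peers("82.10.20.30", ["82.10.99.1", "201.5.5.5", "82.10.20.99"]))
      # -> ['82.10.20.99', '82.10.99.1', '201.5.5.5']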
  • by Crypto Gnome ( 651401 ) on Monday March 24, 2008 @04:30PM (#22849618) Homepage Journal
    Yet another clueless wannabe pontificating about something they clearly do not understand. Some days I wish they'd firewall the RFC list and stop people like this from commenting on stuff.
    1. Many P2P protocols use UDP (Skype, anyone?)

    2. It proposes a client/application-side change in behaviour while whining about a failure of the protocol
      - (think "my car needs a grease and oil change, so I'll go walk the dog" - the proposed solution bears no relationship to the problem)

    3. The enforcement proposal ignores how the interweb works: there's NO difference (at the IP level) between a user multi-streaming a TCP download of a single file and a user opening multiple TCP connections to a webserver to simultaneously download *all* the crappy bits-n-shits that make up a web page (ie parallel, non-pipelined HTTP requests) rather than one at a time
      - yet the first would be condemned as an *unfair use* while the second is perfectly normal and acceptable behaviour
    I could go on for hours.
    • If 'the protocol' is broken, then 'the protocol' needs to change; recommending an app-level change only opens up further opportunity for abuse
      - after all, if the app developers were genuinely interested in playing nicely in the sandbox, they would be doing so already

    • Recommending *external* enforcement will never work; that costs time and money, and who is gonna pay me to implement it?
      - TCP congestion control "works" (ie as engineered) because it's inherent in the protocol implementation and does not require "enforcement" by the ISP (see the sketch below)

    • P2P users are initiating "sessions" (assuming they're still using TCP) to different endpoints, so you don't have a beautiful, neat bundle of parallel tubes as described in the metaphor
      - ie most of your assumptions about "how this works" are wrong.
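
    The sketch promised above: the additive-increase/multiplicative-decrease logic that's baked into every TCP sender (congestion avoidance only; slow start and fast recovery omitted). Nobody external "enforces" this - it's just what the stack does:

      class AimdWindow:
          def __init__(self, mss=1.0):
              self.mss = mss
              self.cwnd = mss          # congestion window, in segments

          def on_ack(self):
              # Roughly +1 segment per round trip when every ACK arrives.
              self.cwnd += self.mss * self.mss / self.cwnd

          def on_loss(self):
              # Halve the window when congestion (loss) is detected.
              self.cwnd = max(self.mss, self.cwnd / 2.0)
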
    Of course, the entire article starts out from a baseless assumption (that users should get 'fair' access to the interweb).

    Anyone read their ISP's Ts&Cs? Ever?

    IP is a *best effort* protocol.... we will punt your packet upstream and hope it gets there - have a nice day.

    There is *no* guarantee of *anything*.

    Now, as far as anything approaching a "solution" to the supposed "problem" goes, and as long as we're talking about *application level* tweaks:

    What about all the P2P developers marking their "data transmission" packets (whatever the protocol) with the lowest-of-the-low QoS markings?
    --> "if you need to manage congestion, I am exceedingly eligible for shaping"

    That would work nicely.
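
    Something like this, say - a sketch using the standard IP_TOS socket option (available on Linux and most Unix-likes) to tag bulk data sockets with the "lower effort" CS1 code point. Whether anything in the network honours the mark is, of course, up to the ISP:

      import socket

      DSCP_CS1_TOS = 0x08 << 2   # CS1 / lower-effort DSCP, as a TOS byte value

      def open_background_socket():
          s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_CS1_TOS)
          return s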

    In fact, if YouTube (and friends) did the same, it would actually *encourage* ISPs to enable proper QoS processing throughout their entire networks.

    If applications (and protocols) inherently played nicely in the sandbox, your ISP would bend-over-backwards to guarantee a near-perfect service. (mainly because it'd thusly be near-trivial to do)

    And yes, I realise this raises the spectre of "Net Neutrality" - but seriously, folks, how is that argument any different from "because of the terrorists" or "think of the children"?

    ISPs applying QoS to traffic in order to guarantee quality is not inherently bad. The *badness* comes about because they will (yes, I said WILL, not MIGHT or COULD) use said QoS practices to push their own services and enforce their own policies (we hate P2P, ignore client QoS markings, etc, etc, etc).

    All those people who're frothing-at-the-mouth because QoS is BAD need a RABIES shot.

    In an ideal world, we'd never need QoS. QoS is a congestion management mechanism. If you have no congestion, then you don't need to apply QoS techniques.

    But until the day when we all have quantum-entangled communications processors with near-infinite effective bandwidth we're going to need QoS, somewhere.
