BitTorrent Calls UDP Report "Utter Nonsense"

Ian Lamont writes "BitTorrent has responded to a report in the Register that suggested uTorrent's switch to UDP could cause an Internet meltdown. Marketing manager Simon Morris described the Register report as 'utter nonsense,' and said that the switch to uTP — a UDP-based implementation of the BitTorrent protocol — was intended to reduce network congestion. The original Register report was discussed enthusiastically on Slashdot this morning."
  • Best of intentions (Score:5, Interesting)

    by seanadams.com ( 463190 ) * on Monday December 01, 2008 @07:07PM (#25953385) Homepage

    BT may have the best of intentions here in developing this experimental protocol, but this quote leads me to believe that their understanding of the problem is terribly naive:

    It so happens that the congestion control mechanism inside TCP is quite crude and problematic. It only detects congestion on the internet once "packet loss" has occurred - i.e. once the user has lost data and (probably) noticed there is a problem.

    Packet loss is a normal and deliberate mechanism by which TCP detects the maximum throughput of a path. Periodically it increases the number of packets in flight until the limit is reached, then it backs off. You have to test again from time to time, in order to increase throughput if more capacity becomes available. This in no way incurs "loss of data" or a noticeable problem. Packets lost due to congestion window growth are handled by the fast retransmit algorithm, which means that there is no timeout or drop in throughput (that would be pretty stupid if the whole purpose of growing the congestion window is to _maximize_ throughput).
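
    For the curious, here is a toy sketch of that probe-and-back-off cycle (illustrative Python, not a real TCP stack; the constants are made up). The window grows until a loss is detected, and fast retransmit/fast recovery halves it instead of stalling the connection:

        # Toy model of TCP-style congestion window behaviour (illustration only).
        def simulate_aimd(capacity_segments, rounds):
            cwnd = 1.0                    # congestion window, in segments
            ssthresh = capacity_segments  # slow-start threshold (arbitrary start)
            history = []
            for _ in range(rounds):
                if cwnd >= capacity_segments:
                    # Loss detected via duplicate ACKs -> fast retransmit +
                    # fast recovery: halve the window, no timeout, no stall.
                    ssthresh = cwnd / 2
                    cwnd = ssthresh
                elif cwnd < ssthresh:
                    cwnd *= 2             # slow start: exponential growth
                else:
                    cwnd += 1             # congestion avoidance: additive increase
                history.append(cwnd)
            return history

        print(simulate_aimd(capacity_segments=64, rounds=30))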

    I wonder if Simon Morris was merely oversimplifying for the benefit of the layman, but I still find that statement disturbing. As I suggested in the other thread, it really sounds like they're going to reinvent TCP (poorly). That's not to say you couldn't design a better protocol specifically for point-to-multipoint transfer, but I question if they're on the right track here.

    • Re: (Score:3, Informative)

      by AndresCP ( 979913 )
      Yeah, that is a ridiculous description. No one uses TCP for real-time work, which is the only time a lost packet would be noticeable to any end user.
    • by 644bd346996 ( 1012333 ) on Monday December 01, 2008 @07:19PM (#25953481)

      Detecting maximum throughput via packet dropping is really bad in high-latency links and in applications that need low latency. It is also apparently easy to implement TCP in such a way that overall transfer speed takes a nosedive when latency gets high, as evinced by Microsoft having done just that.

      • by seanadams.com ( 463190 ) * on Monday December 01, 2008 @07:24PM (#25953537) Homepage

        Detecting maximum throughput via packet dropping is really bad in high-latency links and in applications that need low latency.

        I disagree. Please be more specific. First, what exactly do you believe is the problem? Secondly, how else would you do it (on an IP network)?

        It is also apparently easy to implement TCP in such a way that overall transfer speed takes a nosedive when latency gets high, as evinced by Microsoft having done just that.

        So you're saying it's possible to implement it with a bug? I've recently found a heinous bug in a recent Redhat kernel which would result in _deadlocked_ TCP connections. It happens to the best of us.

      • by TheRaven64 ( 641858 ) on Monday December 01, 2008 @07:32PM (#25953613) Journal
        No it isn't. The size of the window can be adjusted to compensate for this. With a larger window, you get more throughput over high-latency links at the expense of having to wait longer for retransmits when they are needed. Modern TCP stacks dynamically adjust the window size based on the average RTT for any given link.
        • I haven't used Vista, but I'm pretty sure that as of XP SP2, the TCP stack in Windows didn't fit that definition of modern.

          For the purposes of this discussion, it doesn't really matter whether that's a flaw inherent in the protocol, or just a bug in the implementation with 90% market share. A real network will still have very sub-optimal performance under load if some of the important links have high latency, even if the bandwidth is high.

          TCP is a good all-around protocol, but there are a lot of situations

          • Re: (Score:3, Informative)

            by Hal_Porter ( 817932 )

            http://en.wikipedia.org/wiki/TCP_window_scale_option#Windows [wikipedia.org]

            TCP Window Scaling is implemented in Windows 2000, XP, Server 2003 and Vista operating systems. Vista has enabled it by default, while the other operating systems implement it as an option. Because many routers and firewalls do not properly implement TCP Window Scaling, it can cause a user's Internet connection to malfunction intermittently for a few minutes, then appear to start working again for no reason. If "diagnose problem" is selected in Vista, an error message will be displayed "cannot communicate with primary DNS server."

            I.e. it's been around since Win2k but not enabled, for the excellent reason that routers and firewalls don't support it. In the absence of window scaling the TCP window size is limited to 64K, which will be a bottleneck on fast connections with a high RTT. That sounds like a good description of a BitTorrent connection to a peer on the other side of the world.
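
            To put rough numbers on that 64K ceiling (back-of-the-envelope only): maximum TCP throughput is at best one window per round trip, so

                # Throughput ceiling without window scaling: window / RTT.
                window_bytes = 64 * 1024
                for rtt_ms in (20, 100, 250):                   # illustrative RTTs
                    bps = window_bytes / (rtt_ms / 1000.0)      # bytes per second
                    print(f"RTT {rtt_ms:3d} ms -> ~{bps * 8 / 1e6:.0f} Mbit/s ceiling")

            i.e. roughly 26 Mbit/s at 20 ms, 5 Mbit/s at 100 ms, and 2 Mbit/s at 250 ms, no matter how fat the pipe is.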

            Mind you BitTorrent is notorious for gobbling all the available bandwidth unless you limit it,

    • by boyko.at.netqos ( 1024767 ) on Monday December 01, 2008 @07:20PM (#25953493)

      I agree with you Seanadams, but I just finished an interview with Simon Morris [networkper...edaily.com] and it's not that he's saying the way TCP handles packet loss is a particular problem; he just thinks he can do better.

      BitTorrent essentially already has its own methods to deal with dropped packets of information - it gets the information from elsewhere. Moving to UDP eliminates the three-way handshake, and it eliminates TCP's throttling down of the transmission window in response to a dropped packet.

      The only problem is that it also eliminates the Layer 4 [transport] traffic congestion safeguards, which is why BitTorrent is looking to establish new and better ones at layer 7 [application].

      • Re: (Score:2, Insightful)

        And why, exactly, does Simon Morris think that he, single-handedly, can do better than the folks at the IETF, most of whom have PhDs in computer science? Seems a bit presumptuous, if you ask me.

        • by naasking ( 94116 ) <naasking@gm[ ].com ['ail' in gap]> on Monday December 01, 2008 @07:39PM (#25953683) Homepage

          Because application-specific knowledge allows easier and often better optimizations than application-generic protocols, which have to be good enough for all applications at the expense of top end performance for specific applications. Isn't it obvious?

          • by Xelios ( 822510 ) on Monday December 01, 2008 @07:59PM (#25953849)
            On top of this TCP hasn't seen a major update since the 80's. Most of it was implemented to deal with a very different internet than the one we have today. If you can side step TCP's shortcomings by doing the congestion control more efficiently at the application level then why not give it a shot?
            • by CTachyon ( 412849 ) <chronos AT chronos-tachyon DOT net> on Monday December 01, 2008 @11:24PM (#25955453) Homepage

              On top of this TCP hasn't seen a major update since the 80's. Most of it was implemented to deal with a very different internet than the one we have today. If you can side step TCP's shortcomings by doing the congestion control more efficiently at the application level then why not give it a shot?

              Uhh, TCP Vegas [wikipedia.org], TCP New Reno [faqs.org], BIC [wikipedia.org] and CUBIC [wikipedia.org]? All of which have been implemented in the Linux kernel?

              TCP has only been standing still since the 80's if you're using an OS from the 80's... or a Microsoft OS.

              • by rxmd ( 205533 ) on Tuesday December 02, 2008 @05:00AM (#25957225) Homepage

                On top of this TCP hasn't seen a major update since the 80's.

                Uhh, TCP Vegas [wikipedia.org], TCP New Reno [faqs.org], BIC [wikipedia.org] and CUBIC [wikipedia.org]? All of which have been implemented in the Linux kernel? TCP has only been standing still since the 80's if you're using an OS from the 80's... or a Microsoft OS.

                Note that the only one of those which made it into an RFC is New Reno, aka RFC 2582 [faqs.org], which has been implemented in the Windows TCP stack since Vista [microsoft.com], along with a number of other recent RFCs.

                The others are basically different suggestions for implementing TCP congestion control. Microsoft has its own variant of those (Compound TCP [wikipedia.org], which is quite similar to TCP Vegas and has also been ported to Linux [caltech.edu]).

                Your 1980s comment is not quite up to date, of course. Microsoft stuck with their BSD-based implementation of the TCP stack for quite a long time (too long, in fact), but with Vista it has undergone quite a bit of change. I know it's unpopular to say something in favour of MS and/or Vista here and I'm far from being an MS apologist, but it's worth actually reading their Cable Guy columns [microsoft.com] every now and then to stay up to date on what the Windows network stack actually does and doesn't do - especially if you are a sysadmin or interested in developments in the TCP arena.

              • Re: (Score:3, Informative)

                True enough, except that most implementations of TCP are just plain-old TCP. If -everybody- switched to CUBIC, then we could all use CUBIC. But right now, plain old TCP is the standard.

                In corporate networks, you can use something like CUBIC or TCP New Reno, because you control both ends of the connection. On the Internet, it's a bit harder.

                But in a BT application, most communication is going to take place between two computers using the same application - an application that BT, Inc. does control: uTorre

                • Re: (Score:3, Informative)

                  by Andy Dodd ( 701 )

                  90% of the benefits of an improved TCP congestion control algorithm come from improvements on the sender side - very few changes are needed on the receive side, and most OSes already have those few (window scaling, etc.). SACK is one of the few extensions that benefits the receiver and still isn't widespread.

                  Thus, for example, a connection between a Linux webserver that has CUBIC as its congestion control algorithm and a Windows XP client with NewReno gets 90%+ of the benefits of

              • Re: (Score:3, Funny)

                by Fnord666 ( 889225 )

                ...if you're using an OS from the 80's... or a Microsoft OS.

                The second being a subset of the first.

              • Re: (Score:3, Interesting)

                It's also worth mentioning that Linux apps can choose between those different congestion controls with get/setsockopt(). So the applications ARE allowed (at least in Linux) to choose a more efficient congestion control according to their needs. And if their needs aren't covered, they can submit a new congestion control implementation.
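
                A minimal sketch of that, assuming a Linux host where the requested algorithm is built or loaded (check /proc/sys/net/ipv4/tcp_available_congestion_control):

                    # Per-socket congestion control selection (Linux only;
                    # socket.TCP_CONGESTION is exposed by Python 3.6+ there).
                    import socket

                    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                    try:
                        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")
                        algo = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
                        print("using:", algo.split(b"\x00")[0].decode())
                    finally:
                        s.close()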

        • by ushering05401 ( 1086795 ) on Monday December 01, 2008 @07:43PM (#25953713) Journal

          Maybe he has spent years working with the technology he is trying to one up?

          This relates directly to a reply I just finished on another thread regarding whether a degree is required for success.

          This Morris character may be right, he may be wrong, but you citing the education level of his rivals blows your point out of the water IMO.

        • by FictionPimp ( 712802 ) on Monday December 01, 2008 @07:47PM (#25953753) Homepage

          I know, how dare anyone try to improve things without being old and having a PhD.

          Fucking punk ass kids. Next thing we know we will have guys with only master's degrees trying to write operating systems.

          • Re: (Score:3, Interesting)

            by Anonymous Coward

            I just have to mention this real quick...a bit off topic but I never get to point this out...

            Albert Einstein = High School Graduate, College Professor. Kinda speaks for himself. NOT College Student
            Thomas Edison = High School Dropout. Inventor of so many overly complicated things it makes...well...pretty much everyone's head spin.
            Nikola Tesla = Last formal schooling was in the Croatian equivalent of 5th grade. Inventor of Alternating Current & the Tesla Coil amongst many other things.
            Bill Gates = D

        • by Anonymous Coward on Monday December 01, 2008 @08:18PM (#25954023)

          I wonder if you know what you're talking about. TCP is a great general-use protocol, as is UDP. But for specific cases, like this one, developers will tend to roll their own as an extension to UDP to fit their needs.
           
          Take any modern networked game. It will not use straight TCP, as that's stupid, nor will it simply use UDP because that doesn't work, either. TCP is fine if you're writing apps that require data to get to the other end, without regard to time taken, but if you need a happy middle ground you need to do it yourself.

        • by Vellmont ( 569020 ) on Monday December 01, 2008 @08:39PM (#25954145) Homepage


          does Simon Morris think that he, single-handedly, can do better than the folks at the IETF, most of whom have PhDs in computer science?

          You really have a very strange over-estimation of the value of a PhD. It doesn't mean you're a super-genius, you're smarter than everyone (or even anyone) else, or that you're always right. It simply means you've been willing to go through some schooling. That's it. Hopefully you've learned something from that education.

          There are plenty of examples of non-PhDs making major contributions. The WWW was largely invented by someone with a degree in physics (undergrad, I believe), with no degree in computer science. Linus Torvalds attained a mere master's degree in computer science, yet his OS seems to have become a bit more successful than quite a few other OSes written by people with more education.

          • by DiegoBravo ( 324012 ) on Monday December 01, 2008 @09:36PM (#25954581) Journal

            And don't forget that those IETF PhDs couldn't come up with a better way to upgrade IPv4 than the incompatible and essentially non-interoperable IPv6 (please don't argue about dual stacks or anything similar).

          • by blueg3 ( 192743 ) on Tuesday December 02, 2008 @12:14AM (#25955791)

            Further, the guys at the IETF are not tasked with coming up with optimal protocols for specific applications. They're not necessarily even tasked with coming up with better versions of TCP. For that matter, there are tons of things they have come up with that are improvements on what we use now but haven't been picked up by the Internet at large.

            The expertise of the IETF members really has no bearing on this matter. If you were trying to actually recreate TCP (i.e., a general-purpose protocol), and the IETF said that theirs is better -- they're probably right. That's not what's going on here.

        • And why, exactly, does Simon Morris think that he, single-handedly, can do better than the folks at the IETF, most of whom have PhDs in computer science? Seems a bit presumptious, if you ask me.

          And Einstein was being presumptuous when he thought he had a better idea than those who had gone before him. Your point?

    • By user I hope he means the program which is using TCP, not the human user at the computer.

    • by sudog ( 101964 ) on Monday December 01, 2008 @07:52PM (#25953795) Homepage

      Please mod parent up. This place is so damn full of armchair wannabe network experts who've clearly no understanding of how TCP congestion avoidance works that it's bordering on the physically painful.

    • by argent ( 18001 )

      As I sugggested in the other thread, it really sounds like they're going to reinvent TCP (poorly).

      Aw, Bullwinkle, that trick never works!

      This time for sure, Rocky!

    • Re: (Score:3, Insightful)

      by cheater512 ( 783349 )

      BitTorrent doesn't need reliable communications, and congestion control can be tuned for BitTorrent connections.

      IMHO it's a good move.

    • by zmooc ( 33175 ) <zmooc@zmooc.DEGASnet minus painter> on Monday December 01, 2008 @08:44PM (#25954183) Homepage

      TCP does two things at the cost of some overhead: it ensures packets arrive in order and it ensures they arrive. While doing that, it also has to ensure that in the case of network congestion, packets are resent within reasonable time while being fair and allowing each TCP connection an equal share of the speed and bandwidth. The problems TCP solves are at the root of the success of bittorrent; bittorrent is extremely good at spreading traffic in such a way that links that have those problems are avoided. It can do this since the order in which the packets arrive does not matter and in fact it does not matter whether they arrive from a certain host at all; it can simply request a lost packet from another host minutes or hours later. In the case of TCP/IP there is no provision to handle such a case; it keeps trying to get the packets to their destination in the right order as quickly as possible.

      So none of the problems that TCP solves affect bittorrent, and all the overhead that TCP causes, however small, serves no purpose in this case. Instead of many small TCP ACKs, resends, negotiation, and whatever else TCP does, bittorrent will do just fine with one status update every now and then, which it can conveniently combine with the packets that are sent the other way anyway. Therefore, IMHO, using UDP for bittorrent is fine; it will help spread bittorrent traffic even better over the fastest links while using fewer bytes of network traffic, and it might even end up being easier on the Internet, since bittorrent traffic no longer has to fight with other TCP connections for a fair share of the bandwidth; it can be tuned to get just a little bit less.
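
      To make that concrete, a minimal sketch of what a UDP-carried piece transfer could look like (the field layout here is invented for illustration and is not uTP): each datagram says which piece and offset it belongs to, so ordering and per-packet reliability are left to the application, which can simply re-request missing ranges later - possibly from a different peer.

          import socket
          import struct

          HEADER = struct.Struct("!IIH")   # piece index, byte offset, payload length

          def send_chunk(sock, addr, piece, offset, payload):
              sock.sendto(HEADER.pack(piece, offset, len(payload)) + payload, addr)

          def recv_chunk(sock):
              data, addr = sock.recvfrom(HEADER.size + 1400)
              piece, offset, length = HEADER.unpack_from(data)
              return piece, offset, data[HEADER.size:HEADER.size + length], addr

          # The receiver just records which (piece, offset) ranges it has seen and
          # asks any peer for whatever is still missing, in any order.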

      Of course they can fuck it up completely and fill the poor pipes of the Internet with loads of packets that never arrive, but they don't have to; it is not inherent to this solution and therefore such rumours should be classified as... FUD.

      Disclaimer: I did not RTFA and know practically nothing about TCP/IP :-)

      • by complete loony ( 663508 ) <Jeremy.Lakeman@nOSpaM.gmail.com> on Monday December 01, 2008 @10:08PM (#25954843)
        Another advantage UDP has over TCP: there is no maximum half-open connection limit on Windows XP. So having a bittorrent client running in the background should not have any impact (other than the available bandwidth, obviously) on every other application that is trying to open a TCP connection.
      • by Sancho ( 17056 ) *

        I'm sure that such problems can be solved, but:
        1) Retransmission/order may still be important if BT chunk size is greater than max packet size.
        2) The advantages you cite mean that DoS of trackers and clients becomes easier.

    • by Aloisius ( 1294796 ) on Monday December 01, 2008 @08:45PM (#25954191) Homepage
      There are a lot of good reasons why BitTorrent should use a UDP file transfer protocol, but I probably wouldn't put TCP's congestion control mechanism on the top of the list.

      If you're going to argue UDP, you might as well bring out the major benefits:

      * Going to a NAK-based or hybrid NAK/ACK-based protocol which can significantly improve performance over high latency or poor connections

      * Multicast - assuming anyone implements IPv6 or multicast over the internet :)

      * NAT to NAT transfers (you can do it with TCP, but it is just harder and you generally have to build a user-space TCP stack anyway).

      * Faster start time, since you no longer have to do a three-way handshake and all the annoying things Microsoft does to prevent people from starting too many connections per second

      There are plenty of UDP-based protocols with TCP-friendly congestion control mechanisms out there and plenty of research into the subject.

      The biggest problems I see happening here revolve around different BitTorrent clients all reimplementing uTP and doing a poor job at it. I'd like to see a spec for uTP and a public domain implementation to help minimize the problems that could pop up.
      • Re: (Score:3, Interesting)

        by Andy Dodd ( 701 )

        You forgot one:

        Transport protocol header authentication, preventing Man-In-The-Middle connection denial-of-service attacks, aka Sandvining, aka Bogus RSTs

    • Don't some ISPs intentionally drop packets as a way of blocking/throttling bittorrent? I can see how the statement that in such a situation packet loss is noticeable to users is true.
    • Re: (Score:3, Informative)

      by Todd Knarr ( 15451 )

      He was simplifying, and you can see it by looking at what happens before packet loss starts. When traffic loads increase, routers don't immediately start dropping packets. First they start queueing packets up for handling, trying their best to avoid packet loss. Only when their queues fill up do they start discarding packets. TCP waits until that's happened before it starts backing off. A better way is what uTP sounds like it's designed to do: watch for the increasing latency that signals routers are starti
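
      That delay-based idea, sketched very roughly (this is the general approach, not BitTorrent's actual uTP algorithm; the constants are invented): track the lowest delay ever seen as an estimate of the uncongested path, and shrink the send rate as measured delay climbs above it, well before any packet is dropped.

          TARGET_EXTRA_DELAY = 0.025   # seconds of queueing delay we will tolerate

          class DelayBasedRate:
              def __init__(self, rate_bps):
                  self.rate = rate_bps
                  self.base_delay = float("inf")

              def on_delay_sample(self, one_way_delay):
                  self.base_delay = min(self.base_delay, one_way_delay)
                  queueing = one_way_delay - self.base_delay
                  if queueing > TARGET_EXTRA_DELAY:
                      self.rate *= 0.85    # back off before the router drops anything
                  else:
                      self.rate *= 1.02    # probe gently while there is headroom
                  return self.rate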

    • Re: (Score:3, Insightful)

      by porpnorber ( 851345 )

      There's plenty of research to suggest that packet loss is not the most attractive indicator of congestion, not merely because it involves losing data, but because it comes very late in the development of an instability. Under congestion, router queue depths go up, and you get observable latency spikes. To exploit this observation effectively you need to timestamp packets, of course.

      As many others have noted, there's a second issue with TCP, which is that it is hell-bent on in-order delivery, which is an ut

    • Japan (Score:5, Insightful)

      by MasaMuneCyrus ( 779918 ) on Tuesday December 02, 2008 @03:05AM (#25956731)

      I just got back from living in Japan not too long ago. Over there, we got 100 Mbps up AND down for about $40/month. Near the end of my stay there, I got a letter from my ISP stating that they're going to start implementing a bandwidth cap, and that no user can upload more than 500 GB/month, but downloading is still unlimited.

      This year, I read on Slashdot that AU/KDDI is rolling out a 1 Gbps line for a similarly cheap price.

      If you want my sympathy for ISPs in America, get back to me when I get even one tenth of the service I got in Japan. I'd like to extend a big middle finger of gratitude to all American ISPs. No one is spouting the gloom and doom over in Asia, and meanwhile, they're shooting ahead into the future of the internet. Asia's been rolling out fibre-optics, at great cost to them, but with spectacular results. Conversely, we sit on our asses and wonder how we can charge more money to more users to put them through the same amount of pipes without upgrading infrastructure.

      **** you, American ISPs.

      Signed,
      Someone who's seen the other side of the world, and it was better.

      • Re:Japan (Score:4, Informative)

        by muffen ( 321442 ) on Tuesday December 02, 2008 @07:50AM (#25958077)

        I just got back from living in Japan not too long ago. Over there, we got 100 mbps up AND down for about ~$40/month. Near the end of my stay there, I got a letter from my ISP stating that they're going to start implementing a bandwidth cap, and that no user can upload 500gb/month, but downloading is still unlimited. This year, I read on Slashdot that AU/KDDI is unrolling a 1gbps line for a similarly cheap price. If you want my sympathy for ISPs in America, get back to me when I get even one tenth of the service I got in Japan. I'd like to extend a big middle finger of gratitude to all American ISPs. No one is spouting the gloom and doom over in Asia, and meanwhile, they're shooting ahead into the future of the internet. Asia's been rolling out fibre-optics, at great cost to them, but with spectacular results. Conversely, we sit on our asses and wonder how we can charge more money to more users to put them through the same amount of pipes without upgrading infrastructure.

        I have 100 Mbit full-duplex for $30 a month, and this is standard pricing where I live (in Europe). There is a gigabit fiber link going into our building (an apartment building with 24 apartments) and from there it's Cat 6 to each apartment.

        This was the cost of getting it...
        About $1500 for getting the fiber into the building (a one-time cost including the router). In fact, I had a choice of two different providers, and there are four separate fiber cables in the road outside my apartment block. They were put there when the roads were dug up to change some pipes. After the water pipes were changed, any company that wanted to lay down fiber along the road was allowed to do so; four companies did, of which two were selling fiber connections to consumers. The companies paid nothing for digging up the road, and the work took two days longer due to the fiber being put down. This is standard for any roadworks and has been for quite some time, which is the main reason fiber is so widespread in the cities here.

        Some equipment was installed in the basement, from there, a cable was pulled to each apartment. The cost for this was $200 / apartment. So, total cost per apartment was about $275.

        Then, the company that we got the fiber from sells 100/100 subscriptions for just under $30/month. The only thing with this company, as opposed to the other company, was that the initial cost was lower but in return you have to get the internet connection via them (three-year contract). After the three years have passed, you can "get out" of the contract with them but keep the fiber for an additional $1000. If this is done you can then freely choose your ISP, and the cost currently, for the cheapest provider, is $19 for a 100/100 connection. For this apartment block we were paying $150 a year as a base fee to our cable provider (per apartment). Since TV channels can be gotten via the fiber, at a lower cost than what the cable company charges for the same channels, the $275 was weighed against the $150 a year we now don't have to pay (this is how I got even the "old" people to agree to the fiber installation). Two years, even if you don't have an internet subscription, and it's a good deal. Finally, you can also use this for VoIP: $5/month if you want a subscription where you get a phone number and free calls to any phone in the entire country (mobiles excluded).

        I can add that the other provider wanted $5000 for getting the fiber in to the building, so the difference was $3500. Over three years it was about the same (may even have worked out cheaper depending on the subscriptions you choose), but I felt it was easier to convince people in the apartment block to get the fiber in to the building if I could keep the initial cost to a minimum.

        So, I simply cannot understand why some countries are having such a hard time rolling out fiber connections. There are roadworks all the time, since roadworks are handled by some central government function (gen

  • by kcbanner ( 929309 ) on Monday December 01, 2008 @07:08PM (#25953399) Homepage Journal
    ...we should have known they are full of shit.
  • by QuantumG ( 50515 ) * <qg@biodome.org> on Monday December 01, 2008 @07:13PM (#25953439) Homepage Journal

    Nonsense? From The Register? Ya don't say.

  • by kwabbles ( 259554 ) on Monday December 01, 2008 @07:21PM (#25953503)

    My ISP, Comcast, is already on top of this new bittorrent over UDP idea and has summarily blocked all UDP traffic.

    So, I wonder if I'll be experNO CARRIER

    unknown host slashdot.org

  • by Rayeth ( 1335201 ) on Monday December 01, 2008 @07:25PM (#25953541)

    I don't really see how this is going to kill VoIP and online gaming. Those two services are big users of UDP, no doubt, but it's not like the sudden explosion of UDP requests is going to squeeze VoIP traffic out. If anything it should encourage ISPs and providers to increase the rate of their roll-out of new tech.

    This move should bring into focus the last mile problem that is the real source of most of the internet connection speed debate. I don't care how the solution ends up working, but I think there needs to be a plan given that most of the plans I have heard involve several years of lead time.

    • Re: (Score:3, Interesting)

      by mini me ( 132455 )

      This move should bring into focus the last mile problem that is the real source of most of the internet connection speed debate.

      When someone says "the last mile problem," I think the last mile is short on bandwidth. The problem here is that the last mile has (and is using) more bandwidth than the upstream connections can handle.

      • by onkelonkel ( 560274 ) on Monday December 01, 2008 @07:58PM (#25953839)
        So should we call this the "First 4,999 Mile Problem"?
      • The problem here is that the last mile has (and is using) more bandwidth than the upstream connections can handle.

        For some ISPs, that is the case. However, for many ISPs/MSOs the last mile is literally the problem.

        Take Comcast for example. Upstream bandwidth on the last mile is their biggest limiting factor. Implementing torrent throttling and getting their wrist slapped by the FCC wasn't fun for them.

        You think they like getting criticized for over compressing their HD streams and are doing it for the pure

  • The problem is... (Score:5, Informative)

    by wolfbyte18 ( 1421601 ) on Monday December 01, 2008 @07:49PM (#25953759)
    we have an opportunity to detect end-to-end congestion and implement a protocol that can detect problems very quickly and throttle back accordingly so that BitTorrent doesn't slow down the internet connection
    ----
    The major problem I see is that UDP doesn't play as nicely as TCP. Not by a longshot.

    As soon as TCP notices a single packet loss, the Jacobson Algorithm kicks in and it's throttled to maybe 50-60%, and raises the limit slowly. I highly doubt that uTorrent's reworked version of UDP will play this nicely.

    As soon as TCP's throttling kicks in, space will be cleared in the tubes. uTorrent will be able to send more data through UDP without noticing any loss, so it'll quickly move to fill this space. Then, TCP gets hit with more data loss - and goes slower. It seems like a vicious cycle.
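
    A toy model of that cycle (numbers are purely illustrative, and the non-backing-off flow is held constant for simplicity): a TCP-like flow halves its rate on every loss while the other sender never backs off, so the TCP flow ends up sawtoothing at or below whatever the unresponsive flow leaves it.

        link = 100.0                 # link capacity, arbitrary units
        tcp_rate, fixed_rate = 50.0, 50.0

        for step in range(10):
            if tcp_rate + fixed_rate > link:
                tcp_rate /= 2        # multiplicative decrease on congestion
            else:
                tcp_rate += 5        # additive increase while there is room
            print(f"step {step}: tcp={tcp_rate:5.1f}  fixed={fixed_rate:5.1f}")
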
    • Re: (Score:2, Insightful)

      As soon as TCP notices a single packet loss, the Jacobson Algorithm kicks in and it's throttled to maybe 50-60%, and raises the limit slowly. I highly doubt that uTorrent's reworked version of UDP will play this nicely.

      You're right; uTP is actually nicer than TCP.

    • Well, when they say 'hit back' they really mean hit back. Of course there are probable benefits aside from just getting through the ISPs' "congestion management", but I think just about everybody has had it with ISPs throttling (AND not admitting it).
    • Re:The problem is... (Score:5, Informative)

      by seanadams.com ( 463190 ) * on Monday December 01, 2008 @09:16PM (#25954391) Homepage

      The major problem I see is that UDP doesn't play as nicely as TCP. Not by a longshot.

      That statement makes no sense whatsoever. It's the same as saying "IP doesn't play as nicely as TCP". They are not comparable.

      You have to be talking about whatever transfer protocol might be implemented ON TOP OF UDP, because UDP is merely a datagram protocol. It's nothing more than an IP packet plus port numbers.

      In other words, I could write a simple program which blindly fires UDP packets at the rate of 1GB/s. This would kill my internet connection. I could also write a program which transmits one UDP packet per hour. This would have no effect. See what I mean? It's entirely a function of the application.
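
      In code, the point is simply that with UDP the sending rate lives entirely in the application; this sketch (hypothetical address, illustrative numbers) paces datagrams at whatever packets-per-second it is told to:

          import socket
          import time

          def paced_sender(dest, payload, packets_per_second, count):
              sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
              interval = 1.0 / packets_per_second
              for _ in range(count):
                  sock.sendto(payload, dest)
                  time.sleep(interval)   # the application chooses this pace
              sock.close()

          # e.g. paced_sender(("198.51.100.7", 9999), b"x" * 1200, 10, 100)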

    • The major problem I see is that UDP doesn't play as nicely as TCP. Not by a longshot.

      The main reason for swapping to UDP is that 600 TCP connections over a single congested link don't play very nicely together anyway. If you let uTorrent flood your upload bandwidth with outgoing TCP connections you already have the exact problem you're talking about, only it gets amplified by all the automatic retransmits. Swapping to UDP doesn't magically sweep away all the potential problems of having such a bandwidth-greedy application, but it gives the application more control in how the bandwidth of all it

      • But, but, but, BitTorrent is no more bandwidth greedy than -say- SFTP or eMule! You *can* ask any decent BT (or SFTP or eMule) client to limit the bandwidth that it uses.

  • by crossmr ( 957846 ) on Monday December 01, 2008 @07:55PM (#25953827) Journal

    everyone thought DHT was great too, but I found every time I used it it caused massive headaches. I would jump on a popular torrent and for days afterward I would have poor performance; checking logs etc. would show several dozen connection attempts per second on the uTorrent port, even 2-3 days after I was done with the torrent, because the DHT tracker was still advertising my IP address. I'd have to do a release/renew to bring my performance back up. This was with a fairly standard Linksys router. Any situation where the other party might not get the message that I'm not there anymore is bound to lead to headaches on popular torrents.

    • Update your firmware and tweak your BT settings. I run the WRT54G and I get no problems whatsoever even with multiple torrents.
      • Re: (Score:3, Informative)

        by ScrewMaster ( 602015 ) *

        Update your firmware and tweak your BT settings. I run the WRT54G and I get no problems whatsoever even with multiple torrents.

        I run a WRT54G V4 with the Tomato firmware. Likewise I have no problems.

  • by sudog ( 101964 ) on Monday December 01, 2008 @08:03PM (#25953885) Homepage

    TCP guarantees in-order, mildly error-corrected, delivery of transmitted *DATA*. Not packets. It is a streaming protocol where the data being transmitted is of unknown and indeterminate length, or open-ended length. Since BT already doesn't care that files arrive in pieces at the beginning, middle or end (well, it does a little, but not enough to matter) then you can relax and basically eliminate one of the TCP guarantees right there: in-order delivery of data.

    You can eliminate sliding windows, ACK-based retransmits, fast-retransmission, and pretty much every other mechanism that TCP uses to guarantee in-order delivery of data. You can simplify it and the application itself beautifully, and provided it correctly throttles back based on detected packet loss (it MUST be exponential back-off,) you end up with a net win for those reasons. The application can set up its own optimised data structures that don't necessarily have (but likely will end up having anyway) the overhead of an OS-backed TCP stack.

    I mean who cares if you miss pieces, until the end when you can re-request them?

    Heck, there are already P2P apps that use a UDP-based transfer mechanism and they are WAY less impactful on systems than a typical BitTorrent stream is. They're way the hell slower, too, but that's not the point.

    I do think there is a point that bears repeating: BT *MUST* have exponential back-off. If it doesn't there is logic already built-in to core routers, ISPs, and firewalls that WILL drop the connection more severely when the endpoints don't respond properly to an initial packet-drop attempt at slowing them down.

    There are some really nice academic papers about it, and there are lots of algorithms and choices that companies have. They all assume TCP-like back-off on the endpoints, and they ALL uniformly punish greedy floods.
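
    The back-off itself is trivial to express; a sketch with invented constants (the real requirement is just that every detected loss multiplies the sending interval, so a flow that keeps losing packets slows down geometrically):

        MIN_INTERVAL = 0.01   # seconds between datagrams at full speed
        MAX_INTERVAL = 5.0    # hard floor on the send rate

        def next_interval(current, loss_detected):
            if loss_detected:
                return min(current * 2, MAX_INTERVAL)   # exponential back-off
            return max(current * 0.95, MIN_INTERVAL)    # gentle recovery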

    • Since BT already doesn't care that files arrive in pieces at the beginning, middle or end (well, it does a little, but not enough to matter) then you can relax and basically eliminate one of the TCP guarantees right there: in-order delivery of data.

      As long as the complete bittorrent data unit fits inside ONE packet, yes. If however the thing that describes the file it belongs to and the position in the file it goes in is relatively large, you're being quite wasteful. Assuming 1400 byte max packet data, 8 bytes of file position (i.e. files may be bigger than 4GiB), 4 bytes to give the index of a file, that's 1388 byte packets... 3094357 packets at least for 4GB, an extra 32MB.

      If instead we can do 32KiB or 64KiB runs (reliably, i.e. sliding window

      • by sudog ( 101964 )

        Also incorrect, unless I'm misunderstanding what you're saying.

        The BT chunks themselves can become the discrete, and known-about, units that must be transferred across the UDP channel--however they decide to do it--and the chunks of the chunks become encapsulated in packets. You still don't have to care about in-order delivery. Add in a mux'd, single, unique identifier, and it can even be reused. In fact, I challenge you to find a single instance, anywhere available to any BT user that has more than even a

      • by chgros ( 690878 )

        Are you saying that an overhead of about 1% is going to cause an internet meltdown? And that would be assuming that TCP has no overhead, which is clearly wrong.

      • by blueg3 ( 192743 )

        If I remember BitTorrent correctly, you need 64 bits (8 bytes) for offset within file. While you could use local machine port somehow to identify the torrent, you probably shouldn't. However, you're unlikely to need more than two bytes for that (more realistically, one). That's 10 bytes, compared to the 12-byte TCP overhead.

        BitTorrent frequently wastes more bandwidth in double-requested blocks and hashfails than that.

  • by bertok ( 226922 ) on Monday December 01, 2008 @08:47PM (#25954207)

    The problem with BitTorrent is not that it uses a large amount of bandwidth, but that it's using the wrong bandwidth. In every country other than the United States, international bandwidth is substantially more expensive than local bandwidth, and often in short supply. Local bandwidth is cheap, or even free. Even in the US, inter-ISP bandwidth has the same cost issues, but is plentiful.

    What I've never understood is what the excuse is for not implementing a peer selection algorithm that prioritises nearby users. Even a naive algorithm is going to be vastly better than a purely random selection. Simply selecting peers based on, say, the length of the common prefix of the IP address will often produce excellent results. Why in God's name should I transfer at 0.1 kbps from some guy in Peru, when a peer down the road could be uploading to me at 500 kbps?
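
    That naive heuristic is a few lines (illustrative sketch only; the addresses below are documentation addresses): rank candidate peers by how many leading bits their IP shares with yours and prefer the closest matches.

        import ipaddress

        def common_prefix_len(a, b):
            x = int(ipaddress.ip_address(a)) ^ int(ipaddress.ip_address(b))
            return 32 - x.bit_length()          # IPv4 assumed for simplicity

        def pick_peers(my_ip, candidates, k=5):
            return sorted(candidates,
                          key=lambda p: common_prefix_len(my_ip, p),
                          reverse=True)[:k]

        print(pick_peers("203.0.113.10",
                         ["203.0.113.77", "198.51.100.4", "192.0.2.99"]))

    A real client would want something smarter (ASN lookups, RTT probes), but even this tends to surface same-ISP peers first.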

    The truth is that the BitTorrent folks are not playing ball with ISPs. In reality, I think most major ISPs couldn't care less about copyright violation, or excessive bandwidth - it makes people pay for more expensive monthly plans - but they DO care about international bandwidth costs.

    If they just took 10 minutes to revamp the peer selection algorithm, they would reduce the impact on ISPs enormously, and then they wouldn't be vilified and throttled.

    • When I last used Azureus they were working on an algorithm that built a virtual map based on pings and was meant to prioritize low-ping seeds.

      The idea being that low ping usually means that the other computer is on the same or a close network.

    • by Ungrounded Lightning ( 62228 ) on Monday December 01, 2008 @09:13PM (#25954369) Journal

      The truth is that the BitTorrent folks are not playing ball with ISPs. In reality, I think most major ISP could care less about copyright violation, or excessive bandwidth ...

      Unfortunately, the major ISPs are components of conglomerates whose primary moneymaker is selling "content". As such they have a perverse incentive structure that can put "protecting against piracy" above the quality of the network's operation.

      The networks also provided asymmetric transport and vastly oversold their bandwidth, assuming a central server / many small clients "broadcast media" model. The rise of peer-to-peer usage bit them mightily and Bit Torrent was the spearhead of that rise. So rather than spending the added billions to expand their backbones to meet their advertised service's requirements they chose to throttle it.

      The ISPs were the ones to turn this into a war and fire the first shots. BitTorrent is just trying to engineer a solution on which to build peace - and is being vilified for the attempt.

      Having said that, your suggestion for improving things by smarter selection of peers is good. Unfortunately the Internet doesn't have any easy mechanism to indicate which peers would be better. Good solutions would likely have to be built on additional knowledge - which implies a database to hold and serve it - which implies a new central infrastructure and queries of it - which both breaks the decentralized model and provides additional points of attack if the ISPs continue to treat this as a war and attempt to suppress "unauthorized"/"enemy" torrents.

      • Re: (Score:3, Insightful)

        by ztransform ( 929641 )

        Having said that, your suggestion for improving things by smarter selection of peers is good. Unfortunately the Internet doesn't have any easy mechanism to indicate which peers would be better. Good solutions would likely have to be built on additional knowledge - which implies a database to hold and serve it - which implies a new central infrastructure and queries of it - which both breaks the decentralized model and provides additional points of attack if the ISPs continue to treat this as a war and attempt to suppress "unauthorized"/"enemy" torrents.

        I posted a while ago an idea I'd like to see for a Request For Comment (RFC), a new protocol ISPs could easily run in-house. See http://slashdot.org/comments.pl?sid=590741&cid=23883635 [slashdot.org]

        Actually even Cisco could add such a protocol to their routers, which would merely consult the internal routing protocol to decide which IPs were local; anything out a border gateway (routes advertised via BGP) could be regarded as "non-local". Anyone from Cisco here?

      • Unfortunately the Internet doesn't have any easy mechanism to indicate which peers would be better. Good solutions would likely have to be built on additional knowledge - which implies a database to hold and serve it - which implies a new central infrastructure and queries of it - which both breaks the decentralized model and provides additional points of attack if the ISPs continue to treat this as a war and attempt to suppress "unauthorized"/"enemy" torrents.

        Last time I used ktorrent, it put little flags beside each peer's IP address. It sounds like that "database" is already being used to some degree (although, ping times are probably a better metric).

      • Re: (Score:3, Interesting)

        by dch24 ( 904899 )

        Unfortunately the Internet doesn't have any easy mechanism to indicate which peers would be better.

        What about TTL [wikipedia.org] (a.k.a. Hop Limit)?

        I am not saying that it gives perfect network-locality. But it's way, way better than random.

      • Re: (Score:3, Insightful)

        by akozakie ( 633875 )

        Use round trip time? Try increasing TTL to find the closest nodes with the desired content? Why would you need a central database, when many end-to-end metrics can give you information about distance?

  • by actionbastard ( 1206160 ) on Monday December 01, 2008 @08:50PM (#25954227)
    was discussed this morning...which means all the funny comments were expended before the really funny article was posted.
  • by mtarnovan ( 1337149 ) on Monday December 01, 2008 @09:06PM (#25954315)
    If they want to improve network congestion why not start by implementing a better peer selection algorithm? IIRC currently peers are selected at random. A network-topology-aware peer selection algorithm might improve network congestion a great deal. Currently I see peers which are on another continent being 'preferred' (due to the randomness) over peers on my own ISP's network, with which I have a 50+ Mbit connection.
    • Re: (Score:2, Interesting)

      This would be cool, but could also be memory intensive. Routers have ram dedicated to a routing table, and if you're planning to implement this like I think you are, you're going to have the server run a bunch of traceroutes to determine how packets are traveling to their destination and sort them appropriately.

      That or you could make assumptions about an isp's range and provide a bias to any ip within the same /16 range - but I'm pretty sure this is nowhere near ideal either.
