BitTorrent Calls UDP Report "Utter Nonsense"

Ian Lamont writes "BitTorrent has responded to a report in the Register that suggested uTorrent's switch to UDP could cause an Internet meltdown. Marketing manager Simon Morris described the Register report as 'utter nonsense,' and said that the switch to uTP — a UDP-based implementation of the BitTorrent protocol — was intended to reduce network congestion. The original Register report was discussed enthusiastically on Slashdot this morning."
Comments Filter:
  • by AndresCP ( 979913 ) on Monday December 01, 2008 @08:19PM (#25953479)
    Yeah, that is a ridiculous description. No one uses TCP for real-time work, which is the only time a lost packet would be noticeable to any end user.
  • by boyko.at.netqos ( 1024767 ) on Monday December 01, 2008 @08:20PM (#25953493)

    I agree with you, Seanadams, but I just finished an interview with Simon Morris [networkper...edaily.com], and it's not that he thinks the way TCP handles packet loss is a particular problem; he just thinks he can do better.

    BitTorrent essentially already has its own methods to deal with dropped packets of information - it gets the information from elsewhere. Moving to UDP eliminates the three-way handshake, and it eliminates TCP's throttling back of the transmission window in response to a dropped packet.

    The only problem is that it also eliminates the Layer 4 [transport] traffic congestion safeguards, which is why BitTorrent is looking to establish new and better ones at layer 7 [application].
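
    A minimal sketch, in Python, of what such a layer-7 safeguard could look like: loss-triggered back-off implemented entirely by the application over a bare UDP socket. The peer address, block size and back-off constants are invented for illustration; this is not uTorrent's actual uTP logic.

        import socket
        import time

        PEER = ("192.0.2.1", 6881)      # hypothetical peer (TEST-NET address)
        BLOCK = b"\x00" * 1024          # one payload block

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(0.5)            # treat a missing reply as loss

        send_interval = 0.01            # seconds between datagrams

        for seq in range(100):
            # No connection setup: sendto() just fires the datagram.
            sock.sendto(seq.to_bytes(4, "big") + BLOCK, PEER)
            try:
                sock.recvfrom(16)       # wait for a tiny app-level ack
                # Reply arrived: cautiously probe for more bandwidth.
                send_interval = max(0.001, send_interval * 0.95)
            except socket.timeout:
                # Loss detected: back off hard.
                send_interval = min(1.0, send_interval * 2)
            time.sleep(send_interval)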

  • by seanadams.com ( 463190 ) * on Monday December 01, 2008 @08:24PM (#25953537) Homepage

    Detecting maximum throughput via packet dropping is really bad in high-latency links and in applications that need low latency.

    I disagree. Please be more specific. First, what exactly do you believe is the problem? Secondly, how else would you do it (on an IP network)?

    It is also apparently easy to implement TCP in such a way that overall transfer speed takes a nosedive when latency gets high, as evinced by Microsoft having done just that.

    So you're saying it's possible to implement it with a bug? I recently found a heinous bug in a Red Hat kernel which would result in _deadlocked_ TCP connections. It happens to the best of us.

  • by TheRaven64 ( 641858 ) on Monday December 01, 2008 @08:32PM (#25953613) Journal
    No it isn't. The size of the window can be adjusted to compensate for this. With a larger window, you get more throughput over high-latency links at the expense of having to wait longer for retransmits when they are needed. Modern TCP stacks dynamically adjust the window size based on the average RTT for any given link.
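
    The parent's point is just the bandwidth-delay product: the window a sender needs to keep a link busy is bandwidth times RTT. A quick Python check, using made-up example links:

        def window_needed(bandwidth_bps, rtt_s):
            """Bytes in flight required to keep the link full."""
            return bandwidth_bps / 8 * rtt_s

        for mbps, rtt_ms in [(10, 20), (10, 200), (100, 200)]:
            w = window_needed(mbps * 1e6, rtt_ms / 1e3)
            print(f"{mbps} Mbit/s at {rtt_ms} ms RTT -> {w / 1024:.0f} KiB window")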
  • Where where? (Score:5, Informative)

    by Emperor Zombie ( 1082033 ) on Monday December 01, 2008 @08:42PM (#25953703)
    "Hear", not "here".
  • The problem is... (Score:5, Informative)

    by wolfbyte18 ( 1421601 ) on Monday December 01, 2008 @08:49PM (#25953759)
    we have an opportunity to detect end-to-end congestion and implement a protocol that can detect problems very quickly and throttle back accordingly so that BitTorrent doesn't slow down the internet connection
    ----
    The major problem I see is that UDP doesn't play as nicely as TCP. Not by a longshot.

    As soon as TCP notices a single packet loss, the Jacobson algorithm kicks in: throughput is cut to maybe 50-60% of its previous rate, then the limit is raised slowly. I highly doubt that uTorrent's reworked version of UDP will play this nicely.

    As soon as TCP's throttling kicks in, space will be cleared in the tubes. uTorrent will be able to send more data through UDP without noticing any loss, so it'll quickly move to fill this space. Then, TCP gets hit with more data loss - and goes slower. It seems like a vicious cycle.
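
    A toy Python simulation of that cycle (numbers invented; the "greedy" flow is a stand-in for a hypothetical UDP sender with no congestion control, not uTP itself): the AIMD flow halves on every congestion event, while the greedy flow simply absorbs whatever capacity is freed.

        # Toy numbers: a 100-unit link shared by an AIMD (TCP-like) flow
        # and a greedy flow that never backs off.
        CAPACITY = 100.0
        tcp, greedy = 50.0, 50.0

        for step in range(10):
            if tcp + greedy > CAPACITY:   # congestion: loss for everyone
                tcp /= 2                  # TCP: multiplicative decrease
            else:
                tcp += 5                  # TCP: additive increase
            greedy = min(CAPACITY, greedy + 5)  # greedy flow just grows
            print(f"step {step}: tcp={tcp:5.1f}  greedy={greedy:5.1f}")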
  • by sudog ( 101964 ) on Monday December 01, 2008 @08:52PM (#25953795) Homepage

    Please mod parent up. This place is so damn full of armchair wannabe network experts who've clearly no understanding of how TCP congestion avoidance works that it's bordering on the physically painful.

  • by sudog ( 101964 ) on Monday December 01, 2008 @09:03PM (#25953885) Homepage

    TCP guarantees in-order, mildly error-corrected, delivery of transmitted *DATA*. Not packets. It is a streaming protocol where the data being transmitted is of unknown, indeterminate, or open-ended length. Since BT already doesn't care whether pieces of a file arrive from the beginning, middle or end (well, it does a little, but not enough to matter), you can relax and basically eliminate one of the TCP guarantees right there: in-order delivery of data.

    You can eliminate sliding windows, ACK-based retransmits, fast-retransmission, and pretty much every other mechanism that TCP uses to guarantee in-order delivery of data. You can simplify it and the application itself beautifully, and provided it correctly throttles back based on detected packet loss (it MUST be exponential back-off), you end up with a net win for those reasons. The application can set up its own optimised data structures that don't necessarily have (but likely will end up having anyway) the overhead of an OS-backed TCP stack.

    I mean who cares if you miss pieces, until the end when you can re-request them?

    Heck, there are already P2P apps that use a UDP-based transfer mechanism, and they are WAY less impactful on systems than a typical BitTorrent stream is. They're way the hell slower, too, but that's not the point.

    I do think there is a point that bears repeating: BT *MUST* have exponential back-off. If it doesn't, there is logic already built into core routers, ISPs, and firewalls that WILL drop the connection more severely when the endpoints don't respond properly to an initial packet-drop attempt at slowing them down.

    There are some really nice academic papers about it, and there are lots of algorithms and choices that companies have. They all assume TCP-like back-off on the endpoints, and they ALL uniformly punish greedy floods.
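
    For reference, exponential back-off of the kind the parent demands is only a few lines; the base delay, cap and jitter below are illustrative values, not from any BitTorrent spec:

        import random

        BASE = 0.1    # seconds
        CAP = 30.0

        def backoff(consecutive_losses):
            """Delay before the next attempt after N losses in a row."""
            delay = min(CAP, BASE * (2 ** consecutive_losses))
            return delay * random.uniform(0.5, 1.0)  # jitter avoids senders syncing up

        for losses in range(8):
            print(f"{losses} losses -> wait ~{backoff(losses):.2f}s")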

  • Re:Where where? (Score:5, Informative)

    by Zironic ( 1112127 ) on Monday December 01, 2008 @09:52PM (#25954239)

    I'm not sure, bouncing stones could be pretty painful.

  • by Ungrounded Lightning ( 62228 ) on Monday December 01, 2008 @10:13PM (#25954369) Journal

    The truth is that the BitTorrent folks are not playing ball with ISPs. In reality, I think most major ISPs couldn't care less about copyright violation, or excessive bandwidth ...

    Unfortunately, the major ISPs are components of conglomerates whose primary moneymaker is selling "content". As such they have a perverse incentive structure that can put "protecting against piracy" above the quality of the network's operation.

    The networks also provided asymmetric transport and vastly oversold their bandwidth, assuming a central server / many small clients "broadcast media" model. The rise of peer-to-peer usage bit them mightily, and BitTorrent was the spearhead of that rise. So rather than spending the added billions to expand their backbones to meet their advertised service's requirements, they chose to throttle it.

    The ISPs were the ones to turn this into a war and fire the first shots. BitTorrent is just trying to engineer a solution on which to build peace - and is being vilified for the attempt.

    Having said that, your suggestion for improving things by smarter selection of peers is good. Unfortunately the Internet doesn't have any easy mechanism to indicate which peers would be better. Good solutions would likely have to be built on additional knowledge - which implies a database to hold and serve it - which implies a new central infrastructure and queries of it - which both breaks the decentralized model and provides additional points of attack if the ISPs continue to treat this as a war and attempt to suppress "unauthorized"/"enemy" torrents.

  • Re:In other news... (Score:3, Informative)

    by 680x0 ( 467210 ) <vicky@noSPaM.steeds.com> on Monday December 01, 2008 @10:15PM (#25954383) Journal

    Sorry, SMTP (e-mail) uses TCP (almost always port 25). Perhaps you mean SNMP (network management)? A typical home router doesn't normally use SNMP, but more expensive ones (Cisco, etc.) do.

  • Re:The problem is... (Score:5, Informative)

    by seanadams.com ( 463190 ) * on Monday December 01, 2008 @10:16PM (#25954391) Homepage

    The major problem I see is that UDP doesn't play as nicely as TCP. Not by a longshot.

    That statement makes no sense whatsoever. It's the same as saying "IP doesn't play as nicely as TCP". They are not comparable.

    You have to be talking about whatever transfer protocol might be implemented ON TOP OF UDP, because UDP is merely a datagram protocol. It's nothing more than an IP packet plus port numbers.

    In other words, I could write a simple program which blindly fires UDP packets at the rate of 1GB/s. This would kill my internet connection. I could also write a program which transmits one UDP packet per hour. This would have no effect. See what I mean? It's entirely a function of the application.
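
    To make that concrete, here is a sketch of the "function of the application" part: the same UDP socket, paced by a simple token bucket so it never exceeds a chosen rate. The destination and the 1 Mbit/s cap are arbitrary examples:

        import socket
        import time

        DEST = ("192.0.2.1", 9999)   # hypothetical receiver (TEST-NET)
        RATE = 125_000               # cap: 1 Mbit/s, in bytes per second
        PACKET = b"x" * 1200

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        tokens, last = 0.0, time.monotonic()

        for _ in range(10_000):
            now = time.monotonic()
            tokens = min(RATE, tokens + (now - last) * RATE)  # refill bucket
            last = now
            if tokens >= len(PACKET):
                sock.sendto(PACKET, DEST)       # spend tokens on one datagram
                tokens -= len(PACKET)
            else:
                time.sleep(len(PACKET) / RATE)  # wait for the bucket to refill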

  • by Todd Knarr ( 15451 ) on Monday December 01, 2008 @10:24PM (#25954459) Homepage

    He was simplifying, and you can see it by looking at what happens before packet loss starts. When traffic loads increase, routers don't immediately start dropping packets. First they start queueing packets up for handling, trying their best to avoid packet loss. Only when their queues fill up do they start discarding packets. TCP waits until that's happened before it starts backing off. A better way is what uTP sounds like it's designed to do: watch for the increasing latency that signals routers are starting to queue packets up, and start backing off before the routers have to start dropping packets. Ideally you avoid ever overflowing the routers' queues and causing packet loss in the first place.

    Historical note: TCP uses packet loss because the early routers didn't queue; as soon as they hit 100% of capacity they started dropping packets. It's worked acceptably up until now because another mechanism, the adjustable transmit window (sized based on round-trip time), which was designed to avoid flooding the receive queue on the destination host, also compensates for queues in the intermediate routers. But with more knowledge of how modern infrastructure has evolved, it's possible to do better - especially when you're designing for a single purpose (uTP won't have to handle small-packet, latency-sensitive protocols like telnet).
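
    A rough Python sketch of that delay-based approach (in the spirit of what LEDBAT-style protocols do; the target and gain constants are invented, not uTP's): treat the lowest delay ever seen as the empty-queue baseline, and back off as measured delay climbs above it, before any loss occurs.

        TARGET = 0.025      # tolerate ~25 ms of queueing delay before backing off

        base_delay = float("inf")
        rate = 100.0        # arbitrary units

        def on_delay_sample(delay):
            """Adjust the send rate from one delay measurement."""
            global base_delay, rate
            base_delay = min(base_delay, delay)   # baseline = best ever seen
            queuing = delay - base_delay          # how full the queues are
            if queuing > TARGET:
                rate *= 0.85                      # queues building: back off
            else:
                rate += 1.0                       # headroom left: speed up
            return rate

        for d in [0.020, 0.021, 0.030, 0.055, 0.080, 0.048, 0.022]:
            print(f"delay {d*1000:4.0f} ms -> rate {on_delay_sample(d):6.1f}")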

  • by ScrewMaster ( 602015 ) * on Monday December 01, 2008 @10:49PM (#25954691)

    Update your firmware and tweak your BT settings. I run the WRT54G and I get no problems whatsoever even with multiple torrents.

    I run a WRT54G V4 with the Tomato firmware. Likewise I have no problems.

  • by complete loony ( 663508 ) <Jeremy.Lakeman@g ... m minus caffeine> on Monday December 01, 2008 @11:08PM (#25954843)
    Another advantage UDP has over TCP: Windows XP's limit on half-open connections doesn't apply to it. So having a BitTorrent client running in the background should not have any impact (other than the bandwidth it uses, obviously) on every other application that is trying to open a TCP connection.
  • by Hal_Porter ( 817932 ) on Tuesday December 02, 2008 @03:05AM (#25956461)

    http://en.wikipedia.org/wiki/TCP_window_scale_option#Windows [wikipedia.org]

    TCP Window Scaling is implemented in Windows 2000, XP, Server 2003 and Vista operating systems. Vista has enabled it by default, while the other operating systems implement it as an option. Because many routers and firewalls do not properly implement TCP Window Scaling, it can cause a user's Internet connection to malfunction intermittently for a few minutes, then appear to start working again for no reason. If "diagnose problem" is selected in Vista, an error message will be displayed "cannot communicate with primary DNS server."

    I.e. it's been around since Win2k but not enabled, for the excellent reason that routers and firewalls don't support it. In the absence of window scaling, the TCP window size is limited to 64 KB, which will be a bottleneck on fast connections with a high RTT. That sounds like a good description of a BitTorrent connection to a peer on the other side of the world.

    Mind you, BitTorrent is notorious for gobbling all the available bandwidth unless you limit it; I'm not sure that its learning to bypass this bottleneck is a good thing for people with selfish neighbours.
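
    The 64 KB ceiling is easy to check: without window scaling, throughput can never exceed window/RTT, whatever the link speed. The example RTTs below are illustrative:

        WINDOW = 64 * 1024   # bytes, the unscaled TCP maximum

        for rtt_ms in [10, 100, 300]:
            bps = WINDOW / (rtt_ms / 1000) * 8
            print(f"RTT {rtt_ms:3d} ms -> at most {bps / 1e6:5.1f} Mbit/s")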

  • by rxmd ( 205533 ) on Tuesday December 02, 2008 @06:00AM (#25957225) Homepage

    On top of this TCP hasn't seen a major update since the 80's.

    Uhh, TCP Vegas [wikipedia.org], TCP New Reno [faqs.org], BIC [wikipedia.org] and CUBIC [wikipedia.org]? All of which have been implemented in the Linux kernel? TCP has only been standing still since the 80's if you're using an OS from the 80's... or a Microsoft OS.

    Note that the only one of those which made it into an RFC is New Reno, aka RFC 2582 [faqs.org], which has been implemented in the Windows TCP stack since Vista [microsoft.com], along with a number of other recent RFCs.

    The others are basically different suggestions for implementing TCP congestion control. Microsoft has its own variant of those (Compound TCP [wikipedia.org], which is quite similar to TCP Vegas and has also been ported to Linux [caltech.edu]).

    Your 1980s comment is not quite up to date, of course. Microsoft did stick with their BSD-based implementation of the TCP stack for quite a long time (too long, in fact), but with Vista it's been undergoing quite a bit of change. I know it's unpopular to say something in favour of MS and/or Vista here, and I'm far from being an MS apologist, but it's worth actually reading their Cable Guy columns [microsoft.com] every now and then to stay up to date with what the Windows network stack actually does and doesn't do - especially if you are a sysadmin or interested in developments in the TCP arena.
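
    On Linux, those congestion control variants are in fact selectable per socket. A small Python sketch (requires Linux and Python 3.6+ for socket.TCP_CONGESTION; CUBIC has been the kernel default since 2.6.19):

        import socket

        # Ask the kernel to use CUBIC congestion control on this socket,
        # then read the setting back to confirm.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")
        name = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
        print("congestion control:", name.split(b"\x00", 1)[0].decode())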

  • by mhesd ( 698429 ) on Tuesday December 02, 2008 @07:42AM (#25957701)

    Microsoft has patents on Compound TCP. The above patch is only for use in research.

    from

    http://netlab.caltech.edu/lachlan/ctcp/ [caltech.edu]

  • by David Gerard ( 12369 ) <slashdot@@@davidgerard...co...uk> on Tuesday December 02, 2008 @07:50AM (#25957749) Homepage
    It's not just a piracy issue: BBC iPlayer, for example, works peer-to-peer, is entirely legal, and is clogging the tubes in the UK.
  • by boyko.at.netqos ( 1024767 ) on Tuesday December 02, 2008 @08:20AM (#25957917)

    True enough, except that most implementations of TCP are just plain-old TCP. If -everybody- switched to CUBIC, then we could all use CUBIC. But right now, plain old TCP is the standard.

    In corporate networks, you can use something like CUBIC or TCP New Reno, because you control both ends of the connection. On the Internet, it's a bit harder.

    But in a BT application, most communication is going to take place between two computers running the same application - an application that BT, Inc. does control: uTorrent. By controlling both ends of the connection at Layer 7, you can change the implementation all at once, all over the Internet.

  • Re:Japan (Score:4, Informative)

    by muffen ( 321442 ) on Tuesday December 02, 2008 @08:50AM (#25958077)

    I just got back from living in Japan not too long ago. Over there, we got 100 Mbps up AND down for about ~$40/month. Near the end of my stay, I got a letter from my ISP stating that they're going to start implementing a bandwidth cap: no user can upload more than 500 GB/month, but downloading is still unlimited. This year, I read on Slashdot that AU/KDDI is rolling out a 1 Gbps line for a similarly cheap price. If you want my sympathy for ISPs in America, get back to me when I get even one tenth of the service I got in Japan. I'd like to extend a big middle finger of gratitude to all American ISPs. No one is spouting gloom and doom over in Asia; meanwhile, they're shooting ahead into the future of the internet. Asia's been rolling out fibre optics, at great cost, but with spectacular results. Conversely, we sit on our asses and wonder how we can charge more money to more users to push them through the same amount of pipe without upgrading infrastructure.

    I have 100 Mbit full-duplex for $30 a month, and this is standard pricing where I live (in Europe). There is a gigabit fiber link going into our building (an apartment building with 24 apartments), and from there it's Cat 6 going to each apartment.

    This was the cost of getting it:
    About $1500 for getting the fiber into the building (a one-time cost including the router). In fact, I had a choice of two different providers, and there are four separate fiber cables in the road outside my apartment block. They were put there when the roads were dug up to change some pipes. After the water pipes were changed, any company that wanted to lay down fiber along the road was allowed to do so; four companies did, of which two were selling fiber connections to consumers. The companies paid nothing for digging up the road, and the work took two days longer due to the fibers being put down. This is standard for any roadworks and has been for quite some time, which is the main reason fiber is so widespread in the cities here.

    Some equipment was installed in the basement; from there, a cable was pulled to each apartment. The cost for this was $200 per apartment, so the total cost per apartment was about $275.

    Then, the company that we got the fiber from sells 100/100 subscriptions for just under $30/month. The only catch with this company, as opposed to the other one, was that while the initial cost was lower, in return you have to get your internet connection via them (a three-year contract). After the three years have passed, you can "get out" of the contract but keep the fiber for an additional $1000. If this is done, you can then freely choose your ISP, and the cost currently, for the cheapest provider, is $19 for a 100/100 connection.

    For this apartment block we were paying $150 a year per apartment as a base fee to our cable provider. Since TV channels can be had via the fiber at a lower cost than what the cable company charges for the same channels, the $275 was weighed against the $150 a year we no longer have to pay (this is how I got even the "old" people to agree to the fiber installation). Within two years, even if you don't have an internet subscription, it's a good deal. Finally, you can also use this for VoIP: $5/month for a subscription where you get a phone number and free calls to any phone in the entire country (mobiles excluded).

    I can add that the other provider wanted $5000 for getting the fiber into the building, so the difference was $3500. Over three years it was about the same (it may even have worked out cheaper depending on the subscriptions you choose), but I felt it was easier to convince people in the apartment block to get the fiber into the building if I could keep the initial cost to a minimum.

    So I simply cannot understand why some countries are having such a hard time rolling out fiber connections. There are roadworks all the time, since roadworks are handled by some central government function (gen
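
    Working through the poster's own figures (all numbers theirs), the per-apartment cost and payback period come out roughly as claimed:

        apartments = 24
        building_fiber = 1500        # one-time cost, router included
        per_apartment_cabling = 200  # basement gear + cable to each flat
        cable_tv_fee = 150           # per apartment per year, now avoided

        one_time = building_fiber / apartments + per_apartment_cabling
        print(f"one-time cost per apartment: ${one_time:.2f}")   # ~$262; poster rounds to ~$275
        print(f"payback vs the old cable fee: {one_time / cable_tv_fee:.2f} years")  # under two years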

  • by Andy Dodd ( 701 ) <atd7@cornell . e du> on Tuesday December 02, 2008 @11:28AM (#25959897) Homepage

    90% of the benefits of an improved TCP congestion control algorithm come from improvements on the sender side - very few improvements are needed on the receive side, and most OSes already have those few (window scaling, etc.; SACK is one of the few extensions that benefits a receiver and isn't widespread).

    Thus, for example, a connection between a Linux webserver that has CUBIC as its congestion control algorithm and a Windows XP client with NewReno gets 90%+ of the benefits of CUBIC.

    Also, the historical workaround for congestion control algorithm flaws has been to use multiple parallel TCP streams - guess what BitTorrent has always done? Multiple TCP connections, in this case to multiple servers (multiple parallel connections to a single server are beneficial for servers with "meh" TCP implementations).

    Moving BitTorrent to UDP for congestion control purposes can in principle be done without negative effects, but it is extremely risky. Moving it for the purpose of adding transport-protocol header authentication (i.e. "is this a legit RST flag from the other endpoint?") is a much better reason, but it still carries the risk of a broken congestion control mechanism that interferes with other users - not necessarily to the point of "internet meltdown", but still "bad".

  • by GreyFish ( 156639 ) on Tuesday December 02, 2008 @02:32PM (#25963037) Homepage

    There's an extension to TCP called ECN (Explicit Congestion Notification) that marks packets as experiencing congestion rather than dropping them; you can see before-and-after graphs here:

    http://people.freebsd.org/~rpaulo/throughput-withecn.png [freebsd.org]

    http://people.freebsd.org/~rpaulo/throughput-withoutecn.png [freebsd.org]
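
    ECN lives in the two low-order bits of the IP TOS byte. A tiny Python illustration that marks a UDP socket's traffic as ECN-Capable Transport (ECT(0), binary 10); for TCP, the kernel negotiates ECN itself (net.ipv4.tcp_ecn on Linux):

        import socket

        # Set the ECN field to ECT(0). Routers that support ECN can then
        # flip these bits to CE (binary 11) instead of dropping the packet.
        ECT0 = 0b10
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT0)
        print("TOS byte:", sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))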
