BitTorrent Calls UDP Report "Utter Nonsense"

Ian Lamont writes "BitTorrent has responded to a report in the Register that suggested uTorrent's switch to UDP could cause an Internet meltdown. Marketing manager Simon Morris described the Register report as 'utter nonsense,' and said that the switch to uTP — a UDP-based implementation of the BitTorrent protocol — was intended to reduce network congestion. The original Register report was discussed enthusiastically on Slashdot this morning."

  • Best of intentions (Score:5, Interesting)

    by seanadams.com ( 463190 ) * on Monday December 01, 2008 @08:07PM (#25953385) Homepage

    BT may have the best of intentions here in developing this experimental protocol, but this quote leads me to believe that their understanding of the problem is terribly naive:

    It so happens that the congestion control mechanism inside TCP is quite crude and problematic. It only detects congestion on the internet once "packet loss" has occurred - i.e. once the user has lost data and (probably) noticed there is a problem.

    Packet loss is a normal and deliberate mechanism by which TCP detects the maximum throughput of a path. Periodically it increases the number of packets in flight until the limit is reached, then it backs off. You have to test again from time to time in order to increase throughput if more capacity becomes available. This in no way incurs "loss of data" or a noticeable problem. Packets lost due to congestion window growth are handled by the fast retransmit algorithm, which means that there is no timeout or drop in throughput (that would be pretty stupid if the whole purpose of growing the congestion window is to _maximize_ throughput). A toy model of this increase/back-off cycle is sketched after this comment.

    I wonder if Simon Morris was merely oversimplifying for the benefit of the layman, but I still find that statement disturbing. As I suggested in the other thread, it really sounds like they're going to reinvent TCP (poorly). That's not to say you couldn't design a better protocol specifically for point-to-multipoint transfer, but I question whether they're on the right track here.
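
    A minimal sketch of the additive-increase/multiplicative-decrease cycle described above. The function and constants are illustrative, not taken from any kernel or from BitTorrent's code; real TCP stacks do this per connection with fast retransmit, SACK, and so on.

        # Toy model of TCP-style AIMD congestion control (illustrative only).
        def aimd_window(events, initial_cwnd=10, mss=1):
            """Walk per-RTT outcomes ('ack' or 'loss') and return the
            congestion window after each round trip."""
            cwnd = initial_cwnd
            history = []
            for outcome in events:
                if outcome == "ack":
                    cwnd += mss                   # additive increase: probe for more capacity
                else:
                    cwnd = max(mss, cwnd // 2)    # loss signals congestion: back off multiplicatively
                history.append(cwnd)
            return history

        # The window creeps up until a drop, halves, then creeps up again.
        print(aimd_window(["ack"] * 5 + ["loss"] + ["ack"] * 5))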

  • by mini me ( 132455 ) on Monday December 01, 2008 @08:37PM (#25953659)

    This move should bring into focus the last mile problem that is the real source of most of the internet connection speed debate.

    When someone says "the last mile problem," I take it to mean the last mile is short on bandwidth. The problem here is the opposite: the last mile has (and is using) more bandwidth than the upstream connections can handle.

  • by crossmr ( 957846 ) on Monday December 01, 2008 @08:55PM (#25953827) Journal

    Everyone thought DHT was great too, but I found that every time I used it, it caused massive headaches. I would jump on a popular torrent and have poor performance for days afterward; the logs showed several dozen connection attempts per second on the uTorrent port, even 2-3 days after I was done with the torrent, because the DHT was still advertising my IP address. I'd have to do a release/renew to bring my performance back up. This was with a fairly standard Linksys router. Any situation where the other party might not get the message that I'm no longer there is bound to lead to headaches on popular torrents.

  • by Xelios ( 822510 ) on Monday December 01, 2008 @08:59PM (#25953849)
    On top of this, TCP hasn't seen a major update since the '80s. Most of it was designed for a very different internet than the one we have today. If you can sidestep TCP's shortcomings by doing the congestion control more efficiently at the application level, then why not give it a shot?
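
    One way to read "congestion control at the application level" is the delay-based scheme uTP is described as using: back the window off as soon as one-way queueing delay rises, instead of waiting for loss. This is only a rough sketch of that idea; the constants, names, and update rule are illustrative and not taken from the actual uTP code.

        # Rough sketch of delay-based congestion control at the application
        # layer: shrink the send window as queueing delay grows, rather than
        # waiting for packet loss. All names and constants are made up for
        # illustration.
        TARGET_DELAY = 0.100   # aim to add at most ~100 ms of queueing delay
        GAIN = 1.0

        def adjust_window(cwnd, measured_delay, base_delay, bytes_acked, mss=1452):
            queueing_delay = measured_delay - base_delay
            off_target = (TARGET_DELAY - queueing_delay) / TARGET_DELAY
            # Grow while delay is below target, shrink once the queue builds up.
            cwnd += GAIN * off_target * bytes_acked * mss / max(cwnd, mss)
            return max(cwnd, mss)

        cwnd = 10 * 1452
        for delay in (0.02, 0.04, 0.08, 0.15, 0.25):   # rising one-way delay
            cwnd = adjust_window(cwnd, delay, base_delay=0.01, bytes_acked=1452)
            print(int(cwnd))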
  • by bertok ( 226922 ) on Monday December 01, 2008 @09:47PM (#25954207)

    The problem with BitTorrent is not that it uses a large amount of bandwidth, but that it's using the wrong bandwidth. In every country other than the United States, international bandwidth is substantially more expensive than local bandwidth, and often in short supply. Local bandwidth is cheap, or even free. Even in the US, inter-ISP bandwidth has the same cost issues, but is plentiful.

    What I've never understood is why there's no peer selection algorithm that prioritises nearby users. Even a naive algorithm is going to be vastly better than purely random selection. Simply selecting peers based on, say, the length of the common prefix of the IP address will often produce excellent results (a naive version is sketched after this comment). Why in God's name should I transfer at 0.1 kbps from some guy in Peru, when a peer down the road could be uploading to me at 500 kbps?

    The truth is that the BitTorrent folks are not playing ball with ISPs. In reality, I think most major ISPs couldn't care less about copyright violation or excessive bandwidth - it makes people pay for more expensive monthly plans - but they DO care about international bandwidth costs.

    If they just took 10 minutes to revamp the peer selection algorithm, they would reduce the impact on ISPs enormously, and then they wouldn't be vilified and throttled.
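
    A naive version of that locality bias, for illustration only: rank candidate peers by how many leading bits their IPv4 address shares with ours. The addresses below are documentation examples, and a real client would still have to weigh this against upload rate, choke state, and so on.

        # Prefer peers whose addresses share a long common prefix with ours.
        import ipaddress

        def common_prefix_bits(a, b):
            """Number of leading bits two IPv4 addresses have in common."""
            x = int(ipaddress.IPv4Address(a)) ^ int(ipaddress.IPv4Address(b))
            return 32 - x.bit_length()

        def rank_peers(my_ip, peers):
            return sorted(peers, key=lambda p: common_prefix_bits(my_ip, p), reverse=True)

        # The peer in the same /24 sorts first; the far-away ones sort last.
        print(rank_peers("203.0.113.10", ["198.51.100.7", "203.0.113.99", "192.0.2.1"]))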

  • by mtarnovan ( 1337149 ) on Monday December 01, 2008 @10:06PM (#25954315)
    If they want to improve network congestion, why not start by implementing a better peer selection algorithm? IIRC, peers are currently selected at random. A network-topology-aware peer selection algorithm might improve congestion a great deal. Currently I see peers on another continent being 'preferred' (due to the randomness) over peers on my own ISP's network, with which I have a 50+ Mbit connection.
  • Re:The problem is... (Score:1, Interesting)

    by Creepy Crawler ( 680178 ) on Monday December 01, 2008 @10:29PM (#25954501)

    You're talking out your ass here.

    TCP and UDP are both transport protocols that run on top of IP.

    TCP provides sessions, retransmission with exponential backoff, congestion control, and many other features.
    UDP does very little.
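
    To illustrate just how little UDP does: a datagram send in Python is literally one call, with no handshake, no acknowledgement, no retransmission, and no congestion control, so anything uTP wants (ordering, acks, backoff) has to be rebuilt in the application. The address and port below are only placeholders.

        # UDP is "fire a datagram and hope": no session, no retry, no backoff.
        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(b"hello", ("127.0.0.1", 6881))   # may silently vanish; no error, no retransmit
        sock.close()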

  • by CTachyon ( 412849 ) <`chronos' `at' `chronos-tachyon.net'> on Tuesday December 02, 2008 @12:24AM (#25955453) Homepage

    On top of this, TCP hasn't seen a major update since the '80s. Most of it was designed for a very different internet than the one we have today. If you can sidestep TCP's shortcomings by doing the congestion control more efficiently at the application level, then why not give it a shot?

    Uhh, TCP Vegas [wikipedia.org], TCP New Reno [faqs.org], BIC [wikipedia.org] and CUBIC [wikipedia.org]? All of which have been implemented in the Linux kernel?

    TCP has only been standing still since the '80s if you're using an OS from the '80s... or a Microsoft OS.
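
    On a Linux box you can check which of these algorithms the running kernel actually offers, and which one is the default, straight from procfs. A small sketch; the paths are the standard sysctl files, and the output naturally varies per kernel.

        # List the congestion control algorithms the kernel currently offers
        # and the one used for new connections.
        with open("/proc/sys/net/ipv4/tcp_available_congestion_control") as f:
            print("available:", f.read().strip())   # e.g. "cubic reno"
        with open("/proc/sys/net/ipv4/tcp_congestion_control") as f:
            print("in use:", f.read().strip())      # e.g. "cubic"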

  • by AngelofDeath-02 ( 550129 ) on Tuesday December 02, 2008 @12:33AM (#25955513)

    This would be cool, but could also be memory-intensive. Routers have RAM dedicated to a routing table, and if you're planning to implement this the way I think you are, you're going to have the server run a bunch of traceroutes to determine how packets travel to their destinations and sort them appropriately.

    That, or you could make assumptions about an ISP's range and bias toward any IP within the same /16 - but I'm pretty sure this is nowhere near ideal either.

  • by Anonymous Coward on Tuesday December 02, 2008 @12:33AM (#25955523)

    I just have to mention this real quick...a bit off topic but I never get to point this out...

    Albert Einstein = High School Graduate, College Professor. Kinda speaks for himself. NOT College Student
    Thomas Edison = High School Dropout. Inventor of so many overly complicated things it makes...well...pretty much everyone's head spin.
    Nikola Tesla = Last formal schooling was in the Croatian equivalent of 5th grade. Inventor of alternating current & the Tesla coil, amongst many other things.
    Bill Gates = Dropped out after 2 semesters at a local technical college. If you don't know who he is, leave Slashdot immediately.

    George W. Bush = The single dumbest person to ever receive a vote. Harvard Graduate with Masters Degree.

    A college degree means that someone thought it was a good idea to pay anywhere from $80,000 to $500,000+ and waste 4 otherwise useful years of their life to get a piece of paper. NOTHING can be learned in ANY college or university that cannot be learned by a combination of reading books and talking to yourself in a mirror (since every single major requires a damn public speaking class.) It does not mean you are smart or intelligent and it was not a good choice or a good opportunity. It means you are perfectly happy living within the norms of a society which says if you haven't spent 4 years and lots of money to let someone else stand up and yell knowledge at you, you must be dumb, because there is no other fathomable way that you could acquire an equal amount of knowledge any faster or cheaper. Wake up and smell the damn coffee!

    Go to college if you want a valid excuse to spend 4 additional years of your life without a job and with constant hangovers. Go to college if you're too damn immature to grow up and join the real world just yet. Still think Paris Hilton or Brad Pitt has ever made a single shred of important news? Go to college. Still think Bush was a good president? Go to college. For everyone else, for those with common sense and the ability to look in the mirror and not see someone who looks just like the idiot closest to you, drop out, boycott, or even burn it to the ground.

    You may now return to your previous job of flaming the other commenters during the lecture.

  • by Anonymous Coward on Tuesday December 02, 2008 @01:22AM (#25955853)

    Don't forget as well that the main guy behind Hayes modems, Dale Heatherington, had just a two-year degree from a tech college when he co-founded Hayes and helped create the net in his own way (and Dale's also a really nice guy).

    He's still someone who invents amazing things, and I always love to give his creations a look over; despite my having a master's in robotics, his robotic creations always make mine look shabby.

    Back on topic, though: education isn't everything. I'm not a networking person; Bennett supposedly is. Why is it that I could see what was wrong in his statements? What's more, as usual, I took the chance to both try the client out and talk to the people behind it - mainly the tireless Firon, who talked me through the protocol and its aims, as well as his thoughts on the article as community manager (and so the one people ask everything of, from 'where do I get invites' to 'why is my uTorrent sending out improperly bencoded packets'). It can be read here [torrentfreak.com]

  • by dch24 ( 904899 ) on Tuesday December 02, 2008 @02:14AM (#25956145) Journal

    Unfortunately the Internet doesn't have any easy mechanism to indicate which peers would be better.

    What about TTL [wikipedia.org] (a.k.a. Hop Limit)?

    I am not saying that it gives perfect network-locality. But it's way, way better than random.
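
    For what it's worth, here is roughly how a received TTL could be turned into a hop estimate: most stacks start at 64, 128, or 255, so the nearest common initial value at or above the observed TTL gives a rough distance. The initial-TTL list and function are illustrative assumptions; NAT, tunnels, and unusual defaults all add noise.

        # Crude hop estimate from the TTL observed on packets a peer sends us.
        COMMON_INITIAL_TTLS = (64, 128, 255)

        def estimated_hops(received_ttl):
            initial = min(t for t in COMMON_INITIAL_TTLS if t >= received_ttl)
            return initial - received_ttl

        print(estimated_hops(52))    # ~12 hops away (sender likely started at 64)
        print(estimated_hops(119))   # ~9 hops away (sender likely started at 128)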

  • by Anonymous Coward on Tuesday December 02, 2008 @03:41AM (#25956653)

    A lower ping does not mean anything. These days, the last mile usually has the highest latency.

    Good torrent seeders sit on dedicated/colocated boxes with way better latency than any DSL line can have.

    Point is, a dedicated server anywhere usually gives better pings than a DSL user at the same ISP.

    If you want to measure locality properly, you have to query autonomous system numbers and compare them: same AS number usually means same ISP.
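
    That comparison can be automated. A sketch assuming Team Cymru's public IP-to-ASN whois service at whois.cymru.com; the service, its reply format, and the parsing here are assumptions, so treat this as illustrative and check the service's usage policy before querying it in bulk.

        # Look up the AS that originates an IP address via a plain whois query.
        import socket

        def origin_asn(ip):
            with socket.create_connection(("whois.cymru.com", 43), timeout=10) as s:
                s.sendall((ip + "\r\n").encode())
                reply = b""
                while True:
                    chunk = s.recv(4096)
                    if not chunk:
                        break
                    reply += chunk
            # Typical reply: a header line, then "AS | IP | AS Name".
            last = [line for line in reply.decode().splitlines() if line.strip()][-1]
            return last.split("|")[0].strip()

        # Peers whose addresses map to the same AS are very likely on the same ISP.
        print(origin_asn("8.8.8.8") == origin_asn("8.8.4.4"))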

  • by Anonymous Coward on Tuesday December 02, 2008 @06:46AM (#25957413)

    Somewhat related to that: having a lot of open TCP connections still causes some NAT routers to crash; using UDP gets rid of that problem, too. Sure, UDP means annoying port forwarding, but you basically need that for an efficient BitTorrent download anyway.

  • by Anonymous Coward on Tuesday December 02, 2008 @08:44AM (#25958043)

    Microsoft has patents on Compound TCP.

    Like most Microsoft software patents, these are most probably unenforceable under In re Bilski.

  • by diegocgteleline.es ( 653730 ) on Tuesday December 02, 2008 @11:24AM (#25959829)

    It's also worth mentioning that Linux apps can choose between those different congestion controls with get/setsockopt(). So applications ARE allowed (at least in Linux) to choose a more efficient congestion control according to their needs, and if their needs aren't covered, they can submit a new congestion control implementation.
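
    A minimal sketch of that per-socket selection on Linux; Python exposes the option as socket.TCP_CONGESTION, and the chosen algorithm must already be loaded and permitted by the kernel, otherwise the call fails.

        # Pick a specific congestion control algorithm for one TCP socket.
        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"vegas")
        print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))  # b'vegas' plus padding
        sock.close()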

  • by Andy Dodd ( 701 ) <atd7NO@SPAMcornell.edu> on Tuesday December 02, 2008 @11:33AM (#25959971) Homepage

    You forgot one:

    Transport protocol header authentication, preventing Man-In-The-Middle connection denial-of-service attacks, aka Sandvining, aka Bogus RSTs

  • by Anonymous Coward on Tuesday December 02, 2008 @10:50PM (#25970369)

    I run Illinois on all my Linux servers, waaa hooo! Best of all worlds: it moves to a large window fast but is fair:

    http://www.princeton.edu/~shaoliu/tcpillinois/index.html

    Spread the word...
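
    For anyone who wants to follow suit, a sketch of switching the system-wide default as the parent describes. It assumes root and that the tcp_illinois module is loaded (e.g. via "modprobe tcp_illinois"); the algorithm registers under the name "illinois".

        # Make Illinois the default congestion control for new connections.
        with open("/proc/sys/net/ipv4/tcp_congestion_control", "w") as f:
            f.write("illinois")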
