The Internet Media Networking IT Your Rights Online

uTorrent To Build In Transfer-Throttling Ability

vintagepc writes "TorrentFreak reports that a redesign of the popular BitTorrent client uTorrent allows clients to detect network congestion and automatically adjust the transfer rates, eliminating the interference with other Internet-enabled applications' traffic. In theory, the protocol senses congestion based on the time it takes for a packet to reach its destination, and by intelligent adjustments, should reduce network traffic without causing a major impact on download speeds and times. As said by Simon Morris (from TFA), 'The throttling that matters most is actually not so much the download but rather the upload – as bandwidth is normally much lower UP than DOWN, the up-link will almost always get congested before the down-link does.' Furthermore, the revision is designed to eliminate the need for ISPs to deal with problems caused by excessive BitTorrent traffic on their networks, thereby saving them money and support costs. Apparently, the v2.0b client using this protocol is already being used widely, and no major problems have been reported."
This discussion has been archived. No new comments can be posted.


  • by pha7boy ( 1242512 ) on Sunday November 01, 2009 @05:58PM (#29944712)
    I'm sure ISPs such as Comcast will find another reason to suggest they need to interfere with network management. Just give them a little bit of time to put their heads together with the guys at the RIAA.
    • by nate11000 ( 1112021 ) on Sunday November 01, 2009 @06:10PM (#29944814)
      This probably isn't so much for avoiding the eye of your ISP as it is for personal network management. I know I don't want bittorrent interfering with my internet usage, particularly when my wife is at the computer. Not having a router that can prioritize my internet traffic, this is a welcome feature to avoid either slow-downs or having someone else turn off my downloads so they can use the internet.
      • Re: (Score:3, Interesting)

        by Firehed ( 942385 )

        I don't think this protocol will replace QoS on your local network - more likely, it will intelligently select peers based on external network (Internet) factors

        • by sopssa ( 1498795 ) *

          It will improve the local internet connection, which is the parent's problem as well (since the torrent client is slowing down the other internet usage). The torrent client will analyze how much latency grows and try to optimize for that.

          But I'm more unsure about how exactly this will improve the ISP's network. They do not have global latency problems because of torrenting, only bandwidth capacity problems, and torrent clients have no way to know if bandwidth usage at the ISP level is too high (or where it is).

          • by adolf ( 21054 )

            It looks like a good method to manage one's upload speed in order to keep local latency low for other purposes. However, I also don't think it'll help at the ISP level at all -- the tubes are just too big for my paltry 1-megabit upstream to make any measurable difference in the latency on them.

            What would, however, make a big difference (and I've been saying it for years): Geographically-aware peering.

            It's obviously more efficient on the network if you download something from someone four hops away, than if

            • by nstrom ( 152310 )

              Closer makes no difference; effective transfer speed does (which BT already prioritizes peers based upon). I can get much better download rates from the guy in Finland with a 100mbit connection than I can from the guy across town on my same cable ISP with an already saturated 384kbps upload.

              • by adolf ( 21054 )

                You're ignoring cost. All that international bandwidth costs more money, at the end of the day, than more localized bandwidth would. You, the customer, bear the brunt of these expenses in the forms of increased subscription fees and the ongoing war against P2P.

    • They will; it will give them another reason NOT to upgrade their networks like they should have three years ago
    • by wizardforce ( 1005805 ) on Sunday November 01, 2009 @06:46PM (#29945096) Journal

      I fear that you're right. With our luck, ACTA will probably kill net neutrality stone dead, with provisions allowing for, or perhaps even mandating, throttling by ISPs to protect various corporate interests regarding copyright law. The FCC's position on net neutrality, allowing for exceptions where activity is deemed illegal, strongly supports this view.

    • by interkin3tic ( 1469267 ) on Sunday November 01, 2009 @08:16PM (#29945690)

      I'm sure ISPs such as Comcast will find another reason to suggest they need to interfere with network management. Just give them a little bit of time to put their heads together with the guys at the RIAA.

      Really? I for one am certain that they will continue with the exact same rhetoric. It's a good scapegoat for them, and they don't have a problem with overlooking facts to avoid spending money.

      Comcast: "No, we don't need to spend money to relieve congestion, the slowdown is all caused by bittorrent. We need to regulate it."
      Us: "No it isn't, bittorrent isn't causing the problem, it's now self-regulating. The problem is on your end."
      Comcast: "The slowdown is all caused by illegal bittorrent transfers! We need to regulate it!"
      Us: "No, see, here's a breakdown of traffic..."
      Comcast: "THE SLOWDOWN IS ALL CAUSED BY ILLEGAL BITTORRENT TERRORISM! WE NEED TO REGULATE IT!"

  • How much do you want to bet ISPs will suddenly have numerous other non-bandwidth reasons to justify traffic shaping practices? :-)
  • shouldn't TCP do that by itself?

    Anyway, I consider this a good thing; it'll probably increase goodput (fewer outdated, duplicate packets, preferring "closer" networks).

    • by Anonymous Coward on Sunday November 01, 2009 @06:10PM (#29944818)

      shouldn't TCP do that by itself?

      Anyway, I consider this a good thing; it'll probably increase goodput (fewer outdated, duplicate packets, preferring "closer" networks).

      This is probably aimed at average BitTorrent users, i.e. they're on Windows. I highly doubt Windows has the wide variety of TCP congestion management protocols that are available in the Linux kernel. If I am wrong about that, please correct me, as I had a really hard time confirming this for certain. It's not exactly a "common support question" that you can easily Google for, or maybe your Google-fu is stronger than mine. I think Windows uses an implementation of Reno and that's it. Hence the need to build these features into the clients.

      Then there's the issue that to a TCP congestion protocol, all traffic is likely to be equal. It won't know that torrent traffic should receive lower priority whenever it conflicts with something else, VOIP apparently being the classic example. For that you need actual QoS. So the client itself will now measure latency to help determine this.

      Also, I doubt this will eliminate an ISP's excuses for throttling traffic. In terms of bandwidth saturation and network capacity, I highly doubt your ISP really cares whether your BitTorrent client is fully saturating your upstream by itself, or whether it uses only the bandwidth that something else doesn't need. In either case, you'd be maxing out your upstream pipe which is what they might concern themselves about.

      • Yeah, it is a big problem. Especially since we've got a very basic router with no type of throttling or priority features.

        Generally when downloading a torrent from certain trackers and large amounts of peers, the whole internet pretty much goes down for every other person in the house. Or goes to dial-up rates. Drives my Dad nuts.

        It wouldn't be a problem if I had a proper router, but with this feature, it should help if it works well. =)

        ~Jarik

        • Generally when downloading a torrent from certain trackers and large amounts of peers, the whole internet pretty much goes down for every other person in the house. Or goes to dial-up rates. Drives my Dad nuts.

          that's YOU not managing your bandwidth correctly... I use Vuze and always set my uplink to only 20-30kbps max... and downlink to 250-500 kbps... never any congestion for the other computers sharing the router...

          • This happens regardless of speed. Just seems to happen on certain torrents for some reason.

            For instance, one torrent might be downloading at 200kB/s with 10kB/s upload without any issues. Another one might be at 60kB/s down and 10kB/s up and the rest of the net slows to a crawl.

            • by ae1294 ( 1547521 )

              This happens regardless of speed. Just seems to happen on certain torrents for some reason.
              For instance, one torrent might be downloading at 200kB/s with 10kB/s upload without any issues. Another one might be at 60kB/s down and 10kB/s up and the rest of the net slows to a crawl.

              As you said, you have a crappy router. Build a Linux box for doing your NAT, or change the settings in your torrent app. If you are using torrent over wireless, you might find that things work much better if you plug your torrent box into the router instead (no encryption overhead, etc.). Other than that, you need to reduce the total number of connections allowed in your torrent app. Some routers can't handle more than 200 to 300 connections at a time (total from all your computers). Also keep DHT turned

        • by gmack ( 197796 )

          The protocol changes will probably not help you if you're overflowing your router's NAT table. Try reducing your max peers; it's a trick I've used on some of the cheaper routers to avoid choking everything (including the bittorrent). Some Zyxel modems with custom telco firmware (thank you telefonica) require a max peers setting as low as 30.

    • by timeOday ( 582209 ) on Sunday November 01, 2009 @06:11PM (#29944826)
    Bittorrent spawns a huge number of connections. If the OS (or ISP) gives equal bandwidth to each TCP stream, your connection to youtube gets about as much as each one of your 25 bittorrent connections, which destroys the streaming video, voip, or even normal web surfing. I would LOVE it if this provides a solution. (I would be even happier if ToS flags were widely honored, but that has never happened, so I don't know why it would happen now).
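    The equal-per-stream arithmetic above can be sketched quickly (the link speed and stream count here are illustrative, not from the comment):

    ```python
    def fair_share(total_kbps, n_streams):
        """Per-stream bandwidth if every TCP stream gets an equal share."""
        return total_kbps / n_streams

    # A 5000 kbps link shared by 25 BitTorrent streams plus one video stream:
    # the video stream gets the same slice as each torrent connection.
    share = fair_share(5000, 26)
    print(round(share))  # ~192 kbps for the video stream
    ```

    With 25 torrent streams competing, the one interactive stream is squeezed to under 4% of the link, which is the effect the comment describes.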
      • Re: (Score:3, Interesting)

        by causality ( 777677 )

        Bittorrent spawns a huge number of connections. If the OS (or ISP) gives equal bandwidth to each TCP stream, your connection to youtube gets about as much as each one of your 25 bittorrent connections, which destroys the streaming video, voip, or even normal web surfing. I would LOVE it if this provides a solution. (I would be even happier if ToS flags were widely honored, but that has never happened, so I don't know why it would happen now).

        I have heard the claim that the reason why ToS/QoS flags are not widely honored is that Windows, by default, sets the highest priority for ALL traffic with no regard for what kind of traffic it is. As I don't run Windows, I have to say I honestly don't know whether this is so. Can anyone affirm or deny this claim?

    • by Don Negro ( 1069 ) * on Sunday November 01, 2009 @06:36PM (#29945030)

      Short answer, No. TCP doesn't back off until packets are lost. uTP looks for latency increases which happen before packet loss (and therefore, before TCP congestion control kicks in) and throttles itself preemptively. Put another way, TCP treats all senders as having an equal right to bandwidth. uTP doesn't want to assert an equal right to bandwidth, it wants to send and receive in the unused portion of the available connection.
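      A minimal sketch of that preemptive, delay-based back-off idea (the function name, gain, and thresholds are hypothetical, not uTP's actual algorithm):

      ```python
      def adjust_rate(current_rate, measured_delay_ms, target_delay_ms=100,
                      min_rate=1024, gain=0.1):
          """Scale the send rate toward a target queuing delay.

          Unlike TCP, which waits for packet loss, this reacts to rising
          latency: below-target delay nudges the rate up, above-target
          delay throttles it before loss-based congestion control kicks in.
          """
          error = (target_delay_ms - measured_delay_ms) / target_delay_ms
          new_rate = current_rate * (1 + gain * error)
          return max(min_rate, new_rate)

      rate = 100_000  # bytes/sec
      rate = adjust_rate(rate, measured_delay_ms=300)  # congested: rate drops
      ```

      The key design choice is yielding: by targeting a small queuing delay instead of filling the pipe until loss, the sender stays in the unused portion of the link, which is exactly the "don't assert an equal right to bandwidth" behavior described above.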

      • TCP Vegas? [wikipedia.org]

        I remember reading how AT&T's iPhone "zero-packet-loss" was causing network congestion and 8-second ping times.

      • Re: (Score:3, Informative)

        by AHuxley ( 892839 )
        Think of it as Apples Grand Central Dispatch for your network.
        If you have the bandwidth and nothing else is requesting it, your torrents will fly.
        Want to watch youtube HD on your low end consumer grade adsl, your torrents will slow and overall networking will still seem responsive.
        When done viewing, BT will reclaim the bandwidth.
        BT is not just aware of your hard coded BT app max settings, but also your OS networking demands and can adjust?
        • Just to be slightly more pedantic than the other responses. Bittorrent uses heaps of TCP streams, each of which can start pushing their current window size worth of packets when they receive an ACK packet from their peer. Assuming no other limits in the torrent client, this can easily flood your OS and router / ADSL / Cable modem's transmit buffers.

          uTorrent and probably every other torrent client already needs to have complete control over the transmission speed of every TCP stream so it can impose a sane

        • by adolf ( 21054 )

          What is it with foreigners, and their various and sundry prose. For example: There is a clear trend for some of them to write a message which is primarily a statement, and to always end it with a question mark?

      • by mgblst ( 80109 )

        Wrong. You are thinking of the earliest versions of TCP. There are much better congestion management controls in recent TCP implementation stacks (recent = '90s).

        What this does is regulate bit torrent traffic only. So while TCP will cut down all your traffic, this will cut down bit torrent at the first signs of danger, before it gets to TCP throttling.

    • by _KiTA_ ( 241027 )

      shouldn't TCP do that by itself?

      Anyway, I consider this is a good thing, it'll probably increase goodput (less outdated, duplicate packets, preferring "closer" networks).

      It would, if BitTorrent et al. weren't designed to break TCP's regulating. By default uTorrent starts up with something like 800 max connections at a time. The TCP/IP spec was never really designed to handle this kind of shotgun flooding. The BitTorrent spec is designed to not care about fragmentation, QoS, et al. It is designed to break through college dorm QoS throttling, which is why this kind of discussion is kind of amusing.

  • But is it working? (Score:5, Insightful)

    by wealthychef ( 584778 ) on Sunday November 01, 2009 @06:05PM (#29944774)
    The summary says that the protocol is already out there, and "no major problems are reported." So how about "and congestion is being reduced, and here is how we know it?"
    • Re: (Score:2, Interesting)

      by angelbunny ( 1501333 )

      I've been using uTP for a couple of months now and I have to say it is excellent and is working for me quite well.

      However, since uTorrent is backwards compatible with the original TCP bittorrent protocol, the second I start sending to a client that doesn't support uTP my ping jumps from 20 to 200, or I have to go back to manually limiting my upload rate. Regardless, uTP works.

    • Major problems HAVE been reported, especially with people already using their own Traffic Shaping solutions. I've never gotten v2 to work properly. Uploading fluctuates and uses only half of my upstream on average, even though 100% of the upstream is available without congestion issues. eMule otoh has absolutely no issues using 99% of my upstream bandwidth.
      • Are you using eMule's throttling option? (Upload Speed Sense in Extended options)

  • In my experience, uTorrent only runs on Linux through Wine, and even then, only a few particular obsolete versions of uTorrent are Wine-compatible. Is there some way for me to run a uTorrent 2 client on Linux right now? I've wasted a lot of time trying to get bittorrent to play nice on my home network, to little avail.
    • Re: (Score:2, Informative)

      by trendzetter ( 777091 )
      I use deluge as a utorrent replacement
      • by broeman ( 638571 )
        I started using deluge lately too, and it works great. I used to use Azureus (mainly for the plugins), but I always wanted memory handled better. Deluge is just as good as uTorrent IMHO.
      • by rdnetto ( 955205 )

        I've tried it, but it just didn't feel as responsive as uTorrent, although I did like the daemon/client design. uTorrent seemed to have faster transfer speeds as well.

    • Are there any particular features you want uTorrent for, or do you just want it because you are already familiar with it in a Windows environment?

      There are a great many Linux native clients you could choose from, and while many are text based (which might not be your cup of tea), such as the excellent rtorrent [rakshasa.no] which I tend to use, there are quite a few that are GUI based, of which deluge [deluge-torrent.org] seems very popular, or are GUI wrappers for working with text based clients (there are several such wr

      • What is this story about? uTP, because it promises to reduce bittorrent interference with other apps on the network. From what I have gathered it is only offered by utorrent.
        • What is this story about? uTP, because it promises to reduce bittorrent interference with other apps on the network. From what I have gathered it is only offered by utorrent.

          Ah sorry, I completely forgot the fact that rTorrent has become the "official" client since its purchase.

    • by NorQue ( 1000887 )
      uTorrent 1.8 works with Wine here and it already brings support for the uTP protocol, you just have to switch it on manually in the advanced settings. Is there any other reason you want to use 2? Because I wouldn't recommend that, as some private trackers don't allow using it yet.

      For 1.8.x, in advanced settings, set bt.transp_disposition to:

      0: attempt only TCP
      1: attempt both TCP and uTP, drop TCP if uTP is successful
      2: attempt uTP if supported, TCP otherwise
      3: attempt only uTP
    • by Hatta ( 162192 )

      If you have linux, you can set up QoS yourself. Or you can just set rTorrent to use 5 or 10 K/s less than your max upstream, and it should work fine.

  • by mleugh ( 973240 )
    Is this likely to improve LAN performance when using bittorrent on a shared internet connection also?
    • Unless your LAN is slower than your WAN (remember that wireless never achieves its advertised rate) there should be no way BitTorrent is slowing down your LAN.

      Basically unless you have FiOS or similar and are using 802.11 to access it, something is wrong with your LAN if torrents break it.

  • Sweet! (Score:5, Funny)

    by i-like-burritos ( 1532531 ) on Sunday November 01, 2009 @06:29PM (#29944966)
    Now when I illegally download the newest DVD screeners, I can do it with a clear conscience knowing that I'm not congesting the network!

    Seriously though, this is a good thing. I don't know why the story is tagged "your rights online"

  • by bug1 ( 96678 ) on Sunday November 01, 2009 @06:51PM (#29945130)

    AFAIK most bittorrent clients throttle connections already, some automatically like Vuze, others like Transmission only manually.

    Or am I missing the point?

    • I didn't RTFA, but from the summary it would seem that each client has its own method for throttling. What they want to do is build a throttling algorithm into the BT protocol, hence standardizing the procedure. I guess this would make client coding easier, as the throttling would be achieved with a call to a BT library rather than a client coder having to write/find throttling code themselves.
    • by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Sunday November 01, 2009 @07:24PM (#29945364) Homepage

      Most clients have you set a fixed upload speed. Some try to do this automatically, while most have you set it manually. This isn't perfect - if you set it to use 80% of your upload, and you are using more than 20%, things will get slow. If you use less than 20%, you'll have some amount idle and being wasted. Some rely on something like monitoring ping to some specific service: if ping is higher, throttle back; if ping is low, increase speed. Again this isn't perfect, because it relies on a single host and route to determine your speed.

      uTorrent's new protocol requires no action from the user, no automatic bandwidth tests, and no outside service. It is designed to always use the optimal speed, while never interfering with foreground tasks.

      It has been a while since I read it, and when I read it I was very very tired, but my understanding is that it tags each packet with a high-precision send time. So if we have two packets, A and B, A will be sent at 100ms and B will be sent at 300ms. So you know they were sent 200ms apart. The _receiver_ then notices that he receives them 400ms apart, so there is 200ms of lag, which means it should be throttled back. It tries to keep the amount of lag at 50ms. Again, I could be completely wrong :D

      Since it is based on UDP and not TCP, it also solves the problem of Comcast sending fake RST packets to make each client think they wanted to disconnect from each other.
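      The timestamp arithmetic described above reduces to two subtractions (the numbers here are illustrative, and this is only the commenter's reading of the protocol, not its actual wire format):

      ```python
      def queuing_delay_increase(send_times_ms, recv_times_ms):
          """Relative one-way delay growth between two packets.

          Without synchronized clocks we cannot know the absolute latency,
          but the *change* in delay falls out of simple gap differences:
          if packets leave 200 ms apart and arrive 400 ms apart, 200 ms
          of queuing built up in between.
          """
          sent_gap = send_times_ms[1] - send_times_ms[0]
          recv_gap = recv_times_ms[1] - recv_times_ms[0]
          return recv_gap - sent_gap

      # Sent at 100 ms and 300 ms; received (on the peer's clock) 400 ms apart:
      print(queuing_delay_increase([100, 300], [1100, 1500]))  # 200
      ```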

      • by mrbene ( 1380531 )

        It has been a while since I read it, and when I read it I was very very tired, but my understanding is that it tags each packet with a high-precision send time. So if we have two packets, A and B, A will be sent at 100ms and B will be sent at 300ms. So you know they were sent 200ms apart. The _receiver_ then notices that he receives them 400ms apart, so there is 200ms of lag, which means it should be throttled back. It tries to keep the amount of lag at 50ms. Again, I could be completely wrong :D

        This would show that the lag had increased (from n to n+200ms), but it would not be possible to solve directly for n. You'd need additional back and forth (in the vein of NTP [wikipedia.org]) to establish a baseline value for n.

        Mind you, if you don't trust the network to be consistent, (or expect it to take longer in one direction than another) NTP doesn't work as well.

        • Sounds like the perfect solution to me. You don't need to know the actual latency. Instead as you increase your transmission speed an extra delay is added that your peer can tell you about, so you back off and speed up again with rules similar to TCP to try and stay near the sweet spot of your available bandwidth.
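          One common way to realize the "you don't need the actual latency" point above is a running-minimum baseline: treat the smallest one-way delay ever observed as the uncongested floor, and interpret anything above it as queue buildup. This is only a sketch of that idea, not any client's real code:

          ```python
          class DelayTracker:
              """Estimate queuing delay without synchronized clocks.

              The smallest raw one-way delay ever seen becomes the baseline;
              the excess over it approximates how much queue has built up,
              which is the signal a sender can back off from.
              """
              def __init__(self):
                  self.base_delay = float("inf")

              def queuing_delay(self, raw_one_way_delay):
                  self.base_delay = min(self.base_delay, raw_one_way_delay)
                  return raw_one_way_delay - self.base_delay

          t = DelayTracker()
          t.queuing_delay(150)         # first sample becomes the baseline -> 0
          print(t.queuing_delay(230))  # 80 ms of queue has built up
          ```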
    • You can throttle on your end, and your end only.

      If you had say... Cable, and all your neighbours were active too, then this would make your speed drop. Your torrents choke their webpage browsing and youtube streaming, but with congestion control, it doesn't choke them as much. Is it perfect? Nope. Will it affect you negatively? Not really. I'd happily download 20% slower for 80ms ping instead of 2000ms. (and yes, it can get that bad when networks opt for low or no packet loss.)

      When there is no congestion, i

  • Your router will throttle you, take your wallet and run.
  • by bertok ( 226922 ) on Sunday November 01, 2009 @08:06PM (#29945630)

    There's a much bigger issue with uTorrent that the developers seem to refuse to solve, or even acknowledge.

    In essence, uTorrent connects to clients randomly, and makes no attempt to prioritize "nearby" clients. This may not be a huge issue for Americans, but everywhere else, you know, like the rest of the fucking planet, this is hugely inefficient, for both the end users, and most importantly, ISPs. This is why they're throttling bittorrent: because it tends to make connections to peers outside the ISP's internal network, which costs ISPs money. In Australia for example, international bandwidth is extremely limited and very expensive, but local bandwidth, even between ISPs, is essentially unlimited, high-speed, and often free or 'unmetered'.

    What do you think is going to be faster: connecting to your neighbour at the same fucking router, or to some kid's home PC in Kazakhstan 35 hops away? Even connections from here to America have to go through thousands of miles of fiber optic cable over an ocean.

    Note that some other clients like Azureus have already implemented weighted peer choices, where peers with similar IP addresses are preferred over other peers. It's not hard. Heck, it's a trivial change to make, as no changes need to be made to the protocol itself. A reasonably competent programmer could implement this in an hour: simply take the user's own IP address, and then sort the IPs of potential peers by the number of prefix bits in common, then do a random selection from that list, weighted towards the best-matching end. How hard is that?
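    The prefix-weighted selection described above could be sketched like this (a sketch of the proposed scheme under the commenter's assumptions, not uTorrent's or Azureus's actual code; all names are illustrative):

    ```python
    import ipaddress
    import random

    def common_prefix_bits(a, b):
        """Number of leading bits two IPv4 addresses share."""
        x = int(ipaddress.IPv4Address(a)) ^ int(ipaddress.IPv4Address(b))
        return 32 if x == 0 else 32 - x.bit_length()

    def pick_peer(my_ip, peers, rng=random):
        """Weighted random choice favoring peers with longer shared prefixes.

        Peers in the same allocation block (often the same ISP or country)
        get proportionally higher weight, but distant peers can still be
        chosen, keeping the swarm connected.
        """
        weights = [1 + common_prefix_bits(my_ip, p) for p in peers]
        return rng.choices(peers, weights=weights, k=1)[0]

    peers = ["203.0.113.9", "203.0.112.14", "198.51.100.7"]
    print(common_prefix_bits("203.0.113.5", "203.0.113.9"))  # 28
    ```

    As the replies below note, shared prefix bits are only a rough proxy for geography, so this favors same-allocation peers rather than guaranteeing physical closeness.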

    The arrogance of the uTorrent devs is simply staggering. They're a group of developers who could, with an hour's effort, reduce international bandwidth usage by double-digit percentages and improve torrent download speeds by an order of magnitude, but they just... don't.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Prefix bits do not indicate location. 2 Class C's can be a long way from each other geographically. Even if the entire Internet was broken down into Class C spaces, and you prioritised addresses in your Class C, I don't think you would see many hits. I mean, there may be 50k people on the torrent, but how many of them are in the same neighbourhood as you?

      That's why the Vuze plugin uses a IP->location mapping database.

      • Re: (Score:3, Insightful)

        by bertok ( 226922 )

        Prefix bits do not indicate location. 2 Class C's can be a long way from each other geographically. Even if the entire Internet was broken down into Class C spaces, and you prioritised addresses in your Class C, I don't think you would see many hits. I mean, there may be 50k people on the torrent, but how many of them are in the same neighbourhood as you?

        That's why the Vuze plugin uses a IP->location mapping database.

        True, but it's still better than random. Many countries were allocated IP blocks from large ranges. Most of Australia's IP addresses start with prefixes around 200-something, for example. Similarly, most ISPs have large blocks allocated to them like /8 ranges or the like. Some ISPs are big enough that torrent users could have 10 or more connections to peers in the same ISP for reasonably common files like TV shows, and only need 1 or 2 to the outside world.

        Still, you're correct, adding even a simple country

    • Ono Plug-In

      You're absolutely right about how badly implemented the random client connection protocol is for BitTorrent clients. There is a project and a plug-in called Ono [northwestern.edu] for Vuze (formerly Azureus) BitTorrent clients. I used it before to resolve this problem, but I found that the non-stop creation of many ping.exe threads to analyze latency was causing some slow-downs on my own system and additional congestion on my upstream-limited broadband pipe.

      I am still surprised that a better protocol for p

      • by bertok ( 226922 )

        It doesn't have to be reliable, it just has to be better than "totally random". Even a very bad peer selection policy would be a HUGE improvement over what they have now.

    • by Aladrin ( 926209 ) on Sunday November 01, 2009 @09:13PM (#29946036)

      THEIR arrogance is astounding? How about yours? They are working FOR FREE. You are merely complaining. Get your hands dirty and start doing some work yourself.

      You can suggest things all you want, but once you start insulting someone for their free work, you've crossed a line. Nobody is forced to use their client. There are dozens of decent clients and probably hundreds of open source ones.

      As for their choices, they will work on what's more important to them, I'm sure. Since they don't need this 'local' feature, they haven't got much incentive to actually work on it.

      • by bertok ( 226922 ) on Sunday November 01, 2009 @09:34PM (#29946198)

        THEIR arrogance is astounding? How about yours? They are working FOR FREE. You are merely complaining. Get your hands dirty and start doing some work yourself.

        You can suggest things all you want, but once you start insulting someone for their free work, you've crossed a line. Nobody is forced to use their client. There are dozens of decent clients and probably hundreds of open source ones.

        As for their choices, they will work on what's more important to them, I'm sure. Since they don't need this 'local' feature, they haven't got much incentive to actually work on it.

        First of all, they're not working for 'free', uTorrent is owned by BitTorrent Inc, a for-profit company. Initially it was free, but it's now developed by a corporation. Those devs are salaried employees.

        More importantly, uTorrent depends on and uses infrastructure that is not free, by any stretch of the imagination. International links cost billions of dollars.

        So by your logic, just because a user can download their client for free, it gives Bittorent Inc carte blanche to do anything at all they want, including shit all over the internet infrastructure?

        How the fuck does it make sense for a company whose product uses something like 30% of the total internet bandwidth to not make an hour's worth of effort to minimize their impact on said infrastructure? Their product in its present state is so harmful that ISPs are buying millions of dollars worth of equipment to throttle it, and with good reason.

        Read up on the Tragedy of the Commons [wikipedia.org] and get a clue.

        Compare their behavior to the largely free, open, and volunteer efforts of the dedicated people who worked on the early Internet protocols like DNS and NNTP. These were systems designed to scale, use bandwidth efficiently, and 'play nice'.

        What happened since then? Why is it acceptable now to design a protocol that is maximally inefficient? Why would anyone support this kind of behavior?

        • Holey moley. The percentage of bits devoted to file sharing is dropping fast. Urgent media company/ISP press releases notwithstanding, total bandwidth consumed by peer-to-peer file sharing is now under 20%. This includes all protocols. BitTorrent will of course be less. The precipitous share decline has caused at least one observer (Sandvine's Dave Caputo) to comment that "peer-to-peer is yesterday's internet story." All the more startling coming from that outfit, a company whose controversial history sugge

        • by Aladrin ( 926209 )

          My mistake, it's no longer open source. It is still free, though, and your demands don't mean jack.

          And what gives them the right to do whatever they want is that it's THEIR PROTOCOL. You are perfectly free to invent your own, more efficient protocol. And if it really -is- better, people -will- switch to it. Why? Because people are impatient and want their files as quickly as they can get them.

          As for 'maximally inefficient', it is anything but. Most clients implement algorithms to determine the fastest

      • Re: (Score:3, Insightful)

        by Hatta ( 162192 )

        Get your hands dirty and start doing some work yourself.

        Sure, where can I get the uTorrent source code so I can add this feature?

    • Keeping traffic completely local would make it much easier to snag a bunch of file sharers in a massive "three strikes and you're out" campaign, don't you think? Since mere use of torrent software seems to be associated with illicit activity in the minds of the ignorant (ie. the authoRIAAties), I'm not sure that "I was just downloading the latest Ubuntu ISO" would be enough to avoid being threatened by the ISP. Lots of local inter-ISP torrent traffic might also cause them to alert local law enforcement to take a closer look. This could increase one's risk significantly, particularly if any 'infringing' content is ever shared (by an occasional, less enlightened, user of the connect, for example). Seems safer to not have to worry about local/non-local bandwidth, to be honest. Might be smarter to prefer connections that are as non-local and non-concentrated as possible. It's not always just about data transfer speed and bandwidth saving - there are other factors to consider.
      • Re: (Score:3, Insightful)

        by bertok ( 226922 )

        Keeping traffic completely local would make it much easier to snag a bunch of file sharers in a massive "three strikes and you're out" campaign, don't you think? Since mere use of torrent software seems to be associated with illicit activity in the minds of the ignorant (ie. the authoRIAAties), I'm not sure that "I was just downloading the latest Ubuntu ISO" would be enough to avoid being threatened by the ISP. Lots of local inter-ISP torrent traffic might also cause them to alert local law enforcement to take a closer look. This could increase one's risk significantly, particularly if any 'infringing' content is ever shared (by an occasional, less enlightened, user of the connection, for example). Seems safer to not have to worry about local/non-local bandwidth, to be honest. Might be smarter to prefer connections that are as non-local and non-concentrated as possible. It's not always just about data transfer speed and bandwidth saving - there are other factors to consider.

        [citation needed]

        Keep in mind that in large part, the motivation of ISPs for monitoring or throttling bittorrent is not concerns over copyright violations, but the impact to their bottom line. All ISPs have three classes of links: Internal, peered, and external. They have a strong preference to maximize the utilization of the former over the latter, as internal links are effectively free and often underutilized, while external links are often very expensive and overloaded.

        If torrent traffic utilized interna

    • by evilviper ( 135110 ) on Sunday November 01, 2009 @09:33PM (#29946194) Journal

      In Australia for example, international bandwidth is extremely limited and very expensive, but local bandwidth, even between ISPs, is essentially unlimited, high-speed, and often free or 'unmetered'.

      No bittorrent client picks one peer, and downloads everything from them... Instead, it connects to a large number of peers, and downloads from all of them.

      If you can download from your neighbor 100X faster than you can download from someone across the planet... good. You'll get 100 chunks from your neighbor, for every 1 you get from the foreign country. No programming required.

      What do you think is going to be faster: connecting to your neighbour through the same fucking router, or some kid's home PC in Kazakhstan over 35 hops away?

      There's ample opportunity for either to be equally fast. Crossing an ocean increases latency, but if the link isn't horribly oversubscribed, it can provide speeds faster than you can handle. So your neighbor might have 100 other people requesting the same torrent as you, for the same reasons, while the kid in Kazakhstan may have a great internet connection that is barely being utilized, especially while international traffic is down. This is not international calling... you don't save money by not fully utilizing that transoceanic link.

      Also, ISPs brought this on themselves. I've long advocated ISPs allowing unlimited speeds between subscribers, and only limiting the uplink speeds to whatever you've subscribed, but they almost never do. If they did, see above... any peer-to-peer protocol would naturally download almost everything from local sources, without any added intelligence on its part. You wouldn't have to write it in to every single app.

      A reasonably competent programmer could implement this in an hour

      You could implement it easily, if you're willing to restrict yourself to neighboring network addresses in lieu of all else. If you want some fancy weighting to decide how important locality is versus absolute speed, completeness, etc. then you're talking about a major project.

      Besides that... A good network admin could do the job in an hour as well, with no need to rewrite any of the applications.

      They're a group of developers who could, with an hour's effort, reduce international bandwidth usage by double-digit percentages and improve torrent download speeds by an order of magnitude, but they just... don't.

      That's baseless and utterly ridiculous.

    • by crossmr ( 957846 )

      you say it doesn't, but I say it does
      on private trackers I'm routinely shunned, even as the initial seed, as soon as other seeds become available that are very likely geographically closer.

      Almost every torrent I've added to private trackers shows a consistent max up speed until someone else hits seed and then my up basically stops. Fine when I'm the initial seed, terrible if I'm just jumping on a random torrent since 95% of the people out there won't take anything from me. I've seen plenty of people on the sa

    • A reasonably competent programmer could implement this in an hour

      I don't agree or disagree with the rest of your statement, but these kinds of statements really bother me.

      A reasonably competent plumber could fix my sink in an hour. I'm not counting the time he has to drive to me, the time it takes to fetch repair parts, the time it takes to talk with me, the time it takes to write me a bill, and the time to get me to pay said bill because I store bills in a drawer that gets opened every month or three.

      Do I need to explain to you that your pet bug does not take an hour to

    • There's a much bigger issue with uTorrent that the developers seem to refuse to solve, or even acknowledge.

      This issue has been fixed [superjason.com] since version 1.7x.

      One of the features that I've been anxiously awaiting is the "Local Peer Discovery" feature in uTorrent 1.7x. Basically, it uses a multicast to discover bittorrent clients that are active on your local network. It can determine if they are seeding or leeching a torrent that you're interested in. If it's available on the network, it will try to use it as a p
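      For reference, Local Peer Discovery works by multicasting a small HTTP-like announce over UDP on the LAN. A rough Python sketch of the idea (the group address and port are the well-known LPD values; the function names and the example usage are mine, not uTorrent's code):

      ```python
      import socket

      # Well-known BitTorrent Local Peer Discovery multicast group and port
      LPD_GROUP, LPD_PORT = "239.192.152.143", 6771

      def build_lpd_announce(info_hash_hex: str, listen_port: int) -> bytes:
          """Build the BT-SEARCH datagram that LPD clients multicast on the LAN."""
          return ("BT-SEARCH * HTTP/1.1\r\n"
                  f"Host: {LPD_GROUP}:{LPD_PORT}\r\n"
                  f"Port: {listen_port}\r\n"
                  f"Infohash: {info_hash_hex}\r\n"
                  "\r\n\r\n").encode("ascii")

      def announce_local(info_hash_hex: str, listen_port: int) -> None:
          """Send the announce so nearby clients sharing the torrent can find us."""
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          # TTL of 1 keeps the datagram on the local network only
          sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
          sock.sendto(build_lpd_announce(info_hash_hex, listen_port),
                      (LPD_GROUP, LPD_PORT))
          sock.close()
      ```

      Clients listening on that multicast group parse the Infohash header and, if they are on the same torrent, add the sender as a peer.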

      • by daid303 ( 843777 )

        Local peer discovery only finds peers on my local network, not on near networks.

        • Local peer discovery only finds peers on my local network, not on near networks.

          Agreed, but I was responding to one of his hypothetical hyperbolic scenarios: "connecting to your neighbour through the same fucking router". The problem is definitely not as bad as the parent originally made it sound.

          In any case, it does sound like my second suggested solution would work for his case, flipping the peer.resolve_country variable to true. Granted, he would have to flip that flag, and come back the next day to

    • by Inda ( 580031 )
      If ISPs want to keep their traffic local they should own a large server with massive storage for their users. Maybe have a file retention of 10 days to keep the size down. Maybe sell extra retention time to users who want it. Imagine the download speeds when connected to your ISP, who might only be 5 hops away! Imagine not having to upload anything either!

      If only ISPs bought into this model, we'd all be sorted.

      I'm surprised no universities have experimented with something like this already.

      (I know the first
    • BitTyrant [wikipedia.org] does something like this. Essentially it prioritizes connections to peers that have the best response rates.

      In essence, uTorrent connects to clients randomly, and makes no attempt to prioritize "nearby" clients.

      The problem isn't simply proximity. If, for example, Kazakhstan upgraded their capacity and you really could get better transfer speeds than, say, your neighbor next door, well then they should be prioritized.
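      The core of that idea is just ranking peers by observed payback instead of picking randomly. A crude sketch (the function name, slot count, and rate bookkeeping are illustrative, not BitTyrant's actual algorithm):

      ```python
      def choose_unchoked(peer_download_rates: dict[str, float],
                          slots: int = 4) -> list[str]:
          """Grant upload slots to the peers that have recently sent us data
          fastest, regardless of whether they are next door or overseas."""
          ranked = sorted(peer_download_rates, key=peer_download_rates.get,
                          reverse=True)
          return ranked[:slots]
      ```

      In practice the rates would come from a rolling average of bytes received per peer, re-evaluated every choking round.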

    • Azureus has the Ono Plugin that you might want to try. It uses CDN redirection information to identify and give connection priority to geographically nearby peers. I haven't heard of any similar efforts for other clients.

      http://azureus.sourceforge.net/plugin_details.php?plugin=ono [sourceforge.net]

    • In essence, uTorrent connects to clients randomly, and makes no attempt to prioritize "nearby" clients.

      You might be interested in this thread [ibiblio.org] I started on the topic in '05 - it covers some pros and cons. My intent at the time was to avoid the whole problem we wound up with at Comcast (that took the FCC to fix). Somebody mentioned to me once that there was a problem with traceroute on Windows, not sure if that's really true.

    • A reasonably competent programmer could implement this in an hour: simply take the user's own IP address, and then sort the IPs of potential peers by the number of prefix bits in common, then do a random selection from that list, weighted towards the best-matching end.

      It sounds like this scheme would wreak havoc on the stats kept by private trackers, definitely not a one hour job.
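      For what it's worth, the quoted prefix-matching heuristic really is short. A Python sketch (the `1 + bits` weighting is my own guess at "weighted towards the best-matching end"):

      ```python
      import random

      def common_prefix_bits(a: str, b: str) -> int:
          """Number of leading bits two dotted-quad IPv4 addresses share."""
          to_int = lambda ip: int.from_bytes(
              bytes(int(part) for part in ip.split(".")), "big")
          # XOR zeroes out the shared prefix; bit_length gives the first mismatch
          return 32 - (to_int(a) ^ to_int(b)).bit_length()

      def pick_peer(my_ip: str, candidates: list[str]) -> str:
          """Random choice weighted toward peers whose address prefix matches ours."""
          weights = [1 + common_prefix_bits(my_ip, peer) for peer in candidates]
          return random.choices(candidates, weights=weights, k=1)[0]
      ```

      Whether an hour of coding covers the tracker-stats fallout is, as the parent says, another question.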

  • by 7-Vodka ( 195504 ) on Sunday November 01, 2009 @08:42PM (#29945840) Journal

    Furthermore, the revision is designed to eliminate the need for ISPs to deal with problems caused by excessive BitTorrent traffic on their networks

    How wrong this is. ISPs don't give a crap about this and it's never going to work.

    1. They don't give a crap because the real reason they throttle is because they don't want you using your bandwidth. You know the bandwidth you actually paid for. Whether you are supposedly clogging up their pipe or not is not the point. The point is that you are using more bandwidth than another user and they could kick your ass and sell their internets to 1000 old ladies instead.

    2. It's never going to work because of (1), and because the problem it's trying to solve was never a problem for the ISP; it was always a problem for the end user anyway. You think that ISPs have big download pipes and small upload limits like you do? They don't. Their shit is symmetrical. You can stop clogging your tiny upload allocation as much as you want; it's never going to affect the ISP. They never had an UP shortage, because they have equal up/down bandwidth and hand you tiny up limits. It may help the end user, but only if it's better than existing solutions, and if you already know what your ISP castrates your up bandwidth to, it's not.

  • by Tumbleweed ( 3706 ) on Sunday November 01, 2009 @10:08PM (#29946412)

    Get a seedbox. :)

  • I think it is somewhat pointless to throttle the speeds beyond your connection to the ISP. Usually (always?) your upload bandwidth is the limiting factor. Azureus has had a autospeed plugin for ages that monitors your latency and adjusts the upload speed based on that. It is responsive enough to detect when you are watching streaming video etc and lower the upload speed when needed. And just to spite the ISP I usually make Azureus (not rTorrent) to open 4000+ connections and run 10-100 torrents simultaneous
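    The core of such a latency-driven autospeed loop is simple. A toy version (the thresholds, step sizes, and function name are my own, not the Azureus plugin's):

    ```python
    def adjust_upload_limit(current_kbps: float, rtt_ms: float,
                            baseline_rtt_ms: float = 50.0,
                            max_extra_ms: float = 100.0) -> float:
        """Back off the upload cap when measured RTT rises well above the idle
        baseline (a sign our own uploads are filling the uplink queue), and
        probe gently upward when the link looks uncongested."""
        queuing_delay = rtt_ms - baseline_rtt_ms
        if queuing_delay > max_extra_ms:
            return max(10.0, current_kbps * 0.8)  # multiplicative decrease
        return current_kbps + 5.0                  # additive increase
    ```

    Called every second or so with a fresh ping measurement, this yields the behavior described above: streaming video or a VoIP call raises the RTT, and the upload cap drops out of the way within a few iterations.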

  • http://www.digitalsociety.org/2009/11/analysis-of-bittorrent-utp-congestion-avoidance/ [digitalsociety.org] BitTorrent’s new uTP protocol claims to be “network friendly”, but testing suggests that it’s just as nasty to web surfing, online gaming, and VoIP as before. BitTorrent still consumes 90% of the network and causes very high jitter.
