uTorrent To Build In Transfer-Throttling Ability

vintagepc writes "TorrentFreak reports that a redesign of the popular BitTorrent client uTorrent allows clients to detect network congestion and automatically adjust their transfer rates, eliminating interference with other Internet applications' traffic. In theory, the protocol senses congestion from the time it takes a packet to reach its destination and, by adjusting intelligently, should reduce network load without a major impact on download speeds and times. As Simon Morris puts it (from TFA), 'The throttling that matters most is actually not so much the download but rather the upload – as bandwidth is normally much lower UP than DOWN, the up-link will almost always get congested before the down-link does.' The revision is also designed to eliminate the need for ISPs to deal with problems caused by excessive BitTorrent traffic on their networks, saving them money and support costs. The v2.0b client using this protocol is apparently already in wide use, and no major problems have been reported."
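
The delay-based idea can be sketched in a few lines. To be clear, this is not uTorrent's actual uTP algorithm (which is considerably more involved); it is a minimal sketch of the general approach, in which the sender backs off whenever its own traffic appears to be adding queuing delay to the path. All names and constants below are illustrative.

    # Toy delay-based rate controller in the spirit of uTP/LEDBAT-style
    # congestion control. Constants and names are made up for illustration.

    TARGET_DELAY_MS = 100.0   # extra delay we are willing to add to the path
    GAIN = 0.01               # how aggressively to react to the error
    MIN_RATE = 10_000         # bytes/sec floor so the transfer never stalls
    MAX_RATE = 1_000_000      # cap, e.g. the user's configured upload limit

    def adjust_upload_rate(current_rate, base_delay_ms, latest_delay_ms):
        """Return a new upload rate. base_delay_ms is the lowest delay ever
        observed (an estimate of the uncongested path); latest_delay_ms is
        the most recent sample."""
        queuing_delay = latest_delay_ms - base_delay_ms   # delay we caused
        error = TARGET_DELAY_MS - queuing_delay           # > 0 means room to grow
        new_rate = current_rate + GAIN * current_rate * (error / TARGET_DELAY_MS)
        return max(MIN_RATE, min(MAX_RATE, new_rate))

Called once per delay sample, this ramps up while measured queuing delay stays below the target and backs off once the sender itself starts to fill the bottleneck queue.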
  • by Anonymous Coward on Sunday November 01, 2009 @06:10PM (#29944818)

    shouldn't TCP do that by itself?

    Anyway, I consider this a good thing; it'll probably increase goodput (fewer outdated or duplicate packets, preferring "closer" networks).

    This is probably aimed at average BitTorrent users, i.e. they're on Windows. I highly doubt Windows has the wide variety of TCP congestion management algorithms that are available in the Linux kernel. If I am wrong about that, please correct me; I had a really hard time confirming this for certain. It's not exactly a "common support question" that you can easily Google for, or maybe your Google-fu is stronger than mine. I think Windows uses an implementation of Reno and that's it. Hence the need to build these features into the clients.

    Then there's the issue that a TCP congestion protocol is likely to treat all traffic as equal. It won't know that torrent traffic should receive lower priority whenever it conflicts with something else, VoIP apparently being the classic example. For that you need actual QoS. So the client itself will now measure latency to help determine this.

    Also, I doubt this will eliminate an ISP's excuses for throttling traffic. In terms of bandwidth saturation and network capacity, I highly doubt your ISP really cares whether your BitTorrent client is saturating your upstream by itself, or whether it uses only the bandwidth that something else doesn't need. In either case you'd be maxing out your upstream pipe, which is what they concern themselves with.

  • by mleugh ( 973240 ) on Sunday November 01, 2009 @06:18PM (#29944872)
    Is this also likely to improve LAN performance when using BitTorrent on a shared Internet connection?
  • by causality ( 777677 ) on Sunday November 01, 2009 @06:20PM (#29944886)

    BitTorrent spawns a huge number of connections. If the OS (or ISP) gives equal bandwidth to each TCP stream, your connection to YouTube gets about as much as each one of your 25 BitTorrent connections, which destroys streaming video, VoIP, or even normal web surfing. I would LOVE it if this provides a solution. (I would be even happier if ToS flags were widely honored, but that has never happened, so I don't know why it would happen now.)

    I have heard the claim that the reason ToS/QoS flags are not widely honored is that Windows, by default, sets the highest priority for ALL traffic with no regard for what kind of traffic it is. As I don't run Windows, I honestly don't know whether this is so. Can anyone affirm or deny this claim?
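
    For what it's worth, an application can at least ask for a ToS/DSCP marking on its own sockets; whether the OS or any router honors it is another question entirely. A minimal sketch using the standard Python socket API (the DSCP value is just an example, and on Windows the stack often ignores this option):

        import socket

        # Ask for DSCP CS1 ("scavenger"/bulk class): DSCP 8 sits in the upper
        # six bits of the ToS byte, hence 8 << 2 == 0x20. Routers may ignore it.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x20)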

  • by bug1 ( 96678 ) on Sunday November 01, 2009 @06:51PM (#29945130)

    AFAIK most BitTorrent clients throttle connections already: some automatically, like Vuze; others, like Transmission, only manually.

    Or am I missing the point?

  • by Firehed ( 942385 ) on Sunday November 01, 2009 @07:42PM (#29945494) Homepage

    I don't think this protocol will replace QoS on your local network; more likely, it will intelligently select peers based on external network (Internet) factors.

  • by angelbunny ( 1501333 ) on Sunday November 01, 2009 @07:48PM (#29945532)

    I've been using uTP for a couple of months now and I have to say it is excellent; it's working quite well for me.

    However, since uTorrent is backwards compatible with the original TCP BitTorrent protocol, the second I start sending to a client that doesn't support uTP my ping jumps from 20 to 200 ms, or I have to go back to manually limiting my upload rate. Regardless, uTP works.

  • by bertok ( 226922 ) on Sunday November 01, 2009 @08:06PM (#29945630)

    There's a much bigger issue with uTorrent that the developers seem to refuse to solve, or even acknowledge.

    In essence, uTorrent connects to clients randomly and makes no attempt to prioritize "nearby" clients. This may not be a huge issue for Americans, but everywhere else, you know, like the rest of the fucking planet, it is hugely inefficient, for both end users and, most importantly, ISPs. This is why they're throttling BitTorrent: because it tends to make connections to peers outside the ISP's internal network, which costs ISPs money. In Australia, for example, international bandwidth is extremely limited and very expensive, but local bandwidth, even between ISPs, is essentially unlimited, high-speed, and often free or 'unmetered'.

    What do you think is going to be faster: connecting to your neighbour at the same fucking router, or to some kid's home PC in Kazakhstan 35 hops away? Even connections from here to America have to go through thousands of miles of fiber optic cable across an ocean.

    Note that some other clients, like Azureus, have already implemented weighted peer choices, where peers with similar IP addresses are preferred over other peers. It's not hard. Heck, it's a trivial change to make, as no changes need to be made to the protocol itself. A reasonably competent programmer could implement this in an hour: simply take the user's own IP address, sort the IPs of potential peers by the number of prefix bits in common, then do a random selection from that list, weighted towards the best-matching end (roughly the sketch below). How hard is that?

    The arrogance of the uTorrent devs is simply staggering. They're a group of developers who could, with an hour's effort, reduce international bandwidth usage by double-digit percentages and improve torrent download speeds by an order of magnitude, but they just... don't.
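
    The weighting described above is easy enough to sketch. This is illustrative only: it assumes IPv4, uses Python's standard library, and deliberately ignores all the reasons (raised elsewhere in this thread) why shared prefix length is an imperfect proxy for "nearby".

        import random
        import socket
        import struct

        def common_prefix_bits(ip_a, ip_b):
            """Number of leading bits two IPv4 addresses share."""
            a = struct.unpack("!I", socket.inet_aton(ip_a))[0]
            b = struct.unpack("!I", socket.inet_aton(ip_b))[0]
            x = a ^ b
            return 32 if x == 0 else 32 - x.bit_length()

        def pick_peers(my_ip, candidate_ips, count):
            """Random selection weighted toward peers whose addresses share a
            long prefix with ours (sampling with replacement, fine for a sketch)."""
            weights = [1 + common_prefix_bits(my_ip, ip) for ip in candidate_ips]
            return random.choices(candidate_ips, weights=weights, k=count)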

  • by 7-Vodka ( 195504 ) on Sunday November 01, 2009 @08:42PM (#29945840) Journal

    Furthermore, the revision is designed to eliminate the need for ISPs to deal with problems caused by excessive BitTorrent traffic on their networks

    How wrong this is. ISPs don't give a crap about this, and it's never going to work.

    1. They don't give a crap because the real reason they throttle is that they don't want you using your bandwidth. You know, the bandwidth you actually paid for. Whether you are supposedly clogging up their pipe or not is not the point. The point is that you are using more bandwidth than another user, and they could kick your ass off and sell their internets to 1000 old ladies instead.

    2. It's never going to work because of (1), and because the problem it's trying to solve was never a problem for the ISP; it was always a problem for the end user anyway. You think the ISPs have big download pipes and small upload limits like you do? They don't. Their shit is symmetrical. You can stop clogging your tiny upload allocation as much as you want; it's never going to affect the ISP. They never had an UP shortage, because they have equal up/down bandwidth and provide you with tiny up limits. It may help the end user, but only if it's better than existing solutions, and if you already know what your ISP castrates your up bandwidth to, it's not.

  • by Anonymous Coward on Sunday November 01, 2009 @08:59PM (#29945938)

    Prefix bits do not indicate location. Two Class C's can be a long way from each other geographically. Even if the entire Internet were broken down into Class C spaces and you prioritised addresses in your own Class C, I don't think you would see many hits. I mean, there may be 50k people on the torrent, but how many of them are in the same neighbourhood as you?

    That's why the Vuze plugin uses an IP->location mapping database.
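
    Once an IP->location database is in the picture, the ranking really is just a distance sort. A rough sketch, assuming a hypothetical lookup(ip) function that returns a (latitude, longitude) pair from such a database; the haversine formula itself is standard:

        from math import radians, sin, cos, asin, sqrt

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance between two points, in kilometres."""
            lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
            a = (sin((lat2 - lat1) / 2) ** 2
                 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
            return 2 * 6371 * asin(sqrt(a))

        def sort_peers_by_distance(my_ip, peer_ips, lookup):
            """lookup(ip) -> (lat, lon) is assumed to come from a GeoIP database."""
            my_lat, my_lon = lookup(my_ip)
            return sorted(peer_ips,
                          key=lambda ip: haversine_km(my_lat, my_lon, *lookup(ip)))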

  • by JakFrost ( 139885 ) on Sunday November 01, 2009 @09:07PM (#29945986)

    Ono Plug-In

    You're absolutely right about how badly implemented the random peer-connection behaviour is in BitTorrent clients. There is a project and plug-in called Ono [northwestern.edu] for the Vuze (formerly Azureus) BitTorrent client. I used it to address this problem, but I found that the non-stop creation of many ping.exe processes to analyze latency was causing some slow-downs on my own system and additional congestion on my upstream-limited broadband pipe.

    I am still surprised that a better protocol for proximity-favoring peer connections wasn't developed for BitTorrent and other P2P systems to maximize performance by connecting to peers on the same or nearby networks. I have a feeling that, with the huge increases in demand for content, there will be a need for optimized connection protocols once we start demanding more than the capacity we have.

    Netmask Flaws

    One solution that is simple to implement is the netmask calculation you mentioned, but I fear this solution won't work reliably, since the way network ranges are created and managed internally by large broadband ISPs is unpredictable, and neighboring ranges may be owned by different ISPs or be in other countries. Plus, netmask information doesn't tell you anything about the closest neighbors to connect to once you exhaust the connections within your own netmask.

    Routing Table Solution

    I think the best solution would be one based on the information in the routing protocols that the routers have, but since this information is not available to individual clients, applications have no way of looking at the overall routing structure to determine exactly who the closest and best neighbors are, based on latency, bandwidth, cost, and hop count.

    If there were a way for the application to query the router for a partial view of the routing table (e.g. 5 or 10 hops) and then prioritize the peer addresses from the tracker according to an algorithm that takes bandwidth up and down, latency, cost, and hop count into account, we would have an optimal solution to the order in which to connect to peers.

    Latency and Hop Count Not Enough

    The problem is that routers won't share their routing table information with clients. The solution then becomes the Ono plug-in's: the client has to ping and/or traceroute the peer addresses and choose based only on latency and hop count, without knowing anything about bi-directional bandwidth availability or the cost involved (a rough sketch of that fallback is at the end of this comment). Without bandwidth info the whole thing falls apart, because latency isn't enough to determine maximum throughput, and there is no practical way of doing a meaningful bi-directional bandwidth check between peers without burning a lot of time and bandwidth in the process.

    Upstream Throttling (Not Choking)

    Hopefully, this new uTP protocol will at least give us an improvement on the upstream side by auto-throttling the upload to keep it from choking the connection.

    If only the clients could peek at the routing tables of our routers...
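
    Lacking any view of the routing tables, a client really is reduced to something like the following: actively probe each candidate and rank by measured latency, much as the Ono plug-in does. This is only a sketch of that fallback; the ping flags shown are Linux's, and, as noted above, this kind of probing carries its own overhead.

        import subprocess

        def rtt_ms(ip, count=3, timeout_s=2):
            """Average round-trip time to a peer via the system ping, or None
            if it does not answer. The flags below are for Linux's ping."""
            try:
                out = subprocess.run(
                    ["ping", "-c", str(count), "-W", str(timeout_s), ip],
                    capture_output=True, text=True, timeout=count * timeout_s + 5,
                ).stdout
            except subprocess.TimeoutExpired:
                return None
            for line in out.splitlines():
                if "min/avg/max" in line and "=" in line:
                    return float(line.split("=")[1].split("/")[1])
            return None

        def rank_by_latency(peer_ips):
            """Sort peers by measured RTT; unreachable peers go last."""
            def key(ip):
                rtt = rtt_ms(ip)
                return rtt if rtt is not None else float("inf")
            return sorted(peer_ips, key=key)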

  • by Anonymous Coward on Sunday November 01, 2009 @10:08PM (#29946404)

    This has been a problem since day one, since dial-up to BBSes. However, it is also the reason we have 8Mb/s-100Mb/s connections today. I was considered a heavy user back in the day when I wanted to send some digital pictures to a friend many states away. Dialup wasn't always 64k, and was never 64k up; I had 300bps initially. It took a very long time to send digital pictures, though not as long as the post.

    That was once considered abuse. Now no one cares that I send a lot of digital pictures. The ISPs don't care, and MySpace and Facebook make it free to share them with family.

    Once it was floppy disks, then CDs; today it is the sharing of DVDs. It will not end, and it will drive the increase in total bandwidth. ISPs should be able to prioritize this traffic. The encryption and obfuscation used by many P2P clients today means the only way to detect it is to spot SSL on ports other than 443 with invalid certificates, which makes it difficult to control unless you have quality equipment. BitTorrent is much easier to control. The developers are helping with prioritization, which is a good thing, but more needs to be done. I do the QoS for a "free" campus-type hot spot with 100Mb/s of Internet connection and lots of users. We pay based on usage. When someone kicks off a big P2P session, it is very noticeable. Should I kick him off, QoS him, or pay thousands of dollars a month extra to let him do "free" P2P? QoSing that P2P Ubuntu up/download seems to me to be the right thing to do.

    At home, using DD-WRT, I'm able to prioritize things. I can have a Mozy backup going full speed without it affecting my Netflix or Hulu. Before I did QoS, the Mozy upload would cause major problems with these services due to up-link congestion.
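
    The "TLS on a non-standard port with a certificate that doesn't validate" rule mentioned above is simple enough to write down directly. Everything here is hypothetical (the FlowInfo fields in particular are whatever facts a QoS box already has about a flow); it is only meant to show the shape of the heuristic.

        from dataclasses import dataclass

        @dataclass
        class FlowInfo:
            # Hypothetical per-flow facts the shaping box is assumed to know.
            dst_port: int
            looks_like_tls: bool
            cert_validates: bool

        def probably_obfuscated_p2p(flow: FlowInfo) -> bool:
            """The heuristic from the comment above: TLS-looking traffic on a
            port other than 443 whose certificate does not validate."""
            return (flow.looks_like_tls
                    and flow.dst_port != 443
                    and not flow.cert_validates)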
     

  • by evilviper ( 135110 ) on Sunday November 01, 2009 @11:03PM (#29946804) Journal

    Your post is precisely what is wrong: It's all about what you get out of an individual download.

    You have utterly and totally failed to understand the content of my reply. I suggest you try again.
