uTorrent To Build In Transfer-Throttling Ability 187
vintagepc writes "TorrentFreak reports that a redesign of the popular BitTorrent client uTorrent allows clients to detect network congestion and automatically adjust their transfer rates, eliminating interference with other Internet applications' traffic. In theory, the protocol senses congestion based on the time it takes a packet to reach its destination and, by adjusting intelligently, should reduce network congestion without a major impact on download speeds and times. As Simon Morris puts it (from TFA), 'The throttling that matters most is actually not so much the download but rather the upload – as bandwidth is normally much lower UP than DOWN, the up-link will almost always get congested before the down-link does.' Furthermore, the revision is designed to eliminate the need for ISPs to deal with problems caused by excessive BitTorrent traffic on their networks, saving them money and support costs. Apparently, the v2.0b client using this protocol is already in wide use, and no major problems have been reported."
reason 1 down. reason 2 in queue. (Score:5, Insightful)
Re:reason 1 down. reason 2 in queue. (Score:5, Insightful)
Re: (Score:3, Interesting)
I don't think this protocol will replace QoS on your local network - more likely, it will intelligently select peers based on external network (Internet) factors
Re: (Score:2)
It will improve the local internet connection, which is the parent's problem as well (since the torrent client is slowing down the other internet usage). The torrent client analyzes how much latency grows and tries to minimize it.
But I'm more unsure about exactly how this will improve the ISP's network. They do not have global latency problems because of torrenting, only bandwidth capacity problems, and torrent clients have no way to know if bandwidth usage at the ISP level is too high (or where it is).
Re: (Score:2)
It looks like a good method to manage one's upload speed in order to keep local latency low for other purposes. However, I also don't think it'll help at the ISP level at all -- the tubes are just too big for my paltry 1-megabit upstream to make any measurable difference in the latency on them.
What would, however, make a big difference (and I've been saying it for years): Geographically-aware peering.
It's obviously more efficient on the network if you download something from someone 4 hops away, than if
Re: (Score:2)
Closer makes no difference; effective transfer speed does (which BT already prioritizes peers based upon). I can get much better download rates from the guy in Finland with a 100mbit connection than I can from the guy across town on my same cable ISP with an already saturated 384kbps upload.
Re: (Score:2)
You're ignoring cost. All that international bandwidth costs more money, at the end of the day, than more localized bandwidth would. You, the customer, bear the brunt of these expenses in the form of increased subscription fees and the ongoing war against P2P.
Re: (Score:2)
Re:reason 1 down. reason 2 in queue. (Score:5, Insightful)
I fear that you're right. With our luck, ACTA will probably kill net neutrality stone dead, with provisions allowing for, perhaps even mandating, throttling by ISPs to protect various corporate interests regarding copyright law. The FCC's position on net neutrality, which allows for exceptions where activity is deemed illegal, strongly supports this view.
Re:reason 1 down. reason 2 in queue. (Score:5, Insightful)
I'm sure ISPs such as Comcast will find another reason to suggest they need to interfere with network management. Just give them a little bit of time to put their heads together with the guys at the RIAA.
Really? I for one am certain that they will continue with the exact same rhetoric. It's a good scapegoat for them, and they don't have a problem with overlooking facts to avoid spending money.
Comcast: "No, we don't need to spend money to relieve congestion, the slowdown is all caused by bittorrent. We need to regulate it."
Us: "No it isn't, bittorrent isn't causing the problem, it's now self-regulating. The problem is on your end."
Comcast: "The slowdown is all caused by illegal bittorrent transfers! We need to regulate it!"
Us: "No, see, here's a breakdown of traffic..."
Comcast: "THE SLOWDOWN IS ALL CAUSED BY ILLEGAL BITTORRENT TERRORISM! WE NEED TO REGULATE IT!"
Re: (Score:2)
I hereby copyright and/or trademark the word "Bitterrorism".
Yeah But... (Score:1)
TCP regulating congestion (Score:2)
shouldn't TCP do that by itself?
Anyway, I consider this a good thing; it'll probably increase goodput (fewer outdated or duplicate packets, preferring "closer" networks).
Re:TCP regulating congestion (Score:5, Interesting)
shouldn't TCP do that by itself?
Anyway, I consider this a good thing; it'll probably increase goodput (fewer outdated or duplicate packets, preferring "closer" networks).
This is probably aimed at average BitTorrent users, i.e. people on Windows. I highly doubt Windows has the wide variety of TCP congestion-control algorithms that are available in the Linux kernel. If I am wrong about that, please correct me, as I had a really hard time confirming this for certain. It's not exactly a "common support question" that you can easily Google for, or maybe your Google-fu is stronger than mine. I think Windows uses an implementation of Reno and that's it. Hence the need to build these features into the clients.
Then there's the issue that a TCP congestion algorithm treats all traffic as equal. It won't know that torrent traffic should receive lower priority whenever it conflicts with something else, VoIP being the classic example. For that you need actual QoS. So the client itself will now measure latency to help determine this.
Also, I doubt this will eliminate an ISP's excuses for throttling traffic. In terms of bandwidth saturation and network capacity, I highly doubt your ISP really cares whether your BitTorrent client is fully saturating your upstream by itself, or whether it uses only the bandwidth that something else doesn't need. In either case you'd be maxing out your upstream pipe, which is what they concern themselves with.
Re: (Score:2)
Yeah, it is a big problem. Especially since we've got a very basic router with no type of throttling or priority features.
Generally when downloading a torrent from certain trackers and large amounts of peers, the whole internet pretty much goes down for every other person in the house. Or goes to dial-up rates. Drives my Dad nuts.
It wouldn't be a problem if I had a proper router, but with this feature, it should help if it works well. =)
~Jarik
Re: (Score:2)
that's YOU not managing your bandwidth correctly... I use Vuze and always set my uplink to only 20-30kbps max.. and downlink to 250-500 kbps... never any congestion for the other computers sharing the router...
Re: (Score:2)
This happens regardless of speed. Just seems to happen on certain torrents for some reason.
For instance, one torrent might be downloading at 200kB/s with 10kB/s upload without any issues. Another one might be at 60kB/s down and 10kB/s up and the rest of the net slows to a crawl.
Re: (Score:2)
This happens regardless of speed. Just seems to happen on certain torrents for some reason.
For instance, one torrent might be downloading at 200kB/s with 10kB/s upload without any issues. Another one might be at 60kB/s down and 10kB/s up and the rest of the net slows to a crawl.
As you said, you have a crappy router. Build a linux box for doing your NAT or change the settings in your torrent app. If you are using torrent over wireless you might find that things work much better if you plug your torrent box into the router instead (no encryption overhead, etc, etc). Other than that you need to reduce the total number of connections allowable in your torrent app. Some routers can't handle more than 200 to 300 connections at a time (total from all your computers). Also keep DHT turned
Re: (Score:2)
The protocol changes will probably not help you if you're overflowing your router's NAT table. Try reducing your max peers; it's a trick I've used on some of the cheaper routers to avoid choking everything (including the BitTorrent). Some Zyxel modems with custom telco firmware (thank you Telefonica) require a max peers setting as low as 30.
Re:TCP regulating congestion (Score:5, Informative)
Re: (Score:3, Interesting)
Bittorrent spawns a huge number of connections. If the OS (or ISP) gives equal bandwidth to each TCP stream, your connection to YouTube gets about as much as each one of your 25 bittorrent connections, which destroys streaming video, VoIP, or even normal web surfing. I would LOVE it if this provides a solution. (I would be even happier if ToS flags were widely honored, but that has never happened, so I don't know why it would happen now.)
I have heard the claim that the reason why ToS/QoS flags are not widely honored is that Windows, by default, sets the highest priority for ALL traffic with no regard for what kind of traffic it is. As I don't run Windows, I have to say I honestly don't know whether this is so. Can anyone affirm or deny this claim?
Re: (Score:2)
Re:TCP regulating congestion (Score:5, Informative)
Short answer, No. TCP doesn't back off until packets are lost. uTP looks for latency increases which happen before packet loss (and therefore, before TCP congestion control kicks in) and throttles itself preemptively. Put another way, TCP treats all senders as having an equal right to bandwidth. uTP doesn't want to assert an equal right to bandwidth, it wants to send and receive in the unused portion of the available connection.
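The back-off-before-loss idea described above can be sketched as a toy delay-based rate controller. This is only an illustration of the concept (the class name, constants, and update rule are all made up here); the real uTP algorithm is the more involved LEDBAT scheme.

```python
# Toy LEDBAT-style controller: throttle on rising queuing delay,
# instead of waiting for packet loss like classic TCP does.

TARGET_DELAY_MS = 100.0  # delay budget; illustrative value
GAIN = 0.1               # how strongly the rate reacts; illustrative

class DelayBasedThrottle:
    def __init__(self, rate_kbps: float):
        self.rate_kbps = rate_kbps
        # Lowest delay ever observed approximates the uncongested path.
        self.base_delay_ms = float("inf")

    def on_delay_sample(self, one_way_delay_ms: float) -> float:
        self.base_delay_ms = min(self.base_delay_ms, one_way_delay_ms)
        queuing_delay = one_way_delay_ms - self.base_delay_ms
        # Below target: speed up a little. Above target: slow down,
        # yielding bandwidth to whoever is filling the buffers.
        error = (TARGET_DELAY_MS - queuing_delay) / TARGET_DELAY_MS
        self.rate_kbps = max(1.0, self.rate_kbps * (1 + GAIN * error))
        return self.rate_kbps
```

Because the controller reacts to delay growth, it backs off while TCP flows sharing the link are still ramping up, which is exactly the "use only the unused portion" behavior the post describes.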
Re: (Score:2)
TCP Vegas? [wikipedia.org]
I remember reading how AT&T's iPhone "zero-packet-loss" was causing network congestion and 8-second ping times.
Re: (Score:2)
Yes, uTP is like Vegas but it actually works on Windows.
Re: (Score:3, Informative)
If you have the bandwidth and nothing else is requesting it, your torrents will fly.
Want to watch youtube HD on your low end consumer grade adsl, your torrents will slow and overall networking will still seem responsive.
When done viewing, BT will reclaim the bandwidth.
BT is not just aware of your hard coded BT app max settings, but also your OS networking demands and can adjust?
Re: (Score:2)
Just to be slightly more pedantic than the other responses: Bittorrent uses heaps of TCP streams, each of which can start pushing its current window's worth of packets when it receives an ACK packet from its peer. Assuming no other limits in the torrent client, this can easily flood your OS and your router / ADSL / cable modem's transmit buffers.
uTorrent and probably every other torrent client already needs to have complete control over the transmission speed of every TCP stream so it can impose a sane
Re: (Score:2)
What is it with foreigners, and their various and sundry prose. For example: There is a clear trend for some of them to write a message which is primarily a statement, and to always end it with a question mark?
Re: (Score:2)
Wrong. You are thinking of the earliest versions of TCP. There are much better congestion-management controls in recent TCP implementation stacks (recent = the '90s).
What this does is regulate bittorrent traffic only. So while TCP will cut down all your traffic, this will cut down bittorrent at the first signs of trouble, before TCP's throttling kicks in.
Re: (Score:2)
shouldn't TCP do that by itself?
Anyway, I consider this a good thing; it'll probably increase goodput (fewer outdated or duplicate packets, preferring "closer" networks).
It would, if BitTorrent et al. weren't designed to break TCP's regulation. By default uTorrent starts up with something like 800 max connections at a time. The TCP/IP spec was never really designed to handle this kind of shotgun flooding. The BitTorrent spec is designed not to care about fragmentation, QoS, et al. It is designed to break through college dorm QoS throttling, which is why this kind of discussion is kind of amusing.
But is it working? (Score:5, Insightful)
Re: (Score:2, Interesting)
I've been using uTP for a couple of months now and I have to say it is excellent and is working for me quite well.
However, since uTorrent is backwards compatible with the original TCP BitTorrent protocol, the second I start sending to a client that doesn't support uTP my ping jumps from 20 to 200, or I have to go back to manually limiting my upload rate. Regardless, uTP works.
Re: (Score:2)
Re: (Score:2)
Are you using eMule's throttling option? (Upload Speed Sense in Extended options)
Re: (Score:2)
So, it's a Torrent issue.
Although the comparison is a bit unfair here, because you are using the latest & greatest in bandwidth throttling, which may not work for as broad an audience as Torrent's.
I am using eMule too but NAFC isn't working well for me since we have a lot of systems on NAT and whenever there is internal FTP traffic the NAFC on the eMule client thinks the network is congested.
The plain USS works OK though.
Linux client? (Score:2)
Re: (Score:2, Informative)
Re: (Score:2)
Re: (Score:2)
I've tried it, but it just didn't feel as responsive as uTorrent, although I did like the daemon/client design. uTorrent seemed to have faster transfer speeds as well.
Re: (Score:2)
Are there any particular features you want uTorrent for, or do you just want it because you are already familiar with it in a Windows environment?
There are a great many Linux-native clients you could choose from, and while many are text based (which might not be your cup of tea), such as the excellent rtorrent [rakshasa.no] which I tend to use, there are quite a few that are GUI based, of which deluge [deluge-torrent.org] seems very popular, or are GUI wrappers for working with text based clients (there are several such wr
Re: (Score:2)
Re: (Score:2)
What is this story about? uTP, because it promises to reduce bittorrent interference with other apps on the network. From what I have gathered it is only offered by utorrent.
Ah sorry, I completely forgot the fact that rTorrent has become the "official" client since its purchase.
Re: (Score:2)
Re: (Score:2)
FTMFW = For The MotherFucking WIn
Re: (Score:2)
For 1.8.x, in advanced settings, set bt.transp_disposition to:
0: attempt only TCP
1: attempt both TCP and uTP, drop TCP if uTP is successful
2: attempt uTP if supported, TCP otherwise
3: attempt only uTP
Re: (Score:2)
Re: (Score:2)
If you have linux, you can set up QoS yourself. Or you can just set rTorrent to use 5 or 10 K/s less than your max upstream, and it should work fine.
LAN performance also? (Score:2, Interesting)
Re: (Score:2)
Unless your LAN is slower than your WAN (remember that wireless never achieves its advertised rate) there should be no way BitTorrent is slowing down your LAN.
Basically unless you have FiOS or similar and are using 802.11 to access it, something is wrong with your LAN if torrents break it.
Sweet! (Score:5, Funny)
Seriously though, this is a good thing. I don't know why the story is tagged "your rights online"
Re: (Score:2)
You paid for that up bandwidth, use it
Clients already do this (Score:3, Interesting)
AFAIK most bittorrent clients throttle connections already, some automatically like Vuze, others like Transmission only manually.
Or am I missing the point?
Re: (Score:2)
Re:Clients already do this (Score:5, Informative)
Most clients have you set a fixed upload speed; some try to do this automatically, while most have you set it manually. This isn't perfect - if you set it to use 80% of your upload and your other traffic needs more than the remaining 20%, things will get slow. If your other traffic uses less than 20%, some bandwidth sits idle and is wasted. Some clients rely on something like monitoring ping to a specific service: if ping is high, throttle back; if ping is low, increase speed. Again this isn't perfect, because it relies on a single host and route to determine your speed.
uTorrent's new protocol requires no action from the user, no automatic bandwidth tests, and no outside service. It is designed to always use the optimal speed, while never interfering with foreground tasks.
It has been a while since I read it, and when I read it I was very very tired, but my understanding is that it tags each packet with a high-precision send time. So if we have two packets, A and B, A will be sent at 100ms and B will be sent at 300ms, so you know they were sent 200ms apart. The _receiver_ then notices that he receives them 400ms apart, so there is 200ms of lag, which means it should throttle back. It tries to keep the amount of lag around 50ms. Again, I could be completely wrong :D
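That timestamp trick can be sketched in a few lines. The nice property is that the sender's and receiver's clocks don't need to be synchronized: the unknown clock offset cancels when you look only at differences. (The function name and list representation here are illustrative, not actual uTP wire fields.)

```python
def queuing_delay_increase(sent_ms, recv_ms):
    """Given parallel lists of send and receive timestamps taken on two
    unsynchronized clocks, estimate how much extra delay each packet saw
    relative to the first one."""
    # The constant clock offset between the two machines is unknown,
    # but it is the same for every packet, so it cancels out here.
    base = recv_ms[0] - sent_ms[0]
    return [(r - s) - base for s, r in zip(sent_ms, recv_ms)]
```

Using the post's own numbers: packets sent at 100ms and 300ms, received 400ms apart, show 200ms of delay growth, which is the signal to throttle back.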
Since it is based on UDP and not TCP, it also solves the problem of Comcast sending fake RST packets to make each client think the other wanted to disconnect.
Re: (Score:2)
It has been a while since I read it, and when I read it I was very very tired, but my understanding is that it tags each packet with a high-precision send time. So if we have two packets, A and B, A will be sent at 100ms and B will be sent at 300ms, so you know they were sent 200ms apart. The _receiver_ then notices that he receives them 400ms apart, so there is 200ms of lag, which means it should throttle back. It tries to keep the amount of lag around 50ms. Again, I could be completely wrong :D
This would show that the lag had increased (from n to n+200ms), but it would not be possible to solve directly for n. You'd need additional back and forth (in the vein of NTP [wikipedia.org]) to establish a baseline value for n.
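The NTP-style back-and-forth mentioned above boils down to the classic four-timestamp calculation, which, as noted, assumes the path is symmetric:

```python
def ntp_offset_and_delay(t0, t1, t2, t3):
    """Classic NTP-style estimate. t0 = client send, t1 = server receive,
    t2 = server send, t3 = client receive; t1 and t2 are on the server's
    clock, which may be offset from the client's. Assumes the forward and
    return trips take equal time, which is the weak point of the method."""
    offset = ((t1 - t0) + (t2 - t3)) / 2   # server clock minus client clock
    round_trip = (t3 - t0) - (t2 - t1)     # total time minus server hold time
    return offset, round_trip
```

With a baseline offset in hand, subsequent one-way delay samples can be compared against it to detect when queuing delay starts to build.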
Mind you, if you don't trust the network to be consistent, (or expect it to take longer in one direction than another) NTP doesn't work as well.
Re: (Score:2)
Re: (Score:2)
You can throttle on your end, and your end only.
If you had say... Cable, and all your neighbours were active too, then this would make your speed drop. Your torrents choke their webpage browsing and youtube streaming, but with congestion control, it doesn't choke them as much. Is it perfect? Nope. Will it affect you negatively? Not really. I'd happily download 20% slower for 80ms ping instead of 2000ms. (and yes, it can get that bad when networks opt for low or no packet loss.)
When there is no congestion, i
Next: ISPs develop automatic throttling... (Score:2)
Much bigger issue with uTorrent still unsolved (Score:3, Interesting)
There's a much bigger issue with uTorrent that the developers seem to refuse to solve, or even acknowledge.
In essence, uTorrent connects to clients randomly, and makes no attempt to prioritize "nearby" clients. This may not be a huge issue for Americans, but everywhere else, you know, like the rest of the fucking planet, this is hugely inefficient, for both the end users, and most importantly, ISPs. This is why they're throttling bittorrent: because it tends to make connections to peers outside the ISP's internal network, which costs ISPs money. In Australia for example, international bandwidth is extremely limited and very expensive, but local bandwidth, even between ISPs, is essentially unlimited, high-speed, and often free or 'unmetered'.
What do you think is going to be faster: connecting to your neighbour at the same fucking router, or some kid's home PC in Kazakhstan over 35 hops away? Even connections from here to America have to go through thousands of miles of fiber-optic cable across an ocean.
Note that some other clients like Azureus have already implemented weighted peer choices, where peers with similar IP addresses are preferred over other peers. It's not hard. Heck, it's a trivial change to make, as no changes need to be made to the protocol itself. A reasonably competent programmer could implement this in an hour: simply take the user's own IP address, and then sort the IPs of potential peers by the number of prefix bits in common, then do a random selection from that list, weighted towards the best-matching end. How hard is that?
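The prefix-weighted selection just described might be sketched like this (purely illustrative; the weighting scheme is made up, and real locality-aware approaches like the Ono plugin mentioned elsewhere in the thread use richer data than address prefixes):

```python
import ipaddress
import random

def common_prefix_bits(a: str, b: str) -> int:
    """Number of leading bits two IPv4 addresses share."""
    x = int(ipaddress.IPv4Address(a)) ^ int(ipaddress.IPv4Address(b))
    return 32 if x == 0 else 32 - x.bit_length()

def pick_peer(my_ip: str, peers: list, rng=random) -> str:
    """Weighted random choice, biased towards peers whose addresses share
    a long prefix with ours -- a crude stand-in for 'nearby'."""
    weights = [1 + common_prefix_bits(my_ip, p) for p in peers]
    return rng.choices(peers, weights=weights, k=1)[0]
```

As the replies below point out, address-prefix proximity is a rough heuristic at best, but even a rough bias would beat uniformly random selection.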
The arrogance of the uTorrent devs is simply staggering. They're a group of developers who could, with an hour's effort, reduce international bandwidth usage by double-digit percentages and improve torrent download speeds by an order of magnitude, but they just... don't.
Re: (Score:2, Interesting)
Prefix bits do not indicate location. 2 Class C's can be a long way from each other geographically. Even if the entire Internet was broken down into Class C spaces, and you prioritised addresses in your Class C, I don't think you would see many hits. I mean, there may be 50k people on the torrent, but how many of them are in the same neighbourhood as you?
That's why the Vuze plugin uses an IP->location mapping database.
Re: (Score:3, Insightful)
Prefix bits do not indicate location. 2 Class C's can be a long way from each other geographically. Even if the entire Internet was broken down into Class C spaces, and you prioritised addresses in your Class C, I don't think you would see many hits. I mean, there may be 50k people on the torrent, but how many of them are in the same neighbourhood as you?
That's why the Vuze plugin uses an IP->location mapping database.
True, but it's still better than random. Many countries were allocated IP blocks from large ranges. Most of Australia's IP addresses start with prefixes around 200-something, for example. Similarly, most ISPs have large blocks allocated to them like /8 ranges or the like. Some ISPs are big enough that torrent users could have 10 or more connections to peers in the same ISP for reasonably common files like TV shows, and only need 1 or 2 to the outside world.
Still, you're correct, adding even a simple country
Proximity Favored Connections (Score:2, Interesting)
Ono Plug-In
You're absolutely right about how badly implemented the random client connection protocol is for BitTorrent clients. There is a project and a plug-in called Ono [northwestern.edu] for Vuze (formerly Azureus) BitTorrent clients. I used it before to resolve this problem, but I found that the non-stop creation of many ping.exe threads to analyze latency was causing slowdowns on my own system and additional congestion on my upstream-limited broadband pipe.
I am still surprised that a better protocol for p
Re: (Score:2)
It doesn't have to be reliable, it just has to be better than "totally random". Even a very bad peer selection policy would be a HUGE improvement over what they have now.
Re: (Score:2)
It wouldn't even require the cooperation of ISPs.
As the AC post just above yours pointed out, it's fairly simple to scrape this information from a wide variety of sources. It would be sufficient for someone to update a simple table once every few months. Even if it was baked into the uTorrent executable, it would get updated reasonably often along with the regular point releases.
Keep in mind also that most P2P clients represent a large set of distributed cooperating applications that could analyze and monit
Re: (Score:2)
I assure you that I can maintain my grip on my sanity even in the face of the most advanced heuristics! 8)
Re:Much bigger issue with uTorrent still unsolved (Score:5, Insightful)
THEIR arrogance is astounding? How about yours? They are working FOR FREE. You are merely complaining. Get your hands dirty and start doing some work yourself.
You can suggest things all you want, but once you start insulting someone for their free work, you've crossed a line. Nobody is forced to use their client. There are dozens of decent clients and probably hundreds of open source ones.
As for their choices, they will work on what's more important to them, I'm sure. Since they don't need this 'local' feature, they haven't got much incentive to actually work on it.
Re:Much bigger issue with uTorrent still unsolved (Score:4, Insightful)
THEIR arrogance is astounding? How about yours? They are working FOR FREE. You are merely complaining. Get your hands dirty and start doing some work yourself.
You can suggest things all you want, but once you start insulting someone for their free work, you've crossed a line. Nobody is forced to use their client. There are dozens of decent clients and probably hundreds of open source ones.
As for their choices, they will work on what's more important to them, I'm sure. Since they don't need this 'local' feature, they haven't got much incentive to actually work on it.
First of all, they're not working for 'free', uTorrent is owned by BitTorrent Inc, a for-profit company. Initially it was free, but it's now developed by a corporation. Those devs are salaried employees.
More importantly, uTorrent depends on and uses infrastructure that is not free by any stretch of the imagination. International links cost billions of dollars.
So by your logic, just because a user can download their client for free, it gives Bittorent Inc carte blanche to do anything at all they want, including shit all over the internet infrastructure?
How the fuck does it make sense for a company whose product uses something like 30% of total internet bandwidth not to make an hour's worth of effort to minimize its impact on said infrastructure? Their product in its present state is so harmful that ISPs are buying millions of dollars' worth of equipment to throttle it, and with good reason.
Read up on the Tragedy of the Commons [wikipedia.org] and get a clue.
Compare their behavior to the largely free, open, and volunteer efforts of the dedicated people who worked on the early Internet protocols like DNS and NNTP. These were systems designed to scale, use bandwidth efficiently, and 'play nice'.
What happened since then? Why is it acceptable now to design a protocol that is maximally inefficient? Why would anyone support this kind of behavior?
Re: (Score:2)
Holey moley. The percentage of bits devoted to file sharing is dropping fast. Urgent media company/ISP press releases notwithstanding, total bandwidth consumed by peer-to-peer file sharing is now under 20%. This includes all protocols; bittorrent will of course be less. The precipitous share decline has caused at least one observer (Sandvine's Dave Caputo) to comment that "peer-to-peer is yesterday's internet story." All the more startling coming from that outfit, a company whose controversial history sugge
Re: (Score:2)
My mistake, it's no longer open source. It is still free, though, and your demands don't mean jack.
And what gives them the right to do whatever they want is that it's THEIR PROTOCOL. You are perfectly free to invent your own, more efficient protocol. And if it really -is- better, people -will- switch to it. Why? Because people are impatient and want their files as quickly as they can get them.
As for 'maximally inefficient', it is anything but. Most clients implement algorithms to determine the fastest
Re: (Score:3, Insightful)
Get your hands dirty and start doing some work yourself.
Sure, where can I get the uTorrent source code so I can add this feature?
Re: (Score:2)
Re: (Score:3, Insightful)
Keeping traffic completely local would make it much easier to snag a bunch of file sharers in a massive "three strikes and you're out" campaign, don't you think? Since mere use of torrent software seems to be associated with illicit activity in the minds of the ignorant (ie. the authoRIAAties), I'm not sure that "I was just downloading the latest Ubuntu ISO" would be enough to avoid being threatened by the ISP. Lots of local inter-ISP torrent traffic might also cause them to alert local law enforcement to take a closer look. This could increase one's risk significantly, particularly if any 'infringing' content is ever shared (by an occasional, less enlightened, user of the connection, for example). Seems safer to not have to worry about local/non-local bandwidth, to be honest. Might be smarter to prefer connections that are as non-local and non-concentrated as possible. It's not always just about data transfer speed and bandwidth saving - there are other factors to consider.
[citation needed]
Keep in mind that, in large part, ISPs' motivation for monitoring or throttling bittorrent is not concern over copyright violations but the impact on their bottom line. All ISPs have three classes of links: internal, peered, and external. They have a strong preference for maximizing utilization of internal links over external ones, as internal links are effectively free and often underutilized, while external links are often very expensive and overloaded.
If torrent traffic utilized interna
Re:Much bigger issue with uTorrent still unsolved (Score:5, Insightful)
No bittorrent client picks one peer, and downloads everything from them... Instead, it connects to a large number of peers, and downloads from all of them.
If you can download from your neighbor 100X faster than you can download from someone across the planet... good. You'll get 100 chunks from your neighbor, for every 1 you get from the foreign country. No programming required.
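The "no programming required" point can be seen in a toy model: because each peer streams chunks at its own rate, the count of chunks you receive from each peer is automatically proportional to its speed. (Function name, rates, and chunk size below are all illustrative.)

```python
def chunks_received(peer_rates_kBps, chunk_kB=256, seconds=60):
    """Toy model of parallel downloading: each peer delivers chunks at
    its own sustained rate, so chunk counts end up proportional to speed
    without any locality logic in the client."""
    return {peer: int(rate * seconds / chunk_kB)
            for peer, rate in peer_rates_kBps.items()}
```

A fast nearby peer naturally ends up supplying roughly 100 chunks for every one from a slow distant peer, exactly as described.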
There's ample opportunity for either to be equally fast. Crossing an ocean increases latency, but if the link isn't horribly oversubscribed, it can provide speeds faster than you can handle. So your neighbor might have 100 other people requesting the same torrent as you, for the same reasons, while the kid in Kazakhstan may have a great internet connection that is barely being utilized, especially while international traffic is down. This is not international calling... you don't save money by not fully utilizing that transoceanic link.
Also, ISPs brought this on themselves. I've long advocated ISPs allowing unlimited speeds between subscribers, and only limiting the uplink speeds to whatever you've subscribed, but they almost never do. If they did, see above... any peer-to-peer protocol would naturally download almost everything from local sources, without any added intelligence on its part. You wouldn't have to write it in to every single app.
You could implement it easily, if you're willing to restrict yourself to neighboring network addresses in lieu of all else. If you want some fancy weighting to decide how important locality is versus absolute speed, completeness, etc. then you're talking about a major project.
Besides that... A good network admin could do the job in an hour as well, with no need to rewrite any of the applications.
That's baseless and utterly ridiculous.
Re: (Score:3, Interesting)
You have utterly and totally failed to understand the content of my reply. I suggest you try again.
Re: (Score:2)
Would you care to provide links to these papers? I'm sure you've read quite a few that back this up, otherwise you wouldn't be arguing this position.
I only have vague memories of reading some related stuff a while back (I'm not exactly keeping tabs of this stuff for research or anything), but I remember that there was a Slashdot post a while back about this paper from Microsoft Research:
Network Coding for Large Scale Content Distribution
http://research.microsoft.com/pubs/67246/tr-2004-80.pdf [microsoft.com]
It basically says that there's an even more efficient form of P2P, where the blocks are optimally encoded using something akin to a huge "RAID 5" type parity algorit
Re: (Score:2)
Also just found another one, but this is a bit heavy on the maths:
Network information flow - R Ahlswede, N Cai, SYR Li, RW Yeung, 2000
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.89.4568&rep=rep1&type=pdf [psu.edu]
I think this was the original paper that pointed out that it's possible to exceed the naive "optimal broadcast" efficiency of a single source on a switched network, by allowing intermediate nodes to perform some computation.
Mind you, this is only tangentially related to current P2P syst
Re: (Score:2)
You say it doesn't, but I say it does.
On private trackers I'm routinely shunned, even as the initial seed, as soon as other seeds become available that are very likely geographically closer.
Almost every torrent I've added to private trackers shows a consistent max up speed until someone else hits seed and then my up basically stops. Fine when I'm the initial seed, terrible if I'm just jumping on a random torrent since 95% of the people out there won't take anything from me. I've seen plenty of people on the sa
Re: (Score:2)
A reasonably competent programmer could implement this in an hour
I don't agree or disagree with the rest of your statement, but these kinds of statements really bother me.
A reasonably competent plumber could fix my sink in an hour. I'm not counting the time he has to drive to me, the time it takes to fetch repair parts, the time it takes to talk with me, the time it takes to write me a bill, and the time to get me to pay said bill, because I store bills in a drawer that gets opened every month or three.
Do I need to explain to you that your pet bug does not take an hour to
Re: (Score:2)
This issue has been fixed [superjason.com] since version 1.7x.
Re: (Score:2)
Local peer discovery only finds peers on my local network, not on near networks.
Re: (Score:2)
Agreed, but I was responding to one of his hypothetical hyperbolic scenarios: "connecting to your neighbour through at the same fucking router". The problem is definitely not as bad as the parent originally made it sound.
In any case, it does sound like my second suggested solution would work for his case, flipping the peer.resolve_country variable to true. Granted, he would have to flip that flag, and come back the next day to
Re: (Score:2)
If only ISPs bought into this model, we'd all be sorted.
I'm surprised no universities have experimented with something like this already.
(I know the first
Already solved. See: BitTyrant. (Score:2)
BitTyrant [wikipedia.org] does something like this. Essentially it prioritizes connections to peers that have the best response rates.
In essence, uTorrent connects to clients randomly, and makes no attempt to prioritize "nearby" clients.
The problem isn't simply proximity. If, for example, Kazakhstan upgraded their capacity and you really could get better transfer speeds than, say, your neighbor next door, well then they should be prioritized.
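The selection policy being described, favouring whichever peers actually deliver data fastest rather than whichever are nearest, can be sketched as a rate-ranked unchoke loop. This is an illustrative sketch in the spirit of BitTyrant, not its actual code; the class names, the EMA smoothing factor, and the slot count are all assumptions.

```python
ALPHA = 0.3        # EMA smoothing factor (assumed, not from BitTyrant)
UNCHOKE_SLOTS = 4  # number of peers to upload to at once (assumed)

class Peer:
    def __init__(self, addr):
        self.addr = addr
        self.rate = 0.0  # smoothed bytes/sec actually received from this peer

    def observe(self, bytes_received, interval_sec):
        """Fold a new throughput measurement into the moving average."""
        sample = bytes_received / interval_sec
        self.rate = ALPHA * sample + (1 - ALPHA) * self.rate

def choose_unchoked(peers):
    """Upload to whichever peers have been delivering data fastest,
    regardless of where they are on the map."""
    return sorted(peers, key=lambda p: p.rate, reverse=True)[:UNCHOKE_SLOTS]
```

Under this policy a fast peer in Kazakhstan really does outrank a slow neighbour next door, which is exactly the point being made above.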
Re: (Score:2)
Azureus has the Ono Plugin that you might want to try. It uses CDN redirection information to identify and give connection priority to geographically nearby peers. I haven't heard of any similar efforts for other clients.
http://azureus.sourceforge.net/plugin_details.php?plugin=ono [sourceforge.net]
Re: (Score:2)
In essence, uTorrent connects to clients randomly, and makes no attempt to prioritize "nearby" clients.
You might be interested in this thread [ibiblio.org] I started on the topic in '05 - it covers some pros and cons. My intent at the time was to avoid the whole problem we wound up with at Comcast (that took the FCC to fix). Somebody mentioned to me once that there was a problem with traceroute on Windows, not sure if that's really true.
Re: (Score:2)
It sounds like this scheme would wreak havoc on the stats kept by private trackers, definitely not a one hour job.
Re: (Score:2)
I read your post and, at first, entirely agreed with it. However, what if the devs deliberately want to keep this problem around, to preserve a reason for ISPs to finally upgrade their infrastructure before further optimizing how the protocol works?
That's quite a retarded suggestion. Upgrading the link with the bottleneck requires a lot of investment (putting a new intercontinental fiber-optic line in isn't the same as digging up a few streets) so the ISPs are quite right to try to put it off as long as they can. It's not underinvestment, it's trying to make the existing investment work for its living properly. And the thing is... it does work for most since most people's network access is sporadic. It's the bulk downloaders that are the problem from
Re: (Score:3, Insightful)
I agree. I see other peers on the same ISP as me, and others on another ISP based in the same city as me, and yet it doesn't connect to them.
Coding a way to let you manually prioritise a peer or domain would be easy to do.
I see this all the time too. It shits me to no end that I could be connecting to users with 10Mbit uplinks in the same city, but uTorrent blindly connects to peers in places like Hungary which is almost precisely the furthest possible distance from me.
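The "prioritise nearby peers" idea the last few posts are asking for can be roughed out with a shared-prefix heuristic: peers whose addresses share a long prefix with yours are more likely to be on the same ISP or city network. This is a crude illustrative sketch, not anything uTorrent actually does; real locality needs RTT probes or topology data like the Ono plugin uses.

```python
import ipaddress

def prefix_match_bits(a, b):
    """Count leading bits two IPv4 addresses have in common."""
    diff = int(ipaddress.IPv4Address(a)) ^ int(ipaddress.IPv4Address(b))
    return 32 - diff.bit_length()

def sort_by_locality(my_ip, peer_ips):
    """Order candidate peers so the most 'nearby-looking' come first."""
    return sorted(peer_ips,
                  key=lambda p: prefix_match_bits(my_ip, p),
                  reverse=True)

ranked = sort_by_locality("203.0.113.10",
                          ["198.51.100.7", "203.0.113.99", "192.0.2.1"])
# the 203.0.113.x peer sorts first
```

Prefix length is a weak proxy (two addresses in the same /24 can still be continents apart on some providers), which is part of why clients haven't just bolted this on.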
ISPs don't give a crap (Score:3, Interesting)
Furthermore, the revision is designed to eliminate the need for ISPs to deal with problems caused by excessive BitTorrent traffic on their networks
How wrong this is. ISPs don't give a crap about this and it's never going to work.
1. They don't give a crap, because the real reason they throttle is that they don't want you using your bandwidth. You know, the bandwidth you actually paid for. Whether you are supposedly clogging up their pipe or not is not the point. The point is that you are using more bandwidth than another user, and they could kick your ass and sell their internets to 1000 old ladies instead.
2. It's never going to work because of (1), and because the problem it's trying to solve was never a problem for the ISP; it was always a problem for the end user anyway. You think that the ISPs have big download pipes and small upload limits like you do? They don't. Their shit is symmetric. You can stop clogging your tiny upload allocation as much as you want; it's never going to affect the ISP. They never had an UP shortage, because they have equal up/down bandwidth and hand you tiny up limits. It may help the end user, but only if it's better than existing solutions, and if you already know what your ISP castrates your up bandwidth to, it's not.
route around the problem (Score:3, Insightful)
Get a seedbox. :)
Re: (Score:2)
You're still gonna need a torrent client to run on that seedbox.
Users don't need this protocol (Score:2)
I think it is somewhat pointless to throttle the speeds beyond your connection to the ISP. Usually (always?) your upload bandwidth is the limiting factor. Azureus has had an autospeed plugin for ages that monitors your latency and adjusts the upload speed based on that. It is responsive enough to detect when you are watching streaming video etc. and lower the upload speed when needed. And just to spite the ISP I usually make Azureus (not rTorrent) open 4000+ connections and run 10-100 torrents simultaneously.
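The delay-based control loop this thread keeps describing, back off when latency rises above a target, ramp up when the link looks idle, can be sketched as a simple proportional controller. The constants below are illustrative guesses (LEDBAT-style schemes, including uTP, aim for queuing delay around 100 ms), and the function names are mine, not from any client's code.

```python
TARGET_MS = 100.0   # queuing delay we try to stay under (LEDBAT-style target)
GAIN = 0.05         # how aggressively to react to each sample (assumed)
MIN_RATE, MAX_RATE = 1_000, 1_000_000  # upload bounds in bytes/sec (assumed)

def adjust_rate(rate, measured_delay_ms):
    """Scale the upload rate in proportion to how far the measured
    delay sits from the target: below target -> speed up,
    above target -> back off."""
    error = (TARGET_MS - measured_delay_ms) / TARGET_MS
    rate = rate * (1 + GAIN * error)
    return max(MIN_RATE, min(MAX_RATE, rate))

rate = 500_000.0
for delay in (20, 40, 250, 300, 80):   # simulated latency samples in ms
    rate = adjust_rate(rate, delay)
    print(f"delay={delay:3d} ms -> rate={rate:,.0f} B/s")
```

Because a saturated uplink is what inflates everyone's latency, keeping this loop slightly below the congestion point is what lets the VoIP call or video stream stay smooth while the torrent keeps running.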
My testing shows it's still not friendly to nets (Score:2)
Re:Why? (Score:5, Funny)
a) That's "The Fine Article", you insensitive clod!
b) You must be new here.
c) In light of b) Articles=bad. Summaries=good.
Re: (Score:2)
Why can't it just be "the article" most of the time?
Someone forgot to RTFM...
Re:god-fucking-awful summary (Score:4, Informative)
Re:god-fucking-awful summary (Score:4, Informative)
You have no clue what multicast is, do you? Please stop using that word until you get a fucking clue about what it is.
Hulu and Pandora are NOT multicast. If they were, it would put less of a strain on their networks.
I've owned an ISP and I can tell you, P2P applications like BT put a BIG strain on the network. You saying it's a myth doesn't make it so. It just shows that you're an idiot who talks out of his ass.
That being said, instead of bitching about network congestion like Comcast does, I would upgrade my network to keep up with the demand. I got a lot of customers that way. Long-term, it's the better strategy.
Re: (Score:2)
No, you just need a good balance between making a profit and upgrading.
Simply put: use a set portion of your profit to upgrade your network. This is difficult at the start, but it pays off after some time.
I always wondered.... (Score:2)
Chances are that on a popular service there will be a significant number of people attempting to view the same thing at roughly the same time. This way you can at least reduce some of the bandwidth.
Anyone else heard of
Re: (Score:2)
In the long term, such laws are probably a good thing. They'll just advance the state of the art in p2p so that crypto, packet shaping, anonymous routing etc. becomes the default. Of course it won't prevent the RIAA sniffing traffic and attempting to make inferences but it will be much harder for them to prove their case.