Networking

ISPs & P2P, Getting Along Without Getting Cozy

penguin-geek writes "Researchers at Northwestern University have discovered a way to ease the tension between ISPs and P2P users. As we all know, there's been a growing tension between Internet Service Providers (ISPs) and their customers' P2P file-sharing services, and this has driven service providers to forcefully reduce P2P traffic at the expense of unhappy subscribers and the risk of government investigations. Recently, some ISPs have tried to fix the problem through partnerships with certain P2P applications. The Ono project represents an alternative solution: a software service that allows P2P clients to efficiently identify nearby peers, without requiring any kind of cozy relationship between ISPs and P2P users. Using results collected from over 150,000 users, they have found that their system locates peers along paths that have two orders of magnitude lower latency and 30% lower loss rates than those picked at random by BitTorrent, and that these high-quality paths can lead to significant improvements in transfer rates. In challenged settings where peers are overloaded in terms of available bandwidth, Ono provides a 31% average download-rate improvement; in environments with large available bandwidth, Ono increases download rates by 207% on average (and improves median rates by 883%). Ono is available as a plugin for the Azureus BitTorrent client, an open tracker, and a standalone service you can integrate into any P2P system."
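A minimal sketch of the core idea, not the Ono plugin's actual implementation: as comments below explain, Ono infers network proximity from which CDN edge servers each user is directed to, so two hosts that keep resolving the same CDN-hosted names to the same edges are probably close on the network. The hostnames here are placeholders (Ono watches names served through Akamai's network), and the overlap score is just one plausible way to compare views.

    import socket

    # Hypothetical CDN-hosted hostnames, placeholders for illustration only.
    CDN_NAMES = ["img.example-cdn.com", "static.example-cdn.net"]

    def cdn_view(names=CDN_NAMES):
        # The set of CDN edge addresses this host is currently directed to.
        ips = set()
        for name in names:
            try:
                for info in socket.getaddrinfo(name, 80, proto=socket.IPPROTO_TCP):
                    ips.add(info[4][0])
            except socket.gaierror:
                pass  # not resolvable from here; skip it
        return ips

    def proximity(my_view, peer_view):
        # Jaccard overlap of two CDN views; higher suggests "probably nearby".
        if not my_view or not peer_view:
            return 0.0
        return len(my_view & peer_view) / len(my_view | peer_view)

    # A client would exchange these sets with candidate peers and prefer the
    # peers whose view overlaps most with its own.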
  • Standard (Score:3, Insightful)

    by gustolove ( 1029402 ) on Monday May 05, 2008 @11:50AM (#23301430) Journal
    Should be made standard in the apps if it does all that it claims.
    • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday May 05, 2008 @11:55AM (#23301490)
      They are looking at the PHYSICAL location of the machines.

      As far as I am aware, most bittorrent clients already search for the machines with the fewest hops and lowest latency. Translation: machines on the same NETWORK as them.

      Because if I am on Comcast at home and you have DSL through ATT at home and our homes are within 500' of each other ... that means NOTHING with regard to hops and latency between us.
      • While your argument makes some sense in theory, it doesn't change the fact that this project is apparently reporting some very straightforward numbers which seem to indicate that in practice your point doesn't hold much water. I understand what you're getting at, but a 207% average speed increase is a 207% average speed increase. If you've investigated and gotten different results, please feel free to share. How directly that translates into a savings in bandwidth for the provider, I don't know, but I don't think that's what the GP was getting at.
        • by snowraver1 ( 1052510 ) on Monday May 05, 2008 @12:16PM (#23301746)
          I have been using Ono for about 6 months now. When I installed it, it made very little difference at all. I usually get pretty good speeds though, with or without. I am still using the plugin now (with Azureus) and am using it more because I'm too lazy to uninstall it than for the speed increase (if any).

          It sounded cool, but didn't work for me. I am curious if anyone else noticed similar findings, or if I am all alone.
          • Re: (Score:3, Insightful)

            by Hatta ( 162192 )
            Yeah I'm not seeing how this is going to be too useful in most cases. If you have enough seeds that you can afford to pick and choose which ones you download from, you're going to be getting high speeds anyway. If you have low speeds, you're not going to be picky about your seeds.
            • If you happen to download a very popular torrent (such as the latest Naruto or BSG episode) with 30000 seeds all over the world, then this would be a godsend.

              As I described here: http://www.aigarius.com/blog/2006/08/12/bit-horizon/ [aigarius.com]

              If the torrent client chooses a peer at random and gets a peer across the world from you, then there will be bad traffic between you two. If all peers are such unlucky choices (which is a significant probability for high-popularity torrents) then you will have a low total download speed.
          • I think more people need to use it for it to work better.
          • by msobkow ( 48369 )

            I just installed it and saw an immediate doubling of my transfer rates after I restarted Azureus. I'd have to say that it seems to work, at least for the torrents I have on the go right now.

          • by Inda ( 580031 )
            Same as that...

            Been using it roughly six months too. I've only ever seen one peer close to me in the logs but, saying that, I don't use BT for new stuff, that's what Usenet's for.
      • by CountZer0 ( 60549 ) on Monday May 05, 2008 @12:23PM (#23301828) Homepage
        Except they aren't only looking at the physical location of the machines. They are basically merging both network and physical location to come up with a hybrid location mapping that provides the lowest latency route.

        From the FAQ:
        Does this really work? In a paper pending publication, we show that our lightweight approach significantly reduces cross-ISP traffic and over 33% of the time it selects peers along paths that are within a single autonomous system (AS). Further, we find that our system locates peers along paths that have two orders of magnitude lower latency and 30% lower loss rates than those picked at random, and that these high-quality paths can lead to significant improvements in transfer rates.
        • Re: (Score:3, Insightful)

          by Hatta ( 162192 )
          I haven't read their paper obviously, but those numbers might not mean much in the real world. For instance, latency means nothing when you're downloading a large file. A 30% lower loss rate might matter, but only if your loss rate is already a significant limiting factor.

          Availability of peers is likely to be the limiting factor in any real life situation. Using an app that's picky about its peers isn't going to improve that at all.
          • Re: (Score:3, Insightful)

            by Shakrai ( 717556 ) *

            For instance, latency means nothing when you're downloading a large file

            No, but latency might be useful in trying to figure out which peer is closer to you on the network.

          • Availability of peers is likely to be the limiting factor in any real life situation. Using an app that's picky about its peers isn't going to improve that at all.
            If other peers can exchange pieces more quickly by using information about network topography, then the peers become available to send pieces to you more quickly.
        • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday May 05, 2008 @12:57PM (#23302194)
          If I may ...

          ... Further, we find that our system locates peers along paths that have two orders of magnitude lower latency and 30% lower loss rates than those picked at random ...
          And THAT is the problem with this work.

          The current torrent clients do not RANDOMLY pick an address. They check latency and hops.

          Sure, it's easy to get HUGE IMPROVEMENTS when you choose to compare yourself against something that no one does anyway.

          I'll wait to see what their app does when compared to the current methodology of the clients. I'd guess that it would be WORSE than simply measuring the latency and hops. Which is already done and done rather more efficiently than their method of querying 3rd party servers.
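For reference, a minimal sketch of what ranking candidate peers by measured latency could look like, using a timed TCP connect as a crude RTT probe. This illustrates the approach the parent describes rather than what any particular BitTorrent client actually does, and a connect time also folds in handshake overhead.

    import socket
    import time

    def rtt_probe(host, port, timeout=2.0):
        # Crude latency estimate: time a TCP connect to the peer.
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return float("inf")  # unreachable peers sort last

    def rank_by_latency(peers):
        # peers: iterable of (host, port) pairs; returns them fastest first.
        return sorted(peers, key=lambda p: rtt_probe(*p))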
          • by Sentry21 ( 8183 ) on Monday May 05, 2008 @03:09PM (#23303794) Journal
            Great, except that latency and hops mean very little in terms of throughput. As an example, being in Vancouver on Shaw, I'm likely to get better speeds from a node in Toronto on Shaw (quite a few hops away, and relatively latent) than from a Telus user here in Vancouver.

            The reason? Shaw owns a national fibre network that crosses the country, and you can traverse that distance without leaving their (impressive) network. In comparison, going to Telus, which is not that far away in terms of hops and latency, requires crossing border routers which, at peak periods, are very likely saturated.

            One thing I wish my torrent clients would do is stop accepting uploads from peers with worthless transfer rates. When I have three seeds sending data to me at 120 KB/s on average, and forty sending data at 0.5 KB/s on average (and not downloading at all), those connections are accomplishing pretty much nothing. I'd rather disconnect from them, and try to find other peers with whom I can exchange data faster (in both directions).

            Especially on private trackers, where the 'maximum number of peers' I connect to are all downloading from me at 1 kb/s each; this actively harms my ratio, because I have to seed the torrent for weeks to hit 1:1; I'd rather connect to someone else and ship them 100 KB/s so I can get the data out there faster, and not suffer because of people with shitty routes.

            That, more than anything, is what I hope for this technology.
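A tiny sketch of the pruning policy the parent is wishing for, with made-up thresholds (a 5 KB/s cutoff and a two-minute grace period). Real clients rely on tit-for-tat choking and optimistic unchoking instead, as a later comment in this thread notes, so this is the wish, not the protocol.

    MIN_USEFUL_BPS = 5 * 1024   # hypothetical cutoff: 5 KB/s
    GRACE_SECONDS = 120         # give newly connected peers time to ramp up

    def peers_to_drop(peers, now):
        # peers: dict of peer_id -> (connected_since, avg_bytes_per_sec).
        # Returns the ids worth disconnecting to free slots for faster peers.
        drop = []
        for pid, (since, bps) in peers.items():
            if now - since > GRACE_SECONDS and bps < MIN_USEFUL_BPS:
                drop.append(pid)
        return drop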
            • by Sancho ( 17056 ) *
              Ratios, in general, are a bad idea. They're a pyramid scheme. You have to get in early to even have a shot at a high ratio.

              I don't think that BitTorrent was designed with ratios in mind, but it was definitely designed with availability in mind. Private trackers should track how long you're connected to the torrent, not how much you've actively uploaded. If you've been seeding for a week, but you only managed to upload a small amount due to other crappy peers, you shouldn't be penalized.
              • I disagree. I value the 10mbit uploader who's there at release time a whole lot more than someone who comes by a month later with saturated upload. Credit should be given where it's due. Sites often give a bonus for seeders over regular peers. About the only tweak I'd make is to increase the bonus for near-dead torrents.
              • BitTorrent was designed to be a scalable high-performance system. Ratios are part of making that work, and Bram went through several iterations of tuning to get the pyramid scheme to work well.

                Early connectors are likely to have high ratios unless they abandon right after getting their full file, and late arrivers are going to be mostly leeching, and to some extent that's ok - but most people will get their files earlier if people are more generous, and also they'll get them earlier if they download from f

      • Out of curiosity, I fired up my Azureus BitTorrent client, let a few peers connect, and ran the IP addresses through traceroute. On a Comcast connection I got nothing; all of the traces timed out because Comcast has blocked them. And Comcast was one of the companies working with Pango to develop a BT client that selects low-hop peers first.
      • Actually it's looking at the network path.

        Not directly, but via which CDN servers each user sees.
    • Bingo!

      It doesn't. Most BitTorrent clients are already somewhat location-aware, in the sense that they try to prioritize peers according to a number of factors, and each peer can pick and choose who they deal with. Between the two, things usually sort themselves out fairly decently.

      Is there room for improvement? Hell, yes! But I think this Ono thing is fixing a problem that didn't necessarily exist in the first place. What would be nice is to retool the algorithms to favor same-network peers, but that
  • internet gps (Score:5, Insightful)

    by caffeinemessiah ( 918089 ) on Monday May 05, 2008 @11:51AM (#23301436) Journal
    nice idea...but looks like it's piggybacking on Akamai's database for geo/IP mappings. I wonder if Akamai's TOS is friendly to this sort of stuff. In any case, this sort of feature could be built into the BT protocol itself to achieve the same end if necessary.
    • by billstewart ( 78916 ) on Monday May 05, 2008 @12:09PM (#23301672) Journal
      Akamai's content distribution system works by putting large numbers of small caching servers around the internet, on ISP networks, and using algorithms to connect clients to the closest server while doing some level of load-balancing. (There are other CDNs that work by putting small numbers of large servers at peering points.) So if two clients get connected to the same Akamai server when they're retrieving some Akamai customer's content, they're probably nearby in a network sense. That doesn't require getting lots of detail from Akamai's network - though it might be more accurate if it did.


      It's an interesting approach - you can also do things like identifying IP addresses by BGP Autonomous System Number, which will tell you what sites are in the same ISP, but you might get better P2P performance by connecting to a peer on another ISP in your same city than a peer who's on your ISP but across the country. (Most ISPs seem to assign ASNs on roughly a continent or country level.) So sometimes you'll get better P2P performance by picking close ping times, but as the article says, pinging lots of potential peers can take a long time.
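A minimal sketch of the ASN grouping mentioned above, with a hypothetical prefix-to-ASN table (RFC 5737 documentation prefixes and private AS numbers, purely as placeholders). A real implementation would load the mapping from a BGP routing table dump rather than hard-code it.

    import ipaddress

    # Hypothetical prefix -> ASN table for illustration only.
    PREFIX_TO_ASN = {
        ipaddress.ip_network("203.0.113.0/24"): 64500,
        ipaddress.ip_network("198.51.100.0/24"): 64501,
    }

    def asn_of(ip_str):
        ip = ipaddress.ip_address(ip_str)
        for prefix, asn in PREFIX_TO_ASN.items():
            if ip in prefix:
                return asn
        return None  # not covered by the table

    def same_asn(my_ip, peer_ip):
        # True when both addresses fall inside the same autonomous system,
        # which (as noted above) is a coarser signal than city-level locality.
        a, b = asn_of(my_ip), asn_of(peer_ip)
        return a is not None and a == b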

  • Double Edged Sword (Score:4, Insightful)

    by VorpalRodent ( 964940 ) on Monday May 05, 2008 @11:51AM (#23301448)
    Despite all the legitimate uses for P2P and the associated technologies, there appears to be a rather pervasive view (spin, rather) that all possible uses are nefarious.

    As such, this will likely get spun as making the process of copyright infringement more efficient. Will that lead to this being blocked or otherwise pushed back against?

    • by billstewart ( 78916 ) on Monday May 05, 2008 @12:40PM (#23302002) Journal
      There are two groups of people who don't like P2P - the RIAA who want to spin it as content thievery (which, ok, it often is), and the ISPs, who don't like getting their networks swamped and having to pay more for transit with upstream ISPs or increasing the size of their peering with peers and their internal distribution links. Right now, both of those forces are pointed in the same direction.


      Making P2P more efficient by aligning peer selection with ISP structure makes the ISP side less grouchy about it. This is good. The more precisely you can do that, the more you reduce the impact on the ISP's performance and costs, as well as getting better performance for the P2P system. So they're generally going to like it, though it's obviously a balancing act, because better alignment means you can also find the bottlenecks in your ISP and fill them.


      So no, as long as you're not bothering Akamai too much, and as long as this works reasonably well with your ISPs, it's not going to get pushback.


      Back when Napster was still around, it did some work with some universities to set up peering student-student rather than student-outsider, because that way most of the bandwidth stayed on the fat cheap university LANs rather than the thinner and rapidly-overloaded links to the Internet. Some of this happened naturally (students would show up as having fast connections, so students would generally upload from other students, but outsiders would also try to upload from students.) Napster could do this fairly easily, because they had a centralized database. Bittorrent and most other P2P systems today are designed to avoid having a centralized database, because it was a target.

      • Re: (Score:3, Informative)

        by Shakrai ( 717556 ) *

        Bittorrent and most other P2P systems today are designed to avoid having a centralized database, because it was a target.

        Uhh, bittorrent does have a 'centralized database' -- it's called a tracker.

        Granted, there are some trackerless implementations but bittorrent wasn't "designed" to avoid having a "target". It was designed to efficiently share large files.

        • Re: (Score:2, Informative)

          Trackers aren't centralized databases in the way Napster's database was. Napster's central database served as a single global tracker. That doesn't exist in torrent land. Downloaders were inconvenienced by Demonoid going down for a period of time, but BT wasn't threatened.
          • Agreed, and that makes a real difference. Avoiding having a single target which could be used to shut down the whole system was one of Bram's objectives when he was writing Bittorrent. Individual trackers can be shut down, so a person distributing a given file or bunch of files can be told to shut down, but the Bittorrent system as a whole keeps working just fine, because there isn't one big "Kick Me" target on it like Napster had. That also reduced some of the need for secrecy that some of Bittorrent's
        • Re: (Score:1, Troll)

          by Brett Glass ( 98525 )
          Well, whatever you think of P2P, you must admit that it was designed for the purpose of piracy. That's just a matter of historical fact. Some companies that want to profiteer on it (e.g. BitTorrent, Inc.) are trying to get legitimate media outlets to use P2P, but it's unwise of them to do it; it might prevent crackdowns on piracy due to "substantial noninfringing uses." The best thing to do about P2P is to develop client/server alternatives to it. They will be more efficient and can be made at least as fas
          • by Dan541 ( 1032000 )

            Some companies that want to profiteer on it (e.g. BitTorrent, Inc.) are trying to get legitimate media outlets to use P2P, but it's unwise of them to do it; it might prevent crackdowns on piracy due to "substantial noninfringing uses."

            Um... That's the whole point.

            P2P was intended for file sharing, which was its original purpose, but it always had problems due to residential users having lower upload than download speeds. BitTorrent was written to solve this problem, and now people are capable of downloading and distributing huge amounts of data that would previously have been inconceivable.

            P2P is a perfectly legal and legitimate way to distribute media and the more people we get on our side the more we can fight people who are trying to contr

            • It's the best way to allocate limited resources. P2P breaks the asymmetrical bandwidth model not because it is any more efficient but rather because it allows the content provider's costs to be shifted from the content provider to the ISP. The original and explicit contract of the Internet, since its inception, has been simple: each side pays for its connection to the backbone. But some content providers don't want to pay their freight. They want to shift the cost of distributing their content to someone else.
              • by Dan541 ( 1032000 )

                It's the best way to allocate limited resources. P2P breaks the asymmetrical bandwidth model not because it is any more efficient but rather because it allows the content provider's costs to be shifted from the content provider to the ISP.

                The ISP doesn't pay anything, unless they are providing FREE service.

                But some content providers don't want to pay their freight. They want to shift the cost of distributing their content to someone else.

                How is a content provider who uses bit torrent NOT paying for distribution?

                I know people who are paying hundreds of dollars a month to distribute via bit torrent. Bandwidth costs money and if the user is happy to contribute some bandwidth towards distribution, then why shouldn't they? I see plenty of PayPal donation buttons on websites; how is bandwidth different?

                100% of my websites and content (although it's been a while now) are/wher

                • The ISP doesn't pay anything, unless they are providing FREE service.

                  Yes, it does, because it is providing flat rate service. The user pays it no more money, but its costs go through the roof. Hence, the distributor of the content is setting up a server on their networks and taking service from them without compensation.

                  How is a content provider who uses bit torrent NOT paying for distribution?

                  It is not paying its freight to transport its content to the Internet backbone. This is the fundamental contract of the privatized Internet: everyone pays his or her way to the backbone.

                  • by Dan541 ( 1032000 )

                    This is the fundamental contract of the privatized Internet: everyone pays his or her way to the backbone.

                    Then where is the problem?

                    P2P is a method of (a) shifting costs and (b) avoiding the establishment of centrally located sites which can be shut down

                    That's the advantage of it, Yes
                    The Internet has always been about freespeech and allowing others to voice there opinions. Why should only the rich be allowed to distribute their content? Do you know what it costs to run a server? I don't know many people who could afford to do so, let alone have the knowledge to manage the server.

                    The ISP doesn't pay anything, unless they are providing FREE service.

                    Yes, it does, because it is providing flat rate service. The user pays it no more money, but its costs go through the roof. Hence, the distributor of the content is setting up a server on their networks and taking service from them without compensation.

                    Hang on, are you saying that the ISP shouldn't have to provide the service their customer paid for?
                    I think most consumer protection laws would disagree wi

                    • Then where is the problem?

                      The problem comes when content providers refuse to pay their freight.

                      That's the advantage of it, Yes The Internet has always been about freespeech

                      ...which does not include theft of service or theft of intellectual property.

                      and allowing others to voice there [sic] opinions.

                      Which has nothing to do with P2P, since P2P is not used to voice one's own opinion. It's used to make illicit copies of others' work or to distribute content which should be distributed via other means.

                      Why s

                    • by Dan541 ( 1032000 )
                      I'm sorry but I find it difficult to believe that you even have a faint clue what you're talking about.

                      What the hell am I doing paying $300 a month when I can apparently get the service for free with a $15 domain name.

                      You claim that content providers should pay for their content and then spout bullshit that they should obtain their services for free. The problem is that infrastructure isn't free; at the end of the day someone has to pay.

                      The customer did not pay to operate a server (see the terms of service for virtually all residential Internet service). And the content provider did not pay for any service from the ISP at all.

                      P2P isn't a server; you obviously DON'T know what a server (or P2P) is

      • Making P2P more efficient by aligning peer selection with ISP structure makes the ISP side less grouchy about it. This is good. The more precisely you can do that, the more you reduce the impact on the ISP's performance and costs

        Yeah... maybe. Or maybe heavy BitTorrent users will just download more stuff in the same amount of time, keeping last-mile network utilization constant.

        Basically, most of the "recreational" P2P users I've known just download as much as they reasonably can; the faster their network connection, the more they download.

        So more efficient P2P algorithms will help keep network utilization down from people like me (I use BitTorrent occasionally to download large files when it's convenient, or is the default m

      • by rtechie ( 244489 ) *

        There are two groups of people who don't like P2P - the RIAA who want to spin it as content thievery (which, ok, it often is), and the ISPs, who don't like getting their networks swamped and having to pay more for transit with upstream ISPs or increasing the size of their peering with peers and their internal distribution links. Right now, both of those forces are pointed in the same direction.

        Well, you might notice that the cable companies are doing a lot of throttling and the phone companies... aren't. Why do you think that is? It's because the lion's share of bittorrent traffic is video, often the very same video the cable companies are broadcasting. IPTV has been taking off in a huge way lately. They don't want the competition, which is why they're not going to cooperate with this initiative. Their partnership with bittorrent is about mooching their users' bandwidth to SELL PPV video and movies

    • by PCM2 ( 4486 ) on Monday May 05, 2008 @12:57PM (#23302210) Homepage
      Despite all the legitimate uses for handguns|hemp|abortions|porn|foreigners and the associated technologies, there appears to be a rather pervasive view (spin, rather) that all possible uses are nefarious.

      As such, this will likely get spun as making the process of violent crime|drug abuse|premarital sex|rape|taking our jobs more efficient. Will that lead to this being blocked or otherwise pushed back against?
      • Re: (Score:3, Interesting)

        by Kjella ( 173770 )
        There's those, but there's also the techno-optimists who think any possible advance in science and technology must by definition be progress. Creating a super-resistant, super-lethal, super-contagious bioweapon may be a great feat of genetics and biochemistry, but I doubt it'd be to humanity's progress. It's quite impossible to turn back time and pretend we don't know what we know, but it doesn't mean that every change should be embraced. Take anonymous P2P which would mean absolute free speech, not just p
    • No, so long as we have legit uses, I can't see how they can do that. The only thing I use p2p to download is OSS. This weekend I downloaded MythDora 5, and in 8 days I'll be downloading Fedora 9. I download distros all the time with BT.

      Further, how does the ISP know something is legit or not? What if some indy or signed band or director wants to start distributing music/movies (free license or not) via p2p?
      • Further, how does the ISP know something is legit or not?

        Must we always come back to this? This was already answered once and for all: The Evil Bit [wikipedia.org].

  • by Animats ( 122034 ) on Monday May 05, 2008 @11:56AM (#23301492) Homepage

    That's been the trouble with these "peer to peer" protocols. The routing algorithms have been horribly inefficient. It's quite possible to have the same data flowing in both directions on the same pipe. Multiple copies, even.

    It might be cheaper for the telecom industry (which is big) to buy out the music industry (which is tiny) and just cache the RIAA's entire output on local servers. Just caching the top 100 releases or so might cut traffic in half.

    (This won't scale to movies, though. Movies are bigger and more expensive to make.)

    • Re: (Score:2, Redundant)

      by crossmr ( 957846 )
      Multicast would do wonders on the internet for anything with a high volume.
      I've thought the same about work places that allow streaming music. Put in a media server that pulls the top X streams down once, and then internal users could hit that. Rather than several hundred streams, maybe you cut it down by 80%.
      • Re: (Score:2, Informative)

        by Shakrai ( 717556 ) *

        Multicast would do wonders on the internet for anything with a high volume.

        We see some variation of this thought expressed in every p2p/bandwidth related story but would it actually help that much?

        How is multicast going to reduce the bandwidth requirements of video on demand (i.e: Netflix instant view) applications? You request something, the server sends it to you. Unless somebody else is requesting that exact same movie (and requesting it at the exact same time as you) how the hell does multicast help?

        It might be useful for live events (think of the Presidential Debates)

        • Multicast doesn't magically help with every possible application, but it *would* help with classical block-based P2P file trading.

        • Re: (Score:3, Interesting)

          by crossmr ( 957846 )
          No, not all. And nothing will be a magic bullet that is going to solve everything. You need to make improvements where you can. Bittorrent (which supposedly takes up 175% of all internet traffic, if some sources are to be believed) will benefit from multicast on any torrent that has more than 1 leecher on it. Live events, streaming music, and any other kind of service which isn't on-demand (you join something in progress rather than it starting fresh for you) will benefit from it.
          On demand stuff can only benef
        • Re: (Score:3, Informative)

          by nuzak ( 959558 )
          > Unless somebody else is requesting that exact same movie (and requesting it at the exact same time as you) how the hell does multicast help

          Someone probably is requesting the exact same movie at roughly the same time. Have a few multicast streams going that are offset by some interval. You request the chunks that aren't being multicast, then synchronize to the first available multicast stream when it's available.

        • 10 minute tape delay (Score:3, Informative)

          by tepples ( 727027 )

          How is multicast going to reduce the bandwidth requirements of video on demand (i.e: Netflix instant view) applications? You request something, the server sends it to you. Unless somebody else is requesting that exact same movie (and requesting it at the exact same time as you) how the hell does multicast help?

          The first ten minutes are streamed normally. At some time during this ten-minute period, everyone else watching the same movie as you and who started within the same ten-minute period gets a multicast stream of the second ten minutes. Continue until the entire movie has been streamed in ten-minute blocks.
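A rough sketch of one consistent way to schedule the blocks described above, treating everyone who starts within the same ten-minute window as one cohort: block 0 is unicast to each viewer, and each later block is multicast once, at playback rate, starting when the cohort's earliest viewer reaches it; later starters buffer it until their own playback catches up. The block length, timing, and dict layout are illustrative, not from the comment.

    BLOCK_MIN = 10  # block length in minutes

    def stream_plan(start_minute, movie_minutes):
        # Viewers starting within the same ten-minute window form one cohort.
        cohort = start_minute // BLOCK_MIN
        n_blocks = -(-movie_minutes // BLOCK_MIN)  # ceiling division
        return {
            "cohort": cohort,
            "unicast_blocks": [0],  # the first ten minutes stream normally
            # Block k is multicast starting when the cohort's earliest viewer
            # needs it; everyone who started later in the same window buffers
            # it until their own playback reaches minute k * BLOCK_MIN.
            "multicast_start_minute": {
                k: (cohort + k) * BLOCK_MIN for k in range(1, n_blocks)
            },
        }

    # Example: a viewer who starts at minute 23 of a 120-minute movie is in
    # cohort 2, gets block 0 by unicast, and buffers blocks 1-11 from cohort
    # multicasts that begin at minutes 30, 40, ..., 130.
    print(stream_plan(23, 120))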

    • by JustinOpinion ( 1246824 ) on Monday May 05, 2008 @12:39PM (#23301988)

      That's been the trouble with these "peer to peer" protocols. The routing algorithms have been horribly inefficient. It's quite possible to have the same data flowing in both directions on the same pipe. Multiple copies, even.
      Seems to me that is an artifact of a protocol being designed to operate on a hostile network.

      Distribution could be wildly efficient if the users and the network operators were on the "same team." If they wanted to, they could design a bit-torrent variant where chunks are cached by intermediary servers, so that they can always be delivered quickly from a local node. Further, servers could maintain accurate models of network topology, and clients could then use this data to pick the best path. Chunks from popular files would almost always be available from a nearby server cache or a nearby peer.

      The problem is that the network is either indifferent to user activities, or actively trying to prevent user activities (throttling, etc.). The end result is that the protocol is tweaked not for efficiency, but for circumvention (e.g. encryption).

      I like the idea presented in the summary, since it is in principle a net benefit to both the users and the network operators. However even if it works, it may not last. For instance, ISPs may use even more aggressive tricks (maybe even exploiting this proposed variant), forcing the protocol to become even more inefficient (e.g. switching to a multi-hop TOR-like protocol).
  • by EMeta ( 860558 ) on Monday May 05, 2008 @12:00PM (#23301546)
    I'm no expert in this field, but this sounds to me like computers in isolated areas would suddenly get the shaft. Am I missing something?
    • by wattrlz ( 1162603 ) on Monday May 05, 2008 @12:05PM (#23301622)
      Don't computers in remote locations already get the shaft? If there're no peers within your TTL, then you're SOL.
      • by Shakrai ( 717556 ) *

        Does this ever actually happen in the real world? I'm doubtless spoiled living in the United States but I've never seen a traceroute with more than 30 or 35 hops on it. Isn't the lowest default TTL for any (major) operating system at least 64?

        • It used to happen to me all the time way back when I had the patience to use p2p. I really don't know how it took 256+ hops to find someone with the same musical tastes as I had, but an mp3 could take months to complete, if it completed at all, because the one other guy in the state who liked, say, Wumpscut had an HD meltdown.
      • by Strange Ranger ( 454494 ) on Monday May 05, 2008 @12:57PM (#23302206)
        AFAIK there are often ISPs in BFE that can give you a decent ttl. It's just a PITA getting them to honor their TOS so your packets don't go MIA.
    • This seems like it would only affect downstream bandwidth. That is, this affects which hosts a client will request information from. I don't see any reason why it would have any effect on whether or not a host would accept a requested upstream connection. So, now that I think about it, I guess it would be BENEFICIAL to people living in the boonies on account of they'd receive fewer requests to upload data.
    • I'm no expert in this field, but this sounds to me like computers in isolated areas would suddenly get the shaft. Am I missing something?
      I don't know. Are you an Albanian goatherd using a 2400bps modem from your mountainside shack?

      If so, then I guess you're probably missing something. However, if you're in that situation, you can put torrenting for prawns near the bottom of the list.

    • Not necessarily. It depends on how oversubscribed the ISP is. Put it this way: if 50% of the P2P users in one location can completely flood the upload traffic of their ISP, then even if the peers are more interconnected, they can still flood the upload traffic of the ISP, while allowing the other 50% of their aggregate bandwidth to be used among themselves.

  • Can't the torrent clients simply check the TTL value and then prefer closer peers?

    Man, talk about reinventing the wheel.
    • But a low TTL does not imply low latency. It is likely, but may not always be the case. Also, a short-TTL, low-latency connection can still be unstable, causing resends and data loss (malware, bad (wireless) connection, old OS, etc). It can also be a low-bandwidth/poorly configured connection, only giving you 1-2KB/s. The story is thus a bit more complicated than just TTLs and latency. Quality and speed of the peer's connection also come into play.
      • The story is thus a bit more complicated than just TTLs and latency. Quality and speed of the peer's connection also come into play.

        This is all information that the torrent client has or can get easily. The client can monitor rolling averages for speed and do periodic latency tests. The client could also tell the other side what initial TTL it sends its packets with, so that the receiving client can calculate the distance.

        You don't need to build an external database to look this stuff up.
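A rough sketch of the distance calculation the parent has in mind, assuming the usual OS default initial TTLs (64, 128, 255). As other replies point out, middleboxes can rewrite TTL in transit, so this is a heuristic rather than a measurement.

    COMMON_INITIAL_TTLS = (64, 128, 255)  # typical OS defaults

    def estimated_hops(received_ttl):
        # Guess hop count by assuming the sender started from the nearest
        # common default at or above the value we actually observed.
        for initial in COMMON_INITIAL_TTLS:
            if received_ttl <= initial:
                return initial - received_ttl
        return None  # values above 255 are impossible; treat as unknown

    # Example: a packet arriving with TTL 118 was most likely sent with 128,
    # so it crossed roughly 10 routers on the way here.
    print(estimated_hops(118))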

    • The problem with TTL is that it can be re-set at the routing points. For large backbones, the same data could be routed through multiple points, some of which have multiple intermediary hops, and TTL will not be adjusted until it hits the same end-point. This is one of the "priority" methods large network houses already use to prioritize packets; to the end user/switch/router/etc. it just looks like an overloaded or slow router.

      Because these companies have already messed with TTL, it is now broken for use
  • by Brett Glass ( 98525 ) on Monday May 05, 2008 @12:04PM (#23301610) Homepage
    One thing that many people do not think about at first (but realize when it's pointed out to them) is that mechanisms which try to identify peers on the same ISP's network are anticompetitive. (That's why only the biggest carriers, like AT&T, support them.) Here's why. The cable and telephone monopolies have so many customers that the odds are there will be someone else on the same provider's network with the requested files. Small ISPs, on the other hand, will rarely if ever have someone with that file and so will still experience a great impact from the cost shifting and congestion caused by P2P. Hence, you can see why the big guys are cautiously embracing schemes like "P4P" as an anticompetitive weapon to block new entrants -- particularly wireless ones.
  • Setting aside the Net Neutrality implications of this development if it were to enter mass deployment (use this P2P software!), ISPs will be loath to actually install this technology. It would leave them implicitly condoning P2P, the majority of which is used for copyright infringement. Besides, it'd cost them actual money, compared to lobbying and whining at the government.
    • There is nothing for the ISP to install (RTFM?)
    • by billstewart ( 78916 ) on Monday May 05, 2008 @01:15PM (#23302424) Journal
      ISPs don't actually care about copyright infringement, except possibly the cable modem companies which are also selling television and might have their advertising revenues impacted. Back when Napster and @Home were still around, @Home had two positions on Napster - officially, they'd say "Evil Copyright Infringers are Bad! And people generating upstream bandwidth from home are Bad!". Unofficially, the people who worked there mostly said "Well, duh! The reason people are buying broadband at home is to download music - Napster's really great for us!"


      ISPs care about money - buying more upstream costs money, and upgrading peering links or internal distribution networks costs money. They also care about customer perceived performance, and if P2P uses their networks inefficiently, and swamps a neighborhood's upstream in ways that interfere with TCP performance, that's bad. For the most part, this technology will reduce their costs by reducing exterior bandwidth, and that's good, as long as it doesn't do it in ways that the improved P2P performance finds other bottlenecks in their system to step on. The better the P2P paths can match the structure of the ISP, the lower the impact on their network will be.


      This approach doesn't actually require the ISP to install anything, or to do anything, or expose them to participating-in-P2P-themselves infringement conflicts; there are other approaches that do, such as putting P2P caching servers in their network. So it's pretty much all gravy for them, especially since they know that some large fraction of the bits they're carrying are P2P. (The Akamai caching servers here aren't being used to cache the P2P - they're web caches used by traditional content providers, and what this tool is doing is using their location to identify some of the structure of the ISP network to do better P2P peer matching.)

  • by Sun.Jedi ( 1280674 ) on Monday May 05, 2008 @12:35PM (#23301946) Journal

    In fact, when ISPs configure their networks properly, their software significantly improves transfer speeds
    Asking ISPs to properly configure their networks in response to a specific application will, in all likelihood, garner the same result as me asking my 2-year-old not to play in the dog's water bowl. Can't Azureus, BT, etc. just limit or filter TTL values to the same or similar effect?

    What about the Comcast [slashdot.org] effect? Although a joint venture would seem to help both sides, the bottom line from the network/legal/politician/*AA side is [voice of James Hetfield] P2P BAAAAD! [/voice].
  • I understand why location aware choking is helpful to ISPs - it reduces border traffic and their costs. I can also understand how location aware peer selection on the tracker can help torrents that have too many peers for every peer to be connected to every other peer. So does this client plugin or any other client based location aware selection/choking make any real difference for users? The classic tit-for-tat choking algorithm means you unchoke the peers giving you the fastest download. It doesn't ma
    • Re: (Score:3, Interesting)

      by Sancho ( 17056 ) *

      I understand why location aware choking is helpful to ISPs - it reduces border traffic and their costs.
      Well depending upon how ISPs determine egregious bandwidth usage, it could be the difference between getting a letter telling you to cut back and slipping under the radar.
  • Cool, now me and my neighbors will have something to talk about when we get notices from RIAA! Talk about bringing the love to a local level
  • This may be a stupid question, but if ISPs are looking to save on bandwidth, why don't they turn on IPv6? IPv6 multicast solves the problem of efficient 1:N distribution way better than P2P apps.

    Have a large file you want to distribute and want to do so using 2mbps of bandwidth? Pump the file in parallel using 1mbps, 512kbps, 256kbps, 128kbps, 64kbps and 32kbps so that people with all kinds of pipes can download it, and pump it in a loop. Add some amount of redundancy to each stream, and you are good to go.
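Some rough numbers for the carousel described above, under assumed parameters: a 700 MB file, 10% redundancy added for forward error correction, and "kbps" taken as 1000 bit/s. The six rates in the comment add up to roughly 2 Mbps of sender bandwidth, and each stream simply loops the whole file at its own pace.

    RATES_KBPS = [1024, 512, 256, 128, 64, 32]   # the parallel streams above

    def cycle_minutes(file_mb, rate_kbps, redundancy=0.10):
        # Time for one full pass of the carousel at one rate, with a fixed
        # fraction of redundant (FEC) data layered on top of the file.
        bits = file_mb * 8 * 1024 * 1024 * (1 + redundancy)
        return bits / (rate_kbps * 1000) / 60

    print(f"total sender bandwidth: {sum(RATES_KBPS)} kbps")  # about 2 Mbps
    for r in RATES_KBPS:
        print(f"{r:>5} kbps stream loops a 700 MB file every "
              f"{cycle_minutes(700, r):.0f} minutes")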
    • by Sancho ( 17056 ) *
      Several reasons.

      1) Cost efficiency. Is their current infrastructure IPv6 ready? Probably not.

      2) A significant number of the peers would also need to be on IPv6. Chicken and egg problem.

      3) P2P apps still need to know about IPv6 multicast, right?
    • Multicast is a good idea, but it isn't related to IPv6. There's IPv4, IPv4 with multicast, IPv6, and IPv6 with multicast.

      As I understand it, if ISPs enabled multicast their routers would explode due to the memory requirements.
  • by Simonetta ( 207550 ) on Monday May 05, 2008 @01:55PM (#23302894)
    There are two separate issues between the ISPs and the P2Ps. The details of the two issues tend to get mixed according to the perspective of the person making the argument.

    The first issue is the amount of data (the bandwidth issue) that the P2P downloader is using relative to the amount of bandwidth that the other ISP users are consuming. The other issue is the ability of the so-called owners of the downloaded information to legally extort money from P2P users.

    The P2P users are the best customers of the ISPs. In time, the technology improves to handle the growing needs of the P2P community, and the P2P'ers are willing to pay (within reason) for faster access and greater bandwidth. P2P'ers will pay $30-$50 more a month to the ISPs than the dial-up'ers who are mostly checking e-mail, reading specialized websites, and doing eBay trading. This makes the P2P'ers a significant revenue source to the ISPs.

    "Significant revenue source", in case you didn't know, is the most important three word phrase in the English language. "You're Under Arrest" is the second-most significant phrase in English. And, of course, the more 'sig rev source' that you have, the less you have to concern yourself with hearing "You're U A!" But, nevertheless, it can still happen. Especially in the current times of great change such as the present when one former source of sig revenue (the music industry) is evaporating and others like the P2P community are rising.

    Generally the law follows the money. The golden rule states that he who hath the gold maketh the rule. But, in the real world, money and law tend to be 90 degrees out of phase. Situations arise where a disappearing revenue source has, for a certain period of time, the ability to invoke the legal system to extort money from people in greater proportion than its social usefulness would have it deserve. The music industry, and its extortion arm, the RIAA, is in that position. This industry is entering its 'zombie' phase, in that it is already dead but doesn't seem to know it. Death for a business is a different concept than it is in biology. Zombie businesses are basically unsustainable in the long run because their economic model has been broken, but their structures are still functioning. Basically the RIAA is just the music industry running around like a chicken with its head cut off. It can't last, but you don't want to be in its way before it just falls over.

    Since the RIAA uses the ISPs to identify the P2P'ers that it has selected for random extortion, the P2P'ers don't trust the ISPs to come up with a working technical solution to the bandwidth problem. So we have the current situation that is bad for everyone. Personally I work around this by not downloading industry product: I get it in disc format from the local library and copy it from the disc onto my home PC. Then I return the disc to the library for the next person to use.

    The music industry insists that this is illegal in their parallel universe. And, there was a time when it appeared that the RIAA was going to take on the US Library Association. But the librarians have been dealing with assholes like this for 300 years and have their arguments in order. It always comes down to this point: yes, library users copy the most popular music recordings. Which does cut sales to a minor degree. But the 50,000 libraries buy (at full retail cost) one copy each of thousands of titles that wouldn't be selling 50,000 copies if the libraries weren't buying them. Basically, the library makes music available for people to copy. But the libraries pay off the music industry to ignore it. Everybody is happy.

    The P2P'ers need to adopt this model for distribution. They should find out who they are in their local areas, like a university, and then trade physical copies of the materials that they are interested in. Like having ALL the recent music of particular genre or favorite films on a single USB 500Gi
    • by TubeSteak ( 669689 ) on Monday May 05, 2008 @03:05PM (#23303758) Journal

      The P2P users are the best customers of the ISPs. In time, the technology improves to handle the growing needs of the P2P community, and the P2P'ers are willing to pay (within reason) for faster access and greater bandwidth. P2P'ers will pay $30-$50 more a month to the ISPs than the dial-up'ers who are mostly checking e-mail, reading specialized websites, and doing eBay trading. This makes the P2P'ers a significant revenue source to the ISPs.
      All this is wrong.
      The best customers of the ISPs are "dial-up'ers who are mostly checking e-mail, reading specialized websites, and doing eBay trading" AND "pay $30-$50 more a month to the ISPs".

      ISPs hate the traditional bandwidth hog and now they're starting to hate their traditional customers too, because those "dial-up'ers" on broadband are also moving towards bandwidth heavy internet habits.
    • Re: (Score:3, Insightful)

      If you've read this far and are a normal Slashdotter, then you think that I'm really weird. But, this is how the real world works. It's just that no one ever talks about it like this. Thank you.

      Wrong: just about everyone on slashdot who gets moderated past +3 talks like this. And it is not the way the world works. It's close, but there exist subtle and important distinctions between your parallel universe and the one you're living in.

      The biggest distinction is that we reward riches, because riches are a rew

  • The stated problem is that ISPs are upset with the P2P traffic because of its heavy load and want to throttle it. The proposed solution is supposed to increase my download speed. This seems to me to sound like exactly what will make my ISP even more upset.

    If the ISPs' claims are correct, what would make them happy would be P2P software that throttles itself to a very low transfer rate. The longer it takes me to download (or upload) a file, the less bandwidth I'm using at any given time, and the happier t
