Researchers Suggest P2P As Solution To Video Domination of The Internet

JPawlak writes "NewScientistTech reports that big businesses may be realizing the benefits of P2P technologies. Blizzard uses it to distribute patches for World of Warcraft, and now researchers at Microsoft are indicating internet users may have to use it to help distribute online video clips. The growing cost associated with delivering such content may be becoming prohibitive for some companies. 'The team also suggest a way to prevent Internet Service Providers' costs jumping when their users start uploading much more data. The trick is to allow sharing only between people with the same provider, when data transactions are free. That restriction would cut the pool of sharers into smaller groups, meaning MSN's servers would have to do more to fill any gaps in the service. But costs could still fall by more than half, simulations showed.'"
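
A minimal sketch of the restriction the researchers describe, assuming the client knows its provider's address blocks (the prefixes and peer list below are documentation/example values, purely illustrative):

    # Keep only peers whose addresses fall inside the ISP's own
    # prefixes, so uploads never cross a metered transit link.
    import ipaddress

    ISP_PREFIXES = [ipaddress.ip_network(p)
                    for p in ("203.0.113.0/24", "198.51.100.0/24")]

    def same_isp_peers(peers):
        """Filter (host, port) pairs down to in-network addresses."""
        return [(host, port) for host, port in peers
                if any(ipaddress.ip_address(host) in net
                       for net in ISP_PREFIXES)]

    peers = [("203.0.113.42", 6881), ("8.8.8.8", 6881)]
    print(same_isp_peers(peers))  # only the in-network peer survives

Any peer the filter drops is a gap the provider's own servers would have to fill, which is exactly the trade-off the simulations measure.
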
  • ISPs are gonna love this, since they're big fans of P2P as it is (Bittorrent and friends).
    • Re:haha oh wow (Score:4, Interesting)

      by arivanov ( 12034 ) on Sunday September 16, 2007 @05:48AM (#20624103) Homepage
      Actually they will.

      Especially when someone points out to the idiots from Redmondia (and other places) that they should stop reinventing multicast again and again. The technology to do what is needed is there; the ability of ISPs to control it so that it is not detrimental to other users is also there. It has been there since the dawn of the Internet. And it is Multicast. From the viewpoint of network design and network operation theory, P2P is nothing but an extremely lame, sorry, and sad excuse for Multicast emulation.

      Implementing it is solely a matter of minor network tidy-up for most ISPs along with some software updates for the CE devices (where not supplied by the ISP).

      By the way, the same methods which are used to control multicast are also valid for P2P services. Adjusting the TTL down to under 8 will usually confine the traffic to a single ISP, while adjusting it down to under 4 keeps it within the same RAS device (2 for non-NAT setups). It is also trivial to deliver the correct setting on a per-ISP basis and to autodetect the necessary adjustment.
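
      As a minimal illustration of that TTL scoping, using Python's standard socket API (the group address and payload are arbitrary examples):

          import socket

          GROUP, PORT = "239.192.0.1", 5007  # organization-local group

          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          # TTL 8: roughly "stay inside the ISP", per the rule of
          # thumb above; routers decrement it at every hop
          sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
          sock.sendto(b"chunk-announcement", (GROUP, PORT))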

      There is no rocket science here and no research to be done. All the tech is already out there. The problem is that the suppliers of P2P services and the developers of P2P software deliberately do not want to do this. In fact, they are doing everything they can to steal more service than the ISP is willing to allocate to them. As a result the ISPs have no choice but to love this and use a big stick to provide the loving to the customer.

      • Especially when someone points out to the idiots from Redmondia (and other places) that they should stop reinventing multicast again and again.

        Multicast makes no sense here, no sense at all. Multicast makes sense when everyone wants to see the same data at exactly the same time (e.g. video conference). For sharing of video clips, this would actually waste a huge chunk of bandwidth.

        What you're proposing means only the first person to request the video gets to watch it right away. The moment someone else wants to start watching it, they have to wait for the next broadcast cycle.
        • For stuff like patches that doesn't matter.
          It would simply use a BitTorrent-like system where the chunks can come from anywhere in the file.
          • Right, but the original topic was video, not patches. Sure, multicast is good for lots of things, and your patch example squarely falls under the rule of thumb I provided... but patches are not the real topic. The topic at hand is precisely the situation where you would not use multicast.
            • by Nicopa ( 87617 )
              You are wrong. Receivers don't need to watch/use the content right away. They can store it; why not? So Microsoft would broadcast the patches again and again in a loop. The clients would store the data as they get it, and wait until they have it all.
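
              A small sketch of that carousel idea; recv_chunk() is a hypothetical stand-in for the real multicast receive call:

                  # Collect numbered chunks from a looped broadcast
                  # until the set is complete, regardless of the lap
                  # on which each chunk is first heard.
                  def receive_carousel(total_chunks, recv_chunk):
                      chunks = {}
                      while len(chunks) < total_chunks:
                          index, data = recv_chunk()
                          chunks.setdefault(index, data)
                      return b"".join(chunks[i]
                                      for i in range(total_chunks))
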
        • Re:haha oh wow (Score:4, Insightful)

          by arivanov ( 12034 ) on Sunday September 16, 2007 @09:08AM (#20625193) Homepage
          I probably did not express my thoughts clearly enough. Let's give it another go.

          There are two portions to a P2P network: discovery and data fetch. Discovery determines where you get your data from, and fetch is the actual data flow. An ISP can confine a P2P service to its own network by limiting either discovery or the actual fetches.

          Discovery is where P2P networks lamely emulate a multicast application. They try to determine whether a piece of data A is present on any of the surrounding nodes B, C, D, E, F. In the trivial case they transmit to each node in turn. In more modern networks they transmit to hypernodes and get the info from there. In either case they try to emulate a multicast network via a tunnel mesh (just the way people try to emulate Multicast on ATM LANE).

          Compared to that, a discovery mechanism based on multicast with a unicast reply can tell you exactly where the piece you are interested in is, with a single request. There is usually no need for hypernodes either. It just works. Magically. Further, you can set your discovery scope to find nodes which are 1, 2, 3...n hops away by tweaking the TTL. Further still, it is a true P2P network, totally serverless. If you throw in PKI authentication you can also make it as secure as you wish.
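
          As a rough sketch, assuming an invented group address and message format, one such discovery round might look like this:

              import socket

              GROUP, PORT = "239.192.0.2", 5008

              def who_has(piece_id, timeout=2.0):
                  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                  # the TTL sets the discovery scope, as described above
                  sock.setsockopt(socket.IPPROTO_IP,
                                  socket.IP_MULTICAST_TTL, 4)
                  sock.settimeout(timeout)
                  sock.sendto(b"WHOHAS " + piece_id.encode(),
                              (GROUP, PORT))
                  holders = []
                  try:
                      while True:
                          # each node holding the piece replies unicast
                          _, addr = sock.recvfrom(1024)
                          holders.append(addr)
                  except socket.timeout:
                      return holders
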
      • I'm assuming you don't actually use P2P? With P2P, I get what I want, when I want it. Multicast is just broadcasting the content, which would be good for a live Victoria's Secret feed, but if I wanted to download it at my convenience (not at the time of broadcast), P2P is far better.
      • The reason people aren't using multicast is that it doesn't work: ISPs don't support it reliably, and even if they did, it's poorly designed and doesn't address the same needs as P2P.

        A better solution is for ISPs to cache P2P traffic, and that's what they are doing. That prevents the same packet from traversing the same link over and over again, without the limitations and design problems of multicast.
      • In fact, they are doing everything they can to steal more service than the ISP is willing to allocate to them. As a result the ISPs have no choice but to love this and use a big stick to provide the loving to the customer.

        If my contract says I have unlimited bandwidth and I pay for the maximum speed the provider offers, how am I stealing more service than the ISP is willing to allocate me? If they didn't want me to use the service I pay for, they shouldn't have sold it to me in the first place.
    • Most ISPs now like people to be on limited download allowances per month, and charge for excess. If this takes off, the number of 'accidental' overruns will potentially skyrocket, and profits will be up.

      I wouldn't be surprised if the unlimited tag is removed completely so they can be sure of cashing in on this.

      I'll happily use P2P if it fulfills three criteria:

      1: It's legal.
      2: It's to my direct benefit (people who just leech being removed from the system).
      3: My ISP won't try to ass-rape my bank account each month.
      • From the ISP's point of view, in-network traffic is dirt cheap; it's the cross-border traffic that is expensive. I think these guys are morons for not providing P2P clients that prefer in-network peers and for not installing local proxies to cache as much HTTP and FTP content as possible.
      • Demanding extra fees from a few people is a way to make money.

        Demanding extra fees from EVERYONE is a way to quickly go out of business.
    • Comment removed based on user account deletion
  • ok but.. (Score:2, Insightful)

    Sharing among people on the same network is only going to be effective for popular data. Not to mention I have a feeling Comcast would still tell you that you are using too much bandwidth even if it is all coming from within their network.
    • Re: (Score:1, Insightful)

      by Anonymous Coward

      Sharing among people on the same network is only going to be effective for popular data.

      Then again, this whole bandwidth problem only becomes an actual problem with popular data.

      Not to mention I have a feeling Comcast would still tell you that you are using too much bandwidth even if it is all coming from within their network.

      Probably!

    • Yeah, wait until ISPs realize they are shooting themselves in the foot by capping the speed at the modem. In reality they should cap the speed later on, where the data hits the backbone. That way they can much more easily bottleneck the data and shift it around, instantly adjust data rates, and moreover allow the max speed the technology permits when you're dealing with internal data transfer. So if you are going P2P on Time Warner between different people with the same ISP, you can transfer d
      • I once got a letter from Comcast, supposedly for violating some Digital Millennium Copyright Act thing by downloading movies illegally. So was this a Comcast message in disguise that they don't want me using the bandwidth? What actually happens when you reach the cap? Do they email you a letter? Do they shut you down permanently, or just for the month?
      • Please do not wave the "no extra cost" flag for technologies that require management, software, and hardware to implement. Such modulation occurs now as a part of basic load balancing, and to throttle traffic away from segments that are due for repair to avoid service interruptions. But it's not cheap to manage, and the routers or gateways capable of doing it well are not cheap.
    • Tautological as it is, popular data is the majority.
  • by Asmor ( 775910 ) on Sunday September 16, 2007 @04:36AM (#20623711) Homepage
    Saying BitTorrent (and similar protocols, if such exist) is P2P is like saying the web is the internet.
    • Re: (Score:3, Informative)

      My favorite P2P protocol is the Internet Protocol. If ISPs are going to block P2P, they should start with that one. All the other ones rely on it anyway.

    • by skeeto ( 1138903 )

      Saying BitTorrent (and similar protocols, if such exist) is P2P is like saying the web is the internet.

      Huh?

      P2P - Peer-to-peer (from Wikipedia [wikipedia.org])

      A peer-to-peer (or "P2P") computer network exploits diverse connectivity between participants in a network and the cumulative bandwidth of network participants rather than conventional centralized resources where a relatively low number of servers provide the core value to a service or application.

      That sure sounds like BitTorrent. BitTorrent is made up of many peers that are sharing data ... peer to peer. Also from the Wikipedia BitTorrent article [wikipedia.org],

      BitTorrent is a peer-to-peer file sharing (P2P) communications protocol.

      Of course BitTorrent operates via peer-to-peer networks. How couldn't it?

      • by Asmor ( 775910 )
        P2P is a broad category. BitTorrent is a specific thing within the context of P2P.

        The internet is a broad category. The web is a specific thing within the internet.

        I didn't say that BitTorrent wasn't a form of P2P, I said that BitTorrent was not the same as P2P.
        • That's true, but it's also true that BitTorrent is a generalization of the concept of P2P. Instead of having whole files distributed in multiple copies across many peers and downloading from the closest one, files are broken up into N sub-chunks which can be downloaded simultaneously from many peers.

          In a very real sense, P2P is a subset of BitTorrent where N -> 1.
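
          A toy illustration of that N-chunk idea, with fetch_chunk() as a hypothetical stand-in for a real peer transfer:

              from concurrent.futures import ThreadPoolExecutor

              def download(n_chunks, peers, fetch_chunk):
                  # fetch every chunk in parallel, spread across peers
                  with ThreadPoolExecutor(max_workers=len(peers)) as pool:
                      futs = {i: pool.submit(fetch_chunk,
                                             peers[i % len(peers)], i)
                              for i in range(n_chunks)}
                      return b"".join(futs[i].result()
                                      for i in range(n_chunks))

          With n_chunks = 1 this collapses to a single whole-file fetch from one peer, which is the N -> 1 case above.
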
  • Really, some of us have been saying this for a long time. Some of the load can be taken off the internet, especially bulk files such as video and BitTorrent payloads, by sharing them first within the network and then outside. I think it's a fine idea if people are willing to do it; that way you only have to seed part of the file to people on similar networks. The only place I see this falling short is with very specific files; for instance, I doubt that me or any of my neighbors are going to be watching the same
  • by davmoo ( 63521 ) on Sunday September 16, 2007 @04:44AM (#20623769)
    I have no problems at all with not for profit entities using some of my bandwidth to distribute their files.

    I have serious problems with a for profit entity like Microsoft or Redhat doing the same.

    The first one I call "charity" or "support". The second one I call "leeching", and it's not far from "stealing".

    If you're a for profit company and you can't afford bandwidth, then you need to find a new line of work. Don't expect your customers to give you freebies unless you're giving them something *good* in return, and something you're not also giving to those who don't share bandwidth.
    • Well, Microsoft might make a deal with the ISPs so it's still them paying for the bandwidth cost, but it'll be cheaper since it's distributed.

      Also, I think it's implied that the cost savings on bandwidth are meant to be converted into cheaper and/or better services.
    • Re: (Score:2, Insightful)

      Would this not be regulated by the market? If you don't want to use the extra bandwidth, don't use the product/service.
    • Don't expect your customers to give you freebies unless you're giving them something *good* in return

      Look at it this way: in return for using your bandwidth to distribute files, you get peers doing the same thing for you.

      It's true that in this sense you and your peers are "providing a service" for Microsoft/Red Hat/whoever, but that company is providing a service to you by letting you have the file in the first place. Do you have the right to demand that the company provides you with the file in any other w

      • Re: (Score:3, Insightful)

        by ScrewMaster ( 602015 )
        But it doesn't seem reasonable to dismiss the system yet, when it could benefit everyone.

        True, I suppose ... but then again, take a look at the caliber of the people running the show here in the United States. Largely it comes down to the Telcos and Comcast, and a few other big ISPs, none of whom are interested in anything but profit maximization. I guarantee you that if they find a way to reduce their costs using this or any other technology, they will simply pass the savings on to themselves and their
      • by Splab ( 574204 )
        Great, so not only are we going to have a bazillion programs trying to phone home for updates, we are also going to have 5 different BT clients all thinking they are being nice to your ADSL upstream.

        That can't go wrong can it?
    • by ghyd ( 981064 )
      As a consumer I must admit that I don't care to download Blizzard patches that way. Maybe if I had a limited or capped connection I would care, but I'm lucky to live in one of the EU countries where it isn't a problem (we've got enough other problems).
    • by jez9999 ( 618189 )
      I kind of like this idea, though, if for no other reason than it puts the focus back on consumer upstream bandwidth. There's not much point using P2P when people are stuck with ridiculously asymmetric connections (i.e. 10Mb/512kb), so hopefully upstreams and downstreams will start to come closer together again.
    • I have no problems at all with not for profit entities using some of my bandwidth to distribute their files.

      I have serious problems with a for profit entity like Microsoft or Redhat doing the same.

      The first one I call "charity" or "support". The second one I call "leeching", and it's not far from "stealing".

      That's a major stumbling block. Commercial P2P companies seem to assume that consumers will actually let them use their connections 24/7. The problem is that upload bandwidth is scarce, and that those com

    • by Nevyn ( 5505 ) *
      I have no problems at all with not for profit entities using some of my bandwidth to distribute their files.

      I have serious problems with a for profit entity like Microsoft or Redhat doing the same.

      The first one I call "charity" or "support". The second one I call "leeching", and it's not far from "stealing".

      If you're a for profit company and you can't afford bandwidth, then you need to find a new line of work.

      As with most things, I don't think you want to put this as a black or white choice. Fo

  • Seriously, do these "researchers" even HAVE cable internet? Upstream is only user-segregated up to the head-end for bare-copper technologies like DSL. Cable broadband is built on a tree network. Sure, you can build more nodes into the infrastructure to free up a bit more upstream within a single neighborhood, but eventually that upstream has to be combined with the upstream from all the other nodes. Eventually you just can't squeeze any more data into the upstream band and everything stalls. This is one of the
    • You've completely missed the point. Regardless of how the cable infrastructure is arranged within the ISP's own network, they get charged at the transit point. Traffic between their own customers does not transit to another network at any point, and so it is free for them.
      • by tomz16 ( 992375 )
        No, YOU have completely missed the parent's point! It's not free if you have to spend significant cash to upgrade your own network to allow users to share mass quantities of data amongst themselves. Right now, most residential networks are not set up to efficiently handle heavy peer to peer uploading, even among people on the same network. Read the parent's post again.

        • You're making the same mistake as the OP. "Free" versus non-free traffic is not a question of capital costs. ISPs have to pay for each packet that moves upstream and off their network (depending on their peering status). The question of building infrastructure is irrelevant. It is a cost that the ISP has to invest to be in business, and can be written off against whichever of their activities seems most appropriate. It is not a DIRECT cost of the traffic. But every packet that transits to a backbone has a DIRECT
          • by Arethan ( 223197 )
            You're comparing apples to apples and swearing to God that one of them is an orange.
            First off, any ISP that pays per packet is doomed to fail just out of lack of business ability. They really do get much better deals on bandwidth than that. What they pay for is whatever is negotiated.

            Generally that ends up being: line leasing costs for a line of size X, plus network connection costs of $X per quarter to have that line connected at a certain transfer rate and guaranteed a minimum speed at any point in time. In
            • You've written a very in-depth and informative post, but you are still wrong on a couple of issues. ADSL access in the UK market is not sold as a commodity. In the LLU market things are different, but the vast majority of ISPs in this country are still on Datastream/IPstream products. So it is not as fanciful as you claim for somebody to be paying per packet. The BT pricing structure is actually based on the size of the outgoing pipe (not the customer links) and is sold by capacity. The charging of the ISP
              • by Arethan ( 223197 )
                Ah yes, the market differences. I believe you are right in that my US-centric thinking (after all, I am American and I might as well live up to the stereotype) has cast a US-specific scenario upon the whole situation. In the US, ISPs mostly just lease the pipe at whatever capacity they need, and the cost is flat whether they fill it 100% of the time or keep it mostly idle. There are obviously a whole slew of variables that can be applied that will adjust pricing (such as if the line is supposed to be a fail
    • by g-san ( 93038 )
      As far as I know, it has nothing to do with cost. There is a spectrum of fixed width for data tx and rx, and instead of dividing it in half for equal upstream and downstream, someone (smart) noticed that average internet use runs at about a 10:1 down-to-up ratio: small requests, large replies. Go check your system stats and see if I am right. On my system right now, I have 360.48MB down and 37.91MB up. It is purely arbitrary; if you can get 56k up and 5Mb down, you could just as easily move the spectrum al
      • by Arethan ( 223197 )
        Actually, you're both right and wrong at the same time. Yes, the cable company can divide up the spectrum as they see fit, but it isn't as dynamic a change as you imply. The modem downloads a config file which dictates the channels it is supposed to use for this boot, and it cannot be changed without rebooting the modem. This config file also contains SLA enforcement info, like how fast it is allowed to upload, how many local MAC addresses are allowed on the LAN side, etc. Now these channels are fix
  • Researcher rediscovers USENET.
  • AFAIK P2P is the current solution to efficient video distribution. I think they are really trying to accomplish something else here, which is why the current solution won't do.

    For instance, if you want to distribute that World of Warcraft patch, then make a torrent and post it to a tracker; done. If you're really paranoid, host it on your own tracker. No, what they really want is to have a service running on your machine 24/7, so they can... I don't know, but whatever it is, I'm pretty sure I wo
    • P2P works well for video right now, but as traffic increases, the bandwidth needs to be paid for. Expect the infrastructure needed to cost the ISPs serious money, and they don't want to spend that money without getting paid directly.

      P2P doesn't work well for DRM, for preventing people from accessing material without formal permission from the owner. This is the big problem for video content providers: the tools haven't been properly made or widely published to authenticate and restrict P2P content, so
      • It's quite ironic that content owners force legal download services to use strong DRM. Strong DRM is incompatible with P2P. As a result, legal movie download services can't be profitable.
        • This is not true. As long as unlocking the DRM is based on a provider-published, local user key that is not easily transferred, the content delivery mechanism is a separate issue from accessing the contents. This is fundamental to public/private key authentication, and to public/private key encryption.

          Take a good look at the insanities Windows Media Player does for DRM. Most of the work is already done: it's the business models that don't yet support it, not a lack of DRM-based software.
  • A friend of mine had an account with a provider called Fastweb, where he had a really fast connection but paid for traffic outside the Fastweb network (which went through a NAT, I think; he had a local 10.x IP address).

    He used file-sharing software inside the network, and got very fast downloads (for content which is popular enough in Italy).

    Of course this is a rather rudimentary implementation, but certainly one might be willing to configure his P2P file-transfer client to only download from a cert
    • Re: (Score:3, Interesting)

      I've used BitTorrent effectively inside a corporate firewall for transmitting DVD images, especially because HTTP and FTP couldn't easily handle files larger than 2 gigabytes. The security models aren't built in: authentication of the content remains a separate step. But transmitting DRM-enabled files, such as Windows Media files for the BBC's well-publicized iPlayer project, seems a natural approach and would help prevent fakery of the files. (That's a big problem for The Pirate Bay and other BitTorrent sites.)
      • by Wildclaw ( 15718 )
        BitTorrent does have authentication of content built in, which is one of its big advantages over FTP and HTTP, by the way. When I have received something via BitTorrent, I am guaranteed to have received exactly what the publisher of the torrent file wanted me to receive, and nothing else. As long as you trust the publisher of the original torrent file, you can trust the data you receive.

        Of course, if you use sites like The Pirate Bay, you can't trust the publisher (because anyone can publish), so you can't be sure if what y
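
        For the curious, that guarantee comes from the .torrent metainfo, which carries a SHA-1 digest for every piece; a simplified sketch of the check each client performs:

            import hashlib

            def piece_ok(index, data, pieces_field):
                # pieces_field: the torrent's concatenated
                # 20-byte per-piece SHA-1 digests
                expected = pieces_field[index * 20:(index + 1) * 20]
                return hashlib.sha1(data).digest() == expected

        A piece that fails the check is discarded and re-requested, no matter which peer sent it.
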
  • by FooBarWidget ( 556006 ) on Sunday September 16, 2007 @05:10AM (#20623921)
    ...how do you implement it? Browsers currently have absolutely no support for anything like this. I'm not sure whether it can be done in Flash. Java is so heavyweight that it would probably scare off most people. ActiveX is a no-go. You can't make people install client software either; 99% will never bother. Unless you can make it work out of the box in browsers, it'll never become popular.

    And how do you implement P2P streaming? All P2P protocols to date let peers send file pieces in a non-streamable order.
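
    To make the ordering problem concrete, here is a simplified sketch contrasting the usual rarest-first piece picker with the in-order picker streaming would need (availability is assumed to map piece index to the number of peers holding that piece):

        def pick_rarest_first(have, availability):
            # classic swarm-friendly choice: the scarcest piece
            missing = [i for i in availability if i not in have]
            if not missing:
                return None
            return min(missing, key=lambda i: availability[i])

        def pick_streaming(have, availability, playback_pos):
            # first missing piece at or after the playhead
            for i in sorted(availability):
                if i >= playback_pos and i not in have:
                    return i
            return None

    Rarest-first keeps the swarm healthy; the in-order picker keeps the player fed. Doing both at once is the hard part.
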
    • Oh, and I forgot to ask this as well: how do you get around firewalls/NAT? Most people these days are behind NAT. In my experience, UPnP only works out of the box on very few systems because most routers have UPnP disabled by default.
      • Re: (Score:2, Informative)

        by Osty ( 16825 )

        Oh, and I forgot to ask this as well: how do you get around firewalls/NAT? Most people these days are behind NAT. In my experience, UPnP only works out of the box on very few systems because most routers have UPnP disabled by default.

        Support linux-igd [sourceforge.net]? The project started back up in the past year and a half or so, along with libupnp [sourceforge.net] coming back from the dead after Intel abandoned it. Help these projects get to the point where they're trivial to set up, stable, and shipped with all distributions, and you sol
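
        For reference, the first step such a UPnP client takes is an SSDP M-SEARCH multicast; a router with UPnP enabled answers with the URL of its control interface, which the client then uses to request a port mapping. A minimal probe:

            import socket

            MSEARCH = ("M-SEARCH * HTTP/1.1\r\n"
                       "HOST: 239.255.255.250:1900\r\n"
                       'MAN: "ssdp:discover"\r\n'
                       "MX: 2\r\n"
                       "ST: urn:schemas-upnp-org:device:"
                       "InternetGatewayDevice:1\r\n\r\n")

            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.settimeout(3.0)
            sock.sendto(MSEARCH.encode(), ("239.255.255.250", 1900))
            try:
                # the LOCATION header of the reply holds the URL
                data, addr = sock.recvfrom(2048)
                print("gateway responded from", addr)
            except socket.timeout:
                print("no UPnP gateway answered")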

    • by JordanL ( 886154 ) <jordan,ledoux&gmail,com> on Sunday September 16, 2007 @06:31AM (#20624289) Homepage

      Browsers currently have absolutely no support for implementing anything like this.
      Except for Opera, which is about the only program you actually need to interface with the entire web.

      About the only thing it's not useful for is SSH and FTP.
      • Um, what are you talking about? How do you expect to run a new, yet-to-be-developed P2P protocol in a current version of Opera without installing software?
        • by dubstar ( 565060 )
          I think he means that Opera already has a torrent client built in.

          There is really no new protocol necessary. I have already seen this implemented to some degree: Rogers in Canada throttles torrent connections to outside of their network, but it often works fine inside the network. While illicit torrents go slow as dirt for me 99% of the time, actual legal content from sites like Vuze goes at a good speed. It seems to me that this is because a lot of the people I am connecting to are on my (ISP's) own netwo
          • But can BitTorrent be used for streaming media? I don't think so.
            • by dubstar ( 565060 )
              It could be done in a way similar to streaming. You could just break the media up into chapters/tracks and set the priority appropriately.
              • Are you sure? Torrents usually need some time to get up to a decent speed. Will torrents speed up quickly enough, *and* have enough peers to provide the data in the preferred order in time? I somehow doubt it; most of the videos I download (except the extremely, extremely popular ones with 30,000 seeds) don't finish downloading in less time than the video's length.
  • The article makes the assumption that data flow within an ISP's network is free. That is not always the case. Take for example an ADSL connection. The ADSL infrastructure (metallic path, DSLAM, etc.) is often (especially in the case of non-unbundled local loops) provided by a different company from the ISP. The ISP pays this provider per byte of data that flows over the connection to and from the end user.
    • Who gets stung with a deal like that? The UK is considered to be an example of what can go wrong with non-unbundled services, but even here ISPs paid a flat rental fee for access to the line, and then had to deal with BT for upstream access. Most of the current bandwidth limits here are a direct result of how BT charges for that upstream bandwidth. Something like this system would be a great boon for them.
  • by haakondahl ( 893488 ) on Sunday September 16, 2007 @05:24AM (#20623977)
    Download a P2P client and learn how to use it *today*. Help Apple! Share all of your files; learn how to become a seed. Lend the RIAA a hand--do their R&D for a new distribution model.

    There is a term in Low German for the feeling I have right now--SchadenGoFuchyourselves.

  • The trick is to allow sharing only between people with the same provider, when data transactions are free.

    Sounds like multicasting . . . good thing the ISPs have implemented this also . . . oh wait.
    • It sounds nothing like multicasting. One is limiting peers based on their address ranges, and the other is broadcasting to multiple peers at once. Now, if multicasting did work properly it would revolutionise P2P, as it would break the limit (total_upload = total_download) across the swarm.
      • It is replicating a packet just inside the ISP's network, while passing only a single packet from one ISP to the other. Exactly like multicast.

  • Another thing that would save loads of bandwidth and improve end-user quality at the same time: multicast. For instance, the modern broadcast media companies that do TV / radio / concerts could set up streams that are relayed only once over each hop, however many subscribers there are, and copied out to each subscriber only at the final router.
    • Great, let's fill all available bandwidth in your neighborhood with multicast spew for material no one wants. I don't think so. Keep it on request only.
  • obligatory (Score:2, Funny)

    by wwmedia ( 950346 )
    Nerd: I've developed a program that downloads porn from the internet a million times faster than normal

    Marge: Who would need that much porn?

    Homer: [drools]...oohhh..1 million times faster..
  • I'm sorry, but even if I send data to a neighbor I get that transfer charged on my 35GB monthly allowance. And that's 35GB for the upload+download total, too.

    Companies using P2P to distribute THEIR files (e.g. WoW being a perfect example) are cutting into MY 35GB for the month. And if you try to block them out, you get ridiculously slow downloads, around 0.1KB/sec.

    Screw 'em all.

    • I'm sorry, but even if I send data to a neighbor I get that transfer charged on my 35GB monthly allowance. And that's 35GB for the upload+download total, too.

      35 gigs/month is horrible. You know that, because you're complaining about it. You're the one who chose your internet service plan; you can always choose a different one. And yes, there are different ones. If that's the maximum residential plan in your area, look into "business" plans.

      Companies using P2P to distribute THEIR files (e.g. WoW being a pe

      • by Yvan256 ( 722131 )

        35 gigs/month is horrible. You know that, because you're complaining about it.

        Well, it's currently enough for what I do, but if companies start eating away at my 35GB, then no, it won't be enough. And I'm sure I won't be able to send them the bill for each extra $10/GB over my 35GB limit.

        You're the one who chose your internet service plan; you can always choose a different one. And yes, there are different ones. If that's the maximum residential plan in your area, look into "business" plans.

        You're assuming t

        • Well, it's currently enough for what I do, but if companies start eating away at my 35GB, then no, it won't be enough. And I'm sure I won't be able to send them the bill for each extra $10/GB over my 35GB limit.

          If you don't change what you do, then your usage won't go up. Period. If you start using different methods to acquire data, you may use more or less bandwidth. This isn't the fault of "companies"; you're the one who chooses what services you use and what applications you run.

          You're assuming there is

  • Unrestricted P2P across a true mesh topology is, developmentally speaking, the ultimate logical destination for the Internet, in my own mind. If I were going to borrow an expression from someone the average Slashbot considers one of their patron deities, I'd even call it a "historical inevitability."

    It's probably going to take a very long time. The telcos and big media can be counted on to fight it, kicking and screaming, every last millimeter of the way. Eventually, however, if the net is to continue to exi
  • Consumers do a lot that is good for business, that business doesn't have to pay for, and that business has been complaining about anyway:
    P2P, genuinely fair use of copyright (a recent /. article covered how much fair use does to stimulate the economy), and even matters regarding the fraud of software patents (IBM, the largest software patent holder, has been releasing its patents to open source, and others are beginning to follow).

    There was a time in this country (USA) where the people got together and created the countr
  • Since multicast keeps getting mentioned, and as I imagine there are a few who are too lazy to Wikipedia it, here is the gist of it:

    IP Multicast is a technique for many-to-many communication over an IP infrastructure. It scales to a larger receiver population by not requiring prior knowledge of who or how many receivers there are. Multicast utilizes network infrastructure efficiently by requiring the source to send a packet only once, even if it needs to be delivered to a large number of receivers. The nodes

  • Microsoft invents Democracy Player and Joost, only a few years after they were actually invented!
  • The best solution to this problem is to provide a true market solution for both the producers and the users. I've been researching and writing about peercasting for years now, and I do think it is a great solution to the problem.

    First of all, if the content is free, then someone wants that content watched. If that original producer is willing to put a price on the cost of a complete download, those who are helping to provide bandwidth for that download should get offered a piece of the action. If
    • I agree with you. There is a lot of hype about P2P. Many view P2P as free bandwidth. Many consider it a game-changing revolution, a magic cure against the curse of the "speed/quality/cost: pick two" triangle.

      P2P is no such thing. There is no such thing as a free lunch. As long as broadband connections are asymmetrical, somehow, someone will have to sacrifice one leg of the triangle. If legal download services want a decent quality of service, they will have to pay enough to attract enough uploaders.

      Of
  • Wouldn't it be great if ISPs could work it so that when you are doing P2P (BitTorrent, etc.), there is no cap on bandwidth for somebody in the same local network topology?

    This would work great for non-P2P apps as well. Let people on Comcast, Cox, etc. do full-bandwidth videoconferencing between customers on the same ISP. For instance, it probably costs Comcast very little bandwidth-wise to let my mother do a 5Mb/s video conference with my system when we are in the same local area (and on the same cable ISP)
  • If every user is doing 100GB of upload/download, then Comcast shuts them all off.

    Now no one has bandwidth.

    The solution is to actually raise the bandwidth so that 100GB is trivial (like in Korea and Japan).
