Networking Encryption Security

Encrypted Traffic No Longer Safe From Throttling

coderrr writes "New research could allow ISPs to selectively block or slow down your encrypted traffic even if they cannot snoop on your transmitted data. Italian researchers have found a way to categorize the type of traffic that is hidden inside an encrypted SSH session with around 90% accuracy. They are achieving this by analyzing packet sizes and inter-packet intervals instead of looking at the content itself. Challenges remain for ISPs to implement this technology, but it's clear that encrypting your traffic inside an SSH session or VPN connection is not a solution to protect net neutrality."
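As a rough, invented illustration of the general approach (this is not the researchers' actual model), a flow can be reduced to a couple of size/timing features and matched against per-protocol profiles. The class names, centroid values, and scaling below are all made up for the sketch:

```python
# Toy nearest-centroid classifier over two flow features: mean packet size and
# mean inter-packet gap. Centroids and scaling constants are invented numbers.
import math

# Hypothetical per-class profiles: (mean packet size in bytes, mean gap in seconds)
CENTROIDS = {
    "interactive_ssh": (90.0, 0.400),    # small packets, human-typing pauses
    "bulk_transfer":   (1400.0, 0.002),  # full-size packets, back to back
    "tunneled_http":   (700.0, 0.050),
}

def features(packets):
    """packets: list of (timestamp_seconds, size_bytes) for one flow."""
    sizes = [size for _, size in packets]
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(packets, packets[1:])]
    return (sum(sizes) / len(sizes), sum(gaps) / max(len(gaps), 1))

def classify(packets):
    mean_size, mean_gap = features(packets)
    def distance(centroid):
        # scale both features so neither dominates the distance
        return math.hypot((mean_size - centroid[0]) / 1500.0,
                          (mean_gap - centroid[1]) / 0.5)
    return min(CENTROIDS, key=lambda name: distance(CENTROIDS[name]))

if __name__ == "__main__":
    # 200 full-size packets a few milliseconds apart look like a bulk transfer,
    # even though the payload itself could be encrypted end to end.
    flow = [(i * 0.003, 1448) for i in range(200)]
    print(classify(flow))  # -> bulk_transfer
```

The point of the sketch is only that these features are observable from outside the tunnel; the paper's actual classifier and its reported ~90% figure come from a more involved statistical model.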
  • Why bother? (Score:2, Insightful)

    by Threni ( 635302 )

    They could just throttle all encrypted packets for free.

    • Re:Why bother? (Score:5, Insightful)

      by TheLink ( 130905 ) on Monday June 30, 2008 @09:14AM (#23999119) Journal
      That'll mess up corporate vpn users with clout, and https connections to banks etc.

      Anyway it doesn't take a genius to detect p2p.

      See the user. See the user after 1 hour. See how many bytes up and down. Check how many different IP destinations the user is connected with.

      If they are transferring a lot both up and down, and connected to lots of hosts, chances are they are using P2P. Put them on a watch list. If they are still doing it much later, you put them on a black list where from then on if they are doing something similar you throttle them immediately (you can do it in a way that would in most cases still allow that user's web surfing to work reasonably - since most users don't websurf 20 different sites at the same time AND read those pages at the same time - it doesn't matter if pages come in one by one ).

      If they aren't downloading or uploading much, why throttle? :)

      No need for fancy math. No need for "deep packet inspection" or fancy "Dumb Investors Hand Over Your Money" phrases.

      Then again maybe I should write a "research" paper, mmm $$$$ ;).
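A minimal sketch of the counter-based heuristic described in the comment above, with no packet inspection at all; the thresholds, strike count, and structure are invented for illustration:

```python
# Per-subscriber hourly accounting: lots of bytes both ways plus many distinct
# peers for several hours in a row puts the subscriber on the throttle list.
from dataclasses import dataclass

GB = 1024 ** 3
WATCH_BYTES = 1 * GB       # both up and down within one hour (invented threshold)
WATCH_PEERS = 50           # distinct destination IPs within one hour (invented)
STRIKES_TO_BLACKLIST = 3   # hours of sustained activity before throttling kicks in

@dataclass
class Subscriber:
    strikes: int = 0
    throttled: bool = False

def end_of_hour(sub: Subscriber, bytes_up: int, bytes_down: int, peer_ips: set) -> None:
    busy = (bytes_up > WATCH_BYTES and bytes_down > WATCH_BYTES
            and len(peer_ips) > WATCH_PEERS)
    if busy:
        sub.strikes += 1
        if sub.strikes >= STRIKES_TO_BLACKLIST:
            sub.throttled = True                   # from now on, throttle on sight
    else:
        sub.strikes = max(0, sub.strikes - 1)      # light users drift back off the list
```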
      • Re:Why bother? (Score:5, Interesting)

        by aplusjimages ( 939458 ) on Monday June 30, 2008 @09:28AM (#23999301) Journal
        How would this work for gaming online? 16 different IP destinations and I play for hours on end. My understanding of Xbox Live is that it is P2P, and if they throttle my Halo 3 game, I'm gonna get pwned even more than normal.
        • Re: (Score:3, Informative)

          by TheLink ( 130905 )
          So far with most multiplayer online games, one machine is the server and the rest are the clients.

          Go look at the traffic if you don't believe me. I've monitored the traffic on my connection as I play various online games - but not Xbox Live though.

          In theory the server might get throttled affecting the game BUT online game traffic seldom adds up to gigabytes a day - all you are usually sending is "changes in state". In some cases yes game assets do get downloaded - but the clients seldom upload that much back.
          • by Fweeky ( 41046 )

            So far with most multiplayer online games, one machine is the server and the rest are the clients.

            Go look at the traffic if you don't believe me. I've monitored the traffic on my connection as I play various online games - but not Xbox Live though.

            Sure, FPS's typically keep the game world state on a single server, but RTS games commonly use a peer-to-peer network topology; e.g. Supreme Commander and Sins of a Solar Empire.

            • Re:Why bother? (Score:4, Insightful)

              by TheLink ( 130905 ) on Monday June 30, 2008 @11:09AM (#24000987) Journal
              I doubt those games even hit 1Mbps up and down sustained for more than even 1 minute :).

              If bittorrent users looked like RTS game players there wouldn't be much traffic to throttle.

              For example it seems like it's 24kbps per opponent for Supreme Commander. So 20 opponents won't even saturate a 512kbps upstream.

              Do many people play Supreme Commander with 40 opponents at a time and expect good performance?

              • by Fweeky ( 41046 ) on Monday June 30, 2008 @11:22AM (#24001285) Homepage

                I doubt those games even hit 1Mbps up and down sustained for more than even 1 minute :).

                So, just like normal peer to peer services then? ;)

                I think the most opponents SupCom supports is 8; those 8 can be on a very large map, with thousands of units each, and each round from each unit tracked, though.

        • Re:Why bother? (Score:5, Informative)

          by cryptodan ( 1098165 ) on Monday June 30, 2008 @11:32AM (#24001493) Homepage

          How would this work for gaming online? 16 different IP destinations and I play for hours on end. My understanding of Xbox Live is that it is P2P, and if they throttle my Halo 3 game, I'm gonna get pwned even more than normal.

          I totally agree. Steam creates a lot of connections to various content servers to bring down content faster for the Steam Client. It also creates a shitload of traffic when you refresh the server list via Steam Client > Servers Tab. The Steam Client is also P2P by definition.

          Now this type of throttling would piss me off greatly.

      • "Check how many different IP destinations the user is connected with."

        Won't help if the user is connected through a VPN tunnel. They'll be talking to one IP.

        • by TheLink ( 130905 )
          Such users will just take longer to end up on the blacklist under the heuristics I suggest.

          But basically the ISPs want to reduce traffic, so whether you're talking to one IP or not, if you've uploaded at > 3Mbps and also downloaded at > 3Mbps for hours and you do that sort of thing everyday, it doesn't take any fancy technology or math to decide you belong on the list of "Those To Be Throttled and sent to competitors".

          The sweet smell of unbridled Capitalism.
      • by kabocox ( 199019 )

        (you can do it in a way that would in most cases still allow that user's web surfing to work reasonably - since most users don't websurf 20 different sites at the same time AND read those pages at the same time - it doesn't matter if pages come in one by one ).

        So you must be the one that got my webcomics loading slower in the morning! I use that "open all in tabs" to open up like 20 sites in the morning. This used to take 10-20 seconds for all of them to load. Now it'll take 5 minutes or so.

        Come on sluggy,

        • by TheLink ( 130905 )
          Well the way I'd do it is you'd get full speed on connections to the first X sites, then when they're done loading you get the next sites and so on. So it shouldn't affect most people's websurfing. My assumption is most people would just read the sites that get loaded first, rather than wait for all sites to be loaded before starting to read.

          What's happening to you is probably a blanket "throttle all connections of anyone with lots of connections".

          Which of course is easier to implement :).
      • Re: (Score:2, Informative)

        That'll mess up corporate vpn users with clout, and https connections to banks etc.

        Probably not. In normal circumstances, these connections don't use anywhere near the same raw data transfer volume as a single BitTorrent client with a few dozen connections.

        • Probably, actually. Anybody who works with software installs downloads the latest versions/patches via VPN connections to the corporate network. That's several gigs worth of downloads for one connection. That's size though, not number of IPs. If they check for numbers of IPs, they can filter out corporate users.

          That said, watch P2P protocols evolve to account for this.

        • It's called Camfrog. Look into it. I can saturate my connection down and up running a Camfrog server faster than I can torrenting the most popular Linux distro. It would look just like P2P traffic too.

          I'd love to see them throttle my $200 Camfrog Pro server. The lawsuit for doing so and saying that it's 'illegal P2P' traffic would get them so owned in court.

      • Re:Why bother? (Score:4, Interesting)

        by fast turtle ( 1118037 ) on Monday June 30, 2008 @10:33AM (#24000251) Journal

        My ISP already throttles my connection by price. I've currently got 256/768 as that suits my needs. If they were to start throttling any more of my net access (I'm paying for unlimited at 256/768) I'd have their asses in court in a hurry for false advertising and violation of contract, which I have kept the hard copy of from the day I signed up for service.

        I was one of the first adopters to get broadband when it became available 6 years ago in my area, and according to the original contract (have hardcopy on file) they planned on offering tiered service, with it being a simple change in minimum speeds and thus not requiring a new contract. I also informed them that I'm worse than a squeaky wheel; I'm like a brake that's gone metal to metal, since I'm semi-retired and disabled with plenty of time on my hands to pursue things every time they try to change my contract without consent.

      • since most users don't websurf 20 different sites at the same time AND read those pages at the same time

        No, but users visit web pages with images from a variety of hosts (such as advertising banners, etc).

        Just because you're reading one web page at a time doesn't mean your PC isn't communicating with several IP addresses in order to gather the data necessary to render the web page.

  • Er, no. (Score:5, Informative)

    by Cave Dweller ( 470644 ) on Monday June 30, 2008 @08:26AM (#23998687)

    First, encrypted traffic was never safe from throttling anyway. Second, FTA:

    "So it seems the use of a tool like this would be limited to an extremely controlled environment where users are limited to a white-list set of network protocols (so that they can't use a different tunneling mechanism, stunnel for example) and only allowed to ssh to servers under the control of the censoring party. In which case you would wonder why the admin wouldn't just set the ssh server's AllowTcpForwarding option to false."

    Kinda useless.

    • Kinda useless. Also, kinda not new: go to http://www.shmoocon.org/2007/presentations.html [shmoocon.org] and look for "Rob King and Rohlt Dhamankar - Encrypted Protocol Identification via Statistical Methods".

      Upon observing a flow (as it is going on), they can identify which encrypted protocol is being used. I imagine tunneling things through ssh would only change the entropy (it's a different encryption), not how big the packets are or when they're being sent; at least not by much.

      Whether King and Dhamankar generate tr

  • Non-timing critical? (Score:3, Interesting)

    by jaminJay ( 1198469 ) on Monday June 30, 2008 @08:28AM (#23998703) Homepage

    If the application is not time-critical, introducing random jitter would go some way to subverting this, no?

    • by omnirealm ( 244599 ) on Monday June 30, 2008 @09:53AM (#23999609) Homepage

      > introducing random jitter would go some way to subverting this, no?

      Exactly. I took a few minutes to glance over the paper. Their feature extraction stage consists of two predictable attributes: packet size and time between packets. Modifying the traffic sent at the application layer (SSH itself does not even need to be touched) can trivially obscure the extracted features so as to throw off the classification attempt. This is simply a road bump; as soon as it gets into use, application-layer proxies will pop up to circumvent it.

      They also seem to have invented their own home-brew statistical analysis. I was disappointed that they did not go into detail as to why they largely ignored the entire field of Machine Learning (Naive Bayes? Perceptron? kNN? Why not try using these?) when coming up with their classification model.
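A minimal sketch of the application-layer counter-measure described above: pad every write up to a fixed bucket size and add a random delay before sending, so packet sizes and timing stop tracking the protocol inside the tunnel. The 4-byte length header, bucket size, and jitter range are arbitrary choices for the sketch; a matching receiver would read the header and discard the padding.

```python
# Obfuscate the two features the paper relies on: packet size and inter-packet time.
import os
import random
import time

BUCKET = 1024        # every write is rounded up to a multiple of this many bytes
MAX_JITTER = 0.05    # up to 50 ms of added delay per write

def obfuscated_send(sock, payload: bytes) -> None:
    frame = len(payload).to_bytes(4, "big") + payload  # real length header, then data
    padded_len = -(-len(frame) // BUCKET) * BUCKET     # round up to the bucket size
    frame += os.urandom(padded_len - len(frame))       # fill the rest with random padding
    time.sleep(random.uniform(0, MAX_JITTER))          # break inter-packet timing patterns
    sock.sendall(frame)
```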

    • What about if someone's running an encrypted VOIP server?

  • by cephah ( 1244770 ) on Monday June 30, 2008 @08:30AM (#23998711)
    Can anyone explain to me why any ISP would use this technique? If they start looking at packet sizes to determine different kinds of encrypted traffic then the packets will just be padded, causing their network to be further overloaded...
    • by Sigma 7 ( 266129 )

      If they start looking at packet sizes to determine different kinds of encrypted traffic then the packets will just be padded, causing their network to be further overloaded...

      Packets involved in a P2P transfer or any other form of data stream are designed to maximize throughput - they send a full packet whenever possible. Padding or adding extra data is in direct contravention to this because it sends useless data that will be discarded. You can identify them because the local to remote packet size is typically large and continuous, which is not normal for an SSH connection.

      • by Shakrai ( 717556 ) *

        You can identify them because the local to remote packet size is typically large and continuous, which is not normal for an SSH connection

        I take it you've never used scp or sftp before?

        • Please. Maybe one percent of your average ISP's customer base has ever used sftp. They don't give a shit if they throttle a tiny but legitimate chunk of the userbase while hitting P2P users.

          • by Shakrai ( 717556 ) *

            Well, if that's your attitude I'd be surprised if 1% of the customer base has ever used ssh altogether -- never mind sftp.

            My point was in response to the GPs "which is not normal for an SSH connection" remark.

            • SSH generally doesn't look too different from, say, an https connection to an online banking website, though. SFTP does, that's all.

              All I'm saying is "but this would cut off [legitimate uses with small userbase]" is not a defense to these people.

              • by Shakrai ( 717556 ) * on Monday June 30, 2008 @11:10AM (#24001023) Journal

                No, it's not. But it could be a defense with the FCC/Congress or other regulatory agencies. Just wait until some Congresscritter can't VPN back into his office because of a policy like this -- that's when attention will start being paid to these issues.

                Kind of like how nobody in power gave a shit about the Gestapo^WTSA until some Congressman/Senator had to take HIS shoes off or found HIMSELF on the no fly list.

                • Like some other poster said, they will undoubtedly not throttle their hilariously overpriced "business class" accounts, and direct home VPN users to sign up for them.

  • by zwei2stein ( 782480 ) on Monday June 30, 2008 @08:31AM (#23998727) Homepage

    Even without this analysis it was kinda obvious that throttle-happy ISPs would simply throttle all encrypted data once encryption became mainstream in P2P.

    • by CharlieHedlin ( 102121 ) on Monday June 30, 2008 @08:48AM (#23998867)

      What about VPN tunnels? People working from home are a core customer group they don't want to piss off.

      • by thegnu ( 557446 ) <thegnu@noSpam.gmail.com> on Monday June 30, 2008 @08:54AM (#23998913) Journal

        those people will be more obliged to pay the ridiculously jacked-up business internet prices, then, I suppose.

        • I did that. And I pay exactly $10 more per month than the residential. I have a SOHO package (small home office, but definitely a "business" account)

          It's the best $10/mo I could have spent.

          You see, I don't deal with traffic shaping, bandwidth caps, blocked ports, or anything else. It's just a standard internet connection. I can download/upload as much as I want and I haven't ever heard a peep from my ISP. And trust me, if I was on a residential account.....I would have heard from them.


    • by Andy Dodd ( 701 ) <atd7NO@SPAMcornell.edu> on Monday June 30, 2008 @09:14AM (#23999121) Homepage

      Actually, encrypted or not, the way the Sandvine (I think that was the name?) system used by Comcast worked was it just did a traffic analysis - If your upload connection was more than X% saturated for N seconds, the Sandvine appliance would start spoofed RST injection to kill off connections. The only way around this would be a full blown VPN that used an encrypted transport layer. (Encrypted BitTorrent, SSH, and nearly all encrypted protocols except the various VPN systems are an encrypted application stream over an unencrypted TCP session. Even some VPNs use an unencrypted TCP session to tunnel through, making them vulnerable to RST injection.)
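Just the trigger condition as a sketch (the numbers are invented, and this is not Sandvine's implementation): sample a subscriber's upstream once per second and flag when it stays above a chosen fraction of the provisioned rate for a whole window.

```python
# Detection half only: "upload more than X% saturated for N seconds" -> flag.
from collections import deque

UPSTREAM_BPS = 512_000   # provisioned upload rate (example figure)
THRESHOLD = 0.90         # "more than X% saturated"
WINDOW_SECONDS = 30      # "for N seconds"

class SaturationDetector:
    def __init__(self) -> None:
        self.samples = deque(maxlen=WINDOW_SECONDS)  # one bits/sec sample per second

    def add_second(self, bytes_sent: int) -> bool:
        self.samples.append(bytes_sent * 8)          # convert bytes/sec to bits/sec
        return (len(self.samples) == WINDOW_SECONDS and
                all(bps > THRESHOLD * UPSTREAM_BPS for bps in self.samples))
```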

  • by Anonymous Coward on Monday June 30, 2008 @08:32AM (#23998731)

    You can identify the type of traffic, because we're not trying very hard to hide it. If you keep going down this road, we'll just send all the time, with the same constant packet size, at the same rate, regardless of the service actually required. It's the same to us, really, because we pay a flat price. It is not the same to you, though, because when we have to make all traffic look the same, we'll use much more of your precious bandwidth, so cut out the crap.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Right. It's not like they would just throttle your entire connection if you did that.

    • by shird ( 566377 )

      Why would anyone do this if such traffic is detected as p2p traffic and therefore throttled? You are depending on everyone doing this, then complaining about their throttled legitimate traffic - the solution is to stop sending legitimate traffic like this, not to get the ISP to lift the throttle.

      "Dear ISP, I am deliberately making my legitimate traffic look like p2p traffic, and it's getting throttled. I don't want to change my legitimate traffic back to looking like legitimate traffic because I also have p2p traffic

      • by malkavian ( 9512 )

        Methinks the point is that the originally chosen packet size would relate to definite non-P2P packet sizes and general metrics (not making everything look like P2P, which would, as you say, be self defeating). When all P2P traffic becomes obfuscated to the point that it looks to any statistical analysis exactly the same as all the non p2p traffic, then throttling of that stream becomes rather more difficult, as you have to wave your fingers in the air and guess what you're throttling, which will likely ups

    • by dyfet ( 154716 ) on Monday June 30, 2008 @09:34AM (#23999355) Homepage

      Actually, strange you should suggest this: I was working on a small and rather generic package to tunnel data between hosts in this very way - constant rate/constant packet size tunneling, with empty data filled with random noise, and with non-packet-aligned encrypted data overlaid when there is data to actually send. I was going to call it tstunnel. Yes, it is somewhat of an extreme response to an extreme problem.
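A very rough sketch of that constant-rate idea (not dyfet's actual tstunnel code): emit one fixed-size frame per tick whether or not there is real data, filling idle frames with random noise. The frame size, tick interval, and framing bytes are all invented for the sketch.

```python
# Constant packet size, constant rate: the link looks identical whether idle or busy.
import os
import queue
import time

FRAME = 512        # bytes per frame
INTERVAL = 0.02    # one frame every 20 ms -> a steady ~25 KB/s regardless of use

def sender_loop(sock, outbox: "queue.Queue[bytes]") -> None:
    while True:
        start = time.monotonic()
        try:
            chunk = outbox.get_nowait()          # real data, pre-chopped to <= FRAME - 5
            frame = b"\x01" + len(chunk).to_bytes(4, "big") + chunk
        except queue.Empty:
            frame = b"\x00"                      # marker: this frame is only cover traffic
        frame += os.urandom(FRAME - len(frame))  # pad to the fixed frame size
        sock.sendall(frame)
        time.sleep(max(0.0, INTERVAL - (time.monotonic() - start)))
```

The obvious cost is exactly the one the anonymous poster above warns about: the cover traffic burns bandwidth around the clock whether or not anything useful is being sent.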

    • The next step will be for the monopolies simply to inform customers that they're no longer desired and to stop offering them the service completely. Now they won't do it to everyone; they'll do it to a certain percentage (probably between 1 and 5 percent) and make this fact well known through the media.

      That should have a chilling effect on p2p users.

    • by Anonymous Brave Guy ( 457657 ) on Monday June 30, 2008 @10:14AM (#23999963)

      Dear customer,

      Thank you for your comments. We regret that because it makes no business sense to continue providing an unlimited bandwidth service, we will be discontinuing this offering from next month. Current subscribers may transfer to our metered service with no disruption. This service is commercially viable and we expect it to remain so, and most users will find the metered service significantly cheaper as they will no longer be subsidising a small minority of heavy users.

      At your current usage rates, we estimate that your own monthly bill on the metered service would be approximately:

      $1,764.38

      Please note that this figure is an estimate based on your current usage level, and may go down or up depending on your future usage patterns.

      Best wishes,
      Your ISP

  • second (Score:2, Funny)

    by Anonymous Coward

    I would have been first but my ISP throttled my SSH tunnel

  • Next step? Encrypted packets that are arbitrarily sized to look like any other encrypted packet.

  • This will backfire (Score:5, Insightful)

    by DarkOx ( 621550 ) on Monday June 30, 2008 @08:33AM (#23998751) Journal

    All it's going to do is encourage P2P developers to try (and they will likely succeed) to make P2P traffic look more like other traffic. Want your bittorrent to look more like encrypted telnet? Easy: send tons of tiny packets and take a short break every few seconds. All this is going to do is increase the packet overhead the ISPs see. That same overhead will also hurt P2P end users, but unless it hurts more than the throttling does, they will do it anyway. It's a lose-lose situation really. The ISPs should realize they gain nothing going down this path.

  • by Zerth ( 26112 ) on Monday June 30, 2008 @08:34AM (#23998755)

    And throttle all encrypted traffic over whatever an IP phone or VPN connection would use, on the assumption of file-sharing. They don't give a rat's ass what you are doing, really; they just want a reason to throttle you, and this company just makes money by giving them one.

  • Next move... (Score:4, Insightful)

    by PhotoGuy ( 189467 ) on Monday June 30, 2008 @08:38AM (#23998799) Homepage

    Well, the next move would simply be some tool, or modification to bittorrent, that makes the traffic patterns look like those of other protocols. While I'm sure it would have some impact upon performance, surely torrent packets can be made to look pretty damn similar to a bunch of HTTPS images being loaded on a web page (or something along those lines). Just like DRM, each move like this isn't solving any problem, just slowing things down, while a counter-move is made. (Or, another provider is chosen who doesn't throttle traffic, competition permitting.)

  • by Digital_Quartz ( 75366 ) on Monday June 30, 2008 @08:40AM (#23998807) Homepage

    Could be worse. Rogers and Bell, here in Canada, just throttle ALL encrypted traffic.

    • by Fryth ( 468689 ) on Monday June 30, 2008 @08:53AM (#23998899)

      You'd think that's how they're doing it, but it doesn't seem to be the case. Rogers customer here, and my SFTP (FTP over SSH) connections go at full-tilt, while BitTorrent has slowed down to a crawl (0-1 KB/sec) on my connection in the past (yes, using the latest uTorrent/Azureus Vuze client, with standard BT MSE/PE encryption enabled).

      I don't know what's going on, but I suspect they've already figured out something that these Italian guys are researching now, and they've been able to identify BitTorrent from other encrypted traffic.

      • by Klaus_1250 ( 987230 ) on Monday June 30, 2008 @09:30AM (#23999319)
        There is another weakness in BT which allows ISPs to throttle traffic: client-to-tracker communications. Unless your tracker uses SSL, all peers inside a swarm are sent over in the clear. So your ISP knows which IPs are likely to send and receive BT traffic. They don't have to look at the traffic; they just use the same information the tracker provided to you. IP in BT swarm? Throttle.
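A sketch of what that implies, assuming the bencoded tracker response has already been parsed and the raw compact "peers" value extracted (6 bytes per peer: a 4-byte IPv4 address followed by a 2-byte big-endian port): anything that can read the plaintext response can build a throttle list without ever touching the torrent traffic itself.

```python
# Decode a compact peer list and collect the swarm's IPs into a throttle set.
import socket
import struct

def peers_from_compact(blob: bytes):
    for off in range(0, len(blob) - len(blob) % 6, 6):
        ip = socket.inet_ntoa(blob[off:off + 4])
        (port,) = struct.unpack("!H", blob[off + 4:off + 6])
        yield ip, port

if __name__ == "__main__":
    # One fabricated peer entry, just to exercise the decoder.
    example_blob = socket.inet_aton("93.184.216.34") + struct.pack("!H", 6881)
    throttle_set = {ip for ip, _ in peers_from_compact(example_blob)}
    print(throttle_set)  # "IP in BT swarm? Throttle."
```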
        • Re: (Score:2, Interesting)

          by Fryth ( 468689 )

          That's interesting, that might be how they're doing it. I heard from some folk who claim success by encrypting the tracker communications only, by sending them over a VPN [secureix.com].

    • by nurb432 ( 527695 )

      If they throttle all traffic equally and advertise as such when you sign up, that would be cool with me.

  • So the ISPs now have another way to detect types of communication for throttling. They shouldn't have this problem at all: if they had actually kept to their agreements with the US Gov./the people and used the massive tax breaks they were given to build out their infrastructure, like that whole deal was intended to do, we could've avoided this kind of problem where throttling would be necessary or desirable to begin with.

    What next? You sign up for internet service and pay your money an

  • by assemblerex ( 1275164 ) on Monday June 30, 2008 @08:50AM (#23998879)
    detect if one of the mario brothers is inside the packet, 89.9% of the time
    • by Anonymous Coward on Monday June 30, 2008 @09:16AM (#23999147)

      Yeah but that's a cheat owing to the tubes. See, they route all traffic through a huge green pipe and listen for the "Gew gew gew" noise that signals the presence of a Mario Brother.
       
      Why would an ISP do Deep Mario Brother Inspection, I hear you ask? Well if you remember, those depths were filled with coins! There's no depth an ISP won't go in order to get those.

    • Re: (Score:3, Informative)

      Mario Brothers would never be in the packets, as they travel through pipes, not tubes. :-)

  • Just throttle ALL traffic from IP addresses that you consider "excessive."

    • Here's a novel idea: if you intend to sell metered service, sell metered service. Wow. That's just blowing me away with its simplicity. How could they have not thought of that?

      Call it "Bandwidth Plus" or something.

      Better yet, call your local politician and tell him it would be really cool if power districts could sell communications services, because, you know, they own the rights of way and the incumbent communications providers aren't interested in building out the post roads of the 21st century.

  • A reverse DNS lookup will tell you a lot about whether an IP you are sending to is a home user or a corporation. I wouldn't be surprised if they use this also (though Net Neutrality legislation might stop it).
  • Once word gets out that there's some restriction on a service people are used to, they will always find a way to beat it. Last century they tried to ban alcohol and that worked about as well as throttling packets will work here. Inevitably they will have to stop because they'll just force people into any goofy method that circumvents their restrictions.

  • by petes_PoV ( 912422 ) on Monday June 30, 2008 @09:21AM (#23999209)
    > have found a way to categorize the type of traffic that is hidden inside an encrypted SSH session ... They are achieving this by analyzing packet sizes and inter-packet intervals instead of looking at the content itself

    And in the next release (or two) of SSH implementations, this weakness will, no doubt, be fixed.

    Professional cryptographers have known for decades that you don't just switch on your transmitter when you want to send a secret message - no matter how well encrypted it is. The mere fact of traffic is frequently a sizeable tell-tale itself. Instead, you keep your transmitter on 24*7 sending encrypted garbage, with the ability to interleave genuine messages when the need arises. I'm sure that in a short time, the SSH people will remove the ability to profile the transmission to glean anything usable from it.

    • by Migraineman ( 632203 ) on Monday June 30, 2008 @10:11AM (#23999929)
      Exactly. If you look at the FIPS 140 documents [wikipedia.org], you'll see layers of data- and physical-security that need to be implemented. Currently, the SSH folks are only considering the raw data encryption requirement at the endpoints. The ISPs' analysis techniques will force the SSH folks to consider the end-to-end link as a single unit, and they'll implement more structures to deny the ISPs any visibility. I fully expect such a move to cost the ISPs more bandwidth. "All these channels look like random data, all the time." Yep.
  • by intx13 ( 808988 ) on Monday June 30, 2008 @09:24AM (#23999241) Homepage

    Attempts to analyze (and then throttle) Internet traffic reminds me of copy protection schemes. The schemes get more and more complicated (and costly) and at every turn the user gets more sophisticated in his or her attempts to get around the protection. ISPs would be wise to look at the music, movie, and in particular video game industries and realize that there are many, many more users who wish to use P2P software than there are ISP engineers who wish to throttle said users, and that it will always be a losing battle.

    Personally, I think the granularity of the ISP payment schemes needs to be increased. We pay for cell phone minutes in blocks of 100 or so (or by the minute, depending on your plan); we pay for electricity by the kWh, we pay for water by the gallon (or liter), and so on... why not pay for bandwidth by the Mb? In a perfect world (yeah, well, one can dream!) this would mean reduced costs for the average home Internet user, as most people aren't using anywhere close to what is available, and maybe slightly increased costs for people like me. But then at the same time throttling is no longer an issue. Of course in reality this is unlikely to happen any time soon; why charge responsible, realistic rates when you could charge a flat fee and then just block any traffic you don't like with increasingly expensive technology (and pass the cost on to your monthly subscribers, of course)?

    ISPs, learn from the "War on Copyright Violation" - you won't win this battle; give it up and fix the underlying problem.

    • I'm on Rogers in Canada and that's exactly what they have done. I'm a fairly heavy net user and average about 30 gig of usage per month, with a limit (before paying extra) of 70 gig (upload + download). I don't think torrents would be a problem if there wasn't a small select group of people turning their home Internet connection into a large 24/7 file server.
  • net neutrality (Score:2, Interesting)

    by jaymunro ( 906707 )
    Call me a troll, and I don't usually comment, however I don't think this is what "net neutrality" is about. If you want to be able to download anything and interrupt other people who want to surf freely, that is one thing, but if you just want to be able to surf freely without restriction being imposed by ISPs and such, that is a totally different kettle of fish.
    • by intx13 ( 808988 )

      I don't think this is what "net neutrality" is about. If you want to be able to download anything and interrupt other people who want to surf freely, that is one thing, but if you just want to be able to surf freely without restriction being imposed by ISPs and such, that is a totally different kettle of fish.

      You realize, of course, that "surfing" is shorthand for "downloading and then rendering as a web page"? The Web is just one system of protocols and file formats that is available on the Internet -

  • Not a Bell customer, but stuck using the Bell network (because they have the DSL last-mile monopoly here)...

    Bell doesn't even seem to bother inspecting my packets. As soon as I open up an SSH connection to my box (during peak hours, during off-time when they're known to relax throttling it's fine), things go slow as shit. Not just the encrypted traffic either... there seems to be an overall slowdown that hangs up other connections.

    And I'm 99% sure it's not my settings, because everything worked fine unt
  • It seems that all that needs to be done to solve this is to upgrade the backbone to allow each user an average download of two x264 movies a day or so, circa 10-20GB.
    No one is able to consume more than that, daily.

    Problem is that processing power is cheaper than fiber these days, so they analyze and throttle the packets, instead of increasing the bandwidth.

  • "Bend over and cough please"

  • by kenp2002 ( 545495 ) on Monday June 30, 2008 @11:03AM (#24000889) Homepage Journal

    Okay, before everyone starts their throttling engines for war please remember the following:

    A: ISPs are not throttling data because of bandwidth, they are throttling because of latency. If you do not understand the difference, here is a simple way to look at it:

    A router can handle a million packets a second, let's say. Whether the packet is 10 bytes or 1000 bytes, it still can only handle a million packets. Bandwidth is how many seats on the bus (or, if all the buses had the same number of seats, how many lanes on the road); latency is how fast the bus is going. A router is a toll gate. Too many buses, regardless of how many seats, will bog down the toll gate. P2P is very chatty in the number of packets and, depending on how it slices the data, sends either lots of big chunks or a whole hella lot of small chunks. Either way the guy working the toll gate is going to go postal at some point. (Rough numbers in the sketch after this comment.)

    B: Encryption, your rights online, data type, freedom, and all of that spurious crap we like to toss around means nothing when: "You sign a contract." While I am not a lawyer, I am an informed customer (I read the small print). When you sign up for Internet service, regardless of what you feel, or in fact what your rights are, you can and do sign most of those away when you sign up for a commercial service. If they say that you cannot encrypt your P2P traffic and you do, thus losing your service... that is more than acceptable under most nations' idea of contract law. You have no right to privacy if you sign a contract that gives them the right to look.

    Keeping A & B in mind, please feel free to march forward with your discussions, but the most important thing to remember is point A. Telling people there is plenty of bandwidth has LITTLE IF ANYTHING to do with throttling as far as I can tell. I watched 3 hearings on C-SPAN and not one rep from the big three telecoms mentioned BANDWIDTH as a reason, but I did hear 18 engineers talk about routers, MTU-initiated fragments, and total packets-per-second capacities on core routers, and I did keep count of bandwidth vs. latency.

    Bandwidth Mentioned: 34 times
    Latency: 400+ times (I ran out of chicken scratch space on the page and gave up...)

    Now I admit I did doze off after 30 minutes of an engineer trying to explain to a senate committee the difference between TCP and UDP but I am human after all.

    Now certainly there is some complexity in latency and bandwidth in how they are related, and from what I have heard fiber does take care of a lot of the latency issues (signal-to-noise ratio seemed to be a big talking point from some AT&T engineer who looked like Dracula), so feel free to toss that into the discussions.

    But seriously, this whole filtering stuff has nothing to do with bandwidth, so please, please, please, stop with the bad 3rd party reporting. We have already seen on /. that the ISPs aren't hurting for bandwidth.

    Getting accurate information from the mainstream press on Internet filtering is like asking a caveman to fix your car... all he's gonna do is smash it with a rock.
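A back-of-the-envelope version of point A's bus/toll-gate arithmetic (the one-million-packets-per-second figure is just the hypothetical used in the comment above):

```python
# A router limited by packets per second moves wildly different amounts of data
# depending on packet size, so small-packet-heavy traffic exhausts the pps budget
# long before it exhausts the link's nominal bandwidth.
PPS_LIMIT = 1_000_000  # hypothetical router forwarding capacity

for size_bytes in (64, 512, 1500):
    throughput_gbps = PPS_LIMIT * size_bytes * 8 / 1e9
    print(f"{size_bytes:>5} B packets -> {throughput_gbps:5.2f} Gbit/s at {PPS_LIMIT:,} pps")
```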

    • by Adeptus_Luminati ( 634274 ) on Monday June 30, 2008 @03:06PM (#24005217)

      What you said about the problem being latency is a little bit hard to swallow given that the core of most ISPs runs multi-terabit routers.

      The fact of the matter is that not only have router CPUs increased in power exponentially, but core router technology has also advanced to implement caching such as CEF (Cisco Express Forwarding) and to build additional CPUs into regular router blades, such as dCEF (distributed CEF), etc.

      Case in point, core routers these days have SO much spare processing power that most routing cores run VRF (virtual routing and forwarding), which allows a single physical router to VIRTUALLY pose as if it is 100 or even 1,000 different routers, all inside the same box.

      And further, the total throughput capacity of these routing processors today is measured in the TERABITS. The latest Cisco router can process some 15 terabits of traffic in a single box. Even if packet sizes were inefficient, you're still looking at 1+ terabits of throughput... which is many many many OC-192s (10 Gigabit SONET rings).

      So don't tell me we're hitting router processing capacity, because that's a complete joke, and if that were the case, Bell Canada would have been smart and presented that info right up front to the courts (they're currently being asked to justify why they throttle their end-users).

      I think what it actually may come down to is peering costs with other ISPs... which for the most part isn't a problem for the biggest players which are Tier 1 providers. Tier1 here is defined as a Telco/ISP that is so big (i.e. AT&T) that all other providers pay THEM for packets to traverse their network, and they in fact don't pay anyone or their peering costs are way lower than their peering income.

      So Tier1's aside, yes I can see ISPs having to fork out significant $$ for bandwidth per month, and of course torrent freaks doing 200+ GigaBytes/month are costing them significant money.

      just my $2.22 cents,
      Adeptus

  • Just thought I'd share a video relevant to the discussion: http://www.youtube.com/watch?v=Iw3G80bplTg [youtube.com]

  • by Adeptus_Luminati ( 634274 ) on Monday June 30, 2008 @11:37AM (#24001593)

    You'd think those ISPs *cough* Shaw Cable *cough* would have learned the lesson by now. That lesson came from wastin... I mean spending, MILLIONS and MILLIONS on products like Sandvine to try to throttle bittorrent, only to find out a few months later that people were bypassing it with encryption.

    So now some Italians can make predictions based on packet size etc... watch ISPs spend many more Millions implementing this, then the torrent client software guys simply change 10 lines of code, recompile and voila... Millions down the drain for ISPs!

    So go ahead, make my day! Just don't try to pass off those costs in your monthly bills to me.
    Adeptus

  • Illegal? (Score:3, Interesting)

    by kextyn ( 961845 ) on Monday June 30, 2008 @11:56AM (#24001937)
    When did P2P become illegal? It seems like every comment on this story talks about P2P like it's evil and needs to be stopped. I pay for an unlimited connection to the internet with a max speed of 30Mbps. I should be able to download and upload legitimate data as often as I'd like. And I do have a computer seeding torrents 24/7 which are completely legal. If Verizon doesn't like the fact that I'm constantly using most of my available upload then they should change the contract to say I can't do it. So far they haven't had any problems.
  • by John Sokol ( 109591 ) on Monday June 30, 2008 @01:12PM (#24003347) Homepage Journal

    I worked on implementing Error correction codes over IP some time back http://www.ecip.com/ [ecip.com]

    This is part of what we would call a family of "rude" protocols that do reverse throttling.

    All of these ISPs are counting on TCP being polite, but TCP is also counting on the network being passive, or at least polite, as well.

    In our case we originally implemented ECIP and SPAK when we had a 100 kbps video stream and 99 kbps gave us nothing but garbage, since video is all or nothing. http://www.videotechnology.com/jessem/all_or_nothing.html [videotechnology.com]

    But with ISPs taking a hostile approach, application writers could also start taking a more aggressive approach, in a sort of arms race.

    I know everyone has been afraid of this, but I feel that this is indeed a necessary step if some sort of truce is to be reached between USERS and their ISPs. Right now we are really fighting over our rights on how we can use the "last mile", since it's all now been consolidated into the hands of only a few companies. We have already lost our ability to choose and our market freedom.
