The Internet | Your Rights Online

Net Neutrality vs. Technical Reality 251

penciling_in writes "CircleID has a post by Richard Bennett, one of the panelists in the recent Innovation forum on open access and net neutrality — where Google announced their upcoming throttling detector. From the article: 'My name is Richard Bennett and I'm a network engineer. I've built networking products for 30 years and contributed to a dozen networking standards, including Ethernet and Wi-Fi. I was one of the witnesses at the FCC hearing at Harvard, and I wrote one of the dueling op-eds on net neutrality that ran in the Mercury News the day of the Stanford hearing. I'm opposed to net neutrality regulations because they foreclose some engineering options that we're going to need for the Internet to become the one true general-purpose network that links all of us to each other, connects all our devices to all our information, and makes the world a better place. Let me explain ...' This article offers great insight for anyone for or against net neutrality."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Marcion ( 876801 ) on Sunday June 15, 2008 @01:32PM (#23801969) Homepage Journal
    Since the Google throttling detector does not yet exist, does any bright spark know how to achieve the same result using software that already exists?
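    A rough way to approximate it today, with no special tooling: have a server you control stream identical payloads on an innocuous port and on a commonly shaped one, and compare rates from the client side. A minimal sketch in Python - the host, ports, and toy "SEND" protocol are hypothetical stand-ins, not a real service:

    ```python
    # Hypothetical setup: a server of yours streams the same bytes on
    # port 80 and on port 6881 (a classic BitTorrent port). If the 6881
    # run is consistently slower, something in the path is shaping it.
    import socket
    import time

    def measure(host, port, nbytes=5_000_000):
        s = socket.create_connection((host, port))
        s.sendall(b"SEND\n")  # toy protocol: server streams nbytes back
        got, t0 = 0, time.time()
        while got < nbytes:
            chunk = s.recv(65536)
            if not chunk:
                break
            got += len(chunk)
        s.close()
        return got * 8 / (time.time() - t0) / 1e6  # Mbit/s

    for port in (80, 6881):
        print(port, "->", "%.2f Mbit/s" % measure("test.example.org", port))
    ```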
  • ...or some such. Because those don't work on the scale of an ISP. It's simply much cheaper to add more bandwidth than to try to manage things with QoS.
    • Then why does pretty much every ISP use some form of QoS today?

      • by A beautiful mind ( 821714 ) on Sunday June 15, 2008 @01:52PM (#23802155)
        Because pretty much every ISP is part of a vertical monopoly, and QoS provides a convenient excuse to leverage their monopoly in one market to push their product in another.
        • s/monopoly/cartel but your point is valid.
        • (That's more than 50 per state, so if you don't patronize one, it's not their fault.) That's hardly a duopoly situation. However, independent ISPs often pay more for bandwidth than the cable and telephone monopolies. Some pay as much as $300 per megabit per second per month for their backbone connections. They are thus even more susceptible to being harmed if greedy content providers -- such as Vuze -- siphon off their bandwidth using P2P, or if bandwidth hogs overrun their networks. So, the issue is not on
          • Re: (Score:3, Informative)

            I live in Central PA, and we've basically got a duopoly between Comcast and Verizon (and no FiOS, just DSL). There are a few smaller companies like D&E or Commonwealth Telephone that run DSL in a few smaller suburban areas, but they only operate where Verizon doesn't. One company that I know of, PA Online, leases bandwidth from Verizon, so they're stuck hoping that their far better customer service is worth the extra $7/month that they have to tack on to Verizon's price. The big question is whether or not
          • Re: (Score:3, Informative)

            Well, where I live - along the border between 2 cities, I have access to 2 cable companies, WOW and Comcast, and AT&T. (There is also satellite broadband, of course, but my neighbors tell me the QoS is bad). There are several companies that resell AT&T DSL, and a few that run their own DSL over lines leased from AT&T, but all of these are dependent on AT&T's infrastructure, so are not really competitors. (I often wonder if AT&T saves money from not having to bill or provide customer suppor
          • duopoly (Score:5, Insightful)

            by falconwolf ( 725481 ) <falconsoaring_2000@yah o o .com> on Monday June 16, 2008 @12:32AM (#23806521)

            (That's more than 50 per state, so if you don't patronize one, it's not their fault.) That's hardly a duopoly situation.

            It is a duopoly if you only have 2 choices for broadband, and many don't have 2 choices. If you're lucky you have a choice between cable and DSL; many can't get either, and even if you can sign up with a third-party ISP, they still use either the cableco's or telco's lines.

            Rather, it's greed on the part of some bandwidth hogging users

            No, it's greed on the part of access providers. Nothing made them offer unlimited access plans, but once people took them up on the offer, they started crying. It's nothing more than offering more than they can provide, and that's a problem of their own making.

            Now, if they want to start charging some people more for using more bandwidth, then I want them to pay back the billions of taxpayer dollars [newnetworks.com] they got in subsidies to build out their infrastructure. They took the taxpayers' money and used it to boost their bottom line without doing what they were given the money to do.

            Falcon
        • by WolfWithoutAClause ( 162946 ) on Sunday June 15, 2008 @08:25PM (#23805101) Homepage
          That's true in America, maybe, but here in the UK there's no monopoly (you can switch ISPs fairly quickly and there are maybe a dozen or more to choose from), yet they do usually still use QoS to reduce the amount of file sharing somewhat at peak times, but mainly to improve VoIP and web performance.

          In other words, they use it more or less for what it's supposed to be for: to *make* stuff *work* rather than deliberately breaking stuff.

          I think Richard Bennett thinks it's OK to break stuff if it allows the telecoms company to make money. He seems to think that they don't make enough or something, and he's quite happy for that to be at the expense of the users' online experience.
      • Re: (Score:3, Interesting)

        by OldHawk777 ( 19923 ) *
        Consider the telecommunications infrastructure "IAP" access providers (CableCo/TelCo) as different from the "ISP" content/services providers (Google, Yahoo, MSN, /., SecondLife, Wired, PBS ...).

        In the past, the QoS bandwidth delivered by IAPs was found to be very questionable by the ISP customers who wanted to confirm that they were indeed receiving the QoS bandwidth for which they contracted and paid. The typical home/biz user is in the business of trusting their IAP and not verifying QoS and b
        • Comcast is a cable TV company that supports Net-Nepotism, because they are both an IAP (primary) and an ISP (secondary). Ending net neutrality would expand monopoly-like powers over the USA Internet for IAPs like Comcast, but not improve QoS bandwidth to urban and rural communities, small-biz, or citizens at home.

          Innovation requires investment and reinvestment ... the IAPs do not appear to have any great interest in expensive innovation/infrastructure investments that provide QoS bandwidth increases at capit
    • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Sunday June 15, 2008 @01:51PM (#23802143)
      I do not have the experience he has, but I see some strangeness in the phrases he uses.

      The Internet's traffic system gives preferential treatment to short communication paths. The technical term is "round-trip time effect." The shorter your RTT, the faster TCP speeds up and the more traffic you can deliver.
      Yes. And? Do I really want the server next to me to be as slow as the server in Tokyo?

      The Internet's congestion avoidance mechanism, an afterthought that was tacked-on in the late 80's, reduces and increases the rate of TCP streams to match available network resources, but it doesn't molest UDP at all. So the Internet is not neutral with respect to its two transport protocols.
      I'm not sure about this, but he's the expert, so I'll accept his claim. Still, wouldn't it be easier to add UDP management capabilities to the existing structure than any of the alternatives?

      VoIP wants its packets to have small but consistent gaps, and file transfer applications simply care about the time between the request for the file and the time the last bit is received. In between, it doesn't matter if the packets are timed by a metronome or if they arrive in clumps. Jitter is the engineering term for variations in delay.
      Wasn't that what Asynchronous Transfer Mode (ATM) was supposed to address?

      The Internet is non-neutral with respect to applications and to location, but it's overly neutral with respect to content, which causes gross inefficiency as we move into the large-scale transfer of HDTV over the Internet.
      Yes. And? So grabbing a huge file off of the server next to me is more efficient than a VOIP call to Tokyo. I'm not seeing the problem yet.
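      On the RTT point quoted above, the arithmetic is easy to check: with a fixed window, TCP throughput is capped at window/RTT no matter how fat the link is. A quick back-of-envelope in Python, assuming the classic 64 KB window purely for illustration:

      ```python
      # Throughput ceiling from window size alone: throughput <= window / RTT.
      WINDOW_BITS = 64 * 1024 * 8  # 64 KB window, no window scaling

      for rtt_ms, where in ((2, "server next door"), (150, "server in Tokyo")):
          mbps = WINDOW_BITS / (rtt_ms / 1000.0) / 1e6
          print(f"{where}: at most {mbps:.1f} Mbit/s")
      # server next door: at most 262.1 Mbit/s
      # server in Tokyo: at most 3.5 Mbit/s
      ```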
      • Re: (Score:3, Interesting)

        by Skinkie ( 815924 )

        The Internet is non-neutral with respect to applications and to location, but it's overly neutral with respect to content, which causes gross inefficiency as we move into the large-scale transfer of HDTV over the Internet.
        Unless some people finally get their managers to deploy Multicast on every medium they manage, I totally agree about the inefficiency.
        • by Stellian ( 673475 ) on Sunday June 15, 2008 @03:55PM (#23803243)
          Just forget about Multicast, it's a dead-end idea. Not because it's technically flawed (actually, it works pretty nicely), but because it ignores economics.
          A simplified economic model of the Internet calls for multiple levels of service providers that sell bandwidth to each other. So I, as your ISP / backbone provider, make money in proportion to the bandwidth you use. I have the option of enabling a technology that allows you to be more efficient and use less bandwidth, and therefore pay me less. Meanwhile, this technology offers no benefit to me; in fact it costs me money, the money needed to implement and manage it.
          To add insult to injury, this technology works properly only if all the hops between you and your destination have deployed it correctly. So a bunch of telcos whose primary business is selling bandwidth must jump through hoops to make your data transfer more efficient. No, it's not gonna happen.
          To be successful, Multicast must be completely redesigned from an economic perspective so as to provide an immediate benefit for the provider that uses it (if this is at all possible), without reducing its revenue potential.
          • by johndfalk ( 1255208 ) on Sunday June 15, 2008 @04:41PM (#23803627)

            Just forget about Multicast, it's a dead-end idea. Not because it's technically flawed (actually, it works pretty nicely), but because it ignores economics.
            Except that IPv6 uses multicast for pretty much everything. As the telcos upgrade to IPv6 they will be forced into using multicast. The telcos want to move your data as efficiently and at the lowest cost to them as possible while still charging you the same price. See: http://en.wikipedia.org/wiki/IPv6 [wikipedia.org]

            To add insult to injury, this technology works properly only if all the hops between you and your destination have deployed it correctly. So a bunch of telcos who's primary business is selling bandwidth must go trough hoops to make your data transfer more efficient. No, it's not gonna happen.
            Once again incorrect. You can tunnel multicast through devices that do not support it by having multicast point-to-point servers. We did this at I2 all the time to reach schools that weren't on the Abilene backbone. You would set up a server at the closest place that could receive multicast and then one at the destination, thus reducing congestion.

            To be successful, Multicast must be completely redesigned from an economical perspective such as to provide a immediate benefit for the provider that uses it (if this is at all possible), without reducing his revenue potential.
            It already does by reducing their costs associated with routing traffic.
            • Re: (Score:3, Informative)

              Except that IPv6 uses multicast for pretty much everything.
              On the local link, sure. Besides that, IPv6 doesn't use multicast any more than IPv4 and it certainly isn't required.
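              To make the link-local case concrete: IPv6 neighbor discovery sends its solicitations to a "solicited-node" multicast group derived from the target address (ff02::1:ff00:0/104 plus the low 24 bits). A small Python sketch of that derivation:

              ```python
              import ipaddress

              def solicited_node(addr: str) -> ipaddress.IPv6Address:
                  # ff02::1:ff00:0/104 with the low 24 bits of the unicast address
                  low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
                  base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
                  return ipaddress.IPv6Address(base | low24)

              print(solicited_node("fe80::1234:5678:9abc:def0"))  # ff02::1:ffbc:def0
              ```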
      • Re: (Score:2, Troll)

        by mrmeval ( 662166 )
        Like all self-professed experts, he's a well-paid-off self-professed expert.

      • by Jah-Wren Ryel ( 80510 ) on Sunday June 15, 2008 @02:37PM (#23802561)

        The Internet's congestion avoidance mechanism, an afterthought that was tacked-on in the late 80's, reduces and increases the rate of TCP streams to match available network resources, but it doesn't molest UDP at all.
        One very important point here is that this 'afterthought' in TCP works at the end-points. The network remains dumb; it is the end-points that decide how to do congestion management.

        Wasn't that what Asynchronous Transfer Mode (ATM) was supposed to address?
        Good point. ATM died because the benefits weren't worth the costs (much more complex hardware all around, never mind the protocol stacks).

        A related point that seems to run through the article is that more bandwidth is not the solution. But he doesn't explain why - for example

        This problem is not going to be solved simply by adding bandwidth to the network, any more than the problem of slow web page loading was solved that way in the late 90's or the Internet meltdown problem disappeared spontaneously in the 80's. What we need to do is engineer a better interface between P2P and the Internet, such that each can share information with the other to find the best way to copy desired content.
        In the first case I think he's completely wrong; more bandwidth is exactly what solved the problem, both in the network and in applications' use of that bandwidth (Netscape was the first to do simultaneous requests over multiple connections - which did not require any protocol changes). In the second case, he's talking about Bob Metcalfe (the nominal inventor of Ethernet and nowadays a half-baked pundit) predicting a "gigalapse" of the Internet specifically due to a lack of bandwidth...

        It's interesting to note that AT&T themselves have declared more bandwidth to be the solution. They didn't phrase it quite that way, but ultimately that's the conclusion an educated reader can draw from their research results: 1x the bandwidth of a 'managed network' requires 2x the bandwidth in a 'neutral network' to achieve the same throughputs, etc. Sounds like a lot, but then you realize that bandwidth costs are not linear, nor are management costs. In fact, they tend to operate in reverse economies of scale - bandwidth gets cheaper the more you buy (think of it as complexity O(x+n) due to fixed costs and the simple 1-to-1 nature of links), but management gets more expensive the more you do it, because the 1-to-1 nature of links gets subsumed by having to manage the effects of all connections on each other n-to-n style, for O(x+n^2). Ars Technica analysis of AT&T report [arstechnica.com]
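        The O(x+n) vs. O(x+n^2) claim is easy to play with numerically. A toy cost model in Python - every constant below is invented purely for illustration, not taken from the AT&T report:

        ```python
        # Capacity cost grows about linearly with size; managing
        # flow-on-flow interactions grows roughly with n^2.
        def capacity_cost(n, fixed=100.0, per_link=1.0):
            return fixed + per_link * n            # O(x + n)

        def management_cost(n, fixed=100.0, per_pair=0.001):
            return fixed + per_pair * n * n        # O(x + n^2)

        for n in (100, 1_000, 10_000, 100_000):
            print(f"n={n:>6}: capacity {capacity_cost(n):>12.0f}  "
                  f"management {management_cost(n):>12.0f}")
        # Past some scale the n^2 term dominates and overprovisioning wins.
        ```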
        • by hobbit ( 5915 ) on Sunday June 15, 2008 @03:21PM (#23802991)

          In the second case, he's talking about Bob Metcalfe (the nominal inventor of Ethernet...)
          It's particularly ridiculous to talk about how increasing bandwidth will not solve problems in the face of Ethernet, which has consistently beaten off all other comers by piling on the bandwidth even though its link utilisation is piss-poor...
          • Re: (Score:3, Informative)

            by hpa ( 7948 )

            It's particularly ridiculous to talk about how increasing bandwidth will not solve problems in the face of Ethernet, which has consistently beaten off all other comers by piling on the bandwidth even though its link utilisation is piss-poor...

            Ancient history. Very few Ethernet links today are CSMA/CD. Full duplex Ethernet is simply a point-to-point serial link which has no utilization degradation, and since switches replaced hubs, virtually all Ethernet links are full duplex.

      • Re: (Score:3, Informative)

        by Anonymous Coward
        No, it is not just simpler to add UDP management capabilities. The management capabilities in TCP are built into the NETWORK STACKS! And these network stacks may not play by the rules either. Read up on TCP Reno etc... Then you will understand what he is talking about.
      • by bruce_the_loon ( 856617 ) on Sunday June 15, 2008 @03:21PM (#23802989) Homepage

        Yes. And? So grabbing a huge file off of the server next to me is more efficient than a VOIP call to Tokyo. I'm not seeing the problem yet.

        The problem is subtle, and I've only seen it now that I've read TFA, although I've experienced it with our Internet connection at work.

        The sliding window mechanism - sending packets before the ACK of the previous one until you get a NACK, and then backing off - has an unpleasant side effect. An ACK train coming back over three hops from the local P2P clients or ISP-based servers moves faster than one heading across the world over 16 hops with higher ping times. Therefore the sliding window opens more, and the traffic over the three hops can dominate the link.

        Now add to that problem the BitTorrent clients, reported on earlier, that try for max bandwidth. That can force the window even wider.

        And once the DSLAM/switch/aggregation port is saturated with traffic, it will delay or drop packets. If those are ACKs from the other side of the world, that window closes up more. There goes the time-sensitive nature of VOIP down the toilet.

        On a shared-media network like cable, it doesn't even have to be you. If two people on the same cable are P2P transferring between each other, there goes the neighborhood. They dominate the line and the chap only using Skype down the road wonders why he isn't getting the performance he needs.

        I'm opposed to price-oriented non-neutral networks - your ISP charging Google for your high-speed access to them. But a non-neutral network that does proper QoS by throttling bandwidth-heavy protocols that don't behave themselves on the network is acceptable, as long as the QoS only moves the throttled protocols down when needed.
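        The sliding-window effect described above can be put in rough numbers: in congestion avoidance TCP grows its window by about one segment per RTT, so the short-RTT flow both ramps faster and sustains a higher rate for the same window. A toy Python sketch, not a real TCP simulation:

        ```python
        # Illustrative only: +1 MSS per RTT, starting from one segment.
        MSS = 1460  # bytes

        def window_after(seconds, rtt, cwnd=MSS):
            for _ in range(int(seconds / rtt)):
                cwnd += MSS
            return cwnd

        for rtt, label in ((0.020, "3-hop local peer"), (0.300, "16-hop distant peer")):
            w = window_after(5.0, rtt)
            print(f"{label}: window ~{w // 1024} KB, ~{w * 8 / rtt / 1e6:.1f} Mbit/s")
        # The local flow ends up orders of magnitude faster and crowds
        # the shared bottleneck, which is the parent's point.
        ```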

        • NACK/ACK (old S&F/RUID terms) is not an IP responsibility. ACK/NACK for TCP packet delivery failure is only noticeable at the destination client/server computer .... The IP part is the only part used by the IAP (CableCo/TelCo) infrastructure; there is no consideration of the content of TCP packets, failure to deliver, and/or the order/time of delivery. The TCP origin of an email/file does not need any ACK-confirmation that a packet was received at the intended destination, but the TCP origin does require a
        • by nuintari ( 47926 ) on Sunday June 15, 2008 @10:57PM (#23805971) Homepage

          I'm opposed to price-oriented non-neutral networks, your ISP charging Google for your high speed access to them. But a non-neutral network that does proper QOS by throttling bandwidth-heavy protocols that don't behave themselves on the network is acceptable. As long as the QOS only moves the throttled protocols down when needed.

          Thank You!

          I work for an ISP, and net neutrality scares the hell out of me. We do not want to, and will not, throttle back certain sites who won't pay us for premium access, or create a tiered pricing structure for our customers. What I want is the right to manage my network to give my customers the best performance by de-prioritizing badly written and poorly behaving protocols, AKA 99% of all p2p stuff.

          We also don't want to see content providers shift their bandwidth costs onto the ISP networks via the use of p2p. Why pay for expensive backbone links when you can shove 50% or more of your bandwidth onto your customers, and their provider's network? Either let us ISPs manage our networks, or we will start charging for upload bandwidth on a usage basis. I really don't want to do this, but if net neutrality becomes a reality, I see this becoming a very popular way to save on bandwidth costs. Blizzard already does it: patches for World of Warcraft are distributed via BitTorrent. Why they think it is appropriate for their service to be offloaded onto my network is beyond me, but they do. When I can't rate limit BitTorrent, and it becomes a huge bandwidth hog, my customers that patronize services that are the source of the problem will see their bills go up.

          Thank you, I finally read a post from someone who gets it. I didn't think that would ever happen.

          Oh, and any replies to the effect of, "well, it's your own fault for not having enough bandwidth" can just go eat a dick. I have bandwidth, and that is not the point. The point is content providers should provide their own bandwidth, not leech it from the ISPs in the name of the heavenly, super great, don't-ever-question-it p2p software demi-god.

          Man, I got way off target there.
      • I do have some old experience, and I see some BS in the phrases he uses.

        The Internet's traffic system does not give preferential treatment to short/fast communication paths unless you are stupid enough to configure your network/telecommunications backbone architecture for S/FPF rather than route on QoS metrics and implied content criticality. TCP is ignored by the backbone; it is part of the payload and cannot route. Only the IP part is the destination/route information used for p
      • The Internet's traffic system gives preferential treatment to short communication paths. The technical term is "round-trip time effect." The shorter your RTT, the faster TCP speeds up and the more traffic you can deliver.

        Yes. And? Do I really want the server next to me to be as slow as the server in Tokyo?

        His point is that firms like Akamai leverage the fact that shorter round-trip times mean preferential treatment. It's a hack to get around the TCP design.

        The Internet's congestion avoidance mechani

      • by 3vi1 ( 544505 )
        >> "I'm not sure about this. But he's the expert so I'll accept his claim. But wouldn't it be easier to add UDP management capabilities to the existing structure than any of the alternatives?"

        He's correct. UDP doesn't have any kind of window size scaling (since it's not session-oriented). So, if a lot of packets are being dropped, it would be up to the application layer to throttle itself. Since UDP is non-guaranteed anyway, apps generally won't do that.

        Sure, you can just discard the UDP packets in t
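        Since UDP leaves rate control entirely to the application, a well-behaved UDP app has to pace itself. A crude sketch of what that self-throttling looks like in Python (the destination address and frame size are placeholders):

        ```python
        import socket
        import time

        def send_paced(sock, addr, packets, rate_pps=50):
            # Fixed-rate pacer. A real application would also watch for
            # loss reports from the receiver and reduce rate_pps accordingly.
            interval = 1.0 / rate_pps
            for payload in packets:
                sock.sendto(payload, addr)
                time.sleep(interval)  # naive pacing; good enough for a sketch

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        frames = [b"\x00" * 160] * 100                 # VoIP-ish 160-byte frames
        send_paced(sock, ("127.0.0.1", 9999), frames)  # placeholder destination
        ```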
    • Re: (Score:3, Interesting)

      by arth1 ( 260657 )
      QoS doesn't work well because it can only be implemented in a few ways:

      1: By discarding any QoS information in the packet as it crosses your perimeter, and replacing it based on a guess done by deep packet inspection. Not only is this modifying data that wasn't meant to be modified, and thus legally no different from the dubious practice of rewriting HTML pages to show your own ads, but it also opens the question of whether you can claim to be a common carrier as long as you open every envelope to look at
      • Re: (Score:3, Interesting)

        by Kohath ( 38547 )
        Why wouldn't you use or discard the QoS information based on the source and/or destination of the packets?

        If my company wants to use VOIP telephony between our branch offices and we want to pay extra for it to actually work right, but we don't want fully-private lines because it's wasteful and more expensive, then an ISP could offer us QoS on that basis. But they don't.
        • by arth1 ( 260657 )
          Because it's not all that useful. QoS is really only useful to prioritize packets going in the same direction, and packets that really are timing sensitive. If you have packets going to and from twenty different perimeter gateways, but colliding at central hubs, it won't help much to base QoS simply on source/destination. Prioritizing all the packets when someone is downloading a huge file might then break streaming audio arriving at the same hub. That's not really useful.
          All it ends up doing is making
          • by Kohath ( 38547 )
            I'm not understanding your argument, I guess.

            QoS is really only useful to prioritize packets going in the same direction, and packets that really are timing sensitive.

            That's why I want to buy it for my VOIP packets between my branch offices.

            If you have packets going to and from twenty different perimeter gateways, but colliding at central hubs, it won't help much to base QoS simply on source/destination. Prioritizing all the packets when someone is downloading a huge file might then break streaming audio arriving at the same hub. That's not really useful.

            That's why I want to pay extra. So my VOIP packets get priority. I wouldn't prioritize download packets. The ISP would presumably offer me a service to just allow a certain amount of prioritized VOIP traffic on a connection from a well-defined source and destination. I'd configure QoS for those packets and ask the ISP to honor it and use it. I'd pay an extra fee.

            I still don't underst

    • by ffejie ( 779512 )
      Yeah, nothing is quite so easy as adding another multi-million dollar router and new long haul optical gear and then provisioning the whole thing.

      It's much harder to configure QoS.

      I think you have it backwards.
  • Multicast? (Score:4, Insightful)

    by Anonymous Coward on Sunday June 15, 2008 @01:51PM (#23802145)
    AFAIK services like FiOS and U-verse handle HDTV over IP by making the breakout box an IP multicast client.

    He completely ignores multicast in the paragraph about HDTV being trouble for the Internet, and someone should at least explain why it's not relevant. Otherwise it kind of sinks his battleship w/r/t that argument, IMO.
    • Re:Multicast? (Score:5, Interesting)

      by niceone ( 992278 ) on Sunday June 15, 2008 @02:06PM (#23802275) Journal
      He completely ignores multicast in the paragraph about HDTV being trouble for the Internet, and someone should at least explain why it's not relevant. Otherwise it kind of sinks his battleship w/r/t that argument, IMO.

      Multicast only works if Internet TV is going to be like regular TV, where a show is aired at a particular time. If it's going to be more like YouTube on steroids, multicast doesn't help.
      • Re:Multicast? (Score:4, Informative)

        by Skinkie ( 815924 ) on Sunday June 15, 2008 @02:25PM (#23802449) Homepage
        YouTube on steroids is geographic caching. But even if two people on the same network are watching the same video, it should be an option to receive the network data that is being sent for the position the other person is currently watching.

        But the problem with multicasting is not that there are no tools, but it is not 'neutrally' implemented across different carriers that deploy access networks.
        • Re: (Score:3, Interesting)

          I don't get how it would work for 2 people to watch the same video simultaneously without A) depriving Google of hits thereby decreasing profit by ads B) Ignoring cookies C) Invading privacy. For example, how would ads work? When I go to Youtube to watch a video (and have disabled AdBlock and my /etc/hosts file) the ad sees that I am *insert IP address here* and Google can charge the maker of the ads say $.01 per view, so Google gets a penny richer and the company gets a penny poorer. So when I get this fro
          • Re: (Score:3, Interesting)

            by Skinkie ( 815924 )

            I don't get how it would work for 2 people to watch the same video simultaneously without A) depriving Google of hits thereby decreasing profit by ads B) Ignoring cookies C) Invading privacy.

            Player A uses a multicastable flash video tool.
            Player A requests a video using this tool, and subscribes to a multicast stream that is returned by the server.
            Player A is watching; the stream starts from 0.

            Player B uses the same flash video tool.
            Player B requests a video using this tool, and subscribes to the existing multicast stream, and to a new one starting from 0.
            Player B now receives the data that is transmitted for player A, and the new data starting from 0.
            Player B is watching, using the available stre
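            For reference, "subscribing to an existing multicast stream" is a single socket option on the receiving side; the hard part is router support along the path. A minimal Python receiver, with group address and port invented for the example:

            ```python
            import socket
            import struct

            GROUP, PORT = "239.1.2.3", 5004
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(("", PORT))
            # Join the group on the default interface.
            mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                               socket.inet_aton("0.0.0.0"))
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

            while True:
                data, src = sock.recvfrom(2048)
                # hand 'data' to the player's buffer at the stream's offset
            ```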

      • by Anonymous Coward

        No, listen, really, it'll be great. What we need is for each ISP to host a single system that stores content. This system then talks to the systems of other ISPs and propagates that data so that it is stored very closely to the user base... solving the multicast timing issue... Oh, wait... that was Web 0.1, and ISPs are now dropping the protocol [slashdot.org] because Andrew Cuomo's been wackin' it to 88 kiddy fiddler newsgroups. He feels so guilt-ridden about it, he wants the entire Usenet shut down. You know it's true.

      • Re: (Score:3, Interesting)

        More and more bandwidth providers are switching to charging based on usage rather than a flat rate for access. If this trend continues, multicast could become very attractive.

        Suppose you have two ways to watch shows: one is on-demand, click-and-get-it-this-second access. This option will never go away, but you can expect to be charged full bandwidth price for this option. The second choice is to select a few shows ahead of time. You would then subscribe to the multicast broadcast (which might be repeated ev
    • Re:Multicast? (Score:4, Insightful)

      by PCM2 ( 4486 ) on Sunday June 15, 2008 @02:08PM (#23802293) Homepage
      And maybe I don't understand how multicast really works ... but it seems to me that multicast made a lot of sense as a solution back when everybody was used to watching the same show at the same time every week and then waiting for the reruns to see it again. These days everyone is getting more and more used to watching their shows anytime they feel like it, and On Demand is one of the top selling points of a lot of digital cable packages. It doesn't seem like multicast is going to be much help if you're committed to letting each individual viewer start and stop the show at the precise second they choose.
      • Re: (Score:3, Interesting)

        by Antity-H ( 535635 )
        that is not a problem in itself: you are already used to waiting while the system buffers the stream. If multicast allows more efficient management of the bandwidth, all you have to do is schedule sessions every 30 seconds or, say, every 50 users, and start the multicast.

        This should already help, right?
      • Re:Multicast? (Score:4, Interesting)

        by cnettel ( 836611 ) on Sunday June 15, 2008 @03:05PM (#23802835)
        No, but you can do more complex scenarios. Let's say that we pipe the first sixty seconds through unicast. If the bandwidth of your end pipe is really four times the video rate, you could pick up a continuous multicast loop anywhere within three minutes of the start, and then just keep loading from that one, buffering locally. You need your local pipe to be wide enough that you can buffer up material while playing the current part, but even if the multicast is just done at realtime video speed, and there is a single one looping continuously, you can expect to be able to switch from the unicast feed to the multicast one after half that time.

        If you want on-demand, and NO local storage, then you are indeed in trouble.
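        The numbers roughly check out; here is one loose way to make the "three minutes" come out, under the same assumptions (60 s unicast head, pipe at 4x the realtime video rate):

        ```python
        # Loose arithmetic, mirroring the parent's assumptions.
        pipe = 4.0   # local pipe speed, in multiples of the realtime video rate
        head = 60.0  # seconds of video delivered up front by unicast

        # While the 60 s head plays, the pipe can move pipe * head = 240 s of
        # video. The head itself accounts for 60 s of that, which leaves room
        # to backfill a gap of:
        gap = pipe * head - head
        print(f"{gap:.0f} s, i.e. {gap / 60:.0f} minutes")  # 180 s -> 'three minutes'
        ```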

    • by nxtw ( 866177 )

      AFAIK services like FiOS and U-verse handle HDTV over IP by making the breakout box an IP multicast client.

      FiOS TV service is standard cable TV that runs over fiber right up to the customer's home - thus, it works with analog tuners, unencrypted QAM tuners, and CableCard devices.

      I would guess that Verizon went this route (instead of going over IP, like U-verse) for a good reason. AT&T didn't, and their service is limited in the number of simultaneous streams.

    • IP multicast doesn't actually work on today's Internet - most networks don't support it, as it's hard to figure out how to manage billing.
  • by Nom du Keyboard ( 633989 ) on Sunday June 15, 2008 @01:54PM (#23802161)
    I am in favor of Net Neutrality regulations and laws, not because I like regulations and laws (I don't), but because I am finding them necessary in this case.

    We supposedly have Truth in Advertising laws already on the books, but super-fast, all-you-can-eat Internet connections are still being advertised. I'd start by applying the existing law to those claims.

    I'd like to be sold a truthful amount of bandwidth (DSL tends to be more honest in this area than cable), and not some inflated peak amount that I can only hit when going to the cable-sponsored local bandwidth tester site. And when I have that honest amount of bandwidth available to me, I want to be the one to set the QoS levels of my traffic within that bandwidth amount - not the cable company. When I know what I have available to me, then I can best allocate how to use it.

    First the cable companies started killing BT, and other filesharing apps to some lesser degree. I believe that to have been a red herring. When that was loudly complained about, they offered to just cap usage in general instead of limiting certain bandwidth-intensive applications.

    Who does this benefit? The cable companies, of course. Think of the business they're in. They deliver video. But so do a lot of other people on the Internet. Kill everybody else's video feeds because that is the high bandwidth application for the rest of us and pretty soon you'll only be able to receive uninterrupted HD video over your broadband connection from your local cable company. They will become a monopoly in video distribution (and charge every provider for distributing their videos), and all because we insisted that they throttle all traffic equally on their vastly oversold networks.

    All they're waiting for is DOCSIS 3.0 to roll out so that they can promise us even more bandwidth that we can't use, since they won't even let us use our promised current bandwidth under DOCSIS 2.0. A royal screwing is on its way if your cable ISP in particular isn't clamped down on hard by the federal government by way of the FCC.

    And why does it have to be the federal government and the FCC? Because the cable companies have already managed to get all local regulation preempted by the federal government to avoid more stringent local rules, so the feds are the only ones left who are allowed to do it!
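    On the "I want to set the QoS levels of my traffic" point above: applications can already mark their own packets with a DSCP code point; whether anyone upstream honors the mark is exactly what is in dispute. A minimal Python sketch (the destination address is a placeholder):

    ```python
    # Mark outgoing packets with DSCP EF (46), the code point commonly
    # used for voice traffic.
    import socket

    EF_TOS = 46 << 2  # DSCP occupies the upper six bits of the old TOS byte
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
    sock.sendto(b"voice frame", ("192.0.2.10", 5004))
    ```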

    • by techno-vampire ( 666512 ) on Sunday June 15, 2008 @02:35PM (#23802543) Homepage
      We supposedly have Truth in Advertising laws already on the books, but super-fast, all-you-can-eat, Internet connections are still being advertised. I'd start by applying the existing law to those claims.


      It wouldn't do any good, because of the weasel words in the advertisements. You see, they don't say you'll get N Mbits/second, they say, "...up to N Mbits/second." And, what they say is true, because your equipment is capable of handling that much bandwidth and your cable connection can carry it if it's provided. Of course, what they don't tell you is that they don't have enough bandwidth available to give every customer a connection like that, so the fact that your equipment could handle it is irrelevant. It's just like a car manufacturer telling you that their newest line can go from 0->150 mph in X seconds, but not reminding you that the legal limit is 65. What they say is true, even though they don't tell you all the truth.

      • Re: (Score:2, Funny)

        by hobbit ( 5915 )
        Bad analogy. It's more like getting you to pay for a car that can go 0-150 in X seconds, then trying to fob you off with a bus pass.

  • by Yxven ( 1100075 ) on Sunday June 15, 2008 @01:57PM (#23802187)
    I think the article has some valid points regarding the technical aspects of the Internet, but I don't understand why those aspects make net neutrality legislation a bad thing. My understanding of net neutrality is that people want the Internet to remain neutral. They do not want providers to charge favorable rates to their friends and extortionist rates to their competitors. They do not want small ISPs forced out of the market. They do not want websites and users to be double-charged for the same use. I don't see how any of these issues are technical. I don't see how legislation that would keep things fair also would eliminate an ISP's ability to improve the performance of jitter sensitive applications as well as jitter insensitive applications. I mean you could argue that it'd be legislated wrong, and you'd probably be right. But from a technical standpoint, assuming it's legislated correctly, why is net neutrality technically impossible? Or am I completely misunderstanding the net neutrality issue?
    • by niceone ( 992278 ) on Sunday June 15, 2008 @02:36PM (#23802549) Journal
      Or am I completely misunderstanding the net neutrality issue?

      No, it seems to me you understand it perfectly. However, TFA seems to be blurring the lines between net neutrality and treating traffic differently. For instance, if it were technically necessary to treat all voice packets as high priority (it seems it isn't, as VoIP works, but for the sake of argument), then there's nothing to stop a standard being agreed and implemented on a neutral Internet, just so long as the voice packets are treated the same no matter who is sending and receiving them.
      • Bang-on. (Score:5, Insightful)

        by weston ( 16146 ) * <westonsd&canncentral,org> on Sunday June 15, 2008 @03:43PM (#23803167) Homepage
        This is the important distinction. It's not traffic type neutrality that's the essential character of an appropriately neutral net, it's source-destination neutrality.

        (A non-type-neutral net has some of its own problems, but not the same ones as a non-source-destination-neutral net, and there's a good argument that the latter is more important.)
      • Re: (Score:2, Informative)

        Or am I completely misunderstanding the net neutrality issue?

        No, it seems to me you understand it perfectly. However TFA seems to be blurring the lines between net neutrality and treating traffic differently.

        Here is the main technical problem that TFA ignores entirely, and it is the central problem that network neutrality seeks to resolve: QoS and filtering aren't just applied to protocols and ports -- they are applied to individual IP Addresses, and to suppress new services!

        I'm perfectly happy to give VoIP ports a higher priority QoS than file transfers, which tend to be more "bursty" anyway. I just don't think the ISP has the right to determine that VoIP connections to Vonage or Skype have higher priority th

  • Missing the point? (Score:5, Insightful)

    by JustinOpinion ( 1246824 ) on Sunday June 15, 2008 @02:00PM (#23802211)
    His analysis is in many ways good... but seems ridiculously idealistic. He emphasises:

    Where do we turn when we need enhancements to Internet protocols and the applications that use them? Not to Congress, and not to the FCC. ... Engineers solve engineering problems.
    (Emphasis in original.)

    Probably most of us agree with that statement in principle. The problem is that the various players in this (users, content providers, and network operators) do not have their objectives aligned. Thus, the engineers for the network operator will come up with a solution (e.g. throttling) that solves the network company's problem (users using too much of the bandwidth they (over)sold), but the engineers working for the users (e.g. people writing P2P apps) will engineer for a different objective (maximum transfer rates), and will even engineer workarounds to the 'solutions' being implemented by the network.

    The problem is thus that everyone is engineering in a fundamentally adversarial way, and this will continue so long as the objectives of all parties are not aligned. Ideally, legislation would help enforce this alignment: for instance, by legally mandating an objective (e.g. requiring ISPs to be transparent in their throttling and associated advertising), or funding an objective (e.g. "high-speed access for everyone"), or by just making illegal one of the adversarial actions (e.g. source-specific throttling).

    This is not purely an engineering question. The networks have control of one of the limited resources in this game (the network of cables already underground; and the rights required to lay/replace cables), and this imbalance in power may require laws to prevent abuse. It's not easy to create (or enforce) the laws... and ideally the laws would be informed by the expertise of engineers (and afford ways for smarter future solutions to be implemented)... but suggesting that we should just let everyone 'engineer' the solution misses the mark. Whose engineers? Optimizing for what goal? Working under what incentives?

    Put more simply: engineering is always bound by laws.
    • Re: (Score:3, Informative)

      Parent post has to be one of the most clear, cogent, and effective rebuttals of the arguments made in the original article. One must always be mindful to consider the social, economic, and regulatory environment in which engineers--and by extension, the technologies they create--operate. And the author of the article simply fails to do this by viewing the problem as (in the words of parent post) "purely an engineering question."

      I had mod points a few days ago but they expired. So this is my way of ma

    • by spazdor ( 902907 )
      I would also mod you up if I had the points. Engineers will solve the engineering problems, but we can't expect them to solve their employers' conflicts of interest.
  • by Kohath ( 38547 ) on Sunday June 15, 2008 @02:06PM (#23802271)
    No net neutrality these past 5 years has meant ... what exactly? What is the horrible problem we've all had to endure because the government hasn't forced ISPs (against their will) to operate in "the preferred way"?
    • by Anonymous Coward on Sunday June 15, 2008 @02:52PM (#23802689)
      ISPs have been operating "the preferred way" out of convention, in keeping with the norms of the Internet, for some time now. But they have only recently signaled their intent to deviate from historical principles in order to pursue additional sources of revenue.

      Their intended path optimizes the Internet in their own favor, and works against the Internet as a whole. They're saying, "Yes, we like the Internet. But you're going to like our take on the Internet even better, want it or not." They're bundling "their way" over what should be a common carrier type situation.

      So, it is like asking, "No net neutrality for telephone calls over the past 5 years has meant... what exactly?" Nothing, because the telephone companies have kept with the status quo, and not introduced 'features' that degrade the overall value of the network. Were they to announce an intent to do this, you'd see telephone neutrality legislation bounced around.

      "But we don't need telephone neutrality legislation! If you legislate the telephone system, then it will kill innovation!" See? We're blaming the wrong folks here. It isn't the customers or the legislators. It is the carrier rocking the boat, and then crying foul when people try to address their money making schemes.
    • by Lunatrik ( 1136121 ) on Sunday June 15, 2008 @03:22PM (#23802995)
      Comcast and Bittorrent [torrentfreak.com]? Deep Packet Inspection [p2pvine.com] commencing by Time Warner and Comcast? And, Today on slashdot, Verizon preventing access [slashdot.org] to a chunk of usenet?

      Either you're trolling or you live in a cave.
      • Verizon isn't preventing access to anything; they are only not carrying alt.* themselves - there's nothing preventing you from getting it from another provider.
      • by Kohath ( 38547 )

        Comcast and Bittorrent?

        Did you read the article? I don't think you read the article.

        Also, I thought net neutrality was supposed to be about treating everyone's comparable traffic the same and not charging extra for preferred delivery of packets. Is there any evidence that Comcast is treating one type or one company's BitTorrent traffic differently than some other type? Are they charging someone extra for preferred delivery? I have not heard that allegation. Are you making it now?

        I'm not sure what you're saying about deep packet inspe

        • Re: (Score:2, Insightful)

          by Lunatrik ( 1136121 )
          I don't have the time to respond to all of your comments, but your limitation of net neutrality to a concept which is "supposed to treat everyone's comparable traffic the same and not to charge extra for preferred delivery of packets" is not only incorrect, but even by that definition Comcast violated net neutrality.

          From Wikipedia (very well cited, check it yourself):
          "A neutral broadband network is one that is free of restrictions on the kinds of equipment that may be attached, on the
          • by Kohath ( 38547 )
            So if any way of transferring a file takes any more time than another way, Comcast is guilty. It sounds unreasonable.

            Also the article talks about this.
  • by Whuffo ( 1043790 ) on Sunday June 15, 2008 @02:06PM (#23802279) Homepage Journal
    While some good points are made about the current state of the internet and how technical improvements could be made - his article lost credibility at the point where he states that the proper way to correct the problems is for "industry" to do it.

    Of course, the "industry" he's talking about are the corporations that control large chunks of the infrastructure. As we've established time and time again, those corporations aren't acting in the public interest. Their only interest is in what makes their corporation the largest profit. To those interests, blocking competing services or forcing popular websites to pay more to stay online are quite reasonable things to do.

    This is why net neutrality is such an important idea. Look at what has been accomplished so far with our "ad hoc" arrangement of computers connected to a crazy quilt of networks. All that you see is just the beginning - but a better future will never come to pass if the corporate interests are allowed to filter / segregate / block network traffic.

    Think about it for a minute: consider AT&T. They own a substantial amount of internet infrastructure and they're also the major telephone company. When they look at Skype and discuss how to limit the loss of business to this competitor - you'd better believe they consider blocking VOIP on the backbone. Call it a benefit to the customer and put a competitor out of business; another good day in corporate headquarters.

  • by kherr ( 602366 ) <kevin&puppethead,com> on Sunday June 15, 2008 @02:08PM (#23802295) Homepage
    Yes, there are technical reasons to shape traffic to optimize network flow. But the problem is that the large ISPs are using business, not technical, reasons to determine the network traffic policies. If companies like Comcast, Time Warner and Virgin Media could be trusted to base network design on technical issues, that'd be a nice utopia.

    But we know these companies are instead targeting packets that they see as business competitors, so they are not making sound technical decisions. I say it's better to make it harder for a perfect network than to allow corporate interests to balkanize the internet for their greedy purposes.

  • by erlehmann ( 1045500 ) on Sunday June 15, 2008 @02:15PM (#23802349)

    Over-the-air delivery of TV programming moves one copy of each show regardless of the number of people watching, but the Internet transmits one copy per viewer, because the transport system doesn't know anything about the content.

    One word: Multicast [wikipedia.org].

  • by BlueParrot ( 965239 ) on Sunday June 15, 2008 @02:18PM (#23802373)
    The main issue here is not whether companies double charge for bandwidth, or charge per use, or don't offer this or that service. The issue is that if you allow a situation where a company like AT&T can make a deal with Microsoft to prioritize their traffic, then it will eventually end up in a situation where a cartel of companies keeps smaller competing ISPs and content providers out of the market by artificially degrading their connections.

    Furthermore, because the communications infrastructure is partially government funded, and as the radio frequencies are government controlled through the FCC, the "free market" argument doesn't hold water. There are numerous barriers to entry into the ISP market, both government-imposed as well as technical ones, and thus coercive monopolies will be able to form unless actively restrained by the government.

    This doesn't necessarily say much about HOW you should regulate the market, but it pretty much implies that simply leaving ISPs to screw over customers and smaller competitors is a big no-no. Completely free unregulated markets only work when there are low barriers to entry, many suppliers, no external costs or benefits, perfect customer insight into the market, completely homogeneous and equivalent services being offered, zero cost of switching supplier, and no barriers to trade. The number of markets in which that applies can be counted on fewer hands than most people have.
  • by Cracked Pottery ( 947450 ) on Sunday June 15, 2008 @02:33PM (#23802521)
    I can understand charging for lower latency time, higher bandwidth or other aspects of higher quality of service, and even at reasonable prices for large amounts of data exchange usage. What should not be permitted are corporate level deals that create content favoritism based on the source and nature of the content, whether from direct monetary consideration or corporate partnership or favoring in-house content or services.


    Especially offensive is any sort of attempt at frustrating the dissemination of content based on political bias. The cable companies that own most of the broadband ISP's would love to model the Internet after their cable TV business. They have a news product that has done just a terrific job at political neutrality, and they would love to extend that model to Internet services.

  • 300 baud (Score:3, Insightful)

    by Haxx ( 314221 ) on Sunday June 15, 2008 @02:34PM (#23802533) Homepage
    Those of us that have been here a while, the people that used to watch the blocks move across the screen at 300 baud, can see another of many drastic changes coming in the way the huge ISPs will handle content. There was a time when ISPs were everywhere. They were small companies with access to local dial-up node sites. Then AOL had 10 million people convinced that they were actually the whole Internet. Today high-speed Internet has given birth to behemoth ISPs that were huge cable/telephone/satellite companies years before. These companies may eventually package web access the same way they package movie channels. After a few years of this the smaller ISPs with open access will be back, and the cycle will repeat in new and strange ways.
  • What crap (Score:5, Interesting)

    by Anonymous Coward on Sunday June 15, 2008 @02:41PM (#23802593)
    "I know that's not true. The Internet has some real problems today, such as address exhaustion, the transition to IPv6, support for mobile devices and popular video content, and the financing of capacity increases. Network neutrality isn't one of them."

      The effen telcos already got paid 200 billion dollars to do something about getting fiber to the premises and blew it on anything but that. Where's the "political engineering" solution to look into that, to determine where the "QoS" broke down at ISP intergalactic central? Where are the ISP and telco fatcats sitting in front of congressional hearings explaining what happened to all that freekin money? Where did it go - real facts, real names, real figures.

      And why in the hell does the bulk of the public airwave spectrum only go to the same billion-dollar corporations, year after decade after generation, instead of being turned loose for everyone - you know, that "public" guy - to use and develop on? Why the hell do we even *need* ISPs anymore, for that matter? This is the 21st century; there are tons of alternative ways to move data other than running them through ISP and telco profitable choke points, and all I am seeing is them scheming on how to turn the Internet into another bastardized combination of the effen telco "plans" and cable TV "plans". Really, what for?

        Where's the mesh networking using long-range free wireless and a robust 100% equal client/server model that we could be using instead of being forced through the middleman of ISPs and telcos for every damn single packet? And what mastermind thought it was a good idea to let them wiggle into the content business? That's a big part of the so-called problem there: they want to be the tubes plus be the tube contents, and triple charge everyone, getting paid at both ends of the connection plus a middleman handling fee for... I don't know, but that is what they are on the record wanting, and industry drools like this doofus are providing their excuses. Not content with hijacking all the physical wired reality for 100 years now, they get to hijack all the useful wireless spectrum, and no, WIFI DOESN'T CUT IT. That's at the big fat joke level in the spectrum for any distance.
  • by Zombie Ryushu ( 803103 ) on Sunday June 15, 2008 @02:44PM (#23802615)
    Look, the fact is that the telcos are engaged in a criminal conspiracy to censor the Internet. Of course tiered rates for bandwidth usage will always be there. That's been the way of the world since broadband began. Anti-net-neutrality is about WHAT you can access, not how fast you can access it; people who advocate against net neutrality are advocating FOR Internet censorship.
  • His opinion.

    Personally, I do not see any problem with modifying traffic for whatever reason THE CUSTOMER WHO IS PAYING THE BILL has.

    For the provider to do so smacks of WAY too much power in the hands of a few people to manipulate information.

    I mean, look at what they are doing to people now with that power, such as injecting banners and ads into HTML streams and other extra crap that actually creates MORE problems.

    With all due respect, traffic should be managed at the end points by the customer and the ISP.
  • Is it really? (Score:3, Interesting)

    by diamondmagic ( 877411 ) on Sunday June 15, 2008 @03:30PM (#23803057) Homepage
    What do we need new laws for? Most of the existing problems, false advertising or anti-competitive behavior, could be solved with existing laws, if the right people would bother using them. Only if those attempts fail will we need new laws.

    If all else fails, we simply need competition; look at what Verizon FiOS has done.
  • by kandresen ( 712861 ) on Sunday June 15, 2008 @03:43PM (#23803169)
    If only ISPs had offered their true bandwidth limits, latency limits, and so on from the beginning, and not false offers like "unlimited".

    I have always had a throttled connection - I used to be throttled at 256kbps down and 56kbps up.
    Then I paid more, and with the exact same connection I got 512kbps down and 128kbps up.
    Then I got a better service, and with the exact same connection I got 2Mbps down and 512kbps up.

    They have throttled the connection all the time. The total use is irrelevant. What matters is whether all users use the bandwidth at the same time or not.

    The providers could simply offer what they can actually deliver - not what they can deliver under the assumption that we will only use 0.1% of it - and let us actually use what we buy.

    What is worse for the ISP:
    - if you download 2 GB a day (~60 GB a month) spread out evenly (continuously ~185 kbps)
    - if you download 0.5 GB only during peak hours, one hour a day (~15 GB a month) (continuously ~1111 kbps)

    What happens if the bandwidth is not used? Does the ISP lose anything? It is their ability to provide to multiple people at the same time that matters; it is clearly worse for the ISP in the second case, where one person downloads only 15 GB a month, than in the first, with 60 GB.

    The entire issue could be resolved by ISPs offering the valid numbers for upward and downward bandwidth and expected latency for the connection.
    Don't blame the customers for using what they paid for.
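    The two cases above, checked numerically (taking 1 GB = 10^9 bytes):

    ```python
    GB = 1e9  # bytes

    evenly = 2 * GB * 8 / 86_400   # 2 GB/day spread over 24 hours, in bit/s
    peak = 0.5 * GB * 8 / 3_600    # 0.5 GB squeezed into one peak hour

    print(f"spread out: {evenly / 1e3:.0f} kbps")  # ~185 kbps
    print(f"peak hour:  {peak / 1e3:.0f} kbps")    # ~1111 kbps
    # The smaller monthly total is the heavier load exactly when it hurts.
    ```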
  • Richard and I got into a Net Neutrality 'discussion' in the comment section of Techdirt last year. I have a feeling he somehow benefits from the pro net neutrality side of the debate, although I have no proof. http://www.techdirt.com/articles/20070319/121200.shtml [techdirt.com] Judge for yourself. I did turn into a screaming little douche at the end, though... but it was for the Love of a Free Internet.
  • like this guy, i have been a staunch defender of net neutrality in the small forums i run and in my friend circle, but given the right price, say, like 200 bucks or 250, i can invent many reasons why we should hand the fate of freedom of information for billions to the hands of verizon, comcast, at&t et al. i may be cheap, but i talk much.

    NOT.

  • Everything the government touches turns to shit. It's like that guy in the Skittles commercial, but with little rabbit turds instead. If the government had been making technology decisions twenty years ago, we would all be stuck on ISDN. Net Neutrality assumes a static technological world that only changes in predetermined ways.

    People like to pretend that the only thing wrong with government is that the right people are not in charge. But that's fantasy. Obama can no more write a routing protocol than McC
    • by Daniel Dvorkin ( 106857 ) * on Sunday June 15, 2008 @10:04PM (#23805703) Homepage Journal
      If the government had been making technology decisions twenty years ago, we would all be stuck on ISDN.

      Twenty years ago, the government was making technology decisions about something called ARPAnet. Typical stupid, wasteful government program that never went anywhere, of course. Fortunately, private enterprise led the way with bold, innovative, paradigm-breaking optimized synergies, which is why we can now have this kind of discussion here on the CompuServe forums!
