The Internet | Your Rights Online

Net Neutrality vs. Technical Reality (251 comments)

penciling_in writes "CircleID has a post by Richard Bennett, one of the panelists in the recent Innovation forum on open access and net neutrality — where Google announced their upcoming throttling detector. From the article: 'My name is Richard Bennett and I'm a network engineer. I've built networking products for 30 years and contributed to a dozen networking standards, including Ethernet and Wi-Fi. I was one of the witnesses at the FCC hearing at Harvard, and I wrote one of the dueling op-eds on net neutrality that ran in the Mercury News the day of the Stanford hearing. I'm opposed to net neutrality regulations because they foreclose some engineering options that we're going to need for the Internet to become the one true general-purpose network that links all of us to each other, connects all our devices to all our information, and makes the world a better place. Let me explain ...' The article offers great insight for anyone for or against net neutrality."
This discussion has been archived. No new comments can be posted.

  • by Marcion ( 876801 ) on Sunday June 15, 2008 @02:32PM (#23801969) Homepage Journal
    Since the Google throttling detector does not yet exist, does any bright spark know how to achieve the same result using software that already exists?
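
    One rough approach with nothing but the Python standard library: fetch the same test file in two different ways and compare sustained throughput. The URLs and ports below are placeholders rather than a real measurement service, so treat this as a sketch of the idea, not a finished tool:

        # Hypothetical throughput comparison: if the same bytes move much more
        # slowly when fetched one way than another, something on the path is
        # probably shaping them. The URLs below are placeholders.
        import time
        import urllib.request

        TEST_URLS = {
            "plain HTTP, port 80": "http://example.com/testfile.bin",
            "same file, port 8080": "http://example.com:8080/testfile.bin",
        }

        def measure(url, chunk=65536):
            start = time.time()
            total = 0
            with urllib.request.urlopen(url) as resp:
                while True:
                    data = resp.read(chunk)
                    if not data:
                        break
                    total += len(data)
            return total / (time.time() - start) / 1024  # KiB/s

        for label, url in TEST_URLS.items():
            try:
                print(f"{label}: {measure(url):.0f} KiB/s")
            except OSError as err:
                print(f"{label}: failed ({err})")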
  • by A beautiful mind ( 821714 ) on Sunday June 15, 2008 @02:39PM (#23802039)
    ...or some such. Because those don't work on the scale of an ISP. It's simply much cheaper to add more bandwidth than try to manage things with QoS.
  • by Anonymous Coward on Sunday June 15, 2008 @02:48PM (#23802127)
    1) ISPs are simply oversubscribing, betting on people not using the bandwidth they are paying for.

    2) Throttling is one thing; what Comcast was doing was essentially criminal. They were hijacking the communications and injecting malicious resets or other packets to kill a connection (a detection sketch follows below).

    3) If they just implemented QoS properly, then things like VoIP and IPTV would work just fine.
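
    A minimal sketch of how one might watch for the kind of forged resets described in point 2, assuming the scapy packet library is installed and you can sniff as root; the port range is just the traditional BitTorrent range, not anything specific to one ISP:

        # Minimal sketch: log TCP RST packets seen on BitTorrent-ish ports.
        # A burst of resets that neither endpoint remembers sending is the
        # classic signature of in-path reset injection. Needs root + scapy.
        from scapy.all import sniff, IP, TCP

        def report_rst(pkt):
            if pkt.haslayer(TCP) and pkt[TCP].flags.R:
                print(f"RST {pkt[IP].src}:{pkt[TCP].sport} -> "
                      f"{pkt[IP].dst}:{pkt[TCP].dport} ttl={pkt[IP].ttl}")

        # 6881-6889 are the traditional BitTorrent ports; adjust to taste.
        sniff(filter="tcp and portrange 6881-6889", prn=report_rst, store=0)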
  • by Yxven ( 1100075 ) on Sunday June 15, 2008 @02:57PM (#23802187)
    I think the article has some valid points regarding the technical aspects of the Internet, but I don't understand why those aspects make net neutrality legislation a bad thing. My understanding of net neutrality is that people want the Internet to remain neutral. They do not want providers to charge favorable rates to their friends and extortionate rates to their competitors. They do not want small ISPs forced out of the market. They do not want websites and users to be double-charged for the same use. I don't see how any of these issues are technical. I don't see how legislation that would keep things fair would also eliminate an ISP's ability to improve the performance of jitter-sensitive applications alongside jitter-insensitive ones. I mean, you could argue that it'd be legislated wrong, and you'd probably be right. But from a technical standpoint, assuming it's legislated correctly, why is net neutrality technically impossible? Or am I completely misunderstanding the net neutrality issue?
  • Re:Multicast? (Score:5, Interesting)

    by niceone ( 992278 ) on Sunday June 15, 2008 @03:06PM (#23802275) Journal
    He completely ignores multicast in the paragraph about HDTV being trouble for the Internet, and someone should at least explain why it's not relevant. Otherwise it kind of sinks his battleship w/r/t that argument, IMO.

    Multicast only works if internet TV is going to be like regular TV, where a show is aired at a particular time. If it's going to be more like YouTube on steroids, multicast doesn't help.
  • by Skinkie ( 815924 ) on Sunday June 15, 2008 @03:21PM (#23802407) Homepage

    The Internet is non-neutral with respect to applications and to location, but it's overly neutral with respect to content, which causes gross inefficiency as we move into the large-scale transfer of HDTV over the Internet.
    Unless some people finally get their managers to sign off on deploying multicast on every medium they manage, I totally agree about the inefficiency.
  • Confused? (Score:1, Interesting)

    by Sniper98G ( 1078397 ) on Sunday June 15, 2008 @03:32PM (#23802513)
    I think this guy is confused about what most net neutrality advocates are trying to achieve. We don't want to say that you can't give voice packets priority. We are trying to ensure that all packets of the same type receive the same quality of service; that certain people don't receive better service while the rest of us get shoved into the slow lane.
  • by Anonymous Coward on Sunday June 15, 2008 @03:35PM (#23802541)

    Throttling is one thing; what Comcast was doing was essentially criminal. They were hijacking the communications and injecting malicious resets or other packets to kill a connection.

    What concerns me is what happens if governance systems move to the internet [wikipedia.org]. Even if it is just for online voting -- who will keep the ISPs from manipulating the governmental processes?

    In any event, it is good to know that open source governance [wikipedia.org] is trying to muscle in on the action. At least the I.T. departments of the ISPs should be in favor of "open sourcing" the government, right?

  • by niceone ( 992278 ) on Sunday June 15, 2008 @03:36PM (#23802549) Journal
    Or am I completely misunderstanding the net neutrality issue?

    No, it seems to me you understand it perfectly. However, TFA seems to be blurring the lines between net neutrality and treating traffic differently. For instance, if it were technically necessary to treat all voice packets as high priority (it seems it isn't, as VoIP already works, but for the sake of argument), then there's nothing to stop a standard being agreed and implemented on a neutral internet, just so long as the voice packets are treated the same no matter who is sending and receiving them.
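
    For what it's worth, the packet-level piece of such a standard already exists in the form of DiffServ code points. A toy illustration (Python, Linux-style socket options; the target address is a placeholder) of an application marking its own voice packets Expedited Forwarding -- whether any network along the path honors the mark is exactly the neutrality question:

        # Toy example: a UDP "voice" socket marking its packets with DSCP EF
        # (46), the DiffServ class conventionally used for low-latency voice.
        # Routers are free to honor, ignore, or rewrite the mark.
        import socket

        DSCP_EF = 46              # Expedited Forwarding
        TOS_VALUE = DSCP_EF << 2  # DSCP sits in the top 6 bits of the TOS byte

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

        # A 20 ms G.711 frame is 160 payload bytes; the target is a placeholder.
        sock.sendto(b"\x00" * 160, ("192.0.2.10", 5004))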
  • by Jah-Wren Ryel ( 80510 ) on Sunday June 15, 2008 @03:37PM (#23802561)

    The Internet's congestion avoidance mechanism, an afterthought that was tacked-on in the late 80's, reduces and increases the rate of TCP streams to match available network resources, but it doesn't molest UDP at all.
    One very important point here is that this 'afterthought' in TCP works at the end-points. The network remains dumb; it is the end-points that decide how to do congestion management.

    Wasn't that what Asynchronous Transfer Mode (ATM) was supposed to address?
    Good point. ATM died because the benefits weren't worth the costs (much more complex hardware all around, never mind the protocol stacks).

    A related point that seems to run through the article is that more bandwidth is not the solution. But he doesn't explain why - for example

    This problem is not going to be solved simply by adding bandwidth to the network, any more than the problem of slow web page loading was solved that way in the late 90's or the Internet meltdown problem disappeared spontaneously in the 80's. What we need to do is engineer a better interface between P2P and the Internet, such that each can share information with the other to find the best way to copy desired content.
    In the first case I think he's completely wrong; more bandwidth is exactly what solved the problem, both in the network and in applications' use of that bandwidth (Netscape was the first to do simultaneous requests over multiple connections, which did not require any protocol changes). In the second case, he's talking about Bob Metcalfe (the nominal inventor of Ethernet and nowadays a half-baked pundit) predicting a "gigalapse" of the Internet specifically due to a lack of bandwidth...

    It's interesting to note that AT&T themselves have declared more bandwidth to be the solution. They didn't phrase it quite that way, but ultimately that's the conclusion an educated reader can draw from their research results: achieving the throughput of 1x bandwidth on a 'managed network' takes roughly 2x the bandwidth on a 'neutral network', etc. Sounds like a lot, but then you realize that bandwidth costs are not linear, nor are management costs. In fact, they tend to operate in reverse economies of scale - bandwidth gets cheaper the more you buy (think of it as complexity O(x+n) due to fixed costs and the simple 1-to-1 nature of links), but management gets more expensive the more you do it, because the 1-to-1 nature of links gets subsumed by having to manage the effects of all connections on each other, n-to-n style, for O(x+n^2). Ars Technica analysis of the AT&T report [arstechnica.com]
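
    A back-of-the-envelope sketch of that scale argument, with invented cost constants, assuming bandwidth cost grows roughly linearly with the number of flows while management cost grows with the square of the flows being coordinated:

        # Illustrative only: made-up constants comparing "buy 2x bandwidth"
        # against "manage 1x bandwidth" as the number of concurrent flows grows.
        FIXED = 1000.0     # fixed cost per link (same for both strategies)
        BW_UNIT = 1.0      # cost per flow of raw capacity
        MGMT_UNIT = 0.001  # cost per flow-pair of managing interactions

        def overprovision_cost(n):
            return FIXED + 2 * BW_UNIT * n                   # ~O(n)

        def managed_cost(n):
            return FIXED + BW_UNIT * n + MGMT_UNIT * n * n   # ~O(n^2) term

        for n in (100, 1_000, 10_000, 100_000):
            print(n, round(overprovision_cost(n)), round(managed_cost(n)))
        # With these invented numbers, management wins at small n, breaks even
        # around n = 1000, and loses badly after that -- the shape of the
        # argument, not a real cost model.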
  • What crap (Score:5, Interesting)

    by Anonymous Coward on Sunday June 15, 2008 @03:41PM (#23802593)
    "I know that's not true. The Internet has some real problems today, such as address exhaustion, the transition to IPv6, support for mobile devices and popular video content, and the financing of capacity increases. Network neutrality isn't one of them."

      The effen telcos already got paid 200 billion dollars to do something about getting fiber to the premises and blew it on anything but that. Where's the "political engineering" solution to look into that and determine where the "QoS" broke down at ISP intergalactic central? Where are the ISP and telco fatcats sitting in front of congressional hearings explaining what happened to all that freakin' money? Where did it go -- real facts, real names, real figures.

      And why in the hell does the bulk of the public airwave spectrum only go to the same billion-dollar corporations, year after decade after generation, instead of being turned loose for everyone (you know, that "public" guy) to use and develop on? Why the hell do we even *need* ISPs anymore, for that matter? This is the 21st century; there are tons of alternative ways to move data other than running it through ISP and telco profitable choke points, and all I am seeing is them scheming on how to turn the internet into another bastardized combination of the effen telco "plans" and cable TV "plans". Really, what for?

        Where's the mesh networking using long-range free wireless and a robust 100% equal client/server model that we could be using, instead of being forced through the middleman of ISPs and telcos for every damn single packet? And what mastermind thought it was a good idea to let them wiggle into the content business? That's a big part of the so-called problem there: they want to be the tubes plus be the tube contents, and triple-charge everyone -- get paid at both ends of the connection plus a middleman handling fee for... I don't know, but that is what they are on the record wanting, and industry drools like this doofus are providing their excuses. Not content with hijacking all the physical wired reality for 100 years now, they get to hijack all the useful wireless spectrum, and no, WIFI DOESN'T CUT IT. That's at the big fat joke level in the spectrum for any distance.
  • by arth1 ( 260657 ) on Sunday June 15, 2008 @03:50PM (#23802657) Homepage Journal
    QoS doesn't work well because it can only be implemented in a few ways:

    1: By discarding any QoS information in the packet as it crosses your perimeter, and replacing it based on a guess done by deep packet inspection. Not only is this modifying data that wasn't meant to be modified, and thus legally no different from the dubious practice of rewriting HTML pages to show your own ads, but it also opens the question of whether you can claim to be a common carrier as long as you open every envelope to look at the first few lines of every letter. Never mind the extra latency and routing costs.

    2: By accepting already existing QoS values at face value. While this might have worked 30 years ago, it will not work where there are commercial interests. Every spammer and spitter will prioritize his own packets as high as they can go, no matter what the consequences are to other traffic.

    3: A combination of 1 and 2, where deep packet inspection assigns QoS priorities on packets that don't already have them. This is the worst of both worlds, and only an idiot would do such a thing, so this is what's generally happening out in the real world.
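
    A toy sketch of option 3 (trust existing marks, guess the rest), with a hard-coded port table standing in for real, far more invasive deep packet inspection; the DSCP values are the standard ones, everything else is invented:

        # Toy remarker for option 3: keep a packet's DSCP if it already has
        # one, otherwise guess a class from the destination port.
        GUESS_BY_PORT = {5060: 46, 5061: 46,  # SIP signalling -> EF
                         80: 18, 443: 18,     # web -> AF21
                         6881: 8}             # BitTorrent -> CS1 (scavenger)
        DEFAULT_DSCP = 0                      # best effort

        def remark(dscp, dst_port):
            if dscp != 0:
                return dscp                   # face value: spammers love this branch
            return GUESS_BY_PORT.get(dst_port, DEFAULT_DSCP)

        print(remark(0, 5060))    # unmarked SIP gets EF (46)
        print(remark(46, 6881))   # self-marked BitTorrent keeps its bogus EF mark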

  • Re:Multicast? (Score:3, Interesting)

    by Darkness404 ( 1287218 ) on Sunday June 15, 2008 @04:01PM (#23802783)
    I don't get how it would work for two people to watch the same video simultaneously without A) depriving Google of hits, thereby decreasing ad profit, B) ignoring cookies, or C) invading privacy. For example, how would ads work? When I go to YouTube to watch a video (having disabled AdBlock and my /etc/hosts blocking), the ad server sees that I am *insert IP address here*, and Google can charge the maker of the ad, say, $.01 per view, so Google gets a penny richer and the company gets a penny poorer. But if I instead get the video from what I can assume to be the ISP's servers, it ignores or displays the ad data without giving Google the stats needed to collect the money. So if I see the ad, Google doesn't get the $.01 and the company gets a free ad. I just don't think this can work without Google or other ad companies complaining about lost revenue, and unlike AdBlock this would be widespread.
  • Re:Multicast? (Score:3, Interesting)

    by Antity-H ( 535635 ) on Sunday June 15, 2008 @04:04PM (#23802833) Homepage
    That is not a problem in itself: you are already used to waiting while the system buffers the stream. If multicast allows more efficient management of the bandwidth, all you have to do is schedule sessions every 30 seconds, or every, say, 50 users, and start the multicast.

    This should already help, right?
  • Re:Multicast? (Score:4, Interesting)

    by cnettel ( 836611 ) on Sunday June 15, 2008 @04:05PM (#23802835)
    No, but you can do more complex scenarios. Let's say that we pipe the first sixty seconds through unicast. If the bandwidth of your end pipe is really four times the video rate, you could pick up a continuous multicast loop that is anywhere within three minutes of the start, and then just keep loading from that one, buffering locally. You need your local pipe to be wide enough that you can buffer up material while playing the current part, but even if the multicast is just done at realtime video speed, and there is a single one looping continuously, you should expect, on average, to be able to switch from unicast to the multicast feed after half that time.

    If you want on-demand, and NO local storage, then you are indeed in trouble.
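
    A back-of-the-envelope version of that scheme, with invented numbers (a 600-second video, access pipe at four times the playback rate):

        # Rough arithmetic for the unicast-prefix + looping-multicast scheme
        # described above. Playback rate = 1 unit/s, pipe = 4 units/s, one
        # multicast copy loops the whole video continuously. Invented figures.
        VIDEO_LEN = 600.0   # seconds of content
        PIPE = 4.0          # access capacity, in multiples of realtime
        SPARE = PIPE - 1.0  # capacity left after receiving the multicast loop

        # If you tune in when the loop is 'missed' seconds past the start,
        # that prefix has to come by unicast, occupying the unicast path for:
        for missed in (0.0, 150.0, 300.0, 600.0):
            print(f"missed {missed:5.0f}s of the loop -> "
                  f"{missed / SPARE:5.0f}s of unicast")

        # Averaged over a random join point, only (VIDEO_LEN/2)/SPARE seconds
        # of unicast are needed; the rest rides the shared multicast.
        print(f"average unicast time: {(VIDEO_LEN / 2) / SPARE:.0f}s "
              f"out of a {VIDEO_LEN:.0f}s video")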

  • by Kohath ( 38547 ) on Sunday June 15, 2008 @04:10PM (#23802873)
    Why wouldn't you use or discard the QoS information based on the source and/or destination of the packets?

    If my company wants to use VOIP telephony between our branch offices and we want to pay extra for it to actually work right, but we don't want fully-private lines because it's wasteful and more expensive, then an ISP could offer us QoS on that basis. But they don't.
  • Re:Multicast? (Score:3, Interesting)

    by Skinkie ( 815924 ) on Sunday June 15, 2008 @04:20PM (#23802977) Homepage

    I don't get how it would work for 2 people to watch the same video simultaneously without A) depriving Google of hits thereby decreasing profit by ads B) Ignoring cookies C) Invading privacy.
    Player A uses a multicast-capable flash video tool.
    Player A requests a video using this tool, and subscribes to a multicast stream that is returned by the server.
    Player A is watching; the stream starts from 0.

    Player B uses the same flash video tool.
    Player B requests a video using this tool, and subscribes to the existing multicast stream, plus a new one starting from 0.
    Player B now receives the data that is being transmitted for player A, and the new data starting from 0.
    Player B is watching, using the available streams on the network.

    Now you could even implement this so that if someone skips to another position, it influences the other players ;) So you see that the actual request is still made; the 'flash' app that downloads it just receives the network traffic as multiple streams.
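
    For anyone who hasn't touched it, the receiver side of such a scheme is genuinely cheap. A minimal sketch of subscribing to an IP multicast group with the Python standard library (the group address and port are placeholders, not a real service):

        # Minimal multicast receiver: join a group and read datagrams from it.
        # 239.0.0.0/8 is administratively scoped multicast space; the address
        # and port here are placeholders.
        import socket
        import struct

        GROUP, PORT = "239.1.2.3", 5004

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))

        # Tell the kernel (and, via IGMP, the upstream routers) that we want
        # traffic for this group delivered to us.
        mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                           socket.inet_aton("0.0.0.0"))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

        while True:
            data, sender = sock.recvfrom(2048)
            print(f"{len(data)} bytes from {sender}")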
  • by bruce_the_loon ( 856617 ) on Sunday June 15, 2008 @04:21PM (#23802989) Homepage

    Yes. And? So grabbing a huge file off of the server next to me is more efficient than a VOIP call to Tokyo. I'm not seeing the problem yet.

    The problem is subtle, and I've only seen it now that I've read TFA, although I've experienced it with our internet connection at work.

    The sliding-window mechanism (keep sending packets before the previous ones are acknowledged, and back off when you detect loss) has an unpleasant side-effect. An ACK train coming back over three hops from local P2P clients or ISP-based servers moves faster than one heading across the world over 16 hops with higher ping times. Therefore the sliding window opens wider and the traffic over the three hops can dominate the link (rough numbers at the end of this comment).

    Now combine that with the BitTorrent clients, reported on earlier, that try for maximum bandwidth. That can force the window even wider.

    And once the DSLAM/switch/aggregation port is saturated with traffic, it will delay or drop packets. If those are ACKs from the other side of the world, that window closes up further. There goes the time-sensitive nature of VoIP down the toilet.

    On a shared-media network like cable, it doesn't even have to be you. If two people on the same cable segment are P2P transferring between each other, there goes the neighborhood. They dominate the line, and the chap down the road only using Skype wonders why he isn't getting the performance he needs.

    I'm opposed to price-oriented non-neutral networks -- your ISP charging Google for your high-speed access to them. But a non-neutral network that does proper QoS by throttling bandwidth-heavy protocols that don't behave themselves on the network is acceptable, as long as the QoS only moves the throttled protocols down when needed.
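
    The effect falls out of the basic relation throughput ~= window / RTT. A toy comparison with round, made-up numbers rather than measurements:

        # Toy illustration of the RTT effect described above: with the same
        # window of data in flight, the short path simply completes many more
        # round trips per second. Invented round figures, not measurements.
        WINDOW = 64 * 1024   # bytes in flight per round trip

        paths = {"3 hops, local peer": 0.010,                # 10 ms RTT
                 "16 hops, other side of the world": 0.250}  # 250 ms RTT

        for name, rtt in paths.items():
            mbps = WINDOW / rtt * 8 / 1_000_000
            print(f"{name}: about {mbps:.1f} Mbit/s per connection")
        # ~52 Mbit/s vs ~2 Mbit/s: on a saturated uplink the local flows win,
        # and the far-away VoIP or web traffic feels it first.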

  • Is it really? (Score:3, Interesting)

    by diamondmagic ( 877411 ) on Sunday June 15, 2008 @04:30PM (#23803057) Homepage
    What do we need new laws for? Most of the existing problems, like false advertising or anti-competitive behavior, could be solved with existing laws if the right people would bother using them. If and only if those attempts fail will we need new laws.

    If all else fails, we simply need competition; look at what Verizon FiOS has done.
  • by kandresen ( 712861 ) on Sunday June 15, 2008 @04:43PM (#23803169)
    If only ISPs had offered their true bandwidth limits, latency limits, and so on from the beginning, instead of false offers like "unlimited".

    I have always had a throttled connection - I used to be throttled at 256 kbps down and 56 kbps up.
    Then I paid more and, with the exact same connection, got 512 kbps down and 128 kbps up.
    Then I got a better service and, with the exact same connection, got 2 Mbps down and 512 kbps up.

    They have been throttling the connection all along. The total use is irrelevant; what matters is whether all users use the bandwidth at the same time or not.

    The providers could simply offer what they can actually deliver, rather than numbers that assume we will only ever use 0.1% of it, and let us actually use what we buy.

    Which is worse for the ISP:
    - downloading 2 GB a day (~60 GB a month) spread out evenly (a continuous ~185 kbps), or
    - downloading 0.5 GB a day only during the one peak hour (~15 GB a month, but ~1110 kbps for that hour)?

    What happens if the bandwidth is not used? Does the ISP lose anything? It is their ability to serve multiple people at the same time that matters; the second case, where one person downloads only 15 GB a month, is clearly worse for the ISP than the first with 60 GB.

    The entire issue could be resolved by ISPs publishing the real numbers for upstream and downstream bandwidth and the expected latency of the connection.
    Don't blame the customers for using what they paid for.
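
    The arithmetic behind those two cases, spelled out (decimal gigabytes; nothing here beyond the figures in the comment above):

        # The two usage patterns above, reduced to numbers.
        BITS_PER_GB = 8_000_000_000   # decimal gigabytes

        def kbps(gigabytes, seconds):
            return gigabytes * BITS_PER_GB / seconds / 1000

        steady = kbps(2, 24 * 3600)   # 2 GB spread over a whole day
        peak = kbps(0.5, 3600)        # 0.5 GB crammed into one peak hour

        print(f"steady user: ~{steady:.0f} kbps all day, ~60 GB/month")
        print(f"peak-hour user: ~{peak:.0f} kbps in the busy hour, ~15 GB/month")
        # The peak-hour user needs roughly six times the capacity at exactly
        # the moment the network is most loaded, despite moving a quarter of
        # the data.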
  • by ScaredOfTheMan ( 1063788 ) on Sunday June 15, 2008 @04:53PM (#23803233)
    Richard and I got into a Net Neutrality 'Discussion' in the comment section of Techdirt last year. I have a feeling he somehow benefits from the pro-net-neutrality side of the debate, although I have no proof. http://www.techdirt.com/articles/20070319/121200.shtml [techdirt.com] Judge for yourself. I did turn into a screaming little douche at the end, though... but it was for the Love of a Free Internet.
  • by Stellian ( 673475 ) on Sunday June 15, 2008 @04:55PM (#23803243)
    Just forget about multicast; it's a dead-end idea. Not because it's technically flawed (actually, it works pretty nicely), but because it ignores economics.
    A simplified economic model of the Internet calls for multiple levels of service providers that sell bandwidth to each other. So I, as your ISP / backbone provider, make money in proportion to the bandwidth you use. I have the option of enabling a technology that allows you to be more efficient, use less bandwidth, and therefore pay me less. Meanwhile, this technology offers no benefit to me; in fact it costs me money, the money needed to implement and manage it.
    To add insult to injury, this technology works properly only if all the hops between you and your destination have deployed it correctly. So a bunch of telcos whose primary business is selling bandwidth must jump through hoops to make your data transfer more efficient. No, it's not gonna happen.
    To be successful, multicast must be completely redesigned from an economic perspective so as to provide an immediate benefit to the provider that deploys it (if that is at all possible), without reducing its revenue potential.
  • I do have some old experience, and I see some BS in the phrases he uses.

    The Internet's traffic system does not give preferential treatment to short/fast communication paths unless you are stupid enough to configure your network/telecommunications backbone architecture around S/FPF (shortest/fastest path first) rather than routing on QoS metrics and implied content criticality. TCP is ignored by the backbone; it is part of the package and cannot route. Only the IP part carries the destination/route information used for packet switching. ATM cell switching is another backbone technology, and (yes) both are, or can be, used at the business-office LAN/WAN level.

    The technical term "round-trip time effect" is semantics. Critical content delivery requires TCP/IP, not timing, while a protocol like UDP is important for real-time/streaming content (VoIP/VTC/...). UDP packets (no need to manage them) that are dropped or corrupted cannot be recovered, but TCP/IP has a process for recovering dropped/corrupted packets. UDP is a good, fast protocol on LANs and for multimedia/broadcast (it can cause jitter/distortion), but it is not appropriate for email or downloads of large/critical files across the internet, because the complete email/file would then require another complete send/download. A lower RTT is not always what matters for TCP/IP traffic (assured content delivery is what's critical), while faster UDP delivery of more traffic is great for VoIP, streaming MP* files, and so on.

    IOW: bandwidth and QoS are best kept net-neutral, and the CableCo (or whichever IAP) needs to invest in its infrastructure and innovation, not screw its customers with bullshit/legislation. Oh, and some folks (like me) consider infrastructure "IAP" access providers (CableCo/TelCo) different from "ISP" content/service providers (Google, Yahoo, MSN, /., SecondLife, Wired, PBS ...). Letting either IAPs or ISPs control everything means corporate-welfare monopolies or worse, and will never provide innovation or QoS improvements. We already pay for bandwidth access and QoS, and don't need more bullshit about what causes jitter (lack of reinvestment) or more UDP bullshit.

    VoIP functions best when it receives an uninterrupted stream of packets, but the reality is that VoIP was meant to function acceptably for voice communications, and when adequate bandwidth is provided, VoIP delivers an acceptable phone conversation. VoIP (the protocol) does not (as best I know) give a shit about consistent gaps ... for the voice conversation it would be nice, but the answer is bandwidth investment and/or truth in advertising (VoIP can get crappy due to limited-bandwidth or mother-nature problems).

    File transfer (FTP) applications simply care about the time between the request for the file and the time the last bit is received; if the file is corrupted, then you (or the application) make another FTP request for a clean, usable file. In between, it doesn't matter whether the packets are timed by a metronome or arrive in clumps, TFTP included. Jitter is the engineering/common term for variation in delay; when data ends up corrupted/unrecoverable it causes voice/video/content distortion.

    Asynchronous Transfer Mode (ATM, cell switching) does manage both bandwidth and QoS, far better than packet switching, and is great for VoIP/VTC ....

    The Internet is neutral with respect to applications and to location ... the content provider/customer is paying for the bandwidth and QoS; so how and what they use to send and receive content is no damn business of any CableCo/TelCo/... IAP, which is being paid to provide QoS access to bandwidth for the content-sharing industry and its home/business customers.

    The Internet is not neutral with respect to QoS bandwidth ... if you cannot provide it, then content/service providers and their customers can use a different IAP ... if there is another IAP in their IAP's access area. Stupid IAP investment and poor i
  • Re:Multicast? (Score:3, Interesting)

    by CopaceticOpus ( 965603 ) on Sunday June 15, 2008 @05:39PM (#23803613)
    More and more bandwidth providers are switching to charging based on usage rather than a flat rate for access. If this trend continues, multicast could become very attractive.

    Suppose you have two ways to watch shows: one is on-demand, click-and-get-it-this-second access. This option will never go away, but you can expect to be charged full bandwidth price for this option. The second choice is to select a few shows ahead of time. You would then subscribe to the multicast broadcast (which might be repeated every couple of hours), download the show to your local cache, and watch it at your convenience. Your bandwidth provider would reward you for the small effort of planning ahead by not charging you for the transfer, or only charging you a small fraction of the regular rate.

    In theory, this could allow greater utility from the existing network capacity, and bring down costs for everyone.
  • by OldHawk777 ( 19923 ) * <oldhawk777&gmail,com> on Sunday June 15, 2008 @06:00PM (#23803771) Journal
    Consider telecommunications infrastructure "IAP" access (CableCo/TelCo) providers different from the "ISP" content/services (Google, Yahoo, MSN, /., SecondLife, Wired, PBS ...) providers.

    QoS bandwidth delivered by IAPs has, in the past, been found to be very questionable by those ISP customers who wanted to confirm that they were indeed receiving the QoS bandwidth for which they contracted and paid. The typical home/business user is in the business of trusting their IAP, not verifying QoS and bandwidth, which would be too complicated (for small-business and private users) and cost them too much.

    Letting either IAPs or ISPs control everything will never provide innovation or QoS improvements. We already pay for QoS bandwidth access, and don't need more bullshit about what causes jitter/UDP bullshit. Almost all Internet bandwidth problems are caused by a lack of reinvestment in infrastructure by the IAPs.
  • by OldHawk777 ( 19923 ) * <oldhawk777&gmail,com> on Sunday June 15, 2008 @06:13PM (#23803881) Journal
    Comcast is a cable TV company that supports net-nepotism: it is both an IAP (primarily) and an ISP (secondarily), and ending net neutrality would expand monopoly-like powers over the USA Internet for IAPs like Comcast, but would not improve QoS bandwidth for urban and rural communities, small businesses, or citizens at home.

    Innovation requires investment and reinvestment ... and the IAPs do not appear to have any great interest in expensive innovation/infrastructure investments that provide QoS bandwidth increases at capitalist "open market" competitive prices for everyone in the USA.
  • by Brett Glass ( 98525 ) on Sunday June 15, 2008 @08:21PM (#23804687) Homepage
    (That's more than 50 per state, so if you don't patronize one, it's not their fault.) That's hardly a duopoly situation. However, independent ISPs often pay more for bandwidth than the cable and telephone monopolies. Some pay as much as $300 per megabit per second per month for their backbone connections. They are thus even more susceptible to being harmed if greedy content providers -- such as Vuze -- siphon off their bandwidth using P2P, or if bandwidth hogs overrun their networks. So, the issue is not one of duopoly, nor is it one of greed on the part of the providers. (Many of them are just scraping by.) Rather, it's greed on the part of some bandwidth hogging users (5% use 80% of the bandwidth) and on the part of content providers which use P2P to avoid paying the freight for delivering their content to users. See http://www.brettglass.com/FCC/remarks.html [brettglass.com] for more on this issue.
  • by dpilot ( 134227 ) on Sunday June 15, 2008 @10:15PM (#23805409) Homepage Journal
    Unfortunately it IS that companies can't be trusted. We've adopted the meme that companies are responsible ONLY for returning stockholder value, within the framework of the law. If the law doesn't require a common-carrier style Internet - if that law permits them to turn it into cable-tv-on-steroids, extracting maximum value from content providers and shutting small content out, they may well do it. If the extra revenue from the content providers is greater than the revenue loss from the few "net neutrality extremists" that leave, they will do it. Not only will they, but by today's corporate meme they MUST do it, because it makes more money and to maintain a neutral Internet would be fiscally irresponsible. Unless LARGE numbers of people are ready and willing to give up broadband, net neutrality legislation is the only thing that will save the Internet.
  • by loki_tiwaz ( 982852 ) on Sunday June 15, 2008 @11:25PM (#23805805)
    they should just quit offering unlimited data plans unless they can actually offer unlimited data. unlimited dialup is easy to provide for, as in a whole month a user can get at best theoretically about 12gb if they are continuously downloading at full speed.

    the real problem is that the marketing people are defining service options that the networks are not capable of supporting. some services are sold at a profit to support other services that aren't, which is fine in, for example, pre-packaged computer bundles, but because with internet service this affects everyone, this is the end result - isps don't have the ability to provide the level of service they advertise, so they must resort to throttling, which is of course done arbitrarily to certain kinds of traffic, as they are the biggest bandwidth users, rather than done generally.

    if isps just didn't spend so much time trying to hook those high-bandwidth users up, and priced service to them higher, then they could spend more money enhancing their bandwidth capacity instead of ending up having to explain why, and what traffic, they are shaping to keep use within the parameters of their networks.

    there are many factors here - how network applications are written, how various tcp/ip stacks schedule, how effective QoS systems are, and how widely they are deployed - but there is one guaranteed way to ensure networks aren't bogged down by bulk traffic and streaming users: always keep traffic levels below about half of capacity. the line might be rated to transport data at a certain speed, but when you fill that pipe past a certain point you wind up with a great deal of turbulence, which leads to latency issues (a rough illustration is at the end of this comment).

    it's a bit similar to mastering levels in audio engineering - sure, you may have 120 dB of resolution in your recording medium, but the closer you get to filling all that space the less headroom you have for periodic spikes, which has led commercial music engineering to use more dynamic range compression, which produces a much 'duller' sound with less dynamics (some even say that this compressed dynamic range fatigues the listener) - this problem never happened in cinema sound engineering because someone set a standard for how many dB of average power should be targeted in a mix. similarly, if the network provision industry would set a standard of aiming at around 50-60% average utilisation, and adjusted its planning for bandwidth upgrades and market penetration accordingly, none of this would be a problem.

    beancounters see the network capacity specification and expect that they can run the network at that level without any problems. But of course beancounters also rate the potential of a resource according to keeping customer turnover below a certain percentage, meaning they can cheapskate to some degree; and given that businesses care more about the bottom line than good service, this is the sort of issue that cannot be solved by anything other than legislation.

    i believe network neutrality as a concept misses the real point at issue here, which is simply businesses squeezing more money out of their lines than it is possible in real practice to allow, and pushing this limit just short of messing up the whole network. throttling bittorrent and streaming video is all about trying to hold back the flood of bandwidth demand so they can put off the upgrades for longer.

    there would not be a problem if they just didn't provide more bandwidth on the local loop than can be carried through the peering connections.
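
    a rough way to see why running links hot hurts latency is the textbook M/M/1 queueing approximation (a big simplification -- real traffic is burstier, which makes things worse, so this only shows the shape of the curve):

        # Textbook M/M/1 approximation of average delay versus utilisation.
        # Only meant to show the shape of the curve, not exact numbers.
        SERVICE_TIME_MS = 1.0   # average time to put one packet on the wire

        def avg_delay_ms(utilisation):
            return SERVICE_TIME_MS / (1.0 - utilisation)

        for rho in (0.3, 0.5, 0.7, 0.9, 0.95, 0.99):
            print(f"utilisation {rho:4.0%}: "
                  f"average delay ~{avg_delay_ms(rho):6.1f} ms")
        # Delay roughly doubles between 50% and 75% load and blows up past
        # 90%, which is the engineering reason for a 50-60% planning target.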
  • by nuintari ( 47926 ) on Sunday June 15, 2008 @11:57PM (#23805971) Homepage

    I'm opposed to price-oriented non-neutral networks -- your ISP charging Google for your high-speed access to them. But a non-neutral network that does proper QoS by throttling bandwidth-heavy protocols that don't behave themselves on the network is acceptable, as long as the QoS only moves the throttled protocols down when needed.

    Thank You!

    I work for an ISP, and net neutrality scares the hell out of me. We do not want to, and will not, throttle back certain sites that won't pay us for premium access, or create a tiered pricing structure for our customers. What I want is the right to manage my network to give my customers the best performance by de-prioritizing badly written and poorly behaving protocols, AKA 99% of all P2P stuff.

    We also don't want to see content providers shift their bandwidth costs onto the ISP networks via the use of P2P. Why pay for expensive backbone links when you can shove 50% or more of your bandwidth onto your customers and their provider's network? Either let us ISPs manage our networks, or we will start charging for upload bandwidth on a usage basis. I really don't want to do this, but if net neutrality becomes a reality, I see this becoming a very popular way to save on bandwidth costs. Blizzard already does it: patches for World of Warcraft are distributed via BitTorrent. Why they think it is appropriate for their service to be offloaded onto my network is beyond me, but they do. When I can't rate-limit BitTorrent and it becomes a huge bandwidth hog, my customers who patronize the services that are the source of the problem will see their bills go up.

    Thank you, I finally read a post from someone who gets it. I didn't think that would ever happen.

    Oh, and any replies to the effect of "well, it's your own fault for not having enough bandwidth" can just go eat a dick. I have bandwidth, and that is not the point. The point is that content providers should provide their own bandwidth, not leech it from the ISPs in the name of the heavenly, super great, don't-ever-question-it P2P software demi-god.

    Man, I got way off target there.
  • by WolfWithoutAClause ( 162946 ) on Monday June 16, 2008 @08:16AM (#23808737) Homepage
    they'd have to make as much bandwidth as a customer bought available to them AT ALL TIMES

    You seem to be saying that if you have a 5M pipe, you should be able to max it out 24x7.

    The thing is, I don't know about you, but I can't afford to pay for that amount of bandwidth.

    My ISP sells me a contended service, where I get to use about 1/50 of my max or so. I'm only actually using my pipe about that much, so I'm happy with that.

    If you want to use the pipe 24x7 you just have to pay more; you need a higher-quality service, and they'll take your money just fine!
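
    The contention arithmetic, roughly (the 50:1 ratio is just the figure above; real ISPs size backhaul on measured peak usage rather than a fixed ratio, so treat this as illustrative):

        # Rough contention arithmetic for the 5 Mbit/s example above.
        ACCESS_MBPS = 5.0
        CONTENTION = 50   # subscribers sharing each unit of backhaul capacity

        fair_share = ACCESS_MBPS / CONTENTION
        monthly_gb = fair_share * 1e6 * 86400 * 30 / 8 / 1e9
        print(f"sustained share if everyone maxes out at once: "
              f"{fair_share:.2f} Mbit/s")
        print(f"monthly volume at that sustained rate: {monthly_gb:.0f} GB")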
  • by slmdmd ( 769525 ) on Monday June 16, 2008 @09:16AM (#23809431)
    Maybe we should invent a new communication channel for P2P other than the stone-age technology of cables. For example, GPS uses satellites. Say, an open-source wireless P2P device, etc., and then a P2P-only international gateway service provider.
  • by Odder ( 1288958 ) on Monday June 16, 2008 @12:41PM (#23812233)

    Here's how media companies will kill the free internet we all know and love:

    The result will look like broadcast media does today, one big corporate billboard, instead of a free press. Just a little censorship is like being just a little pregnant.

  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Monday June 16, 2008 @02:31PM (#23813591)
    Comment removed based on user account deletion
