Net Neutrality vs. Technical Reality
penciling_in writes "CircleID has a post by Richard Bennett, one of the panelists in the recent Innovation forum on open access and net neutrality — where Google announced their upcoming throttling detector. From the article: 'My name is Richard Bennett and I'm a network engineer. I've built networking products for 30 years and contributed to a dozen networking standards, including Ethernet and Wi-Fi. I was one of the witnesses at the FCC hearing at Harvard, and I wrote one of the dueling Op-Eds on net neutrality that ran in the Mercury News the day of the Stanford hearing. I'm opposed to net neutrality regulations because they foreclose some engineering options that we're going to need for the Internet to become the one true general-purpose network that links all of us to each other, connects all our devices to all our information, and makes the world a better place. Let me explain ...' This article offers great insight for anyone for or against net neutrality."
Open source throttling detector? (Score:3, Interesting)
I hope he's not referring to QoS... (Score:2, Interesting)
Complete and utter BS (Score:0, Interesting)
2) Throttling is one thing; what Comcast was doing was essentially criminal. They were hijacking the communications and injecting malicious resets or other packets to kill a connection.
3) If they just properly implemented QoS, then things like VoIP and IPTV would work just fine.
I guess I don't understand. (Score:5, Interesting)
Re:Multicast? (Score:5, Interesting)
Multicast only works if internet TV is going to be like regular TV, where a show airs at a particular time. If it's going to be more like YouTube on steroids, multicast doesn't help.
Re:No, he's talking about replacing TCP/IP. (Score:3, Interesting)
Confused? (Score:1, Interesting)
Malicious ISP behavior vs governance (Score:0, Interesting)
Throttling is one thing; what Comcast was doing was essentially criminal. They were hijacking the communications and injecting malicious resets or other packets to kill a connection.
What concerns me is if governance systems move to the internet [wikipedia.org]. Even if it is just for online voting -- who will keep the ISPs from manipulating the governmental processes?
In any event, it is good to know that open source governance [wikipedia.org] is trying to muscle in on the action. At least the I.T. departments of the ISPs should be in favor of "open sourcing" the government, right?
Re:I guess I don't understand. (Score:5, Interesting)
No, it seems to me you understand it perfectly. However, TFA seems to blur the line between net neutrality and treating traffic differently. For instance, if it were technically necessary to treat all voice packets as high priority (it seems it isn't, since VoIP works, but for the sake of argument), there's nothing to stop a standard being agreed and implemented on a neutral internet, so long as the voice packets are treated the same no matter who is sending and receiving them.
Re:No, he's talking about replacing TCP/IP. (Score:5, Interesting)
A related point that seems to run through the article is that more bandwidth is not the solution. But he doesn't explain why - for example
It's interesting to note that AT&T themselves have declared more bandwidth to be the solution. They didn't phrase it quite that way, but ultimately that's the conclusion an educated reader can draw from their research results: achieving the throughput of 1x bandwidth on a 'managed network' requires roughly 2x the bandwidth on a 'neutral network'. Sounds like a lot, until you realize that neither bandwidth costs nor management costs are linear. In fact, they tend to operate in reverse economies of scale: bandwidth gets cheaper the more you buy (think of it as complexity O(x+n), fixed costs plus the simple 1-to-1 nature of links), but management gets more expensive the more you do it, because the 1-to-1 nature of links is subsumed by having to manage the effects of all connections on each other, n-to-n style, for O(x+n^2). Ars Technica analysis of AT&T report [arstechnica.com]
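The scaling claim above can be sketched numerically. The cost functions and constants below are purely illustrative assumptions, not AT&T's figures; they just show how a linear bandwidth cost and a quadratic management cost diverge as a network grows:

```python
# Illustrative cost model (hypothetical numbers, not AT&T's figures):
# bandwidth cost grows roughly linearly with link count n (plus a fixed cost),
# while management cost grows with pairwise interactions, O(x + n^2).

def bandwidth_cost(n, fixed=100.0, per_link=10.0):
    """Linear: each extra link adds a constant marginal cost."""
    return fixed + per_link * n

def management_cost(n, fixed=100.0, per_pair=0.5):
    """Quadratic: every link's effect on every other link must be managed."""
    return fixed + per_pair * n * n

for n in (10, 100, 1000):
    bw, mg = bandwidth_cost(n), management_cost(n)
    print(f"n={n:5d}  bandwidth={bw:10.0f}  management={mg:12.0f}")
```

Per-unit bandwidth cost falls as n grows (the fixed cost amortizes), while per-unit management cost rises, which is the "reverse economies of scale" point.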
What crap (Score:5, Interesting)
The effen telcos already got paid 200 billion dollars to do something about getting fiber to the premises and blew it on anything but that. Where's the "political engineering" solution to look into that, to determine where the "QoS" broke down at ISP intergalactic central? Where are the ISP and telco fatcats sitting in front of congressional hearings explaining what happened to all that freekin money? Where did it go? Real facts, real names, real figures.
And why in the hell does the bulk of the public airwave spectrum only go to the same billion-dollar corporations, year after decade after generation, instead of being turned loose for everyone (you know, that "public" guy) to use and develop on? Why the hell do we even *need* ISPs anymore, for that matter? This is the 21st century; there are tons of alternative ways to move data other than running it through ISP and telco profitable choke points, and all I am seeing is them scheming on how to turn the internet into another bastardized combination of the effen telco "plans" and cable TV "plans". Really, what for?
Where's the mesh networking using long-range free wireless and a robust, 100% equal client/server model that we could be using, instead of being forced through the middleman of ISPs and telcos for every damn single packet? And what mastermind thought it was a good idea to let them wiggle into the content business? That's a big part of the so-called problem there: they want to be the tubes plus be the tube contents, and triple-charge everyone, get paid at both ends of the connection plus a middleman handling fee for... I don't know, but that is what they are on record wanting, and industry drools like this doofus are providing their excuses. Not content with hijacking all the physical wired reality for 100 years now, they get to hijack all the useful wireless spectrum, and no, WIFI DOESN'T CUT IT. That's at the big fat joke level in the spectrum for any distance.
Re:I hope he's not referring to QoS... (Score:3, Interesting)
1: By discarding any QoS information in the packet as it crosses your perimeter, and replacing it based on a guess done by deep packet inspection. Not only is this modifying data that wasn't meant to be modified, and thus legally no different from the dubious practice of rewriting HTML pages to show your own ads, but it also opens the question of whether you can claim to be a common carrier as long as you open every envelope to look at the first few lines of every letter. Never mind the extra latency and routing costs.
2: By accepting already existing QoS values at face value. While this might have worked 30 years ago, it will not work where there are commercial interests. Every spammer and spitter will prioritize their own packets as high as they can go, no matter what the consequences are for other traffic.
3: A combination of 1 and 2, where deep packet inspection assigns QoS priorities on packets that don't already have them. This is the worst of both worlds, and only an idiot would do such a thing, so this is what's generally happening out in the real world.
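The trust problem in case 2 is easy to demonstrate: QoS marks in the IP header are just bits any sender can set. A minimal sketch, using the standard EF (Expedited Forwarding) DSCP value that voice traffic conventionally carries (the address and port here are placeholders):

```python
import socket

# EF (Expedited Forwarding) is DSCP 46; the IP TOS byte is DSCP << 2 = 0xB8.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Mark every outgoing packet on this socket as top-priority "voice".
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Any application can do this, which is exactly the problem with trusting
# marks at face value: nothing stops a bulk transfer from marking itself
# as real-time voice. (Loopback address/port used purely as an example.)
sock.sendto(b"definitely-not-voice", ("127.0.0.1", 5060))
```

This is why networks that honor QoS marks typically re-mark or police them at the edge, which circles back to case 1.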
Re:Multicast? (Score:3, Interesting)
Re:Multicast? (Score:3, Interesting)
This should already help, right?
Re:Multicast? (Score:4, Interesting)
If you want on-demand, and NO local storage, then you are indeed in trouble.
Re:I hope he's not referring to QoS... (Score:3, Interesting)
If my company wants to use VOIP telephony between our branch offices and we want to pay extra for it to actually work right, but we don't want fully-private lines because it's wasteful and more expensive, then an ISP could offer us QoS on that basis. But they don't.
Re:Multicast? (Score:3, Interesting)
Player A requests a video using this tool, and subscribes to a multicast stream that is returned by the server.
Player A is watching, stream starts from 0.
Player B uses the same flash video tool.
Player B requests a video using this tool, and subscribes to an existing multicast stream plus a new one starting from 0.
Player B now receives the data already being transmitted for player A, plus the new data starting from 0.
Player B is watching, using the available streams on the network.
You could even extend this so that when someone skips to another position, it influences the streams available to the other players.
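The receiver side of the scheme above boils down to joining a multicast group. A minimal sketch (the group address and port are hypothetical; joining can fail on hosts without a multicast-capable interface, hence the fallback):

```python
import socket
import struct

GROUP = "239.1.2.3"   # hypothetical administratively-scoped multicast group
PORT = 5004

# Receiver side: join the group so the network delivers the shared stream.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# ip_mreq: the group address plus the local interface
# (0.0.0.0 / INADDR_ANY lets the kernel pick one).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    joined = True
except OSError:
    joined = False  # e.g. no multicast route on this host

# Player B would additionally open a second stream for the catch-up data
# from position 0, merging both streams in its playback buffer.
```

The "catch-up" stream is the part multicast alone doesn't give you, which is why late joiners still cost unicast bandwidth.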
Re:No, he's talking about replacing TCP/IP. (Score:5, Interesting)
The problem is subtle, and I've only seen it now that I've read TFA, although I've experienced it with our internet connection at work.
The sliding-window mechanism (keep sending packets ahead of the previous ACKs until a loss signal, a timeout or duplicate ACKs rather than any explicit NACK, tells you to back off) has an unpleasant side effect. An ACK train coming back over three hops from local P2P clients or ISP-based servers moves faster than one heading across the world over 16 hops with higher ping times. The sliding window therefore opens wider, and the traffic over the three hops can dominate the link.
Now add to that the problem, reported earlier, of BitTorrent clients that try for maximum bandwidth. That can force the window even wider.
And once the DSLAM/switch/aggregation port is saturated with traffic, it will delay or drop packets. If those are ACKs from the other side of the world, that window closes up more. There goes the time-sensitive nature of VOIP down the toilet.
On a shared-media network like cable, it doesn't even have to be you. If two people on the same cable are P2P transferring between each other, there goes the neighborhood. They dominate the line and the chap only using Skype down the road wonders why he isn't getting the performance he needs.
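The RTT effect described above follows from the back-of-the-envelope rule that TCP throughput is bounded by window size divided by round-trip time. A sketch with assumed RTT values (10 ms local, 200 ms long-haul) and a common default window:

```python
# Back-of-the-envelope TCP throughput: rate ~= window_size / RTT.
# With the same window, a 3-hop local flow beats a 16-hop long-haul flow
# simply because its ACKs come back sooner.

WINDOW_BYTES = 64 * 1024  # a common default receive window

def tcp_rate_mbps(window_bytes, rtt_seconds):
    """Upper bound on a single flow's rate, in Mbit/s."""
    return window_bytes * 8 / rtt_seconds / 1e6

local = tcp_rate_mbps(WINDOW_BYTES, 0.010)   # ~10 ms to a nearby peer
remote = tcp_rate_mbps(WINDOW_BYTES, 0.200)  # ~200 ms across the world

print(f"local:  {local:.1f} Mbps")   # ~52.4 Mbps
print(f"remote: {remote:.1f} Mbps")  # ~2.6 Mbps
```

Same window, same protocol, yet the local flow can claim 20x the bandwidth, which is exactly how nearby P2P traffic crowds out long-haul flows on a saturated link.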
I'm opposed to price-oriented non-neutral networks, your ISP charging Google for your high speed access to them. But a non-neutral network that does proper QOS by throttling bandwidth-heavy protocols that don't behave themselves on the network is acceptable. As long as the QOS only moves the throttled protocols down when needed.
Is it really? (Score:3, Interesting)
If all else fails, we simply need competition; look at what Verizon FiOS has done.
Net neutrality would work if the ISPs... (Score:3, Interesting)
I have always had a throttled connection - I used to be throttled at 256kbps down and 56kbps up.
Then I paid more, and with the exact same connection I got 512kbps down and 128kbps up.
Then I got a better service, and with the exact same connection I got 2Mbps down and 512kbps up.
They have throttled the connection all the time. The total use is irrelevant; what matters is whether all users use the bandwidth at the same time or not.
The providers could simply offer what they sell, not under the assumption that we will only use 0.1% of it, and let us actually use what we buy.
What is worse for the ISP:
- if you download 2 GB a day (~60 GB a month) spread out evenly (a continuous ~185 kbps)
- if you download 0.5 GB only during peak hours, one hour a day (~15 GB a month, but ~1110 kbps during that hour)
What happens if the bandwidth is not used? Does the ISP lose anything? It is their ability to provide to multiple people at the same time that matters; the second case, where one person downloads only 15 GB a month, is clearly worse for the ISP than the first, with 60 GB.
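Redoing the arithmetic behind the two cases (assuming 1 GB = 10^9 bytes):

```python
def avg_kbps(gigabytes, seconds):
    """Average rate needed to move `gigabytes` in `seconds`, in kbit/s."""
    return gigabytes * 8e9 / seconds / 1e3

# Case 1: 2 GB/day spread evenly over 24 hours.
even = avg_kbps(2, 24 * 3600)   # ~185 kbps, sustained
# Case 2: 0.5 GB squeezed into one peak hour.
peak = avg_kbps(0.5, 3600)      # ~1111 kbps during the busy hour

print(f"even: {even:.0f} kbps, peak: {peak:.0f} kbps")
```

The even downloader moves four times the monthly volume yet demands only a sixth of the peak-hour rate, which is the whole point: peak concurrency, not total transfer, is what the ISP has to provision for.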
The entire issue could be resolved by ISPs offering the real numbers for upstream and downstream bandwidth and expected latency on the connection.
Don't blame the customers for using what they paid for.
Dick and I Had it out on Tech Dirt a While ago (Score:2, Interesting)
Re:No, he's talking about replacing TCP/IP. (Score:5, Interesting)
A simplified economic model of the Internet calls for multiple levels of service providers that sell bandwidth to each other. So I, as your ISP / backbone provider, make money in proportion to the bandwidth you use. I have the option of enabling a technology that lets you be more efficient, use less bandwidth, and therefore pay me less. Meanwhile, this technology offers no benefit to me; in fact it costs me money: the money needed to implement and manage it.
To add insult to injury, this technology works properly only if all the hops between you and your destination have deployed it correctly. So a bunch of telcos, whose primary business is selling bandwidth, must jump through hoops to make your data transfer more efficient. No, it's not gonna happen.
To be successful, multicast must be completely redesigned from an economic perspective, so that it provides an immediate benefit for the provider that deploys it (if that is at all possible) without reducing that provider's revenue potential.
Reply: talking about profit not QoS/innovation (Score:3, Interesting)
The Internet's traffic system does not give preferential treatment to short/fast communication paths unless you are foolish enough to configure your backbone architecture for shortest/fastest-path-first routing rather than routing on QoS metrics and implied content criticality. TCP is ignored by the backbone; it is part of the payload and cannot route. Only the IP header carries the destination/route information used for packet switching. ATM cell switching is another backbone technology, and yes, both can be used at the business-office LAN/WAN level.
The technical term is the "round-trip time effect." Critical content delivery requires TCP/IP, not raw speed, while a protocol like UDP is important for real-time/streaming content (VoIP, VTC, ...). Dropped or corrupt UDP packets cannot be recovered (there is nothing to manage them), but TCP has a recovery process for dropped or corrupt packets. UDP is a good, fast protocol on LANs and for multimedia/broadcast (it can suffer jitter/distortion), but UDP is not appropriate for email or downloads of large/critical files across the internet, because a corrupted transfer would require another complete send/download. A lower RTT is not always what matters for TCP/IP traffic, where assured content delivery is critical; faster UDP speeds and more deliverable traffic are great for VoIP and streaming MP* files.
IOW: bandwidth and QoS are best kept net-neutral, and CableCo (or whichever IAP) needs to invest in its infrastructure and innovation, not screw its customers with bullshit legislation. Oh, and some folks (like me) consider infrastructure access providers, "IAPs" (CableCo/TelCo), different from content/service "ISPs" (Google, Yahoo, MSN, ...).
VoIP functions best when it receives an uninterrupted stream of packets, but in reality VoIP was meant to function acceptably for voice communications, and when adequate bandwidth is provided, VoIP delivers an acceptable phone conversation. VoIP (the protocol) does not, as best I know, give a shit about occasional small gaps.
File transfer (FTP) applications care only about the time between the request for the file and the arrival of the last bit; if the file is corrupted, you (or the application) make another FTP request for a clean, usable file. In between, it doesn't matter whether the packets are timed by a metronome or arrive in clumps. Jitter is the engineering term for variation in delay; when real-time data arrives too late to use, the result is voice/video distortion.
Asynchronous Transfer Mode (ATM, cell switching) manages both bandwidth and QoS far better than packet switching, and is great for VoIP/VTC...
The Internet is neutral with respect to applications and to location.
The Internet is not neutral with respect to QoS bandwidth.
Re:Multicast? (Score:3, Interesting)
Suppose you have two ways to watch shows: one is on-demand, click-and-get-it-this-second access. This option will never go away, but you can expect to be charged full bandwidth price for this option. The second choice is to select a few shows ahead of time. You would then subscribe to the multicast broadcast (which might be repeated every couple of hours), download the show to your local cache, and watch it at your convenience. Your bandwidth provider would reward you for the small effort of planning ahead by not charging you for the transfer, or only charging you a small fraction of the regular rate.
In theory, this could allow greater utility from the existing network capacity, and bring down costs for everyone.
Re:I hope he's not referring to QoS... (Score:3, Interesting)
QoS bandwidth delivered by IAPs has, in the past, been found very questionable by the ISP customers who wanted to confirm that they were indeed receiving the QoS bandwidth for which they contracted and paid. The typical home/biz user is in the business of trusting their IAP, not verifying QoS and bandwidth, which would be too complicated for small businesses and private users and would cost them too much.
Letting either the IAP or the ISP control everything will never produce innovation or QoS improvements. We already pay for QoS bandwidth access, and don't need more bullshit about what causes jitter. Almost all Internet bandwidth problems are caused by a lack of reinvestment in infrastructure by the IAPs.
Re: ComCast is a QoS bandwidth example (Score:2, Interesting)
Innovation requires investment and reinvestment
There are more than 4,000 independent ISPs. (Score:3, Interesting)
Re:Companies can't be trusted/Nobody CAN be truste (Score:3, Interesting)
Re:It's not reality, it's all a lie (Score:5, Interesting)
the real problem is that the marketing people are defining service options the networks are not capable of supporting. some services make a profit to support other services that don't, which is fine in, for example, pre-packaged computer bundles, but because with internet service this affects everyone, the end result is this: isp's don't have the ability to provide the level of service they advertise, so they must resort to throttling, which is of course applied arbitrarily to certain kinds of traffic, as they are the biggest bandwidth users, rather than applied generally.
if isp's just didn't spend so much time trying to hook those high bandwidth users up and made the prices of service to them higher, then the isp's could spend more money enhancing their bandwidth capacity instead of ending up having to explain why and what traffic they are shaping to keep use within the parameters of their networks.
there are many factors: how network applications are written, how various tcp/ip stacks schedule, how effective QoS systems are, and how widely deployed they are. but there is one guaranteed way to ensure networks aren't bogged down by bulk traffic and streaming users: always keep traffic levels below about half of capacity. the line might be rated to transport data at a certain speed, but when you fill that pipe past a certain point you wind up with a great deal of turbulence, which leads to latency issues.
it's a bit similar to mastering levels in audio engineering - sure, you may have 120 dB of resolution in your recording medium, but the closer you get to filling all that space the less headroom you have for periodic spikes, which has led commercial music engineering to more use of dynamic range compression, producing a much 'duller' sound with less dynamics (some even say that this compressed dynamics fatigues the listener). this problem never happened in cinema sound engineering, because someone set a standard for how many dB of average power a mix should target. similarly, if the network provision industry set a standard of aiming for around 50-60% average utilisation and adjusted its planning for bandwidth upgrades and market penetration accordingly, none of this would be a problem.
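The "keep it below half capacity" rule has a standard queueing-theory justification. As a rough sketch (treating the link as an M/M/1 queue, an idealized model, not a claim about any real network), delay grows as 1/(1-rho) with utilization rho:

```python
# A rough M/M/1 queueing sketch of why latency blows up near full utilization:
# mean delay scales as 1 / (1 - rho), where rho is link utilization.

def relative_delay(rho):
    """Delay relative to an idle link (M/M/1 mean sojourn time factor)."""
    assert 0 <= rho < 1, "utilization must be below 100%"
    return 1 / (1 - rho)

for rho in (0.3, 0.5, 0.8, 0.95, 0.99):
    print(f"utilization {rho:.0%}: delay factor {relative_delay(rho):.1f}")
```

At 50% utilization delay merely doubles; at 95% it is 20x, and at 99% it is 100x, which is the "turbulence" the parent describes and why planning for ~50-60% average load leaves headroom for spikes.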
beancounters see the network capacity specification and expect they can run the network at that level without any problems. but of course beancounters also rate the potential of a resource by keeping customer turnover below a certain percentage, meaning they can cheapskate to some degree, and since businesses care more about the bottom line than good service, this is the sort of issue that cannot be solved by anything other than legislation.
i believe network neutrality as a concept misses the real point at issue here, which is simply businesses squeezing more money out of their lines than is possible in real practice, and pushing this limit just short of messing up the whole network. throttling bittorrent and streaming video is all about trying to hold back the flood of bandwidth demand so they can put off the upgrades for longer.
there would be no problem if they just didn't provision more bandwidth on the local loop than can be carried through the peering connections.
Re:No, he's talking about replacing TCP/IP. (Score:5, Interesting)
I'm opposed to price-oriented non-neutral networks, your ISP charging Google for your high speed access to them. But a non-neutral network that does proper QOS by throttling bandwidth-heavy protocols that don't behave themselves on the network is acceptable. As long as the QOS only moves the throttled protocols down when needed.
I work for an ISP, and net neutrality scares the hell out of me. We do not want to, and will not, throttle back certain sites that won't pay us for premium access, or create a tiered pricing structure for our customers. What I want is the right to manage my network to give my customers the best performance, by de-prioritizing badly written and poorly behaving protocols, AKA 99% of all p2p stuff.
We also don't want to see content providers shift their bandwidth costs onto the ISP networks via the use of p2p. Why pay for expensive backbone links when you can shove 50% or more of your bandwidth onto your customers, and their provider's network? Either let us ISPs manage our networks, or we will start charging for upload bandwidth on a usage basis. I really don't want to do this, but if net neutrality becomes a reality, I see this becoming a very popular way to save on bandwidth costs. Blizzard already does it, patches for World of Warcraft are distributed via bittorrent. Why they think it is appropriate for their service to be offloaded onto my network is beyond me, but they do. When I can't rate limit bittorrent, and it becomes a huge bandwidth hog, my customers that patronize services that are the source of the problem will see their bills go up.
Thank you, I finally read a post from someone who gets it. I didn't think that would ever happen.
Oh, and any replies to the effect of "well, it's your own fault for not having enough bandwidth" can just go eat a dick. I have bandwidth, and that is not the point. The point is that content providers should provide their own bandwidth, not leech it from the ISPs in the name of the heavenly, super great, don't-ever-question-it p2p software demi-god.
Man, I got way off target there.
Re:It's not gonna happen, sadly. (Score:3, Interesting)
You seem to be saying that if you have a 5M pipe, that you should be able to max that out 24x7.
The thing is, I don't know about you, but I can't afford to pay for that amount of bandwidth.
My ISP sells me a contended service, where I get about 1/50 of my maximum rate or so. Since I'm only actually using my pipe about that much, I'm happy with that.
If you want to use the pipe 24x7 you just have to pay more, you need a higher quality service, and they'll take your money just fine!
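The contended-service arithmetic above can be sketched directly. The uplink, tier, and contention figures below are hypothetical, chosen only to match the parent's ~1/50 example:

```python
# Contended-service sketch: an ISP with a 1 Gbps uplink selling 5 Mbps tiers
# at a 50:1 contention ratio. (All numbers are hypothetical.)

UPLINK_MBPS = 1000
TIER_MBPS = 5
CONTENTION = 50

# Uncontended, the uplink supports 200 subscribers; at 50:1 it is sold 50x over.
subscribers = UPLINK_MBPS // TIER_MBPS * CONTENTION
# If every subscriber maxed out 24x7, each would get only their fair share.
fair_share = UPLINK_MBPS / subscribers

print(f"{subscribers} subscribers, {fair_share} Mbps each at full load")
```

The model only works because most users are idle most of the time; a few 24x7 downloaders consume many users' worth of the shared uplink, which is why the poster says heavy users should simply buy a less-contended, more expensive service.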
Re:It's not reality, it's all a lie (Score:2, Interesting)
Neutral net or no net, there is no other choice. (Score:4, Interesting)
Here's how media companies will kill the free internet we all know and love:
The result will look like broadcast media does today, one big corporate billboard, instead of a free press. Just a little censorship is like being just a little pregnant.