Blackout Shows Net's Fragility
It doesn't come easy writes to mention a ZDNet article discussing a recent outage between Level 3 Communications and Cogent Communications. A business feud inadvertently highlighted the fragility of the Internet's skeleton. From the article: "In theory, this kind of blackout is precisely the kind of problem the Internet was designed to withstand. The complicated, interlocking nature of networks means that data traffic is supposed to be able to find an alternate route to its destination, even if a critical link is broken. In practice, obscure contract disputes between the big network companies can make all these redundancies moot. At issue is a type of network connection called 'peering.' Most of the biggest network companies, such as AT&T, Sprint and MCI, as well as companies including Cogent and Level 3, strike 'peering agreements' in which they agree to establish direct connections between their networks."
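To make the quoted point concrete, here is a toy sketch (the topology and all names are made up, not the companies' actual interconnects) of how dropping a single peering link can partition single-homed customers even though both backbones still have other connections:

```python
from collections import defaultdict, deque

def reachable(links, src, dst):
    """Breadth-first search over an undirected graph: can src reach dst?"""
    graph = defaultdict(set)
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph[node] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

# Hypothetical topology: each customer is single-homed to one backbone.
links = [("cogent_customer", "Cogent"), ("level3_customer", "Level3"),
         ("Cogent", "Level3"),            # the peering link in dispute
         ("Cogent", "AT&T"), ("Level3", "Sprint")]

print(reachable(links, "cogent_customer", "level3_customer"))  # True

# De-peering: drop the direct link. Both backbones still have other
# connections, but with no transit path between them the two customers
# are partitioned anyway.
depeered = [l for l in links if l != ("Cogent", "Level3")]
print(reachable(depeered, "cogent_customer", "level3_customer"))  # False
```

The physical redundancy only helps if some third network is willing (contractually) to carry the traffic between the two sides; otherwise the "alternate routes" the article mentions simply don't exist.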
The small should pay for the big? (Score:5, Interesting)
What I don't get is why one of them would suddenly want the other to pay up. What's changed now, and why does the smaller company have to pay the big one's bills?
Am I missing something here?
When did this blackout happen (Score:1, Interesting)
Efficiency can be the enemy of robustness (Score:5, Interesting)
On a similar note, that's why there are 13 root DNS servers, and why most of us aren't supposed to query them directly. The DNS example, though, is one where efficiency and robustness agree: it's more efficient, at least in terms of net bandwidth, to use a DNS server closer to you than the root servers.
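The reason resolvers keep load off the roots is caching with TTLs. A minimal sketch (the zone data and class are made up for illustration; real resolvers also walk the delegation hierarchy):

```python
import time

class CachingResolver:
    """Toy stub resolver: answers from its cache when possible and only
    falls back to the (simulated) upstream server on a miss or expiry."""
    def __init__(self, upstream_lookup):
        self.upstream_lookup = upstream_lookup  # function(name) -> (addr, ttl)
        self.cache = {}                         # name -> (addr, expires_at)
        self.upstream_queries = 0

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        hit = self.cache.get(name)
        if hit and hit[1] > now:
            return hit[0]                       # served locally: no root load
        self.upstream_queries += 1
        addr, ttl = self.upstream_lookup(name)
        self.cache[name] = (addr, now + ttl)
        return addr

# Hypothetical zone data standing in for the real DNS hierarchy.
zone = {"example.com": ("93.184.216.34", 300)}
r = CachingResolver(lambda name: zone[name])
r.resolve("example.com", now=0)     # miss: one upstream query
r.resolve("example.com", now=100)   # hit: answered from cache
print(r.upstream_queries)           # 1
```

Millions of clients hitting a nearby cache like this is what keeps 13 root server clusters sufficient for the whole net.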
Call the helpdesk...wait, THEY don't even know! (Score:5, Interesting)
Good article on this situation here
This situation has adversely affected users of both companies' services. Level 3's failure to handle the dispute in a manner fair to consumers has alienated many customers and will continue to do so until the situation is remedied. At what point is it good customer service to discontinue service through no fault of the consumer? Market history shows that the single worst thing a company can do is allow influences beyond consumers' control to degrade, without any warning or notification, services those consumers regard as status quo. Left unresolved and unaddressed, the current situation could set a dangerous precedent for Internet users across the country: service providers could instantly cut off service the moment they feel they are not being adequately compensated by outside companies.
On a side note, I was listening to Howard Stern (oh no!) this morning and he said that his Time Warner internet connection at home didn't work. Howard called a tech guy to come and fix the problem, only for the tech to call a help desk to figure out what had happened. The help desk didn't even know what was wrong. It sounds like Level 3 just pulled the plug and didn't notify ANYONE. Or maybe it was Cogent; the point is nobody outside of the dispute KNEW what was going on.
This sounds like a good way to alienate your customers and/or ruin your business model. But that is just my opinion.
A New Approach (Score:3, Interesting)
Not a redundancy issue... (Score:4, Interesting)
Governmental Role? (Score:1, Interesting)
Why wouldn't it be a good thing for some governmental agency to regulate the physical location of various installations? It seems to me that many providers use the same colocation facilities to house their equipment. Some regulation would seem smart, to avoid putting all the Internet's eggs in one basket.
Create several more physical IXPs in geographically diverse areas with redundant connections. Then regulate that only a limited number of companies could colocate together within a certain number of IXPs.
This could prevent one company's "disagreement" with another from affecting traffic, since it could be routed over an alternative link.
Does this make sense?
Re:Ask Slashdot (Score:4, Interesting)
Physicians trying to use the internet to take care of critically ill patients are already experiencing this. Radiologists sitting home reading films are seeing this as well.
Is 100% uptime necessary? Hell, VoIP is making money like crazy over this unstable network of ours.
My suggestion is to test with people that will understand the limitations of your service. Then get a little VC money to spread your servers out.
ah peering (Score:4, Interesting)
But if most of the traffic from other networks is going to customers who are already connected to and paying for your network's service, then it makes no sense and is simply wrong for a network to start charging other network providers. It breaks the end-to-end communication model and gives your customers less than the service they are paying for. People pay for Internet connectivity so they can transfer data to and from other users on the Internet, not just the ones on your company's network.
If money changing hands is at all appropriate in this case, it might be for the actual installation of the routing equipment that establishes the physical connection between the networks.
not a blackout (Score:2, Interesting)
Cogent Sucks (Score:2, Interesting)
I contacted Cogent's "premium" help desk last night when I found that I was suddenly no longer able to get to our networks in Australia. The tech had no idea that his own company was in the middle of a huge peering battle with L3. I had to tell them!
This was predictable (Score:5, Interesting)
Unfortunately, that is not the Internet that we have today. In the original Internet, every router knew about every network connected to the Internet. Most networks had connectivity to many other networks. Discovery protocols allowed alternative routes to be discovered if one failed.
Today, we don't have a (mostly) fully connected net, we have ISPs who don't know anything about networks which they don't "own", only that certain IP prefixes need to be passed to ISP x, y or z.
This makes the infrastructure much more fragile than it was originally intended to be. We ended up here for a few reasons. First, the wimpy routers in use at the time had limited memory available to hold the network maps. The chosen answer was to stop holding a full world view and instead divide the world into regions: certain IP prefixes would "belong" to a region, and any given router only needed to know about the networks in its own region, plus how to route traffic to the other regions, which would handle routing internally. This led to "backbone" connections: high-capacity links needed because traffic between regions no longer "diffused" through the network but was channeled into specific connections. It also set the scene for the net to be commercialised: those regional centers were obvious choke points that an enterprising company could own, pretty much dictating pricing to the lower-level enterprises that did the dirty work of dealing with end users.
Slowly but surely, the Internet evolved into a system dependent upon a few companies with high-speed links between them (prime candidates, by the way, for the imposition of government control). The self-healing nature of the original Internet was lost, because all traffic HAS to pass through the top-level companies' infrastructure and over their backbone interconnects.
The "self healing" Internet is long gone.
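The shift the parent describes, from routers knowing every network to routers knowing only "which neighbor handles this prefix," is longest-prefix matching. A minimal sketch, with made-up prefixes and next hops:

```python
import ipaddress

# Toy forwarding table: prefix -> next-hop ISP (all values hypothetical).
routes = {
    "0.0.0.0/0":       "transit-ISP",   # default route
    "198.51.100.0/24": "ISP-x",
    "198.51.0.0/16":   "ISP-y",
}

def next_hop(addr):
    """Longest-prefix match: the most specific covering route wins.
    Note the router knows nothing about the destination network itself,
    only which neighbor advertises the prefix."""
    dest = ipaddress.ip_address(addr)
    best = max((ipaddress.ip_network(p) for p in routes
                if dest in ipaddress.ip_network(p)),
               key=lambda n: n.prefixlen)
    return routes[str(best)]

print(next_hop("198.51.100.7"))   # ISP-x (the /24 beats the /16)
print(next_hop("198.51.7.7"))     # ISP-y
print(next_hop("203.0.113.5"))    # transit-ISP (default route)
```

When every destination ultimately resolves to "hand it to the big transit provider," a dispute at that single hand-off point blacks out everything behind it, which is exactly the fragility the parent is pointing at.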
Monitor it yourself (Score:5, Interesting)
http://www.internetpulse.net/
I'm not affiliated with them in any way, and I'm sure there are other similar sites, but I thought it was worth mentioning.
Re:Internet can route against natural calamities (Score:3, Interesting)
Re:The small should pay for the big? (Score:3, Interesting)
Re:This was predictable (Score:3, Interesting)
This is what happens when you have an industry based upon a high cost of entry (physical infrastructure, here) and a low marginal cost of supply. We need fat pipelines because we demand fast speeds and high volumes for our traffic. If we didn't have regions, but instead had the "original self-healing internet," how long do you think it would take to download big files if the source didn't happen to be just 2 or 3 routers away? Say goodbye to streaming video, etc.
The net cost of transmission would be far higher for packets that are many routers away in a truly web-based system, since not all paths are equal.
The problem is, how do we balance cheap efficiency (fatline "superhighways") with expensive redundancy to optimize the system for all participants?
Re:The small should pay for the big? (Score:3, Interesting)
Re:Didn't notice at all. (Score:3, Interesting)
Userfriendly? (Score:2, Interesting)
Re:It always will be fragile (Score:3, Interesting)
The previous topology in that office had been thicknet (where you had to manually tap the cable). Thinnet was seen as better, or at least easier to build a network out of in a cubicle environment.
Token Ring wasn't all that bad. Unlike thinnet, the physical wiring looked more like today's Ethernet topology, with a dedicated cable running from the patch panel to the workstation's network jack. At least, it was wired that way in the buildings where I've seen it. So it was easy enough to plug/unplug stations from the network in a central location. The topology was also designed to survive a single break (the stations before/after the break would loop back).
The usual problems we had with TR were the fragile connectors (problematic for test environments and for laptop users who plugged and unplugged frequently), plus the fact that you only had 4Mbps (later 16Mbps) and a 4Mbps card wouldn't work on a 16Mbps ring. Ethernet hubs/switches did a much better job of handling the upgrade path automatically: one port might run at 10Mbps, another at 100Mbps, and a third at 1Gbps, without redoing your entire network topology.