Does the Internet Route Around Damage? (ripe.net)
Longtime Slashdot reader Zarhan writes: On Sunday and Monday, two undersea cables in the Baltic Sea were cut. There is talk of a hybrid operation by Russia against Europe, and a Chinese ship has been detained by the Danish Navy. The interesting question, however, is whether the cuts really had any effect, or whether the internet actually routes around damage. RIPE Atlas tests seem to indicate that it does. RIPE Atlas probes did not observe any noticeable increase in packet loss and only a minimal and perfectly expected increase in latency as traffic automatically switched itself to other available paths. While 20-30% of paths experienced latency increases, the effects were modest and no packet loss was detected. That said, questions remain about the consequences of further cable disruptions. "We are blind on what would happen if another link would be severed, or worse, if many are severed," reports RIPE Labs.
Going forward ... (Score:4, Insightful)
Looks like the Internet will (just) have to track Chinese ships and route around them. :-)
Re: (Score:3)
maybe we should place autonomous sea mines around undersea internet cables.
Re:Going forward ... (Score:4, Interesting)
This isn't the first time a Chinese ship has gotten caught doing (something like) this. A while ago one "accidentally" dragged its anchor for a hundred miles or so and broke a communications cable. China denied it for a long time, then admitted the "mistake" and (I think) paid a fine.
Re: Going forward ... (Score:2)
They have been caught this time.
I'd say semi automatically (Score:3)
I'd say semi-automatically. It depends on how well BGP is configured on the different hops and how many different paths are available between hosts for it to be completely automatic, but in some cases, yes, it should be automatic and transparent. Then again, a fail-over route might not be able to handle all the capacity of a primary route. That's my understanding, at least.
Re:I'd say semi automatically (Score:5, Interesting)
"The Internet" is a mesh of BGP speaking Autonomous Systems (I run one).
For the most part (there are exceptions, but they don't really matter here) everyone has a full routing table- i.e., a route to every single destination on the internet.
Most are also multi-homed (meaning they have multiple sets of full routing tables at different NNIs)
Functionally, this means that "Yes, the Internet routes around damage."
Of course, every mesh has some critical amount of damage it can sustain before parts of it go dark, but "The Internet" as a whole is not susceptible to simple cable cuts. Darkening individual countries (particularly smaller ones) isn't terribly hard, but something like a continent? Not going to realistically happen.
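To make the "routes around damage" point concrete, here's a minimal sketch in plain Python (not actual BGP path selection, and all AS names and links below are hypothetical): a made-up AS graph where severing the direct link still leaves a longer path, and severing a second link partitions the graph.

```python
from collections import deque

# Toy "AS graph": nodes are hypothetical autonomous systems, edges are links
# (think undersea cables). Illustrates re-routing, not BGP's real decision process.
links = {
    ("AS-FI", "AS-DE"),   # hypothetical direct Baltic link
    ("AS-FI", "AS-SE"),
    ("AS-SE", "AS-DK"),
    ("AS-DK", "AS-DE"),
}

def adjacency(links):
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj

def path(links, src, dst):
    """Breadth-first search: one shortest path, or None if the destination is unreachable."""
    adj = adjacency(links)
    parent, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            hops = []
            while node is not None:
                hops.append(node)
                node = parent[node]
            return hops[::-1]
        for nxt in adj.get(node, ()):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

print(path(links, "AS-FI", "AS-DE"))                      # direct: FI -> DE
cut_once = links - {("AS-FI", "AS-DE")}                    # sever the direct cable
print(path(cut_once, "AS-FI", "AS-DE"))                    # longer, but still reachable
cut_twice = cut_once - {("AS-SE", "AS-DK")}                # sever one more
print(path(cut_twice, "AS-FI", "AS-DE"))                   # None: that part goes dark
```

Losing one link just lengthens the path (the modest latency bump RIPE Atlas observed); losing the second partitions this toy graph, which is exactly the "we are blind on what would happen" scenario from the summary.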
Re: (Score:2)
BGP was only introduced in 1989 and the military used to be able to route around broken links since the beginning of the Internet. I can't tell, but was it all manual back then? Or just like a patchy server, scripts hacked together including ping and the like? Any insight?
I didn't see any mention of predecessors here:
https://en.wikipedia.org/wiki/... [wikipedia.org]
I played a little with BGP but we are relying on providers BGP for our links right now. We are using OSPF for the network where we fully control all devices rig
Re:I'd say semi automatically (Score:4, Informative)
Look at the man page for gated.conf.
It was more manual back then but there were daemons and route preferences and stuff.
Re: (Score:2)
Unfortunately, BGP is also sensitive to misconfiguration, so if it's done wrong you can take down a whole autonomous system someplace else.
As for redundancy: it only works if you have capacity left.
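A toy illustration of the misconfiguration point: routers generally prefer the most-specific matching prefix, so an accidentally leaked more-specific route quietly pulls traffic away from the legitimate one. The prefixes and AS numbers below are made up (documentation and private ranges), and this is plain longest-prefix matching in Python, not a real BGP implementation.

```python
import ipaddress

# Routes a router has learned: (prefix, origin). Purely hypothetical data.
routes = [
    (ipaddress.ip_network("203.0.113.0/24"), "AS64500 (legitimate origin)"),
    (ipaddress.ip_network("203.0.113.0/25"), "AS64511 (fat-fingered leak)"),
]

def best_route(dst, routes):
    """Longest-prefix match: the most specific covering prefix wins."""
    candidates = [(net, origin) for net, origin in routes if dst in net]
    return max(candidates, key=lambda c: c[0].prefixlen) if candidates else None

print(best_route(ipaddress.ip_address("203.0.113.10"), routes))
# -> the leaked /25 wins, so half of the /24's traffic heads the wrong way,
#    even though the legitimate /24 announcement is still present.
```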
It was designed to survive nuclear war (Score:5, Funny)
Re:It was designed to survive nuclear war (Score:5, Insightful)
I'd rather NOT be 100% certain on this point, personally...
Re: It was designed to survive nuclear war (Score:2)
Nuclear war will affect a lot more than the hardware that directly supports the links. These days there are numerous dependencies which could be highly relevant. Cloudfront for example.
Re: It was (sorta) designed to survive nuclear war (Score:2)
The ARPA project that proposed a network designed it to route around blown-up cities. However, the first ARPA network had very few nodes and very few lines, so it couldn't route around much of anything. It wasn't particularly targeted at survivability.
We haven't had a nuclear war since the (D)ARPA net became the Internet, so there has been little interest in testing its large-scale rerouting. We don't particularly want to have a nuclear war just to see if the design actually handles massive rerouting.
Re: (Score:1)
So it really should. But it's kind of sad we're not sure, half a century later.
Naa, we are very sure how BGP works.
The problem is that "routing around damage" requires another route, one that isn't the same as the one that was damaged.
A single route to a POP is not going to have a second route to go around the first.
Then there is the issue of cost.
Two routes need to be kept at 50% or less utilization. Three routes at 66% or less. Etc.
Without that the routes around won't have the bandwidth to handle the extra traffic.
So it's not just the cost of the multiple links but the cost of what looks like underutilized links, which those in it for profit or on the cheap will read as "wasted".
Re: (Score:2)
Two routes need to be kept at 50% or less utilization. Three routes at 66% or less. Etc. Without that the routes around won't have the bandwidth to handle the extra traffic.
So it's not just the cost of the multiple links but the cost of what looks like underutilized links, which those in it for profit or on the cheap will read as "wasted"
You can cheat a little depending on how much degradation you are willing to accept. Like, say, 3 routes at max 80%, which would sound better to the finance department. Just tell them (lying) that the 20% is due to TCP/IP packet overhead and you're good to go! :)
Same principle for server clusters as a side note.
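A back-of-the-envelope sketch of that headroom rule (numbers purely illustrative): with n equal routes, each has to stay under (n-1)/n utilization for the survivors to absorb one failure cleanly, and the 80% "cheat" means accepting degraded service after a cut.

```python
def max_safe_utilization(n_routes: int) -> float:
    """Highest per-route load that still fits on n-1 routes after one failure."""
    return (n_routes - 1) / n_routes

def load_after_failures(n_routes: int, utilization: float, failures: int) -> float:
    """Per-route load once `failures` routes are lost and traffic is redistributed."""
    remaining = n_routes - failures
    if remaining <= 0:
        raise ValueError("no routes left to carry the traffic")
    return utilization * n_routes / remaining

for n in (2, 3, 4):
    print(f"{n} routes: keep each under {max_safe_utilization(n):.0%}")

# The finance-friendly "cheat": 3 routes at 80% each leaves the two survivors
# at 120% after one cut, i.e. congestion and degradation, not a clean failover.
print(f"{load_after_failures(3, 0.80, 1):.0%}")
```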
don't cut it (Score:3)
Re: (Score:2)
The larger a packet you throw over a link, the more susceptible it is, individually, to random bit errors (which are simply a fact of life on long-haul links).
This means per-packet loss increases with packet size.
i.e., small packets are protected from packet loss by the school-of-fish effect: any single bit error only takes out one small packet rather than a whole big one.
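A rough sketch of that claim: if bit errors land independently at some bit error rate (BER), the chance that a packet contains at least one flipped bit (and gets dropped on its checksum) grows with its size. The BER value below is made up for illustration; real long-haul links vary enormously.

```python
def corruption_probability(packet_bytes: int, ber: float) -> float:
    """Chance that at least one bit in the packet is flipped,
    assuming independent bit errors at the given bit error rate."""
    bits = packet_bytes * 8
    return 1.0 - (1.0 - ber) ** bits

ber = 1e-7  # hypothetical residual error rate, for illustration only
for size in (64, 576, 1500, 9000):   # tiny, traditional default, Ethernet MTU, jumbo frame
    print(f"{size:>5} bytes: {corruption_probability(size, ber):.3%}")
```

With these made-up numbers a 9000-byte jumbo frame is lost roughly 140 times as often as a 64-byte packet, which is the school-of-fish effect in action.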
Re: (Score:2)
Very interesting! It seems quite obvious in hindsight, but I had never thought about it, and it makes plenty of sense at first glance.
Yes (Score:2)
When the power goes out and takes FTTH, cellular service, CATV, etc out with it, I just drive down to the coffee shop with my laptop.
Simple / Complex system (Score:3)
Really, the internet is a simple / complex system. It's simple as there are only so many ways to reroute, but complex in that it's prone to cascade failures if the existing pathways are overstressed.
Example:
You have 4 links between A and B, each utilized at 50% capacity to make the maths easy. If you lose one link and the traffic fails over, the other 3 would each be around 62% capacity. If you lose another, the remaining two are ~75% capacity. If you lose one more, your remaining link is over 100% capacity and other things start to timeout / fail as there is insufficient capacity to service the load, and nobody can predict what happens then from a system-wide perspective.
Re: (Score:2)
Shouldn't that be 50%/66%/100%/200%?
If you have 4 identical links at 50% saturation each, that's 200 "points" of traffic. Which then gets distributed over 3, 2, and then finally 1 link.
In which case, the network is fully saturated with just two links. And while it's technically not overloaded, it's close enough that things are probably going to break anyhow.
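Working through that corrected cascade explicitly (same toy numbers, a purely hypothetical sketch): four identical links at 50% each carry 200 "points" of traffic, and each failure piles the same total onto fewer links.

```python
TOTAL_LINKS = 4
INITIAL_UTILIZATION = 0.50
total_traffic = TOTAL_LINKS * INITIAL_UTILIZATION   # 2.0 link-capacities worth

for remaining in range(TOTAL_LINKS, 0, -1):
    per_link = total_traffic / remaining
    status = "OK" if per_link <= 1.0 else "OVERLOADED"
    print(f"{remaining} link(s) left: {per_link:.0%} each ({status})")
# 4 links: 50%, 3 links: 67%, 2 links: 100% (saturated), 1 link: 200% (cascade failure)
```

That matches the 50%/66%/100%/200% figures above: the network is already fully saturated with two links left, and a single remaining link simply cannot carry the load.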
Stupid question (Score:2)
TCP/IP (ARPANET, the Internet) is designed to route around damage.
Obviously, that only works if there is still a route.
Not if your AT&T or Time Warner (Score:2)