Army DNS ROOT Server Down For 18+ Hours
An anonymous reader writes "The H-Root server, operated by the US Army Research Lab, spent 18 of the last 48 hours being a void. Both RIPE's DNSMON and the h.root-servers.org site show this. How, in this day and age of network engineering, can we even entertain one of the thirteen root servers being unavailable for so long? The US Army doesn't even seem to make the effort to deploy more sites. Look at the other root operators who don't have the backing of the US government money machine: many of them manage to deploy redundant instances. Even the much-maligned ICANN has managed to deploy 11 sites. All the root operators that have only one site need a good swift kick, or maybe they should pass the responsibility to others who are more committed to ensuring the Internet's stability."
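(For context: "redundant instances" here means anycast, many physical sites all answering on one root server address. You can often see which instance of a root you are actually hitting with the conventional CHAOS-class "hostname.bind" TXT query. A rough sketch using the third-party dnspython library; not every operator answers identity queries, and the two servers listed are just the ones named in the summary:

import socket
import dns.message
import dns.query
import dns.rdataclass
import dns.rdatatype

# H (US Army Research Lab) vs. L (ICANN), the two operators named above
for name in ("h.root-servers.net", "l.root-servers.net"):
    try:
        addr = socket.gethostbyname(name)  # resolve via the local resolver
        query = dns.message.make_query(
            "hostname.bind", dns.rdatatype.TXT, rdclass=dns.rdataclass.CH
        )
        response = dns.query.udp(query, addr, timeout=3)
        instance = response.answer[0][0].to_text() if response.answer else "unknown"
        print(f"{name} ({addr}): instance {instance}")
    except Exception as exc:  # timeout, refused CHAOS query, etc.
        print(f"{name}: no answer ({exc})")

Run it from a few different networks and an anycast operator will typically report different instance names; a single-site operator reports the same one from everywhere.)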
So the Internet worked as it should... (Score:5, Insightful)
So the Internet worked as it should, and routed around this disruption. The other root servers were unaffected, and still functioned fine. So what exactly is the problem?
Re:Army Intelligence? (Score:1, Insightful)
It was probably outsourced to the cheapest bidder. Either that, or some incompetent idiots got the winning bid by greasing a few palms.
Why is it their problem? (Score:3, Insightful)
Because they don't have redundancy? Everyone gets mad because the USA wants to control the Internet, but let something go bad and suddenly they want to point fingers? Really? I just don't get the mentality of "we want you to do this for free" followed by bitching and moaning when the service is down for a bit.
One down, several dozens up (Score:2, Insightful)
What's the problem? The point of redundancy isn't to keep all redundant instances up all the time. The system is designed to allow for downtime of quite a few servers.
Lowest bidder (Score:4, Insightful)
This is what happens when you give contracts to the lowest bidder.
Re:Why is it their problem? (Score:5, Insightful)
It has nothing to do with this being a US Army server. It has everything to do with bad design. The people given the responsibility of a root server should NOT take that responsibility lightly.
There are 12 others - pick one. (Score:5, Insightful)
Hardware fails. That's just how it is. Even with the highest end hardware available today, outages can happen. This is why there are 13 root servers to start with. So long as they don't all go down at once, all is good. As far as 18 hours to recover, why is that bad? With 12 others to pick from, should this one be a high priority? I think not. Getting one's panties in a bunch because a server fails and takes some time to recover makes you sound like a silly management type. Most of us lived at least a large part of our lives without any root servers - or any servers at all. It's not the end of the world if DNS goes down. It will be ok, I promise.
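(For the curious, checking all 13 roots yourself is trivial. A rough sketch with the third-party dnspython library, assuming dnspython 2.x; a real resolver effectively does the same thing, silently skipping any root that times out:

import string
import dns.message
import dns.query
import dns.resolver

for letter in string.ascii_lowercase[:13]:  # a.root-servers.net .. m.root-servers.net
    server = f"{letter}.root-servers.net"
    try:
        addr = dns.resolver.resolve(server, "A")[0].to_text()
        dns.query.udp(dns.message.make_query(".", "NS"), addr, timeout=2)
        print(f"{server}: answering")
    except Exception:
        print(f"{server}: DOWN -- a resolver just moves on to the next root")

One DOWN out of 13 is invisible to end users.)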
Re:There are 12 others - pick one. (Score:5, Insightful)
Meh. It's just one of 13 roots, and almost nobody queries it directly. If my DNS points at my ISP's resolver, or Google DNS, or my own recursive caching DNS server that uses one of those as an upstream, all 13 root servers could be down for literally days and almost nobody would notice. Most DNS servers retain large caches of most domains. If something freaks out when the roots disappear, a few small ISPs might need to make some quick configuration changes, and some DNS changes wouldn't propagate properly until the root servers were back online. But, frankly, life would go on. Making all of DNS go away would be pretty much impossible, short of taking out every node on the Internet.
Yes, if *all 13* root servers suddenly died, a few people would get a late night at the office, but I certainly wouldn't see the effects directly.
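Caching is easy to see for yourself. A hedged sketch with the third-party dnspython library (assuming dnspython 2.x; the resolver address and query name are just examples): ask the same resolver twice and watch the TTL shrink, which shows the second answer came from cache rather than from an authority, let alone a root.

import dns.resolver

resolver = dns.resolver.Resolver()
resolver.nameservers = ["8.8.8.8"]  # Google DNS, as an example upstream

first = resolver.resolve("slashdot.org", "A")
second = resolver.resolve("slashdot.org", "A")
# The second TTL is (usually) lower: the resolver is counting down a
# cached record instead of re-walking the tree from the roots.
print(first.rrset.ttl, second.rrset.ttl)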
"backing of the US government money" (Score:5, Insightful)
Rest assured, the government isn't holding back. Those non-redundant Army servers already cost an order of magnitude more than everybody else's redundant servers.
Re:There are 12 others - pick one. (Score:1, Insightful)
> What is the cost of a missed email?

One phone call.

> How about a thousand of them?

Less spam. (Seriously, can you think of anyone else who needs to send 1,000 mails in one hour?)

> If one non-spam email in 10,000 contains an urgent piece of information, what's the cost of missing an hour's worth?

1. SMTP does not guarantee timely delivery.
2. Sending an email now gives you no assurance of when it will be read.
Anyone using email for time-critical information transport runs a risk -- and a foreseeable one.

> How about purchases?

"Oh noes! Amazon is offline! Where do I shop now?!?!?"

> What if every internet-based store, currency trading mechanism, bond exchange and commodity exchange lost an hour's income?

What if? Seriously: what if? Some money-making companies don't make money for one hour. Oh dear. Oh wait, I'm not paid by money-making companies to care about them, so I don't. (Besides, if all of them are offline, no one can turn to the competition -- I think the effects would be less severe than if half of them were offline.)

> How much is the cost of 1 billion missed "I know you're there and I support you" connections between friends?

1. For ONE FRIGGIN' HOUR?!?! Get real. Seriously. I've got some great friends, but sometimes whole nights go by without them telling me they're there for me, or vice versa. We tend to respect each other's sleep like that.
2. If the Internet is the only connection you have to your friends, are they really your friends?

> How much is the cost of 1,000 drivers that can't contact a tow truck?

Because the Internet is down? How about they phone? Or, you know, talk to people? Besides that: the cost is 1,000 drivers having to wait one hour, which is nowhere near the end of the world.

> 100,000 telecommuters that can't sign in to work?

Woohoo! The first of your arguments that I feel is somewhat legitimate. I think they would more or less do the same things their in-office colleagues would be doing at the same time. (I doubt most teleworkers need a permanent Internet connection to do any work.)

> 1,000,000 phone calls that don't happen?

They exit Skype and use the damn phone?

> And 10,000,000 attempts to do some bit of research that fail?

For one hour. Seriously, whenever I run into some problem with Ubuntu, I either find a solution in 5 minutes or I can easily fail at research for an hour. No sweat; I'll pick it up later with a fresh mind. That usually does it.

> A million businesses that can't get in touch with a million others?

"Oh noes!! No Internet, that means we can't contact anyone!!!" ... unless they have a phone, a fax, a physical location, or the post. You know, there was life before the Internet too, and it worked pretty nicely for a while. Even the most conservative estimates place that at roughly 6,000 years.
Re:Lowest bidder (Score:3, Insightful)
> This is what happens when you give contracts to the lowest bidder.
Because they'd obviously get better results by giving them to the highest bidder...
Try to get your head around concepts like "requirements", "specifications", and "lowest qualified bid". Not only do you not get paid if you don't do the job you agreed to do; you may even have to pay the extra cost of having someone else do it over.
Re:Army Intelligence? (Score:3, Insightful)
Er... the Navy outsourced its network to HP. In fact, to get out of the agreement, they are having to pay just to receive information about their own network configuration.
Re:So the Internet worked as it should... (Score:3, Insightful)
We've all made Cat5 links longer than 200 meters that work perfectly fine. Granted, perfect reliability is something else, but it's a fine backup link in a datacenter that charges an arm and a leg for fiber connections and less than 10% of that price for copper. I've even been known to stick such a link in a 10G copper interface card to see if it would work (it didn't). But I've had reliable gigabit copper links over 250 meters operational for years. It helps a lot if they're the only Ethernet link in a metal cable tray.
And the opposite as well. Ever had an Ethernet link inside a bundle of VDSL links? The link was barely 30 meters, but the error counters climbed faster than the traffic counters. And the link stayed up, so the routing protocol saw no need to reroute. Now that was a bitch to deal with, especially since we couldn't replace the cable with Cat6.
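That failure mode (link up, frames corrupted, routing oblivious) is exactly what you have to detect yourself. A rough sketch of the idea on Linux, polling the sysfs error counters; the interface name and threshold here are made up for illustration:

import time

IFACE = "eth0"  # hypothetical interface name
PATH = f"/sys/class/net/{IFACE}/statistics/rx_errors"
THRESHOLD = 100  # errors per interval we treat as unusable (an assumption)

def read_errors() -> int:
    with open(PATH) as f:
        return int(f.read())

prev = read_errors()
while True:
    time.sleep(10)
    cur = read_errors()
    if cur - prev > THRESHOLD:
        # In practice you'd raise the link's routing metric or shut the
        # port (e.g. `ip link set eth0 down`) so traffic actually reroutes.
        print(f"{IFACE}: {cur - prev} rx errors in 10s; treat link as dead")
    prev = cur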
If your network design can't deal with signal loss on individual links, especially when you know beforehand that said links are located in a warzone, you have bigger problems than theoretical maximum link distances. And in general: hardware WILL fail, so prepare for failure instead of investing untold resources in preventing it.