Explosion At ThePlanet Datacenter Drops 9,000 Servers
An anonymous reader writes "Customers hosting with ThePlanet, a major Texas hosting provider, are going through some tough times. Yesterday evening at 5:45 pm local time an electrical short caused a fire and explosion in the power room, knocking out walls and taking the entire facility offline. No one was hurt and no servers were damaged. Estimates suggest 9,000 servers are offline, affecting 7,500 customers, with ETAs for repair of at least 24 hours from onset. While they claim redundant power, because of the nature of the problem they had to go completely dark. This goes to show that no matter how much planning you do, Murphy's Law still applies." Here's a Coral CDN link to ThePlanet's forum where staff are posting updates on the outage. At this writing almost 2,400 people are trying to read it.
Kudos to their support team (Score:5, Insightful)
Photos or information on building? (Score:3, Insightful)
Being in the power systems engineering biz, I'd be interested in some more information on the type of building (age, original occupancy type, etc.) involved.
To date, I've seen a number of data center power problems, from fires to "isolated, dual-source" systems that turned out not to be. It raises the question of how well the engineering was done for the original facility, or the refit of an existing one. Or whether proper maintenance was carried out.
From TFA:
Re:Server/customer ratio? (Score:3, Insightful)
Explosion? (Score:4, Insightful)
The only thing that I can imagine that could've caused an explosion in a datacenter is a battery bank (the data centers I've been in didn't have any large A/C transformers inside). And even then, I thought that the NEC had some fairly strict codes about firewalls, explosion-proof vaults and the like.
I just find it curious, since it's not unthinkable that rechargeable batteries might explode.
mr c
Re:Recovery costs (Score:5, Insightful)
Also, only the electrical equipment (and structural stuff) was damaged - networking and customer servers are intact (but without power, obviously).
Re:What does a server room (Score:4, Insightful)
Re:Server/customer ratio? (Score:5, Insightful)
5 servers, 5 cities, 5 providers (Score:2, Insightful)
I feel bad for their techs, but I have no sympathy for anyone who's single-sourced; they should have propagated to their offsite secondary.
Which they'll be buying tomorrow, I'm sure.
Re:Server/customer ratio? (Score:5, Insightful)
Re:Server/customer ratio? (Score:3, Insightful)
Re:Kudos to their support team (Score:5, Insightful)
a bit wrong (Score:3, Insightful)
Re:Explosion? (Score:4, Insightful)
At one place I worked, during every lightning storm my boss would rush to move his shitty old truck underneath the can on the power pole, hoping the thing would blow and burn it so he could get insurance to replace it.
Re:More planning could have prevented this (Score:5, Insightful)
Re:Kudos to their support team (Score:4, Insightful)
It does not sound like the type of company that thinks of its customers as an enemy, as your message implies.
Re:Server/customer ratio? (Score:2, Insightful)
In fact I find it odd that this facility has so many individual customers. Seems like a lot of administrative overhead... If I were running that DC, I'd much rather lease out full or half racks than individual units, and let those people sublet to the small fries.
That's how most of the big hosting companies operate. They don't own their own datacenters, they just lease a cage or two, cram it full of gear and sell you that godawful oversold web space you love to hate. That's also why colocating a single server can be so goddamned expensive - datacenters set per-unit pricing high to scare away the Joe Blows, and the resellers make a lot more money selling crap hosting than subletting their precious space. This is especially true in the USA/Canada.
Re:Server/customer ratio? (Score:3, Insightful)
_The_ Power Room? (Score:3, Insightful)
How the hell could they claim redundant power with only one power room?
Re:More planning could have prevented this (Score:3, Insightful)
In related news, I was wondering why I wasn't getting much spam today and my sites didn't have strange spiders hitting them.
Re:Photos or information on building? (Score:3, Insightful)
The other possibility was that a tie was closed and the breakers over-dutied and could not clear the fault.
Odd that nobody was hurt though; spontaneous shorts are very rare-- most involve either switching or work in live boards, either of which would kill someone.
Re:Kudos to their support team (Score:1, Insightful)
Re:Who's hosted on ThePlanet? (Score:2, Insightful)
I'm a customer in that DC, and I'm a firefighter (Score:5, Insightful)
My thoughts as a customer of theirs:
1. Good updates. Not as frequent or clear as I'd like, but mostly they didn't have much to add.
2. Anyone bitching about the thousands of dollars per hour they're losing has no credibility with me. If your junk is that important, your hot standby server should be in another data center.
3. This is a very rare event, and I will not be pulling out of what has been an excellent relationship so far with them.
4. I am adding a failover server in another data center (their Dallas facility). I'd planned this already but got caught being too slow this time.
5. Because of the incident, I will probably make the new Dallas server the primary and the existing Houston one the backup. This is because I think there will be long term stability issues in this Houston data center for months to come. I know what concrete, drywall, and fire extinguisher dust does to servers. I also know they'll have a lot of work in reconstruction ahead, and that can lead to other issues.
For now, I'll wait it out. I've heard of this cool place called "outside". Maybe I'll check it out.
"Murphy's Law" != "Shit Happens" (Score:3, Insightful)
The lesson you should be taking from Murphy's Law is not "Shit Happens". The lesson you should be taking is that you can't assume that an unlikely problem (or one you can con yourself into thinking unlikely) is one you can ignore. It's only after you've prepared for every reasonable contingency that you're allowed to say "Shit Happens".
UMM.. USE STATIC PAGE?? (Score:5, Insightful)
Re:Service Sucked for those affected (Score:3, Insightful)
If your business loses $1000/minute while it's offline, get a quote for insurance that pays out $1000/minute while you're offline. Alternatively if you're happy self insuring take the loss when it happens.
It's almost as if people believe that SLAs are a form of service guarantee instead of a free very bad insurance deal.
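The parent's point can be made concrete with a back-of-envelope comparison. All the figures below are illustrative assumptions (the $1000/minute is the parent's hypothetical; the hosting fee and outage length are mine), sketched in Python:

```python
# Illustrative numbers only: why an SLA credit is a very bad insurance deal.
LOSS_PER_MINUTE = 1000      # hypothetical business loss while offline ($)
OUTAGE_MINUTES = 24 * 60    # a day-long outage, roughly this incident's scale
MONTHLY_HOSTING_FEE = 300   # assumed monthly server lease cost ($)

actual_loss = LOSS_PER_MINUTE * OUTAGE_MINUTES   # what the outage costs you
# A typical SLA refunds, at most, the service fees for the affected period.
max_sla_credit = MONTHLY_HOSTING_FEE

print(f"Actual loss:        ${actual_loss:,}")        # $1,440,000
print(f"Max SLA credit:     ${max_sla_credit:,}")     # $300
print(f"Uncovered exposure: ${actual_loss - max_sla_credit:,}")
```

The gap between those two numbers is exactly what real business-interruption insurance (or self-insurance) has to cover; the SLA never will.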
Re:Kudo to their support team (Score:1, Insightful)
Their website had no indication of the fact that there was a problem, and no one was responding when I called their 1-866 customer service number. After waiting for half an hour, their customer service number was disconnected, and you couldn't even call the number any more.
They're better now, but the ETA they gave for having things working by "mid-afternoon" Sunday looks unlikely now, and in the meantime, my business is hemorrhaging users...
They've had serious problems since I joined a few months ago, and always with absolutely no communication (I have to call their customer service to learn that all packets to their data center are being lost, etc). I am definitely backing up my stuff and switching providers the moment they come back online.
Re:_The_ Power Room? (Score:2, Insightful)
1700 test not necessarily a failure (Score:3, Insightful)
The initial draw from each new bank of gear to be powered up will be very high, so they will need to go slowly.
The battery systems (be they on each rack or in large banks serving whole blocks) will try to charge all at once. If they're not careful, that'll heat those new power lines up like the filaments in a toaster. Remember, the battery plan they have was built with the idea that they'd be used very briefly during transition to generator power -- not drained down all at once.
Only once all the switches and routing gear is back up can they start updating the network paths (do they use BGP for this -- that's not my area of expertise) so that peering data starts flowing.
Only once the network is all up and stable (no small task on a site with dozens of high-end peering points) can they even start bringing up banks of servers.
It's also probable that each bank of servers will need its own new power lines (and eventually replaced conduit) in the distribution center that was destroyed.
Bank by bank they'll have to bring up all these servers, each of which will draw its maximum load during boot as disks are scanned and checked.
Most of these servers probably haven't been shut down in months or years. Some drives may not spin up: a tired motor can keep running fine, but spinning up from cold may be too much for it now. Other servers may have boot configuration problems that went undiscovered because the machines have been running without a reboot for so long -- the Linux ones, anyway.
This isn't something out of Young Frankenstein where they'll yell across the room "throw za main svitch!" and watch the lights dim briefly while 9000 servers boot up with the deafening sound of system beeps. If they did try such a thing -- as if such a thing were possible -- it would immediately blow at least another transformer, if not more.
Think about it. 9000 servers @ an average of what, 300 watts, plus the networking gear, plus the air conditioning, plus charging all those batteries....you're talking megawatts.
Without a Mr. Fusion or Harry Mudd stumbling in with some chicks wearing dilithium crystal jewelry, this is going to take a while.
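The "megawatts" claim above checks out as rough arithmetic. Here's the estimate sketched in Python; the 300 W per server is the parent's figure, while the 1.8x overhead factor for cooling, networking, and battery charging is my assumption, not a measured number:

```python
# Back-of-envelope restoration load estimate. Figures are illustrative
# assumptions, not ThePlanet's actual numbers.
SERVERS = 9000
WATTS_PER_SERVER = 300   # parent's assumed average draw per box
OVERHEAD_FACTOR = 1.8    # assumed multiplier for cooling, network gear,
                         # and recharging drained battery banks

it_load_w = SERVERS * WATTS_PER_SERVER      # raw server load
total_load_w = it_load_w * OVERHEAD_FACTOR  # facility load during recovery

print(f"Server load: {it_load_w / 1e6:.1f} MW")    # 2.7 MW
print(f"Total load:  {total_load_w / 1e6:.2f} MW") # 4.86 MW
```

And that's steady-state; inrush during staggered boot-up, as the parent notes, peaks well above it.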
Re:5 servers, 5 cities, 5 providers (Score:4, Insightful)
Most people own a single server that they make backups of in case it crashes, OR have two servers in the same datacenter in case one fails.
I don't know how you can easily do offsite switchover without a huge infrastructure to support it, which most people don't have the time or money to build.
Get off your high horse.
I'm a firefighter AND a geek. You, not so much. (Score:5, Insightful)
I have a crew to protect. In this case, I'm going into an extremely hazardous environment. There has already been one explosion. I don't know what I'm going to see when I get there, but I do know that this place is wall-to-wall danger. Wires everywhere to get tangled in when it's dark and I'm crawling through the smoke. Huge amounts of current. Toxic batteries everywhere that may or may not be stable. Wiring that may or may not be exposed.
If it's me in charge, and it's my crew making entry, the power is going off. It's getting a lock-out tag on it. If you won't turn it off, I will. If I do it, you won't be turning it on so easily. If need be, I will have the police haul you away in cuffs if you try to stop me.
My job, as a firefighter -- as a fire officer -- is to ensure the safety of the general public, of my crew, and then if possible of the property.
NOW -- as a network guy and software developer -- I can say that if you're too short-sighted or cheap to spring for a secondary DNS server at another facility, or if your servers are so critical to your livelihood that losing them for a couple of days will kill you but you haven't bothered to go with hot spares at another data center, then you, sir, are an idiot.
At any data center - anywhere - anything can happen at any time. The f'ing ground could open up and swallow your data center. Terrorists could target it because the guy in the rack next to yours is posting cartoon photos of their most sacred religious icons. Monkeys could fly out of the site admin's [nose] and shut down all the servers. Whatever. If it's critical, you have off-site failover. If not, you're not very good at what you do.
End of rant.
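For anyone who thinks off-site failover requires huge infrastructure, the detection half at least is not much code. A minimal sketch in Python, with hypothetical hostnames (primary.example.com and standby.example.com are placeholders, not real services; the idea is just "check the primary, prefer the standby site if it's down"):

```python
# Minimal failover-selection sketch. Hostnames are hypothetical placeholders;
# in practice the standby would live in a different datacenter entirely.
import socket

PRIMARY = ("primary.example.com", 80)   # main facility
STANDBY = ("standby.example.com", 80)   # different facility

def is_up(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_endpoint():
    """Prefer the primary; fall back to the standby site when it's down."""
    return PRIMARY if is_up(*PRIMARY) else STANDBY
```

A real setup would add DNS updates or a load balancer in front, plus data replication between the sites -- but a check like this, on a cron job, is already more than the single-sourced folks had.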
Re:_The_ Power Room? (Score:3, Insightful)
Re:I'm a firefighter AND a geek. You, not so much. (Score:3, Insightful)
Our S.O.G. (standard operating guidelines) are actually very specific about risk.
We will risk our lives to save a human life.
We will take reasonable risk to save the lives of pets and livestock.
We will take minimal risks to save property.
Sorry, but your building isn't worth the risk of my crew. That's reality.
Don't you DARE tell me what is and isn't bravery or cowardly until you put 50 pounds of gear on and crawl into a pitch black house that's burning over your head.
Don't you DARE tell me that you think you understand the difference between saving the blonde girl and saving your computer server.
This isn't TV World. This is the real world. Fire on TV doesn't look like real fire. You know why? Because a real house on fire doesn't look like anything but pitch black and that makes for lousy TV.
Get over yourself and go volunteer at your local fire department. 86% of the men and women in this country who will risk their lives for yours are volunteers. We could use your help if you have the guts for it. We'll teach you what you need to know -- and we'll keep you as safe as we can so you can go home to your family when it's done.
Your examples are stupid and insulting to the 800,000 brave men and women who volunteer to risk death in the most painful way possible to save your sorry butt.
Yep (Score:4, Insightful)
You are also right on in terms of type of failure. I've been in the whole computer support business for quite a while now, and I have a lot of friends who do the same thing. I don't know that I could count the number of servers that I've seen die. I wouldn't call it a common occurrence, but it happens often enough that it is a real concern, and thus important servers tend to have backups. However, I've never heard of a data centre being taken out (I mean from someone I know personally; I've seen it on the news). Even when a UPS blew up in the university's main data centre, the centre didn't end up having to go down.
I'm willing to bet that if you were able to get statistics on the whole of the US, you'd find my little sample is quite representative. There'd be a lot of cases of servers dying, but very, very few of whole data centres going down, and then usually only because of things like hurricanes or the 9/11 attacks. Thus, a backup server makes sense; however, unless it is really important, a backup data centre may not.