The Internet

Car Hits Utility Pole, Takes Out EC2 Datacenter

1sockchuck writes "An Amazon cloud computing data center lost power Tuesday when a vehicle struck a nearby utility pole. When utility power was lost, a transfer switch in the data center failed to properly manage the shift to backup power. Amazon said a "small number" of EC2 customers lost service for about an hour, but the downtime followed three power outages last week at data centers supporting EC2 customers. Tuesday's incident is reminiscent of a 2007 outage at a Dallas data center when a truck crash took out a power transformer."

  • by realmolo ( 574068 ) on Thursday May 13, 2010 @11:06PM (#32203418)

    Seriously, Amazon screwed up in a fairly major way with this.

    What's more upsetting is this: If Amazon doesn't have working disaster recovery, what do other websites/companies have?

    Answer: Nothing. You'd be surprised how many US small-to-medium-sized businesses are one fire/tornado/earthquake/hurricane away from bankruptcy. I'd bet it's over 80% of them.

  • UPS's (Score:5, Interesting)

    by MichaelSmith ( 789609 ) on Thursday May 13, 2010 @11:14PM (#32203446) Homepage Journal

    The classic in my last job was when we had a security contractor in on the weekend hooking something up and he looped off a hot breaker in the computer room, slipped, and shorted the white phase to ground. This blew the 100A fuses both before and after the UPS and somehow caused the generator set to fault so that while we had power from the batteries, that was all we had.

    It also blew the power supply on an alphaserver and put a nice burn mark in the breaker panel. So the UPS guy comes out and he doesn't have two of the right sort of fuse. Fortunately 100A fuses are just strips of steel with two holes drilled in them and he had a file, and a drill, etc. So we got going in the end.

  • by KGBear ( 71109 ) on Thursday May 13, 2010 @11:19PM (#32203470) Homepage
    I expect this is just a scaled-up version of the problems I deal with every day, and I'm sure I'm not the only one. Users have grown so dependent on system services, and management has grown so removed from the trenches, that completely unreasonable expectations are the norm. Where I work, for instance, it's almost impossible to even *test* backup power and failover mechanisms and procedures, because users consider even minor outages in the middle of the night unacceptable, and managers either don't have the clout or don't understand the problem well enough to put limits on such expectations. As a result, often the only tests such systems get happen during real emergencies, when they are actually needed. I don't know how, but I feel we should start educating our users and managers better, not to mention being realistic about risks and expectations.
  • by pavera ( 320634 ) on Thursday May 13, 2010 @11:30PM (#32203528) Homepage Journal

    The DC that my company colos a few racks in had this same thing happen about a year ago (not a car crash, just a transformer blew out). But the transfer switch failed to switch to backup power, and the DC lost power for 3 hours.

    What is up with these transfer switches? Do the DCs just not test them? Or is it the sudden loss of power that freaks them out vs a controlled "ok we're cutting to backup power now" that would occur during a test? Someone with more knowledge of DC power systems might enlighten me...

  • by GaryOlson ( 737642 ) <.gro.nosloyrag. .ta. .todhsals.> on Thursday May 13, 2010 @11:52PM (#32203656) Journal
    Most Americans these days are over-pampered, self-absorbed malcontents. If the poles are not out in front where crews can service them without going onto the property -- or even using predefined rights of way -- too many people complain or sue for negligible property damage.

    Where I grew up, the power poles ran on the property lines behind and between the houses. Once, lightning took out the transformer on the power pole [great light show and high-speed spark ejection]; and people were willing to take down the fence, put the dogs in a kennel, and remove landscaping which had encroached on the power pole so the crew could replace the transformer and other service. Today, I expect everyone shows up with a digital camera to document "property damage" and file for compensation for landscaping which has illegally encroached on the equipment.

    In many places, various issues prevent burying the power cable: a high water table, daytime temperatures that never cool the ground (or the power cables), or even fire ants.
  • by Coopjust ( 872796 ) on Friday May 14, 2010 @12:26AM (#32203810)
    Often, mods will give a funny post "insightful" instead of "funny" because it gives the user positive karma (whereas funny does not affect karma). Not a use intended by CmdrTaco, I'd imagine, but it's a common practice.
  • by mcrbids ( 148650 ) on Friday May 14, 2010 @01:26AM (#32204070) Journal

    For years, I co-located at the top-rated 365 Main data center in San Francisco, CA [365main.com], until they had a power failure a few years ago. Despite having 5x-redundant power that was regularly tested, it apparently wasn't tested against a *brownout*. So when Pacific Gas and Electric had a brownout, it failed to trigger 2 of the 5 redundant generators. Unfortunately, the system was only designed so that any *one* of the redundant generators could fail without causing a problem.

    So power was in a brownout condition: the voltage dropped from the usual 120 volts or so down to 90. Many power supplies have brownout detectors and will shut off. Many did, until the total system load dropped to the point where normal power was restored. All of this happened within a few seconds, and the brownout was fixed in just a few minutes. But at the end of it all, perhaps 20% of all the systems in the building had shut down. The "24x7 hot hands" were beyond swamped. Techies all around the San Francisco area were pulled from whatever they were doing to converge on downtown SF. And I, a 4-hour drive away, managed to restore our public-facing services on the one server (of four) I had that survived the voltage spikes before driving in. (Alas, my servers had the "higher end" power supplies with brownout detection.)

    And so it was a long chain of near-successes: well-tested, high-quality equipment that failed in sequence because real life didn't happen to behave the way the frequently performed tests did.

    When I did finally arrive, the normally quiet, meticulously clean facility was a shambles. Bits of network cable, boxes of freshly purchased computer equipment, pizza boxes, and other refuse littered every corner. The aisles were crowded with techies performing disk checks and chattering tersely on cell phones. It was other-worldly.

    All of my systems came up normally; simply pushing the power switch and letting the fsck run did the trick. We were fully back up, with all tests performed and the system configuration returned to normal, in about an hour.

    Upon reflection, I realized that even though I had some down time, I was really in a pretty good position:

    1) I had backup hosting elsewhere, with a backup from the previous night. I could have switched over, but decided not to because we had current data on one system and we figured it was better not to have anybody lose any data than to have everybody lose the morning's work.

    2) I had good quality equipment; the fact that none of my equipment was damaged from the event may have been partly due to the brownout detection in the power supplies of my servers.

    3) At no point did I have fewer than two backups off site, in two different locations, so I had multiple recent data snapshots off site (a minimal sketch of such a check follows below). As long as the daisy chains of failure can be, it would be freakishly rare to have all of these points go down at once.

    4) Even with 75% of my hosting capacity taken offline, we were able to maintain uptime throughout all this because our configuration has full redundancy within our cluster - everything is stored in at least 2 places onsite.

    Moral of the story? Never, EVER have all your eggs in one basket.
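    A minimal sketch, in Python, of the check point 3 implies: confirm each night that at least two off-site locations hold a recent snapshot. The mount points and the 36-hour freshness window are hypothetical.

        import datetime
        import os

        # Hypothetical mount points for the two off-site backup locations.
        OFFSITE_DIRS = ["/mnt/offsite-east/backups", "/mnt/offsite-west/backups"]
        MAX_AGE = datetime.timedelta(hours=36)  # what counts as "recent"; tune to taste

        def newest_backup_age(path):
            """Age of the most recently modified entry under path, or None if empty."""
            entries = [os.path.join(path, name) for name in os.listdir(path)]
            if not entries:
                return None
            newest = max(os.path.getmtime(entry) for entry in entries)
            return datetime.datetime.now() - datetime.datetime.fromtimestamp(newest)

        fresh = [d for d in OFFSITE_DIRS
                 if (age := newest_backup_age(d)) is not None and age < MAX_AGE]
        if len(fresh) < 2:
            raise SystemExit("ALERT: fewer than two off-site locations hold a fresh backup")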

  • by Technonotice_Dom ( 686940 ) on Friday May 14, 2010 @02:27AM (#32204358)

    I don't get why you wouldn't have dual-redundant power supplies on all devices (routers, switches, servers), .... [snip]

    Seems like a design flaw here and/or someone was just being cheap.

    It would be the latter. AWS EC2 instances aren't marketed or intended to be individually highly available. They're designed to be cheap, and Amazon do say instances will fail. They provide a good number of data centres and specifically say that systems within the same zone may fail; different data centres are entirely independent. They provide a number of extra services that can also tolerate the loss of one data centre.

    Anybody who believes they're getting highly available instances hasn't done a basic level of research about the platform they're using and deserves to be bitten by this. Anybody who does know the basics of the platform will know the risks and will be able to recover from a failure, possibly even seamlessly.
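    For illustration, a minimal sketch of spreading instances across availability zones with the boto3 EC2 client; the region, zone names, AMI ID, and instance type are placeholders, not anything tied to this incident.

        import boto3

        # One instance per availability zone, so losing a single zone
        # (i.e. one data centre) leaves the service running elsewhere.
        ec2 = boto3.client("ec2", region_name="us-east-1")

        for zone in ["us-east-1a", "us-east-1b"]:
            ec2.run_instances(
                ImageId="ami-12345678",   # placeholder AMI
                InstanceType="t3.micro",  # placeholder instance type
                MinCount=1,
                MaxCount=1,
                Placement={"AvailabilityZone": zone},
            )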

  • by Renraku ( 518261 ) on Friday May 14, 2010 @02:27AM (#32204362) Homepage

    It's not really the data center power system that's the issue.

    The people are the issue.

    Example: You're the lead technician for a new data center. You request backup power systems be written into the budget, and are granted your wish. You install the backup power systems and then ask to test them. Like a good manager, your boss asks you what that will involve. You say that it'll involve testing the components one by one, which he nods in agreement with. However, when you get to the 'throw the main breaker and see if it works' part and he realizes that this one test might make them less than 99.99999% reliable if it fails, he disagrees and won't approve the testing.

    I can see where they're coming from here. They don't want downtime. They just aren't thinking far enough ahead: ten minutes of test downtime now, or hours of unmitigated downtime later. I abso-fucking-lutely guarantee you that the technicians will be blamed, not management.

  • Who Cares? (Score:2, Interesting)

    by Aerosiecki ( 147637 ) on Friday May 14, 2010 @02:34AM (#32204400)

    Doesn't EC2 let you request hosts in any of several particular datacentres (which they call "availability zones") just so you can plan around such location-specific catastrophes? No matter how good the redundant systems are, some day a meteor will hit one datacentre and you'll be S.O.L. no matter what if you put all your proverbial eggs in that basket.

    Only a fool cares about a single-datacentre outage. This is why it's called "*distributed*-systems engineering", folks.

  • by thegarbz ( 1787294 ) on Friday May 14, 2010 @04:14AM (#32204754)

    It is exactly that level of understanding that causes most outages (and even failures of safety-critical systems). There is only one part of a UPS that is truly uninterruptible, and that is the voltage at the battery. Between the voltage at the battery and the computer you have cables, electronics, control systems, charging circuits, and inverters. Beyond that, if it's an industrial-sized UPS, there'll be circuit breakers, distribution boards, and other such equipment, each adding failure modes to the "uninterruptible" supply.

    I'll give you an example of what went wrong at my work (a large petrochemical plant in Australia). Like a lot of plants, most pumps are redundant and fed from two different substations. That doesn't prevent loss of power by itself, but the control circuits in those substations run from 24V. That 24V comes from two different cross-linked UPS units (cross-linked meaning that both redundant boards are fed from both redundant UPSes). So in theory not only is there a backup at the plant, backup substations, and backup UPSes, but any component can fail and still keep upstream and downstream systems redundant.

    Anyway, we had to take down one of the UPSes for maintenance, following a procedure we'd used plenty of times before. The procedure is simple: 1. check the current in the circuit breakers so that the redundant breakers can withstand the load; 2. close the circuit breakers upstream of the UPS that is being shut down; 3. close the main isolator to the UPS. So that's exactly what we did, and when we isolated one of the UPSes, the upstream circuit breaker tripped from the OTHER UPS, and control power was lost to half the plant, as it was now effectively isolated not only from battery backup but from the main 24V supply.

    So after lots of head scratching we did some thermal imaging of the installation. The circuit breaker which tripped in sympathy when we took down its counterpart was running significantly hotter than the main one. The cause was determined to be a loose wire. So even though the load through the circuit breaker was much less than half of the total load, when we took down the redundant supply and the circuit breaker got loaded, the temperature pushed it over the edge.

    A carefully designed, dually redundant UPS system providing 4 sources of power failed when we took down 2 of them in a careful way, due to a loose wire in a circuit breaker. A UPS is never truly uninterruptible, and even internal batteries in servers would be protected by a fuse of some kind to ensure the equipment goes down but ultimately survives a fault.

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday May 14, 2010 @09:00AM (#32206068) Homepage Journal

    2) I had good quality equipment; the fact that none of my equipment was damaged from the event may have been partly due to the brownout detection in the power supplies of my servers.

    Having had the spade connector that carries power from the jack at the back of a machine in a 1000W power supply fail and apparently (judging from the pattern of smoke in its case) actually emit flames, I can say that brownout protection is definitely worth some money.

  • by Jeffrey Baker ( 6191 ) on Friday May 14, 2010 @12:14PM (#32208292)

    The answer is "yes". Transfer switches often fail and are rarely tested. This is also true of other power equipment: if it's rarely used, the probability of it working in an emergency is somewhat low.

    However, in this case the transfer switch worked fine, but it had been misconfigured by Amazon technicians. According to their status email from yesterday (posted in their AWS status RSS feed) the outage was a result of the fact that one transfer switch had not been loaded with the same configuration as the rest of the transfer switches in the datacenter. The "failed" switch performed as configured and powered down.
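    A minimal sketch of a drift check that catches this class of problem: fingerprint each device's exported configuration and flag any unit that differs from the fleet majority. The switch-configs directory and per-switch .cfg exports are hypothetical; a real transfer switch would need a vendor-specific way to dump its settings.

        import hashlib
        from collections import Counter
        from pathlib import Path

        def config_hash(path):
            """Fingerprint of one device's exported configuration file."""
            return hashlib.sha256(Path(path).read_bytes()).hexdigest()

        # Hypothetical directory holding one exported .cfg per transfer switch.
        configs = {p.name: config_hash(p) for p in Path("switch-configs").glob("*.cfg")}
        if not configs:
            raise SystemExit("no exported configs found")

        # Treat the most common configuration as the fleet baseline.
        baseline, _ = Counter(configs.values()).most_common(1)[0]
        for name, digest in configs.items():
            if digest != baseline:
                print(f"{name}: configuration differs from the fleet baseline -- review it")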
