Seattle Data Center Outage Disrupts E-Commerce
1sockchuck writes "A major power outage at Seattle telecom hub Fisher Plaza has knocked payment processing provider Authorize.net offline for hours, leaving thousands of web sites unable to take credit cards for online sales. The Authorize site is still down, but its Twitter account attributes the outage to a fire, while AdHost calls it a 'significant power event.' Authorize.net is said to be trying to resume processing from a backup data center, but there's no clear ETA on when Fisher Plaza will have power again."
Also affecting Bing.com (Score:2, Interesting)
Bing Travel servers are located in the same server hall. More info: http://isc.sans.org/diary.html?storyid=6721
The best line from the SANS ISC (Score:4, Interesting)
The media are also following the story; KOMO, a local station, was knocked offline but is broadcasting from a backup site.
Way to go, guys! At least two national, and maybe even international, ICT companies on whom numerous affiliates depend fail to provide an adequate backup facility and continuity plan, yet the local AM radio station manages to pull it off. I'm guessing some heads are gonna roll after the holiday weekend...
Re:No Backup?? (Score:4, Interesting)
I know redundancy and such is better on business stuff, but this kind of reminds me of how consumer lines have lots of single points of failure as well. There was a day when the DHCP service of TeliaSonera, a large Nordic ISP, stopped working, leaving a third of the whole country's residents without internet access. It turned out there was a hardware failure on the DHCP server, which leads me to believe they actually depend on just one server to handle all the DHCP requests coming from customers. They did fix it in a few hours, but it was still unavailable for the rest of the day because hundreds of thousands of computers were all trying to get an IP address from it at once. That said, I remember it happening only once, but it still seems stupid.
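For what it's worth, ISC dhcpd has had a two-server failover protocol for years, so a single-box DHCP deployment is a choice, not a necessity. A minimal sketch of a primary-side config (the peer name, addresses, and ranges below are made up for illustration, not anything from TeliaSonera):

```
# /etc/dhcp/dhcpd.conf on the primary server (hypothetical addresses)
failover peer "dhcp-failover" {
  primary;
  address 10.0.0.1;           # this server
  port 647;
  peer address 10.0.0.2;      # the secondary
  peer port 647;
  max-response-delay 60;
  max-unacked-updates 10;
  mclt 3600;                  # max client lead time, seconds
  split 128;                  # share the load roughly 50/50
}

subnet 10.0.0.0 netmask 255.255.255.0 {
  pool {
    failover peer "dhcp-failover";
    range 10.0.0.100 10.0.0.200;
  }
}
```

The secondary carries a mirror-image config with `secondary;` instead of `primary;`; each server can then keep handing out and renewing leases while its peer is down.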
Re:Heh (Score:3, Interesting)
There was a failure in the main/generator transfer switch, which resulted in a fire. The sprinkler system activated.
Where I work, the D.C. is in a sub-level basement. One day a few years ago, a dim-wit plumber was brazing a pipe with a propane torch, and swung it too close to a sprinkler head.
Sprinkler went off and water did what it does: flow downhill, eventually pouring into the D.C., right onto the SAN storing "my" database...
We were down for a few days. People couldn't access the web site or IVR, but fortunately it happened over a weekend, so the store-front operations weren't totally affected. Also, the system is part of an "asynchronously buffered" stove pipe, so operations "in front" of the downed machine just kept on processing.
Re:Authorize.Net did have a backup (Score:4, Interesting)
Sometimes folks set up a redundant system and forget to make one key piece redundant.
Example: a server rack with two UPS systems. Each server has two power cords, one going to each UPS, but the switch everything is plugged into only has one power input, so it's connected to UPS A.
Power blinks and UPS A decides to shit itself. Rack goes down, even though all the machines are up, because the network switch loses power.
Solution? An auto-switching power Y-cable (an automatic transfer switch) with two inputs and one output. But 80% of people will be lazy and not bother. Oops.
Happens all the time; I see it everywhere.
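The rack story above is easy to put numbers on. A back-of-the-envelope sketch, assuming a made-up 99% availability per UPS and independent failures (none of these figures come from the story):

```python
# Back-of-the-envelope availability math for the dual-UPS rack.
# The 99%-per-UPS figure is an assumption for illustration only.

def parallel(*components):
    """Availability of redundant components: the group fails only if all fail."""
    p_fail = 1.0
    for a in components:
        p_fail *= 1.0 - a
    return 1.0 - p_fail

ups_a = ups_b = 0.99  # assume each UPS is independently up 99% of the time

# Dual-corded servers stay powered unless BOTH UPSes fail:
servers = parallel(ups_a, ups_b)

# A single-corded switch on UPS A takes the rack down whenever
# UPS A fails, so the rack is no better than one UPS alone:
rack_no_ats = ups_a

# With an auto-switching Y-cable (ATS), the switch also rides both feeds:
rack_with_ats = parallel(ups_a, ups_b)

print(f"servers alone:  {servers:.4f}")
print(f"rack, no ATS:   {rack_no_ats:.4f}")
print(f"rack with ATS:  {rack_with_ats:.4f}")
```

The second UPS buys the servers two extra nines, and the forgotten single-corded switch throws both of them away.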
Huge portable generator arrives at Fisher Plaza (Score:2, Interesting)