Seattle Data Center Outage Disrupts E-Commerce 118
1sockchuck writes "A major power outage at Seattle telecom hub Fisher Plaza has knocked payment processing provider Authorize.net offline for hours, leaving thousands of web sites unable to take credit cards for online sales. The Authorize site is still down, but its Twitter account attributes the outage to a fire, while AdHost calls it a 'significant power event.' Authorize.net is said to be trying to resume processing from a backup data center, but there's no clear ETA on when Fisher Plaza will have power again."
Heh (Score:5, Insightful)
Redundancy ain't just a river in Egypt.
Re:No Backup?? (Score:1)
Re:No Backup?? (Score:5, Insightful)
When this happens in this day and age the CIO should be fired!
And if the CIO recommended a redundant D.C. but the CEO, CFO or Board rejected it as "too expensive"????
Re: (Score:3)
When this happens in this day and age the CIO should be fired! And if the CIO recommended a redundant D.C. but the CEO, CFO or Board rejected it as "too expensive"????
If that's the case, then the aforementioned officers should give up their pay to the thousands of merchants who lost their day's pay due to this problem. Yeah, like that'll happen.
Phone lines occasionally go out and that might affect local merchants, but when it's a data center that handles the livelihoods of thousands of merchants, there needs to be much greater redundancy. The businesses that are affected by this are not all huge e-tailers either. Many are just small operators trying to make a living on t
Re: (Score:2)
Google Checkout [google.com] and Amazon Payments [amazon.com] -- there's your redundancy, both with neither setup nor monthly fees.
Re: (Score:2)
Re: (Score:2)
And if the CIO recommended a redundant D.C. but the CEO, CFO or Board rejected it as "too expensive"????
Then they fire the CIO post-haste and blame the whole thing on him.
Re:No Backup?? (Score:4, Interesting)
I know redundancy and such is better on business stuff, but this kind of reminds me that customer lines have lots of single points of failure as well. There was a day when TeliaSonera's (a large Nordic ISP) DHCP stopped working, leaving a third of the whole country's residents without internet access. Turns out there was a hardware failure on the DHCP server, leading me to believe that they actually depend on just one server to handle all the DHCP requests coming from customers. They did fix it in a few hours, but it was still unavailable for the rest of the day because hundreds of thousands of computers were trying to get an IP address from it. That being said, I remember it happening only once, but it still seems stupid.
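The tail end of that outage is a classic thundering herd: hundreds of thousands of clients retrying in lockstep keep a recovered server pinned down. The usual client-side fix is randomized exponential backoff. A minimal Python sketch of the idea (the function name and constants are illustrative, not taken from any real DHCP implementation):

```python
import random

def backoff_delay(attempt, base=4.0, cap=300.0):
    """Randomized exponential backoff with "full jitter".

    Each retry waits a random time between 0 and min(cap, base * 2**attempt),
    so a crowd of clients spreads its retries out over time instead of
    hammering a recovering server in synchronized waves.
    """
    return random.uniform(0, min(cap, base * 2 ** attempt))

# attempt 0 waits up to 4s, attempt 3 up to 32s, and the wait is
# capped at 5 minutes no matter how many retries have happened.
```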
Re: (Score:2)
Re: (Score:2)
All fine and good... There is no possible way to design the entire world with redundant systems. But a company like Authorize.net doesn't have that excuse. Hoping has nothing to do with it, it's called network engineering. They should have multiple data centers located in geographically dispersed parts of the world. This is hosting 101 for any large-scale internet business. The OP is right, the CIO should be cleaning out his desk as we speak.
Re: (Score:1)
They should also fire the person who was responsible for having a sprinkler installed above a transformer. Exactly how is spraying water on a transformer going to help in a fire?
Re: (Score:3, Informative)
It's interesting how many companies have assumed redundancy in place but never take the time to do proper testing. They figure that once a disaster happens, everything will automatically work because their vendor or staff said so. To achieve true redundancy a company needs to do semi-frequent testing to ensure that everything is working properly. Authorize.net might have had what was assumed a redundant system in place, but once the disaster happened they soon realized their system wasn't designed or con
Re: (Score:2)
Re: (Score:1)
I wonder how many companies will switch to PayPal after this...
Re: (Score:2)
... putting all your eggs in one basket is a stupid idea...
....but... maybe they blew their budget on a really, really good basket?
Re: (Score:2)
maybe they blew their budget on a really, really good basket?
Mark Twain: Put all your eggs in one basket, and then guard that basket!!
Re: (Score:1)
More information is available from the NANOG (North American Network Operator's Group) list: http://comments.gmane.org/gmane.org.operators.nanog/65992 [gmane.org] .
Excerpt:
"
Fisher Plaza, a self-styled carrier hotel in Seattle, and home to multiple
datacenter and colocation providers, has had a major issue in one of its
buildings late last night, early this morning.
The best information I am aware of is that there was a failure in the
main/generator transfer switch which resulted in a fire. The sprinkler
system activated.
Oh, the humanity! (Score:1, Troll)
Oh noes! Whatever shall we do if e-commerce gets disrupted?
Because we all know that the cha-chinging of virtual cash registers is the very music of the spheres that keeps the Universe in motion.
Re: (Score:2)
Let's imagine that you're actually paying this data centre large amounts of money with the assurance that the money means 99.9% uptime. Then, maybe, it might mean something more.
If you don't give a crap about uptime, then hell, get a Google webpage or something.
Re: (Score:1)
Re: (Score:2)
Then I'd have to kill myself for having created a stupid business model.
Re:Oh, the humanity! (Score:5, Insightful)
I'm guessing that the server was probably local, possibly above the store, and might have gone fritzy in the heat.
So, real-world implications of computer failure. A server goes down, and suddenly Eric Cannot Buy Cheese ("Aaaaiiiieeee!"). Eric has hard cash, store (presumably) has cheese, but store can no longer sell cheese to Eric. Or anything else.
The shop "crashed".
Okay, so I trudged off and did my grocery shopping elsewhere, but it was a little disturbing to think that we've already gotten to the point where a server problem can stop you buying food, in a "real" shop, with "real" money.
Re: (Score:3, Insightful)
When we lose power around here (once every 6 months or so), the stores stay open. They simply don't accept debit cards (which require a connection to the bank) until the power comes back on.
Re: (Score:2)
Or (gasp!) make change without a computron! I wonder if they even train that in grocery stores anymore...scary, indeed.
Re: (Score:3, Insightful)
Or (gasp!) make change without a computron! I wonder if they even train that in grocery stores anymore...scary, indeed.
I think the bigger issue in this case would be manually looking up the price for every single item. We tend to simplify selling things manually in this way (manually processing credit card transactions, making change manually, etc.), when really the biggest problem is being without the UPC system.
Re: (Score:2)
The only thing I can add was that I was at a Home Depot once during an extended power outage. They had a generator that ran emergency lighting and the register system, but they had to wait a while for it to boot back up and re-sync to corporate or something. Anyway, during that time they had employees all over the place helping people write down the price of items they were purchasing so the checkers could ring you up manually. At the register they would write down the UPC, price and quantity to update inve
Re: (Score:2)
Way back when dinosaurs roamed the earth, we had these 'stickers' on every can that showed the price....
Others would print it in ink using a stamper.
Re: (Score:2)
When we lose power around here (once every 6 months or so), the stores stay open. They simply don't accept debit cards (which require a connection to the bank) until the power comes back on.
In other words it happens frequently enough that there is a procedure to
Re: (Score:2)
I worked in retail once, for a regional department/grocery store.
We had enough generator to maintain minimal lighting, keep cold stuff cold, and run the registers. Whenever the power was out on that end of town, people would instantly line up buying things there instead of the neighboring competitors who had no such facilities.
I'd guess that this allowed it to pay for itself.
Obligatory (Score:2)
I can't buy any cheddar here? But it's the most popular cheese in the world!
Re: (Score:3, Interesting)
there was a failure in the main/generator transfer switch which resulted in a fire. The sprinkler system activated.
Where I work, the D.C. is in a sub-level basement. One day a few years ago, a dim-wit plumber was brazing a pipe with a propane torch, and swung it too close to a sprinkler head.
Sprinkler went off and water did what it does: flow downhill, eventually pouring into the D.C., right onto the SAN storing "my" database...
We were down for a few days. People couldn't access the web site or IVR, but f
No Carr.... (Score:2)
Hmm. Power outage stops /. posts. News at 11
Backup data center was impacted too (Score:2)
Slow news day! (Score:1)
News at 11...
tomorrow.
Also affecting Bing.com (Score:2, Interesting)
Bing Travel servers are located in the same server hall. More info: http://isc.sans.org/diary.html?storyid=6721
The best line from the SANS ISC (Score:4, Interesting)
The media are also following the story, KOMO a local station was knocked offline but are broadcasting from a backup site.
Way to go guys! At least two national, and maybe even international, ICT companies on whom numerous affiliates depend fail to provide an adequate backup facility and continuity plan, yet the local AM radio station manages to pull it off. I'm guessing that some heads are gonna roll after the holiday weekend...
Re: (Score:2)
Re: (Score:2)
KOMO is one of the largest TV broadcasters in Seattle. Possibly the largest, although KING might have them beat. Yeah, they also own an AM station.
I mean, your point still kind of applies, but you might want to look up what KOMO actually is before you chime in with the podunk AM radio comments... http://en.wikipedia.org/wiki/KOMO-TV [wikipedia.org]
Failover Planning (and this broke FiOS too) (Score:5, Informative)
Apparently Verizon has a single point of failure for much of its FiOS service in the metro areas of Western Washington in this building, so those FiOS customers are offline right now as well.
Hot/Hot is always a more ideal solution than Hot/Warm or Hot/Cold for disaster recovery (and increasing equipment utilization/ROI), and this event demonstrates why.
Re: (Score:3, Informative)
Looks like from twitter comments that Verizon finished their failover since people's FiOS is coming back now.
Re: (Score:1, Informative)
Not just FIOS it looks like, I was wondering why my DSL was offline. Nearly all network services I would guess.
Re: (Score:1)
Best: Eliminate single points of failure...
Earth is a single point of failure.
Re: (Score:1)
Best: Eliminate single points of failure...
Earth is a single point of failure.
Milky Way Galaxy is a single point of failure.
Re: (Score:2)
I'll really be concerned about that in 2087 when Verizon finally starts rolling FIOS out in Snohomish. Christ, we had DSL in 1997, what the hell do you have to do to get FIOS? Sacrifice a virgin? Then to pour lemon juice on the wound, they SATURATE the airspace, billboards, advertising on mass transit (especially buses that go to Snohomish!) telling people to order FIOS. Meanwhile I know hicks in Louisiana who can't even spell the word "fiber" who have it in their dirt-floored one-room shacks.
Fucking Verizo
Re: (Score:2)
Meanwhile I know hicks in Louisiana who can't even spell the word "fiber" who have it in their dirt-floored one-room shacks.
Fucking Verizon.
Blame the gubment for paying more money to companies who are willing to run new broadband service to areas that were previously under- or un-served. :/
Re: (Score:2)
Not everything can always be hot/hot. Anything involving state and large data volumes, for example.
Think large databases. You need block level hot/hot for redundancy. Now for real redundancy, you need the other datastore to be in a different geography, and under a different government. That's a lot of latency per transaction.
Hot/Warm may be feasible.
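Whatever the hot/hot or hot/warm arrangement on the server side, the client-facing half of a failover boils down to a priority-ordered endpoint list plus a health check. A minimal Python sketch of that pattern (the names and endpoints are illustrative, not Authorize.net's actual setup):

```python
def choose_endpoint(endpoints, is_healthy):
    """Return the first healthy endpoint in priority order.

    Hot/hot: every endpoint is serving traffic, so failover is just
    falling through to the next entry.  Hot/warm: the fallback exists
    but may need promotion before it can really take load, which is
    where untested failovers tend to break.
    """
    for ep in endpoints:
        if is_healthy(ep):
            return ep
    raise RuntimeError("no healthy endpoint available")

# Example: primary is down, so the client falls through to the backup.
# choose_endpoint(["dc-primary", "dc-backup"], lambda e: e == "dc-backup")
```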
It's mindboggling ... (Score:1)
Re: (Score:2)
They do. It sounds like the part of the failover process that involves humans failed somehow.
Fisher Plaza is a disaster response center (Score:4, Informative)
Fisher Plaza is supposed to be a regional telecomm / communications / medical care hub for the Seattle area. It was designed and built to *not* crash, even in a magnitude 9.5 quake. Sounds like they've got work to do ...
Re: (Score:1)
and this is not the first time they had a power outage. There is still no backup generator either.
Re: (Score:1)
System failure (Score:5, Informative)
1: ACTS OF GOD ...
Meteor strike, lightning strike, extreme weather
2: ACTS OF MALICE ...
War, terrorism, extortion, employee sabotage, criminal attacks
3: WEAK INFRASTRUCTRUCTURE ...
Underpowered networks, inadequate UPS backups, skeleton staffing, the shaving of safety margins as an efficiency exercise, inadequate rate of replacing old hardware
4: MANAGEMENT ARSINESS
This is when a problem starts, and the people in charge either don't know how to react, don't care, or prioritise face-saving over actual problem-solving. This happens when you get an outage, and instead of system management promptly calling all their critical clients to inform them, and warn them that there's maybe twenty minutes of UPS capacity in the routers if the system's not fixed by then, they instead cross their fingers and hope that things'll work out, and worry about what to tell the clients afterwards.
Fisher Plaza seems to have suffered from a case of #4 recently, so it's not surprising that they've gone down again. The first time should have been the wakeup call to show them that their human systems were in need of an overhaul. Without that overhaul, you're setting up a dynamic in which the second time it happens, things are even worse (because now people are locked into defensive mode).
No matter how advanced your technological systems, if the people running it have the wrong mindset, you're gonna go down. And when you go down, you're gonna go down far far harder than necessary.
Re: (Score:3, Insightful)
5: Government...
A government that decides to come to your headquarters and decides they want all of your hardware pronto...
Re: (Score:3, Funny)
3: WEAK INFRASTRUCTRUCTURE
It's good to see that you've provided redundancy for the "TRUC" part of your infrastructure, but I'm concerned about the rest of it.
Re: (Score:2)
Authorize.Net did have a backup (Score:3, Informative)
"@gotwww The backup data center was impacted too. Don't have info as to why. The team is solely focused on getting us back up for now."
Re: (Score:2)
If I had to guess, either they did something that stupid or they didn't properly test their failover procedures or their backup data center, and either one or both of those things turned out to be inadequate.
Re:Authorize.Net did have a backup (Score:4, Interesting)
Sometimes folks set up a redundant system and forget to make one key piece redundant.
Example: A server rack with two UPS systems. Each server has two power cords, one going to each UPS.. but the switch everything is plugged into only has one power input, so it's connected to UPS A.
Power blinks and UPS A decides to shit itself. Rack goes down, even though all the machines are up, because the network switch loses power.
Solution? An auto switching power Y-cable with two inputs, and one output. But 80% of people will be lazy and not bother. Oops.
Happens all the time; I see it everywhere.
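The rack example above can be checked mechanically: write down every component each power feed passes through, and anything that appears on every feed is a single point of failure. A hypothetical Python sketch (the component names are illustrative):

```python
def single_points_of_failure(paths):
    """Find components shared by every power feed path.

    Each path is the chain of components (UPS, PDU, switch, ...) that
    must all stay up for one feed to deliver power.  A component that
    appears in *every* path is a single point of failure: lose it and
    all feeds die at once, no matter how redundant the rest is.
    """
    spof = set(paths[0])
    for path in paths[1:]:
        spof &= set(path)
    return spof

# Two UPS feeds that both terminate at one single-corded switch:
# the switch shows up on both paths, so it is the SPOF.
# single_points_of_failure([["UPS-A", "switch"], ["UPS-B", "switch"]])
```

Note this also captures the objection raised later in the thread: add a static transfer switch in front of the single-corded device and the STS itself now appears on both paths.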
Re: (Score:3, Insightful)
An auto-switching power Y-cable with two inputs and one output? I've never seen or heard of these. Do you have a manufacturer or part number?
I'd definitely like some.
Re: (Score:2, Informative)
An auto-switching power Y-cable with two inputs and one output? I've never seen or heard of these. Do you have a manufacturer or part number?
I'd definitely like some.
Well, it ain't just a Y cable and they're not super-cheap, but still affordable if you're running anything that needs anywhere near the level of redundancy that they provide.
It's called a static transfer switch [apc.com] and can be had for a few hundred bucks from most APC dealers (and MGE dealers, now that the merger is complete).
What's nice about them is that unlike a UPS, colo providers don't mind if you stick an STS in your rack, as a UPS removes the colo provider's ability to completely shut off everything in th
Re: (Score:2)
Of course, you still have a single point of failure, you've only moved it from the UPS to the transfer switch.
Geocaching.com too (Score:5, Informative)
And on a holiday. Bummer. :(
Re: (Score:2)
Not only is it a holiday, but there is a HUGE geocaching event (for 3 days) happening in B.C. and anyone attending (I know some people) are SOL for getting information about it.
If anyone knows of a secondary site for finding info on the events, please post!
Re: (Score:1)
# Cache Creek Park, 1500 Quartz Rd. (N50° 49.039 W121° 19.561)
# Clinton, Reg Conn Park, Smith Ave. (N51° 05.314 W121° 35.225)
# Lillooet, Xwisten Park, approx 5km from Lillooet on Hwy 40 (Moha Road) (N50° 45.111 W121° 56.112)
# Logan Lake, Maggs Park, Chartrand Ave. (N50° 29.549 W120° 48.691)
# Lytton, Caboose Park, 4th St. (N50° 13.875 W121° 34.925)
# Merritt, Lions Park, Voght St & 1st Ave. (N50° 06.882 W120° 47.188)
http: [goldcountry.bc.ca]
Re: (Score:2)
Geocachers of Slashdot unite!
Yesterday I almost broke my daily record with 42 finds, but I came home too late to do the logging. Today the site was down all day long. Well, tomorrow then...
As for KPexEA: great service!
Re: (Score:2)
This is the bummer of all bummers this weekend. And me with a cache I'm about to hide.
Nope, not gonna tell. It's a puzzle cache and no freebies!
Re: (Score:2)
Still down here at 1:30 EDT Saturday. They could at least have done an off-site DNS. Weak!
Re: (Score:1)
According to KOMO news (Score:3, Informative)
... whose broadcast facilities reside in this building (they were broadcasting from a park on Queen Anne hill this morning), it was due to a transformer vault fire. The resulting sprinkler operation rendered their backup generator inoperable.
Being in the power biz, this sort of thing is to be expected in typical office buildings. Sometimes the power goes out. Live with it. What really puzzles me is how someone can take such a structure, install a raised floor and some big A/C units on the roof and sell it as a data center. This kind of crap goes on all the time, as I've seen purpose built data centers go down for single point failures.
Re: (Score:1)
Re: (Score:1)
Ditto happened to Caro Hosting several months ago. The backup generator, which had just been turned on because of a power outage, caught on fire. Said hosting service kept backups only of data, and did not have actual failover servers (which they'd promised). Needless to say, providers were switched soon after.
Re: (Score:2)
Fisher Plaza is actually a pretty robust site, and well compartmentalized. The problem with most telecom hotels, though, is that the battery plant is the main line of defense; generator and utility equipment are often located in the same room.
With Verizon, their hub there should go 8 hours on battery in this type of failure while they are trying to coordinate with Aggreko for a roll-up unit. Depending on timing and the fire department, they would expect a 6-8 hour outage.
Re: (Score:1)
How would you compare it to the Westin office building, which has redundant power risers, etc.?
sloppy engineering (Score:1, Flamebait)
"Our current estimate for re-establishing Bing Travel functionality is 5pm PST," says a notice at Bing
When someone in a technical role screws up a timezone designation, for me that is always a red flag that they are sloppy with facts, and I need to closely watch their other decisions, actions and statements, because they may be in over their head.
Re: (Score:2)
I guess your point is that it is PDT time now.
Re:sloppy engineering (Score:4, Funny)
Re:sloppy engineering (Score:4, Informative)
Re: (Score:2)
chromablue photography [chromablue...graphy.com]
by your sig I can tell that you're not an insufferable pedant. at all.
Re: (Score:2)
Also, having a Slashdot account.
Re: (Score:3, Insightful)
"Our current estimate for re-establishing Bing Travel functionality is 5pm PST," says a notice at Bing
When someone in a technical role screws up a timezone designation, for me that is always a red flag that they are sloppy with facts, and I need to closely watch their other decisions, actions and statements, because they may be in over their head.
It's quite likely that this message was not posted by somebody in a technical role, but a managerial role. The technical people may very well have just said "by 5:00" or possibly "by 5:00 Pacific Time", and whoever posted the notice on the web site (while the technical people were busy working on trying to fix things) added "PST" instead of "PDT".
Re: (Score:1)
Pacific Standard Time.
Seattle is on the west coast.
Not everyone lives in new york you know...
Re: (Score:2)
When someone is excessively pedantic for the sole reason of making his virtual penis larger and harder, I point and laaaaaaaaaaugh.
Seriously, get the fuck over yourself. PST is a widely used and widely accepted descriptive term for the Pacific time zone.
Re: (Score:2)
Not really sure what you are complaining about... PST == Pacific Standard Time. I don't see anything wrong with this.
And that's exactly why these kinds of mistakes are made.
Seattle is currently on PDT (GMT -0700), not PST (GMT -0800). The switch back to PST happens in November.
Re: (Score:2)
It's still not incorrect, as they stated that it was in standard time. If they had only stated 5pm Pacific time, one would assume the current Daylight Saving Time.
Canadian (and American, I think, but don't hold me to it) Tide and Current tables are in Standard time, so you need to remember to add the hour when you are in Daylight Saving Time; otherwise your calculations are off, and you can hit low things, and run aground on high things.
Re: (Score:2)
Changes to the time are not random at all, they're clearly defined. Of course those definitions are periodically changed randomly with minimal notification, but that's not the same problem.
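The PST-vs-PDT quibble in this sub-thread is easy to demonstrate concretely. A small Python sketch using the standard-library zoneinfo module shows that on the day of the outage, "5pm PST" taken literally lands an hour after 5pm on Seattle clocks:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

# July 3, 2009: Seattle observes daylight time (PDT, UTC-7).
local_5pm = datetime(2009, 7, 3, 17, 0, tzinfo=ZoneInfo("America/Los_Angeles"))
assert local_5pm.utcoffset() == timedelta(hours=-7)  # PDT, not PST

# "5pm PST" read literally is a fixed UTC-8 offset...
literal_pst_5pm = datetime(2009, 7, 3, 17, 0, tzinfo=timezone(timedelta(hours=-8)))

# ...which corresponds to 6pm on Seattle clocks that day: off by one hour.
diff = literal_pst_5pm - local_5pm
assert diff == timedelta(hours=1)
```

Using the IANA zone name ("America/Los_Angeles") instead of an abbreviation sidesteps the ambiguity entirely, which is why most software avoids PST/PDT labels in machine-readable timestamps.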
Local Seattle coverage: (Score:2)
http://www.seattlepi.com/local/6420ap_wa_fisher_plaza_fire.html?source=mypi [seattlepi.com]
http://seattletimes.nwsource.com/html/localnews/2009415646_webfisherplaza04.html [nwsource.com]
Fisher Plaza? (Score:2)
Wow... (Score:2)
Re: (Score:2)
Re: (Score:1)
I'd post my url for proof, but, I like it to stay online...
Fisher Plaza Designed to survive External factors (Score:2)
I used to manage a 22 rack cage that we leased from Internap at Fisher Plaza back in 2005. They really did build the place well. Massive diesel generators, independent well water, redundant cooling, etc. But it was designed to survive and continue broadcasting for a local news station for 18 days without resupply in the event of a major external disaster like an earthquake.
I imagine they are reviewing their DR procedures and designs now to minimize collateral damage from internal factors.
But let's not be to
Not the first time (Score:1, Informative)
This is the 2nd fire since 2008... Apparently Internap rents power from the building, so they have no control over the quality/maintenance of these generators and UPSes.
The fire, which started around 11:30 PM (or maybe earlier, but first signs were around that time), badly damaged some of the electrical risers, so they are unable to get power back to some parts of the datacenter. According to their last update they're getting external generators to bypass the damaged equipment and power up the rest of the d
Huge portable generator arrives at Fisher Plaza (Score:2, Interesting)
Twitpic link blocked from Slashdot?? (Score:1)
Re: (Score:2)
Works For Me.
affected me (Score:1)
I had to work today to find and fix a bug related to a particular external site... sure enough, our internet access was down.
Pfft! I had a copy of Barry on a linux box, tethered my BlackBerry, a bit of iptables magic, and I'm back online to test.
Re: (Score:2)
Wow, you are just as bad as AuthorizeNet... Namely you are putting all of your eggs into one basket called AMERICA... What you are ignoring are the ramifications if a government decides to take you down. And frankly I am more worried about a government taking me down than some accident.
I am part of a hedge fund and we have data centers in... Caymans, Monaco, and Switzerland... I think you get the drift here... And our exchanges that we talk to are scattered throughout the world... Is it simple? Cheap? No
Re: (Score:2)
it would require a terrorist attack on New York PLUS an earthquake in San Francisco to knock us offline.
Which is all moot since you're using authorize.net as a payment gateway. ;)
Re: (Score:2)
*avoids eye contact*