Seattle Data Center Outage Disrupts E-Commerce

1sockchuck writes "A major power outage at Seattle telecom hub Fisher Plaza has knocked payment processing provider Authorize.net offline for hours, leaving thousands of web sites unable to take credit cards for online sales. The Authorize site is still down, but its Twitter account attributes the outage to a fire, while AdHost calls it a 'significant power event.' Authorize.net is said to be trying to resume processing from a backup data center, but there's no clear ETA on when Fisher Plaza will have power again."
  • Heh (Score:5, Insightful)

    by MightyMartian ( 840721 ) on Friday July 03, 2009 @12:15PM (#28573147) Journal

    Redundancy ain't just a river in Egypt.

    • When this happens in this day and age, the CIO should be fired! There is no excuse. It's a situation where you gamble that it will never happen, but when it does, you should go.
      • Re:No Backup?? (Score:5, Insightful)

        by Nutria ( 679911 ) on Friday July 03, 2009 @01:09PM (#28573665)

        When this happens in this day and age the CIO should be fired!

        And if the CIO recommended a redundant D.C. but the CEO, CFO or Board rejected it as "too expensive"????

        • by SkyDude ( 919251 )

          When this happens in this day and age the CIO should be fired! And if the CIO recommended a redundant D.C. but the CEO, CFO or Board rejected it as "too expensive"????

          If that's the case, then the aforementioned officers should give up their pay to the thousands of merchants who lost their day's pay due to this problem. Yeah, like that'll happen.

          Phone lines occasionally go out and that might affect local merchants, but when it's a data center that handles the livelihoods of thousands of merchants, there needs to be much greater redundancy. The businesses affected by this are not all huge e-tailers, either. Many are just small operators trying to make a living on the Web.

        • And if the CIO recommended a redundant D.C. but the CEO, CFO or Board rejected it as "too expensive"????

          Then they fire the CIO post-haste and blame the whole thing on him.

      • Re:No Backup?? (Score:4, Interesting)

        by sopssa ( 1498795 ) * <sopssa@email.com> on Friday July 03, 2009 @01:14PM (#28573715) Journal

        I know redundancy and such is better on the business side, but this reminds me of how consumer lines have lots of single points of failure as well. There was a day when DHCP at TeliaSonera, a large Nordic ISP, stopped working, leaving a third of the whole country's residents without internet access. It turned out there was a hardware failure on the DHCP server, which leads me to believe they actually depend on just one server to handle all the DHCP requests coming from customers. They did fix it in a few hours, but it was still unavailable for the rest of the day because hundreds of thousands of computers were trying to get an IP address from it. That said, I remember it happening only once, but it still seems stupid.
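
        A rough back-of-envelope sketch of that retry storm in Python; the client count and timer values here are hypothetical illustrations, not TeliaSonera's actual figures:

            # Request rate hitting a recovered DHCP server after an outage:
            # fixed-interval retries vs. capped exponential backoff.
            CLIENTS = 300_000  # assumed number of clients retrying at once

            def fixed_rate(retry_interval_s: float = 5.0) -> float:
                """Every client retries on a short fixed timer: a thundering herd."""
                return CLIENTS / retry_interval_s  # requests per second

            def backoff_rate(attempt: int, base_s: float = 4.0, cap_s: float = 300.0) -> float:
                """Mean rate if clients use capped exponential backoff with full jitter."""
                interval = min(cap_s, base_s * 2 ** attempt)
                return CLIENTS / (interval / 2)  # uniform jitter over [0, interval]

            print(f"fixed 5s retries: {fixed_rate():,.0f} req/s")
            for attempt in range(6):
                print(f"backoff attempt {attempt}: {backoff_rate(attempt):,.0f} req/s")

        Even after the hardware is replaced, a fixed-timer herd keeps the server pinned at tens of thousands of requests per second, which matches the "unavailable for the rest of the day" behaviour.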

        • Comment removed based on user account deletion
          • by ibbey ( 27873 )

            All fine and good... There is no possible way to design the entire world with redundant systems. But a company like Authorize.net doesn't have that excuse. Hoping has nothing to do with it; it's called network engineering. They should have multiple data centers located in geographically dispersed parts of the world. This is hosting 101 for any large-scale internet business. The OP is right: the CIO should be cleaning out his desk as we speak.

      • They should also fire the person who was responsible for having a sprinkler installed above a transformer. Exactly how is spraying water on a transformer going to help in a fire?

    • Re: (Score:3, Informative)

      by Anonymous Coward

      It's interesting how many companies have assumed redundancy is in place but never take the time to do proper testing. They figure that once a disaster happens, everything will automatically work because their vendor or staff said so. To achieve true redundancy a company needs to do semi-frequent testing to ensure that everything is working properly. Authorize.net might have had what was assumed to be a redundant system in place, but once the disaster happened they soon realized their system wasn't designed or configured correctly.

    • Denial of service as it were?
    • This is why clustering is ABSOLUTELY necessary for as large a company as Authorize.net. As parent said, putting all your eggs in one basket is a stupid idea...

      I wonder how many companies will switch to PayPal after this...
      • by rhekman ( 231312 )

        ... putting all your eggs in one basket is a stupid idea...

        ....but... maybe they blew their budget on a really, really good basket?

        • by Nutria ( 679911 )

          maybe they blew their budget on a really, really good basket?

          Mark Twain: Put all your eggs in one basket, and then guard that basket!!

    • More information is available from the NANOG (North American Network Operators' Group) list: http://comments.gmane.org/gmane.org.operators.nanog/65992 [gmane.org].

      Excerpt:
      "Fisher Plaza, a self-styled carrier hotel in Seattle, and home to multiple
      datacenter and colocation providers, has had a major issue in one of its
      buildings late last night, early this morning.
      The best information I am aware of is that there was a failure in the
      main/generator transfer switch which resulted in a fire. The sprinkler
      system activated."

      • ..Outage Disrupts E-commerce.

        Oh noes! Whatever shall we do if e-commerce gets disrupted?

        Because we all know that the cha-chinging of virtual cash registers is the very music of the spheres that keeps the Universe in motion.

        • Let's imagine that you're actually paying this data centre large amounts of money with the assurance that the money means 99.9% uptime. Then, maybe, it might mean something more.

          If you don't give a crap about uptime, then hell, get a Google webpage or something.

          • There is a reason it's 99.9% uptime and not 100%; this can happen, and you can't really sue them if they argue that this is the 0.1% it's down.
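
            For scale, the downtime budget implied by an uptime SLA is simple arithmetic; a quick Python sanity check (the SLA tiers are generic examples):

                # Downtime allowed per year under common uptime SLAs.
                HOURS_PER_YEAR = 365 * 24  # 8,760

                for uptime_pct in (99.0, 99.9, 99.99, 99.999):
                    allowed_h = HOURS_PER_YEAR * (1 - uptime_pct / 100)
                    print(f"{uptime_pct}% -> {allowed_h:.2f} h/year ({allowed_h * 60:.0f} min)")

            At 99.9%, the entire annual budget is about 8.8 hours, so a single multi-hour outage like this one can burn through it in a day.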
          • Let's imagine that you're actually paying this data centre large amounts of money with the assurance that the money means 99.9% uptime.

            Then I'd have to kill myself for having created a stupid business model.

        • by ErkDemon ( 1202789 ) on Friday July 03, 2009 @01:56PM (#28574047) Homepage
          Actually -- in a totally unconnected incident -- my grocery shopping was disrupted today because (according to the note pinned to the closed store's shutters) the store's till server was down, and they'd shut up the shop while they waited for an engineer.

          I'm guessing that the server was probably local, possibly above the store, and might have gone fritzy in the heat.

          So, real-world implications of computer failure. A server goes down, and suddenly Eric Cannot Buy Cheese ("Aaaaiiiieeee!"). Eric has hard cash, store (presumably) has cheese, but store can no longer sell cheese to Eric. Or anything else.

          The shop "crashed".

          Okay, so I trudged off and did my grocery shopping elsewhere, but it was a little disturbing to think that we've already gotten to the point where a server problem can stop you buying food, in a "real" shop, with "real" money.

          • Re: (Score:3, Insightful)

            That's pathetic. I've seen stores stay open during 24-hour POWER FAILURES! Any manager who does not teach their employees how to manually do credit card transactions (yes, you can do them on paper!) should never have been hired in the first place.

            When we lose power around here (once every 6 months or so), the stores stay open. They simply don't accept debit cards (which require a connection to the bank) until the power comes back on.
            • Or (gasp!) make change without a computron! I wonder if they even train that in grocery stores anymore...scary, indeed.

              • Re: (Score:3, Insightful)

                by Spike15 ( 1023769 )

                Or (gasp!) make change without a computron! I wonder if they even train that in grocery stores anymore...scary, indeed.

                I think the bigger issue in this case would be manually looking up the price for every single item. We tend to focus on simplifying the manual selling steps (manually processing credit card transactions, making change manually, etc.), when really the biggest problem is being without the UPC system.

                • The only thing I can add is that I was at a Home Depot once during an extended power outage. They had a generator that ran emergency lighting and the register system, but they had to wait a while for it to boot back up and re-sync to corporate or something. Anyway, during that time they had employees all over the place helping people write down the prices of items they were purchasing so the checkers could ring you up manually. At the register they would write down the UPC, price and quantity to update inventory later.

                • by sjames ( 1099 )

                  Way back when dinosaurs roamed the earth, we had these 'stickers' on every can that showed the price....

                  Others would print it in ink using a stamper.

            • by mpe ( 36238 )
              That's pathetic. I've seen stores stay open during 24 hour POWER FAILURES! Any manager who does not teach their employees how to manually do credit card transactions (yes you can do them by paper!) should never have been hired in the first place.
              When we lose power around here (once every 6 months or so), the stores stay open. They simply don't accept debit cards (which require a connection to the bank) until the power comes back on.


              In other words, it happens frequently enough that there is a procedure to deal with it.
              • by adolf ( 21054 )

                I worked in retail once, for a regional department/grocery store.

                We had enough generator capacity to maintain minimal lighting, keep cold stuff cold, and run the registers. Whenever the power was out on that end of town, people would instantly line up to buy things there instead of at the neighboring competitors, who had no such facilities.

                I'd guess that this allowed it to pay for itself.

          • I can't buy any cheddar here? But it's the most popular cheese in the world!

      • Re: (Score:3, Interesting)

        by Nutria ( 679911 )

        there was a failure in the main/generator transfer switch which resulted in a fire. The sprinkler system activated.

        Where I work, the D.C. is in a sub-level basement. One day a few years ago, a dim-wit plumber was brazing a pipe with a propane torch, and swung it too close to a sprinkler head.

        Sprinkler went off and water did what it does: flow downhill, eventually pouring into the D.C., right onto the SAN storing "my" database...

        We were down for a few days. People couldn't access the web site or IVR, but f

  • Hmm. Power outage stops /. posts. News at 11

  • http://twitter.com/AuthorizeNet/status/2455435020 [twitter.com] Hopefully someone made an offsite backup as well.
  • News at 11...

    tomorrow.

  • by Cothol ( 460219 )

    Bing Travel servers are located in the same server hall. More info: http://isc.sans.org/diary.html?storyid=6721

    • by Zocalo ( 252965 ) on Friday July 03, 2009 @01:00PM (#28573589) Homepage

      The media are also following the story; KOMO, a local station, was knocked offline but is broadcasting from a backup site.

      Way to go, guys! At least two national, and maybe even international, ICT companies on whom numerous affiliates depend fail to provide an adequate backup facility and continuity plan, yet the local AM radio station manages to pull it off. I'm guessing that some heads are gonna roll after the holiday weekend...

      • I'm pretty sure they're talking about KOMO, the TV station, actually. It's one of the largest stations here in Seattle. I think they take up a fair chunk of Fisher Plaza, where the fire was. Still, your point about international and national business entities failing when a local business succeeds is pretty stupid.
      • KOMO is one of the largest TV broadcasters in Seattle. Possibly the largest, although KING might have them beat. Yeah, they also own an AM station.

        I mean, your point still kind of applies, but you might want to look up what KOMO actually is before you chime in with the podunk AM radio comments... http://en.wikipedia.org/wiki/KOMO-TV [wikipedia.org]

  • by Cysgod ( 21531 ) on Friday July 03, 2009 @12:29PM (#28573285) Homepage

    Apparently Verizon has a single point of failure for much of its FiOS service to the metro areas of western Washington in this building as well, so FiOS customers are offline right now too.

    • Clownshoes: Have no failover plan and be singly homed.
    • Meh: Have a failover plan.
    • Good: Have a failover plan that requires humans and exercise it regularly.
    • Better: Have a failover plan that is automated and exercise it regularly.
    • Best: Eliminate single points of failure so failover is just turning off the flaky or failed node and going back to drinking a beer.

    Hot/Hot is always a more ideal solution than Hot/Warm or Hot/Cold for disaster recovery (and increasing equipment utilization/ROI), and this event demonstrates why.
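
    A minimal sketch of the "Better" tier from the list above, in Python: an automated health-check loop that promotes a standby after repeated failures. The endpoints and the promote step are hypothetical placeholders, not anything Authorize.net actually runs:

        # Automated failover sketch: watch the primary and promote the
        # standby after several consecutive failed checks (avoids flapping).
        import time
        import urllib.request

        PRIMARY = "http://primary.example.com/health"  # assumed endpoints
        STANDBY = "http://standby.example.com/health"
        STRIKES_NEEDED = 3

        def healthy(url: str, timeout_s: float = 2.0) -> bool:
            try:
                with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                    return resp.status == 200
            except OSError:  # URLError, timeouts, refused connections
                return False

        def promote_standby() -> None:
            # A real deployment would repoint DNS/a VIP or reconfigure the
            # load balancer here; this is just a stub.
            print("failing over to standby")

        strikes = 0
        while True:
            strikes = strikes + 1 if not healthy(PRIMARY) else 0
            if strikes >= STRIKES_NEEDED:
                if healthy(STANDBY):
                    promote_standby()
                    break
                print("standby unhealthy too; paging a human")
            time.sleep(10)

    And per the list above, a plan like this only counts if it is exercised regularly; untested failover is how a "redundant" site fails right along with the primary.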

    • Re: (Score:3, Informative)

      by Cysgod ( 21531 )

      From Twitter comments, it looks like Verizon finished their failover, since people's FiOS is coming back now.

    • Re: (Score:1, Informative)

      by Anonymous Coward

      Not just FiOS, it looks like; I was wondering why my DSL was offline. Nearly all network services, I would guess.

    • by brianc ( 11901 )

      Best: Eliminate single points of failure...

      Earth is a single point of failure.

      • Best: Eliminate single points of failure...

        Earth is a single point of failure.

        Milky Way Galaxy is a single point of failure.

    • I'll really be concerned about that in 2087 when Verizon finally starts rolling FIOS out in Snohomish. Christ, we had DSL in 1997, what the hell do you have to do to get FIOS? Sacrifice a virgin? Then to pour lemon juice on the wound, they SATURATE the airspace, billboards, advertising on mass transit (especially buses that go to Snohomish!) telling people to order FIOS. Meanwhile I know hicks in Louisiana who can't even spell the word "fiber" who have it in their dirt-floored one-room shacks.

      Fucking Verizon.

      • Meanwhile I know hicks in Louisiana who can't even spell the word "fiber" who have it in their dirt-floored one-room shacks.

        Fucking Verizon.

        Blame the gubment for paying more money to companies who are willing to run new broadband service to areas that were previously under- or un-served. :/

    • by dodobh ( 65811 )

      Not everything can always be hot/hot. Anything involving state and large data volumes, for example.
      Think large databases: you need block-level hot/hot replication for redundancy. And for real redundancy, you need the other datastore to be in a different geography, and under a different government. That's a lot of latency per transaction.

      Hot/Warm may be feasible.
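
      The latency point is easy to quantify. A quick Python estimate, using the speed of light in fiber as a floor (the distances are illustrative, and real paths add routing and processing overhead on top):

          # Minimum commit latency for synchronous cross-site replication:
          # every commit waits at least one round trip to the remote site.
          FIBER_KM_PER_S = 200_000  # roughly 2/3 of c in glass

          def min_rtt_ms(distance_km: float) -> float:
              return 2 * distance_km / FIBER_KM_PER_S * 1000

          for site, km in [("same metro", 50), ("cross-country", 4000), ("other continent", 9000)]:
              rtt = min_rtt_ms(km)
              print(f"{site:>15}: >= {rtt:5.1f} ms RTT, <= {1000 / rtt:6.0f} serial commits/s")

      A database that must confirm every write on another continent tops out around a dozen serial commits per second per session, which is why Hot/Warm with asynchronous replication is often the pragmatic compromise.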

  • ... that authorize.net does not have a failover site.
  • by Anonymous Coward on Friday July 03, 2009 @12:41PM (#28573405)

    Fisher Plaza is supposed to be a regional telecomm / communications / medical care hub for the Seattle area. It was designed and built to *not* crash, even in a magnitude 9.5 quake. Sounds like they've got work to do ...

    • And this is not the first time they've had a power outage. There is still no backup generator, either.

      • And having sprinklers in the electrical room is such a good idea too. Let's make sure we don't just suffer from fire damage.... They're lucky no one was seriously injured.
  • System failure (Score:5, Informative)

    by ErkDemon ( 1202789 ) on Friday July 03, 2009 @12:44PM (#28573431) Homepage
    There are four main factors that can take a part of a society's key infrastructure offline.

    1: ACTS OF GOD
    Meteor strike, lightning strike, extreme weather ...

    2: ACTS OF MALICE
    War, terrorism, extortion, employee sabotage, criminal attacks ...

    3: WEAK INFRASTRUCTRUCTURE
    Underpowered networks, inadequate UPS backups, skeleton staffing, the shaving of safety margins as an efficiency exercise, inadequate rate of replacing old hardware ...

    4: MANAGEMENT ARSINESS
    This is when a problem starts, and the people in charge either don't know how to react, don't care, or prioritise face-saving over actual problem-solving. This happens when you get an outage, and instead of system management promptly calling all their critical clients to inform them, and warn them that there's maybe twenty minutes of UPS capacity in the routers if the system's not fixed by then, they instead cross their fingers and hope that things'll work out, and worry about what to tell the clients afterwards.

    Fisher Plaza seems to have suffered from a case of #4 recently, so it's not surprising that they've gone down again. The first time should have been the wakeup call to show them that their human systems were in need of an overhaul. Without that overhaul, you're setting up a dynamic in which the second time it happens, things are even worse (because now people are locked into defensive mode).

    No matter how advanced your technological systems, if the people running it have the wrong mindset, you're gonna go down. And when you go down, you're gonna go down far far harder than necessary.

    • Re: (Score:3, Insightful)

      by SerpentMage ( 13390 )

      5: Government...

      A government that decides to come to your headquarters and take all of your hardware pronto...

    • Re: (Score:3, Funny)

      by eln ( 21727 )

      3: WEAK INFRASTRUCTRUCTURE

      It's good to see that you've provided redundancy for the "TRUC" part of your infrastructure, but I'm concerned about the rest of it.

    • Isn't #3 a result of #4?
  • by johnncyber ( 1478117 ) on Friday July 03, 2009 @12:46PM (#28573461)
    ...except it failed as well. From their twitter:

    "@gotwww The backup data center was impacted too. Don't have info as to why. The team is solely focused on getting us back up for now."
    • by eln ( 21727 )
      What, was the backup data center on the floor directly below the primary data center?

      If I had to guess, either they did something that stupid or they didn't properly test their failover procedures or their backup data center, and either one or both of those things turned out to be inadequate.
    • by ZorinLynx ( 31751 ) on Friday July 03, 2009 @02:44PM (#28574429) Homepage

      Sometimes folks set up a redundant system and forget to make one key piece redundant.

      Example: A server rack with two UPS systems. Each server has two power cords, one going to each UPS... but the switch everything is plugged into only has one power input, so it's connected to UPS A.

      Power blinks and UPS A decides to shit itself. Rack goes down, even though all the machines are up, because the network switch loses power.

      Solution? An auto-switching power Y-cable with two inputs and one output. But 80% of people will be lazy and not bother. Oops.

      Happens all the time; I see it everywhere.
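
      The pattern generalizes: enumerate single-component failures against your power topology and see what actually goes dark. A small Python sketch of the hypothetical two-UPS rack described above (the topology is the commenter's example, not a real site):

          # Find which loads go dark when any single power source fails.
          POWERED_BY = {
              "server1": {"ups_a", "ups_b"},  # dual power supplies
              "server2": {"ups_a", "ups_b"},
              "switch":  {"ups_a"},           # single input: the oversight
          }

          for failed in ("ups_a", "ups_b"):
              down = [load for load, feeds in POWERED_BY.items() if not (feeds - {failed})]
              print(f"{failed} fails -> down: {down or 'nothing'}")
          # ups_a fails -> down: ['switch'], and the switch takes the
          # whole rack's network with it, exactly as described above.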

      • Re: (Score:3, Insightful)

        by linuxbert ( 78156 )

        An auto-switching power Y-cable with two inputs and one output? I've never seen or heard of these... Do you have a manufacturer or part number?
        I'd definitely like some.

        • Re: (Score:2, Informative)

          by funkboy ( 71672 )

          An auto-switching power Y-cable with two inputs and one output? I've never seen or heard of these... Do you have a manufacturer or part number?
          I'd definitely like some.

          Well, it ain't just a Y cable and they're not super-cheap, but still affordable if you're running anything that needs anywhere near the level of redundancy that they provide.

          It's called a static transfer switch [apc.com] and can be had for a few hundred bucks from most APC dealers (and MGE dealers, now that the merger is complete).

          What's nice about them is that, unlike a UPS, colo providers don't mind if you stick an STS in your rack, since a UPS removes the colo provider's ability to completely shut off everything in the rack.

          • by Zak3056 ( 69287 )

            Of course, you still have a single point of failure; you've only moved it from the UPS to the transfer switch.

  • Geocaching.com too (Score:5, Informative)

    by dickens ( 31040 ) on Friday July 03, 2009 @12:56PM (#28573555) Homepage

    And on a holiday. Bummer. :(

    • Aha, somebody else noticed this as well!

      Not only is it a holiday, but there is a HUGE geocaching event (for 3 days) happening in B.C., and anyone attending (I know some people) is SOL for getting information about it.

      If anyone knows of a secondary site for finding info on the events, please post!
      • by KPexEA ( 1030982 )
        Event Locations
        # Cache Creek Park, 1500 Quartz Rd. (N50° 49.039 W121° 19.561)
        # Clinton, Reg Conn Park, Smith Ave. (N51° 05.314 W121° 35.225)
        # Lillooet, Xwisten Park, approx 5km from Lillooet on Hwy 40 (Moha Road) (N50° 45.111 W121° 56.112)
        # Logan Lake, Maggs Park, Chartrand Ave. (N50° 29.549 W120° 48.691)
        # Lytton, Caboose Park, 4th St. (N50° 13.875 W121° 34.925)
        # Merritt, Lions Park, Voght St & 1st Ave. (N50° 06.882 W120° 47.188)

        http: [goldcountry.bc.ca]
    • Geocachers of Slashdot unite!

      Yesterday I almost broke my daily record with 42 finds, but I came home too late to do the logging. Today the site was down all day long. Well, tomorrow then...

      As for KPexEA: great service!

    • by ackthpt ( 218170 )

      This is the bummer of all bummers this weekend. And me with a cache I'm about to hide.

      Nope, not gonna tell. It's a puzzle cache and no freebies!

    • by dickens ( 31040 )

      Still down here at 1:30 EDT Saturday. They could at least have done an off-site DNS. Weak!

    • by dwywit ( 1109409 )
      My wife's going nuts. She asked me, "What happens when it comes back online and a bazillion geocachers start hitting the site?"
  • by PPH ( 736903 ) on Friday July 03, 2009 @01:17PM (#28573741)

    ... whose broadcast facilities reside in this building (they were broadcasting from a park on Queen Anne Hill this morning), it was due to a transformer vault fire. The resulting sprinkler operation rendered their backup generator inoperable.

    Being in the power biz, I can say this sort of thing is to be expected in typical office buildings. Sometimes the power goes out. Live with it. What really puzzles me is how someone can take such a structure, install a raised floor and some big A/C units on the roof, and sell it as a data center. This kind of crap goes on all the time; I've seen purpose-built data centers go down from single-point failures.

    • by kyoorius ( 16808 )
      Same thing [slashdot.org] happened to theplanet.com last year. Transformer went boom, fire, etc. The backup generator was allegedly shut down on the fire department's orders. This is happening so frequently that it should be included in disaster planning and standard test scenarios.
      • Ditto happened to Caro Hosting several months ago. The backup generator, which had just been turned on because of a power outage, caught on fire. Said hosting service kept backups only of data, and did not have actual failover servers (which they'd promised). Needless to say, providers were switched soon after.

    • Fisher Plaza is actually a pretty robust site, and well compartmentalized. The problem with most telecom hotels, though, is that the battery plant is the main line of defense; generator and utility equipment are often located in the same room.

      With Verizon, their hub there should go 8 hours on battery in this type of failure while they coordinate with Aggreko for a roll-up generator. Depending on timing and the fire department, they would expect a 6-8 hour outage.

  • "Our current estimate for re-establishing Bing Travel functionality is 5pm PST," says a notice at Bing

    When someone in a technical role screws up a timezone designation, for me that is always a red flag that they are sloppy with facts, and I need to closely watch their other decisions, actions and statements, because they may be in over their head.
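
    For what it's worth, the nit is checkable in code: in early July, US Pacific time observes daylight saving, so a July 3 ETA should read PDT, not PST. A quick check with Python's standard zoneinfo module (Python 3.9+):

        # America/Los_Angeles is on daylight saving time (PDT) in July.
        from datetime import datetime
        from zoneinfo import ZoneInfo

        eta = datetime(2009, 7, 3, 17, 0, tzinfo=ZoneInfo("America/Los_Angeles"))
        print(eta.strftime("%Y-%m-%d %H:%M %Z"))  # 2009-07-03 17:00 PDT
        print(eta.utcoffset())                    # -1 day, 17:00:00 (UTC-7)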

    • by hey ( 83763 )

      I guess your point is that it is PDT now.

    • by eln ( 21727 ) on Friday July 03, 2009 @02:05PM (#28574119)
      Focusing on something that 99% of us screw up at one point or another, particularly when our primary focus at the time is probably getting the service back online rather than checking the calendar to see if it's Daylight Saving Time or not, for me is always a red flag that you're an insufferable pedant.
    • Re: (Score:3, Insightful)

      by Phroggy ( 441 )

      "Our current estimate for re-establishing Bing Travel functionality is 5pm PST," says a notice at Bing

      When someone in a technical role screws up a timezone designation, for me that is always a red flag that they are sloppy with facts, and I need to closely watch their other decisions, actions and statements, because they may be in over their head.

      It's quite likely that this message was not posted by somebody in a technical role, but a managerial role. The technical people may very well have just said "by 5:00" or possibly "by 5:00 Pacific Time", and whoever posted the notice on the web site (while the technical people were busy working on trying to fix things) added "PST" instead of "PDT".

    • Pacific Standard Time.

      Seattle is on the west coast.

      Not everyone lives in New York, you know...

    • When someone in a technical role screws up a timezone designation, for me that is always a red flag that they are sloppy with facts, and I need to closely watch their other decisions, actions and statements, because they may be in over their head.

      When someone is excessively pedantic for the sole reason of making his virtual penis larger and harder, I point and laaaaaaaaaaugh.

      Seriously, get the fuck over yourself. PST is a widely used and widely accepted descriptive term for the Pacific time zone.

  • Sounds more like Fisher-Price. Glad that none of my customers rely on Authorize.net.
  • Adhost oversees two sites for my family's business: http://www.seliger.com [seliger.com] and http://blog.seliger.com [seliger.com]. At least part of the Fisher Plaza data center seems to be up at the moment because seliger.com will load for me, while blog.seliger.com won't. When I figured this out a few hours ago, I sent an e-mail to Adhost and got this as part of the response:

    We have been advised by the building engineering team that they anticipate restoring power to the Plaza East building in plus or minus 4 hours. We sincerely ho

    • Sorry to reply to my own comment, but the Adhost e-mail servers are also working. I don't know if this is because their main site is coming back online or if it's because their backup worked.
      • My website is hosted at Adhost, and it's up right now. Email, too.

        I'd post my URL for proof, but I like it to stay online...
  • I used to manage a 22-rack cage that we leased from Internap at Fisher Plaza back in 2005. They really did build the place well: massive diesel generators, independent well water, redundant cooling, etc. It was designed so that a local news station could survive and keep broadcasting for 18 days without resupply in the event of a major external disaster like an earthquake.

    I imagine they are reviewing their DR procedures and designs now to minimize collateral damage from internal factors.

    But let's not be too hard on them.

  • Not the first time (Score:1, Informative)

    by Anonymous Coward

    This is the 2nd fire since 2008... Apparently Internap rents its power from the building, so they have no control over the quality or maintenance of the generators and UPSes.

    The fire, which started around 11:30 PM (or maybe earlier, but the first signs were around that time), badly damaged some of the electrical risers, so they are unable to get power back to some parts of the datacenter. According to their last update, they're getting external generators to bypass the damaged equipment and power up the rest of the datacenter.

  • I had to work today to find and fix a bug related to a particular external site... sure enough, our internet access was down.

    Pfft! I had a copy of Barry on a Linux box, tethered my BlackBerry, applied a bit of iptables magic, and I'm back online to test.
