
Stupid Data Center Tricks 305

jcatcw writes "A university network is brought down when two network cables are plugged into the wrong hub. An employee is injured after an ill-timed entry into a data center. Overheated systems are shut down by a thermostat setting changed from Fahrenheit to Celsius. And, of course, Big Red Buttons. These are just a few of the data center disasters caused by human folly."
  • bad article is bad (Score:5, Insightful)

    by X0563511 ( 793323 ) on Sunday August 15, 2010 @08:33AM (#33256488) Homepage Journal

    The summary reads like a digg post, and has two different links that, in actuality, link to the exact same thing.

    This needs some fixin'.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      I seem to remember that in the early days of Telehouse London an engineer switched off power to the
      entire building. Only two routes out of the UK remained, the ones that had their own back-up power
      (one was a 256k satellite connection).

      • Good judgement comes from experience. And most experience comes as a result of bad judgement.

        Just about anyone who has been in the line of fire as sysadmin for long enough will recall some ill-conceived notion that caused untold trouble. Since my earliest experience with commercial computers was in a batch-processing environment, my initial mishaps rarely inconvenienced anybody other than myself. But I still recall an incident much later (early '90s) when I inadvertently managed to delete the ":per" direc
        • by Helen O'Boyle ( 324127 ) on Sunday August 15, 2010 @07:18PM (#33259636) Journal
          Good post title, BrokenHalo. I'll chime in with my two.

          1987, my first full-time job. I was a small ISV's UNIX guru. I wanted to remove everything under /usr/someone. I cd'd to /usr/someone and typed, "rm -r *", then I realized, hey, I know that won't get everything, better add some more, and the command became, "rm -r * .*". I realized, oh no, this'll get .. too, so I better change it to: "rm -r * .?*". It took about 12 microseconds after I hit enter to realize that ".?*" still included "..". Yes, disastrous results ensued, even though I was able to ^C to avoid most of the damage, and I had the backup tape (back in the day, we used reels) in the tape drive just as users (other devs) began to notice that /usr/lib wasn't there. Yep, I have my own memories of red-facedly telling my boss, "oops, I did this, I'm in the process of fixing it now. Give me half an hour." In the future, "rm -r /usr/someone" did the trick nicely.

          Early 1990s, I was consulting in the data center of a company with 8 locations around the world. It contained the company's central servers, which were accessed by about 700 users. Being a consultant, they didn't have a good place to put me, so I ended up at a desk in the computer room. Behind me was a large counter-high UPS that the previous occupant had used as somewhat of a credenza, and I carried on the tradition. That is, until the day I had put my cape on there, and the cape slid down and, through one of those Rube Goldberg miracles, caught the UPS master shutoff handle, pulled it down, and I heard about 30 servers (thank goodness there weren't more) powering down instantaneously. Amazingly, I lived, thanks to the ops manager pointing out to the powers that be that it was a freak accident and that others had been setting similar stuff in the same place for years. The cape, however, was not allowed back in the data center.

          Fortunately, I've had better luck and/or been more careful over the past 20 years.
    • by macwhizkid ( 864124 ) on Sunday August 15, 2010 @09:33AM (#33256716)

      Article also needs fixin' in the lessons learned from the incidents described. Look, I'm sorry, but if your hospital network was inadvertently taken down by a "rogue wireless access point", the lesson to be learned isn't that "human errors account for more problems than technical errors" -- it's that your network design is fundamentally flawed.

      Or the woman who backed up the office database, reinstalled SQL server, and backed up the new (empty) server on the same tape. Yeah, a new tape would have solved that problem. Or, you know, not being a mindless automaton. Reminds me of a quote one of my high school teachers was fond of: "Life is hard. But life is really hard if you're stupid."

      • by amorsen ( 7485 )

        Or the woman who backed up the office database, reinstalled SQL server, and backed up the new (empty) server on the same tape. Yeah, a new tape would have solved that problem. Or, you know, not being a mindless automaton.

        It is not obvious to someone replacing the backup tape whether the backup is appended to the previous backup or replaces the previous one entirely. The former was not all that uncommon back when backup tapes had decent sizes. These days where you need 4 tapes to backup a single drive no one appends.

        Of course there are tons of other things wrong with a one-tape backup schedule, but again she couldn't necessarily be expected to know about them.

        • Re: (Score:2, Insightful)

          by macwhizkid ( 864124 )

          It is not obvious to someone replacing the backup tape whether the backup is appended to the previous backup or replaces the previous one entirely. The former was not all that uncommon back when backup tapes had decent sizes. These days where you need 4 tapes to backup a single drive no one appends.

          Yeah, it's not clear from TFA whether she thought there was enough space, or was just clueless. Regardless, though, when you have mission-critical data on a single drive you shut it down, put it in a fire safe until you're ready to restore, whatever. But you don't just casually keep using it. And who backs up a test database install anyway?

          It's just interesting that the first story in the article was a technical problem (poor network design/admin) being blamed on user error (unauthorized wireless AP/network ca

        • Re: (Score:2, Insightful)

          by fishbowl ( 7759 )

          >These days where you need 4 tapes to backup a single drive no one appends.

          These days with LTO-4, my biggest problem is having enough time to guarantee a daily backup.

      • Re: (Score:3, Insightful)

        by mlts ( 1038732 ) *

        This is one reason that D2D2T setups are a good thing. If the tape gets overwritten, most likely the copy sitting on the HDD is still useful for recovering.

        One thing I highly recommend businesses get is a backup server. It can be as simple as a 1U Linux box connected to a Drobo that does an rsync. It can be a passive box that does Samba/CIFS serving, with one account for each machine and each machine's individual backup program dumping to it. Or the machine can have an active role and run a backup utility like

  • by Florian Weimer ( 88405 ) <fw@deneb.enyo.de> on Sunday August 15, 2010 @08:37AM (#33256508) Homepage

    Can this really happen easily? I thought for really ugly things to happen, you need to have switches (without working STP, that is).

    • by Lehk228 ( 705449 )
      A hub can also be a switch. I have worked with people who referred to both switches and repeaters as hubs.
      • by ianalis ( 833346 ) on Sunday August 15, 2010 @09:33AM (#33256718) Homepage

        According to CCNA Sem 1, a hub is a multiport repeater that operates at layer 1. A switch is a multiport bridge that operates at layer 2. I thought these definitions were universally accepted and used, until I used non-Cisco devices. I now have to refer to L2 and L3 switches even though CCNA taught me that these are switches and routers, respectively.

        • Re: (Score:3, Interesting)

          by X0563511 ( 793323 )

          It's so irritating when you ask for a hub, and someone hands you a switch. Stores do the same thing. It's hard enough to find hubs, let alone find them when the categorization lumps them together.

          No, I said hub. I don't want switching. I want bits coming in one port to come back out of all the others.

          You can do that with a switch, but getting a switch that can do that is a bit more pricey than a real hub...
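
          If all you really need is to see the traffic, a port mirror on the switch gets you most of the hub-like behaviour without hunting down antique hardware. A rough sketch on a Cisco IOS switch follows; the session number and interface names are made up for the example.

          ! hypothetical sketch: mirror one port's traffic to a sniffer port (SPAN)
          monitor session 1 source interface FastEthernet0/1 both
          monitor session 1 destination interface FastEthernet0/24

          Of course that only copies frames to the one monitor port, rather than flooding every port the way a true hub does.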

        • Re: (Score:3, Informative)

          by bsDaemon ( 87307 )

          There is such a thing as a Layer 3 switch. They have routing functionality built-in, mostly to reduce latency for inter-vlan routing across a single switch. Cisco makes devices called Layer 3 switches, which are different from routers.

        • by Geoff-with-a-G ( 762688 ) on Sunday August 15, 2010 @06:15PM (#33259310)

          I'm a CCNP taking my CCIE lab next month, so I'll give this a shot.

          Yes, the "cow goes moo" level definitions you get are "hub = L1, switch = L2, router = L3" but the reality is more complex.
          A hub is essentially a multi-port repeater. It just takes data in on one port and spews it out all the others.
          A switch is a device that uses hardware (not CPU/software) to consult a simple lookup table which tells it which port(s) to forward the data to, and does so very fast (if not always wire-speed). Think of it like the GPU/graphics card in your PC: something specific, done super fast.
          A router is a device that understands network hierarchy/topology (in the case of IP, this is mainly about subnetting, but there are plenty of other routed protocols) and can traverse that hierarchy/topology to determine the next hop towards a destination.

          Now, because of the protocol addressing in Ethernet and IP, these lend themselves easily to hub/switch/router = L1/L2/L3, but they're not really defined that way.

          These days, most Cisco switches (3560, 3750, 6500, etc) run IOS, the software which can do routing, and which uses CEF. CEF in a nutshell takes the routing table (which would best be represented as a tree) and compiles it into a "FIB", which is essentially a flat lookup-table version of that same (layer 3, IP) table. It also caches a copy of the L2 header that the router needs to forward an L3 packet. The hardware (ASICs) in the switches hold this FIB, and thus allow them to "switch" IP/L3 packets at fast rates and without CPU intervention, thus making them still "switches", even if they run a routing protocol and build a routing table.

          Meanwhile, when Cisco refers to a "router" in marketing terms, they're talking about a device with a (relatively) powerful CPU, which can not only perform actual routing, but also usually more CPU-intensive inter-network tasks like Netflow and NBAR.
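
          For a concrete feel of the distinction, here is a minimal sketch on a Catalyst-style switch running IOS; the VLAN numbers and addresses are invented for the example. Turning on ip routing makes the box build a routing table (and a CEF FIB) while it keeps switching frames in hardware.

          ! hypothetical sketch: inter-VLAN routing on a "layer 3 switch"
          ip routing
          !
          interface Vlan10
           ip address 10.0.10.1 255.255.255.0
          !
          interface Vlan20
           ip address 10.0.20.1 255.255.255.0
          !
          ! the routing table, and the CEF FIB the ASICs forward from
          show ip route
          show ip cef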

    • by Pentium100 ( 1240090 ) on Sunday August 15, 2010 @08:51AM (#33256566)

      This should work quite OK with hubs. A hub, after all, sends the packet to every port except the one where it came from. So two hubs in a loop should just forward the same packet back and forth all the time.

    • by omglolbah ( 731566 ) on Sunday August 15, 2010 @08:53AM (#33256584)

      Oh yes, it works quite well for sabotaging a network.

      It used to be a constant issue at LAN parties where "pranksters" would do it before going to sleep... Usually we never found them but when we did we flogged them with cat5 cables stripped of insulation :p

    • I saw this happen at my high school once -- someone thought it would be funny to connect one port of an old switch to another port on that same switch. The entire network was flooded for a day while the IT staff tried to figure out where the switch was.

      That was years ago though, I would have thought that by now, these issues had been resolved.
      • by jimicus ( 737525 )

        It has in theory. Spanning tree should take care of it.

        Though I have seen interop issues which prevent any traffic from going between two different vendors' STP-enabled switches.

    • Human error rate is enormously variable [hawaii.edu], but for infrequently-occurring tasks (those you only do occasionally, not every day), a value of between 1% and 2% is a useful approximation.

      I am fortunate in working in an organisation with perhaps the best and most competent ops manager I have ever worked with, but even with well-written procedures and well-trained ops staff, errors still occur — but very rarely.

    • by MoogMan ( 442253 )

      Reading TFA, it was almost certainly because STP wasn't set up correctly. For instance, if the switchport in question had bpduguard enabled then it would have become disabled as soon as the erroneous hub was added, resulting in a localised issue not a network-wide problem.

      It's an issue that many network engineers learn the hard way exactly once and fix quickly by reviewing their STP configuration and, in many cases, introducing QoS for sanity.

      "We didn't do an official lessons learned [exercise] after this, it was just more of a 'don't do that again,'" says Bowers

      Well, apart from that guy.

    • Re: (Score:3, Informative)

      by Shimbo ( 100005 )

      Can this really happen easily? I thought for really ugly things to happen, you need to have switches (without working STP, that is).

      Spanning tree cannot deal with the situation where there is a loop on a single port, which you can create easily by attaching a consumer-grade switch. There are various workarounds (such as BPDU protection), but they aren't standard and require manual configuration. Once your network gets big enough, you probably can't afford not to use them, though.
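
      For reference, the sort of BPDU protection being described looks roughly like this on a Cisco IOS access switch; the interface name and recovery timer are only examples.

      ! hypothetical sketch: errdisable any edge port that ever receives a BPDU
      spanning-tree portfast bpduguard default
      !
      interface FastEthernet0/1
       description user-facing edge port
       spanning-tree portfast
       spanning-tree bpduguard enable
      !
      ! optionally bring a tripped port back on its own after 5 minutes
      errdisable recovery cause bpduguard
      errdisable recovery interval 300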

  • by Anonymous Coward on Sunday August 15, 2010 @08:39AM (#33256514)

    Where I work, a couple of years ago one of the non-technical people decided to plug a router into itself. It ended up bringing down the whole network for ~25 people in a company that depended on the Internet (an Internet marketing company).

    Unfortunately one of the tech guys figured it out literally as everyone was standing by the elevator waiting for it to take us home. We were that close to freedom :(

    • Plug all the ethernet-like T1 cables into a switch
    • Change the administrator password and forget what you changed it to
    • Hang everything off a single power strip, no UPS
    • Buy expensive remote management cards but don't bother to configure them
    • by v1 ( 525388 ) on Sunday August 15, 2010 @09:03AM (#33256610) Homepage Journal

      - run thinnet lines along the floor under people's desks, for them to occasionally get kicked and aggravate loose crimps, taking entire banks of computers (in a different wing of the building) off the LAN with maddening irregularity

      - plug a critical switch into one of the UPS's "surge only" outlets

      - install expensive new baytech RPMs on the servers at all remote locations, and forget to configure several of the servers to "power on after power failure".

      - on the one local server you cannot remote manage, plug its inaccessible monitor into a wall outlet

      honorable mention:

      - junk the last service machine you have lying around that has a SCSI card in it while you still have a few servers using SCSI drives

  • Not using Cisco ACLs (Score:4, Interesting)

    by Nimey ( 114278 ) on Sunday August 15, 2010 @08:48AM (#33256556) Homepage Journal

    Our entire network was brought down a few years ago when a student plugged a consumer router into his dorm room's port. Said router provided DHCP, and having two conflicting DHCP servers on the network terminally confused everything that didn't use static IPs.

    Took our networking guys hours to trace that one down.

    • by omglolbah ( 731566 ) on Sunday August 15, 2010 @08:52AM (#33256574)

      Amusingly anyone who ever worked as tech crew at a lan party knows that this is the first thing you look for... :p

    • Re: (Score:3, Interesting)

      by GuldKalle ( 1065310 )

      I had that error too, on a city-wide network. The solution? Get an IP from the offending router, go to its web interface, use the default password to get in, and disable DHCP.

    • by jimicus ( 737525 ) on Sunday August 15, 2010 @09:16AM (#33256656)

      Hours?

      You get something on the network which has an IP from the offending DHCP server, use ARP to establish what that DHCP server's MAC address is, then look up the switches' own tables to figure out which port that MAC is plugged into, switch that port off and wait for the equipment owner to start complaining. Takes about 3-5 minutes to do by hand, and some switches can do it automatically.
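
      Roughly, on Cisco gear the hunt might look like the following; the IP, MAC and interface below are invented for the example.

      ! hypothetical sketch: chase a rogue DHCP server to its switch port
      show ip arp 192.168.1.1
      show mac address-table address 0011.2233.4455
      !
      ! shut the offending port and wait for the owner to complain
      interface FastEthernet0/12
       shutdown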

      • Re: (Score:2, Informative)

        by eric2hill ( 33085 )

        Cisco switches have a wonderful feature called DHCP snooping. Enable it globally with

        ip dhcp snooping

        followed by

        ip dhcp snooping trust

        on the port that supplies DHCP to the network. This ensures that only the trusted port can hand out DHCP addresses, and as a bonus, the switch tells you which MAC has which IP:

        show ip dhcp snooping binding

        • by fluffy99 ( 870997 ) on Sunday August 15, 2010 @12:49PM (#33257616)

          Cisco switches have a wonderful feature called dhcp snooping.

          Not supported on many of the lower-end Cisco edge switches. I believe it also interferes with DHCP relaying.

          Another great tool is "ip verify source vlan dhcp-snooping", which can be used to block traffic from IPs/MACs that did not obtain their IP from the DHCP server. This nicely prevents users from statically assigning addresses and/or spoofing their MAC address.

      • by Nimey ( 114278 )

        *shrug* Most likely they'd never considered a "hostile" DHCP server on the network (lots of other things could have killed the network, so they thought), and had never seen what that looks like.

        OTOH we can't pay very well, so we can't get top-notch talent.

        • by jimicus ( 737525 )

          *shrug* Most likely they'd never considered a "hostile" DHCP server on the network (lots of other things could have killed the network, so they thought), and had never seen what that looks like.

          OTOH we can't pay very well, so we can't get top-notch talent.

          My employer develops router firmware. Our engineers are experts at finding odd ways to kill the network ;)

      • That just tells you what it's plugged into. It doesn't necessarily tell you *where* it is, it just narrows it down. And if you can't disable that switch port remotely... hoo boy... and since it's in a dorm you have the risk of multiple patches in a single room or, worse, someone smart enough to say "hey, this doesn't work in my room, lemme try my friend's room down the hall..."

        Goes back to the old line: "I've lost a server. Literally lost it. It's up, it responds to ping, I just can't *find* it."

    • Re: (Score:3, Funny)

      I have made this error before :)

      What surprised me was that the Linksys router assigned IP numbers up through the uplink connection. I thought that was impossible; guess not.

  • Quad Graphics 2000 (Score:5, Interesting)

    by Anonymous Coward on Sunday August 15, 2010 @08:51AM (#33256570)

    In the summer of 2000 I worked at Quad/Graphics (printer, at least at that time, of Time, Newsweek, Playboy, and several other big-name publications). I was on a team of interns inventorying the company's computer equipment -- scanning bar coded equipment, and giving bar codes to those odds and ends that managed to slip through the cracks in the previous years. (It's amazing what grew legs and walked from one plant to another 40 miles away without being noticed.)

    One of my co-workers got curious about the unlabeled big red button in the server room. Because he lied about hitting it, the servers were down for a day and a half while a team tried to find out what wiring or environmental monitor fault caused the shutdown. That little stunt cost my co-worker his job and cost the company several million dollars in productivity. It slowed or stopped work at three plants in Wisconsin, one in New York, and one in Georgia.

    The real pisser was the guilty party lying about it, thereby starting the wild goose chase. If he had been honest, or even claimed it was an accident, the servers would have all been up within the hour, and at most plants little or no productivity would have been lost.

    The reality: a 20-year-old's shame cost a company millions.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Why the fuck was the button unlabeled? That's the REAL MISTAKE.

    • by FictionPimp ( 712802 ) on Sunday August 15, 2010 @09:16AM (#33256660) Homepage

      Well, where I work some maintenance genius decided that the location of the red button (near the entrance door) was too risky. They said people coming in the door could hit it while trying to turn on the lights.

      Their solution? They moved it to behind the racks. So every time I bend down to move or check something I have to be conscious not to turn off the power to the entire room with my ass.

    • by drsmithy ( 35869 ) <drsmithy@gm[ ].com ['ail' in gap]> on Sunday August 15, 2010 @05:05PM (#33259024)

      One of my co-workers got curious about the unlabeled big red button in the server room. Because he lied about hitting it [...]

      At a previous job we had one of these (albeit with a "Do not push this, ever" label above it) that did nothing more than set off a siren and snap a photo of the offender with a hidden camera. Much amusement was had by all when some new employee's curiosity inevitably got the better of them.

  • by Anonymous Coward
    The Etherkiller [fiftythree.org]
  • Sure, technology causes its share of headaches, but human error accounts for roughly 70% of all data-center problems.

    And 70% of all statistics are made up on the spot.

  • Video (Score:5, Funny)

    by AnonymousClown ( 1788472 ) on Sunday August 15, 2010 @09:11AM (#33256644)
    Here's a video of a tech worker explaining why these things happen. [youtube.com]

    It's very disturbing and you'll see why these things happen.

  • Way back in the day at the B.U. computer center, the machine room had an extensive Halon fire system with nozzles under the raised flooring and on the ceiling. Pretty big room that housed an IBM mainframe, about a half dozen tape drives, maybe 50 refrigerator-sized disk drives, racks and racks of magnetic tape, a laser printer the size of a small car, networking hardware, etc. etc. One day, the maintenance people were walking through and their two-way radios set off the secondary fire alarm. At that poin

  • by Kupfernigk ( 1190345 ) on Sunday August 15, 2010 @09:50AM (#33256782)
    This was a server room at an (unnamed) UK PLC. The air conditioning had remote management, and the remote management notified the maintenance people that attention was needed. So someone was sent out, on a Friday afternoon.

    When he arrived, most of the staff had gone home and the skeleton IT staff didn't want to hang around. So, they sent him away on the basis that his work wasn't "scheduled".

    Everybody came back on Monday to find totally fried servers.

    • by dirk ( 87083 ) <dirk@one.net> on Sunday August 15, 2010 @01:51PM (#33257918) Homepage

      I have a better AC story. We had a second AC unit installed in the server room, as the first was cranking 24/7 and was just barely keeping up, with the thought that the two of them in tandem could handle the load. A few days after it was installed, we noticed the room was hot when we got in in the morning. Not enough to cause alarms, but hotter than it should be. As the day went on, it dropped, so we chalked it up to a one-time fluke. This happened a time or two more throughout the week, but it always dropped during the day.

      Finally the weekend came, and it got hot enough to cause an alarm. We got in and the AC units kicked on without us actually doing anything, and the room started to cool down. We called our AC guys and they checked both systems and couldn't find anything wrong with either of them. Well, the same thing happened again that night. Finally, someone was there late, trying to see if they could see what was going on. Everything was fine throughout the evening, so he finally decided to leave. Luckily, he noticed as he walked out the door and flipped off the lights that the AC units both turned off. He went back in to verify, and when he turned the lights back on, the AC units both started again. Turned the lights off, and they both shut off again.

      The genius (lowest bid) company that we hired to install the new AC unit had wired both units into the wall switch for the lights! So when we were there checking, we had the lights on and everything worked perfectly. We went home for the day and turned off the lights, and the AC units went with them. Needless to say, that company isn't even allowed inside our building anymore!

      • Re: (Score:3, Funny)

        by Linker3000 ( 626634 )
        More AC fun - all in the same room - as refurbed into a computer room in the 1980s by the in-house maintenance team:

        1) They re-lined the walls, but also boxed in the radiators without turning them off - we had numerous AC engineers turning up and scratching their heads while they re-did their thermal load calculations until we realised our walls were warm to the touch.

        2) They put the AC stat on a pillar by the windows so in the summer, the heat radiation falling on the stat from outside made the AC run
  • cascade failures (Score:4, Interesting)

    by Velox_SwiftFox ( 57902 ) on Sunday August 15, 2010 @10:02AM (#33256822)

    How can this leave out the standard cascade failure scenario?

    Trying to achieve redundancy, someone gets what they think is a worst-case 30A of servers with multiple power supplies, plugs one power supply on each server into one PDU rated 30A, and the other power supply into the other PDU.

    They may or may not know that the derated (continuous-load) capacity of each circuit is only 24A, and the data center is unlikely to warn them, as they only appear to be using at most 15A per circuit.

    Anyway, something happens to one of the PDUs and its power is lost. Perhaps power factor corrections (remember the derating?) plus cron jobs running at midnight on all the servers raise the load simultaneously. Maybe it's just the feared failure of one of the PDUs, the very thing that motivated the attempt at "redundancy".

    In any case, all of the load then lands on the remaining circuit, which is now carrying the full ~30A on a circuit derated to 24A, and it always fails. The whole rack loses power.

    • Re: (Score:3, Interesting)

      by omglolbah ( 731566 )

      Yep, it is one of the specific steps when we define requirements for server racks. Sadly not all the customers pay attention and then yell for us to come fix the mess when they find out years later :p

      This is especially fun if the trip to the "datacenter" involves a helicopter ride to the oil rig where it is located :p

  • So I'm working in this company's datacenter on their networking equipment. But it's installed in such a crappy way that there's a floor tile pulled right next to the rack and the cables are run down into that hole. I'm working around on the equipment and step down into the hole by accident. At that point I notice that it's suddenly a lot quieter where I'm standing. I look down and realize I'd just stepped on the power button of a power strip that most of the networking equipment was plugged into. Oh Sh!t.

  • by ei4anb ( 625481 ) on Sunday August 15, 2010 @10:28AM (#33256938)
    Those data centers in the article sound huge, some may even have up to ten servers!
    • Well, those are the ones that will probably have fewer people, with less experience, servicing them... You can try to manage the first couple of servers with some "flexibility"; when you have hundreds of them, everything must be done "by the book" or things definitely go wrong.

      When I got to my current job, a couple of servers (our first rack servers) were installed, and nobody was "in charge" of them. Being myself a guy with initiative, I did the best that I could with them even if I had only experience in progr

  • Mainframe days story (Score:5, Interesting)

    by assemblerex ( 1275164 ) on Sunday August 15, 2010 @10:45AM (#33256994)
    The old tape machines (six foot tall) used to put out a tremendous amount of heat. Space is at a premium, so in the mainframe room the drives were normally put edge to edge,
    with one pushing air in and the other pulling air out. The machines had two 10-12" fans per unit, so stacking two or three units was fine. One site had so many machines side to
    side (over 7), the air coming out the last machine regularly set things on FIRE. It was not uncommon for the machine to ignite lint going through the stack, with it coming out the
    end as a small explosion like dust in a grain silo explosion. A fire extinguisher was kept on hand, and the wall eventually got a stainless steel panel because it was so common.
    • One site had so many machines side to side (over 7), the air coming out the last machine regularly set things on FIRE. It was not uncommon for the machine to ignite lint going through the stack, with it coming out the end as a small explosion like dust in a grain silo explosion. A fire extinguisher was kept on hand, and the wall eventually got a stainless steel panel because it was so common.

      I call BS.

      Thermodynamics 101: If the air coming out of the last unit is hot enough to ignite things, then what is the minimum temperature of the stuff inside?

      I can maybe believe that there was some sort of electrical fault inside that was infrequently arcing (maybe when a dust bunny passed through the fans?) and that might have caused the apparent problem. But there's no way to have functional electronics that are hot enough to ignite organic matter.

      • I don't know how old these tape machines were, but I can assure you that back in the day we had power systems that used vacuum tubes, and the tube space needed to be air cooled. The air temperature could reach several hundred Celsius if the fans stopped. Shortly after this would come the plop of inrushing air as the envelope of a KT88 collapsed at the hottest point. It would not be good design practice to series the units like this, but again back in the day thermal management wasn't even a black art. The l
  • by AnAdventurer ( 1548515 ) on Sunday August 15, 2010 @10:52AM (#33257020)
    When I was IT manager for a big retail manufacturer we had a cross-country move from the SF bay area to TN (closer to shipping hubs and lower tax rates). I was hired for the new plant, and I was there setting up everything (I did not know then that the company knew next to nothing about technology). The last thing shipped before the company shut down for the move was the data server, sent via 2-day FedEx. The CFO packed it up and shipped it out; as the driver pulled away from the bay, the server fell off the bumper and onto the cement. They picked it up (it looked undamaged in its box). When I opened it there was a shower of parts. A hard drive had detached from the case but not the cable and had swung around in that case like a flail. The CFO had NOT INSURED the shipment or taken anything apart. That and much more to save $50 here and there.
  • Data center power (Score:4, Interesting)

    by PPH ( 736903 ) on Sunday August 15, 2010 @11:22AM (#33257156)

    Back when I worked for Boeing, we had an "interesting" condition in our major Seattle area data center (the one built right on top of a major earthquake fault line). It seems that the contractors who had built the power system had cut a few corners and used a couple of incorrect bolts on lugs in some switchgear. The result of this was that, over time, poor connections could lead to high temperatures and electrical fires. So, plans were made to do maintenance work on the panels.

    Initially, it was believed that the system, a dual redundant utility feed with diesel gen sets, UPS supplies and redundant circuits feeding each rack, could be shut down in sections, so the repairs could be done on one part at a time, keeping critical systems running on the alternate circuits. No such luck. It seems that bolts were not the only thing the contractors skimped on. We had half of a dual power system. We had to shut down the entire server center (and the company) over an extended weekend*.

    *Antics ensued here as well. The IT folks took months putting together a shut down/power up plan which considered numerous dependencies between systems. Everything had a scheduled time and everyone was supposed to check in with coordinators before touching anything. But on the shutdown day, the DNS folks came in early (there was a football game on TV they didn't want to miss) and pulled the plug on their stuff, effectively bringing everything else to a screeching halt.

    • Re: (Score:3, Interesting)

      by thegarbz ( 1787294 )
      Basic rules of redundancy. A UPS isn't!

      We had a similar situation to yours, except we actually had a dual power system. The circuit breakers on the output, however, had very dodgy lugs on their cables, which caused the circuit breakers to heat up, A LOT. This moved them very close to their rated trip current. When we eventually came in to do maintenance on one of the UPSes we turned it off as per procedure; naturally the entire load moved to the other. About 30 seconds later we hear a click come from a dis
  • ... that an idiot with his/her hand on a switch, a breaker or a power cord is more dangerous than even the worst computer bug.

    (Judging from the houses that I see on my way to work each morning, some people shouldn't even be allowed to buy PAINT without supervision. And we provide them with computers and access to the Internet nowadays!)

    (If that doesn't terrify you, you have nerves of steel.)

  • by eparker05 ( 1738842 ) on Sunday August 15, 2010 @11:52AM (#33257324)

    My mother, who is a database admin for a county office (and has been for a long time), was getting a tour of a brand new mainframe server in the basement of her department's building back in the early 80's. At some point during the tour a large red button was pointed out that controlled the water-free fire suppression system. When pressed it activated a countdown safety timer that could be deactivated when the button was pulled back out.

    Always wanting to try things for herself, she went to the red button at the end of the tour and pressed it. No timer was activated; instead, a noticeable shutting-down sound was heard as the buzzing of the mainframe died down. She had accidentally hit the manual power-off button for the mainframe, which was situated very close to the fire suppression button and happened to look similar.

    All the IT staff of that building got to go home early that day because the mainframe took several hours to reboot and it was already lunch. She was very embarrassed and I have heard that story many times.

  • by martyb ( 196687 ) on Sunday August 15, 2010 @12:03PM (#33257376)

    Ah, the memories! Here are some of the stories I've heard and/or witnessed over the years.

    1. Orientation: As a co-op student at DEC in 1980, I was told this (possibly apocryphal) story. On seemingly random occasions, a fixed-head disk drive would crash at the main plant in Maynard, Massachusetts. Not all of the drives, just a couple. Apparently the problem was isolated when someone was midway between the computer room and the loading dock. They heard the bump of a truck backing hard into the loading dock followed very shortly by a curse from the computer room! It apparently caused enough of a jolt to cause platters to tilt up and hit the heads... but only on the drives which were oriented north-south; those oriented east-west were not affected. So came the directive that all drives, henceforth, needed to be oriented east-west.
    2. Hot Stuff: Seems that a mini-computer developed a nasty tendency to crash in the early afternoon. But only on some days. Diagnostics were run. Job schedules were checked and evaluated. All the software and hardware checked out A-OK. This went on for quite a while until someone noticed that there was a big window to the outside and that in the early afternoon the sun's light would fall upon the computer. This additional heat load was enough to put components out of expected operational norms and caused a crash.
    3. Cool!: A friend of mine was a field engineer for DEC back in the day when minicomputers had core memory. He was called into a site where their system had some intermittent crashes. He ran diagnostics. All seemed to be within spec. He replaced memory boards. Still crashed. Replaced mother boards. Reloaded the OS from fresh tapes. Still crashed. He finally noticed that one of the fans on the rack was not an official DEC fan. Though it WAS within spec for airflow and power draw, it was NOT within spec for magnetic shielding... it would sporadically cause bit flips in the (magnetic) core memory. Swapping out the fan solved the problem.
    4. This sucked: Another place had a problem with a computer that would sometimes crash in the early evening after everyone went home for the day. Well, not everyone. The cleaning staff apparently noticed a convenient power strip on a rack and plugged their vacuum cleaner into it. The resulting voltage sag took down the server!
    5. Buttons: Every couple years, IBM would hold an open house where anyone in the community could come in and get a tour of the facility (Kingston, NY). This was back in 1984, IIRC. PCs were just starting to make an impact at this time... big iron was king. We're talking about a huge raised-floor area with multiple mainframes, storage, tape drives... MANY millions of dollars per system. A few hundred users on a system was quite an accomplishment back then and these boxes could handle a thousand users. We were also in the midst of a huge test effort of the next release of VM/SP. I had come in that Sunday afternoon to get several tests done (death marches are no fun). All of a sudden the mainframe I was on crashed. Hard. I'd grown accustomed to this as we were at a point where we were "eating our own dog food"; the production system was running the latest build of the OS. But, an hour later and it was STILL down. Apparently, a tour guide had led a group to one of the operator consoles and a child could not resist pressing buttons. Back in those days, booting a mainframe meant "re-IPL" Initial Program Load. Unless the computer was REALLY messed up and wouldn't boot. Only then would someone re-IML the system. Initial Microcode Load. Guess which button the kid pressed? It left the system in such a wonky state that it had to be reloaded from tape. All the development work of that weekend was lost and had to be recreated and rebuilt. (It was a weekend and backups were only done on weekday nights.) It took us a week to get things back to normal.
    6. Drivers: A friend of mine at IBM told me of an
    • Magic/More Magic (Score:3, Informative)

      by Dadoo ( 899435 )

      I can't believe no one's posted Guy Steele's Magic/More Magic story, yet:

              http://everything2.com/user/Accipiter/writeups/Magic [everything2.com]

  • Washer in the UPS (Score:5, Interesting)

    by Bob9113 ( 14996 ) on Sunday August 15, 2010 @12:43PM (#33257594) Homepage

    My favorite was at a big office building. An electrician was upgrading the fluorescent fixtures in the server room. He dropped a washer into one of the UPSs, where it promptly completed a circuit that was never meant to be. The batteries unloaded and fried the step-down transformer out at the street. The building had a diesel backup generator, which kicked in -- and sucked the fuel tank dry later that day. For the next week there were fuel trucks pulling up a few times a day. Construction of a larger fuel tank began about a week later.

  • by 1984 ( 56406 ) on Sunday August 15, 2010 @12:45PM (#33257604)

    I had one a few years back which highlighted issues with both our attention to the network behavior, and the ISP's procedures. One day the network engineer came over and asked if I knew why all the traffic on our upstream seemed to be going over the 'B' link, where it would typically head over the 'A' link to the same provider. The equipment was symmetrical and there was no performance impact, it was just odd because A was the preferred link. We looked back over the throughput graphs and saw that the change had occurred abruptly several days ago. We then inspected the A link and found it down. Our equipment seemed fine, though, so we got in touch with the outfit that was both colo provider and ISP.

    After the usual confusion it was finally determined that one of the ISP's staff had "noticed a cable not quite seated" while working on the data center floor. He had apparently followed a "standard procedure" to remove and clean the cable before plugging it back in. It was a fiber cable and he managed to plug it back in wrong (transposed connectors on a fiber cable). Not only was the notion of cleaning the cable end bizarre -- what, wipe it on his t-shirt? -- and never fully explained, but there was no followup check to find out what that cable was for and whether it still worked. It didn't, for nearly a week. That highlighted that we were missing checks on the individual links to the ISP and needed those in addition to checks for upstream connectivity. We fixed those promptly.

    Best part was that our CTO had, in a former misguided life, been a lawyer and had been largely responsible for drafting the hosting contract. As such, the sliding scale of penalties for outages went up to one month free for multi-day incidents. The special kicker was that the credit applied to "the facility in which the outage occurred", rather than just to the directly affected items. Less power (which wasn't included in the penalty), the ISP ended up crediting us over $70K for that mistake. I have no idea if they train their DC staff better these days about well-meaning interference with random bits of equipment.

    • Re: (Score:3, Informative)

      by Jayfar ( 630313 )

      After the usual confusion it was finally determined that one of the ISP's staff had "noticed a cable not quite seated" while working on the data center floor. He had apparently followed a "standard procedure" to remove and clean the cable before plugging it back in. It was a fiber cable and he managed to plug it back in wrong (transposed connectors on a fiber cable). Not only was the notion of cleaning the cable end bizarre -- what, wipe it on his t-shirt? -- and never fully explained, but there was no followup check to find out what that cable was for and whether it still worked. It didn't, for nearly a week.

      Actually there's nothing odd about cleaning a fiber connection at all and it is a very exacting process (see link below). Apparently exacting in this case just didn't include re-inserting the ends in the right holes.

      Inspection and Cleaning Procedures for Fiber-Optic Connections
      http://www.cisco.com/en/US/tech/tk482/tk876/technologies_white_paper09186a0080254eba.shtml [cisco.com]

      • Re: (Score:3, Informative)

        by 1984 ( 56406 )

        That's what I was getting at -- it's not as if it's a simple case of blowing on the end to clear out some fluff. Detailed procedures, including not least unplugging the other end of said cable to make sure it's unlit, which would include finding said other end. And likely going to get the various items required for the cleaning procedure. Which would add up at least to a conversation or two, and perhaps one with us the customer discussing the topic. I'm not disagreeing with cleaning of fiber cables sometimes

  • Fun with PIX (Score:3, Insightful)

    by mkiwi ( 585287 ) on Sunday August 15, 2010 @12:52PM (#33257628)

    I had fun with a company a while back. They are about 300 employees and ~$90 mil/year, so this is a small corporation.

    Anyway, the company was trying to get a VPN tunnel established to their China office, and they were having a hell of a time at it. The employees on the China side had no IT experience so everything was done remotely.

    It just so happens that one of the Chinese employees was recruited to make a change to the PIX firewall on the China side in order to get everything working. To our astonishment, it worked, and we had a secure VPN tunnel established.

    The problem was that accounts in the US started to get locked out, alphabetically, every 30 minutes. Our Active Directory was getting tons of password-cracking attempts from inside our internal network. I was using LDAP to develop an application at the time, so naturally I was suspected of causing all these lockouts.

    Fast-forward a week. We look at the configuration of the Chinese firewall and it allowed all access from any IP address on the Chinese side. In other words, crackers were trying to get into our systems through our VPN tunnel in China. In effect, our corporate LAN had been directly connected to the Internet. Once we figured that out, I was free to go back to work and the network lived to see another day, but that incident caused major trouble for all our employees.

    Moral of the story: Don't trust a Chinese firewall.

  • by gagol ( 583737 ) on Sunday August 15, 2010 @01:14PM (#33257732)
    I was employed at a 50-employee publicity company. They have a couple of offices across the country and need to share a filesystem through WAFS. The main repository for the WAFS was running off a USB drive, connected to the server using a wire that was too short. I pointed out the problem multiple times to my IT boss (no IT background whatsoever) without success, tried to raise the issue with the owner of the company, without success, and one day the worst happened. The USB controller of the drive fried and we lost the last day of work. The Windows server system went AWOL. It took an external consultant 3½ days to rebuild the main server, which was running the AD, WAFS, Exchange and our enterprise database. It cost us an account worth $12 MILLION.

    The big boss then hired consultants and gave them over a thousand bucks to be told the exact same thing I had pointed out 3 months earlier when I audited the IT infrastructure. Two months later she comes to me and asks how much it would cost to have a bullet-proof infrastructure. I told her to invest around 80K in a virtualisation solution with scripts to move VMs around when the workload changes, and to go with consolidated storage with live backups and replication. It was too expensive. Another three months pass, she hires some consultants, gives them another thousand dollars to get told basically the same thing I told her 3 months earlier... That is when I quit.
  • by Bruha ( 412869 ) on Sunday August 15, 2010 @07:49PM (#33259802) Homepage Journal

    I don't care where you work: if you're on site doing training, you're probably also getting sucked back into the work cycle. I see it all the time at work. I have always preferred offsite training, with the cell phones turned off. It also helps if you have to use your laptop in the lab, because 99% of the time it means you can not VPN into work, so email is not a concern either.

    I think my fellow data center operators would agree we're all understaffed, and I work on a network with hundreds of millions of customers using it on a 24/7 cycle. The other danger nobody speaks of is that some companies are too passive when it comes to testing redundancy, because half the time, while there's redundancy in the system to keep a DMZ up and running, there's no spare DMZ capacity to handle a true outage such as a fiber ring failure that isolates the data center, or some other disaster. Companies need to design their redundancy so that you can unplug the entire data center and your customers never know it, because if you do not, you will rue the day a true outage happens that impacts the entire data center, and you will hear about it on the news later. Not a good thing.
