Stupid Data Center Tricks 305
jcatcw writes "A university network is brought down when two network cables are plugged into the wrong hub. An employee is injured after an ill-timed entry into a data center. Overheated systems are shut down by a thermostat setting changed from Fahrenheit to Celsius. And, of course, Big Red Buttons. These are just a few of the data center disasters caused by human folly."
Network meltdown due to hub cross-connects (Score:5, Interesting)
Can this really happen easily? I thought for really ugly things to happen, you need to have switches (without working STP, that is).
Re:bad article is bad (Score:2, Interesting)
I seem to remember that in the early days of Telehouse London an engineer switched off power to the entire building. Only the two routes out of the UK that had their own back-up power remained up (one was a 256k satellite connection).
Don't try this at work... (Score:2, Interesting)
Not using Cisco ACLs (Score:4, Interesting)
Our entire network was brought down a few years ago when a student plugged a consumer router into his dorm room's port. Said router provided DHCP, and having two conflicting DHCP servers on the network terminally confused everything that didn't use static IPs.
Took our networking guys hours to trace that one down.
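The failure mode above (two DHCP servers answering DISCOVERs on the same segment) can be sketched in a few lines. This is a toy model, not a packet capture: the server IPs and the "authorized" list are hypothetical, and a real check would sniff actual DHCP OFFER packets.

```python
# Toy sketch of rogue-DHCP detection. Assumption: 10.0.0.1 is the
# campus's legitimate DHCP server; anything else answering OFFERs
# is a rogue (e.g. a dorm-room consumer router).
AUTHORIZED_DHCP_SERVERS = {"10.0.0.1"}

def find_rogue_servers(offer_sources):
    """Given the server IPs seen in DHCP OFFER packets,
    return any that are not on the authorized list."""
    return sorted(set(offer_sources) - AUTHORIZED_DHCP_SERVERS)

# The consumer router (192.168.1.1, hypothetical) starts answering
# alongside the real server:
offers = ["10.0.0.1", "192.168.1.1", "10.0.0.1"]
print(find_rogue_servers(offers))  # -> ['192.168.1.1']
```

In practice the hard part is collecting the OFFERs in the first place, which is why it took the networking guys hours; tools like Wireshark's `bootp.option.dhcp == 2` filter do that legwork.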
Quad Graphics 2000 (Score:5, Interesting)
In the summer of 2000 I worked at Quad/Graphics (printer, at least at that time, of Time, Newsweek, Playboy, and several other big-name publications). I was on a team of interns inventorying the company's computer equipment -- scanning bar coded equipment, and giving bar codes to those odds and ends that managed to slip through the cracks in the previous years. (It's amazing what grew legs and walked from one plant to another 40 miles away without being noticed.)
One of my co-workers got curious about the unlabeled big red button in the server room and pressed it. Because he then lied about hitting it, the servers were down for a day and a half while a team tried to find out what wiring or environmental-monitor fault had caused the shutdown. That little stunt cost my co-worker his job and cost the company several million dollars in productivity: it slowed or stopped work at three plants in Wisconsin, one in New York, and one in Georgia.
The real pisser was the guilty party lying about it, thereby starting the wild goose chase. If he had been honest, or even claimed it was an accident, the servers would have all been up within the hour, and at most plants little or no productivity would have been lost.
The reality: a 20-year-old's shame cost a company millions.
Re:Not using Cisco ACLs (Score:3, Interesting)
I had that error too, on a city-wide network. The solution? Get an IP from the offending router, go to its web interface, use the default password to get in, and disable DHCP.
Re:Don't try this at work... (Score:4, Interesting)
- run thinnet lines along the floor under people's desks, where they occasionally get kicked and aggravate loose crimps, taking entire banks of computers (in a different wing of the building) off the LAN with maddening irregularity
- plug a critical switch into one of the UPS's "surge only" outlets
- install expensive new BayTech RPMs on the servers at all remote locations, and forget to configure several of the servers to "power on after power failure"
- on the one local server you cannot remote-manage, plug its inaccessible monitor into a wall outlet
honorable mention:
- junk the last service machine you have lying around that has a SCSI card in it while you still have a few servers using SCSI drives
Re:Network meltdown due to hub cross-connects (Score:4, Interesting)
According to CCNA Sem 1, a hub is a multiport repeater that operates at layer 1, and a switch is a multiport bridge that operates at layer 2. I thought these definitions were universally accepted, until I used non-Cisco devices. I now have to refer to L2 and L3 switches, even though CCNA taught me that these are switches and routers, respectively.
Re:bad article is bad (Score:2, Interesting)
But.....
I only got a 200 on my English SAT. I's got no writin' skills. That's why I became a computer geek instead.
Re:bad article is bad (Score:3, Interesting)
The first one is at the IU School of Medicine, and I'm very familiar with that place: they have no data center to speak of, I don't know that person, and I never heard of that incident. Also, who doesn't run spanning tree with BPDU guard and other such protections? I know for a fact that IU does.
Something is very very wrong with that article.
My favourite human error - a true story (Score:5, Interesting)
When he arrived, most of the staff had gone home and the skeleton IT staff didn't want to hang around. So, they sent him away on the basis that his work wasn't "scheduled".
Everybody came back on Monday to find totally fried servers.
Re:Network meltdown due to hub cross-connects (Score:3, Interesting)
It's so irritating when you ask for a hub, and someone hands you a switch. Stores do the same thing. It's hard enough to find hubs, let alone find them when the categorization lumps them together.
No, I said hub. I don't want switching. I want bits coming in one port to come back out of all the others.
You can do that with a switch, but getting a switch that can do that is a bit more pricey than a real hub...
cascade failures (Score:4, Interesting)
How can this leave out the standard cascade failure scenario?
Trying to achieve redundancy, someone gets what they think is worst-case-30A of servers with multiple power supplies, plugs one power supply on each into one PDU rated 30A, one power supply into the other.
They may or may not know that the derated capacity of the circuit is only 24A; the data center is unlikely to warn them, since they only appear to be drawing at most 15A per circuit.
Anyway, something happens to one of the PDUs and its power is lost. Perhaps a breaker trips from the combination of power factor (remember the derating?) and midnight cron jobs on all the servers raising the load simultaneously. Or maybe it's simply the failure of one of the PDUs, the very thing the "redundancy" was meant to guard against.
In any case, all of the load is then put on the remaining circuit, and it always fails. The whole rack loses power.
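The arithmetic behind the trap above is worth spelling out. A minimal sketch, assuming the NEC-style 80% continuous-load derating the parent alludes to (a 30A breaker may only carry 24A continuously); the specific load figures are from the scenario, not from any real facility:

```python
# Sketch of the "redundant" PDU trap: two circuits each show a safe
# load, but a single surviving circuit after failover does not.
def failover_overloads(total_load_amps, breaker_rating_amps=30.0, derate=0.8):
    """After one PDU fails, the surviving circuit carries the whole
    load. Return True if that exceeds the derated capacity."""
    derated_capacity = breaker_rating_amps * derate  # 30 A -> 24 A
    return total_load_amps > derated_capacity

# Each of two PDUs shows only ~15 A, so everything looks fine...
print(failover_overloads(15.0))  # one circuit's share alone: False
# ...but when one PDU dies, the survivor sees the full 30 A worst case:
print(failover_overloads(30.0))  # 30 A > 24 A: True, breaker trips
```

The fix is to size each circuit so the *total* worst-case load fits within one circuit's derated capacity, not to split the apparent load in half and call it redundant.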
Re:Network meltdown due to hub cross-connects (Score:3, Interesting)
Cheap deep-packet inspection (using an old hub and Wireshark)?
Mainframe days story (Score:5, Interesting)
with one pushing air in and the other pulling air out. The machines had two 10-12" fans per unit, so stacking two or three units was fine. One site had so many machines side to side (over 7) that the air coming out of the last machine regularly set things on FIRE. It was not uncommon for the machine to ignite lint going through the stack, with it coming out the end in a small explosion, like dust in a grain silo. A fire extinguisher was kept on hand, and the wall eventually got a stainless steel panel because it was so common.
FedEx, get insurance/ship your server (Score:4, Interesting)
Data center power (Score:4, Interesting)
Back when I worked for Boeing, we had an "interesting" condition in our major Seattle area data center (the one built right on top of a major earthquake fault line). It seems that the contractors who had built the power system had cut a few corners and used a couple of incorrect bolts on lugs in some switchgear. The result of this was that, over time, poor connections could lead to high temperatures and electrical fires. So, plans were made to do maintenance work on the panels.
Initially, it was believed that the system (a dual-redundant utility feed with diesel gen-sets, UPS supplies, and redundant circuits feeding each rack) could be shut down in sections, so the repairs could be done one part at a time while critical systems kept running on the alternate circuits. No such luck. It seems that bolts were not the only thing the contractors skimped on: we had half of a dual power system. We had to shut down the entire server center (and the company) over an extended weekend*.
*Antics ensued here as well. The IT folks took months putting together a shut down/power up plan which considered numerous dependencies between systems. Everything had a scheduled time and everyone was supposed to check in with coordinators before touching anything. But on the shutdown day, the DNS folks came in early (there was a football game on TV they didn't want to miss) and pulled the plug on their stuff, effectively bringing everything else to a screeching halt.
Re:cascade failures (Score:3, Interesting)
Yep, it is one of the specific steps when we define requirements for server racks. Sadly not all the customers pay attention and then yell for us to come fix the mess when they find out years later :p
This is especially fun if the trip to the "datacenter" involves a helicopter ride to the oil rig where it is located :p
Washer in the UPS (Score:5, Interesting)
My favorite was at a big office building. An electrician was upgrading the fluorescent fixtures in the server room. He dropped a washer into one of the UPSs, where it promptly completed a circuit that was never meant to be. The batteries unloaded and fried the step-down transformer out at the street. The building had a diesel backup generator, which kicked in -- and sucked the fuel tank dry later that day. For the next week there were fuel trucks pulling up a few times a day. Construction of a larger fuel tank began about a week later.
Know your colo contracts (Score:3, Interesting)
I had one a few years back which highlighted issues with both our attention to the network behavior, and the ISP's procedures. One day the network engineer came over and asked if I knew why all the traffic on our upstream seemed to be going over the 'B' link, where it would typically head over the 'A' link to the same provider. The equipment was symmetrical and there was no performance impact, it was just odd because A was the preferred link. We looked back over the throughput graphs and saw that the change had occurred abruptly several days ago. We then inspected the A link and found it down. Our equipment seemed fine, though, so we got in touch with the outfit that was both colo provider and ISP.
After the usual confusion it was finally determined that one of the ISP's staff had "noticed a cable not quite seated" while working on the data center floor. He had apparently followed a "standard procedure" to remove and clean the cable before plugging it back in. It was a fiber cable and he managed to plug it back in wrong (transposed connectors on a fiber cable). Not only was the notion of cleaning the cable end bizarre -- what, wipe it on his t-shirt? -- and never fully explained, but there was no followup check to find out what that cable was for and whether it still worked. It didn't, for nearly a week. That highlighted that we were missing checks on the individual links to the ISP and needed those in addition to checks for upstream connectivity. We fixed those promptly.
Best part was that our CTO had, in a former misguided life, been a lawyer and had been largely responsible for drafting the hosting contract. As such, the sliding scale of penalties for outages went up to one month free for multi-day incidents. The special kicker was that the credit applied to "the facility in which the outage occurred", rather than just to the directly affected items. Power aside (it wasn't included in the penalty), the ISP ended up crediting us over $70K for that mistake. I have no idea if they train their DC staff better these days about well-meaning interference with random bits of equipment.
None of us are innocent. (Score:3, Interesting)
Just about anyone who has been in the line of fire as sysadmin for long enough will recall some ill-conceived notion that caused untold trouble. Since my earliest experience with commercial computers was in a batch-processing environment, my initial mishaps rarely inconvenienced anybody other than myself. But I still recall an incident much later (early '90s) when I inadvertently managed to delete the ":per" directory on a Data General mainframe (more or less equivalent to
USB drive running mission critical WAFS (Score:4, Interesting)
Re:bad article is bad (Score:2, Interesting)
Back in the late '80s when I was working on Prime "mini-computers" (as such machines were then known), I would receive periodic calls from Prime's tech support to alert me to (yet) another bug found in their BRMS (Backup/Restore Management System), and would I pretty-please stop using it. As it happened, I was using their less sophisticated but otherwise bombproof dump/restore utilities, so this was never an issue for me, but it was still pretty funny...
A classic (Score:1, Interesting)
One of my favorite stories my grandfather told me is a story about a computer that would screw up its calculations at the same time every day, about 2pm. This was back in the 60s, when computers were rather large. Basically, if the accountants ran the job in the morning, everything checked out. But if they were to run the batch through in the afternoon, their results would all be off. After two days of checking all the standard stuff (bad memory modules, bad cooling, what have you), he noticed that there was this loud banging noise that would start in the afternoon. He went out of the office to the next set of offices over, which had a machine shop. Turns out the machine shop press would start running about 2pm every day, and that machine press happened to be on the same circuit as the adding machine, so it would draw off just enough power to screw with the results.
Re:None of us are innocent. (Score:4, Interesting)
Air Force fun (Score:1, Interesting)
Some Air Force instructors told us in class that one of the tech school instructors once wanted to brush up on his Cisco skills, so he asked the IT shop if they had any old routers lying around. They had one that they thought was cleared out, so they gave it to him and told him to play around with it. He made the wonderful mistake of plugging it into the network inside the building, and it started propagating all its old routing information across the network, which was hooked into the unclassified base network.
Why did you have to bring this up? (Score:1, Interesting)
Why did you have to bring this up? You brought back bad memories of the time that I actually did this. I worked at the time as a computer operator in the Southland Corporation data center in Dallas. We had moved into our newly built headquarters building, and there was a red light switch on the wall by the master breakers. We all wondered for days what the switch would do, and I was the only one who eventually got brave (stupid?) enough to throw it.

The master breakers to the computer immediately dropped out. We tried to flip the master breakers back up, but they wouldn't budge. We had to call building maintenance, and after an hour of delayed production waiting for them to come in, we were able to get power to the computers back. We didn't know that you had to reset the breakers first by forcing them all the way down.

I was scheduled to be promoted into programming at any time, so I was really sweating it. The other computer room operators and the evening manager decided not to tell anyone who had caused the breakers to trip. There was a major management inquisition about what had happened, but everyone kept quiet. Finally, the evening manager was told that he had until the next day to out the culprit. He was going to do it, but said that management decided to drop the matter. I was saved. A glass-covered wooden box was made to cover the switch, and I was promoted to programmer shortly afterwards. Climb mountains, but don't ever flip switches just because they are there.
Re:bad article is bad (Score:3, Interesting)
Re:None of us are innocent. (Score:1, Interesting)
My co-worker was a Windows guy learning the *nix commands on a Mac with OS X. He tried rm -r on a mount point that he had learned to map to our dev server housing the builds and source. Just like Mr. Jobs kept saying, it just works. Luckily, the server was backed up nightly. The restore took about a day. Everybody got read-only access after that.
Re:Data center power (Score:3, Interesting)
We had a similar situation to yours, except we actually had a dual power system. The circuit breakers on the output, however, had very dodgy lugs on their cables, which caused the breakers to heat up, A LOT. This moved them very close to their rated trip current. When we eventually came in to do maintenance on one of the UPSes, we turned it off per procedure, and naturally the entire load moved to the other. About 30 seconds later we heard a click from a distribution board on the wall, and suddenly refinery operators were shouting panicked abuse through the two-ways to turn the damn thing back on.
These UPSes fed the emergency shutdown system of an oil refinery. Operators don't like their naps interrupted.