Microsoft Innovates Tent Data Centers
1sockchuck writes "The outside-the-box thinking in data center design continues. Microsoft has tested running a rack of servers in a tent outside one of its data centers. In the test, a small group of servers ran for seven months without failures, even when water dripped on the rack. The experiment builds on Intel's recent research on air-side economizers, suggesting that servers may be sturdier than believed and leaving more room to save energy by optimizing cooling set points and other key environmental settings in the server room."
Or otherwised titled (Score:5, Funny)
Microsoft Pitches a Tent.
Re:Or otherwised titled (Score:4, Funny)
This sounds like an in tents way to manage a data center.
Re:Or otherwised titled (Score:5, Funny)
Re:Or otherwised titled (Score:5, Funny)
Re:Or otherwised titled (Score:4, Funny)
But at least it's one sure-fire way to turn your sysadmins into happy campers...
Al Quaida IT, Designed by Microsoft (Score:2)
now let 'em try and sell it.
Re:Or otherwised titled (Score:5, Funny)
Re: (Score:3, Funny)
Slap a bag over Clippy's head and it wouldn't be half bad!
Re:Or otherwised titled (Score:5, Funny)
Oh! Look at the cool clowns!
Oh... wait, those are Windows Administrators... my bad.
Not particularly innovative... (Score:4, Informative)
Stop giving PHB dumb ideas (Score:2)
marketing for the new millenium... (Score:5, Funny)
Maybe the PHB's are just trying to market to the many people becoming homeless due to the increase in foreclosures [chicagotribune.com].
If there are going to be more citizens living in tent cities like during the great depression, corporate America will want to be there to provide desperately needed services, like up to the minute stock quotes and SPAM for new investment opportunities in Nigeria.
Re: (Score:3, Funny)
Nah. Some MS PHB probably read "1.5 billion for Lehman data center" and realized he could turn a killer profit by setting up a few hundred bucks' worth of tents.
Sensible? (Score:5, Insightful)
Re:Sensible? (Score:5, Funny)
Kids these days...so reckless......
Re: (Score:2, Funny)
You could fertilize them with this article
Garden? (Score:2, Funny)
How about using them in chillier climates to warm greenhouses and get a carbon offset? Might even prompt a healthier diet for workers if they were allowed to graze.
Re: (Score:3, Insightful)
That was my thought. Who needs a key to the door when a pocket knife will make another.
I'm wondering which Fortune 500 company will be lulled into doing something like this "because nobody ever got fired for going with Microsoft", just to find that they have to report a customer information loss in the future.
Re: (Score:2)
You're right, then we'll really be in trouble. I sure don't want a thief with the strength of ten men running around.
uptime! (Score:5, Funny)
Wow 7 months uptime... was it running Linux?
Re:uptime! (Score:5, Funny)
Re: (Score:2)
For major corporations the bottles read "If erection lasts longer than 4 months" rather than "If erection lasts longer than 4 hours". Priapism ftl!
That's all fine and good (Score:5, Insightful)
But what you're really doing in a situation like this is dodging bullets, rather than proving that we overbuild the environmental controls in our server rooms. We KNOW that excess heat, water, humidity, etc. can kill servers. These are facts that cannot be ignored.
I understand the idea here but still, do you really want to tell your bosses that the server room got to 115 F in July and killed the SAN because you skimped on the air units?
Re:That's all fine and good (Score:5, Insightful)
No one is suggesting that heat, humidity, water can't kill servers. The point is overall uptime vs. overall cost.
If you build your SAN or whatever with enough redundancy or capacity that it can handle one or a handful of servers going down in that 115F July heat with little to no impact on uptime or productivity, and you also save a boatload of money on AC installation, cooling, and maintenance because the cost of replacing/rebuilding those servers is less than the cost of cooling the server room, you have a net win. It's purely a numbers game.
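As a rough illustration of that numbers game, here is a minimal sketch; the pool size, failure tolerance, and per-server downtime probabilities are hypothetical, chosen only to show the shape of the calculation:

```python
from math import comb

def outage_probability(n: int, k: int, p_down: float) -> float:
    """Probability that more than k of n servers are down at the same time,
    assuming independent failures with per-server downtime probability p_down."""
    return 1.0 - sum(comb(n, i) * p_down**i * (1 - p_down)**(n - i)
                     for i in range(k + 1))

# Hypothetical pool: 20 servers, the service tolerates up to 2 being down at once.
# Compare a babied room (a server is down ~0.5% of the time) with a warm room (~1%).
for p in (0.005, 0.01):
    print(f"p_down={p:.3f}: outage probability ~ {outage_probability(20, 2, p):.6f}")
```

Even with the per-server downtime doubled, the chance of enough simultaneous failures to hurt the service stays tiny, which is exactly why the comparison comes down to cooling dollars versus replacement dollars.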
Re:That's all fine and good (Score:5, Interesting)
I'm pretty sure my SAN's redundancy has nothing to do with servers attached to it dying.
With the July heat, it's not just the baked electronics in the servers, either. Your hard drives become less and less reliable, and their expected lifetime is far shorter after they've operated for any length of time in conditions like you're experiencing.
You also completely ignore the cost of the downtime itself. Doesn't matter how much it costs to restore the data if you're down long enough that your clients lose faith in you and leave.
"Good will" is on an account sheet for a reason.
Re: (Score:2)
You're right that you reduce the life span of the drives... but then how many of the drives ever make it to the end of their life span anyway?
I know places that replace hardware based on when it was purchased... after x years it is replaced.
Even if it had a failure and was replaced, the counter doesn't get reset.
If you can run it for less money with the side effect of shortening the life span... as long as that life span is still > x, then you just saved money.
take for instance a drive that
at 60f wil
Re: (Score:2)
If the drive were the only variable, then you're right, it would make sense. Instead, there are power supplies, capacitors, and miscellaneous electronic bits (MEBs). Thermal expansion can wreak havoc on things, too. Solder joints can come loose, and intermittent errors crop up.
It's just easier to run your equipment cooler. Your failure rates go down, and your uptime stays high.
Re: (Score:2)
I was just using drives as an example because the grandparent pointed them out.
Yes, you have to factor all that in, but in reality most equipment could be run at a higher temp without issues for its planned lifespan.
While it's nice to say we run it colder so it lasts longer... well, it turns out that in this day and age things get replaced so fast we don't need them to last longer.
When you start evaluating DCs and realize that most nowadays are tracking floorspace turnover when planning new equipment, you realize t
Re: (Score:2)
Funny thing about this "test" is that failure rates went from 3.8-something percent to 4.6-something percent, and they say that's good... but hey, that's a 20 percent increase or thereabouts, isn't it? And both of those numbers are laughable compared to mainframe failure rates, which I think have a decimal point in front of them.
They seem to be saying: "Forget the fact that PCs aren't very reliable, just consider that they are only 20 percent less reliable if you run them in adverse conditions."
WhopDeeDooo!!
Re: (Score:2)
4.6 out of every hundred is about 20% more units failing than 3.8 out of every hundred. That means the expected replacement cost for hardware during its expected life of service goes up by about 20%.
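A quick back-of-the-envelope check of that, using the rounded rates quoted in this thread (illustrative only, not Intel's exact figures):

```python
# Rounded annual failure rates quoted above: ~3.8% baseline vs ~4.6% in the
# relaxed environment.
baseline, relaxed = 0.038, 0.046

relative_increase = (relaxed - baseline) / baseline
extra_per_1000 = (relaxed - baseline) * 1000

print(f"Relative increase in failures: {relative_increase:.0%}")                # ~21%
print(f"Extra failed units per 1,000 servers per year: {extra_per_1000:.0f}")   # ~8
```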
Re:That's all fine and good (Score:4, Interesting)
Still, it's an interesting approach even if you're *just* dodging bullets and this is a disaster recovery scenario for your company. If anything it proves that you don't need a white-room, halon-protected, perfectly air conditioned data center to run your business, which seems to be the common belief across the US, European and Canadian enterprise.
Just ask any of the companies in the Gulf area affected by Ike if they would have been glad to have something like this in place a month ago.
I could have told them that computers tend to be resilient. I ran lots of them for many years in a little room at ambient temperature or higher, and high humidity. Every time I opened one of them up to upgrade or something I was amazed that they would even run at all. And the dirt...
An equal and opposite anecdote. (Score:5, Insightful)
And from the other side, I'm constantly telling people to clean the crud out of their machines. Just last week a co-worker brought in her boyfriend's machine because it "would not work". Two minutes of blowing out the dust in the slots (RAM, AGP and PCI) and it booted up just fine.
I'm in agreement with the "dodging a bullet" comment.
Just because it is possible that there might not be problems (unless X, Y or Z happens) is not the same as taking pro-active steps to reduce the potential problems.
Sure, their server handled the water dripping on it.
But then, you would NOT be reading the story (because it would not be published) if the water had shorted out that server. It would have been a case of "Duh! They put the servers in a tent in the rain. What did they expect."
With stories like these, you will NEVER read of the failures. The failures are common sense. You will only read of the times when it seems to have worked. And only then because it seems to contradict "common sense".
Re: (Score:2)
But you didn't have the expectation of providing enterprise class services, either. At least, I hope your clients didn't expect that of those machines.
Re:That's all fine and good (Score:5, Interesting)
You are talking about the other kind of datacenter.
Regarding this issue you have 2 kinds of datacenters:
- the cluster/cloud type, where servers are expendable. They might die but you don't care, because you have loads of them and all your data is redundant (e.g. Google, most nodes of a cluster, web servers, etc.)
- the big iron kind where you buy high quality machines, support, redundant power supplies, redundant NICs, pay people with pagers to babysit them, lower the temperature to increase the MTBF etc.
All this research applies to the first case. You are right to point out that in the second case you will still want to take all the precautions you can to avoid failures.
Re: (Score:2)
OK, first we have Intel with ambient-air cooled datacenters.
Now Microsoft is putting them in tents.
What's next, raincoats in 19" rack size, or is it Server Umbrellas?
citation please? (Score:4, Insightful)
This harks back to mainframe days. In order to keep your warranty valid, you had to strictly control the environment - including having strip recorders to PROVE that you hadn't exceeded temp or humidity limits. The reason was that the heat output of these ECL beasts was so high that they were teetering on the brink of temperature-induced race conditions, physically burning their PCBs and causing thermal-expansion induced stresses in the mechanical components.
Nowadays we are nowhere near as close to max. rated tolerances and therefore can open windows in datacentres when the air-con fails. However, the old traditions die hard and what was true even 10 years ago (the last time I specc'd an ECL mainframe) is no longer valid.
I'd suggest that if it wasn't for security reasons, plus the noise they make and the dust they suck up, most IT equipment could be run in a normal office.
HP Bulletproof Datacenters (Score:2)
I think these stunts are very similar to HP's Bulletproof marketing, where they would shoot a high-powered rifle through the motherboard during operation to demonstrate its redundancy.
You don't want to shoot your server but it demonstrates a worst case scenario to illustrate that it's a bit tougher than you might otherwise think.
Running hot (Score:4, Funny)
a small group of servers ran for seven months without failures, even when water dripped on the rack.
ie: The trick to water proofing is to let your system be constantly near over-heating, any contact with water immediately results in water vapour.
Re: (Score:2)
Vaporware?
Re: (Score:3, Funny)
ie: The trick to water proofing is to let your system be constantly near over-heating, any contact with water immediately results in water vapour.
Does that also work for raccoons? With a rack server in a tent, I'd be more worried about raccoons than a bit of water. If we could vapourise them, that would be great.
Outside security. (Score:5, Insightful)
PHB: Well we just put up all of our servers outside. And it looks great! Say, what is that truck doing? Why is it driving so fast through all the security points... omg !
Re: (Score:2)
Don't put all your servers physically close together, but spread them out over two or more locations. Then you eliminate the single point of failure.
As argued in a previous comment, this experiment is great for distributed computing, where your servers become expendable. One failing in that case doesn't affect the overall business.
Re:Outside security. (Score:5, Funny)
It's Bill Gates and Jerry Seinfeld coming to upgrade your servers to Vista. OMG! Run!
Clarity (Score:5, Funny)
Re:Clarity (Score:4, Insightful)
When I said there would never be any Microsoft servers running in my department, I don't think they quite got my meaning.
They weren't Microsoft servers - they were HP servers. For some reason MS are getting all the publicity and credit from this article, although they've actually done very little to deserve it. It's the hardware that should get the plaudits - typical!
They are tougher than most people think. (Score:4, Interesting)
I also worked with someone who worked night shift as an operator in a large company that did have an air-conditioned computer room. During the day the machine room was treated with reverence, carefully dusted with special cloths, etc.. He told me that at night when they got bored they'd play cricket down the central corridor with a tennis ball and a hard back book. The computer cabinets regularly got hit with the ball and once or twice had people run into them. On one occasion a disk unit started giving "media error warnings" but apart from that no ill effects again.
Re: (Score:3, Funny)
On one occasion a disk unit started giving "media error warnings" but apart from that no ill effects again.
So, apart from doing the exact sort of damage that most technical people would predict you'd see when hard drives are repeatedly subjected to shock, nothing happened?
Re:They are tougher than most people think. (Score:5, Funny)
On one occasion a disk unit started giving "media error warnings" but apart from that no ill effects again.
Understandable. I once watched a cricket match, and pretty much the same thing happened to my brain.
Re: (Score:2)
Yea, I've got to agree with the other posters, "media error warnings" on a disk is severe enough that you can play the game in the bloody hallway instead of the server room.
Great idea (Score:5, Funny)
Datacenter break-ins are becoming more and more commonplace, and it costs so much to replace the reinforced doors etc that the thieves bust up on their way in. Now with this innovation, they can just walk in and take the servers without doing any infrastructure damage. I think I'll pitch (groan) this idea to the boss right now!
microsoft innovates? yeah right. (Score:5, Insightful)
Funny how the military and the live concert people have been doing this for years, but Microsoft innovated putting servers in a tent.
Re: (Score:2)
I can't wait for MS to patent the idea and sue those bastards for infringing! ;)
Re: (Score:2)
These are not military-spec servers, and not many concerts last 7 months. But nice try.
Re: (Score:2)
The same ones you use. If they want to have a "rugged" server, they just load a server OS on a Toughbook.
Is there a reason there's no foot? (Score:4, Funny)
This is ridiculous. There is no situation that comes to mind, even after some consideration, that would compel me to operate anything remotely critical in this manner.
Honestly, servers under a tent. I guess if the ferris wheel ever goes really high tech, the carnies will have something to play solitaire on
Sturdy, not indestructable (Score:5, Insightful)
While it's certainly an interesting experiment, there is no way I would run my company's servers in a tent, especially a leaky tent.
Anyone who has built a few machines knows that hardware can prove to be a lot tougher than many people think it is. We once had a server running for over two years that had been dropped down a flight of steel stairs a few hours before delivery (we got the server free because it was really badly dented and no one thought it would actually run).
There is a difference between the above scenario and one where a whole rack of servers is sitting in a tent, though. One decent tear in the roof could easily flood the tent. Tough as they are, I can't see any server running with water pouring into it, and this scenario would result in the whole tent going down in one go. If you have to have a hot spare for this situation, it's probably just easier to put it in a real building or a shipping container.
Re: (Score:2)
I've wondered why some servers aren't made more like car audio amps. Just hang the guts in a rugged, densely-finned, extruded aluminum case that's used as a heatsink for anything needing one.
Put the warm bits on a PC board laid out so they all touch the case when installed, spooge 'em with thermal paste and bolt 'em in. Have a simple gasketed cover plate for maintenance.
They could be made stackable by casting male and female dovetails into the case. Just slide them together and they'd be self-supporting. (T
Outdoor job (Score:5, Funny)
Oh my, who's that burly, rugged, well-tanned guy with the rolled-up shirtsleeves?
Him? Oh he's our server admin
Cost-Benefit (Score:5, Insightful)
While I agree with the notion of bulletproof data centers, I think one of the points of all these experiments, is that if you can save $100,000 a month on A/C and environmental costs, at the expense of reducing the life of $500,000 worth of hardware by 20%, you actually save money, because you spend so much more maintaining the environment than you do on the hardware itself -- as long as you plan for hardware failure and have appropriate backups (which you should anyway). On the other hand, if your hardware is worth a lot more, relative to your expenses, or if your hardware failure rate would increase sufficiently, then this approach wouldn't make any sense. It's all cost-benefit analysis.
-brian
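As a quick sanity check on that tradeoff, here is a minimal sketch using the figures from the comment above; the five-year amortization period is an assumption added purely for illustration:

```python
# Figures from the comment: $100k/month saved on A/C, $500k of hardware whose
# life is shortened by 20%. Assume (for illustration) a normal 5-year lifespan.
ac_savings_per_year = 100_000 * 12
hardware_value      = 500_000
normal_life_years   = 5
reduced_life_years  = normal_life_years * (1 - 0.20)

extra_hw_cost_per_year = (hardware_value / reduced_life_years
                          - hardware_value / normal_life_years)

print(f"A/C savings per year:         ${ac_savings_per_year:,.0f}")                             # $1,200,000
print(f"Extra hardware cost per year: ${extra_hw_cost_per_year:,.0f}")                          # $25,000
print(f"Net savings per year:         ${ac_savings_per_year - extra_hw_cost_per_year:,.0f}")
```

Under those assumptions the cooling savings dwarf the extra hardware spend, which is the poster's point; flip the ratio of hardware value to environmental cost and the conclusion flips with it.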
Re: (Score:2)
I think one of the points of all these experiments, is that if you can save $100,000 a month on A/C and environmental costs, at the expense of reducing the life of $500,000 worth of hardware by 20%, you actually save money
Maybe.
What are the environmental costs of throwing out a data center's worth of busted hardware every four years instead of five? How many cubic miles of landfill would that be if every company with a datacenter adopted the view that it's better to let a server fry itself every once in a
Re: (Score:2)
Unscheduled downtime due to server failure is something many companies try their best to avoid, even as far as going into severe diminishing returns on the money they pour into the environmental controls. If $10 provides 90% uptime, $100 provides 99% uptime, $1,000 provides 99.9% uptime, and
Location, location, location. (Score:3, Funny)
prior art (Score:3, Funny)
Bollocks (Score:3, Interesting)
> suggesting that servers may be sturdier than believed
Anyone who thought you couldn't run a reliable server in 85+ degree heat in a sweaty, humid room with water dripping on a *sealed chassis* was a moron anyway. Most servers come with filters on the fan vents and are pretty tightly sealed shut otherwise (and none of them vent out of the top of the chassis, because that would impact the servers above and below, so where's the water going to drip into?).
Air conditioning and all the other niceness we get in server rooms is just an insurance policy.
Re: (Score:2)
Intel just proved it was only increasing failures by 0.6% though.
That meets my expectations; a server lasts as long as a server lasts. Its components will fail, on the whole, exactly when they mean to fail, and not because your server room is too warm.
I have had the (dis)pleasure to work on a server which was installed in a print room at a major newspaper publisher. The thing was *CAKED* in black dust, it was nearly an inch thick in places. We took it outside, took the vacuum cleaner to it, and replaced th
ExtremeServers.com? (Score:2)
Next up, you'll have another tech company claiming they've got their servers running without failure in the car park, nay, ground into the tarmac, without a significant increase in failure rate.
It's a bit like extreme ironing [extremeironing.com]; it's probably possible, but why would you want to?
Carnies? (Score:2)
A perfect place for slashdot. (Score:2)
We got Microsoft doing something. (Oh, that can't be good, or it must be flawed.)
We got an experiment that attempts to test extremes. (That means Microsoft wants people to do this in real-world situations.)
It challenges an old truth. (Back in 1980 our server rooms needed to be run at 60F for optimal performance, and they still are, because truth never changes.)
I don't think anyone is saying that your data centers should be allowed to be placed in rough environments. But it shows that you can turn your AC from
What a breeze (Score:2)
Seriously: what happens after a few years when the racks start to rust because of the damp air they have been operating in? They did this in Ireland - plenty of moisture there!
Ignoring the warranty (Score:2)
Unconventional data centers that don't provide adequate services will start costing these companies in other ways: canceled warranties. Warranty clauses always require that the company purchasing the equipment ensure an operating area within device specifications for humidity, temperature, shock, power, etc. The vendor can void the warranty if you go outside these bounds.
Currently this is not much enforced, except in egregious cases. I've even seen a vendor replace servers that fell out of a truck (liter
Robust (Score:2)
However they will run a tiny bit hotter and their overall life may be shortened because of moisture and other elements.
Hell, I remember when the AC failed in a room with 70 servers and a phone system. Got very toasty in there very quickly.
Next from the Marketing Dept... (Score:2)
In other news ... (Score:2)
Sturdy servers, except when they're not servers (Score:2)
I can imagine that good server equipment is much more sturdy than the average rack-geek thinks. That is, of course, unless you're using $299 desktops with server operating systems.
Got a Dimension GX260 running your business of 50 people with the same single IDE hard drive you bought with the system? Leave it inside! Oh, you mean you put a couple of 10k RPM drives with a RAID controller in that box which barely has enough airflow for the base system? Yeah, sure... DON'T cool it and see what happens.
"Power Usage Effectiveness" ignoring fan speed? (Score:2)
In a modern server the fan speed (and power use) varies with the in-temperature. Saving 20kW on AC by running the room warmer doesn't help much if your computers increase the load from 200kW to 240kW just due to increased fan speed.
I see no mention of this in either Intel's or MS's experiments, even though it is the big reason our machine room is specced at a max of 18C intake air. Of course, we spend roughly 5kW to cool 240kW, so I can't really bring myself to think this is a big potential for improvement.
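For what it's worth, the fan-speed argument can be checked with the numbers quoted in this comment (taken as stated, not measured independently):

```python
# Numbers from the comment above: 20 kW saved on A/C by running warmer, while
# the IT load rises from 200 kW to 240 kW as server fans spin up.
ac_saving_kw    = 20
it_load_cool_kw = 200
it_load_warm_kw = 240

net_change_kw = (it_load_warm_kw - it_load_cool_kw) - ac_saving_kw
print(f"Net change in total power draw: {net_change_kw:+} kW")  # +20 kW, a net loss

# The poster's existing overhead: 5 kW of cooling for 240 kW of IT load is
# already only about a 2% cooling overhead.
print(f"Cooling overhead today: {5 / 240:.1%}")
```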
Servers in Iraq (Score:2, Informative)
I was a server admin for a parachute infantry regiment for six years and I can tell you, from experience, that servers are a LOT tougher than people think.
On my most recent trip to Iraq, we ran about 25 Dell servers in a 15x15 ft wooden room with no insulation, very unstable power, a LOT of dust and two small AC units that worked sporadically and didn't blow air directly onto the servers. On average, it was probably 85-100 degrees in the room, depending on what part of the year. The only issues we had was f
Balcony server (Score:2, Interesting)
I've had my server outside on the balcony [gnist.org] for a couple of years here in Norway. No problems at all - temperatures ranging from +40C to -20C:
Five servers, seven months - not impressive (Score:2)
This is a life-cycle thing. So five servers survived seven months. First year failure rates could be as high as 3% and they wouldn't have noticed. Let's see them run that thing for five years.
Corrosion is cumulative. It's not that useful to observe that something didn't fail from corrosion in seven months.
They really just want to sell hardware and software: "There is a possible benefit in having servers fail, in that this failure forces obsolescence and ensures timely decommissioning of servers."
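The statistical point can be made concrete with a small sketch, using the 3% first-year failure rate mentioned above and assuming failures are independent and spread evenly over the year (both assumptions, for illustration only):

```python
# 5 servers, 7 months, ~3% annual failure rate (figure quoted in the comment above).
annual_failure_rate = 0.03
months, servers = 7, 5

p_server_survives = 1 - annual_failure_rate * months / 12
p_no_failures = p_server_survives ** servers

print(f"P(zero failures in {months} months across {servers} servers) ~ {p_no_failures:.1%}")
# ~92%: seeing no failures is the expected outcome even under normal conditions,
# so the result says little about how much abuse the hardware can really take.
```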
Prior Art (Score:2)
I believe Sheriff Joe has the patent on this one [wikipedia.org]
Wow, that is some awesome innovation there.... (Score:2)
And this proves... (Score:3, Insightful)
Intel's experiment was useful, but not surprising to the vast majority of IT people who frequently put racks in closets, their offices, etc., because that's the only place we have for them. Intel is just giving us ammo to tell the datacenter guys that we don't really need them if they're charging too much.
Microsoft's experiment is simply ludicrous. For many obvious reasons, theft being the most obvious, no one would ever actually run a server in a tent unless you're on some scientific expedition to a place where there are no buildings, you're not staying long enough to build one, and there's no bandwidth available, even by satellite.
While Intel is addressing the problem of physical space costing far more than the computers we can store in it, Microsoft is almost making light of it. Stupid Redmond bastards.
Re:Software vs Hardware Engineers (Score:4, Interesting)
Re:More an Add for the Servers (Score:2, Interesting)
If you're going to run them hot, it's best to have them outside because when all those fans go high, it's enough to wake the dead.
Re:Software vs Hardware Engineers (Score:5, Insightful)
They know what the servers will survive, not what they could survive.
When designing a machine to work from 10C to 50C and from 20% to 70% humidity, they don't deliberately design it to fail just outside that range. They just make damned sure it won't fail within those ranges (at least, not because of temperature or humidity).
Microsoft's software engineers can show them what the servers are really capable of, without even testing them out for all four seasons.
Sarcasm ignored, yes, Microsoft (or any of us willing to sacrifice a server for the cause) can indeed demonstrate that a server can live in a more harsh environment than intended. Because, as mentioned above, the hardware engineers didn't design the systems to fail just outside their spec'd range.
We (as a whole) tend to baby servers because they cost a lot... But the cost of maintaining a perfect environment for them far outweighs the price for the actual hardware; If you can chop that expense out of the budget for the 99% of your servers that don't strictly require five-9s uptime, the savings in TCO could potentially far outweigh the increased cost of more frequent hardware replacement.
Re: (Score:3, Interesting)
downtime costs per hour (if not minute) equal to the cost of the hardware
No argument there, and I'm not stepping past the potential losses in terms of business impact; I'm simply pointing out that the cost of the machines themselves isn't exactly negligible (regardless of how those costs stack up compared to other costs). As you probably know, there are people within an organization who get their butts chewed when a service stops running on a box, and there are also those who get their butts chewed when a piece of hardware fails. I'm saying that just because the potential l
Re:pedant (Score:2)
No, the phrase "thinking outside the box" has always meant thinking unconventionally, or outside of assumed (but not actual) limits. It has nothing to do with packaging.
The phrase comes from a puzzle. Take a 3x3 grid of dots (as below) and connect them all with just four straight lines (without lifting pen from paper). The only way to do it is to extend the lines "outside the box" formed by the outermost dots.
Re: (Score:2)
Actually I think the original puzzle is to connect the dots with as few straight lines (and no curved ones!) as possible.
If the dots have size (as in my example above), and are not just zero-dimensional imaginary entities, then it can be done with just three lines.
If "straight" means "as confined to the plane of the page" and you allow the "plane" to be curved in the third dimension, it can be done with one line.
Re: (Score:2)
You can actually do it with three connected lines.
Re:A/C is Expensive (Score:5, Funny)
Not too long ago, there was a small furor in the local media about a major disaster at The State's Technology Services Division. The details were a bit sketchy -- mostly because The State was "unable to comment on an ongoing investigation" -- but what was reported was that, for two full days, employees of The State were unable to log on to their computers or access email, and that this caused business within The State to grind to a halt.
As the "investigation" carried on, the media lost interest in the story and moved on to more newsworthy stories like who Paris Hilton was partying with last weekend. Fortunately for us, a certain employee of The State named J.N. works in the Technology Services Division and decided to share what really was behind those fateful days.
When employees of The State came in to work following a three day weekend, they found their workstations overloaded with "cannot logon" and "Exchange communication" error messages. The Network Services folks had it even worse: the server room was a sweltering 109 Fahrenheit and filled with dead or dying servers.
At first, everyone had assumed that the Primary A/C, the Secondary A/C, and the Tertiary A/C had all managed to fail at once. But after cycling the power, the A/Cs all fired up and brought the room back to a cool 64. At the time, the "why" wasn't so important: the network administrators had to figure out how to bring online the four Exchange servers, six Domain Controllers, a few Sun servers, and the entire State Tax Commission's server farm. Out of all of the downed servers, those were the only ones that did not come back to life upon a restart.
They worked day and night to order new equipment, build new servers, and restore everything from back-up. Countless overtime hours and nearly two hundred thousand dollars in equipment costs later, they managed to bring everything back online. When the Exchange servers were finally restored, the following email made its way to everyone's inbox, conveniently answering the "why":
From: ----- -----------
To: IT Department
Re: A/C constantly running.
To whom it may concern,
I came in today (Monday) to finish up a project I was working
on before our big meeting with the State ----- Commission tomorrow,
and I noticed that there were three or four large air conditioners
running the entire time I was here. Since it's a three day weekend,
no one is around, why do we need to have the A/C running 24/7?
With all the power that all those big computers in that room use, I
doubt it is really eco-friendly to run those big units at the same
time. And all computers have cooling fans anyway, so why put the A/C
for the building in that room?
I got a keycard from [the facility manager's] desk and shut off the
A/C units. I'm sure you guys can deal with it being warm for an hour
or two when you come in tomorrow morning.
In the future, let's try to be a little more conscientious of our
energy usage!
Thanks,
-----
As for the employee who sent it, he decided to take an early retirement.
-Daily WTF [thedailywtf.com]