Microsoft Innovates Tent Data Centers

1sockchuck writes "The outside-the-box thinking in data center design continues. Microsoft has tested running a rack of servers in a tent outside one of its data centers. Over seven months of testing, a small group of servers ran without failures, even when water dripped on the rack. The experiment builds on Intel's recent research on air-side economizers, suggesting that servers may be sturdier than believed and leaving more room to save energy by optimizing cooling set points and other key environmental settings in the server room."


Comments Filter:
  • by MyLongNickName ( 822545 ) on Monday September 22, 2008 @11:32AM (#25104967) Journal

    Microsoft Pitches a Tent.

  • Subject says it all.
  • Sensible? (Score:5, Insightful)

    by Azaril ( 1046456 ) on Monday September 22, 2008 @11:35AM (#25105027) Homepage
    I'm sure it'll work fine until someone strolls past, lifts up the canvas and walks off with the entire rack. Or accidentally flicks a cigarette butt at the tent. Or....
    • Actually this coincides with some of my own testing, keeping servers under my sink, under my car and sometimes in my garden! No failures yet, it must be a good idea! What's more eco-friendly than keeping servers in the garden?
    • Re: (Score:3, Insightful)

      by sumdumass ( 711423 )

      That was my thought. Who needs a key to the door when a pocket knife will make another.

      I'm wondering which Fortune 500 company will be lulled into doing something like this "because nobody has ever got fired for going with Microsoft," only to find they have to report a customer information loss in the future.

    • by Atario ( 673917 )

      I'm sure it'll work fine until someone strolls past, lifts up the canvas and walks off with the entire rack.

      You're right, then we'll really be in trouble. I sure don't want a thief with the strength of ten men running around.

  • uptime! (Score:5, Funny)

    by Anonymous Coward on Monday September 22, 2008 @11:36AM (#25105035)

    Wow 7 months uptime... was it running Linux?

  • by jayhawk88 ( 160512 ) <jayhawk88@gmail.com> on Monday September 22, 2008 @11:38AM (#25105083)

    But what you're really doing in a situation like this is dodging bullets, rather than proving that we overbuild the environmental controls in our server rooms. We KNOW that excess heat, water, humidity, etc. can kill servers. These are facts that cannot be ignored.

    I understand the idea here but still, do you really want to tell your bosses that the server room got to 115 F in July and killed the SAN because you skimped on the air units?

    • by AcidPenguin9873 ( 911493 ) on Monday September 22, 2008 @11:45AM (#25105219)

      No one is suggesting that heat, humidity, water can't kill servers. The point is overall uptime vs. overall cost.

      If you build your SAN or whatever with enough redundancy or capacity that it can handle one or a handful of servers going down in that 115F July heat with little to no impact on uptime or productivity, and you also save a boatload of money on AC installation, cooling, and maintenance because replacing/rebuilding those servers costs less than cooling the server room, then you have a net win. It's purely a numbers game.
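
A minimal sketch, in Python, of the "numbers game" described in the comment above. Every figure (server count, per-server cost, cooling budgets) is a hypothetical placeholder chosen only to show the shape of the comparison; the 3.8%/4.6% failure rates loosely echo numbers quoted later in the thread and are not measured data.

```python
# Hypothetical comparison: heavy cooling with a lower failure rate vs.
# reduced cooling with a higher failure rate. All numbers are placeholders.

def annual_cost(cooling_cost, num_servers, server_cost, failure_rate):
    """Yearly cost = cooling spend + expected replacement spend."""
    return cooling_cost + num_servers * server_cost * failure_rate

conventional = annual_cost(cooling_cost=120_000, num_servers=200,
                           server_cost=4_000, failure_rate=0.038)
economized = annual_cost(cooling_cost=30_000, num_servers=200,
                         server_cost=4_000, failure_rate=0.046)

print(f"conventional: ${conventional:,.0f}")  # $150,400
print(f"economized:   ${economized:,.0f}")    # $66,800
```

Under these made-up numbers the extra failures cost far less than the cooling saved; flip the ratios (cheap cooling, expensive downtime) and the conclusion reverses, which is the commenter's point.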

      • by Bandman ( 86149 ) <bandman AT gmail DOT com> on Monday September 22, 2008 @11:58AM (#25105445) Homepage

        I'm pretty sure my SAN's redundancy has nothing to do with servers attached to it dying.

        With the July heat, it's not just the baked electronics in the servers, either. Your hard drives become less and less reliable, and their expected lifetime is far shorter after they've operated for any length of time in conditions like you're experiencing.

        You also completely ignore the cost of the downtime itself. Doesn't matter how much it costs to restore the data if you're down long enough that your clients lose faith in you and leave.

        "Good will" is on an account sheet for a reason.

        • by Amouth ( 879122 )

          You're right that you reduce the lifespan of the drives.. but then how many of the drives ever make it to the end of their lifespan anyway?

          I know places that replace hardware based on when it was purchased.. after X years it is replaced.

          Even if it had a failure and was replaced - the counter doesn't get reset..

          If you can run it for less money with the side effect of shortening the lifespan.. as long as that lifespan is still > X then you just saved money.

          Take for instance a drive that at 60F wil

          • by Bandman ( 86149 )

            If the drive were the only variable, then you're right, it would make sense. Instead, there are power supplies, capacitors, miscellaneous electronic bits (MEBs). Thermal expansion can wreak havoc on things, too. Solder joints can come loose, intermittent errors crop up.

            It's just easier to run your equipment cooler. Your failure rates go down, and your uptime stays high.

            • by Amouth ( 879122 )

              I was just using drives as an example because the grandparent pointed them out.

              Yes, you have to factor all that in - but in reality most equipment could be run at a higher temp without issues for the planned lifespan..

              While it's nice to say we run it colder so it lasts longer.. well, it turns out that in this day and age things get replaced so fast we don't need them to last longer.

              When you start evaluating DCs and realize that most nowadays are tracking floorspace turnover when planning new equipment - you realize t

      • by cmacb ( 547347 )

        Funny thing about this "test" is that failure rates went from 3.8-something percent to 4.6-something percent and they say that's good.... but hey, that's a 20 percent increase or thereabouts, isn't it? And both of those numbers are laughable compared to mainframe failure rates, which I think have a decimal point in front of them.

        They seem to be saying: "Forget the fact that PCs aren't very reliable, just consider that they are only 20 percent less reliable if you run them in adverse conditions."

        WhopDeeDooo!!
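
The relative jump mentioned in the comment above is easy to check; the 3.8% and 4.6% figures are the ones the commenter quotes, not independently verified.

```python
# Relative increase in annualized failure rate, using the comment's figures.
old, new = 0.038, 0.046
print(f"relative increase ≈ {(new - old) / old:.0%}")  # ≈ 21%
```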

    • by dedazo ( 737510 ) on Monday September 22, 2008 @11:46AM (#25105233) Journal

      Still, it's an interesting approach even if you're *just* dodging bullets and this is a disaster recovery scenario for your company. If anything it proves that you don't need a white-room, halon-protected, perfectly air conditioned data center to run your business, which seems to be the common belief across the US, European and Canadian enterprise.

      Just ask any of the companies in the Gulf area affected by Ike if they would have been glad to have something like this in place a month ago.

      I could have told them that computers tend to be resilient. I ran lots of them for many years in a little room at ambient temperature or higher, and high humidity. Every time I opened one of them up to upgrade or something I was amazed that they would even run at all. And the dirt...

      • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday September 22, 2008 @11:54AM (#25105373)

        Every time I opened one of them up to upgrade or something I was amazed that they would even run at all. And the dirt...

        And from the other side, I'm constantly telling people to clean the crud out of their machines. Just last week a co-worker brought in her boyfriend's machine because it "would not work". Two minutes of blowing out the dust in the slots (RAM, AGP and PCI) and it booted up just fine.

        I'm in agreement with the "dodging a bullet" comment.

        Just because it is possible that there might not be problems (unless X, Y or Z happens) is not the same as taking pro-active steps to reduce the potential problems.

        Sure, their server handled the water dripping on it.

        But then, you would NOT be reading the story (because it would not be published) if the water had shorted out that server. It would have been a case of "Duh! They put the servers in a tent in the rain. What did they expect?"

        With stories like these, you will NEVER read of the failures. The failures are common sense. You will only read of the times when it seems to have worked. And only then because it seems to contradict "common sense".

      • by Bandman ( 86149 )

        But you didn't have the expectation of providing enterprise class services, either. At least, I hope your clients didn't expect that of those machines.

    • by glop ( 181086 ) on Monday September 22, 2008 @11:53AM (#25105347)

      You are talking about the other kind of datacenter.
      Regarding this issue you have 2 kinds of datacenters:
        - the cluster/cloud type where servers are expendable. They might die but you don't care because you have loads of them and all your data is redundant (e.g. Google, most nodes of a cluster, web servers etc)
        - the big iron kind where you buy high quality machines, support, redundant power supplies, redundant NICs, pay people with pagers to babysit them, lower the temperature to increase the MTBF etc.

      All this research applies to the first case. You are right to pinpoint that in the second case you will still want to take all the precautions you can to avoid failures.

    • by dpilot ( 134227 )

      OK, first we have Intel with ambient-air cooled datacenters.
      Now Microsoft is putting them in tents.

      What's next, raincoats in 19" rack size, or is it Server Umbrellas?

    • citation please? (Score:4, Insightful)

      by petes_PoV ( 912422 ) on Monday September 22, 2008 @12:08PM (#25105605)
      > We KNOW that excess heat, water, humidity, etc can kill servers. These are facts that cannot be ignored.

      This harps back to mainframe days. In order to keep your warranty valid, you had to strictly control the environment - including having strip recorders to PROVE that you hadn't exceeded temp or humidity limits. The reason was that the heat output of these ECL beasts was so high that they were teetering on the brink of temperature-induced race conditions, physically burning their PCBs and causing thermal-expansion induced stresses in the mechanical components.

      Nowadays we are nowhere near as close to max. rated tolerances and therefore can open windows in datacentres when the air-con fails. However, the old traditions die hard and what was true even 10 years ago (the last time I specc'd an ECL mainframe) is no longer valid.

      I'd suggest that if it wasn't for security reasons, plus the noise they make and the dust they suck up, most IT equipment could be run in a normal office.

    • I think these stunts are very similar to the HP Bulletproof marketing where they would shoot a highpowered rifle through the motherboard during operation to demonstrate its redundancy.

      You don't want to shoot your server but it demonstrates a worst case scenario to illustrate that it's a bit tougher than you might otherwise think.

  • Running hot (Score:4, Funny)

    by Anonymous Coward on Monday September 22, 2008 @11:38AM (#25105085)

    a small group of servers ran for seven months without failures, even when water dripped on the rack.

    i.e.: The trick to waterproofing is to let your system be constantly near overheating; any contact with water immediately results in water vapour.

    • by wik ( 10258 )

      Vaporware?

    • Re: (Score:3, Funny)

      by Kenshin ( 43036 )

      i.e.: The trick to waterproofing is to let your system be constantly near overheating; any contact with water immediately results in water vapour.

      Does that also work for raccoons? With a rack server in a tent, I'd be more worried about raccoons than a bit of water. If we could vapourise them, that would be great.

  • Outside security. (Score:5, Insightful)

    by UberHoser ( 868520 ) on Monday September 22, 2008 @11:38AM (#25105093)

    PHB: Well we just put up all of our servers outside. And it looks great! Say, what is that truck doing? Why is it driving so fast through all the security points... omg !

    • Don't put all your servers physically close together, but spread them out over two or more locations. Then you eliminate the single point of failure.

      As argued in a previous comment, this experiment is great for distributed computing, where your servers become expendable. One failing in that case doesn't affect the overall business.

    • by MobyDisk ( 75490 ) on Monday September 22, 2008 @12:54PM (#25106467) Homepage

      It's Bill Gates and Jerry Seinfeld coming to upgrade your servers to Vista. OMG! Run!

  • Clarity (Score:5, Funny)

    by Rob T Firefly ( 844560 ) on Monday September 22, 2008 @11:38AM (#25105097) Homepage Journal
    When I said there would never be any Microsoft servers running in my department, I don't think they quite got my meaning.
    • Re:Clarity (Score:4, Insightful)

      by petes_PoV ( 912422 ) on Monday September 22, 2008 @12:12PM (#25105673)

      When I said there would never be any Microsoft servers running in my department, I don't think they quite got my meaning.

      They weren't Microsoft servers - they were HP servers. For some reason MS are getting all the publicity and credit from this article, although they've actually done very little to deserve it. It's the hardware that should get the plaudits - typical!

  • by Chrisq ( 894406 ) on Monday September 22, 2008 @11:39AM (#25105107)
    I worked for a very small company that had a server rack in a cupboard without ventilation. In winter we'd open the door to keep the office warm. In summer we'd keep it closed to stop making the office too warm - there was no air conditioning. The temperature must have varied from 16 degrees C (maybe lower at night) to 35 degrees C, and the server never had any problems.

    I also worked with someone who worked the night shift as an operator at a large company that did have an air-conditioned computer room. During the day the machine room was treated with reverence, carefully dusted with special cloths, etc. He told me that at night, when they got bored, they'd play cricket down the central corridor with a tennis ball and a hardback book. The computer cabinets regularly got hit with the ball and once or twice had people run into them. On one occasion a disk unit started giving "media error warnings" but apart from that no ill effects again.
    • Re: (Score:3, Funny)

      by blincoln ( 592401 )

      On one occasion a disk unit started giving "media error warnings" but apart from that no ill effects again.

      So, apart from doing the exact sort of damage that most technical people would predict you'd see when hard drives are repeatedly subjected to shock, nothing happened?

    • by Rob T Firefly ( 844560 ) on Monday September 22, 2008 @11:44AM (#25105201) Homepage Journal

      On one occasion a disk unit started giving "media error warnings" but apart from that no ill effects again.

      Understandable. I once watched a cricket match, and pretty much the same thing happened to my brain.

    • by Bandman ( 86149 )

      Yeah, I've got to agree with the other posters: "media error warnings" on a disk is severe enough that you can play the game in the bloody hallway instead of the server room.

  • Great idea (Score:5, Funny)

    by Tx ( 96709 ) on Monday September 22, 2008 @11:39AM (#25105115) Journal

    Datacenter break-ins are becoming more and more commonplace, and it costs so much to replace the reinforced doors etc that the thieves bust up on their way in. Now with this innovation, they can just walk in and take the servers without doing any infrastructure damage. I think I'll pitch (groan) this idea to the boss right now!

  • by Lumpy ( 12016 ) on Monday September 22, 2008 @11:50AM (#25105305) Homepage

    Funny how the military and the live concert people have been doing this for years, but Microsoft innovated putting servers in a tent.

    • I can't wait for MS to patent the idea and sue those bastards for infringing! ;)

    • Next thing they'll do is patent the concept of running electronics in a tent and then sue the DOD for infringement.
    • by dave420 ( 699308 )

      These are not military-spec servers, and not many concerts last 7 months. But nice try.

      • by Hyppy ( 74366 )
        Psst... wanna know which servers the military usually uses in its tents?

        The same ones you use. If they want to have a "rugged" server, they just load a server OS on a Toughbook.
  • by Bandman ( 86149 ) <bandman AT gmail DOT com> on Monday September 22, 2008 @11:51AM (#25105331) Homepage

    This is ridiculous. There is no situation that comes to mind, even after some consideration, that would compel me to operate anything remotely critical in this manner.

    Honestly, servers under a tent. I guess if the Ferris wheel ever goes really high-tech, the carnies will have something to play solitaire on.

  • by squoozer ( 730327 ) on Monday September 22, 2008 @11:54AM (#25105371)

    While it's certainly an interesting experiment, there is no way I would run my company's servers in a tent, especially a leaky tent.

    Anyone who has built a few machines knows that hardware can prove to be a lot tougher than many people think it is. We once had a server running for over two years that had been dropped down a flight of steel stairs a few hours before delivery (we got the server free because it was really badly dented and no one thought it would actually run).

    There is a difference, though, between the above scenario and one where a whole rack of servers is sitting in a tent. One decent tear in the canvas could easily flood the tent. Tough as they are, I can't see any server running with water pouring into it, and in this scenario everything in the tent would go down in one go. If you have to keep a hot spare for this situation, it's probably just easier to put it in a real building or a shipping container.

    • I've wondered why some servers aren't made more like car audio amps. Just hang the guts in a rugged, densely-finned, extruded aluminum case that's used as a heatsink for anything needing one.

      Put the warm bits on a PC board laid out so they all touch the case when installed, spooge 'em with thermal paste and bolt 'em in. Have a simple gasketed cover plate for maintenance.

      They could be made stackable by casting male and female dovetails into the case. Just slide them together and they'd be self-supporting. (T

  • Outdoor job (Score:5, Funny)

    by oldspewey ( 1303305 ) on Monday September 22, 2008 @11:56AM (#25105415)

    Oh my, who's that burly, rugged, well-tanned guy with the rolled-up shirtsleeves?

    Him? Oh he's our server admin

  • Cost-Benefit (Score:5, Insightful)

    by bziman ( 223162 ) on Monday September 22, 2008 @11:57AM (#25105419) Homepage Journal

    While I agree with the notion of bulletproof data centers, I think one of the points of all these experiments is that if you can save $100,000 a month on A/C and environmental costs at the expense of reducing the life of $500,000 worth of hardware by 20%, you actually save money, because you spend so much more maintaining the environment than you do on the hardware itself -- as long as you plan for hardware failure and have appropriate backups (which you should anyway). On the other hand, if your hardware is worth a lot more relative to your expenses, or if your hardware failure rate would increase sufficiently, then this approach wouldn't make any sense. It's all cost-benefit analysis.

    -brian
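
A back-of-the-envelope check of the trade-off sketched above. The $100,000/month A/C saving, the $500,000 of hardware and the 20% life reduction are the comment's own illustrative numbers; the four-year baseline replacement cycle is an added assumption, so this only shows how the comparison is set up, not a real budget.

```python
# Annualized comparison of cooling savings vs. faster hardware turnover,
# using the comment's illustrative figures plus an assumed 4-year cycle.
ac_savings_per_year = 100_000 * 12                       # $1.2M/yr saved on A/C
hardware_value = 500_000
baseline_life_years = 4.0                                # assumption
reduced_life_years = baseline_life_years * (1 - 0.20)    # 20% shorter life

hw_cost_cooled = hardware_value / baseline_life_years    # $125,000/yr
hw_cost_warm = hardware_value / reduced_life_years       # $156,250/yr

net_benefit = ac_savings_per_year - (hw_cost_warm - hw_cost_cooled)
print(f"net benefit ≈ ${net_benefit:,.0f}/yr")           # ≈ $1,168,750/yr
```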

    • I think one of the points of all these experiments, is that if you can save $100,000 a month on A/C and environmental costs, at the expense of reducing the life of $500,000 worth of hardware by 20%, you actually save money

      Maybe.

      What are the environmental costs of throwing out a data center's worth of busted hardware every four years instead of five? How many cubic miles of landfill would that be if every company with a datacenter adopted the view that it's better to let a server fry itself every once in a

      • by afidel ( 530433 )
        Except those numbers are nowhere near reality. My calculations are that running a $36,000 DL585 G2 at 70% utilization will result in a 3-year power bill for the machine + AC of ~$2,500 @ $0.10/kWh. AC is only about 1/8th of total power usage, so you would have to see less than a 2% increase in failure rate for this strategy to pay off, assuming that you incur no measurable increase in downtime and associated expenses.
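
Roughly reconstructing the arithmetic above: the $36,000 server price, the ~$2,500 three-year power bill (machine plus A/C) at $0.10/kWh, and A/C being about 1/8 of total power are taken from the comment; the average draw is derived from them. This simple version lands under 1%, in the same low-single-digit range as the comment's "less than 2%" (the exact break-even depends on how downtime and replacement costs are counted).

```python
# Back-of-the-envelope reconstruction of the comment's numbers.
server_price = 36_000       # DL585 G2, as quoted
rate_per_kwh = 0.10
bill_3yr = 2_500            # machine + A/C over 3 years, as quoted

kwh_3yr = bill_3yr / rate_per_kwh              # 25,000 kWh over 3 years
avg_draw_w = kwh_3yr * 1000 / (3 * 365 * 24)   # ≈ 950 W average

ac_cost_3yr = bill_3yr / 8                     # ≈ $312 of that is A/C
breakeven = ac_cost_3yr / server_price         # failure-rate increase that
                                               # would eat the A/C saving
print(f"average draw ≈ {avg_draw_w:.0f} W")
print(f"A/C cost over 3 years ≈ ${ac_cost_3yr:.0f}")
print(f"break-even failure-rate increase ≈ {breakeven:.1%}")
```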
    • by Hyppy ( 74366 )
      The bigger benefit to analyze is uptime. If a server dies, uptime is impacted unless it happens to be running an application with some form of clustering or other failover ability. Not all applications provide this.

      Unscheduled downtime due to server failure is something many companies try their best to avoid, even to the point of severe diminishing returns on the money they pour into environmental controls. If $10 provides 90% uptime, $100 provides 99% uptime, $1,000 provides 99.9% uptime, and
  • by AltGrendel ( 175092 ) <ag-slashdot&exit0,us> on Monday September 22, 2008 @12:01PM (#25105479) Homepage
    They'll be testing this in Galveston, Tx.
  • prior art (Score:3, Funny)

    by n3tcat ( 664243 ) on Monday September 22, 2008 @12:03PM (#25105521)
    my "corporation" has been running servers in tents for going over 5 years now ;)
  • Bollocks (Score:3, Interesting)

    by NekoXP ( 67564 ) on Monday September 22, 2008 @12:04PM (#25105537) Homepage

    > suggesting that servers may be sturdier than believed

    Anyone who thought you couldn't run a reliable server in 85+ degree heat in a sweaty, humid room with water dripping on a *sealed chassis* was a moron anyway. Most servers come with filters on the fan vents and are pretty tightly sealed otherwise (and none of them vent out of the top of the chassis, because that would impact the servers above and below, so where's the water going to drip into?)

    Air conditioning and all the other niceness we get in server rooms is just an insurance policy.

  • Next up, you'll have another tech company claiming they've got their servers running without failure in the car park, nay, ground into the tarmac, without a significant increase in failure rate.

    It's a bit like extreme ironing [extremeironing.com]; it's probably possible, but why would you want to?

  • How do you keep the carnies from messing with the servers, though? Or what if you get cotton candy stuck in the cooling vents?
  • We got Microsoft doing something. (Oh, that can't be good, or it must be flawed.)
    We got an experiment that attempts to test extremes. (That means Microsoft wants people to do this in real-world situations.)
    It challenges an old truth. (Back in 1980 our server rooms needed to be run at 60F for optimal performance, and they still are, because truth never changes.)

    I don't think anyone is saying that your data centers should be allowed to be placed in rough environments. But it shows that you can turn your AC from

  • Just lift a tent flap when there is a wind and you have all the air con that you want!

    Seriously: what happens after a few years when the racks start to rust because of the damp air they have been operating in? They did this in Ireland - plenty of moisture there!

  • Unconventional data centers that don't provide adequate operating conditions will start costing these companies in other ways: canceled warranties. Warranty clauses always require that the company purchasing the equipment ensure an operating environment within device specifications for humidity, temperature, shock, power, etc. The vendor can void the warranty if you go outside these bounds.

    Currently this is not much enforced, except in egregious cases. I've even seen a vendor replace servers that fell out of a truck (liter

  • Servers are very robust. As long as their fans are functional they'll stay healthy enough to run for some time.

    However, they will run a tiny bit hotter, and their overall life may be shortened because of moisture and other elements.

    Hell, I remember when the AC failed in a room with 70 servers and a phone system. It got very toasty in there very quickly.
  • Bill and Jerry go camping.
  • ... prisoners in Arizona's Maricopa County [wikipedia.org] can train as admins and work from "home".
  • I can imagine that good server equipment is much more sturdy than the average rack-geek thinks. That is, of course, unless you're using $299 desktops with server operating systems.

    Got a Dimension GX260 running your business of 50 people with the same single IDE hard drive you bought with the system? Leave it inside! Oh, you mean you put a couple of 10k RPM drives with a RAID controller in that box which barely has enough airflow for the base system? Yeah, sure... DON'T cool it and see what happens.

    • Indeed, if anything, this kind of research shows that either the server rooms or the server hardware (or both) is over-engineered. You could save money by simplifying the engineering on either of them, but doing both will grind your system to a halt in no time.
  • In a modern server the fan speed (and power use) varies with the intake temperature. Saving 20 kW on AC by running the room warmer doesn't help much if your computers increase the load from 200 kW to 240 kW just due to increased fan speed.

    I see no mention of this in either Intel's or MS's experiments, even though it is the big reason our machine room is spec'd at a max of 18C intake air. Of course, we spend roughly 5 kW to cool 240 kW, so I can't really bring myself to think this is a big potential for improvement.
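
The trade-off this comment describes can be checked directly with its own figures; the 20 kW of A/C saved and the 200 kW to 240 kW rise in IT load are the commenter's numbers, not measurements.

```python
# Net effect of running the room warmer when fan power rises with intake temp.
ac_saved_kw = 20          # A/C power saved by raising the set point
it_load_cool_kw = 200     # IT load with cool (18C) intake air
it_load_warm_kw = 240     # IT load with warmer intake (faster fans)

net_change_kw = (it_load_warm_kw - it_load_cool_kw) - ac_saved_kw
print(f"net change: {net_change_kw:+} kW")   # +20 kW, i.e. a net loss
```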

  • Servers in Iraq (Score:2, Informative)

    by Anonymous Coward

    I was a server admin for a parachute infantry regiment for six years and I can tell you, from experience, that servers are a LOT tougher than people think.

    On my most recent trip to Iraq, we ran about 25 Dell servers in a 15x15 ft wooden room with no insulation, very unstable power, a LOT of dust, and two small AC units that worked sporadically and didn't blow air directly onto the servers. On average it was probably 85-100 degrees in the room, depending on the time of year. The only issues we had was f

  • Balcony server (Score:2, Interesting)

    by lkstrand ( 463964 )

    I've had my server outside on the balcony [gnist.org] for a couple of years here in Norway. No problems at all - temperatures ranging from +40C to -20C.

  • This is a life-cycle thing. So five servers survived seven months. First year failure rates could be as high as 3% and they wouldn't have noticed. Let's see them run that thing for five years.

    Corrosion is cumulative. It's not that useful to observe that something didn't fail from corrosion in seven months.

    They really just want to sell hardware and software: "There is a possible benefit in having servers fail, in that this failure forces obsolescence and ensures timely decommissioning of servers."
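
A quick sanity check of the "they wouldn't have noticed" point above, using the 3% first-year failure rate and the five servers the comment mentions (the article itself only says "a small group", so the count is the commenter's guess).

```python
# Probability that a handful of servers all survive a 7-month test even
# with a 3% annual failure rate, assuming independent, constant-rate failures.
annual_failure_rate = 0.03
months = 7
servers = 5

p_one_survives = (1 - annual_failure_rate) ** (months / 12)
p_all_survive = p_one_survives ** servers
print(f"P(all {servers} survive {months} months) ≈ {p_all_survive:.0%}")  # ≈ 91%
```

So even a failure rate the commenter calls high would let a rack this small sail through the test most of the time, which is why seven clean months say little on their own.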

  • I believe Sheriff Joe has the patent on this one [wikipedia.org]

  • For this we have to pay the Windows tax on our PCs?
  • And this proves... (Score:3, Insightful)

    by pseudorand ( 603231 ) on Monday September 22, 2008 @02:49PM (#25108619)

    Intel's experiment was useful, but not surprising to the vast majority of IT people who frequently put racks in closets, their offices, etc., because that's the only place we have for them. Intel is just giving us ammo to tell the datacenter guys that we don't really need them if they're charging too much.

    Microsoft's experiment is simply ludicrous. For many obvious reasons, theft being the most obvious, no one would ever actually run a server in a tent unless you're on a scientific expedition to some place where there are no buildings, you're not staying long enough to build one, and there's no bandwidth available, even by satellite.

    While Intel is addressing the problem of physical space costing far more than the computers we can store in it, Microsoft is almost making light of it. Stupid Redmond bastards.
