Google's Chiller-Less Data Center

1sockchuck writes "Google has begun operating a data center in Belgium that has no chillers to support its cooling systems, which will improve energy efficiency but make weather forecasting a larger factor in its network management. With power use climbing, many data centers are using free cooling to reduce their reliance on power-hungry chillers. By forgoing chillers entirely, Google will need to reroute workloads if the weather in Belgium gets too warm. The facility also has its own water treatment plant so it doesn't need to use potable water from a local utility."
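(For illustration, a minimal sketch of the weather-driven load-shedding decision the summary describes; the threshold, shed rate, and names below are assumptions, not anything Google has published.)

```python
# Hypothetical sketch of weather-driven load shedding for a free-cooled
# site. The 27C limit and the shed rate are invented for illustration.

MAX_INLET_C = 27.0  # common upper bound for server inlet air

def fraction_to_reroute(forecast_high_c, headroom_c=2.0):
    """Fraction of this site's workload to shift to other datacenters."""
    excess = forecast_high_c - (MAX_INLET_C - headroom_c)
    if excess <= 0:
        return 0.0                      # outside air is cool enough
    return min(1.0, excess / 10.0)      # shed ~10% per degree C over

for t in (18, 24, 26, 30, 38):
    print(f"forecast {t}C -> reroute {fraction_to_reroute(t):.0%}")
```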
  • Guess they'll be in big trouble when global warming strikes Belgium!
    • Re: (Score:1, Insightful)

      by Fenax ( 1094827 )
      Belgium is already too warm! *a Belgian here, complaining about the weather*
    • Re: (Score:3, Funny)

      by lewko ( 195646 )

      No. They will just sponsor Al Gore to speak about global warming at a local meeting. Thanks to the Gore Effect [news.com.au], the temperature usually drops dramatically as soon as Gore arrives.

    • Re: (Score:3, Funny)

      by schon ( 31600 )

      Guess they'll be in big trouble when global warming strikes Belgium!

      Don't be silly. Everyone knows that Belgium doesn't really exist. [zapatopi.net]

    • Re: (Score:2, Informative)

      by jonadab ( 583620 )
      > Guess they'll be in big trouble when global warming strikes Belgium!

      If global warming ever did what the alarmists keep saying it's going to do, chillers would probably become completely irrelevant, since about two thirds of Belgium would be continuously surface-mounted with a very large water-cooling rig and heatsink, sometimes known as the North Sea.
  • by ickleberry ( 864871 ) <web@pineapple.vg> on Wednesday July 15, 2009 @06:48PM (#28710327) Homepage
    If it weren't for the required internet connectivity, Google could go off the grid completely. But they already own so much fibre, and the public internet seems to need Google more than Google needs it.

    Soon they will generate all their own power from wind and solar, convert all their employees' shit to power so they don't need the sewerage system either, and send all their traffic through the network of low-earth-orbit satellites they are about to launch, which also conveniently beam solar power back down to them.

    So basically at the end of the day they will be able to buy or swindle a plot of land from some country with low tax, bring in all their own employees, contribute absolutely nothing to the local economy, and leave when the sun goes down. It's great really: it saves them on lawyers who would otherwise help them pussyfoot through the swaths of modern over-regulation, and the satellites will help them get past any censorship/connectivity problems.

    And if China starts shooting down their satellites, Google will make satellites that shoot back.
    • by Anonymous Coward on Wednesday July 15, 2009 @10:03PM (#28711965)

      I am totally buying Google stock if they do this.

    • Re: (Score:1, Funny)

      by Anonymous Coward

      You can't generate power from Google employees' shit; power generation requires stinky shit and their shit does not stink.

  • Unreliable... (Score:5, Interesting)

    by Darkness404 ( 1287218 ) on Wednesday July 15, 2009 @06:49PM (#28710333)
    So basically everything gets rerouted on a hot day. OK, that sounds fine until you realize that most of the outages of Google's products were due to rerouting. It also seems odd that the cost of building a (hopefully redundant) datacenter that is this unreliable would be less than consolidating it with another one and using electrical cooling.
    • Re:Unreliable... (Score:5, Interesting)

      by martas ( 1439879 ) on Wednesday July 15, 2009 @06:55PM (#28710407)
      Well, it might be unreliable, but I think you're overestimating the reliability of normal data centers. Even if failure is twice as likely at this data center as at others, I think it still improves overall performance and reliability enough that it's worth building. Or at least Google seems to think so.
    • Re: (Score:1, Insightful)

      by Anonymous Coward

      Of course, if they have to do the re-routing 10 or so times a year, they will get the kinks worked out. That is a far better management scheme than having a fail-over plan that never really gets tested. Also, when temps rise, they probably won't be completely off-lining this data center, just a fraction of the containers within it.

      I also wonder if they might not be fibbing a little; air handlers come in different types. For chilled-water use they wouldn't have compressors; the chilled water is run through them…

      • Re:Unreliable... (Score:5, Informative)

        by j79zlr ( 930600 ) on Wednesday July 15, 2009 @07:48PM (#28710903) Homepage
        If you have chilled water, you have a chiller, which means you have compressors. Process water or ground-source water usually is not cold enough to be an effective cooling medium. You want a high delta T between the entering air temp and the entering water temp to induce heat transfer. Closed-loop ground-source water is extremely (prohibitively) expensive, and open loop is quite a maintenance hassle due to water treatment. High-efficiency chillers paired with evaporatively cooled water towers with economizer capability are very efficient and reliable. Usually you can get down to around 0.5kW per ton with high-efficiency chillers at full load, and with multiple staged compressors you can do even better at part-load conditions. The cooling towers are usually pretty low, at around 0.05 to 0.15kW per ton. Use VFDs on the secondary pumps and cooling tower fans, and you can get cooling in at 0.75kW per ton for the whole plant at peak, and even lower at part-load conditions (95% of the time).

        I just designed a data center for a large Big Ten university and there were no large air handlers involved at all. The system had two 400-ton chillers with the chilled water piped directly to rack-mount APC fan coils. Without "green" being the basis of design, the chiller system still operates right at about 1kW/ton.
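        (A quick arithmetic check of the figures above; the pump share is an assumed remainder, and the 800-ton total simply reflects the two 400-ton machines mentioned.)

        ```python
        # Recompute plant kW/ton from the component figures given above.
        chiller = 0.50   # kW/ton, high-efficiency chiller at full load
        tower   = 0.10   # kW/ton, middle of the 0.05-0.15 range quoted
        pumps   = 0.15   # kW/ton, assumed remainder to reach the plant total

        print(f"plant total: {chiller + tower + pumps:.2f} kW/ton")  # 0.75

        # The university job: two 400-ton chillers at ~1 kW/ton overall.
        print(f"800 tons x 1 kW/ton = {800 * 1.0:.0f} kW of plant draw")
        ```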
        • Re: (Score:3, Interesting)

          It's mildly interesting to know how many kW of power it takes to move some water, but it would be more interesting to know how many kW of power it takes to transfer heat. With your measurements, how much heat can you transfer with a ton of water, and how does the temperature of the computers compare to the ambient air?
          • Re: (Score:3, Informative)

            by Xiterion ( 809456 )
            A ton is a measure of the amount of heat transferred. See this [wikipedia.org] for more details. It's also worth noting how much of the heat transfer is done by way of allowing the water in the system to evaporate.
          • Re: (Score:2, Informative)

            by cynyr ( 703126 )
            The short answer is that the ton mentioned above in the HVAC industry is roughly equivalent to the amount of cooling a ton of ice (frozen water) melting over 24 hours would provide. Some days I wish my industry would just unhitch the horse and burn the buggy it was attached to.
          • Re:Unreliable... (Score:4, Interesting)

            by j79zlr ( 930600 ) on Wednesday July 15, 2009 @09:18PM (#28711607) Homepage
            1 ton is a unit of cooling equal to 12,000 BTU/hr, not weight. The typical rule of thumb is 2.4 GPM per ton, which is based on a standard 10degF delta T, usually 44degF to 54degF. Assuming 100 feet of head and 50% mechanical efficiency, 1 BHP will move about 20 gallons of water per minute. 1 BHP is about 0.75kW.

            I am kind of confused about how many kW of power it takes to transfer heat. Heat moves from high to low; you have to pump cold water through a coil and force warm air across that coil. The amount of heat transferred is a function of the face velocity and temperature of the air across that coil, the amount of fluid moved and its temperature through the coil, and the characteristics (fin spacing, fin size, material) of the coil.

            The temperature of the computers isn't really the important factor; it is the heat rejected. Again using rules of thumb, you can assume that 80% of the electrical power delivered to the computers will be dissipated as heat. The total of that heat rejected, along with the other heat inputs to the space, e.g. lighting, walls, roof, window loads, etc., will determine your cooling load. Almost all of this load is sensible, meaning heat only; for other occupancy types you would also have to consider latent (moisture) loads from people and ventilation air in determining the amount of cooling needed.
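            (The rules of thumb above are easy to verify; the 500 kW rack load in the last step is a made-up example, not a figure from this thread.)

            ```python
            # Verify the parent's rules of thumb.

            # Water-side heat balance: Q [BTU/hr] = 500 * GPM * deltaT_F,
            # where 500 ~= 8.33 lb/gal * 60 min/hr * 1 BTU/(lb*F).
            print(12_000 / (500 * 10))   # 2.4 GPM per ton at a 10F delta T

            # Pump power: BHP = GPM * head_ft / (3960 * efficiency).
            print(3960 * 0.50 / 100)     # ~20 GPM per BHP at 100 ft, 50% eff.
            print(0.746)                 # kW per BHP ("about 0.75")

            # Sizing the cooling load from IT power (1 kW = 3,412 BTU/hr):
            it_kw = 500                  # hypothetical example load
            print(f"{it_kw} kW of IT load ~= {it_kw * 3412 / 12_000:.0f} tons")
            ```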
            • Re:Unreliable... (Score:4, Informative)

              by jhw539 ( 982431 ) on Wednesday July 15, 2009 @09:43PM (#28711815)
              "Again using rules of thumb, you can assume that 80% of the electrical power delivered to the computers will be dissipated as heat."

              ? 100% of the electrical power delivered to the computer is dissipated as heat. It's the law. It will be far less than the nameplate power (which the electrical design uses), and perhaps 80% of what is delivered to the building (after transformer, UPS, and PDU losses), but it all ends up as heat (unless you're splitting hairs about the acoustical energy emissions and velocity pressure in the exhaust, which is small and quickly converted to heat).
              • by j79zlr ( 930600 )
                I guess I was unclear: 80% of the circuit size(s) delivered to the units, since you can't load a circuit to more than 80% of the breaker size.
        • by jra ( 5600 )

          400 ton, and they did it with water, not glycol?

          Can you talk a little more about the decision between those two?

          • Re:Unreliable... (Score:5, Informative)

            by j79zlr ( 930600 ) on Wednesday July 15, 2009 @10:03PM (#28711969) Homepage
            The units were mounted on the roof, but were packaged AAON 2 x LL210 chillers (and a full 400 ton backup) with no exposed exterior piping. Glycol reduces the specific heat of the fluid and increases the specific gravity, so it can move less heat and takes more power to move. I only add glycol to the system if freezing is an issue.
        • Re:Unreliable... (Score:5, Interesting)

          by jhw539 ( 982431 ) on Wednesday July 15, 2009 @09:40PM (#28711797)
          In many environments you do not need a chiller to operate a datacenter at all. Based on the 2nd edition of ASHRAE's Thermal Guidelines for Data Processing Environments (which was developed with direct input from the major server providers), you can run a datacenter at up to 90F. Seriously, 90F into the rack. When it comes out the back of the rack, you collect the heat exhaust at 100-110F. "Chilled" water at 81F is more than enough to knock that 110F down to 90F, ready to go back into the front of the rack.

          The 81F water can be produced directly from open cooling towers (direct evaporation) whenever the wetbulb is lower than 76F (a 4degF approach plus a 1degF penalty on the flat plate that isolates the datacenter loop from the open tower loop).

          You designed an efficient datacenter, but you're five years behind the cutting edge (not actually a bad thing for most critical-environment clients). The next wave of datacenters will have PUEs of 1.2 or less and redefine the space from a noisy but cool place to hang out into a hot machine room with industrial heat-exhaust design.

          I actually just finished a chiller-less 8MW schematic design and analysis for a bid. It was my second this month (the first was a cakewalk, with an extreme Twb of 67F; the second was west coast-like conditions).

          PS: Secondary pumps? Seriously? Unless you have to boost up to 25 psi to feed a Cray or some other HPC, I thought everyone who cared had moved on to variable primary-only pumping. (Sorry, feeling a bit snarky after hitting a 40-hour week on Wednesday...)
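          (The temperatures above imply a simple feasibility test, reproduced here using only numbers from this comment.)

          ```python
          # When can cooling towers alone make 81F water? (Parent's figures.)
          approach_f = 4.0      # tower leaving-water temp above ambient wetbulb
          hx_loss_f  = 1.0      # penalty across the isolating flat-plate HX
          target_f   = 81.0     # warm enough to bring 110F exhaust back to 90F

          print(f"chiller-less whenever wetbulb <= {target_f - approach_f - hx_loss_f:.0f}F")

          def needs_mechanical_cooling(wetbulb_f):
              return wetbulb_f + approach_f + hx_loss_f > target_f

          for wb in (60, 67, 76, 80):   # 67F was the 'cakewalk' design extreme
              print(wb, needs_mechanical_cooling(wb))
          ```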
          • Re: (Score:3, Informative)

            by j79zlr ( 930600 )
            Our design conditions were 75degF. The server manufacturers said they can handle up to 100degF but have much longer life with cooler room temps.

            The primary loop feeds the chiller; most chillers don't like variable flow. The secondary loop feeds the load.
          • Re: (Score:3, Insightful)

            by Glendale2x ( 210533 )

            Servers may be able to operate at 90-100F, but they simply won't last as long being cooked compared to equipment that lives at cooler temperatures. This probably doesn't matter if you're Google and don't care about burning hardware, or if you have money to spare and are always installing new equipment, or would rather generate truckloads of electronics waste replacing servers faster than a cooler facility would just to get a PUE to brag about. The rest of us will have to settle for server rooms with air conditioning.

            • Re: (Score:3, Informative)

              by jhw539 ( 982431 )
              "Servers may be able to operate at 90-100, but they simply won't last as long being cooked compared to equipment that lives at cooler temperatures."

              Operating 500 hours a year at 90F (the peak of the allowable range) is unlikely to impact longevity. 100F is outside of the allowable range. Your opinion is contradicted by what IBM, Intel, Dell, Sun, and numerous datacenter owners, along with the design professionals at ASHRAE, have developed over the course of several years of research and many (mostly dull) meetings.
              • The only actual experience I have that contradicts your opinion is a UPS shorting its inverter and static bypass after two years of being in an 80-90 degree room during the summer, giving it a peak internal temp of 110.

                • by jhw539 ( 982431 )
                  Correlation doesn't equal causation. I've seen more failures in 70F UPS rooms... Some equipment did have trouble with high temps, but all new equipment can and should be able to take 80F normal operating temperature with limited excursions up to 85-90F. And the more common correlation is claims of higher failure rate for servers at the top of a rack. In most traditional datacenters, you can find hotspots where recirculation from behind results in a continuous 80+F temperature into the server. When we talk…
        • If you have chilled water, you have a chiller, which means you have compressors

          While I agree with you on that point, I think what the GP was saying is that it is indeed possible that the CRAC units being used at Google's "Chiller-less" Data Center could very well have compressors contained in them, along with condenser barrels.

          You'll notice TFA does not specifically say that there is no mechanical cooling happening at this site, only that their focus is on free cooling. It could very well be that these CRACs have the ability to modulate between mechanical cooling and water-side economizer operation…

    • by Banzai042 ( 948220 ) on Wednesday July 15, 2009 @07:05PM (#28710521)
      Remember that even on hot days not all of the traffic through the datacenter needs to be rerouted, and I'd imagine that a location selected for a datacenter like this was chosen for the infrequency of days that will require rerouting. Do you know how much it costs to cool a datacenter, and how much this will save? I don't, but Google probably does, and they probably wouldn't make a decision to do something like this without comparing the savings with the potential cost from decreased lifespan of computers running hot and losses due to downtime. I would also imagine that Google will be working to greatly increase stability during rerouting, given the comments from the end of TFA about other power saving uses, such as routing traffic to datacenters where it's night, meaning "free cooling" can be used since it's colder outside, and off-peak electricity rates are in effect.

      I think the concept is interesting, and it makes me wonder if we'll see more datacenters built in areas of the world more conducive to projects like this in the future.
      • by jhw539 ( 982431 )
        "I think the concept is interesting, and it makes me wonder if we'll see more datacenters built in areas of the world more conducive to projects like this in the future."

        Already happening in a way. Check out EDS's Wynyard facility. They didn't eliminate the chillers entirely last I looked, but in that climate they could have if they trusted the outdoor conditions and local code officials (open cooling towers are subject to abrupt shutdown if there is a Legionella scare anywhere nearby in Europe).

        Although…
        • by j79zlr ( 930600 ) on Wednesday July 15, 2009 @10:31PM (#28712157) Homepage
          That is where the ice storage systems become interesting and cost-effective. In the States, usually half of a commercial energy bill is peak demand. If you can shift that energy usage to night time to build up your ice storage, moving your main power draw to off-peak hours, the savings can be very significant, with payback times in months, not years.
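          (A toy illustration of that demand-charge arithmetic; every tariff number below is invented, and storage losses are ignored.)

          ```python
          # Why shifting chiller load to off-peak pays: demand charges.
          demand_charge = 15.00   # $/kW-month on the monthly peak (invented)
          rate_peak     = 0.12    # $/kWh on-peak (invented)
          rate_night    = 0.06    # $/kWh off-peak (invented)

          chiller_kw, hours, days = 400, 10, 30   # hypothetical daytime duty

          no_storage = chiller_kw * demand_charge + chiller_kw * hours * days * rate_peak
          with_ice   = chiller_kw * hours * days * rate_night  # same kWh, at night

          print(f"without storage: ${no_storage:,.0f}/month")   # $20,400
          print(f"with ice storage: ${with_ice:,.0f}/month")    # $7,200
          ```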
    • Comment removed based on user account deletion
      • Or, if you're Google, you have a metric shit-ton of servers and don't care too much about reducing the MTBF of a few hundred racks by running them hot.

        • Or, if you're Google, you have a metric shit-ton of servers and don't care too much about reducing the MTBF of a few hundred racks by running them hot.

          I was under the impression (mostly from tons of Slashdot articles on the subject) that Google had done the research on this and determined that the higher temperatures did NOT reduce the MTBF of the hardware. Seriously, 90F into the equipment isn't that hot.

    • Re: (Score:3, Funny)

      by SlashV ( 1069110 )

      So basically everything gets rerouted on a hot day.

      Exactly, and you can stop googling, get out of the basement, and go to the beach. How bad is that?

    • So basically everything gets rerouted on a hot day.

      Except that it's not "everything", even on a hot day; it's just that the total load at that center must be reduced (not eliminated) on a hot day.

      It also seems odd that the cost of building a (hopefully redundant) datacenter that is this unreliable would be less than consolidating it with another one and using electrical cooling.

      Google needs to have many data centers, and problems it has had in the past have shown that it needs the ability to seamlessly shift load between them…

    • I've got one word for you: Iceland.

      They sit where temperate ocean water meets cold arctic air, resulting in a relatively narrow and predictable temperature band which happens to be perfect for cooling datacenters with minimal, if any, conventional HVAC. Their power is green, and they have lots of it. They use a combination of hydropower and incredibly abundant geothermal heat for power generation. Recently, undersea fiber cables have been laid down, greatly increasing their connectivity to the outside world.
  • by Clockowl ( 1557735 ) <nick@nOSPAM.dotsimplicity.net> on Wednesday July 15, 2009 @06:51PM (#28710353)
    Is it really worth being dependent on the weather in exchange for a lower energy bill?
    • by symbolset ( 646467 ) on Wednesday July 15, 2009 @07:09PM (#28710555) Journal

      If you're Google? Apparently the answer is "yes."

      More people can and should do this. 27C is plenty cool enough for servers. It annoys me to go into a nipple-crinkling datacenter knowing they're burning more juice cooling the darned thing than they are crunching the numbers. A simple exhaust fan and some air filters would be fine almost all of the time, and would be less prone to failure.

    • Re: (Score:3, Interesting)

      by Junta ( 36770 )

      It's probably not as much about the energy bill as it is about the PR.

      If it wasn't PR, they'd have chillers 'just in case', even if turned off most of the time. As it stands, they may be exposed to a large risk: a month-long heat wave would leave them paying idle employees and taxes, and taking a hit on capital depreciation, for zero productive output from a datacenter they are presumably banking on, given that they bothered to build it.

      Of course, there may be something unique about the site/strategy that makes this work…

      • I've seen facilities that are largely cooled by climate pretty far north that still keep chillers on hand in the event of uncooperative weather.

        Very true, but this is Google we're talking about; their re-routing ability is phenomenal thanks to the sheer number of data centres they have across Europe. The latency cost of a re-route away from Belgium to northern France or northern Germany on a hot day is minor; most companies wouldn't have so many similar data centres proximal to the one shut down by inclement weather. It's a risk that will more than pay off. Besides, everyone gets lethargic on hot days - it seems Google now does too!

    • Re: (Score:3, Interesting)

      by mckinnsb ( 984522 )

      Yes, but not because of the bill itself.

      Google has been actively developing a reputation in the corporate world for squeezing the most CPU-bang out of a buck, and a great way to do that is by cutting down on the amount of power a CPU uses.

      A few weeks back there was an article on Slashdot which discussed a previously unseen Google innovation concerning its servers - a 12-volt battery that cut the need for a UPS (which lowered costs by lowering both the power flowing to the CPU and the power required to cool them)…

  • by basementman ( 1475159 ) on Wednesday July 15, 2009 @06:52PM (#28710373) Homepage
    Why not just reroute the weather? Once Google gets into cloud seeding and all that, they really will be SkyNet.
  • by Anonymous Coward

    I wonder if it would be feasible to have massive passive cooling (heat sinks, fans, exhausts from the data center, etc.), run the data centers that are currently in night (i.e. on the dark side of the planet), and constantly rotate the workload around the planet to keep the hottest centers in the coolest parts of the planet. The same logic could be applied to moving workloads between the northern and southern hemispheres.

    Yes, there would be tons more telecommunication to do, with the impacts on performance…

    • by maharb ( 1534501 )

      Except that the highest load on data centers is generally during the local day, or at least not at 5am when it's the coldest. I would imagine routing the traffic all the way to the dark side of the planet would produce less-than-acceptable latency for most uses. This might work for other types of work, but I don't think it would work for anything web- and response-time-based like Google.

      Plus routers/bandwidth aren't exactly cheap, and costs would go up if companies started using these methods. I'm not going…
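      (The latency objection is just speed-of-light arithmetic, sketched here.)

      ```python
      # Round-trip time over fiber, where light travels at roughly 2/3 c.
      C_FIBER_KM_PER_S = 200_000

      def rtt_ms(one_way_km):
          return 2 * one_way_km / C_FIBER_KM_PER_S * 1000

      print(f"nearby DC, 1,000 km:     {rtt_ms(1_000):.0f} ms")   # ~10 ms
      print(f"antipodal DC, 20,000 km: {rtt_ms(20_000):.0f} ms")  # ~200 ms
      # Before any router hops or queueing; fine for batch work,
      # painful for interactive search.
      ```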

      • 1 - You are thinking too small and apparently you are falling for the idea that Google is "just a search engine". It's OK. Many people do that.
        It is not all "web", nor do they have to use all their power for themselves.

        The idea is to offer "virtualized workloads" to your customers - workloads that you then shift around the world to the "cheapest work-area at the moment"
        Which MIGHT cost you a pretty penny - unless you have your own global network of datacenters built for just such a purpose.
        You know... something…

        • Oops! Lost a couple of words there, putting that link in.
          That last sentence should read:

          As for placing datacenters beyond the arctic circle - my guess is that they are going to keep building their datacenters closer to the internet backbones [nemako.net] for some time to come.

    • An Enabler for "Follow the Moon"?
      The ability to seamlessly shift workloads between data centers also creates intriguing long-term energy management possibilities, including a "follow the moon" strategy which takes advantage of lower costs for power and cooling during overnight hours. In this scenario, virtualized workloads are shifted across data centers in different time zones to capture savings from off-peak utility rates.

      This approach has been discussed by cloud technologists Geva Perry and James Urquhart as a strategy for cloud computing providers with global data networks, who could offer a "follow-the-moon" service to enterprise customers who would normally build data centers where power is cheap. But this approach could also produce energy savings for a single company with a global network - someone like Google.
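      (A toy scheduler in the spirit of the strategy described; the site list, UTC offsets, and off-peak window are all invented.)

      ```python
      # Toy "follow the moon": place movable batch work where it is night.
      from datetime import datetime, timedelta, timezone

      SITES = {"belgium": 1, "oregon": -8, "singapore": 8}  # UTC offsets

      def off_peak(utc_now, offset_h, start=22, end=6):
          """True if the site's local hour falls in the cheap night window."""
          local_hour = (utc_now + timedelta(hours=offset_h)).hour
          return local_hour >= start or local_hour < end

      def pick_site(utc_now):
          night_sites = [s for s, off in SITES.items() if off_peak(utc_now, off)]
          return night_sites[0] if night_sites else "belgium"  # fallback

      print("send batch work to:", pick_site(datetime.now(timezone.utc)))
      ```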

  • Investing (Score:3, Funny)

    by corychristison ( 951993 ) on Wednesday July 15, 2009 @06:57PM (#28710433)

    I think Google needs to start investing some time and money into buying or building Nuclear Power Facilities.

    It could pay off for them, because they certainly don't need all of the power they would generate, and could sell some back to the Country/State/Region they build it in.

    Sounds like a win-win to me.

    P.S. - Please don't start a flame war about how Nuclear Power is 'unclean' or 'dangerous' -- in today's society it is cleaner, more efficient, and as safe as, if not safer than, coal-fired generators.

    • by PJ6 ( 1151747 )
      Nuclear reactors have lead times of 10 years or more, and you are proposing this for an internet-based business. Reactors are also insanely expensive and carry enormous political problems. Um, yeah... like, that's totally going to work.
    • by Homburg ( 213427 )

      A nuclear power station does make sense as an investment for Google; after all, they've got plenty of experience [reuters.com] of investing in things that are never going to make a profit [greenpeace.org.uk].

    • Maybe this sort of initiative is just what is needed to renew public interest in nuclear power. If a business like Google can show that it is clean, safe and reliable, perhaps governments and "environmentalists" can see through the FUD and support nuclear for national grid power.

  • As the number of chiller-less data centers in the Northern Hemisphere increases, New Zealand may become the ideal location to build alternate climate data center capacity to deal with hot summers in Europe and Northern America... :)
    • Only problem is getting enough bandwidth to/from New Zealand to make the data center worth it. It's one thing to build one there and have it be super efficient and climate effective, but it's another to have all that greatness and not be able to get enough information to/from it.

      This picture [infranetlab.org] only shows 1 cable from NZ to NA, and none to Europe, so I'd say it's out of the picture.

    • by mdf356 ( 774923 )

      Not all of North America is hot in the summer...

      Average high in SFO for July-September is 72
      Average high in SEA for July-Aug is 76

      Lots of places on the west coast rarely get warm. This is one reason everyone and their kid brother has moved there, which is why real estate is so expensive...

  • by HangingChad ( 677530 ) on Wednesday July 15, 2009 @07:19PM (#28710659) Homepage

    But if your data center is in, say, Minnesota, it seems like you could balance the temperature with outside air for many months out of the year. Obviously you'd need to light up the chillers in the summer, but running them 4 months out of the year seems like a huge energy savings compared to running them year round.

    I remember visiting Superior in the summer and the lake water was freezing f'ing cold even in June. Wonder if you could run a closed-loop heat exchanger without screwing up the lake environment?

    • by LoRdTAW ( 99712 ) on Wednesday July 15, 2009 @07:26PM (#28710723)

      I don't know about natural lakes, but man-made ponds have been used for just that purpose.

      • by TubeSteak ( 669689 ) on Wednesday July 15, 2009 @08:52PM (#28711397) Journal

        I don't know about natural lakes, but man-made ponds have been used for just that purpose.

        Man-made ponds are used because the EPA crawls up your ass if you want to use a natural body of water for any commercial/industrial output.

        Note: I'm not saying that's a bad thing. I'm glad the "good old days," when chemicals, raw sewage, and cooling water were dumped willy-nilly into the waterways and drinking supply, are gone. Warm up an area of water 10 or 15 degrees Fahrenheit and you'll kill most everything living in it but algae.

        • by afidel ( 530433 )
          Well, the Great Lakes are basically barren down in the deepest zones so that's one heck of a heatsink. I know someone was talking about using the water coming into Chicago as a heatsink for free cooling as the water is too cold to be usable immediately anyways (like 56F I think).
      • by dlevitan ( 132062 ) on Wednesday July 15, 2009 @10:22PM (#28712105)

        Cornell University actually did this exact thing to cool a good chunk of the campus. It's called lake source cooling [cornell.edu]. While there will of course be some environmental impact, the energy usage is 20% of that of normal chillers, and thus it is, I'm sure, an environmental net gain.

    • by Five Bucks! ( 769277 ) on Wednesday July 15, 2009 @07:29PM (#28710747)

      They do! Well... not Superior, but Lake Ontario.

      Toronto has a rather large system that uses deep, cool water as a heat sink.

      Enwave [wikipedia.org] is the company that provides this service.

      • by mydots ( 1598073 )
        Geothermal heat transfer is a great way to do cooling (and heating if needed). The initial investment will be high, but the savings will also be high in the long run (and not just in $). Add solar panels and/or wind turbines that can power the heat pump and some of the data center equipment. I had a solar system installed on the roof of my house a few years ago. I received major incentive rebates from the state, I can sell my SRECs, and I get "free" electricity; and in a few more years' time the cost of…
    • by maxume ( 22995 )

      The rough answer is yes you can, but there are probably questions about how much you are willing to screw up the environment and whether or not you can get licensed:

      http://en.wikipedia.org/wiki/Presque_Isle_Power_Plant [wikipedia.org]

      (I'm not asserting anything about how much heat the Presque Isle Plant releases into Lake Superior or about how much damage that heat does, but it probably releases a significant amount of heat, and it probably has some sort of license)

      • The rough answer is yes you can, but there are probably questions about how much you are willing to screw up the environment and whether or not you can get licensed:

        In high school we went down to the Oyster Creek nuclear power plant a few times for class trips. They have a cooling water drain to a creek that eventually feeds into an estuary. In the winter, the area is teeming with wildlife that love it. There are clam beds unlike any area around it. They've actually created fish kills by turning off the flow… Why they can't further extract useful energy from this hot water I don't know.

        • Re: (Score:3, Interesting)

          by jhw539 ( 982431 )
          "Why they can't further extract useful energy from this hot water I don't know."

          I blame that bastard Carnot personally for this... They could get additional work out of that hot water, but it gets prohibitively expensive the lower your delta T between hot and cold gets. I was all stoked about finding some sort of Stirling heat engine to run off some datacenter waste heat, until I worked the numbers and found the annual average maximum theoretical efficiency was under 15%.

          F*cking entropy.
    • by david.given ( 6740 ) <dg@cowlark.com> on Wednesday July 15, 2009 @08:22PM (#28711177) Homepage Journal

      The short answer is yes --- water takes a staggering amount of energy to change temperature (it's one of the many properties the stuff's got that's really weird). A big lake makes an ideal dumping ground for waste heat. What's more, the environmental impact is going to be minimal: even the biggest data centre isn't going to produce enough waste energy to have much effect.

      (A big data center consumes about 5MW of power. The specific heat capacity of water is about 4kJ/kg·K, which means that it takes 4kJ to raise the temperature of one kilogram of water by one kelvin. Assuming all that gets dumped into the lake as heat, that means you're raising the temperature of about 1000 litres per second by one kelvin. A small lake, say 1km x 1km x 10m, contains 10,000,000,000 litres! So you're going to need to run your data centre for ten million seconds, or about 110 days, to raise the temperature by one measly degree. And that's ignoring the cooling off the surface, which would vastly overpower any amount of heat you could put into it.)

      (The same applies in reverse. You can extract practically unlimited amounts of heat from water. Got running water in your property? Go look into heat pumps.)

      In fact, if you were dumping waste heat into a lake, it would make sense to try and concentrate the heat to produce hotspots. You would then use this for things like fish farming. Warm water's always useful.
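      (The arithmetic above checks out; here it is again with slightly less rounding.)

      ```python
      # Redo the parent's estimate with less rounding.
      power_w = 5e6           # 5 MW datacenter, all rejected as heat
      c_water = 4184.0        # J/(kg*K), specific heat of water

      kg_per_s = power_w / c_water                 # water warmed 1 K per second
      print(f"{kg_per_s:,.0f} L/s warmed by 1 K")  # ~1,200 (parent rounded to 1,000)

      lake_l = 1000 * 1000 * 10 * 1000             # 1 km x 1 km x 10 m, in litres
      days = lake_l / kg_per_s / 86_400
      print(f"~{days:.0f} days to warm the lake 1 K")  # ~97; "about 110" with
                                                       # the parent's rounding
      ```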

    • by CAIMLAS ( 41445 ) on Wednesday July 15, 2009 @10:00PM (#28711943)

      At those latitudes the subterranean temperature remains pretty constant all year long. Drill into the side of a mountain or hill with a boring tool, leave the edges rough (with a smooth poured/paved floor for access), and just drop your server containers in there with power coming in. If you go all the way through the hill you can use the natural air currents to push/pull air through the tunnels, and the natural heat-absorption qualities of stone will keep the temperature down. I'd be surprised if any active "cooling" were needed at all.

  • Yakhchal (Score:5, Informative)

    by physicsphairy ( 720718 ) on Wednesday July 15, 2009 @07:28PM (#28710745)

    The ancient Persians had a passively cooled refrigerator called the yakhchal [wikipedia.org] which "often contained a system of windcatchers that could easily bring temperatures inside the space down to frigid levels in summer days."

    Perhaps the Google datacenter could employ some variation of their technique.

    • "often contained a system of windcatchers that could easily bring temperatures inside the space down to frigid levels in summer days."

      Sounds like something out of Dune.
    • Re: (Score:3, Interesting)

      by CAIMLAS ( 41445 )

      This reminds me of a technique for cooling water in a desert which could plausibly be applied to the data center as well.

      Basically, a container is filled with water, closed/sealed, and wrapped with a damp/wet towel and buried in the ground (or just placed somewhere in the sun, I suppose). The evaporation of the moisture in the rag will draw the heat from the inside of the container, resulting in frigid water.

      Put a data center on a dry coastal equatorial area and harness solar to desalinate the water. Build the…

    • Re:Yakhchal (Score:4, Interesting)

      by Bruce Perens ( 3872 ) * <bruce@perens.com> on Thursday July 16, 2009 @01:29AM (#28713255) Homepage Journal
      You need both windcatchers and an underground water reservoir (a qanat). The windcatchers create a lower-pressure zone which pulls air in through the qanat. There is evaporative cooling in the qanat. I don't think this would get near freezing temperature unless your water source is really cold.

      There is a way to make ice in a dry environment by exposing water to the coolness of the night sky and insulating it during the day.

  • by History's Coming To ( 1059484 ) on Wednesday July 15, 2009 @07:42PM (#28710867) Journal
    So the fundamental upshot is that the point-to-point speed of the internet will be directly correlated with the average temperature of various cells, on a large scale. The statistical effect will be there. I'd wager this will be a remarkably accurate and near real-time barometer of global temperature.
  • Good to see. (Score:3, Interesting)

    by Sir Hossfly ( 1575701 ) on Wednesday July 15, 2009 @07:43PM (#28710869)
    It's good to read some good news for a change... but it won't hit too many headlines: "Giant Googlebillion-dollar Company Doing Something Good". This "good" I speak of is someone with means and vision getting out there and just doing something. I still think Google could easily turn to the dark side... but that is a whole different post ;)
  • by SpaFF ( 18764 ) on Wednesday July 15, 2009 @07:47PM (#28710897) Homepage

    I'm not sure I understand why they constructed their own water treatment plant. I would think that it would be more energy efficient on the whole to use the already constructed municipal system in the area.

    • Re: (Score:3, Informative)

      by seifried ( 12921 )
      You know what water costs in bulk? It adds up pretty quickly. Plus they don't need potable (drinkable) water; they need water that won't clog their system up.
    • Re: (Score:3, Informative)

      Municipal water (at least here in the US) means "chlorinated water". Chlorine does terrible things to pipes, coolers, pumps - everything. Having your own water treatment system means the chlorine never gets in, saving bundles in maintenance. To get an idea, find two similar water-cooled vehicles - one which has had chlorinated water added to the radiator routinely, and another whose owner has been more choosy. Look down into those radiators. I've actually seen copper radiators corroded out in states that…

    • by dbIII ( 701233 )
      They want water treated to work well with their cooling equipment, not just water good enough to drink. For instance, where I am there are a lot of manganese salts in the drinking water that are perfectly safe to drink but tend to stick to hot surfaces. Using this water you would eventually clog up the pipes of a cooling system with the same brown gunk you get as a thin layer on the inside of electric kettles. There is other stuff that can precipitate out at different temperatures and basically leave you…
  • Why didn't they just build it in Maine?
  • Lots of cool weather and lots of cheap geothermal power.

  • by PensivePeter ( 1104071 ) on Thursday July 16, 2009 @12:44AM (#28712961)
    I wonder how much this is a cynical marketing and public-policy exercise. A few months ago, the European Commission announced an ambitious programme for the IT industry, with European energy-conservation targets to be met by 2012, and lo and behold, look who's here preening its feathers?
