
Google Demands Higher Chip Temps From Intel

JagsLive writes "When purchasing server processors directly from Intel, Google has insisted on a guarantee that the chips can operate at temperatures five degrees centigrade higher than their standard qualification, according to a former Google employee. This allowed the search giant to maintain higher temperatures within its data centers, the ex-employee says, and save millions of dollars each year in cooling costs."

  • Comment removed (Score:3, Insightful)

    by account_deleted ( 4530225 ) on Wednesday October 15, 2008 @10:56AM (#25382755)
    Comment removed based on user account deletion
    • by Thelasko ( 1196535 ) on Wednesday October 15, 2008 @11:07AM (#25382995) Journal

      Wouldn't Intel run into physical limitations

      Google is most likely getting the best chips out of Intel's standard production. It's akin to sorting out the best bananas at the grocery store. This sort of privilege happens when you buy enough products from a supplier.

      If they were demanding much more than 5 degrees then I would say they are getting custom made chips, but I don't think that's the case.

      • Re:Is this possible? (Score:4, Interesting)

        by Dr. Manhattan ( 29720 ) <<sorceror171> <at> <gmail.com>> on Wednesday October 15, 2008 @11:56AM (#25383873) Homepage
This happens with resistors, too. If you want one within 5% of the nominal ohmage, you pay more. If you want one within 10%, you pay less, but you'll find that they're all either about 10% low or 10% high, with a 'notch' in the center of the distribution. Same production process used, but they skim off the 'good ones' and charge more for them.
        • Re:Is this possible? (Score:5, Interesting)

          by Thelasko ( 1196535 ) on Wednesday October 15, 2008 @12:11PM (#25384159) Journal
          Legend has it [arstechnica.com] that the Celeron processor began its life as a way for Intel to make money off of the Pentiums that didn't pass quality control. If they sell the low performing processors at a discount, why shouldn't they sell the over performing ones at a premium?
          • Re:Is this possible? (Score:5, Informative)

            by TheRaven64 ( 641858 ) on Wednesday October 15, 2008 @12:39PM (#25384737) Journal
It's not just the Celerons, it's most CPUs. A modern CPU is quite big and contains a lot of components that aren't essential to the operation of the chip. If you disable these, you have a slightly less good chip without the engineering cost of designing an entirely new die layout. AMD takes this to extremes. Their Opterons have 4 cores, three HyperTransport connections and a load of cache. If there is a manufacturing flaw in the cache, they are sold as a model with less cache. If it's in the cores, then they are sold as three-, two-, or single-core chips. If it's in the HT controllers then they only support 2- or 4-way multichip configurations. Intel's 486SX line was just a 486 (later renamed the 486DX) where the FPU didn't work.
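For illustration, the harvesting logic described above fits in a few lines of Python. The defect categories and SKU names are invented, not AMD's or Intel's actual test flow:

    # Toy model of die harvesting: map where the manufacturing flaw landed
    # to the SKU the die can still be sold as. All names are hypothetical.
    def bin_die(defects):
        if not defects:
            return "flagship: 4 cores, full cache, 3x HT"
        if defects == {"cache"}:
            return "4 cores, reduced cache"
        if "core" in defects:
            return "harvested 3-, 2-, or 1-core part"
        if "ht" in defects:
            return "2- or 4-way-only part"
        return "scrap"

    print(bin_die(set()))               # flagship bin
    print(bin_die({"cache"}))           # reduced-cache bin
    print(bin_die({"core", "cache"}))   # fewer-core bin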
        • Re: (Score:3, Insightful)

          by aztracker1 ( 702135 )
Well, that's how we get the difference between a 2.4GHz and a 3.16GHz CPU as it is anyway... the higher-clocked ones are those that test well enough to run at that higher speed with stability. That's also why there are lower-clocked CPUs that will overclock like mad (because demand is lower than yields for higher-clocked parts). ... Google is probably buying the best of the best. It's not like Intel had to change a lot for that to happen. Server CPUs (Xeons, Opterons) are those that are found to stand up …
      • Google is most likely getting the best chips out of Intel's standard production.

I should also note that Intel is guaranteeing these processors will survive at temperatures 5 degrees higher. They may not even sort through the processors they sell to Google. They are just betting that they will survive based on failure statistics they already have. They may have a higher failure rate with a 5 degree increase in temperature, but the cost of the warranty replacements is more than offset by the premium they charge Google.

The key point is that Intel isn't custom-making these processors, …
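The economics of that bet are easy to sketch. A minimal back-of-envelope in Python, with every number invented for illustration:

    # Intel's side of the deal: premium collected vs. expected extra
    # warranty replacements at the higher temperature. Numbers invented.
    units            = 100_000
    premium_per_chip = 20.0    # extra price per chip for the 5 C guarantee
    base_fail_rate   = 0.02    # assumed lifetime failure rate at stock spec
    hot_fail_rate    = 0.03    # assumed failure rate when run 5 C hotter
    replacement_cost = 300.0   # assumed cost per warranty replacement

    extra_revenue  = units * premium_per_chip
    extra_warranty = units * (hot_fail_rate - base_fail_rate) * replacement_cost
    print(f"premium collected:  ${extra_revenue:,.0f}")     # $2,000,000
    print(f"extra replacements: ${extra_warranty:,.0f}")    # $300,000
    # The guarantee pays off for Intel while extra_revenue > extra_warranty.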

      • Re: (Score:3, Interesting)

        by rufty_tufty ( 888596 )

        Best isn't a term you use in test programs. At a certain temperature a chip will run at a certain speed or it won't.
        All that's going on here is that Intel will be altering its binning process to separate out the chips that are capable of running at 75 degrees from those that can't.

Not really any different at all from the current process of binning based on speed grade. All it'll be is that a different set of parameters in the test structures will cause the chip to go into a different bin.

Now what interests me is …

    • Re:Is this possible? (Score:4, Interesting)

      by onitzuka ( 1303967 ) on Wednesday October 15, 2008 @11:09AM (#25383013)

I'm surprised Google isn't considering moving some of its data centres to Arctic locations where you get cool temperatures year-round.

There is no reason to be surprised. It is cheaper not to move the data center somewhere colder, and instead just make all upgraded hardware use the new chips. Google's budget calls for hardware upgrades already. Upgrading to CPUs with higher temp tolerances means they pay the same $X-thousand for the box they would anyway and simply turn the thermostat up.

      A net savings.

    • Re:Is this possible? (Score:5, Informative)

      by Ngarrang ( 1023425 ) on Wednesday October 15, 2008 @11:10AM (#25383031) Journal

      I'm surprised Google isn't considering moving some of its data centres to Arctic locations where you get cool temperatures year-round.

Don't kid yourself. They probably have. But then, who would you get to work at what would probably be a very remote location?

      Additionally, such remote locations may suffer from not enough bandwidth and/or electricity.

      • Not to mention high RTTs.

      • Re: (Score:3, Interesting)

        by shaka999 ( 335100 )

You know, they wouldn't have to go that far. I'm in Colorado; if they put a data center in one of the higher mountain towns, I imagine they could significantly reduce their costs.

I guess the other thing they could look at is using a heat exchanger and putting that excess heat toward hot water heating or something.

        • by Anpheus ( 908711 )

          Yeah but then they'd have to fight with Stargate Command for space.

• Higher-latitude locations would work better; you could be closer to the coastline, or at a lower elevation, while still having cooler weather... Many parts of Canada and the Great Lakes area would work pretty well. You don't want to move too far out though (Arctic), as this would increase latency, which needs to be a consideration as well.
        • by mikael ( 484 )

One office block I once visited had an IBM mainframe located in the basement. One of the problems that the data center operators had was with homeless people camping down beside the exhaust air vents. These vents generated so much heat that in winter you could still walk around in this "bubble" of warm air in a short-sleeved shirt, while 3-4 metres away you would be frozen to your bones.

      • by rrhal ( 88665 )

They could go to Fairbanks, AK (technically temperate sub-arctic) - there's enough electricity (not as cheap as the Hood River, OR site they currently use). It's plenty cold, there's a well educated and underutilized labor pool there, and good fiber optic networks exist between Fairbanks and Seattle/Tokyo.

      • by macraig ( 621737 )

        No, not possible: you forgot about that ZPM down in Antarctica. How many data centers can ya power with a ZPM? Doctor McKay probably knows... let's ask him.

    • Re:Is this possible? (Score:5, Interesting)

      by Zocalo ( 252965 ) on Wednesday October 15, 2008 @11:11AM (#25383049) Homepage

      Two words: "Free Cooling"

The greater the difference between your data centre's output air temperature and whatever passive external cooling system you are pumping it through, the more heat you can dump at zero cost. That's monetary cost as well as the cost to the environment through the energy "wasted" in HVAC systems and the like. Google has a PUE (Power Usage Effectiveness; the ratio of total power input to power actually used for powering production systems) approaching 1.2 vs typical values of around 2.5-3.0 - Microsoft is around 2.75 as I recall - so they are clearly doing something right.
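Those PUE figures translate directly into megawatts. A quick sketch using the numbers quoted above; the 10 MW IT load is an arbitrary example, not a Google figure:

    # What PUE means in practice: total facility power = IT load * PUE.
    it_load_mw = 10.0
    for name, pue in [("PUE 1.2 (Google, reportedly)", 1.2),
                      ("PUE 2.75 (typical)", 2.75)]:
        total = it_load_mw * pue
        print(f"{name}: {total:.1f} MW total, "
              f"{total - it_load_mw:.1f} MW on cooling and overhead")
    # -> 2.0 MW of overhead vs 17.5 MW for the same useful work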

      • by terraformer ( 617565 ) <tpb@pervici.com> on Wednesday October 15, 2008 @11:50AM (#25383799) Journal
Two more words: uncontrolled humidity. Yes, there are efficiency gains to be had everywhere, but none of them are free, only less costly. Bringing in moist outside air requires that air to be dehumidified, at a cost, obviously. If you go with a heat exchanger, the amount of cooling decreases significantly and so you need more of them, and each one DOES require energy to operate (i.e. to move air/liquid/whatever through).
        • This humidity: how much is this really an issue? (genuine question). I can imagine that when you reach 100% and your equipment is cooler than ambient you get condensation issues. However here we are talking about equipment that needs cooling, i.e. is well above ambient temperatures. Condensation is therefore surely not an issue.

If you would say "dust", that I can see as an issue as it clogs up ventilation openings and can put a nice blanket on equipment keeping it all nice and hot. Dust however is very eas…

• Humidity can kill electronics. I recall some years back a guy down in Louisiana took his new camera out and about for a couple of weeks. It was a really nice camera which he'd just spent $1500 on, and within a few weeks it failed due to corrosion.

            So yes, humidity is a very serious thing and if you're not planning for it you can end up with destroyed equipment very quickly.

          • Re: (Score:3, Insightful)

            by terraformer ( 617565 )
Actually it is a very good question and no one really knows the answer. Most of our IT equipment standards are derived from telecom standards from years ago. It may very well be that the tolerances are too tight and we can move away from highly controlled environments, but not enough is known about today's equipment at high or low (static problems) humidities to understand the consequences of doing so. As for dust, which is a known issue in anything with moving parts or things that don't do well with interr…
    • by iminplaya ( 723125 ) on Wednesday October 15, 2008 @11:18AM (#25383171) Journal

      I'm surprised Google isn't considering moving some of its data centres to Arctic locations where you get cool temperatures year-round.

      Oh yeah! And completely melt the ice cap.

      • by thermian ( 1267986 ) on Wednesday October 15, 2008 @11:57AM (#25383881)

If you had, say, a town-sized data centre installation, it would probably have about the same effect as a smallish and partially active volcano, of which there are many in northern latitudes. Pretty much nothing apart from local effects, which is, in spite of the green crazies' rantings, not too bad, not compared to the alternatives.

What you wouldn't have is as much need for additional power to cool, which of course saves the pollution caused by its generation. You should bear in mind that the colder parts of the Earth are being far more seriously affected by pollutants in the atmosphere than by anything which is just warmer than its surroundings.

As for why I said green crazies: well, if they hadn't been so all-fired determined to put governments off nuclear power, we'd have that instead of all these coal-burning plants. Now we have massive pollution problems and a truly gigantic cost for building all those nuclear plants in a shorter time, instead of gradually over the last three or four decades.

• Yeah, think of the poor polar bears... Images of sad little polar bears, trapped on an ice floe, come to mind... Curse you, Al Gore!
    • by jimicus ( 737525 ) on Wednesday October 15, 2008 @11:39AM (#25383585)

      Wouldn't Intel run into physical limitations that simply don't allow chips to run at that low a temperature? I'm surprised Google isn't considering moving some of its data centres to Arctic locations where you get cool temperatures year-round.

      Are you serious? Neither the Arctic nor the Antarctic is well known for reliable power or fast Internet connections.

      • Re: (Score:3, Interesting)

        by Skapare ( 16644 )

Iceland. It has cheap geothermal energy. And that's energy that's going to heat the Earth anyway, similar to solar. They just need some big pipes between there and North America and Europe.

    • Re: (Score:3, Interesting)

      Wouldn't Intel run into physical limitations that simply don't allow chips to run at that low a temperature? I'm surprised Google isn't considering moving some of its data centres to Arctic locations where you get cool temperatures year-round. We've seen reports of appealing places like that on Slashdot before. (Of course, that would just be a short-term fix before we move the Earth to a farther orbit around the sun to avoid suffocating in our own waste heat like the Puppeteers in Niven's Ringworld [amazon.com] ).

      I doubt anything physical is being done. Intel is very conservative in setting maximum operating temperatures. They're probably just promising Google that they'll cover those operated 5 C hotter under their warranty. If anything is actually being done to the hardware it's probably just altering the temp at which throttling occurs.

    • Re: (Score:3, Interesting)

      by Surt ( 22457 )

      It's going to be far cheaper to build radiator fins extending into space than to move the earth's orbit, barring some innovative invention in the orbit moving department. Also, orbit moving has the downside of reducing the solar flux, which will be bad for our solar energy efforts.

  • Environmental impact (Score:2, Interesting)

    by phorm ( 591458 )

Uhhhh. Wouldn't making chips a bit more efficient be better, as opposed to making them "less likely to burn out at higher temps"?

Seems that Google's not really thinking green in this case (despite the pretension to do so in others), unless they plan on making use of the datacenter heat elsewhere.

    • by LWATCDR ( 28044 ) on Wednesday October 15, 2008 @11:00AM (#25382857) Homepage Journal

      Not really.
      No matter how cool the chips run they will put out heat. If you have two chips that run at X and use Y watts you will save power if you can run them a little hotter and use less power for cooling.
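One way to put numbers on that argument: a chiller's coefficient of performance (COP) improves as the allowed room temperature rises, so the same chip heat costs less to remove. The COP values below are invented for illustration:

    # Electrical power spent on cooling = heat removed / chiller COP.
    # A warmer setpoint generally means a higher COP. Values invented.
    chip_heat_kw = 1000.0              # heat the chips dump either way
    for label, cop in [("cold room", 3.0), ("5 C warmer", 4.0)]:
        print(f"{label}: {chip_heat_kw / cop:.0f} kW of cooling power")
    # -> 333 kW vs 250 kW to remove the same IT heat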

    • by Rogerborg ( 306625 ) on Wednesday October 15, 2008 @11:04AM (#25382915) Homepage
      Ideally, yes. And ideally, I'd come home to find Alyson Hannigan oiled up and duct taped to my bed.

      Pragmatically, if they can't run cool, then it's more efficient to run them hot than to spend more energy actively cooling them.

    • by I.M.O.G. ( 811163 ) <spamisyummy@gmail.com> on Wednesday October 15, 2008 @11:06AM (#25382955) Homepage

I'm not sure what you're getting at - if by doing this they are saving millions in cooling expense, they are certainly using less energy. What is "going green" if it isn't energy conservation? The fact that the conservation comes from less work for their AC units rather than efficient little processors is immaterial.

Don't expect any company to do things because it's right - but good companies will find win-win situations where they cut costs and do things to "Go Green".

      • by Moraelin ( 679338 ) on Wednesday October 15, 2008 @11:41AM (#25383621) Journal

Yes, but the way I see this is:

        Intel isn't arbitrarily going, "man, we could make chips that run ok 5 degrees hotter, but we're gonna piss everyone off by demanding more cooling. Just because we can." Most likely Intel is already doing the best it can, and getting a bunch of chips which vary in how good they are. And they're getting the same bunch of chips regardless of whether Google demands higher temps or not.

Google just gets a cherry-picked bunch, but the average over Intel's production is still the same. Hence everyone else is getting a worse selection. They get what remains after Google took the best.

        It's a zero-sum game. The total load on the planet is the same. The same total bunch of chips exits Intel's fabs. On the total, no energy was conserved.

        So Google's "going green" is at the cost of making everyone else less "green". They can willy-wave about how energy efficient they are, by simply dumping the difference on someone else.

        That's not "going green", that's a predatory approach. Your computers could require on the average an extra 0.5W in cooling, so Google can willy-wave that theirs uses 1W less. They just dumped their costs and their "eco burden" to someone else.

It's akin to me willy-waving that I'm so green and produce less garbage than you... by dumping some of my garbage in random other people's garbage bins across the town. Yay, I'm so green now, you all can start worshipping me. Well, no: on the total, the same amount of garbage is being produced; I just dumped it and the related costs on other people. That's not going green, that's being a predator.

        I can see why a business might want to cut their own costs, and not care about yours. That's, after all, the whole "invisible hand" theory. But let me repeat it: on the whole no energy was conserved. They just passed off some of their cooling costs (energy _and_ money) to someone else.

        • by Muad'Dave ( 255648 ) on Wednesday October 15, 2008 @11:47AM (#25383745) Homepage
          I agree with most of what you said, but I think there is a _slight_ difference between Google having the higher-temp-rated chips and your average Joe user having them. Google's chips will be running full throttle/full temp 24/7; Joe user might run them full blast (and therefore full temp) 2% of the time. I bet the energy savings are not insignificant when usage patterns are taken into consideration.
          • by nabsltd ( 1313397 ) on Wednesday October 15, 2008 @01:22PM (#25385627)

            Google's chips will be running full throttle/full temp 24/7

            Is there any documentation for this?

            I seriously find it hard to believe that Google has every processor they own running 24/7 at 100% utilization. Other than the computation problems like SETI and protein folding, most problems are I/O bound, and I would think that the stuff Google does would involve a lot of I/O.
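The duty-cycle point in this subthread is easy to quantify. A sketch with invented utilisation and power figures (real idle/load draw varies widely by platform):

    # Average energy per CPU-year at different duty cycles. All invented.
    full_w, idle_w = 130.0, 40.0       # draw at full load vs. idle
    hours_per_year = 24 * 365
    for name, duty in [("datacenter (busy)", 0.60), ("desktop", 0.02)]:
        avg_w = duty * full_w + (1 - duty) * idle_w
        print(f"{name}: {avg_w * hours_per_year / 1000:.0f} kWh/yr per CPU")
    # -> roughly 823 vs 366 kWh/yr under these assumptions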

        • Re: (Score:3, Insightful)

You miss the point entirely. Google wants Intel's processors to operate properly at a higher temperature than currently spec'd, which will allow Google to use less cooling. They want the processors to tolerate more heat, not generate more heat. This means Intel will have to make the processors slightly more beefy, but how much does that really cost across millions of units once the design work is done? A few bucks per processor, tops.

Google pays dozens of dollars a MONTH to cool each processor. Intel making …

        • by MadCow42 ( 243108 ) on Wednesday October 15, 2008 @12:31PM (#25384587) Homepage

          >> So Google's "going green" is at the cost of making everyone else less "green". They can willy-wave about how energy efficient they are, by simply dumping the difference on someone else.

The difference is that Google is going to actively exploit the ability of those hand-picked CPUs to run hotter. Chances are that the users who would have otherwise received those chips would not reap any energy savings from their capabilities.

          At a minimum, Google is contributing here by forcing a vendor to differentiate chips that have a capability of running hotter from ones that don't. No matter who uses that capability, it's a benefit to the planet (versus the alternatives at least).

          MadCow.

    • by gstoddart ( 321705 ) on Wednesday October 15, 2008 @11:12AM (#25383071) Homepage

      Seems that google's not really thinking green in this case (despite the pretension to do so in others), unless they plan on making use of the datacenter heat elsewhere.

The amount of energy you need to use to cool that stuff is quite significant. And, in case you haven't realized it, generating cool air also creates more warm air, it's just not in the data center. It's usually vented right outside.

      If the chips could run hotter, they'd have to use less energy to cool the data center, and generate less waste heat from that very cooling in the first place.

      I'm not convinced that what they're asking for isn't a good idea.

      Cheers

    • Well, cooling them down takes energy and creates heat, so depending on how much you can save in that area vs. the extra expended with the chips, it might end up being more efficient overall.

    • by Surt ( 22457 )

Uhhhh. Wouldn't making chips a bit more efficient be better, as opposed to making them "less likely to burn out at higher temps"?

Seems that Google's not really thinking green in this case (despite the pretension to do so in others), unless they plan on making use of the datacenter heat elsewhere.

      Yes, assuming those were the tradeoffs, which they weren't.

      Google is trading existing performance levels against reduced cooling, which is a pure win, just by demanding more resilient chips from Intel.

    • Re: (Score:2, Interesting)

      by aperion ( 1022267 )

It doesn't sound like Google is asking for less efficient chips; that's a retarded notion. Instead it sounds like Google is asking for more durable chips.

One that still operates at X% efficiency, but at a higher ambient temperature, say 70F instead of 65F. I'm sure Google would like it better if the chips didn't produce any heat (i.e. 100% efficient), but that's impossible.

Still, if you want to "Go Green" and be environmentally friendly, stop viewing heat as a problem. It's better to try and reclaim …

  • by Junior J. Junior III ( 192702 ) on Wednesday October 15, 2008 @10:57AM (#25382791) Homepage

    If you don't have the clout of a Google-sized organization to buy higher-rated chips from Intel, I wonder if you can basically achieve the same thing by underclocking. An underclocked chip will run cooler, but I don't know if it'll run more stably at higher temps, although I think it would.

    Does anyone have any experience with doing this?

    I think it'd be interesting to see whether the cost savings in power and cooling is offset by the cost of the performance losses.

• In the past, I've under-clocked primarily for noise benefits. Lower clock -> lower temp -> slower fan RPM -> lower dB SPL.

    • by cduffy ( 652 )

      Several years ago I underclocked my home system (which was initially quite unreliable) and saw a substantial decrease in uncommanded reboots. The issue went away almost entirely (even at full speed) when I upgraded the power supply; I suspect that I was running near the edge of what it could handle, and underclocking (by reducing the CPU's power draw) moved its usage far away enough from that boundary to make a difference.

• As an overclocker, I can say your statement tends to be true in practice. The cooler a chip is kept, the closer you can get to its maximum overclocking frequency - the frequency beyond which the chip exhibits instability. Similarly, the lower the frequency is set, the warmer a temperature the chip will generally handle with stable operation. These are general trends; processors from different fabs or batches perform differently - but within the same batch of processors, you can reliably test and observe these results.
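The reason underclocking cools a chip is the dynamic-power relation P ≈ C·V²·f, and a lower clock usually tolerates a lower voltage too. A rough sketch with illustrative voltage/frequency pairs (real chips add static leakage on top of this):

    # Relative dynamic power, P ~ C * V^2 * f, against an assumed
    # 3.0 GHz / 1.30 V stock operating point. Numbers illustrative.
    def rel_power(volts, ghz, v0=1.30, f0=3.0):
        return (volts / v0) ** 2 * (ghz / f0)

    print(f"stock 3.0 GHz @ 1.30 V:      {rel_power(1.30, 3.0):.2f}x")
    print(f"underclock 2.4 GHz @ 1.15 V: {rel_power(1.15, 2.4):.2f}x")
    # -> ~0.63x dynamic power, hence cooler and with more thermal headroom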

  • Not mentioned in the story. What CPU are they talking about, and what is the upper end Google is looking for?

    (and this having to wait five minutes between posts is moronic. Look at my posting history, and all of them from the same IP address. Tell me why I have to wait this long to post.)

  • by LaminatorX ( 410794 ) <sabotage@prae[ ]tator.com ['can' in gap]> on Wednesday October 15, 2008 @10:59AM (#25382817) Homepage

When in college, I heated my crappy little shack by putting 150W bulbs in every light. It was like my own little Easy-Bake oven.

    • Re: (Score:2, Troll)

      by Abreu ( 173023 )

      I gather you were not paying the electricity bill...

      • Re: (Score:3, Informative)

        by LaminatorX ( 410794 )

The place was all electric. An incandescent bulb compares favorably to many space heaters in terms of electricity-to-heat efficiency, and you get light as a bonus.

  • If the chips failed prematurely at these higher temperatures, the former Googler says, Intel was obliged to replace them at no extra charge.

Intel denies this was ever the case. "This is NOT true," a company spokesman said in an email. Google declined to comment on its relationship with Intel. "Google invests heavily in technical facilities and has dozens of facilities around the world with many computers," reads a statement from the company. "However, we don't disclose details about our infrastructure or su…

  • When you are a big company that spends enough money, you can ask for this sort of thing and your demand will be met.

    "Guarantee us a higher temp CPU or we will switch to AMD...and tell everyone about it."

I have a feeling that the CPUs can handle a bit more temp than they are rated for, as a CYA move by Intel, anywho.

    • by stilwebm ( 129567 ) on Wednesday October 15, 2008 @11:22AM (#25383255)

      "Guarantee us a higher temp CPU or we will switch to AMD...and tell everyone about it."

      That's not really how the negotiation goes in this type of situation where there are two major supplier choices (AMD and Intel) and Google is a relatively small customer when compared with Dell, HP, IBM, etc.

In all likelihood, the negotiation is more of a partnership where both parties work together to create value. Google says, "We buy thousands upon thousands of your chips, but we also pay millions of dollars annually to cool them. We'd be willing to pay a little premium and lock in more volume if you can help us save money by increasing the temperature inside our data centers." Google has done the math and comes prepared with knowledge of how much those specs can save them and a forecast of chip needs over the next 12-18 months, and the two work together to create value. For example, Google might offer to allow endorsements (as they did with Dell for their Google appliances) in exchange for favorable pricing and specifications.

      The "do this or I'll switch" tactic only works well when there are many suppliers and products are relatively undifferentiated, like SDRAM or fan motors.

      • Re: (Score:3, Interesting)

It also wouldn't surprise me if Google were willing to offer something of a testbed setup. A while back, they put out that report on HDD reliability and its influences, so they are obviously watching that sort of thing. And, since their historical style has been very much about redundant masses of commodity gear, they can theoretically tolerate slightly higher failure rates if those lower their costs in other ways.

I suspect that, with negotiation to set the correct balance of pricing, warranty, access to handp…
• In reality that IS true.
The Very Large bank where I was employed earlier had a special agreement with Microsoft.
They got a customized version of XP meant specially for the bank's hardened network. Yeah, I know the Admin kit allows customization, but I mean at a lower level: the NSA hookups in the system DLLs were not present!
As soon as a Dell entered the bank, it was wiped, and this OS was installed. It was a weird mix of a little Linux bootup code and XP.
You had all rights of an admin EXCEPT when it comes …

  • by Ancient_Hacker ( 751168 ) on Wednesday October 15, 2008 @11:14AM (#25383095)

    This sounds like a scenario where lawyers are trying to act as engineers. That works about as well as you might expect.

There are these engineering things, amusingly called "Schmoo plots", that map out a chip's operating envelope of voltage versus speed versus temperature. From those an engineer can foresee how hot you can run a chip before its rise and fall time margins get too thin.

There is very little Intel can do to stretch things by another 5 degrees. It's not something that can be imposed by fiat. Intel engineers have already juggled all the variables to come up with the best performance possible. SOMETHING is going to have to give. Either the chips will have to be selected and graded for speed, lowering the overall envelope for the chips everyone else gets, or they'll have to fudge some other parameters, hoping nobody will notice, or worse yet they'll tweak some variable right to the edge of raggedness, resulting in worse reliability down the road.

Lawyers and accountants generally don't know you can't have everything. Let's hope the engineers educate them.
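For readers who haven't met one, a shmoo plot is just a pass/fail map over the operating envelope. A toy sketch, with a made-up margin model standing in for real silicon:

    # Crude "schmoo" sweep: mark pass/fail over a voltage/frequency grid
    # at a fixed temperature. passes() is a hypothetical margin model.
    def passes(volts, ghz, temp_c):
        return volts * 10 - ghz * 2.0 - temp_c * 0.02 > 4.0

    for v in (1.1, 1.2, 1.3, 1.4):
        row = "".join("#" if passes(v, f / 10, temp_c=65) else "."
                      for f in range(20, 41, 2))   # 2.0 to 4.0 GHz
        print(f"{v:.1f} V  {row}")
    # Raising temp_c shrinks the passing region; that shrinkage is the
    # margin Google is asking Intel to guarantee away.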

    • by mindstrm ( 20013 ) on Wednesday October 15, 2008 @11:25AM (#25383333)

      Odds are this is being driven by a data-center engineering team, who are looking at the cost savings of running their data center 5 degrees hotter.

      You don't get what you don't ask for.

      Intel will do exactly as much engineering as necessary to keep their target market up, and no more.

      If the market wants chips that operate 5 degrees hotter.. the engineers will do their job and see if it can be done. Intel will charge a premium for this.

      That's business.
       

    • Right on.
Of course, Intel will give them whatever they want because Google is such a large customer. And will then pay in terms of higher failure rates, hence warranty costs. And Google will notice the same thing, assuming they do decent data gathering on failures, and find out that this is a really bad idea because those failures cost them even more than they cost Intel.
      Seems to me like bean counters are trying to beat physics.

    • by Alereon ( 660683 )

There is very little Intel can do to stretch things by another 5 degrees. It's not something that can be imposed by fiat. Intel engineers have already juggled all the variables to come up with the best performance possible. SOMETHING is going to have to give. Either the chips will have to be selected and graded for speed, lowering the overall envelope for the chips everyone else gets, or they'll have to fudge some other parameters, hoping nobody will notice, or worse yet they'll tweak some variable right to …

  • Re: (Score:2, Flamebait)

    Comment removed based on user account deletion
    • Dell did (does?) the same thing by having higher temperature specs for their servers than the rest of the industry. Of course customers will see higher failure rates if they actually use the larger margin.
      It's teh physics, stupid!

• Google said recently that it runs its data centers at 80 degrees [datacenterknowledge.com] as an energy-saving strategy, so chips that support higher temperatures would mean fewer hardware failures in their data center. Most data centers operate in a temperature range between 68 and 72 degrees, and I've been in some that are as cold as 55 degrees. Lots of companies are rethinking this. In the Intel study on air-side economizers, they cooled the data center with air as warm as 90 degrees. ASHRAE is also considering a broader temperat…
  • by hackus ( 159037 ) on Wednesday October 15, 2008 @11:32AM (#25383471) Homepage

    Most of the power supply systems for my servers, which are HP G3-5 systems of various U sizes, tend to waste more power as temperature goes up.

This has nothing to do with CPUs though. It is the power supplies on the machines. As temperature goes up, efficiency goes down. At around 80 degrees I noticed a significantly larger draw on the power supply with my amp meter.

I had a gaming system with two ATI 4870s, and the 800 Watt power supply would crash my machine if I did not run the air conditioner and keep the room at 70 degrees after some fairly long Supreme Commander runs.

I noticed that the amperage would go up, and the power output would go down, as the temperature went up.

    I have not conducted any experiments in a lab setting with this stuff, but from experience, jacking the temperature up usually makes power supplies work harder and makes them less efficient.

    -gc
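The efficiency effect described above is simple to tabulate. A sketch with an invented efficiency-vs-temperature curve and a hypothetical 500 W DC load:

    # Wall draw for a fixed DC load as PSU efficiency falls with intake
    # temperature. The efficiency curve is invented for illustration.
    dc_load_w = 500.0
    for temp_c, eff in [(20, 0.85), (30, 0.83), (40, 0.80)]:
        wall_w = dc_load_w / eff
        print(f"{temp_c} C intake: {wall_w:.0f} W from the wall, "
              f"{wall_w - dc_load_w:.0f} W lost as heat in the PSU")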

    • Re: (Score:3, Informative)

Traditional mains-to-many-voltage DC power supplies certainly suffer from lower efficiency at higher temperatures. Running two 4870s on a single 800W power supply probably isn't a good idea, especially if you have a high-powered CPU to go with them. Most quality power supplies will be rated lower than their maximum output to allow for temperature concerns.

These things said, Google uses custom power supplies and systems that only run on 12V. These power supplies may be easier to generate in quality batches, bu…

• Do you think maybe that Google is using DC power? That way they have just a few large power supplies in another room and send DC power over large copper bus bars to the racks. These DC systems are expensive, but you make the money back in power/cooling, and you save money with the UPS too.

• Somehow I doubt datacentres like the ones Google operates use switching power supplies located next to the hardware they power, like in your home computer. I for one would consider building a single power supply pushing a lot of amps through some fat cables that branch off to wherever power is needed.

      But then I've never seen a datacentre from the inside, so I may be totally wrong.

  • I might be missing something here, but why would Google be demanding "higher" chip temps to save on cooling??

    Surely they should be demanding lower chip temps.. or is it just a mistake in the headline?
    • by OneSmartFellow ( 716217 ) on Wednesday October 15, 2008 @12:05PM (#25384017)
They are not asking for the chips to be made to produce more heat; they're demanding that Intel guarantee that the chips will still perform even if operated at a higher-than-specified maximum operating environment temperature.

You would be forgiven for thinking it makes more sense for Google to insist that the chips produce less heat, rather than that they will still operate in extreme temperatures, since the majority of the cooling cost comes from dissipating the chip heat from the enclosed space. But hey, it's Google, they do things a bit differently.
      • Thanks both of you for putting me straight there. It makes complete sense now :) Although I still think the headline and summary could have been worded better...
    • Re: (Score:3, Informative)

      by NixieBunny ( 859050 )
      They don't want the chips to get hotter than they already do. They want them to work correctly when they are run hotter. This allows them to use passive cooling in more climates, which saves big bucks on the cooling bill.
• ...using more effective means to extract the waste heat from the processors they already have. Lower thermal resistance equals lower operating temperatures. As many boxes as they have, maybe they should invest in a large-scale refrigerant-based cooling system with tiny heat exchangers for each CPU. I envision overhead refrigerant lines with taps coming down into each cabinet to serve the procs in it. Each server could have quick-disconnect lines on the back for easy removal. No need to cool all that air, and y…
  • I have to say I am a bit surprised. A CPU operating at a higher temperature will draw more power and thus produce more heat at the same performance point. This is one of many temperature dependencies in silicon circuits. Now, it's possible that Google's demand is that they can run at the same speed and power at the higher temperature, which means in reality they are underclocking a faster chip to run it warmer.
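The main temperature dependency behind this observation is static leakage. An old rule of thumb (very approximate and process-dependent) is that leakage roughly doubles every ~10 C; the 15 W baseline below is hypothetical:

    # Leakage power vs. temperature using the rough doubling rule.
    def leakage_w(base_w, delta_t_c, doubling_c=10.0):
        return base_w * 2 ** (delta_t_c / doubling_c)

    base = 15.0  # assumed leakage at the stock temperature, watts
    print(f"+5 C: {leakage_w(base, 5):.1f} W leakage vs {base:.1f} W stock")
    # -> ~21 W, i.e. roughly 40% more static power for a 5 C rise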
  • by invalid_account ( 1258250 ) on Wednesday October 15, 2008 @12:32PM (#25384615)
    Hard disks. In fact, I am typically far more concerned with long-term issues with my data than with the computing itself. Not to mention, the CPU is NOT the only chip that can suffer from heat issues.
  • by bperkins ( 12056 ) on Wednesday October 15, 2008 @12:54PM (#25385069) Homepage Journal

    There are two issues with higher operating temp.

One is that you get less drive current from your transistors, so you get less performance (which everyone seems to understand), but this is usually a fairly small effect for 5 degrees C.

The _big_ deal with 5 degrees C would be electromigration in the interconnect metal, which goes up very quickly with temperature. So the difference in failure rates might be quite large.

If there was any deal at all, it's likely that the Intel engineers tried to remove some conservatism from their temperature estimates to see if they could squeeze 5 degrees out of the thermal budget, or perhaps used information on the workload itself to get Intel to "bless" the higher data center temperature.
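The electromigration claim can be made concrete with Black's equation, MTTF ∝ A·J⁻ⁿ·exp(Ea/kT). Holding current density fixed, the lifetime ratio for a 5 C rise follows from the exponential term alone; the 0.9 eV activation energy below is a typical literature value, not Intel's number:

    import math

    K_EV = 8.617e-5  # Boltzmann constant in eV/K

    def mttf_ratio(t1_c, t2_c, ea_ev=0.9):
        """MTTF(t2) / MTTF(t1) from Black's equation, constant J."""
        t1, t2 = t1_c + 273.15, t2_c + 273.15
        return math.exp((ea_ev / K_EV) * (1 / t2 - 1 / t1))

    print(f"{mttf_ratio(70, 75):.2f}")  # ~0.65: about a third less
                                        # interconnect lifetime from +5 C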

  • by bugs2squash ( 1132591 ) on Wednesday October 15, 2008 @02:06PM (#25386443)
Isn't Google close enough to the ocean to pump cold water up from the depths to help pre- or post-cool the air?
