Google Demands Higher Chip Temps From Intel
JagsLive writes "When purchasing server processors directly from Intel, Google has insisted on a guarantee that the chips can operate at temperatures five degrees centigrade higher than their standard qualification, according to a former Google employee. This allowed the search giant to maintain higher temperatures within its data centers, the ex-employee says, and save millions of dollars each year in cooling costs."
Re:Is this possible? (Score:5, Insightful)
Wouldn't Intel run into physical limitations
Google is most likely getting the best chips out of Intel's standard production. It's akin to sorting out the best bananas at the grocery store. This sort of privilege happens when you buy enough products from a supplier.
If they were demanding much more than 5 degrees then I would say they are getting custom made chips, but I don't think that's the case.
Re:Is this possible? (Score:5, Informative)
What was even weirder was the i487SX. This was an i486DX with an extra pin. If you had an i486SX system and wanted an FPU you bought the i487SX and plugged it in. Then during boot the i486SX was disabled and the i487SX was used for everything.
Re:Is this possible? (Score:5, Funny)
Intel's 486SX line was just a 486 (later renamed the 486DX) where the FPU didn't work.
Mod parent +5.0000000018623, Informative
Re: (Score:3, Informative)
Did you take into account that you'd need to double the resistance of the two resistors to get the right combined resistance?
Rt = 1/(1/R1+1/R2)
Or you could use two half value units in series.
Either way, the price difference between 5% resistors and 10% resistors is generally NOT a factor of 2, so if you need the closer tolerance, just go with a better band. They come in 20%, 10%, 5%, 2%, and 1%.
On the plus side, the resistors on the whole would be able to withstand almost twice as many watts (the lower resistanc
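To put numbers on the tolerance point, a minimal Python sketch (the 200-ohm value and the 10% band are purely illustrative):

def parallel(r1, r2):
    # Two resistors in parallel: Rt = 1/(1/R1 + 1/R2)
    return 1.0 / (1.0 / r1 + 1.0 / r2)

nominal = 200.0  # two 200-ohm parts target 100 ohms combined
tol = 0.10       # 10% tolerance band

best = parallel(nominal * (1 - tol), nominal * (1 - tol))
worst = parallel(nominal * (1 + tol), nominal * (1 + tol))
print(best, worst)  # 90.0 and 110.0 -> still +/-10% of 100 ohms; paralleling does not tighten the band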
Re: (Score:2)
Google is most likely getting the best chips out of Intel's standard production.
I should also note that Intel is guaranteeing these processors will survive at temperatures 5 degrees higher. They may not even sort through the processors they sell to Google. They are just betting that they will survive based on failure statistics they already have. They may have a higher failure rate with a 5 degree increase in temperature, but the cost of the warranty replacements is more than offset by the premium they charge Google.
The key point is that Intel isn't custom making these processors,
Re: (Score:3, Interesting)
"Best" isn't a term you use in test programs. At a certain temperature a chip will run at a certain speed or it won't.
All that's going on here is that Intel will be altering its binning process to separate out the chips that are capable of running at 75 degrees from those that can't.
Not really any different at all from the current process of binning based on speed grade. All that changes is the set of parameters in the test structures that sends a chip into a given bin.
Now what interests me is
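As a minimal sketch of that binning idea (the thresholds, bin names, and measurements are hypothetical, not Intel's):

# Sort parts by the highest temperature at which they passed functional test.
measured_max_temp_c = {"chip_a": 78, "chip_b": 72, "chip_c": 81, "chip_d": 74}

bins = {"75C_rated": [], "70C_rated": [], "reject": []}
for chip, t in measured_max_temp_c.items():
    if t >= 75:
        bins["75C_rated"].append(chip)   # the premium, hotter-running bin
    elif t >= 70:
        bins["70C_rated"].append(chip)   # the standard bin everyone else gets
    else:
        bins["reject"].append(chip)

print(bins)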
Re:Is this possible? (Score:4, Interesting)
I'm surprised Google isn't considering moving some of its data centres to Arctic locations where you get cool temperatures year-round.
There is no reason to be surprised. It is cheaper not to move the data center somewhere colder, and instead to make all upgraded hardware use the new chips. Google's budget calls for hardware upgrades already. Upgrading to CPUs with higher temp tolerances would mean they pay the same $X-thousand for the box they would anyway and simply turn the thermostat up.
A net savings.
Re:Is this possible? (Score:5, Informative)
I'm surprised Google isn't considering moving some of its data centres to Arctic locations where you get cool temperatures year-round.
Don't kid yourself. They probably have. But then, who would you get to work at what would probably be a very remote location?
Additionally, such remote locations may suffer from not enough bandwidth and/or electricity.
Re: (Score:2)
Not to mention high RTTs.
Re: (Score:3, Interesting)
You know, they wouldn't have to go that far. I'm in Colorado, if they put a data center in one of the higher mountain towns I imagine they could significantly reduce their costs.
I guess the other thing they could look at is using a heat exchanger and putting that excess heat toward hot water heating or something.
Re: (Score:2)
Yeah but then they'd have to fight with Stargate Command for space.
Re: (Score:2)
One office block I once visited had an IBM mainframe located in the basement. One of the problems the data center operators had was with homeless people camping out beside the exhaust air vents. These vents generated so much heat that in winter you could still walk around in this "bubble" of warm air in a short-sleeved shirt, while 3-4 metres away you would be frozen to your bones.
Re: (Score:3, Funny)
But then they'd have to find something that smells worse than homeless people, but won't get an air quality citation. Maybe set up shop next to a corn syrup plant?
Re:Is this possible? (Score:4, Interesting)
You are correct, or it may not cook correctly at all at the lower temperature even with more time. For example, if you want to boil rice in Guatemala City, you have to sautee it first.
Damn Slashdot's broken Unicode support. There should be an accent in there.
Re: (Score:3, Insightful)
Yeah, I am as well.
The problem is: the atmospheric lapse rate is roughly 3.5 degrees F per thousand feet, so you go up in altitude and the air cools down, but there's less of it. Take this to an extreme and you're in outer space, where it's *very* cold (if temperature means anything at all when there's no air) but it's *incredibly* difficult to get rid of heat because there's no air.
Ideally you'd be at sea level or below, somewhere that it's just cold.
I recommend Iceland. They have nearly free electricity becau
Re: (Score:2)
They could go to Fairbanks, AK (technically temperate subarctic) - there's enough electricity (not as cheap as the Hood River, OR site they currently use). It's plenty cold, there's a well-educated and underutilized labor pool there, and good fiber optic networks exist between Fairbanks and Seattle/Tokyo.
Re: (Score:2)
No, not possible: you forgot about that ZPM down in Antarctica. How many data centers can ya power with a ZPM? Doctor McKay probably knows... let's ask him.
Re:Is this possible? (Score:5, Interesting)
Two words: "Free Cooling"
The greater the difference between your data centre's output air temperature and whatever passive external cooling system you are pumping it through, the more heat you can dump at zero cost. That's monetary cost as well as the cost to the environment through the energy "wasted" in HVAC systems and the like. Google has a PUE (Power Usage Effectiveness; the ratio of total power input to power actually used for powering production systems) approaching 1.2 vs typical values of around 2.5-3.0 - Microsoft is around 2.75 as I recall - so they are clearly doing something right.
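The arithmetic behind those PUE figures, as a quick sketch (the 10 MW IT load is invented for illustration):

def pue(total_facility_kw, it_equipment_kw):
    # PUE = total facility power / power delivered to IT equipment; 1.0 is ideal
    return total_facility_kw / it_equipment_kw

it_load_kw = 10_000.0              # 10 MW of servers (illustrative)
print(pue(12_000.0, it_load_kw))   # 1.2 -> only 2 MW of overhead
print(pue(27_500.0, it_load_kw))   # 2.75 -> 17.5 MW of overhead for the same servers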
Re:Is this possible? (Score:4, Insightful)
Re: (Score:2)
Humidity: how much is this really an issue? (Genuine question.) I can imagine that when you reach 100% and your equipment is cooler than ambient you get condensation issues. However, here we are talking about equipment that needs cooling, i.e. is well above ambient temperature. Condensation is therefore surely not an issue.
If you said "dust", that I can see as an issue, as it clogs up ventilation openings and can put a nice blanket on equipment, keeping it all nice and hot. Dust however is very eas
Re: (Score:2)
Humidity can kill electronics. I recall some years back a guy down in Louisiana took his new camera out and about for a couple of weeks. It was a really nice camera which he'd just spent $1500 on, and within a few weeks it failed due to corrosion.
So yes, humidity is a very serious thing and if you're not planning for it you can end up with destroyed equipment very quickly.
Re:Is this possible? (Score:5, Funny)
I'm surprised Google isn't considering moving some of its data centres to Arctic locations where you get cool temperatures year-round.
Oh yeah! And completely melt the ice cap.
Re:Is this possible? (Score:4, Insightful)
If you had, say, a town-sized data centre installation, it would probably have about the same effect as a smallish and partially active volcano, of which there are many in northern latitudes. Pretty much nothing apart from local effects, which is, in spite of the green crazies' rantings, not too bad, not compared to the alternatives.
What you wouldn't have is as much need for additional power to cool, which of course saves the pollution caused by its generation. You should bear in mind that the colder parts of the Earth are being far more seriously affected by pollutants in the atmosphere than by anything which is just warmer than its surroundings.
As for why I said green crazies: well, if they hadn't been so all-fired determined to put governments off nuclear power, we'd have that instead of all these coal-burning plants. Now we have massive pollution problems and a truly gigantic cost for building all those nuclear plants in a short time, instead of gradually over the last three or four decades.
Re:Is this possible? (Score:4, Funny)
Right. Those polar bears, if given the chance, would rip you to shreds, eat your body and then climb into the oh-so-warm data center to hunker down for hibernation.
My brother was a sysadmin in Alaska and a polar bear did exactly that, you insensitive clod.
Re:Is this possible? (Score:5, Insightful)
Wouldn't Intel run into physical limitations that simply don't allow chips to run at that low a temperature? I'm surprised Google isn't considering moving some of its data centres to Arctic locations where you get cool temperatures year-round.
Are you serious? Neither the Arctic nor the Antarctic is well known for reliable power or fast Internet connections.
Re: (Score:3, Interesting)
Iceland. It has cheap geothermal energy. And that's energy that's going to heat the Earth anyway, similar to solar. They just need some big pipes between there and North America and Europe.
Re: (Score:3, Interesting)
Wouldn't Intel run into physical limitations that simply don't allow chips to run at that low a temperature? I'm surprised Google isn't considering moving some of its data centres to Arctic locations where you get cool temperatures year-round. We've seen reports of appealing places like that on Slashdot before. (Of course, that would just be a short-term fix before we move the Earth to a farther orbit around the sun to avoid suffocating in our own waste heat like the Puppeteers in Niven's Ringworld [amazon.com] ).
I doubt anything physical is being done. Intel is very conservative in setting maximum operating temperatures. They're probably just promising Google that they'll cover chips operated 5 C hotter under their warranty. If anything is actually being done to the hardware, it's probably just altering the temp at which throttling occurs.
Re: (Score:3, Interesting)
It's going to be far cheaper to build radiator fins extending into space than to move the earth's orbit, barring some innovative invention in the orbit moving department. Also, orbit moving has the downside of reducing the solar flux, which will be bad for our solar energy efforts.
Re:No one mentions a more obvious approach. (Score:4, Insightful)
That means you'd need to make up for the lack of processing power with additional CPUs, which would mean more CPUs to cool.
Re: (Score:2)
Maybe it's time to think more about the performance/power-consumption ratio when designing servers.
More CPUs may actually not be that bad, because you can spread the dissipated power over a larger area. However, you will also have larger computers.
One way around it could be to locate datacenters at locations with natural cooling available, like rivers and larger lakes.
Today many cooling systems are air-cooled, but air is often a lot warmer than water and not able to absorb as much heat.
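A back-of-the-envelope comparison with textbook property values shows why water wins (the heat load is illustrative):

heat_load_w = 100_000.0   # 100 kW rack row (illustrative)
delta_t_k = 10.0          # allowed coolant temperature rise

# volumetric heat capacity = density (kg/m^3) * specific heat (J/(kg*K))
air = 1.2 * 1005.0        # ~1.2 kJ per m^3 per K
water = 1000.0 * 4186.0   # ~4186 kJ per m^3 per K

print(heat_load_w / (air * delta_t_k))    # ~8.3 m^3/s of air needed
print(heat_load_w / (water * delta_t_k))  # ~0.0024 m^3/s of water for the same heat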
Re: (Score:3, Funny)
Slashdot Post Checklist
Bad Car Analogy - Check
Spelling Errors - Check
A solution that won't work in real world - check
Reply to a comment on top 10% of page so people read yours - check
Re: (Score:3, Informative)
Water is an excellent heat sink, but any company would likely run into serious environmental backlash if they wanted to use a lake or river as their heat sink.
Oh, I dunno, Bethlehem Steel in Indiana (just off of Lake Michigan) did that for years. It was nice going through the outflow in a Hobie Cat... drag your feet through the warm water and enjoy. Good fishing there too.
Environmental impact (Score:2, Interesting)
Uhhhh. Wouldn't making chips a bit more efficient be better, as opposed to making them "less likely to burn out at higher temps"?
Seems that Google's not really thinking green in this case (despite the pretense of doing so in others), unless they plan on making use of the datacenter heat elsewhere.
Re:Environmental impact (Score:5, Insightful)
Not really.
No matter how cool the chips run, they will put out heat. If you have two chips that run at X and use Y watts, you will save power if you can run them a little hotter and use less power for cooling.
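A rough sketch of that tradeoff (the chiller COP values are assumed for illustration, not measured):

# A cooling plant with coefficient of performance (COP) ~3 spends 1 W of
# electricity to move ~3 W of heat; COP improves as the setpoint rises.
chip_heat_w = 100.0
cop_cool, cop_warm = 3.0, 3.6   # assumed COP at the cooler vs warmer setpoint

saved = chip_heat_w / cop_cool - chip_heat_w / cop_warm
print(saved)  # ~5.6 W of cooling electricity saved per chip, same compute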
Re:Environmental impact (Score:4, Interesting)
Pragmatically, if they can't run cool, then it's more efficient to run them hot than to spend more energy actively cooling them.
Re:Environmental impact (Score:5, Funny)
And ideally, I'd come home to find Alyson Hannigan oiled up and duct taped to my bed
You know you're pathetic when they're even unwilling in your fantasies.
Re:Environmental impact (Score:5, Insightful)
And ideally, I'd come home to find Alyson Hannigan oiled up and duct taped to my bed
You know you're pathetic when they're even unwilling in your fantasies.
You're assuming everyone prefers them willing.
Re: (Score:2)
Re:Environmental impact (Score:5, Insightful)
I'm not sure what you're getting at - if by doing this they are saving millions in cooling expense, they are certainly using less energy. What is "going green" if it isn't energy conservation? The fact that the conservation comes from less work for their AC units rather than efficient little processors is immaterial.
Don't expect any company to do things because it's right - but good companies will find win-win situations where they cut costs and do things to "Go Green".
But they pass it off to someone else (Score:5, Insightful)
Yes, but the way I see this is:
Intel isn't arbitrarily going, "man, we could make chips that run ok 5 degrees hotter, but we're gonna piss everyone off by demanding more cooling. Just because we can." Most likely Intel is already doing the best it can, and getting a bunch of chips which vary in how good they are. And they're getting the same bunch of chips regardless of whether Google demands higher temps or not.
Google just gets a cherry-picked bunch, but the average over Intel's production is still the same. Hence everyone else is getting a worse selection. They get what remains after Google took the best.
It's a zero-sum game. The total load on the planet is the same. The same total bunch of chips exits Intel's fabs. On the total, no energy was conserved.
So Google's "going green" is at the cost of making everyone else less "green". They can willy-wave about how energy efficient they are, by simply dumping the difference on someone else.
That's not "going green", that's a predatory approach. Your computers could require on the average an extra 0.5W in cooling, so Google can willy-wave that theirs uses 1W less. They just dumped their costs and their "eco burden" to someone else.
It's akin to me willy-waving that I'm so green and produce less garbage than you... by dumping some of my garbage in random other people's garbage bins across the town. Yay, I'm so green now, you all can start worshipping me. Well, no: on the whole the same amount of garbage is being produced, I just dumped it and the related costs on other people. That's not going green, that's being a predator.
I can see why a business might want to cut their own costs, and not care about yours. That's, after all, the whole "invisible hand" theory. But let me repeat it: on the whole no energy was conserved. They just passed off some of their cooling costs (energy _and_ money) to someone else.
Re:But they pass it off to someone else (Score:4, Interesting)
Google's chips will be running full throttle/full temp 24/7
Is there any documentation for this?
I seriously find it hard to believe that Google has every processor they own running 24/7 at 100% utilization. Other than compute-bound problems like SETI and protein folding, most problems are I/O bound, and I would think that the stuff Google does would involve a lot of I/O.
Re: (Score:3, Insightful)
You miss the point entirely. Google wants Intel's processors to operate properly at a higher temperature than currently spec'd, which will allow Google to use less cooling. They want the processors to tolerate more heat, not generate more heat. This means Intel will have to make the processors slightly more beefy, but how much does that really cost across millions of units once the design work is done? A few bucks per processor, tops.
Google pays dozens of dollars a MONTH to cool each processor. Intel making
Re:But they pass it off to someone else - WRONG (Score:5, Insightful)
>> So Google's "going green" is at the cost of making everyone else less "green". They can willy-wave about how energy efficient they are, by simply dumping the difference on someone else.
The difference is that Google is going to actively exploit the ability of those hand-picked CPUs to run hotter. Chances are that the users who would otherwise have received those chips would not reap any energy savings from that capability.
At a minimum, Google is contributing here by forcing a vendor to differentiate chips that have a capability of running hotter from ones that don't. No matter who uses that capability, it's a benefit to the planet (versus the alternatives at least).
MadCow.
Re:Environmental impact (Score:4, Insightful)
The amount of energy you need to use to cool that stuff is quite significant. And, in case you haven't realized it, generating cool air also creates more warm air; it's just not in the data center. It's usually vented right outside.
If the chips could run hotter, they'd have to use less energy to cool the data center, and generate less waste heat from that very cooling in the first place.
I'm not convinced that what they're asking for isn't a good idea.
Cheers
Re: (Score:3, Interesting)
Yes, but which is easier: making the chips more efficient, or allowing them to run a little hotter without melting?
I honestly don't know. My first thought is that efficiency is harder than durability, but that's pulled completely out of my backside.
I still think they're right in asserting that if they could handle a little more heat, then data centers would spend less energy trying to cool them to their operating rang
Re: (Score:3, Interesting)
I expect that increasing efficiency is considerably more R&D-intensive than just increasing tolerances (see: Pentium to Core architecture transition), but the latter may make a decent short-term solution until the former can be implemented.
Of course, it's not just the processors that would need higher tolerances. Hard drives, while not generating nearly as much heat (or consuming as much energy), tend to be fairly picky, and as mechanical parts their tolerances are probably much harder to improve; that could quic
Re: (Score:2)
Re: (Score:2)
It's all about the temperature difference. The hotter something is, the more heat it will radiate and the faster it will cool. The bars on an electric heater get hot quickly and then stay the same temperature, because all of the energy being put into them is being radiated away. A CPU is like this, only at a certain temperature it will fail. If you can bump up the temperature at which it fails a little bit, then you can run it hotter, which means you need less cooling on the chip to stop it failing. Mo
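In steady state that balance looks roughly like this (the power and thermal-resistance figures are invented for illustration):

# Newton's-law-of-cooling approximation: T_chip = T_ambient + P * R_theta,
# where R_theta is the thermal resistance of the heatsink path in K/W.
power_w = 90.0
r_theta_k_per_w = 0.4

def steady_temp_c(t_ambient_c):
    return t_ambient_c + power_w * r_theta_k_per_w

print(steady_temp_c(25.0))  # 61 C in a 25 C room
print(steady_temp_c(30.0))  # 66 C -- a 5 C warmer room means a 5 C hotter chip,
                            # so a 5 C higher rated limit buys back the margin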
Re: (Score:2)
Well, cooling them down takes energy and creates heat, so depending on how much you can save in that area vs. the extra expended with the chips, it might end up being more efficient overall.
Re: (Score:2)
Uhhhh. Wouldn't making chips a bit more efficient be better, as opposed to making them "less likely to burn out at higher temps"?
Seems that Google's not really thinking green in this case (despite the pretense of doing so in others), unless they plan on making use of the datacenter heat elsewhere.
Yes, assuming those were the tradeoffs, which they weren't.
Google is trading existing performance levels against reduced cooling, which is a pure win, just by demanding more resilient chips from Intel.
Re: (Score:2, Interesting)
It doesn't sound like Google is asking for less efficient chips; that's a retarded notion. Instead it sounds like Google is asking for more durable chips.
One that still operates at X% efficiency but operates at a higher ambient temperature, say 70 F instead of 65 F. I'm sure Google would like it better if the chips didn't produce any heat (i.e. 100% efficient), but that's impossible.
Still, if you want to "Go Green" and be environmentally friendly, stop viewing heat as a problem. It's better to try and reclaim
Underclocking if you're poor? (Score:4, Interesting)
If you don't have the clout of a Google-sized organization to buy higher-rated chips from Intel, I wonder if you can basically achieve the same thing by underclocking. An underclocked chip will run cooler, but I don't know if it'll run more stably at higher temps, although I think it would.
Does anyone have any experience with doing this?
I think it'd be interesting to see whether the cost savings in power and cooling are offset by the cost of the performance losses.
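The usual first-order model of why underclocking runs cooler, as a sketch (the undervolt that a given underclock permits is assumed, not from any datasheet):

# Dynamic CPU power scales roughly as P ~ C * V^2 * f, so dropping the
# clock -- and the core voltage that drop permits -- cuts power faster than speed.
def relative_power(rel_freq, rel_voltage):
    return rel_freq * rel_voltage ** 2

# 20% underclock, assuming it permits a 10% undervolt:
print(relative_power(0.8, 0.9))  # ~0.65 -> roughly 35% less heat for 20% less speed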
Re:Underclocking if you're poor? (Score:4, Informative)
In the past, I've underclocked primarily for noise benefits. Lower clock -> lower temp -> slower fan RPM -> lower dB SPL.
Re: (Score:2)
Several years ago I underclocked my home system (which was initially quite unreliable) and saw a substantial decrease in uncommanded reboots. The issue went away almost entirely (even at full speed) when I upgraded the power supply; I suspect that I was running near the edge of what it could handle, and underclocking (by reducing the CPU's power draw) moved its usage far away enough from that boundary to make a difference.
Re: (Score:2)
As an overclocker, your statement tends to be true in practice. The cooler a chip is kept, the closer you can get to its maximum overclocking frequency - the frequency beyond which the chip exhibits instability. Similarly, the lower the frequency is set, the warmer a temperature the chip will generally handle with stable operation. These are general trends; processors from different fabs or batches perform differently - but within the same batch of processors, you can reliably test and observe these results.
Temp specs (Score:2)
Not mentioned in the story. What CPU are they talking about, and what is the upper end Google is looking for?
(and this having to wait five minutes between posts is moronic. Look at my posting history, and all of them from the same IP address. Tell me why I have to wait this long to post.)
Waste Heat reclamation (Score:5, Funny)
When in college, I heated my crappy little shack by putting 150W bulbs in every light. It was like my own little Easy-Bake oven.
Re: (Score:2, Troll)
I gather you were not paying the electricity bill...
Re: (Score:3, Informative)
The place was all electric. An incandescent bulb compares favorably to many space heaters in terms of electricity-to-heat efficiency, and you get light as a bonus.
Intel says they don't (Score:2)
Not too surprising (Score:2)
When you are a big company that spends enough money, you can ask for this sort of thing and your demand will be met.
"Guarantee us a higher temp CPU or we will switch to AMD...and tell everyone about it."
I have a feeling that the CPUs can handle a bit more temp than they are rated for, as a CYA move by Intel, anywho.
Re:Not too surprising (Score:5, Insightful)
"Guarantee us a higher temp CPU or we will switch to AMD...and tell everyone about it."
That's not really how the negotiation goes in this type of situation where there are two major supplier choices (AMD and Intel) and Google is a relatively small customer when compared with Dell, HP, IBM, etc.
In all likelihood, the negotiation is more of a partnership where both parties work together to create value. Google says, "We buy thousands upon thousands of your chips, but we also pay millions of dollars annually to cool them. We'd be willing to pay a little premium and lock in more volume if you can help us save money by increasing the temperature inside our data centers." Google has done the math and comes prepared with knowledge of how much those specs can save them and a forecast of chip needs for the next 12-18 months, and the two work together to create value. For example, Google might offer to allow endorsements (as they did with Dell for their Google appliances) in exchange for favorable pricing and specifications.
The "do this or I'll switch" tactic only works well when there are many suppliers and products are relatively undifferentiated, like SDRAM or fan motors.
Re: (Score:3, Interesting)
I suspect that, with negotiation to set the correct balance of pricing, warranty, access to handp
Re: (Score:2)
In reality that IS true.
The Very Large bank where I was employed earlier had a special agreement with Microsoft.
They got a customized version of XP meant specially for the bank's hardened network. Yeah, I know the Admin kit allows customization, but I mean at a lower level: the NSA hookups in the system DLLs were not present!
As soon as a Dell entered the bank, it was wiped, and this OS was installed. It was a weird mix of a little Linux bootup code and XP.
You had all rights of an admin EXCEPT when it comes
WTF? Lawyers as engineers, not so much (Score:4, Informative)
This sounds like a scenario where lawyers are trying to act as engineers. That works about as well as you might expect.
There are these engineering things, amusingly called "shmoo plots", that map out a chip's operating envelope of voltage versus speed versus temperature. From those an engineer can foresee how hot you can run a chip before its rise- and fall-time margins start to get marginal.
There is very little Intel can do to stretch things by another 5 degrees. It's not something that can be imposed by fiat. Intel engineers have already juggled all the variables to come up with the best performance possible. SOMETHING is going to have to give. Either the chips will have to be selected and graded for speed, lowering the overall envelope for the chips everyone else gets, or they'll have to fudge some other parameters, hoping nobody will notice, or worse yet they'll tweak some variable right to the edge of raggedness, resulting in worse reliability down the road.
Lawyers and accountants generally don't know you can't have everything. Let's hope the engineers educate them.
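For anyone who hasn't seen one, a shmoo plot is just a pass/fail grid over operating conditions. A toy generator (the pass criterion below is entirely made up, purely to show the shape):

voltages = [1.0, 1.1, 1.2, 1.3]      # volts
frequencies = [2.0, 2.4, 2.8, 3.2]   # GHz

def passes(v_core, freq, temp_c=75.0):
    # invented linear model: voltage buys headroom, heat takes it away
    return freq <= 1.0 + 2.0 * v_core - 0.01 * temp_c

for v in voltages:
    row = "".join("#" if passes(v, f) else "." for f in frequencies)
    print(f"{v:.1f}V {row}")   # '#' = passes at that voltage/frequency point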
Re:WTF? Lawyers as engineers, not so much (Score:5, Insightful)
Odds are this is being driven by a data-center engineering team, who are looking at the cost savings of running their data center 5 degrees hotter.
You don't get what you don't ask for.
Intel will do exactly as much engineering as necessary to keep their target market up, and no more.
If the market wants chips that operate 5 degrees hotter... the engineers will do their job and see if it can be done. Intel will charge a premium for this.
That's business.
Re: (Score:2)
Right on.
Of course, Intel will give them whatever they want because Google is such a large customer. And will then pay in terms of higher failure rates, hence warranty costs. And Google will notice the same thing, assuming they do decent data gathering on failures, and find out that this is a really bad idea because those failures cost them even more than they cost Intel.
Seems to me like bean counters are trying to beat physics.
Re: (Score:2)
There is very little Intel can do to stretch thing by another 5 degrees. It's not something that can be imposed by fiat. Intel engineers have already juggled all the variables to come up with the best performance possible. SOMETHING is going to have to give. Either the chips will have to be selected and graded for speed, lowering the overall envelope for the chips everyone else gets, or they'll have to fudge some other parameters, hoping nobody will notice, or worse yet they'll tweak some variable right to
Re: (Score:2)
Dell did (does?) the same thing by having higher temperature specs for their servers than the rest of the industry. Of course customers will see higher failure rates if they actually use the larger margin.
It's teh physics, stupid!
Google Runs Its Data Centers at 80 Degrees (Score:2)
Re: (Score:3, Insightful)
You can run a data center cheaper at a cooler temperature simply by having better insulation.
That assumes that the outdoor temperature is higher than the indoor temperature. My bet is that a data center run at 80 F in the Pacific Northwest would be warmer inside for most of the year than outside. Insulation under those conditions could actually increase cooling costs.
Are they saving MILLIONS? (Score:5, Interesting)
Most of the power supply systems for my servers, which are HP G3-5 systems of various U sizes, tend to waste more power as temperature goes up.
This has nothing to do with CPUs though. It is the power supplies on the machines. As temperature goes up, efficiency goes down. At around 80 degrees I noticed a significantly larger draw on the power supply with my amp meter.
I had a gaming system with two ATI 4870s, and the 800 W power supply would crash my machine if I did not run the air conditioner and keep the room at 70 degrees after some fairly long Supreme Commander runs.
I noticed that the amperage would go up, and the power output would go down, as the temperature went up.
I have not conducted any experiments in a lab setting with this stuff, but from experience, jacking the temperature up usually makes power supplies work harder and makes them less efficient.
-gc
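A rough way to quantify that effect (the efficiency figures are assumed for illustration):

dc_load_w = 400.0   # what the machine actually draws from the PSU

def wall_draw_w(efficiency):
    return dc_load_w / efficiency

cool_eff, hot_eff = 0.88, 0.82   # assumed PSU efficiency at 70 F vs 80 F intake
print(wall_draw_w(hot_eff) - wall_draw_w(cool_eff))
# ~33 W extra drawn from the wall -- and dissipated as yet more heat -- per box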
Re: (Score:3, Informative)
Traditional 120 V-to-multiple-voltage DC power supplies certainly suffer from reduced efficiency at higher temperatures. Running two 4870s on a single 800 W power supply probably isn't a good idea, especially if you have a high-powered CPU to go with them. Most quality power supplies will be rated lower than their maximum output to allow for temperature concerns.
These things said, Google uses custom power supplies and systems that only run on 12 V. These power supplies may be easier to produce in quality batches, bu
Re: (Score:2)
Do you think maybe Google is using DC power? That way they have just a few large power supplies in another room and send DC power over large copper bus bars to the racks. These DC systems are expensive, but you make the money back in power/cooling, and you save money with the UPS too.
Re: (Score:2)
Somehow I doubt datacentres like the ones Google operates use switching power supplies located next to the hardware they power, like in your home computer. I for one would consider building a single power supply pushing a lot of amps through some fat cables that branch off to wherever power is needed.
But then I've never seen a datacentre from the inside, so I may be totally wrong.
higher chip temps??? (Score:2)
Surely they should be demanding lower chip temps... or is it just a mistake in the headline?
Re:higher chip temps??? (Score:5, Insightful)
You would be forgiven for thinking it makes more sense for Google to insist that the chips produce less heat, rather than that they still operate in extreme temperatures, since the majority of the cooling costs come from dissipating the chip heat from the enclosed space. But hey, it's Google, they do things a bit different.
They should also consider... (Score:2)
Higher temp = higher power (Score:2)
But heat also affects... (Score:3, Interesting)
Higher operating temp (Score:4, Informative)
There are two issues with higher operating temp.
One is that you get less drive current from your transistors, so you get less performance (which everyone seems to understand), but this is usually a fairly small effect for 5 degrees C.
The _big_ deal with 5 degrees C would be electromigration in the interconnect metal, which goes up very quickly with temperature. So the difference in failure rates might be quite large.
If there was any deal at all, it's likely that the Intel engineers tried to remove some conservatism from their temperature estimates to see if they could squeeze out 5 degrees from the thermal budget, or perhaps Google supplied information on the workload itself to get Intel to "bless" the higher data center temperature.
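Black's equation gives a feel for how steep that temperature dependence is. A sketch with a typical textbook activation energy (not Intel's numbers):

import math

# Black's equation: MTTF ~ A * J**-n * exp(Ea / (k * T)).
# Holding current density J fixed, the lifetime ratio for a 5 K rise is:
k_ev = 8.617e-5   # Boltzmann constant, eV/K
ea_ev = 0.9       # assumed electromigration activation energy, eV

def mttf_ratio(t1_c, t2_c):
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    return math.exp((ea_ev / k_ev) * (1.0 / t2 - 1.0 / t1))

print(mttf_ratio(70.0, 75.0))  # ~0.65 -> roughly a third of the lifetime gone from +5 C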
davey jones' AC (Score:3, Funny)
Re:good. (Score:4, Insightful)
What? American businesses like saving money almost as much as they like making it. It's just that environmentalism is not as big a motivator as profit, at least in the US. Make being efficient profitable long term, and some businesses will do it. Make it profitable short term and businesses will fall over one another to do it.