Google Calls For Power Supply Design Changes

Raindance writes "The New York Times reports that Google is calling 'for a shift from multivoltage power supplies to a single 12-volt standard. Although voltage conversion would still take place on the PC motherboard, the simpler design of the new power supply would make it easier to achieve higher overall efficiencies ... The Google white paper argues that the opportunity for power savings is immense — by deploying the new power supplies in 100 million desktop PC's running eight hours a day, it will be possible to save 40 billion kilowatt-hours over three years, or more than $5 billion at California's energy rates.' This may have something to do with the electricity bill for Google's estimated 450,000 servers."
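A quick sanity check of the quoted numbers (a minimal sketch; the 365-days-per-year duty cycle and the implied electricity rate are back-calculated assumptions, not figures from the white paper):

```python
# Back out the per-PC savings and electricity rate implied by the figures
# quoted above: 100 million PCs, 8 h/day, 3 years, 40e9 kWh, $5e9.
pcs = 100e6
hours = 8 * 365 * 3                  # assumed usage: 8 h/day, every day
energy_kwh = 40e9                    # claimed total savings
dollars = 5e9                        # claimed dollar savings

watts_per_pc = energy_kwh / (pcs * hours) * 1000
rate = dollars / energy_kwh
print(f"~{watts_per_pc:.0f} W saved per PC at ~${rate:.3f}/kWh")  # ~46 W, $0.125
```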
  • In the old days, disk drive motors and fans. But many of these now run on 5V, hence the cheap USB-powered drive cases out there. Chips at CMOS power levels run at 3.3V, TTL is 5V, but hardly anything runs at 12V anymore. It seems to me that if they'd just pick their hardware carefully, they could run their entire server rack off of ±5V rails.
    • by Anonymous Coward on Tuesday September 26, 2006 @05:03PM (#16206781)
      Video cards use a ton of 12v power, enough that high-end cards get a dedicated connector featuring two wires of it.
      • by ArcherB ( 796902 ) on Tuesday September 26, 2006 @05:14PM (#16206979) Journal
        This is true, but Google is not throwing 7950's in their servers. These systems run with on-board video at best. Google has no need for a video card that can do anything more than text, as with all non-Windows-based servers. For that matter, after the first boot, there is no need for a video card at all.

        • by Jahz ( 831343 ) on Tuesday September 26, 2006 @05:29PM (#16207221) Homepage Journal
          Actually I would bet that Google servers DON'T have a video card, and that all of them have RJ-45 SOL support (or something like it). The reason being that Google has admitted that they fully embrace the commodity distributed server system. Google will periodically host talks at my university where they explain all this in [too much] detail.

          Basically, when a machine fails, it is pulled from the rack and replaced with an identical machine with a cookie cutter image. Kinda like the Borg :)

          When a box fails it is probably instantly detected by some machine monitor and taken offline (think: the 'crop' tenders in The Matrix). The sysadmins aren't going to waste time plugging a video cable into the rack... just pull it. Toss the box into a repair queue and let the techs put a video card into it if needed. Remember: hundreds of machines fail for them every day. That's a fact from the Google talk in '05.
          • Re: (Score:3, Insightful)

            by drsmithy ( 35869 )
            Actually I would bet that Google servers DON'T have a video card, and that all of them have RJ-45 SOL support (or something like it). The reason being that Google has admitted that they fully embrace the commodity distributed server system. Google will periodically host talks at my university where they explain all this in [too much] detail.

            Who sells commodity servers without motherboard-integrated video cards?

            • Re: (Score:3, Insightful)

              by dcam ( 615646 )
              Someone who wants to sell to Google? They must get through a *lot* of machines; this would give them some buying power.
        • by poot_rootbeer ( 188613 ) on Tuesday September 26, 2006 @05:30PM (#16207231)
          Google is not throwing 7950's in their servers. These systems run with on-board video at best. Google has no need for a video card that can do anything more than text, as with all non-Windows-based servers. For that matter, after the first boot, there is no need for a video card at all.

          Seems to me Google doesn't want to fracture the commodity hardware market into server-class hardware using 5VDC power and desktop-class hardware using 12VDC. One standard, applied equally across the entire range of products.

          • Re: (Score:3, Insightful)

            Mod Parent up please. That would definitely be the non-dick move, which is what we'd all like to expect from Google.
          • Re: (Score:3, Interesting)

            by kfg ( 145172 ) *
            One standard, applied equally across the entire range of products.

            The one used by the majority of DC electric devices, not just computers. The one compatible with existing external power supplies such as solar, home gas-powered generators, your car battery, etc.

            If motherboards were designed to run on 12V DC you could put a socket on the back of the case and jack into anything that gave you 12V DC. You could take your home desktop straight to the RV, boat, or cabin in the woods running off a turbine in the l
        • You'd think that with all the parallel operations modern GPUs are capable of, Google would find a way to use them in their database (or at least use them for something)...

      • Thank you - you're the first person to teach me something NEW. I didn't know GPUs still used 12V technology, especially since they all seem to output TTL-level voltages.
      • Re: (Score:3, Informative)

        by Bassman59 ( 519820 )

        Video cards use a ton of 12v power, enough that high-end cards get a dedicated connector featuring two wires of it.

        Video cards with a disk-drive-type power connector always use point-of-load switch-mode supplies to convert the +12V to whatever voltages are needed by the chips on the board. Nothing on the board uses the 12V directly, except maybe the fan (if that).

        They use the disk-drive connector because:

        • The GPUs and other devices on the board use more current than you can push through the PCI/AGP or P
    • by Anonymous Coward on Tuesday September 26, 2006 @05:10PM (#16206903)
      Almost nothing, but that is irrelevant. A modern PC uses so much power that it would be plain stupid to try and deliver it at lower voltages. The power is stepped down close to the actual load, because otherwise you'd need much heavier wires or lose much power to heated cables. That kinda is the point of the proposal: Every PC already has the necessary regulators because there's simply no other sane way to deliver the kind of power that graphics cards and CPUs consume. So what's the point in keeping the power supply complicated when the main consumers in a PC use the 12V line anyway?
    • by Lonewolf666 ( 259450 ) on Tuesday September 26, 2006 @05:14PM (#16206981)
      Modern CPUs run on core voltages of 1.5V or less, depending on model. DDR RAM is 2.5V IIRC.
      So you will have to convert most of your power from 5V to something else. And if you have to re-convert anyway, 5V as intermediate voltage is not optimal. When converting to 5V, the voltage drop in the power diodes and in the wires to the mainboard eats a much higher proportion of the power than with 12V as the intermediate voltage.
      24V or even 48V would be even better. The auto industry is currently starting to introduce 48V systems, BTW.
      • by Hirsto ( 601188 ) on Tuesday September 26, 2006 @07:46PM (#16208835)
        12V happens to be a sweet spot in terms of cost of the converter components as well as overall efficiency. Wire gauge, voltage drop and capacitor size are significantly smaller than with a 5V or 3.3V primary supply. Think in terms of millions of units per month and compare the price of an NMOS FET and storage capacitor rated for 35V (safety margin in a 24V design) versus the cost of similar FETs and capacitors rated for 20V. In a synchronous buck design you can easily save $0.75 per converter section by using 12V rather than 24V and significantly increase conversion efficiency for free. Assuming a constant switch frequency, the switching losses increase with the square of applied voltage; "I squared R" conduction losses in conductors will decrease with the square of current, but the voltage-dependent switching losses will dominate once the input voltage gets too high. For a given cost the overall converter efficiency is usually highest if your input voltage is relatively close to the output voltage. 12V to 3.3V conversion is significantly more efficient and less costly than 24V to 3.3V conversion.
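        A toy model of the tradeoff described above (every part value here is an illustrative assumption, not data from a real converter):

```python
# Rough loss model for one synchronous buck section at 3.3 V / 20 A out,
# comparing 12 V and 24 V inputs. All values below are assumed.
V_OUT, I_OUT = 3.3, 20.0
F_SW = 300e3      # switching frequency, Hz (held constant, as above)
RDS_ON = 5e-3     # FET on-resistance, ohms (same part in both cases)
C_SW = 2e-9       # effective switch-node capacitance, F (assumed)

for v_in in (12.0, 24.0):
    # With equal high- and low-side Rds_on, conduction loss is roughly
    # duty-cycle independent: one FET is always carrying I_OUT.
    p_cond = I_OUT**2 * RDS_ON
    p_sw = C_SW * v_in**2 * F_SW   # C*V^2*f term: grows with voltage squared
    p_out = V_OUT * I_OUT
    eff = p_out / (p_out + p_cond + p_sw)
    print(f"{v_in:4.0f} V in: conduction {p_cond:.2f} W, "
          f"switching {p_sw:.2f} W, efficiency {eff:.2%}")
```

        The numbers are made up, but the shape matches the argument: the conduction term is fixed by the load, while the switching term quadruples when the input voltage doubles.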
        • by NoMaster ( 142776 ) on Wednesday September 27, 2006 @01:04AM (#16211101) Homepage Journal
          Actually, when I learned it, I was taught ~32V was the sweet spot between practicality, I^2R losses, component size, and load type. Mind you, back then it was electrical / electromechanical loads (lights, motors, contactors, relays, etc), not electronic. And that was from a telco background (-48V), which made me wonder "well, why tell us that about 32V?" ;-)

          But the point is, what the sweet spot is depends mostly upon the characteristics of the load - so it's wrong to come out with blanket statements like "12V happens to be a sweet spot in terms of cost of the converter components as well as overall efficiency". Yes, today, particularly with switchmode supplies and the actual maximum load voltage being 5V or less, it is. Tomorrow, when everything runs on 3.3V or less, it'll be closer to 5V~6V.

          The other half of your argument only holds for certain types of power supplies too - but I'll give you a pass on that, seeing as you did explicitly state "synchronous buck" designs. It doesn't necessarily hold true, however, for other classes like linear, boost, buck-boost, etc. Your final assertion, however - that, for a given cost, the overall converter efficiency is usually highest if your input voltage is relatively close to the output voltage - is spot-on. Too far away from that, and the ol' V=I*R rule starts to bite you...

      • if you have to re-convert anyway, 5V as intermediate voltage is not optimal. When converting to 5V, the voltage drop in the power diodes and in the wires to the mainboard eats a much higher proportion of the power than with 12V as intermediate voltage. 24V or even 48V would be even better.

        Telephony has been running on redundant -48V DC supplies to the racks (typically from rooms full of floating storage batteries) since the early relay days. Much modern networking equipment also conforms to this standard.
    • If you're cabling up a rack then you should not be delivering low voltages directly to the boards, for two main reasons:

      1) Losses are I^2R. This means that you lose more power if you transfer power at low voltages through the same wires, connectors, etc. You need switchmode power supplies anyway, so you may as well switch down from a higher voltage.

      2) 5V means more current than 12V, meaning thicker wires, higher-current connectors, etc., and less headroom in the system for voltage loss.

    • by zootjeff ( 531575 ) on Tuesday September 26, 2006 @05:23PM (#16207129)
      If you look at just routing 12 volts everywhere, you would have to put the regulators in the hard drives and CD-ROMs so they don't need 3.3 and 5 volts. Then what do you do about +5V Standby, which allows you to hibernate? Do you still need a standby voltage? It isn't an easy answer, and it will take the whole industry to adopt it. Check out formfactors.org for the ATX and BTX specifications that Intel is pushing.

      What's also interesting is that the 600 and 700 Watt (etc.) power supplies keep their 3.3 and 5 volt rails at around 30 amps max, but keep adding +12V1, +12V2, +12V3, etc. Looks like the industry is already going mostly to 12 volts for distribution anyway. But don't you still need PS_ON, PowerOK, etc.? If you're just trying to phase out the +5 and +3.3 and -12, which hardly any motherboards use these days, and maybe the +5V Standby, then it's going to happen eventually anyway. Most of the power is going over the 12 volt lines already, so having inefficient +3.3 and +5 isn't really a big deal.

      I've studied this for a while, as my big hobby is computers in cars. I built a power supply called DSX12V that takes an 8-16 volt input and makes a solid 12V output, on which I got over 97% efficiency. This is good for people sticking computers in cars or running them off banks of batteries for solar power applications, etc.
    • by seanadams.com ( 463190 ) on Tuesday September 26, 2006 @05:35PM (#16207297) Homepage
      In the old days, disk drive motors and fans. But many of these now run on 5V, hence the cheap USB-powered drive cases out there. Chips at CMOS power levels run at 3.3V, TTL is 5V, but hardly anything runs at 12V anymore. It seems to me that if they'd just pick their hardware carefully, they could run their entire server rack off of ±5V rails.

      You are correct that hard drives generally use just 5V, but the rest of your points are not even close. Modern CPUs require lower voltages, higher current, and tighter regulation, which is why DC-DC power supplies are now on motherboards instead of running directly from an ATX supply.

      Furthermore, running a rack of servers on 5V rails would be absolutely absurd. Do you have any idea what the amperage would be? The bus bars would have to be several inches thick, the transmission loss would be enormous, and if you accidentally shorted them.... forget it!

      Something like 48VDC might work but then you lose out on all the economies of scale driven by the 110/240VAC standard.

      Just match the power supply to the motherboard and be done with it. Standardizing on one voltage is impractical, and besides, how would it improve "efficiency"?
      • Re: (Score:3, Interesting)

        by Alchemar ( 720449 )
        I used to work on a lot of embedded controls - ones with lots of different plug-in boards to do different things, including all kinds of control signal inputs and outputs of various voltages. The best design I saw was a 50kHz 48VAC power supply. At those frequencies, even a good-wattage xfmr is small enough to be soldered to the board. Everywhere they needed power they installed a xfmr, bridge, and voltage regulator. Had to be a little careful about separating the power from the signals, but all the power
    • MOSFETs use 12V (Score:2, Insightful)

      by wtarreau ( 324106 )
      Many recent motherboards use 12V to control the voltage regulators' MOSFET gates, because the higher the gate voltage, the lower the internal resistance, so the higher the efficiency. 5V is generally too low to achieve good efficiency, but 12V is fine.
      From 12V, the MB can produce 3.3V and 1.xxx volts for the CPU. It's easy to also provide 5V on the MB.
    • by ottffssent ( 18387 ) on Tuesday September 26, 2006 @06:19PM (#16207881)
      Disk drive motors use 12V. Laptop drives (2.5" drives) use 5V exclusively, but standard desktop and server drives use 5V and 12V. SATA drives get 3.3V, 5V, and 12V. The VRMs that power your CPU and video card probably take their power off the 12V rail, as do many other components.

      The reason you wouldn't want to power a machine off 5V is because you would need huge busses. Suppose you've got 40 svelte 1U servers in a rack, each drawing 100W. That's 4kW. Assuming that's a purely resistive load (hint: it's not), you'd need 800A at 5V for the whole rack. Are you familiar with the big connectors on car batteries? They're designed to pass less than half the 800A you'd need to run a rack off 5V, and your car battery only has to handle that for a few seconds while the engine is starting up; a rack would need to deal with that continuously. And that's for a pretty low-power rack.

      Using 12V instead of 5V lets you get away with busses about 40% the size. Also, and probably more importantly, 12V DC is (IIRC - correct me if you're a PSU designer) easier to get efficiently than 5V DC. Once you split the 12V off into a few dozen servers, you can drop it down with small, fairly efficient CMOS regulators.
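      A quick check of those figures (reading "size" as ampacity-based sizing, i.e. constant current density, which is my assumption):

```python
# Rack bus current at different distribution voltages for the 4 kW
# example above (DC, purely resistive approximation).
P_RACK = 40 * 100.0   # 40 servers at 100 W each

for volts in (5.0, 12.0, 48.0):
    print(f"{volts:4.0f} V bus: {P_RACK / volts:6.0f} A")   # 800, 333, 83 A

# Sized for constant current density (ampacity), conductor cross-section
# scales with current, so a 12 V bus needs about 5/12 ~= 42% of the
# copper of a 5 V bus: the "about 40%" figure above.
```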
    • Re: (Score:3, Informative)

      by camperslo ( 704715 )
      The only things that natively use 12 Volts at a current high enough to be significant are the drives.
      I think that misses the point, however. Designing a power supply for higher output voltage, and switching regulators for higher input voltage, raises efficiency. (24 or 48 volts would likely be better yet, except for the need to come up with 12 volts too.)

      It is unfortunate that the article (and the others that I could find) doesn't link to the white paper for some specifics.
      Instead I'll have to base my comme
  • good idea but... (Score:3, Insightful)

    by grapeape ( 137008 ) <mpope7@nOSPAm.kc.rr.com> on Tuesday September 26, 2006 @04:58PM (#16206691) Homepage
    It's a nice idea and one that is probably a long time coming, but phasing something like that into place will take an incredibly long time. Look at the struggles of PCI Express: it's still not in 50% of the newer motherboards and systems, though its benefits are more than apparent. It's just been in the past couple of years that we have seen a shift to full USB, and most machines still come with PS/2, serial and parallel ports anyway. Dramatic changes to the PC standards are very difficult; there are millions of existing machines that still need support. Perhaps if it were tied to a new socket standard in the future it could slowly be phased in through upgrades, but I see the chances as very, very slim.
    • by griffjon ( 14945 )
      To be fair though, I don't think anyone has a trademark or patent on "12V", unlike getting USB/USB2 certification, PCI Express, etc. etc. etc.
  • I'm surprised Google isn't running off 48V DC power supplies already, which, from what the documentation has shown me, already exhibit some of these savings...
    On the other hand, it might have to do with Google's policy of using as much off-the-shelf equipment as possible, which 48V is not (IIRC), so unless the "off-the-shelf" standard changes, Google might be in a position where they have to either break their own rules or pay for following them.

    Considering the number of servers google has, I'm surprised they
    • by synx ( 29979 )
      48V equipment is COTS - just not for the PC industry. The telco industry uses 48V all over the place. I remember Cisco's 2000 line of routers having a 48V DC feed/plug.

      Of course, how do you know Google isn't doing exactly what you mention? No one knows what really happens inside those giant datacentres. Protecting the competitive secrets, I guess.

      If you've ever read the Xoogler's blog then you'd know that google doesn't exactly build machines the way you and I might. Meaning, your assumption of a case i
  • At the recent LinuxWorld in SF, I noticed some DC power supplies being pushed in the rack PC sector. I guess the lack of conversion from AC to DC saves a bit of juice, which makes a difference in large colocation centers. Combined with DC conversion on the motherboard, it would just be a matter of hooking up 12V DC direct to the board, which would be much nicer on the equipment and save a bit of power.
  • by JavaManJim ( 946878 ) on Tuesday September 26, 2006 @05:03PM (#16206791)
    You can say goodbye to USB powered devices. An example would be the canned drink cooler.

    Thanks,
    Jim
    • Re: (Score:3, Insightful)

      by dgatwood ( 11270 )

      Why? I can turn 12VDC into 5VDC (what USB uses) with nothing more than a voltage regulator (or if you want to waste a ton of power, a relatively trivial voltage divider).
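      Rough numbers for that 12V-to-5V step (the load current and the switcher efficiency below are assumptions for illustration):

```python
# Dissipation when dropping 12 V to 5 V for an assumed 0.5 A USB load.
V_IN, V_OUT, I_LOAD = 12.0, 5.0, 0.5

p_load = V_OUT * I_LOAD                    # 2.5 W delivered to the device
p_linear = (V_IN - V_OUT) * I_LOAD         # a linear reg burns the difference
eff_linear = p_load / (p_load + p_linear)  # ~42%

EFF_SWITCHER = 0.90                        # assumed buck-converter efficiency
p_switch = p_load / EFF_SWITCHER - p_load  # ~0.28 W

print(f"linear:   {p_linear:.2f} W lost ({eff_linear:.0%} efficient)")
print(f"switcher: {p_switch:.2f} W lost (assumed {EFF_SWITCHER:.0%})")
```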

  • Any links to the Google white paper detailing their reasons for this system architecture?
  • by dgatwood ( 11270 ) on Tuesday September 26, 2006 @05:07PM (#16206859) Homepage Journal

    The ability to have all my machines powered by a heavy cable carrying 12VDC would be pretty useful for several reasons.

    • The UPS could be integrated into the power supply, avoiding lots of energy lost in converting it up to 110VAC and right back down again (rough numbers sketched below).
    • The power supply would then be external, where it could be a fanless brick instead of being inside the case where it adds heat that must be dissipated.
    • A switching power supply is theoretically more efficient than a wall wart. If everything were 12V, all those stupid little outboard devices could draw power off of the same supply source, resulting in better overall efficiency. More importantly, I would never let out the magic smoke by accidentally plugging a wall wart into the wrong device. :-)
    • A 12V system can more easily be integrated with solar panels to reduce load on the power grid.

    *sigh*
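    Rough numbers for the double-conversion point in the first bullet (all three efficiency figures are assumptions for illustration):

```python
# Battery -> inverter -> PC supply, versus battery -> single DC-DC stage.
EFF_INVERTER = 0.85   # assumed 12 VDC to 110 VAC inverter
EFF_ATX_PSU = 0.75    # assumed mid-2000s ATX supply
EFF_DCDC = 0.92       # assumed direct 12 V DC-DC conversion

print(f"UPS via AC:  {EFF_INVERTER * EFF_ATX_PSU:.0%} end to end")  # ~64%
print(f"direct 12 V: {EFF_DCDC:.0%} end to end")                    # 92%
```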

    • Re: (Score:2, Interesting)

      by sebol ( 112743 )
      it makes sense...

      The next generation of computers should have a 12V plug and a special cable, so that they can take a 12V source from outside.
      What's important is that the cable and socket be different from the 110V or 235V ones, to avoid "accidents".

      I would love to see a computer running from a car battery
    • by mrmeval ( 662166 )
      You still miss the Commodore 64 don't you?
      • Re: (Score:3, Interesting)

        by fm6 ( 162816 )

        No, he misses the Convergent Technologies NGen. This was a pretty powerful x86 platform that also used external power supplies. The nicest thing about it was that it was quiet: the power supplies (yes, plural; the number you need varied according to your internal hardware) used passive cooling, so only internal heat sources needed to be cooled.

        This was 1983, which was when IBM introduced the PC-AT [wikipedia.org], the machine which defines "compatibility" to this very day. And the AT used a big, noisy internal power supp

    • Re: (Score:3, Insightful)

      by fm6 ( 162816 )
      You make a good point about wall warts, except you don't go far enough. If all portable devices accepted 12V power, somebody would come out with a single brick with multiple 12V plugs, which would be a godsend to travellers who currently schlep one wall wart for each device.

      **big sigh**

      • by dgatwood ( 11270 )

        Sort of like an iGo?

        Unfortunately, they still don't have the one thing I want, which is to combine the iGo with a lithium ion charger for camera batteries with swappable tips. Camera chargers take up more space than other power supplies by far.

        The day I have my first battery failure on my Canon Digital Rebel, I'm cutting that sucker open and fitting it with a 9V battery clip and a switching regulator.

      • If all portable devices accepted 12V power, somebody would come out with a single brick with multiple 12V plugs, which would be a godsend to travellers who currently schlep one wall wart for each device.

        Guitar players have just such a device for stompboxes. The vast majority of stompboxes run off 9V DC, from a battery or wallwart, so there are several bricks you can purchase with just that. Of course, you still have to be careful around those oddballs which call for 5V, 12V, 18V, or inverted power, so i
    • Re: (Score:3, Insightful)

      by genericacct ( 692294 )
      I'm all about the solar angle! Someday I'll wire my house with an off-grid 12-volt solar system, with 12-volt "car lighter" sockets and DC lighting (both LED and mini halogen). Laptop and WiFi router plug in to it.

      And everything can plug into the car with the same cord. That's another awesome advantage, being able to put these same computers in cars and RVs.
    • Your points are interesting, but that's not what Google is talking about. They're just proposing that the cable from the power supply to the motherboard inside a PC or server should only carry 12V. The power supply is still internal, and each device still has a separate supply.
    • I don't get it. (Score:3, Interesting)

      by pavon ( 30274 )
      I agree that a standardized 12VDC connector on all electronic devices would be nice, like every other poster here has pointed out, but I don't think that is what google is talking about. You can already get power supplies that take 12VDC in, or even dual 48VDC (telecom standard), and I would be surprised if google isn't using something like that already.

      What they are recommending is that the power supply only have 12V out, and all other DC-DC conversions take place on the mother board. Unfortunately, the ar
  • The rate at which the Google computing system has grown is as remarkable as its size. In March 2001, when the company was serving about 70 million Web pages daily, it had 8,000 computers, according to a Microsoft researcher granted anonymity to talk about a detailed tour he was given at one of Google's Silicon Valley computing centers. By 2003 the number had grown to 100,000.

    I've done my share of Google-bashing (mainly due to their inability to move their newer products beyond "beta"), but here's an accomp

  • by Anonymous Coward
    The motherboard itself is an outdated concept. It's no longer really necessary: if you've dealt with small-form-factor boards you can easily see that the boards are just a substrate to stick the chips on, and for that a flat board-like surface doesn't make sense. What you really need is a cubic, cartridge-like device that gives you access to more surface area for interfaces close to the memory and CPUs and other chips in a smaller area. It would also facilitate cooling, reducing power requirements at the system
  • Microsoft will specify a Vista-compatible power supply that not only uses a single voltage but can radiate power like Star Trek's newest Enterprise does for nearby devices.
  • Seems like there was some talk a couple of years ago about doing the AC-DC conversion on a massive level, then running individual servers off a server-room-wide source. If you create ±12V output on a block of plugs, and ±5V outputs on another set of plugs, you could achieve much better efficiencies. You'd also probably cut your costs significantly. If you put a massive AC-DC transformer in another area, you could isolate the cooling systems, etc. One large cooling system for a single, large power supply wo
  • Why not -48? (Score:4, Interesting)

    by AaronW ( 33736 ) on Tuesday September 26, 2006 @05:15PM (#16206995) Homepage
    A lot of telco equipment is designed to run on -48 volts DC, and PC and server power supplies are readily available at this voltage.

    The advantage of -48 over 12 volts is that there will be less loss through resistance and smaller conductors can be used. Of course, there is a greater risk of electric shock, but I would think -48 would be pretty safe.

    48 volts is also the standard for Power over Ethernet (IEEE 802.3af) [wikipedia.org]. This may not be compatible, though, since telcos run -48, not +48, though some equipment can operate with either polarity and some cannot.
    • by springbox ( 853816 ) on Tuesday September 26, 2006 @06:08PM (#16207755)
      48 volts is also the standard for Power over Ethernet (IEEE 802.3af) [wikipedia.org]. This may not be compatible, though, since telcos run -48, not +48
      Sure they can! Just reverse the polarity and reroute all power to the main deflector dish!
  • Low-voltage power supplies in racks might make sense. Not in desktops, because low-voltage power requires more copper to distribute it, because there's more current. Copper is very expensive of late.

    Bruce

    • They're not talking about reducing the voltage the PS uses, they're talking about not having the PS produce things like +5 and -5 as well as +12, INSIDE the computer.
  • Bad idea (Score:3, Interesting)

    by ErMaC ( 131019 ) <ermac@ermacstudi ... org minus author> on Tuesday September 26, 2006 @05:33PM (#16207273) Homepage
    Google's whitepaper is interesting but the fact is that DC in the Datacenter is already happening, and it's not gaining much momentum for multiple reasons.
    Google's perspective is rather unique: they use super-cheap desktop systems that individually do not use a lot of power, and thus running them off 12V DC might make sense. But in any other, more conventional datacenter, servers have multiple power supplies that can EACH pull 800W of power. Now when you're running 110V AC that means you're pulling ~7 amps through a single cable. You need datacenter-grade power cables for this, but it's still sane. Now you can get datacenter equipment that runs on 48V DC, but those cables end up running ~15 amps through them, so now you need substantially stronger cable - cable so thick that running it becomes a seriously difficult task due to the gauge of the wire!
    More likely the direction people are going (and have been for some time) is to 208V AC or 3-phase 220V AC. Now you've just halved the current draw, meaning that your PDUs don't need to be as hefty, your wire doesn't have to be as thick, your coils don't get as hot, etc.
    Running 12V DC in any real data center would be ludicrous - the amount of current you'd have to draw through your cables would be way beyond a safe level.
    Also, AC/DC conversions are cheap these days. And remember, DC can kill you just as easily as AC when your DC voltage is that low.
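    The currents behind those figures, for one 800W supply fed at various voltages (a sketch ignoring power factor and conversion losses):

```python
# Per-cable current for an 800 W server power supply at different feeds.
P_SUPPLY = 800.0  # watts

for volts in (110.0, 208.0, 48.0, 12.0):
    print(f"{volts:5.0f} V feed: {P_SUPPLY / volts:5.1f} A per cable")
# 110 V -> 7.3 A, 208 V -> 3.8 A, 48 V -> 16.7 A, 12 V -> 66.7 A
```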
    • RTFA (Score:3, Informative)

      Google is proposing 12V between the server's internal power supply and the motherboard. Everything outside the server would still be 208V.
  • by Animats ( 122034 ) on Tuesday September 26, 2006 @05:36PM (#16207325) Homepage

    Most of the postings so far have it all wrong. Google is not proposing 12VDC into a desktop PC or 12VDC distribution within the data center. What they're proposing is that the only DC voltage distributed around a computer case should be 12VDC. Any other voltages needed would be converted on the board that needed it.

    This is called "point of load conversion", and involves small switching regulators near each load. Here's a tutorial on point of load power conversion. [elecdesign.com]

    It's been a long time since CPUs ran directly from the +5 supply. There's already point of load conversion on the motherboard near the CPU. Google just wants to make that work off the +12 supply, and get rid of the current +5/-5/+12/-12 output set.

    • Though getting rid of all the voltage levels will take more than the motherboard work... you'll also need to do something about all the disks and other components that are currently getting a mixed feed.
    • Oh. So, we have lots of switching power supplies and tantalum capacitors (because we have to supply lots of current at low-voltage) on the MB. Thus moving work from a cheap part of the computer to an expensive part. Not sure I want more power-supply electronics on the MB than is already there.
      • Re: (Score:3, Insightful)

        by inKubus ( 199753 )
        There are over 6500 Walmart stores with an average size of 120,000 square feet. Every 500 square feet they have a 4-tube fluorescent light fixture, drawing 4x40 or 160 watts. Multiplying out, the total square footage is ~6500*120,000 = 780,000,000 square feet. Divide by 500 square feet to get the total number of fixtures, 1,560,000, and multiply that by 160 watts to get the total watts, 249,600,000. Probably 75% of those Walmart stores are 24-hour, while the rest are 12-hour: (.75*24)+(.25*12) = 21 average
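        The arithmetic above checks out; spelled out (all inputs are the poster's estimates, not verified figures):

```python
# Walmart lighting estimate from the parent post.
stores = 6500
sqft_per_store = 120_000
fixture_watts = 4 * 40                    # one 4-tube fluorescent fixture

total_sqft = stores * sqft_per_store      # 780,000,000 sq ft
fixtures = total_sqft // 500              # 1,560,000 fixtures
total_watts = fixtures * fixture_watts    # 249,600,000 W
avg_hours = 0.75 * 24 + 0.25 * 12         # 21 hours/day weighted average
print(total_sqft, fixtures, total_watts, avg_hours)
```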
  • Save the conversion process altogether and tell motherboard makers to rectify 120V directly on the motherboards. It worked for old TVs, it could work again. Maybe.
  • There's already a small-scale example of this: the PicoPSU [silentpcreview.com]. You use an external 12V power brick, and then internally replace your entire computer PSU with something about the size of a matchbox. However, it is only 120W, and a bit short on connectors.
  • by Skapare ( 16644 ) on Wednesday September 27, 2006 @03:30AM (#16211785) Homepage

    Another way to get more efficiency is to operate the Switched-mode power supply [wikipedia.org] at the higher voltage it supports, usually 220 to 250 volts. In most of the world this is already done. In North America computers are typically run on 120 volts (in Japan, 100 volts). In general, these power supplies are about 3% or so more efficient on the higher voltage. Of course, be sure to flip the voltage switch if it has one, or otherwise verify that it does support the higher voltage.

    For a single computer, it would not be worth adding the extra circuit to get 240 volts. But if you run several, it could be worth doing, especially if you have so many that they exceed the capacity of one 120 volt 15 or 20 amp circuit (you could have twice as many computers on the same amperage if operating at 240 volts). If you already have a circuit dedicated to the computers, that circuit could be converted from 120 volts to 240 volts by changing out the circuit breaker from a one-pole to a two-pole type, marking the white neutral wire with red or black tape to comply with electrical code identification requirements, attaching these wires to that new breaker (not to the neutral bus), and installing a 240 volt style outlet (NEMA 6-15R or 6-20R). These are the steps that would be used to install an outlet for a big window air conditioner (which you might need anyway with so many computers). Then you can use this [interpower.com] power cord.
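    A quick capacity check for the circuit conversion described above (the 80% continuous-load derating is the usual NEC rule of thumb; treat the numbers as illustrative):

```python
# Continuous-load capacity of a dedicated branch circuit, 120 V vs 240 V.
BREAKER_AMPS = 15
DERATE = 0.80      # NEC continuous-load rule of thumb

for volts in (120, 240):
    watts = volts * BREAKER_AMPS * DERATE
    print(f"{volts} V on a {BREAKER_AMPS} A breaker: ~{watts:.0f} W continuous")
# 120 V -> ~1440 W; 240 V -> ~2880 W: twice the load on the same amperage
```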
