Google Reveals "Secret" Server Designs

Hugh Pickens writes "Most companies buy servers from the likes of Dell, Hewlett-Packard, IBM or Sun Microsystems, but Google, which has hundreds of thousands of servers and considers running them part of its core expertise, designs and builds its own. For the first time, Google revealed the hardware at the core of its Internet might at a conference this week about data center efficiency. Google's big surprise: each server has its own 12-volt battery to supply power if there's a problem with the main source of electricity. 'This is much cheaper than huge centralized UPS,' says Google server designer Ben Jai. 'Therefore no wasted capacity.' Efficiency is a major financial factor. Large UPSs can reach 92 to 95 percent efficiency, meaning that a large amount of power is squandered. The server-mounted batteries do better, Jai said: 'We were able to measure our actual usage to greater than 99.9 percent efficiency.' Google has patents on the built-in battery design, 'but I think we'd be willing to license them to vendors,' says Urs Hoelzle, Google's vice president of operations. Google has an obsessive focus on energy efficiency. 'Early on, there was an emphasis on the dollar per (search) query,' says Hoelzle. 'We were forced to focus. Revenue per query is very low.'"
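A rough back-of-the-envelope sketch of why those efficiency numbers matter at Google's scale; the fleet size and per-server draw below are assumptions, not figures from the article:

    # Python sketch: grid power lost in the UPS stage at two efficiencies.
    # Assumed figures: 200,000 servers drawing 250 W each (~50 MW of IT load).
    SERVERS = 200_000
    WATTS_PER_SERVER = 250
    LOAD_W = SERVERS * WATTS_PER_SERVER

    def overhead_watts(efficiency):
        """Extra grid power needed to push LOAD_W through a stage of the given efficiency."""
        return LOAD_W / efficiency - LOAD_W

    print(f"95% central UPS:        {overhead_watts(0.95) / 1e6:.2f} MW wasted")   # ~2.63 MW
    print(f"99.9% on-board battery: {overhead_watts(0.999) / 1e6:.2f} MW wasted")  # ~0.05 MW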
  • Pretty cool stuff (Score:3, Interesting)

    by Sethus ( 609631 ) on Thursday April 02, 2009 @12:13PM (#27431833)
    I'm no guru of servers, but from my own limited experience installing servers at the small-to-midsized company I work at, space is always a looming issue. And shrinking the size of the UPS you need can only save money and space in the long run, which any IT manager will tell you is a huge benefit and a great selling point.

    Nothing to do but wait for a finished product at this point though.
  • Re:The New Mainframe (Score:5, Interesting)

    by spiffmastercow ( 1001386 ) on Thursday April 02, 2009 @12:14PM (#27431867)
    But wasn't the mainframe just the old cloud? I seem to remember there was a reason we moved away from doing all the processing on the server back in the 80s... If only I could remember what it was.
  • Re:The New Mainframe (Score:5, Interesting)

    by AKAImBatman ( 238306 ) * <akaimbatman@gmaYEATSil.com minus poet> on Thursday April 02, 2009 @12:22PM (#27431987) Homepage Journal

    I don't know which 80's you lived through, but mainframe processing was alive and well in the 80's I lived through. Microcomputers were a joke back then, and were seen as mostly a way to play video games. (With a smattering of spreadsheet and word processing here and there.) In the 90's, PCs started to take hold. They took over the word processing and spreadsheet functionality of the mainframe helper systems. (Anybody here remember BTOS? No? Damn. I'm getting old.)

    Note that this didn't retire the mainframe despite public impressions. It only caused a number of bridge solutions to pop up. It was the rise of the World Wide Web that led to a general shift toward PC server systems over mainframes. All we're doing now is reinventing the mainframe concept in a more modern fashion that supports multimedia and interactivity.

    Welcome to Web 2.0. It's not thin-client, it's rich terminal. The mainframe is sitting in a cargo container somewhere far away and we're all communicating with it over a worldwide telecom infrastructure known as the "internet". MULTICS, eat your heart out.

  • by David Gerard ( 12369 ) <slashdot AT davidgerard DOT co DOT uk> on Thursday April 02, 2009 @12:35PM (#27432245) Homepage

    Many data centres expressly forbid UPSes or batteries bigger than a CMOS battery in installed systems - because when the fire department hits the Big Red Button, the power is meant to go OFF. IMMEDIATELY.

    So while this is a nice idea, applying it outside Google may produce interesting negotiation problems ...

  • by rotide ( 1015173 ) on Thursday April 02, 2009 @12:46PM (#27432427)
    Isn't the red button for safety of the employees? As in, I'm under the floor and somehow the sheathing on a power feed to the rack next to me gets stripped? I start to light up and someone notices and hits the "candy red button" to save me?

    Pretty sure if the fire department is coming in to throw water lines around, they are going to cut the power to the building, not just the circuit on the datacenter floor.

    I could be mistaken, but I don't think a 12-volt battery backup in these applications is going to pose much of a "life" risk. Obviously you don't want to put your tongue on the terminals, but I don't think they pose the same threat that the power lines under the floor do.

  • by wsanders ( 114993 ) on Thursday April 02, 2009 @12:46PM (#27432429) Homepage

    Hundreds of thousands of servers == thousands of dead batteries each month, since those batteries don't last more than a few years.

    Now I'd think their design could be gentle on the 12V batteries, since it's possible to design UPSes that don't murder batteries at the rate cheap store-bought UPSes do. But still, they must have an army of droids swapping out batteries on a continuous basis.

    Or maybe they are more selective, and only swap out batteries on hosts that have suffered one or two outages. It only takes one or two instances of draining a gel cell to exhaustion before it is unusable.
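    A rough sketch of that replacement-rate arithmetic, with assumed numbers (neither the fleet size nor the battery life is something Google has published):

        # Python sketch: how many battery swaps per month a large fleet implies.
        servers = 200_000            # "hundreds of thousands"
        battery_life_months = 36     # a sealed lead-acid cell lasting a few years
        swaps_per_month = servers / battery_life_months
        print(f"~{swaps_per_month:,.0f} battery swaps per month")   # ~5,556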

  • Re:The New Mainframe (Score:5, Interesting)

    by jellomizer ( 103300 ) on Thursday April 02, 2009 @12:55PM (#27432585)

    Technology sways back and forth, and there is nothing wrong with that.

    1980s: 2400/9600 bps serial connections displayed the data that people wanted fast enough for them to get their work done, and the central computer had enough processing power to handle a lot of people for such simple tasks. And computers were expensive; heck, it was a few thousand bucks for a VT terminal.

    1990s: More graphics-intensive programs came out, along with color displays, and serial didn't cut it, way too slow. Cheaper hardware made it possible for people to have individual computers, and networks were mostly for file sharing. So you were better off processing locally, which gave you more capacity on demand.

    2000s: Now people have high-speed networks across wide distances, and security and stability issues make it better to have your data and a lot of the processing done in one spot. So we go back to the thin client and server, where the client actually still does a lot of work but the server does too, to give us the correct data.

  • by wonkavader ( 605434 ) on Thursday April 02, 2009 @01:00PM (#27432691)

    I'm a little surprised by the keyboard and mouse port and the two USB ports. If it uses USB, why not just use that for the keyboard and mouse? And why the second USB port? I suspect the second port doesn't consume extra energy directly, but it causes air resistance where they'd like a clear path to drag air across the RAM and CPUs.

    And why the slots which will never get used? In quantities like Google buys, you'd think those would be left off.

    Maybe they don't make any demands on Gigabyte (the manufacturer) and just buy a commodity board? When they're buying this many, you'd think Gigabyte would be happy to make a simpler board for them. On a trivial search, I don't see the ga-9ivdp for sale anywhere, but maybe it's just old.

  • by ehud42 ( 314607 ) on Thursday April 02, 2009 @01:02PM (#27432719) Homepage

    So this sounds like one of those "so obvious, no one thought of it" questions - if Google is so concerned about precious milliwatts that it standardizes on 12V hardware to reduce the losses of sending 5V & 3.3V power from the power supply to the board, why do the CPUs have fans? The side view of the chassis seems to suggest that with a few minor tweaks the units could rely on passive cooling and use the data centre / container fans for air flow.

    1) Move hotter components like the CPUs to the front and replace fans with larger passive heat sinks.
    2) Line up the RAM modules to ensure proper airflow to the back of the chassis, with chipset heat sinks lined up accordingly.
    3) Lay the HDs over top of the voltage regulators, with appropriate heat sinks.
    4) Put the power supply and battery at the rear.

    Have the hot air return duct work arranged at the back of the rack with appropriate holes and seals so that the units make a good connection to maximize airflow.

  • by TheSunborn ( 68004 ) <mtilsted.gmail@com> on Thursday April 02, 2009 @01:06PM (#27432785)

    A Google mainframe would be stupid.

    If you take the price of a mainframe and compare that to what Google can get for the same money using their current solution, the current solution offers at least 10 times as much CPU performance, and much, much more aggregate I/O bandwidth (both hard disk and memory).

    There are only 2 reasons to use mainframes now.

    1: Development cost. Building software that can scale on commodity hardware is expensive and difficult. It requires top-notch software developers and project managers. It makes sense for Google to do it, because they use so much hardware (>100,000 computers at last count).

    2: Legacy support.

  • by Anonymous Coward on Thursday April 02, 2009 @01:14PM (#27432961)

    Wow, you missed the point. Poster is contending that the patent FAILS to protect IP, BY MAKING AVAILABLE the instructions to REPLICATE said IP.

    Yeah, it may work against Yahoo!, but it doesn't save you from companies in China and India, who can undercut you on labor costs, and have a much more rapidly expanding market.

  • Sigh... (Score:3, Interesting)

    by ckaminski ( 82854 ) <slashdot-nospam.darthcoder@com> on Thursday April 02, 2009 @01:21PM (#27433071) Homepage
    Once upon a time, maybe 6 or 8 years ago now, I got to sit down with the CEO of APC and basically told him I wanted battery-backed in-computer power supplies, something small yet efficient. I wanted the functionality my laptop has: unplug the PC, move it, plug it back in. Same for my servers. (This might have been around when that whole Netshelter product line started up.)

    Ah, too bad I kept no notes, no logs, could have made a fortune suing Google. :-)
  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Thursday April 02, 2009 @01:38PM (#27433375) Homepage Journal

    There are only so many places you can connect a battery to a PC, and all of them have already been implemented by someone at some point. There have been motherboards with second power connectors, motherboards with battery connectors, power supplies with batteries, power supplies with battery connectors, DC power supplies connected to external batteries, and integrated UPS systems that take in and put out AC and are basically just hooked up in line with the power supply... Off the top of my head I immediately think of AS/400 systems, which were offered with an integrated UPS before they even renamed it to zSeries or whatever it is. (I always forget. AS/400 was a good, IBM-sounding name.) The solution that comes immediately to my mind for a Google-style distributed data center would be to use something power-efficient hooked up to a PicoPSU, hooked up to an SLA battery, hooked up to a charger, hooked up to your power source. Cheap, simple, and built with commodity parts. (They seem to sell a UPS charger unit where you can get the PicoPSU as well.)

  • by gweihir ( 88907 ) on Thursday April 02, 2009 @01:49PM (#27433577)

    I could design this PSU configuration, and I do electronics only as a hobby.

    First, your main PSU delivers 12V in this scheme. Then this is stepped down to 5V and 3.3V for mainboard use, a design that is already employed by some Enermax PSUs, for example. For the 12V line, remember that a tolerance of +/-10% is acceptable. The lead-acid battery delivers up to 14V, so you need a step-down converter to 12V. In fact, you can design a switching regulator that steps the input voltage down to 13.2V (12V+10%) if the input is higher, and just passes it through for 13.2V...10.8V with very, very low losses. A similar design can be done for 5% tolerances. Modern switching FETs go down to 4 milliohms per transistor, and you can do the transition from switching mode to pass-through mode very easily, e.g. with a small microcontroller that can then also do numerous monitoring and safety things. I had actually considered such a design (purely analog, though) for a lower-power, 12V external supply system myself some years ago, but a single UPS was so cheap that I did not go through with it.
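    A minimal sketch of that pass-through/step-down policy, assuming a 12V rail with a 10% tolerance window; the thresholds and hysteresis value are illustrative, not taken from any real design:

        # Python sketch of the mode decision a small monitoring microcontroller could make.
        V_NOMINAL = 12.0
        V_MAX = V_NOMINAL * 1.10    # 13.2 V: upper edge of the tolerance window
        V_MIN = V_NOMINAL * 0.90    # 10.8 V: lower edge
        HYSTERESIS = 0.1            # volts, to avoid chattering between modes

        def select_mode(v_in, current_mode):
            """Return 'pass-through', 'buck', or 'fault' for the measured input voltage."""
            if v_in < V_MIN:
                return "fault"      # battery exhausted: signal an orderly shutdown
            if current_mode == "buck":
                # stay in buck until the input falls safely back below the limit
                return "buck" if v_in > V_MAX - HYSTERESIS else "pass-through"
            return "buck" if v_in > V_MAX else "pass-through"

        # A freshly charged lead-acid battery at 13.8 V -> buck down to 13.2 V;
        # a discharging battery at 12.3 V -> pass through with very low losses.
        print(select_mode(13.8, "pass-through"), select_mode(12.3, "pass-through"))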

    I do not mean to belittle what the Google folks do, though. The real ingenuity is realizing you can do it this way at datacenter scale when nobody else does. The engineering is then not too demanding, at least for folks who know what they are doing.

  • Re:The New Mainframe (Score:3, Interesting)

    by eric76 ( 679787 ) on Thursday April 02, 2009 @02:15PM (#27434003)

    1980s: 2400/9600 bps serial connections displayed the data that people wanted fast enough for them to get their work done.

    We used to run a small company off of a single 2400 baud link with an 8 port statmux (statistical multiplexor) to a remote VAX minicomputer.

    It worked fine.

    heck, it was a few thousand bucks for a VT terminal.

    If I remember correctly, a VT100 was something like $1,200 or $1,600. After a while, there were third party VT100 compatibles that were much cheaper.

    I bought a brand-new, out-of-the-box, ten-year-old VT100-compatible terminal on eBay a couple of years ago for about $60.

    I love it. I actually get more work done on it than from my usual Linux and OpenBSD workstations.

  •     Actually, they're not.

        Laptops run slower than their PC counterparts.

        Laptop drives run slower than their PC counterparts.

        Laptops run hotter under load than their PC counterparts.

        If you look carefully at the picture, they've found a 12v motherboard, tied a 12v battery directly into it, and used otherwise commodity parts. That's been the mantra for Google for as long as I can remember. Oddly enough, that was my mantra when I built up a big network. Lots and lots (and lots and lots) of cheap servers are better than a handful of really expensive ones. That saved our cumulative posteriors on more than one occasion.

        I've spoken with some people who have personal knowledge of Google's equipment. They were setting up with RAID 01 or 10. I suspect that with the two-drive configuration, they're only setting up with RAID 0 now, and the redundancy is across multiple servers. I can confirm that they are using this open tray system for its superior cooling.

        I had considered open trays like this, except there's one huge drawback. You would have to be amazingly careful about what happens near the rack. If you are screwing something in, and the screw or screwdriver falls, that can become very bad very quickly. Did you see any fuses or breakers between the battery and the power supply?

        Short of making the area around the rack a metal-free zone (no screws, screwdrivers, rings, keys, watches, etc.), you'd seriously run the risk of shorting something out. I know I've been working in the higher areas of a rack and dropped screws. You listen to them rattle their way down across several machines until they finally hit the floor. Since I use closed server cases, it's never a problem. Maybe they don't have a big problem with it at Google, but I'd be terrified of it. Anyone who says they've spent any substantial time working in and around racks and has never dropped anything is lying. I do love the idea for free airflow and better cooling, but ... well ... I like to keep the magic smoke in its place. :)

        The one-battery-per-server idea is nice, though. I may look into that for future builds. Most PC power supplies have 5V and 12V outputs. That power supply indicated only a 12V output, and didn't have any wires that indicated anything different.

  • by inasity_rules ( 1110095 ) on Thursday April 02, 2009 @05:20PM (#27436727) Journal

    The problem I have with running a motherboard directly from a 12V battery is that most batteries are 12V nominal; the actual voltage varies quite a bit (10.5-13.8V for lead-acid). So the question is how well the 12V components cope with the lower/higher voltage. Most of the logic should be OK; that's all 5/3.3/1.xV. I'm guessing the only stuff that really uses 12V anymore is actually disk drive circuitry (not technically on the MB).

    I have a suspicion that you really don't want to be running a hard drive off a voltage supply that varies by up to 25%. They must have solved this somehow (step-up + step-down converter? But that is not efficient), but I really see no point in using 12V motherboards unless everything else can reliably run off the battery first. The home consumer may as well stick with getting 5V from the PSU and letting it dissipate the heat from the step-down conversion until we're all using 5V disk drives. In which case, we can probably move to lower voltages (and lower-voltage batteries); ~8V seems about right to get a stable 5V.
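    A quick check of that concern in numbers, assuming the usual +/-5% and +/-10% tolerance windows for a 12V rail:

        # Python sketch: does a lead-acid battery's swing fit inside a 12 V tolerance window?
        battery_range = (10.5, 13.8)   # volts, deeply discharged to freshly charged

        def within(rng, nominal, tol):
            lo, hi = nominal * (1 - tol), nominal * (1 + tol)
            return rng[0] >= lo and rng[1] <= hi

        print(within(battery_range, 12.0, 0.05))   # False: 11.4-12.6 V window
        print(within(battery_range, 12.0, 0.10))   # False: 10.8-13.2 V window
        # Both ends fall outside either window, which is why some regulation (or a
        # tolerant load) has to sit between the battery and the drives.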

  • by Bill, Shooter of Bul ( 629286 ) on Thursday April 02, 2009 @06:28PM (#27437593) Journal

    From a quick once-over of an IBM datasheet, ftp://ftp.software.ibm.com/common/ssi/pm/sp/n/zsd03005usen/ZSD03005USEN.PDF [ibm.com],
    their z10 can hold a max of 1.5 TB of RAM.

    Let's say that for the load processed by each server, Google needs 8 GB of memory, which they can supply for $2,000 per server.
    Now let's be generous to IBM and its reliability and say that we need twice as many Google servers to match the equivalent IBM reliability.

    We can replace 375 Google servers per z10 (1.5 TB of RAM / 8 GB per Google machine * 2 Google machines per IBM equivalent).
    Assuming each Google server costs Google $2,000 to make, they would spend $750,000 on Google servers. Now let's assume the IBM is better on power as well, to the tune of $10,000 per year, and that both have expected lifetimes of 20 years. That comes out to a 20-year cost for the Google servers of $950,000.

    If IBM's price for the z10 is greater than $950,000, then Google should continue making their own servers. Otherwise, they should switch.
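    The same ballpark arithmetic as a sketch, using only the numbers in this comment (all of them rough assumptions):

        # Python sketch reproducing the parent's break-even estimate.
        z10_ram_gb = 1500                    # 1.5 TB, treated as 1500 GB as above
        ram_per_google_server_gb = 8
        reliability_factor = 2               # two Google boxes per "IBM equivalent"
        cost_per_google_server = 2_000       # dollars
        extra_power_cost_per_year = 10_000   # dollars/year the Google fleet spends over the z10
        lifetime_years = 20

        servers_replaced = z10_ram_gb / ram_per_google_server_gb * reliability_factor   # 375
        hardware_cost = servers_replaced * cost_per_google_server                       # $750,000
        break_even_price = hardware_cost + extra_power_cost_per_year * lifetime_years   # $950,000
        print(f"{servers_replaced:.0f} servers; the z10 breaks even at ${break_even_price:,.0f}")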

    Obviously these are all ballpark figures, which I don't expect to be exact. There are quite a few variables, and just because a mainframe may be more reliable and power-efficient, it may not be the best choice even when dealing with hundreds or even thousands of servers. Typically the price-per-performance ratio goes skyward as you move toward bigger and bigger servers.

  • by DamnStupidElf ( 649844 ) <Fingolfin@linuxmail.org> on Thursday April 02, 2009 @06:33PM (#27437667)

    Fly the data center above the Arctic Circle in the northern hemisphere's summer, and fly it down below the Antarctic Circle in the southern hemisphere's summer, and you could do the solar thing 24 hours a day with cheaper cooling.
