How Many Google Machines, Really?

BoneThugND writes "I found this article on TNL.NET. It takes information from the S-1 filing to reverse-engineer how many machines Google has (hint: a lot more than 10,000). 'According to calculations by the IEEE, in a paper about the Google cluster, a rack with 88 dual-CPU machines used to cost about $278,000. If you divide the $250 million figure from the S-1 filing by $278,000, you end up with a bit over 899 racks. Assuming that each rack holds 88 machines, you end up with 79,000 machines.'" An anonymous source claims over 100,000.
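A quick sketch of the arithmetic in the summary (the per-rack cost and the 88-machines-per-rack figure are taken from the quoted article, and the full $250 million is assumed to go toward racks):

# Back-of-the-envelope estimate of Google's machine count from the S-1 figure.
# Assumptions from the summary: ~$278,000 per 88-machine rack, and the full
# $250M capital figure spent on racks.

S1_HARDWARE_SPEND = 250_000_000   # dollars, from the S-1 filing
COST_PER_RACK = 278_000           # dollars, per the IEEE Google-cluster paper
MACHINES_PER_RACK = 88            # dual-CPU machines per rack

racks = S1_HARDWARE_SPEND / COST_PER_RACK     # ~899.3 racks
machines = int(racks) * MACHINES_PER_RACK     # ~79,112 machines

print(f"{racks:.1f} racks -> roughly {machines:,} machines")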
  • Nice Rack! (Score:4, Funny)

    by turnstyle ( 588788 ) on Sunday May 02, 2004 @12:04PM (#9034254) Homepage
    No wonder I'm'a Googlin'
  • by Alain Williams ( 2972 ) <addw@phcomp.co.uk> on Sunday May 02, 2004 @12:04PM (#9034258) Homepage
    * of servers in the world
    * of servers in the USA
    * of servers running Linux
  • $278k ?? (Score:5, Insightful)

    by r_cerq ( 650776 ) on Sunday May 02, 2004 @12:04PM (#9034261)
    That's $3159 per machine, and those are today's prices... They weren't so low a couple of years ago...
    • Re:$278k ?? (Score:4, Informative)

      by toddler99 ( 626625 ) on Sunday May 02, 2004 @12:20PM (#9034404)
      Google doesn't buy pre-built machines; they have been building custom machines from the very beginning, and with things like fab'ing their own memory, I'm sure today they do a lot more. Google runs the cheapest, most unreliable hardware you can find; it's in the software that they make up for the unreliable hardware. Unreliable hardware is OK so long as you have staff to get the broken systems out and replaced with new unreliable cheap-ass systems. When Google started, they used Legos to hold their custom-built servers together.
      • Re:$278k ?? (Score:3, Flamebait)

        by hjf ( 703092 )
        So it means if you are smart enough, you don't need a $1,500,000 Sun server or that kind of shit. Leave that for big corporations with lame-ass programmers.
        Imagine what Google could do with that kind of shit.
        • Re:$278k ?? (Score:3, Insightful)

          by jarich ( 733129 )
          Agreed. If you are able to code in your fault tolerance, it's a heck of a lot cheaper than buying it.

          What's cheaper: buying a round-robin DNS router (hardware), or coding your client to try the next web server in its list (software)? Now multiply that savings across every customer you sell to.

          The problem is finding someone who knows how to do that robustly and reliably. Most places have trouble finding developers whose programs don't crash every 15 minutes. This sort of thing is a little more advanced.

        • Re:$278k ?? (Score:5, Insightful)

          by sql*kitten ( 1359 ) * on Sunday May 02, 2004 @01:39PM (#9034844)
          so it means if you are smart enough, you don't need to have a $1,500,000 Sun server or that kind of shit. leave that for big corporations with lame-ass programmers. imagine what google could do with that kind of shit

          The difference is that if Google loses track of a few pages due to node failure it's no big deal because a) they don't guarantee to index every page on the web anyway and b) the chances are that page will be spidered again in the near future - and it may not even still exist anyway.

          Your bank, on the other hand, can't just "lose" a few transactions here and there. FedEx can't just lose a few packages here and there. Sure, they occasionally physically lose one, but they never lose the information that at one point they did have it. Your phone company can't just lose a few calls you made and not bill you for them. Your hospital can't just lose a few CAT scans and think, oh well, he'll be in for another scan eventually.

          Now, I'm not saying that Google's technique isn't clever - I'm saying that it can't really be generalized to other applications. And that's why very smart people - and big corporations can afford to hire very smart people - keep on buying Sun and IBM kit by the boatload.
          • by glpierce ( 731733 ) on Sunday May 02, 2004 @02:46PM (#9035254)
            "Your phone company can't just lose a few calls you made and not bill you for them."

            Wait, what's wrong with that one?
          • Re:$278k ?? (Score:5, Interesting)

            by jburroug ( 45317 ) <slashdotNO@SPAMacerbic.org> on Sunday May 02, 2004 @04:58PM (#9036013) Homepage Journal
            Your hospital can't just lose a few CAT scans and think oh well, he'll be in for another scan eventually.

            You've never worked in a medical field, have you? You'd think that would be a big deal, and in theory data integrity is a very high priority, but in reality...

            I used to work as the IT manager for a diagnostic imaging and cancer treatment center (and still do contract work with them because my replacement is kind of a noob). While losing studies isn't exactly a "no big deal" situation, it's still far more common than patients will ever realize. The server that stores and processes all of the digital images from the scanning equipment is a single-CPU, home-rolled P4 using some shitty onboard IDE RAID controller (doesn't even do RAID 5!) running Windows 2K. The most money I could get for setting up a backup solution was the $200 an external FireWire drive cost. Somehow we never managed to lose a study once it reached my network in the 9 months I worked there, but I know three or four were deleted from the cameras themselves before being sent properly, so whoops, it's gone, gotta reschedule (and bill their insurance or Medicare again!). Two weeks ago one of the drives in that 0+1 array failed, and despite my pleadings they still haven't ordered a replacement yet...

            Now it's tempting to think that this place is just a special case of cheapness and sloppiness, but from talking to the diagnostic techs (the people who operate the cameras), that's not so. That clinic is a little worse than average in terms of losing patient information, but by no means the worst some of them have seen, heard of, or worked at in their careers. It's worse in general at small facilities, but even large hospitals often suffer from the same unprofessionalism.

            Your bank and the phone company keep much better track of your calls or your ATM transactions than most hospitals do with your CT or MRI scans...
          • Redundancy (Score:4, Interesting)

            by crucini ( 98210 ) on Sunday May 02, 2004 @09:41PM (#9037549)
            The google file system is redundant. Loss of one node does not lose data.

            Some of the reasons these techniques aren't used in enterprise computing:
            1. They're hard, and business programmers are not that bright. And nobody has encapsulated these technologies in an IT product.
            2. The system can only respond quickly to a finite set of transactions that was known at design time. It lacks the flexibility of a standard file system or relational database.
            3. By the time a business has a lot of data, it usually has enough money to store the data conventionally. Search engines are a bit different.

            Since I've seen it up close a few times, I can say that the standard "enterprise way" (Oracle/Sun/EMC) delivers very poor bang for the buck. If Google wanted to, they could deliver a modified GFS with any desired level of reliability by increasing the redundancy. And even after that bloating, it would still deliver greater bang for the buck than the conventional solutions.
      • lego? (Score:3, Insightful)

        by sfraggle ( 212671 )
        Sounds like a pretty stupid idea to me. Lego is expensive stuff.
        • Re:lego? (Score:3, Interesting)

          by james b ( 31361 )
          I think the parent is probably referring to some of the pictures on Google's early hardware photos [archive.org] page, courtesy of the Wayback Machine. If so, the Lego never necessarily went into `production'; it was just from when they were messing around.
    • Re:$278k ?? (Score:5, Interesting)

      by Gilk180 ( 513755 ) on Sunday May 02, 2004 @12:43PM (#9034538)
      I really doubt they are spending anywhere near this for the machines themselves. A former student, now a Google employee, made one of those recruiting/marketing visits to my university last semester. I got to speak to him at length about Google's operation. According to him (and he had pictures to back this up), all of their boxen are a motherboard, an IDE drive, and a processor sitting on a shelf in the rack. No cases, no fans, no CD, etc. Plus they buy in bulk and get good prices.
      • Re:$278k ?? (Score:5, Insightful)

        by geniusj ( 140174 ) on Sunday May 02, 2004 @02:09PM (#9035025) Homepage
        I can confirm this as well.. I have seen their racks in Equinix in Ashburn, VA. I pass by their cages every time I go to my cage there. I believe I also saw them in Exodus in Santa Clara a couple of years ago. They are 1U half depth and do indeed lack a case. There are definitely thousands of their servers in Ashburn, VA, and they are very space efficient (as they would need to be).
    • Acquisition (Score:5, Insightful)

      by MrChuck ( 14227 ) on Sunday May 02, 2004 @01:12PM (#9034688)
      recall that important mantra:
      The cost of acquiring the machine is a fraction of the cost of owning it.

      And let's not forget the overhead of 2 networks per machine and all the patch panels, wiring, and switches. Toss in console management (which may not be on all machines at all times), plus monitoring and management of said machines. Oh, and one really tired guy running around.

      Disks are going to fail at a rate of several hundred or thousand PER DAY, just statistically. (along with power supplies etc)

      Toss in that in three years, ALL of those machines are obsolete.
      That's huge.

      I've got ~300 racks in a half-full data center upstairs from me. All network cables run to a room below it to patch panels. Around 50% of the size of the DC is cable management. Next to that is a room FILLED with chest-high batteries - these are used during outages until the generators need to be kicked on. And a NOC takes up about 1/5th the space of the DC (it monitors systems worldwide, but it's got seating for maybe 40 people - tight and usually filled with 10 folks, but in a crunch we live up there).

      So that $3159 is only a bit of it. And in 3 years, all those machines will likely be replaced with whatever $3k buys then. That's about to be a 2-CPU Athlon64 box. If Sun can pull a rabbit out of its ass, we'll have 8- and 16-CPU Athlon64 boxes. At least with those, some of the CPUs can talk to each other really, really, really fast.

      • Re:Acquisition (Score:5, Informative)

        by Anonymous Coward on Sunday May 02, 2004 @02:03PM (#9034983)
        >>Disks are going to fail at a rate of several hundred or thousand PER DAY

        That's a little over the top, big guy. I've worked at a 10,000-node corp doing desktop support. We lost ONE disk perhaps a week... if that much. We often went several weeks with no disks lost.

        Even if you factor in multiple drives per server, say TWO (because they are servers, not desktops), and interpolate for 100,000 machines, that's a max of 20 disks per week... on the high end.
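A quick sanity check of that interpolation, using only the failure rate and drive count assumed in the comment above:

# Scaling the anecdotal desktop failure rate up to a Google-sized fleet.
# Assumptions from the comment: ~1 disk failure/week per 10,000 machines,
# and 2 drives per server instead of 1.

observed_failures_per_week = 1
observed_fleet = 10_000
drives_per_server = 2
fleet = 100_000

failures_per_week = observed_failures_per_week * (fleet / observed_fleet) * drives_per_server
print(failures_per_week)   # 20.0 disks/week on the high end -- nowhere near "hundreds per day"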

  • by Sadiq ( 103621 ) on Sunday May 02, 2004 @12:05PM (#9034262) Homepage
    Can you imagine a beowul.... oh.. wait..

  • by Anonymous Coward on Sunday May 02, 2004 @12:05PM (#9034265)
    1) google is so pretty and smart
    2) google is worth so much money
    3) google has a huge rack!!
  • IPO changes things (Score:5, Interesting)

    by Have Blue ( 616 ) on Sunday May 02, 2004 @12:05PM (#9034266) Homepage
    There was an article recently about how Google constantly understates various statistics about itself to mislead potential competitors. This article also said that the SEC would not allow them to do this once they became a publicly traded company.
  • by the_raptor ( 652941 ) on Sunday May 02, 2004 @12:06PM (#9034276)
    Seriously? What is the point of this article? What next? Linus found to prefer blue ink, over black ink?
  • Not unexpected... (Score:5, Insightful)

    by avalys ( 221114 ) * on Sunday May 02, 2004 @12:06PM (#9034281)
    I don't think this is that strange: after all, that 10,000 machines figure is several years old. It's only logical that Google has expanded their facilities since then.
  • by earthforce_1 ( 454968 ) <earthforce_1@yaho[ ]om ['o.c' in gap]> on Sunday May 02, 2004 @12:06PM (#9034282) Journal
    SCO now knows how big an invoice to send Google! :-D
  • by Anonymous Coward on Sunday May 02, 2004 @12:06PM (#9034283)
    I'm sure a single IBM mainframe could do the same amount of work in half the amount of time and cost a fraction of what that Linux cluster cost.

    I hang around too many old-timer mainframe geeks. MVS forever!!! and such.

    • Re:What a waste (Score:5, Interesting)

      by phoxix ( 161744 ) on Sunday May 02, 2004 @12:18PM (#9034377)
      If you've ever read a white paper of Google's, you'd realize that they even tell people why they deal with massive clusters over mainframes: lower latency.

      Sunny Dubey
    • Re:What a waste (Score:5, Informative)

      by Waffle Iron ( 339739 ) on Sunday May 02, 2004 @12:31PM (#9034476)
      I'm sure a single IBM mainframe could do the same amount of work in half the amount of time and cost a fraction of what that Linux cluster cost.

      Mainframes are optimized for batch processing. Interactive queries do not take full advantage of their vaunted I/O capacity.

      Moreover, while a mainframe may be a good way to host a single copy of a database that must remain internally consistent, that's not the problem Google is solving. It's trivial for them to run their search service off of thousands of replicated copies of the Internet index. Even the largest mainframe's storage I/O would be orders of magnitude smaller than the massively parallel I/O operations done by these thousands of PCs. Google has no reason to funnel all of the independent search queries into a single machine, so they shouldn't buy a system architecture designed to do that.

  • Assumptions? (Score:5, Interesting)

    by waytoomuchcoffee ( 263275 ) on Sunday May 02, 2004 @12:07PM (#9034293)
    According to calculations by the IEE, in a paper about the Google cluster, a rack with 88 dual-CPU machines used to cost about $278,000

    Um, don't you think if you were buying 899 racks you might actually, you know, negotiate for a better price?

    This isn't the only assumption in your analysis, and the problems with them will be compounded. What's the point of this, really?
  • Maybe just me... (Score:4, Insightful)

    by hot_Karls_bad_cavern ( 759797 ) on Sunday May 02, 2004 @12:07PM (#9034295) Journal
    Might just be me, but damn, don't you think this has raised the interest of our three-letter entities? I mean, damn, that is just some serious computing and indexing power on cheap, "disposable" hardware... with a filesystem that can keep track of that many machines? If I headed one of those entities, I'd sure want to know more about it!
  • wait (Score:5, Insightful)

    by Docrates ( 148350 ) on Sunday May 02, 2004 @12:08PM (#9034302) Homepage
    Remember there's a little thing called "volume discount"...

    It's gotta be more than that.
  • by Chucklz ( 695313 ) on Sunday May 02, 2004 @12:09PM (#9034307)
    With all those TFlops, no wonder Google converts units so quickly.
  • Really? (Score:5, Funny)

    by irikar ( 751706 ) on Sunday May 02, 2004 @12:10PM (#9034317)
    You mean the PigeonRank [google.com](tm) technology is a hoax?
  • by 2MuchC0ffeeMan ( 201987 ) on Sunday May 02, 2004 @12:11PM (#9034325) Homepage
    Because with ~80,000 machines, they can easily put a few hard drives in each and give everyone 1 GB of Gmail space... I didn't think it was possible.

    Where do you go to buy 80,000 hard drives?

  • by cyclop5 ( 715837 ) on Sunday May 02, 2004 @12:12PM (#9034330)
    In your standard 42U cabinet, you're talking a half-U per server. Umm.. not happening. Let's just say I happen to know they use 2U servers, for a total of 21 per cabinet. Custom jobs - just the "floor pan" (i.e. no sides, or top for the case), system board, power supply, and I think a single (or possibly dual) hard drive (I didn't want to be too nosy staring into someone else's colo space). Oh, and network. And rumor has it, they're putting in close to 200 cabinets in just this location alone.
    • by PenguinOpus ( 556138 ) on Sunday May 02, 2004 @12:22PM (#9034413)
      Racksaver was selling dual-machine 1U racks for several years and I owned a few of them. Think deep, not tall. Racksaver seems to have renamed itself Verari and only has dual-Opteron in a 1U now. Most dense configs seem to be blade-based these days. Verari advertises 132 processors in a single rack, but I suspect they are not king in this area.

      If Google is innovating in this area, it could either be on price or in density.
  • Power (Score:3, Funny)

    by ManFromAnotherPlace ( 740650 ) on Sunday May 02, 2004 @12:15PM (#9034354)
    This many computers must use quite a bit of power, and they probably also need some serious air conditioning. I sure wouldn't want to receive their electricity bill by mistake. :)
  • Google hosting (Score:5, Interesting)

    by titaniam ( 635291 ) * <slashdot@drpa.us> on Sunday May 02, 2004 @12:15PM (#9034355) Homepage Journal
    I wonder if google will start up a web-hosting business? I bet you can't beat their uptime guarantees. They could provide sql, cgi, etc, and build in multi-machine redundancy for your data just like they do for theirs. It'll be the google server platform, just one more step to replacing Microsoft as the evil monopoly.
  • by Durandal64 ( 658649 ) on Sunday May 02, 2004 @12:16PM (#9034364)
    The number of machines Google uses is considered a trade secret. By attempting to determine how many machines they have, you're in violation of the DMCA. I'm calling the FBI.
  • by Anonymous Coward on Sunday May 02, 2004 @12:17PM (#9034371)
    Working at AboveNet, I've seen Google pull their machines in and out of our data centers many a time. It's incredible the way their stuff is set up.

    They fit about 100 or so 1Us on each side of the rack; they're double-sided cabinets that look like refrigerators. They're separated in the center by no-name-brand switches, and they have caster wheels on the bottoms. Google can, at the drop of a dime, roll their machines out of a data center onto their 16-wheeler, move, unload, and plug into a new data center in less than a day's time.
  • by peterdaly ( 123554 ) <petedaly@@@ix...netcom...com> on Sunday May 02, 2004 @12:18PM (#9034376)
    Since the 10k server number was first floated, I believe google has added quite a few, meaning 6 to 10 whole new datacenters around the world.

    It would only make sense that the server count would now be in the ballpark of what is mentioned here.

    Google hasn't been standing still, and I've heard the "Google has 10k servers" for 1-2 years now.

    -Pete
  • 15 Megawatts (Score:5, Interesting)

    by SuperBanana ( 662181 ) on Sunday May 02, 2004 @12:21PM (#9034405)

    ...assuming 200W per server, which is probably low, but probably compensates for 79,000 being most likely an overestimate. However, that doesn't even begin to account for the energy used to keep the stuff cool.

    Anyone know how many trees per second that would be? Conversion to clubbed-baby-seals-per-sec optional.
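The headline figure follows directly from the comment's own assumptions (200 W per server and the article's ~79,000-machine estimate); a quick sketch:

# Power draw implied by the headline numbers (cooling not included).
servers = 79_000          # the article's estimate
watts_per_server = 200    # the comment's assumption

total_mw = servers * watts_per_server / 1e6
print(f"{total_mw:.1f} MW")   # ~15.8 MW, i.e. the "15 Megawatts" in the title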

    • by gspr ( 602968 ) on Sunday May 02, 2004 @12:30PM (#9034473)
      According to Google herself [66.102.9.104] dried wood contains 15.5 MJ of energy per kg. It seems that Google consumes about 1 kg of wood per second (if they've found a way to utilize 100% of the energy, which they of course have - they're Google, after all), and that the pigeons [google.com] are just there to use their wings to dry the wood!
      We're on to you, Google!
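The joke's arithmetic checks out: at roughly 15 MW and 15.5 MJ per kg of dried wood, the burn rate is about a kilogram per second (assuming, as the comment does, 100% conversion efficiency):

# Wood-equivalent of a ~15 MW power draw.
power_watts = 15e6              # ~15 MW from the parent comment
wood_energy_j_per_kg = 15.5e6   # 15.5 MJ/kg for dried wood, per the comment

kg_per_second = power_watts / wood_energy_j_per_kg
print(f"{kg_per_second:.2f} kg of wood per second")   # ~0.97 kg/s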
      • by glenstar ( 569572 ) on Sunday May 02, 2004 @05:31PM (#9036199)
        According to Google herself...

        Hm... Google seems decidedly male to me.

        1) Answers questions rapidly without offering any description of how the answer was derived? Check.

        2) Works in short, fast bursts of energy and then tells you proudly it only took them .009 seconds? Check

        3) Has an inability to accessorize his appearance? Check.

        4) Returns 82,200,000 results when asked about porn? Check and match!

    • The servers could be powered by 15 Megahamsters on treadmills (@ 1 watt/hamster). But that would require sufficient management to motivate the hamsters with the threat of off-shoring their jobs.

  • Heat (Score:5, Informative)

    by gspr ( 602968 ) on Sunday May 02, 2004 @12:22PM (#9034415)
    A Pentium 4 dissipates around 85 W of heat. I don't know what the Xeon does, but let's be kind and say 50 W (wild guess). Using the article's "low end" estimate, that brings us to 4.7 MW!
    I hope they have good ventilation...
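The comment doesn't show how it reaches 4.7 MW; one reading that reproduces the figure, and it is only an assumption here, is roughly 47,000 dual-CPU machines at the guessed 50 W per CPU:

# Reconstructing the comment's ~4.7 MW heat figure (assumption: the article's
# "low end" is ~47,000 dual-CPU machines, and each CPU dissipates ~50 W).
machines = 47_000
cpus_per_machine = 2
watts_per_cpu = 50        # the comment's "wild guess" for a Xeon

heat_mw = machines * cpus_per_machine * watts_per_cpu / 1e6
print(f"{heat_mw:.1f} MW")   # 4.7 MW -- and that's CPUs alone, before disks, RAM, or cooling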
  • SCO (Score:3, Insightful)

    by WindBourne ( 631190 ) on Sunday May 02, 2004 @12:23PM (#9034421) Journal
    Since it is known that Google has the largest installed base of Linux, and now they are about to IPO in the billions, I wonder why SCO has not gone after them. Apparently, it is not use of Linux that makes SCO pursue a company.

    The interesting thing is that if SCO really has MS backing and MS is pulling strings, then I would think that MS would want SCO to pursue Google to tie them up for a while.
  • hardcore (Score:5, Funny)

    by mooosenix ( 773281 ) on Sunday May 02, 2004 @12:24PM (#9034431)
    After many scientific and time consuming experiments, we have found the number of servers to be.........

    42.

  • by SporkLand ( 225979 ) on Sunday May 02, 2004 @12:29PM (#9034469)
    You can just open "Computer Architecture: A Quantitative Approach, 3rd Edition" by Hennessy and Patterson to page 855 and find out that, in summary:
    Google has 3 sites (two west coast, one east)
    Each site is connected with 1 OC48
    Each OC48 hooks up to 2 Foundry BigIron 8000s ...
    80 PCs per rack * 40 racks (at an example site)
    = 3,200 PCs.
    A Google site is not a homogeneous set of PCs; instead, there are different types of PCs being upgraded on different cycles based on the price/performance ratio.

    If you want more info, get the Patterson and Hennessy book that I mentioned. Not the other version they sell; this one rocks way harder. You get to learn fun things like Tomasulo's algorithm.

    If I am violating any copyrights, feel free to remove this post.
  • inside information (Score:5, Interesting)

    by sir_cello ( 634395 ) on Sunday May 02, 2004 @12:32PM (#9034479)

    Interesting People 2004/05 [interesting-people.org]:
    I know for a FACT they passed 100,000 last November. One thing the Louis calculation may have missed is Google's obsession with low cost. For example read the company's technical white paper on the Google file system. It was designed so that Google could purchase the cheapest disks possible, expecting them to have a high failure rate. What happens when you factor cost obsession into his equation?

    • by gammelby ( 457638 ) on Sunday May 02, 2004 @02:12PM (#9035040) Homepage
      In the talk mentioned in a previous posting, Mr. Hölzle also talked about disk failures: they have so many disks (obviously of low quality, according to you) and read so much data that they cannot rely on standard CRC-32 checks. They use their own checksumming in a higher layer to work around the fact that CRC-32 gives a false positive in one out of some large number of cases.

      Ulrik

  • by Anonymous Coward on Sunday May 02, 2004 @12:32PM (#9034485)
    All those machines, all that complexity and activity, all boiled down to one little box under a Google logo. The most useful input box on the Internet.

    Thanks Google!
  • by gregwbrooks ( 512319 ) * <gregb.west-third@net> on Sunday May 02, 2004 @12:33PM (#9034492)
    Google is all about two things from an operational standpoint:

    • Keep costs down; and
    • What happens inside the company, stays inside the company.
    Figuring out the number of servers they have is why we're noodling over the second point, but the first point is probably what has us all thrown off. Someone in a position to know said recently that he could state as an absolute fact that they have more than 100,000 servers -- and added that merely mentioning it probably violated multiple NDAs he had.
  • by duckpoopy ( 585203 ) on Sunday May 02, 2004 @12:43PM (#9034537) Journal
    They better have at least 10^100 machines, or they will be getting a call from my lawyers.
  • by XavierItzmann ( 687234 ) on Sunday May 02, 2004 @12:45PM (#9034547)
    Did anyone think of the electricity needed to power and cool 50,000 servers?

    The 1,100-node Apple cluster at Virginia Tech uses 3 megawatts, sufficient to power 1,500 Virginia homes:
    http://www.research.vt.edu/resmag/2004resmag/HowX.html

    Yes, it is true: every time you hit Google, you are polluting the Earth.
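Scaling the Virginia Tech number naively gives an upper bound only; the per-node figures below are assumptions, and Google's stripped-down commodity boxes draw far less per node than dual-G5 Xserve-class machines with their cooling overhead:

# Naive scaling of Virginia Tech's cluster power draw to 50,000 servers.
# Assumption: the 3 MW / 1,100-node figure from the parent includes cooling;
# Google's commodity nodes would sit far below this per-node number.

vt_power_mw = 3.0
vt_nodes = 1_100
per_node_kw = vt_power_mw * 1000 / vt_nodes      # ~2.7 kW per node, cooling included
print(f"{per_node_kw:.2f} kW per node")

google_servers = 50_000
upper_bound_mw = google_servers * per_node_kw / 1000
print(f"Naive upper bound: {upper_bound_mw:.0f} MW")   # ~136 MW -- clearly too high
# At a more plausible 200-300 W per commodity server, 50,000 boxes is 10-15 MW.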

  • Scary... DDOS? (Score:3, Interesting)

    by moosesocks ( 264553 ) on Sunday May 02, 2004 @01:03PM (#9034633) Homepage
    Isn't it scary that according to these figures, Google's datacenter should theoretically be able to DDOS the entire Internet?

    Someone mentioned that they have enough bandwidth/processing power to saturate a T1000 line. Scary...
    • No (Score:5, Informative)

      by metalhed77 ( 250273 ) <andrewvcNO@SPAMgmail.com> on Sunday May 02, 2004 @04:43PM (#9035900) Homepage
      It would not be a very distributed DDoS, and that would stop any attack quite quickly. Quite simply, Google's bandwidth providers (or the providers above them) would just unplug them. They may be global, but they probably have fewer than 40 datacenters, which is not distributed enough for an effective attack. If you could take over the same number of machines with the same amount of bandwidth, but distributed globally on various subnets (say, via a massive virus), *then* you'd have a DDoS machine. As is, Google's DDoS would be shut down quite quickly.
  • by quasi-normal ( 724039 ) on Sunday May 02, 2004 @01:24PM (#9034760)
    He displayed a little numerical dyslexia... it's 359 racks, not 539, for $100M, which makes the stats a little different: 31,592 machines, 63,184 CPUs, 63,184 GB of RAM, and 2,527.36 TB of disk space. I'm not sure what his logic is behind the teraflops calculation... it looks like he's taking 1 GHz == 1 GFLOPS, which would give about 126.4 TFLOPS. Aside from that error, the figures sound pretty realistic to me. But I wanna know how much bandwidth they use.
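For reference, the corrected figures fall out of the same per-rack assumptions as the summary applied to a $100M spend; the per-machine specs used below (2 GB RAM, 80 GB disk, ~2 GFLOPS per CPU) are inferred from the comment's totals, not from any Google source:

# Reproducing the corrected stats: $100M spend at $278k per 88-machine rack.
# Per-machine specs (2 GB RAM, 80 GB disk, 2 GFLOPS/CPU) are inferred from the
# comment's totals, not stated anywhere by Google.

spend = 100_000_000
racks = spend // 278_000              # 359 racks (not 539)
machines = racks * 88                 # 31,592
cpus = machines * 2                   # 63,184
ram_gb = machines * 2                 # 63,184 GB (1 GB per CPU)
disk_tb = machines * 80 / 1000        # 2,527.36 TB (80 GB per machine)
tflops = cpus * 2 / 1000              # ~126.4 TFLOPS at ~2 GFLOPS per CPU

print(racks, machines, cpus, ram_gb, disk_tb, tflops)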
  • Server pricing (Score:5, Informative)

    by JWSmythe ( 446288 ) <jwsmythe@@@jwsmythe...com> on Sunday May 02, 2004 @03:15PM (#9035423) Homepage Journal
    His pricing in the summary may be a bit off.

    Every article I've read about Google's servers says they use "commodity" parts, which means they buy pretty much the same stuff we buy. They also indicate that they use as much memory as possible and use hard drives as little as possible, if at all. From my interview with Google, they asked quite a few questions about RAID 0 and RAID 1 (and combinations of those), so I'd believe they stick in two drives to ensure data doesn't get lost due to power outages.

    We get good name-brand parts wholesale, which I'd expect is what they do too. So, assuming 1U Asus, Tyan, or SuperMicro machines stuffed full of memory, with hard drives big enough to hold the OS plus an image of whatever they store in memory (ramdrives?), they'd require at most 3 GB (OS) + 4 GB (ramdrive backup) of disk. I don't recall seeing dual CPUs, but we'll go with that assumption.

    The nice base machine we had settled on for quite a while was the Asus 1400r, which consisted of dual 1.4 GHz PIIIs, 2 GB of RAM, and 20 GB and 200 GB hard drives. Our cost was roughly $1,500. They'd lower the drive cost but increase the memory cost, so theirs would probably cost about $1,700, but I'm sure Google got better pricing given the quantity they were buying.

    The count of 88 machines per rack is a bit high. You get 80 U's per standard rack, but you can't stuff it full of machines unless you get very creative. I'd suspect they have 2 switches and a few power management units per rack. The APCs we use take 8 machines per unit and are 1U tall. There are other power management units that don't take up rack space, which they may be using, but only the folks at Google really know.

    Assuming the maximum density, and equipment that was available as "commodity" equipment at the time, they'd have 2 Cisco 2948's and 78 servers per rack.

    $1700 * 78 (servers)
    +
    $3000 * 2 (switches)
    +
    $1000 (power management)
    --------
    $139,600 per rack (78 servers)

    Lets not forget core networking equipment. That's worth a few bucks. :)

    Each set of 39 servers would probably be connected to their routers via GigE fiber (I couldn't imagine them using 100baseT for this). Right now we're guesstimating 1,700 racks. They have locations in 3 cities, so we'll assume they have at least 9 routers. They'd probably use Cisco 12000s, or something along that line. Checking eBay, you can get a nice Cisco 12008 for just $27,000, but that's the smaller one. I've toured a few places that had them and pointed at them, citing them to be just over $1,000,000.

    So....

    $250,000,000 (ttl expenses)
    - $ 9,000,000 (routers)
    ------
    $241,000,000
    / $ 139,600
    ------
    1726 racks
    * 78 (machines per rack)
    ------
    134,628 machines

    Google has a couple thousand employees, but we've found that our servers make *VERY* nice workstations too. :) Well, not the Asus 1400r (those are built into a 1U case), but other machines we've built as servers are very easy to build into midtowers instead. Those machines don't get gobs of memory, but do get extras like nice sound cards and CD/DVD players. The price would be about the same, as they'd probably still be attached to the same networking equipment. So 132,000 servers and 2,628 workstations and dev machines is probably fairly close to what they have.

    I believe this to be a fairer estimate than the story gave. They're quoting pricing for a nice, fast *CURRENT* machine, but Google has said before that they buy commodity machines. They do as we do: buy (relatively) cheap hardware, and lots of it. We didn't pattern ourselves after Google; we made this decision long before Google even existed.

    When *WE* decided to go this route, we looked at many options. The "provider" we had before we went out on our own, leasing space and bandwidth directly from Tier 1 providers, opted for the monolithic system.
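A sketch of the comment's arithmetic, using only the prices and counts it states; the division is rounded down to whole racks, which is where the ~134,628-machine total comes from:

# Reproducing the commenter's estimate: commodity 1U servers, 2 switches and
# power management per rack, minus a rough allowance for core routers.

server_cost = 1_700
servers_per_rack = 78
switch_cost = 3_000
power_mgmt_cost = 1_000

rack_cost = server_cost * servers_per_rack + 2 * switch_cost + power_mgmt_cost
print(rack_cost)                      # $139,600 per rack

total_spend = 250_000_000
router_spend = 9_000_000              # 9 routers at ~$1M each
racks = (total_spend - router_spend) // rack_cost
machines = racks * servers_per_rack
print(racks, machines)                # 1,726 racks, ~134,628 machines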
  • by melted ( 227442 ) on Sunday May 02, 2004 @03:31PM (#9035503) Homepage
    I think they include infrastructure and air cooling in their $250M figure. I think these things can actually cost MORE than the racks themselves, especially if the racks consist of commodity hardware, and considering the size of their data centers.
