How Many Google Machines, Really? 476
BoneThugND writes "I found this article on TNL.NET. It takes information from the S-1 filing to reverse-engineer how many machines Google has (hint: a lot more than 10,000).
'According to calculations by the IEEE, in a paper about the Google cluster, a rack with 88 dual-CPU machines used to cost about $278,000. If you divide the $250 million figure from the S-1 filing by $278,000, you end up with a bit over 899 racks. Assuming that each rack holds 88 machines, you end up with 79,000 machines.'" An anonymous source claims
over 100,000.
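The summary's arithmetic can be checked in a few lines. This is a sketch using only the figures quoted above (the $250M and $278k numbers are the article's, not insider data):

```python
# Reverse-engineering Google's machine count from the S-1 figures
capex = 250_000_000          # hardware spend per the S-1 filing, in dollars
cost_per_rack = 278_000      # the IEEE paper's estimate for an 88-machine rack
machines_per_rack = 88

racks = capex / cost_per_rack                   # a bit over 899 racks
machines = int(racks) * machines_per_rack

print(f"{racks:.1f} racks -> about {machines:,} machines")  # ~79,112
```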
I have seen the light (Score:3, Informative)
They fit about 100 or so 1U's on each side of the rack; they're double-sided cabinets that look like refrigerators. They're separated in the center by no-name-brand switches, and they have caster wheels on the bottom. Google can, at the drop of a hat, roll its machines out of a datacenter onto a 16-wheeler, move, unload, and plug into a new datacenter in less than a day.
Re:88 machines per rack? hardly. (Score:2, Informative)
IBM has a blade center that can hold 84 2-way blades in a 42U cabinet.
Re:$278k ?? (Score:4, Informative)
Re:88 machines per rack? hardly. (Score:5, Informative)
If Google is innovating in this area, it could either be on price or in density.
Heat (Score:5, Informative)
I hope they have good ventilation...
Why reverse engineer... (Score:5, Informative)
Google has 3 sites (two west coast, one east)
Each site connected with 1 OC48
Each OC48 hooks up to 2 Foundry BigIron 8000
80 PCs per rack * 40 racks (at an example site)
= 3,200 PCs.
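The per-site figures above multiply out like this; the counts are the poster's recollection, not official numbers, and the three-site total is only an extrapolation assuming every site matched the example:

```python
sites = 3                 # two west coast, one east, per the post
racks_per_site = 40       # at the example site
pcs_per_rack = 80

pcs_per_site = pcs_per_rack * racks_per_site
print(pcs_per_site)           # 3200 PCs at the example site
print(pcs_per_site * sites)   # 9600 if all three sites were the same size
```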
A Google site is not a homogeneous set of PCs; instead, there are different types of PCs being upgraded on different cycles based on their price/performance ratio.
If you want more info, get the Patterson & Hennessy book I mentioned, not the other version they sell. This one rocks way harder. You get to learn fun things like Tomasulo's algorithm.
If I am violating any copyrights, feel free to remove this post.
Re:Can you imagine (Score:1, Informative)
First, funny moderations don't affect karma so making a joke like that isn't karma-whoring as the user is not gaining karma.
Second, you can turn off the meta-moderation messages by simply deselecting the check box that says "willing to moderate" in your user preferences - which is what I am assuming you want based on your somewhat cryptic sentence in all caps.
Third, decaf is your friend.
Re:What a waste (Score:5, Informative)
Mainframes are optimized for batch processing. Interactive queries do not take full advantage of their vaunted I/O capacity.
Moreover, while a mainframe may be a good way to host a single copy of a database that must remain internally consistent, that's not the problem Google is solving. It's trivial for them to run their search service off of thousands of replicated copies of the Internet index. Even the largest mainframe's storage I/O would be orders of magnitude smaller than the massively parallel I/O done by these thousands of PCs. Google has no reason to funnel all of the independent search queries into a single machine, so they shouldn't buy a system architecture designed to do that.
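The argument above is the classic shared-nothing pattern: each query fans out to many independent replicas of the index and the partial results get merged. Here's a toy sketch of that shape; the shard contents and function names are purely illustrative, not anything Google has published:

```python
# Illustrative only: fan one search query out to replicated index shards
# in parallel and merge the partial results -- no single-machine funnel.
from concurrent.futures import ThreadPoolExecutor

# Hypothetical shards, each holding a slice of an inverted index
# (term -> list of document IDs).
SHARDS = [
    {"python": [1, 4], "rack": [2]},
    {"python": [7], "server": [5, 6]},
    {"rack": [9], "python": [8]},
]

def search_shard(shard, term):
    """Look the term up in one shard's slice of the index."""
    return shard.get(term, [])

def search(term):
    # Every shard answers independently; adding shards adds I/O capacity.
    with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
        partials = pool.map(lambda s: search_shard(s, term), SHARDS)
    return sorted(doc for part in partials for doc in part)

print(search("python"))  # [1, 4, 7, 8]
```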
When the CIO was at SVLUG (Score:2, Informative)
Re:Heat (Score:5, Informative)
Ulrik
Re:At $699 per CPU (Score:3, Informative)
Re:Acquisition (Score:5, Informative)
That's a little over the top, big guy. I've worked at a 10,000-node corp doing desktop support. We lost ONE disk per week, perhaps... if that much. We often went several weeks with no disks lost.
Even if you factor in multiple drives per server, say TWO (because they are servers, not desktops),
and interpolate for 100,000 nodes, that's a max of 20 disks per week, on the high end.
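Scaling that observed rate linearly (an estimate from one shop's experience, not measured data for Google's fleet):

```python
nodes_observed = 10_000
failures_per_week = 1        # the worst weeks at the 10,000-node shop
drives_per_server = 2        # assume two drives per box, since they're servers

scale = 100_000 / nodes_observed             # 10x the observed fleet
max_failures = failures_per_week * scale * drives_per_server
print(max_failures)          # 20.0 disks per week, on the high end
```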
Re:$278k ?? (Score:2, Informative)
Re:inside information (Score:4, Informative)
Ulrik
Re:$278k ?? (Score:5, Informative)
Re:$278k ?? (Score:5, Informative)
Doesn't work like that, kid. A CPU on a high-end Sun fails, and the system will keep on running. You can swap the CPU out and replace it with a new one, the system will simply pick it up, assign threads to it, and keep on running. Had a couple of CPUs fail a little while ago... the first we users noticed of it was that the application slowed down slightly. Sysadmin just said yeah, I know, I'll replace 'em when the parts arrive this afternoon. Cool, we said. No data lost, no need to shut down or even restart our app. 'Course you gotta architect your app to deal with that - like don't have just one thread that does a crucial task, 'cos there's a chance that might be on the CPU that fails. But still, it's no big deal.
Re:$278k ?? (Score:5, Informative)
And hey, if you want to mix and match CPU types (uSparc 2 and 3, etc), speeds, etc, no problem either. So if you wanna upgrade your server's CPUs, there will be zero downtime, you just do it a board at a time (board = 2 or 4 CPUs).
Re:$278k ?? (Score:2, Informative)
google doesn't buy pre-built machines
Yes, they do. [rackable.com]
Server pricing (Score:5, Informative)
Every article I've read about Google's servers says they use "commodity" parts, which means they buy pretty much the same stuff we buy. They also indicate that they use as much memory as possible and use the hard drives as little as possible, if at all. In my interview with Google, they asked quite a few questions about RAID0 and RAID1 (and combinations of those), so I'd believe they stick in two drives to ensure data doesn't get lost due to power outages.
We get good name-brand parts wholesale, which I'd expect is what they do too. So, assuming 1U Asus, Tyan, or SuperMicro machines stuffed full of memory, with hard drives big enough to hold the OS plus an image of whatever they store in memory (RAM disks?), they'd require at most 3 GB (OS) + 4 GB (RAM-disk backup). I don't recall seeing dual CPUs, but we'll go with that assumption.
The nice base machine we had settled on for quite a while was the Asus 1400r, which consisted of dual 1.4 GHz PIIIs, 2 GB RAM, and 20 GB and 200 GB hard drives. Our cost was roughly $1,500. They'd lower the drive cost but increase the memory cost, so they'd probably cost about $1,700, but I'm sure Google got better pricing at the quantities they were buying.
The count of 88 machines per rack is a bit high. You get 80U per standard rack, but you can't stuff it full of machines unless you get very creative. I'd suspect they have 2 switches and a few power-management units per rack. The APCs we use take 8 machines per unit and are 1U tall. There are other power-management units that don't take up rack space, which they may be using, but only the folks at Google really know.
Assuming the maximum density, and equipment that was available as "commodity" equipment at the time, they'd have 2 Cisco 2948's and 78 servers per rack.
  $1,700 * 78 (servers)      = $132,600
+ $3,000 *  2 (switches)     =   $6,000
+ $1,000 (power management)  =   $1,000
--------------------------------------
  $139,600 per rack (78 servers)
Let's not forget core networking equipment. That's worth a few bucks.
Each set of 39 servers would probably be connected to the routers via GigE fiber (I couldn't imagine them using 100BaseT for this). Right now we're guesstimating 1,700 racks. They have locations in 3 cities, so we'll assume they have at least 9 routers. They'd probably use Cisco 12000s, or something along that line. Checking eBay, you can get a nice Cisco 12008 for just $27,000, but that's the smaller one. I've toured a few places that had them, and they cited them at just over $1,000,000 apiece.
So...

  $250,000,000 (total expenses)
- $  9,000,000 (routers)
--------------
  $241,000,000
/ $    139,600 (per rack)
--------------
   1,726 racks
*       78 (machines per rack)
--------------
  134,628 machines
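The whole model fits in a few lines; every figure here is the post's own guess, so the output is only as good as those inputs:

```python
# Checking the back-of-the-envelope cost model above
server_cost, servers_per_rack = 1_700, 78
switch_cost, switches_per_rack = 3_000, 2
power_mgmt = 1_000
routers = 9 * 1_000_000      # nine ~$1M Cisco 12000-class routers

rack_cost = (server_cost * servers_per_rack
             + switch_cost * switches_per_rack + power_mgmt)
print(rack_cost)             # 139600 per rack

budget = 250_000_000 - routers          # what's left for racks
racks = budget // rack_cost
print(racks, racks * servers_per_rack)  # 1726 racks, 134628 machines
```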
Google has a couple thousand employees, but we've found that our servers make *VERY* nice workstations too.
I believe this is a fairer estimate than the story gave. They're quoting pricing for a nice, fast *CURRENT* machine, but Google has said before that it buys commodity machines. They do as we do: buy cheap (relatively), and buy lots of them. We didn't pattern ourselves after Google; we made this decision long before Google even existed.
When *WE* decided to go this route, we looked at many options. The "provider" we had, before we went on our own leasing space and bandwidth directly from Tier 1 providers, opted for the monolithic sy
Re:Acquisition (Score:2, Informative)
Now, we have had some great luck too: we found a brand that almost never failed for 12 to 18 months at a time, so we set up a specific policy to use those drives as backup redundancy drives for every main drive (about 2,500 drives). To this day I have yet to see more than 1 failure every 3 weeks with those drives.
I also have a PC at home that has been abused daily and has never had a drive failure. It's been turned on every day since 1999, so it cycles completely from hot/cold and sleep/awake. Maybe I'm lucky, but I've abused that drive consistently (and back up weekly), so maybe I'm due.
Drive spin has become a huge factor in drive failure in a web server farm. You want the fastest spin rate and the fastest read times, but faster spin rates give you higher failure rates, so you really have to learn to blend caches, hardware and software, and the dreaded mixed-drive RAID.
best of luck to all
Onepoint
No (Score:5, Informative)
Re:$278k ?? (Score:0, Informative)
When a Sun server loses a CPU or DIMM it will crash. The benefit is that it will generally come back up with those parts blacklisted so you won't see another crash from the same parts. When the parts are replaced, you can bring the new ones back without taking an outage. If you happen to see that you are getting soft errors (e.g. persistent ECC corrections), you can take it offline preemptively without causing an outage.
Re:$278k ?? (Score:2, Informative)
If it's a soft error ("cosmic ray" or whatever), then it will log it and keep going. For CPUs, if you haven't bound any processes/threads to one, you should be able to take the processor offline yourself. Obviously a real failure is not exactly the same, but I thought I have had some fail without crashing. Unless, of course, you are trolling, and if so, you got me.
Re:Not to sound like your Mom [or Señor Ashcr (Score:3, Informative)
Of course, most of it is more for show than practicality. I mean, they have hand scanners on every single cage. Definitely a little bit excessive.
-JD-
Re:$278k ?? (Score:3, Informative)
Sure, 10 years ago... (Score:2, Informative)
Yes, 10 years ago this was an important thing to have, as were many other "big iron" features. And it still sounds very cool in a geeky kinda way.
But with redundant relatively cheap clusters available, these types of things aren't worth the $$$ they used to be.
Except at the extreme high end of the computing world, hardware is steadily progressing toward the commodity level.
Re:$278k ?? (Score:2, Informative)