Inside Facebook's Infrastructure

miller60 writes "Facebook served up 690 billion page views to its 540 million users in August, according to data from Google's DoubleClick. How does it manage that massive amount of traffic? Data Center Knowledge has put together a guide to the infrastructure powering Facebook, with details on the size and location of its data centers, its use of open source software, and its dispute with Greenpeace over energy sourcing for its newest server farm. There are also links to technical presentations by Facebook staff, including a 2009 technical presentation on memcached by CEO Mark Zuckerberg."
Comments Filter:
  • Freaking SEOs... (Score:3, Insightful)

    by netsharc ( 195805 ) on Thursday September 30, 2010 @09:13AM (#33745966)

    Facebook is... Facebook has... fucking SEO monkeys must be at work making sure the company is never referred to as "it", because that would ruin the google-ability of the article, and they'd rather have SEO ratings than text that doesn't read like it was written by a fucking 3rd grader.

    SEO experts... even worse than lawyers.

  • by njko ( 586450 ) <naguil.yahoo@com> on Thursday September 30, 2010 @11:27AM (#33747676) Journal
    The purpose of server farms built from commodity hardware is to avoid vendor lock-in: if you have a good business but are tied to a vendor, the vendor has a better business than you, because they can charge you whatever they want.
  • by mlts ( 1038732 ) * on Thursday September 30, 2010 @11:42AM (#33747892)

    That is a good point, but to use a car analogy, isn't it like strapping a ton of motorcycles together with duct tape and keeping people on staff to maintain them all so the contrivance can pull an 18-wheeler's load? Why not just buy an 18-wheeler, which is designed and built from the ground up for this exact task?

    Yes, you have to use the 18-wheeler's shipping crates (to continue the analogy), but even with the vendor lock-in, it might be a lot better than trying to cobble together a suboptimal solution that works, but takes a lot more man-hours, electricity, and hardware maintenance than something built at the factory for the task at hand.

    Plus, zSeries and pSeries machines happily run Linux LPARs. That is about as open as you can get. It isn't as if it would mean moving the backend to CICS.
