Huge Traffic On Wikipedia's Non-Profit Budget
miller60 writes "'As a non-profit running one of the world's busiest web destinations, Wikipedia provides an unusual case study of a high-performance site. In an era when Google and Microsoft can spend $500 million on one of their global data center projects, Wikipedia's infrastructure runs on fewer than 300 servers housed in a single data center in Tampa, Fla.' Domas Mituzas of MySQL/Sun gave a presentation Monday at the Velocity conference that provided an inside look at the technology behind Wikipedia, which he calls an 'operations underdog.'"
Re:Impressive (Score:4, Interesting)
I was always impressed with how fast pages loaded; after seeing how small their operation is, I'm even more impressed now!
Go to any newspaper site, from the NYT to one in a smaller city (say, Springfield's State Journal-Register), and the difference in load times is HUGE. It probably has to do with all the ads served from third-party servers on the newspaper sites: what's the use of having a humongous server with giant pipes if your readers' pages have to wait for a Flash ad served from a 486 powered by gerbils?
If I link to the SJR from one of my journals it slows down! I mean, I can see it if it's a front page slashdotting a little paper like that, but come on, a user journal?
And Wikipedia isn't all their servers serve; if I'm not mistaken, Uncyclopedia shares the servers. Impressive, indeed.
More importantly (Score:5, Interesting)
Off-topic, I know, but...what about /.'s hardware? (Score:5, Interesting)
I.e. the promised follow-up to this story [slashdot.org] about moving to the new Chicago datacenter? You know, the one where Mr. Taco promised a follow-up story "in a few days" about the "ridiculously overpowered new hardware".
I was quite looking forward to that, but it never eventuated, unless I missed it. It's certainly not filed under Topics->Slashdot.
Re:What is the role of Open Source (Score:5, Interesting)
Re:Note to self (Score:5, Interesting)
Or do a hurricane dance, and let nature do its thing...
Having all their servers in Tampa, FL (of all places, given the hurricanes, frequent lightning, flooding, etc. there) doesn't seem too smart. I would have thought, given Wikipedia's popularity, that their servers would be geographically spread out across multiple locations.
Though doing that adds a level of complexity and cost that even many for-profit ventures, such as Slashdot, likely can't afford or justify; Slashdot's servers are in one place, Chicago. To digress a bit, I notice this site's accessibility has been spotty since the server move (i.e., more page-not-found errors and timeouts lately).
Ron
Re:Impressive (Score:4, Interesting)
Yeah, a single datacenter seems really risky, especially considering some of the shenanigans [google.com] that have been going on.
Re:I was just thinking that (Score:3, Interesting)
Simplicity (Score:5, Interesting)
Although much of the MediaWiki software is a hideous twitching blob of PHP Hell, the base functionality is fairly simple, runs perpetually, and scales massively as long as you don't mess with it.
What spoils a lot of projects like this is the constant need for customization. MediaWiki essentially can't be customized (except via plugins, obviously, which you install at your own peril), and that is a big reason why it scales so massively.
As for Wikipedia itself, I suspect its traffic is massively weighted in favor of reads. That simplifies things a lot.
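A read-heavy workload is exactly what makes aggressive caching pay off: almost every request can be answered without touching the database. Here's a minimal cache-aside sketch of that idea in Python (the names and the in-process dict are hypothetical stand-ins, not MediaWiki's actual code):

    import time

    CACHE = {}          # hypothetical stand-in for a cache like memcached
    CACHE_TTL = 300     # seconds before a cached render goes stale

    def fetch_article_from_db(title):
        # Placeholder for the expensive part: the database queries plus
        # the wikitext render that would happen on a cache miss.
        return "<html>rendered page for %s</html>" % title

    def get_article(title):
        entry = CACHE.get(title)
        if entry is not None:
            html, stored_at = entry
            if time.time() - stored_at < CACHE_TTL:
                return html      # the overwhelmingly common case
        html = fetch_article_from_db(title)
        CACHE[title] = (html, time.time())
        return html

    def save_article(title, new_html):
        # Writes are rare: persist the edit, then invalidate the cached
        # copy so the next read repopulates it with the fresh version.
        # (database write elided)
        CACHE.pop(title, None)

The point is the asymmetry: a mostly-reads workload means the database only ever sees the trickle of misses and edits.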
Re:The power of low standards (Score:5, Interesting)
Exactly. A bank requires "six nines" of availability (i.e., up 99.9999% of the time) and probably wants even better than that. Six nines works out to about 30 seconds of downtime per year.
It seems like Wikipedia is getting things right 99% of the time, or maybe even 99.9% of the time ("three nines"). That's a pretty low standard relative to how most companies do business.
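The "nines" arithmetic is easy to check in a few lines of Python (assuming a 365-day year):

    # Downtime per year permitted by N nines of availability.
    SECONDS_PER_YEAR = 365 * 24 * 60 * 60

    for nines in range(2, 7):
        allowed_fraction = 10.0 ** -nines   # e.g. six nines -> 0.000001
        downtime_seconds = SECONDS_PER_YEAR * allowed_fraction
        print("%d nines: %10.1f seconds/year of downtime" % (nines, downtime_seconds))

Six nines comes out to about 31.5 seconds a year; three nines allows nearly nine hours, which is probably fine for an encyclopedia and would be disastrous for a bank.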
Re:What amazes me... (Score:5, Interesting)
Slashdot is great at taking down sites on crappy shared hosting, but anything with a decently configured dedicated server will likely survive just fine.
Wikipedia is probably getting hit with hundreds of times Slashdot's traffic at any given moment.
Re:Impressive (Score:3, Interesting)
That would make a lot more sense.
Given the sheer number of people who access it, it seems like the perfect use for GSLB [networkcomputing.com].
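In spirit, DNS-based GSLB just answers the same hostname with the address of whichever replica is closest and healthy. A toy sketch; the regions and TEST-NET addresses below are invented for illustration, and real GSLB gear also factors in health checks and load:

    # Toy global server load balancing: steer each client to a
    # datacenter by region. Regions and addresses are made up.
    DATACENTERS = {
        "us-east": "192.0.2.10",
        "eu-west": "192.0.2.20",
        "asia":    "192.0.2.30",
    }
    DEFAULT_REGION = "us-east"

    def resolve(hostname, client_region):
        # One hostname, many answers: each client gets the nearest
        # replica without ever knowing the others exist.
        return DATACENTERS.get(client_region, DATACENTERS[DEFAULT_REGION])

    print(resolve("www.example.org", "eu-west"))  # -> 192.0.2.20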
Wikipedia = much more traffic than slashdot (Score:5, Interesting)
Slashdot does ... what? 40 Mbit of traffic at peak? Wikipedia is roughly 100 times larger [nedworks.org]. (And WP has three datacenters, not one.)
Slashdot traffic hasn't created noticeable blips on Wikipedia's radar for years.
OTOH, if Wikipedia linked Slashdot on every page, Slashdot would go down, due to nothing else but bandwidth exhaustion.
Re:I was just thinking that (Score:5, Interesting)
I don't actually know anything about the total computing power Google employs, but I do know that they will purchase on the order of 1,000-10,000 processors merely to evaluate them prior to making a real purchase.
Re:Confused by the title (Score:3, Interesting)
Good point. Perfect example: the Bill and Melinda Gates Foundation has a budget of billions of dollars, easily exceeding the budget of many private corporations.
Re:The power of low standards (Score:2, Interesting)
Right; banks traditionally relied on techniques such as planned downtime to allow for maintenance. "Banker's hours" left a large window every day when little to no data was changing in the system and it could be 'balanced'.
Re:The power of low standards (Score:3, Interesting)
Re:I was just thinking that (Score:4, Interesting)
You know what I thought was interesting? This story [cnet.com] (which was linked to from this /. story titled A Look At the Workings of Google's Data Centers [slashdot.org]) contained the following snippets.
and
But this was immediately followed by:
For some reason I'd always believed they used pretty much standard components in everything.
Re:I was just thinking that (Score:2, Interesting)
Although the idea that Google may in fact be serving all our searches with just one server seems kind of appealing, let's not kid ourselves: they have many large data centers. They use relatively cheap, commonplace equipment, but in every data center they have guys with shopping carts (really) swapping out defective servers as they walk down the aisles. (Their infrastructure and file system are really interesting, actually.)
But don't forget that Google doesn't just provide search. They also provide storage-intensive services such as email (more than 6 GB of storage space per account now, I think) and video (YouTube). One of the main reasons for having many data centers is to push content (email, YouTube videos, etc.) as close as possible to the end user before the user asks for it, to minimize latency. If a user in NY wants to watch a video, it goes much faster to send it from a data center in NY than from one in CA. Serving video content, or generally large amounts of data, is a very capital-intensive business that requires a lot of network and server infrastructure.
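Rough latency math backs that up: a single classic TCP connection's throughput is capped at roughly window size divided by round-trip time, so distance directly limits speed. A back-of-the-envelope sketch with illustrative numbers, not measurements:

    # Throughput ceiling of one classic TCP connection: window / RTT.
    # The RTTs and window size below are assumptions for illustration.
    def max_tcp_throughput_mbps(window_bytes, rtt_ms):
        return window_bytes * 8 / (rtt_ms / 1000.0) / 1e6

    WINDOW = 64 * 1024  # bytes; a common default window size

    print("NY user, NY datacenter (~10 ms RTT): %.1f Mbit/s"
          % max_tcp_throughput_mbps(WINDOW, 10))   # ~52 Mbit/s
    print("NY user, CA datacenter (~80 ms RTT): %.1f Mbit/s"
          % max_tcp_throughput_mbps(WINDOW, 80))   # ~6.5 Mbit/s

Moving the bytes a continent closer raises the per-connection ceiling by nearly an order of magnitude before anyone tunes anything.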