Huge Traffic On Wikipedia's Non-Profit Budget
miller60 writes "'As a non-profit running one of the world's busiest web destinations, Wikipedia provides an unusual case study of a high-performance site. In an era when Google and Microsoft can spend $500 million on one of their global data center projects, Wikipedia's infrastructure runs on fewer than 300 servers housed in a single data center in Tampa, Fla.' Domas Mituzas of MySQL/Sun gave a presentation Monday at the Velocity conference that provided an inside look at the technology behind Wikipedia, which he calls an 'operations underdog.'"
Impressive (Score:5, Insightful)
Given that their topic sites are generally in the top three for any search engine query, the volume of traffic they're dealing with (and the budget that they have!) is very impressive. I always thought that they had much beefier infrastructure than the article says.
Re:Impressive (Score:5, Funny)
Wikipedia = much more traffic than slashdot (Score:5, Interesting)
Slashdot does ... what? 40 Mbit of traffic at peak? Wikipedia is roughly 100 times larger [nedworks.org]. (And WP has three datacenters, not one.)
Slashdot traffic hasn't created noticeable blips on Wikipedia's radar for years.
OTOH, if Wikipedia linked Slashdot on every page, Slashdot would go down, if due to nothing else but bandwidth exhaustion.
Re:Wikipedia = much more traffic than slashdot (Score:5, Funny)
OTOH, if Wikipedia linked Slashdot on every page, Slashdot would go down, if due to nothing else but bandwidth exhaustion.
Re:Wikipedia = much more traffic than slashdot (Score:5, Funny)
Re:Wikipedia = much more traffic than slashdot (Score:4, Funny)
"My internet is running very slowly tonight. Why is that?"
Well sir, it looks like you've been downloading from the other side of the continent. I'd say that your packets are just very tired by the time they reach you...
Re:Impressive (Score:4, Interesting)
I was always impressed with how fast pages loaded, after seeing how small their operation is I'm even more impressed now!
Go to any newspaper from the NYT to any one in a smaller city (say, Springfield's State Journal-Register) and the difference in load times is HUGE. It probably has to do with all the ads served from third-party servers in the newspapers; what's the use of having a humongous server with giant pipes if your readers' pages have to wait for a Flash ad served from a 486 powered by gerbils?
If I link to the SJR from one of my journals it slows down! I mean, I can see it if it's a front-page slashdotting of a little paper like that, but come on, a user journal?
And Wikipedia isn't all their servers serve; if I'm not mistaken, Uncyclopedia shares servers. Impressive, indeed.
Re:Impressive (Score:5, Informative)
No, actually - the Wikimedia servers serve all Wikimedia projects (all the Wikipedias, Wikimedia Commons, all the other projects), but Uncyclopedia is part of Wikia, which is a private company owned by Jimmy Wales to do wikis and isn't actually linked to the Wikimedia Foundation in any way.
Re:Impressive (Score:5, Informative)
Re:Impressive (Score:4, Interesting)
Yeah, a single data center seems really risky, especially considering some of the shenanigans [google.com] that have been going on.
Re:Impressive (Score:5, Informative)
Re: (Score:3, Interesting)
That would make a lot more sense.
Given the sheer number of people who access it, it seems like the perfect use for GSLB [networkcomputing.com].
Re:Impressive (Score:5, Informative)
Single database, though. All the databases for all the projects are in Tampa - one master for English Wikipedia and two for all the other 700+ Wikimedia projects.
(They tried running the databases for Asian languages from the Yahoo!-sponsored datacentre in Seoul for a while, but it didn't actually work much faster than it did with everything in Tampa.)
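To make that topology concrete, here is a minimal, purely illustrative Python sketch of routing each wiki's writes to the master that owns its data; the hostnames and the sharding rule are invented for the example and are not Wikimedia's actual configuration.

    import zlib

    # Hypothetical routing of writes to the right database master, loosely
    # mirroring "one master for English Wikipedia, two for everything else".
    MASTERS = {
        "enwiki": "db-master-enwiki.tampa.example",  # dedicated enwiki master
        "shared-a": "db-master-a.tampa.example",     # shared master #1
        "shared-b": "db-master-b.tampa.example",     # shared master #2
    }

    def master_for(wiki_id):
        """Pick the master that owns a given wiki's data."""
        if wiki_id == "enwiki":
            return MASTERS["enwiki"]
        # Spread the remaining 700+ projects across the two shared masters,
        # using a deterministic hash so a wiki always maps to the same master.
        shard = "shared-a" if zlib.crc32(wiki_id.encode()) % 2 == 0 else "shared-b"
        return MASTERS[shard]

    print(master_for("enwiki"))  # dedicated master
    print(master_for("frwiki"))  # one of the two shared masters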
Re: (Score:3, Insightful)
As somebody who has been serving the Internet for a good length of time, I remember when busy web servers serving a 10 Mb stream were "ultra-high capacity" with a Pentium II 350 MHz chip and 256 MB of RAM.
The reality is that today, if you pay any attention at all to performance and a reasonable architecture, modern commodity hardware has just utterly incredible delivery capacity. A cheap, 1U 4-core x86 with 8 GB of RAM and a couple of SCSI 10k drives can easily saturate a 1 Gb stream of static pages, or even more.
I've always wondered... (Score:4, Insightful)
It would be neat to have a deeper look at their budget to see how I can save money and boost performance at work. It's always nice having the newest/fastest systems out there, but it's rarely the reality.
Re:I've always wondered... (Score:5, Funny)
"It would be neat to have a deeper look at their budget to see how I can save money and boost performance at work."
Since they are using LAMP, obviously they could save money by following Microsoft's "Get The Facts" advice!
Re:I've always wondered... (Score:5, Informative)
It's easy... (Score:2)
If Wikipedia is anything to go by, you just don't include a decent search engine.
Re:It's easy... (Score:4, Insightful)
Why? If you want search, go to Google. If you want an encyclopedia, go to Wikipedia. It's pretty simple, really.
The power of low standards (Score:5, Insightful)
Our organization's databases (we're also a non-profit) get several thousand writes per second. Losing 'a few seconds' would mean potentially hundreds of users' record changes were lost. If that happened here, it would be a huge deal. If it happened regularly, it would destroy the business.
Re:The power of low standards (Score:5, Insightful)
Okay. So pay attention to the sentence before the one you quoted which read, "I'm not suggesting you should follow how we do it."
Re:The power of low standards (Score:5, Insightful)
Don't be too harsh -- the standards are dependent on the application. Your application, by the nature of the information and its purposes, requires a different standard of reliability than Wikipedia does. You're certainly entitled to be proud of yourself for maintaining that standard.
But don't let that turn into being derogatory about the Wikipedia operation. Wikipedia has identified the correct standard for their application, and by doing so they have successfully avoided the costs and hassle of over-engineering. To each his own...
Re:The power of low standards (Score:5, Interesting)
Exactly. A bank requires "six nines" of performance (i.e., right 99.9999% of the time) and probably wants even better than that. Six nines works out to about 30 seconds of downtime per year.
It seems like Wikipedia is getting things right 99% of the time, or maybe even 99.9% of the time ("three nines"). That's a pretty low standard relative to how most companies do business.
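For anyone who wants to check the arithmetic behind those figures, here's a tiny Python snippet (illustration only) that converts an availability target into allowed downtime per year:

    # Convert "N nines" of availability into allowed downtime per year.
    SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # roughly 31.5 million seconds

    for nines in (2, 3, 6):
        availability = 1 - 10 ** -nines          # e.g. 6 nines -> 0.999999
        downtime = SECONDS_PER_YEAR * (1 - availability)
        print(f"{nines} nines: about {downtime:,.0f} seconds of downtime per year")

    # 2 nines allow roughly 315,000 s (~3.6 days), 3 nines roughly 31,500 s
    # (~8.8 hours), and 6 nines roughly 32 s, i.e. the "about 30 seconds" above.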
Re:The power of low standards (Score:5, Informative)
A bank requires "six nines" of performance (i.e., right 99.9999% of the time) and probably wants even better than that.
Re: (Score:2)
The nines can refer to both.
I agree that banks can't withstand data loss, but they can withstand data errors. If there's a 30-second period per year when data doesn't properly move, and that requires manual cleanup, that's acceptable.
Re:The power of low standards (Score:4, Insightful)
"Six nines" is meaningless. Unrealistic.
It is a promise that you cannot be hit by a single accident, fuckup, pissed-off-employee or act of god.
Re: (Score:3, Insightful)
You can achieve 100% service availability by clustering
Is that where when I run "DROP TABLE reallyimportanttable;" it drops it on all the servers at once?
Re: (Score:3, Insightful)
Re:The power of low standards (Score:5, Funny)
Re: (Score:3, Funny)
Screw that, it needs to be a prime number.
or at least irrational.
Re: (Score:2, Interesting)
Right, banks actually traditionally used such techniques as planned downtime to allow for maintenance. The "banker's hours" allowed for a large period of time, daily, where little-to-no 'data' was changing in the system and the system could be 'balanced'.
Re: (Score:3, Insightful)
That's amazing considering I get an error page on Bank of America around 5% of the time if I move too quickly through the site.
Re: (Score:2)
Losing 'a few seconds' would mean potentially hundreds of users' record changes were lost. If that happened here, it would be a huge deal.
If you don't deal with financial data, it's likely that even your business would survive such an event. Sure, if it happened all the time users would flee, but I haven't seen such problems at Wikipedia. He wasn't talking about doing it regularly, just that when disaster does strike, no pointy-haired guy appears to assign blame.
Re: (Score:2, Informative)
Changes are never just lost; when an error does happen and the action cannot be completed, it is rejected and the user is notified so they can try what they were doing again. You have vastly overstated the severity of such issues.
Re: (Score:3, Interesting)
I was just thinking that (Score:3, Funny)
Re: (Score:2)
Ever pay attention to the render times, though?
Their infrastructure is scary-massive, from almost every report [datacenterknowledge.com]
Re:I was just thinking that (Score:5, Interesting)
I don't actually know anything about the total computing power Google employs, but I do know that they will purchase on the order of 1,000-10,000 processors merely to evaluate them prior to making a real purchase.
Re:I was just thinking that (Score:4, Interesting)
You know what I thought was interesting? This story [cnet.com] (which was linked to from this /. story titled A Look At the Workings of Google's Data Centers [slashdot.org]) contained the following snippets.
and
But this was immediately followed by:
For some reason I'd always believed they used pretty much standard components in everything.
Re: (Score:3, Interesting)
But why would they think it was a bad thing to expose? The whole "Look what we can do with so little" angle seems appealing; efficiency is something to boast about nowadays.
On one hand, you're right, efficiency is admirable. But on the other hand, if Google has insane amounts of processing power, it would likely mean much higher barriers to entry for its competitors. The threat of Google's power in processing such data could deter others from even attempting to compete with Google. After all, when Google started it was only funded with a few hundred thousand dollars.
Re:I was just thinking that (Score:5, Insightful)
Maybe they do well because they are amazingly CPU-efficient on a per-query basis. Maybe it's the opposite; they may be masters at lavishing CPU on every query, but know how to do that very cheaply. Most likely, it's a clever mix of the two.
Regardless, Google's engineering-fu and operations-fu are mighty, and a major competitive advantage. Releasing detailed data doesn't boost their reputation, as everybody already knows they are great. But it does give potential competitors an idea of what works well, making it easier for them to catch up with Google. As a rule, expect that any details you see from inside Google are old, boring, or vague. As Intel's Andy Grove said, "Only the paranoid survive."
Re: (Score:2)
I'd have thought they'd use a caching solution just like Wikipedia. After all, just as Wikipedia has some very popular pages and some less so, Google has many popular searches and many less so. Wouldn't they cache these? After all, if you're dealing with millions of searches for 'george carlin' you wouldn't want to query your entire DB every time, would you?
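That intuition is easy to sketch. The toy Python below (an illustration, not how Google or Wikipedia actually implement it) caches the result of a hot query for a few minutes so repeated searches never touch the backend:

    import time

    CACHE = {}           # query -> (expiry timestamp, result)
    TTL_SECONDS = 300    # reuse cached results for 5 minutes

    def expensive_search(query):
        # Stand-in for the real work of hitting indexes and databases.
        return f"results for {query!r}"

    def cached_search(query):
        now = time.time()
        hit = CACHE.get(query)
        if hit and hit[0] > now:              # fresh entry: skip the backend entirely
            return hit[1]
        result = expensive_search(query)      # miss (or stale): do the work once
        CACHE[query] = (now + TTL_SECONDS, result)
        return result

    cached_search("george carlin")  # miss: hits the backend
    cached_search("george carlin")  # hit: served from memory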
Easy to Increase the budget or add servers (Score:5, Funny)
How hard can it be to increase the budget or add more servers?
Just go to the Wikipedia page with those numbers and change them. You don't even need to have an account.
Re:Easy to Increase the budget or add servers (Score:5, Funny)
Re: (Score:2)
Maybe... (Score:3, Funny)
Note to self (Score:5, Funny)
If you ever find yourself in a flamewar on Wikipedia you cannot win, bomb Tampa, Florida out of existence.
Re:Note to self (Score:5, Funny)
Re: (Score:3, Funny)
-WOPR
Re:Note to self (Score:5, Interesting)
Or do a hurricane dance, and let nature do its thing...
Having all their servers in Tampa, FL (of all places, given the hurricanes, frequent lightning, flooding, etc. there) doesn't seem too smart - I would have thought, given Wikipedia's popularity, their servers would be geographically spread out in multiple locations.
Though doing that adds a level of complexity and cost that even many for-profit ventures, such as Slashdot, likely can't afford / justify; Slashdot's servers are in one place - Chicago ... to digress a bit, I notice this site's accessibility (i.e. more page-not-found errors / timeouts lately) has been spotty since the server move.
Ron
Re:Note to self (Score:5, Informative)
They're not all in Tampa; they have a bunch in the Netherlands and a few more in South Korea.
Re: (Score:3, Funny)
China hates North Korea?
Re: (Score:2)
Tampa hasn't been hit by many hurricanes. They don't have issues with flooding that I know about, and lightning is lightning. It can happen anywhere; just do your best to protect your systems from it.
If you are a few miles inland in Florida, hurricanes are not that big of an issue. If you have a good backup generator then it isn't that big of a problem.
Oh, did I mention I was born, live, and work in Florida? My office was hit by Frances, Jeanne, and Wilma. Total damage to the office... Nothing. Total damage to my...
Re: (Score:2)
Tampa is pretty safe from all that. I have grandparents that live in St. Petersburg (right next to Tampa) and they have never had any damage or been in danger from the weather. If Tampa had major flooding, then pretty much the whole state of Florida will be submerged too. At that point Wikipedia is low on the list of things to worry about.
Re: (Score:2)
FutureQuest is a highly rated web host with its data center in Orlando, FL. It has never gone down, even in hurricanes. Very occasionally the network connections or upstreams fritz out, but not due to storms (usually it's BGP, etc.).
If you recall there was some heroic blogging out of New Orleans after Katrina. Some guys at an ISP in a tall building downtown kept themselves wired, and described hard core telecom types patrolling the streets. Surreal.
More importantly (Score:5, Interesting)
Simplicity (Score:5, Interesting)
Although much of the MediaWiki software is a hideous twitching blob of PHP Hell, the base functionality is fairly simple and runs perpetually and scales massively as long as you don't mess with it.
What spoils a lot of projects like this is the constant need for customization. MediaWiki essentially can't be customized (except for plugins, obviously, which you install at your own peril), and that is a big reason why it scales so massively.
As for Wikipedia itself, I suspect it is massively weighted in favor of reads. That simplifies circumstances a lot.
Sure they do it without ads... (Score:4, Informative)
Sure, they do without ad income. But they also do it without having to pay salaries, or colocation fees, or bandwidth costs... (I know they pay some of those, but they also get a metric buttload of contributions in kind.)
When your costs are lower, and your standard of service (and content) malleable, it is easy to live on a smaller income.
Re: (Score:2)
But they also do it without having to pay salaries, colocation fees, or bandwidth costs...
Well, as far as salaries go, yeah, they don't have to pay for a full team of developers and administrators for the business, but they do need to pay people to go and check on the servers, replace faulty hardware, etc. Also, as far as colocation costs go, I'd say that running your own data center (i.e. providing your own electricity, cooling, backup power supplies, etc.) can't be cheap either.
That's easier than it sounds (Score:3, Funny)
Out like a light (Score:2)
I lost power for about a week when that happened and I only live about 15 miles from Tampa, right over the Courtney Campbell Causeway actually.
Re: (Score:2, Informative)
We've never lost external power while we've been at Tampa, but if we did, there are diesel generators. Not that it would be a big deal if we lost power for a day or two. There's no serious problem as long as there's no physical damage to the servers, which we're assured is essentially impossible even with a direct hurricane strike, since the building is well above sea-level and there are no external windows.
Re: (Score:2)
Off-topic, I know, but...what about /.'s hardware? (Score:5, Interesting)
I.e. the promised follow-up to this story [slashdot.org] about moving to the new Chicago datacenter? You know, the one where Mr. Taco promised a follow-up story "in a few days" about the "ridiculously overpowered new hardware".
I was quite looking forward to that, but it never eventuated, unless I missed it. It's certainly not filed under Topics->Slashdot.
Works great because it's not "Web 2.0" (Score:5, Insightful)
Most of Wikipedia is a collection of static pages. Most users of Wikipedia are just reading the latest version of an article, to which they were taken by a non-Wikipedia search engine. So all Wikipedia has to do for them is serve a static page. No database work or page generation is required.
Older revisions of pages come from the database, as do the versions one sees during editing and previewing, the history information, and such. Those operations involve the MySQL databases. There are only about 10-20 updates per second taking place in the editing end of the system. When a page is updated, static copies are propagated out to the static page servers after a few tens of seconds.
Article editing is a check-out/check in system. When you start editing a page, you get a version token, and when you update the page, the token has to match the latest revision or you get an edit conflict. It's all standard form requests; there's no need for frantic XMLHttpRequest processing while you're working on a page.
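A rough Python sketch of that check-out/check-in idea (an illustration of the concept, not MediaWiki's actual implementation): the save is accepted only if the revision token you started from is still the latest one.

    class EditConflict(Exception):
        pass

    class Article:
        def __init__(self, text):
            self.text = text
            self.revision = 1

        def check_out(self):
            """Start editing: return the current revision token and text."""
            return self.revision, self.text

        def check_in(self, base_revision, new_text):
            """Save only if nobody else has saved since we checked out."""
            if base_revision != self.revision:
                raise EditConflict("a newer revision was saved first")
            self.text = new_text
            self.revision += 1
            return self.revision

    page = Article("Tampa is a city in Florida.")
    rev, text = page.check_out()                 # editor A starts editing
    page.check_in(rev, text + " It is sunny.")   # A saves cleanly
    try:
        page.check_in(rev, "a stale edit")       # editor B reused the old token
    except EditConflict:
        print("edit conflict: merge your changes and try again")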
Because there are no ads, there's no overhead associated with inserting variable ad info into the pages. No need for ad rotators, ad trackers, "beacons" or similar overhead.
Re: (Score:2)
Oh really? Because O'Reilly seems to think it is [oreillynet.com], and I thought he was the main pusher of this terminology. Is the term Web 2.0 actually meaningful?
Re: (Score:2, Informative)
If you haven't noticed, "Web 2.0" is a long-established buzzword [wikipedia.org] - which means it carries little meaning, but it looks good in advertising. Just like "information superhighway", "enterprise feature" or "user friendly".
so what's "Web 2.0"? (Score:2)
I take it that "Works great because it's not "Web 2.0"" means it's fast and dynamic, whereas Web 2.0 generally means slow and dynamic.
The technology behind it is irrelevant; if content is provided by users then it's Web 2.0 (as I understand the term), so Wikipedia definitely is Web 2.0; it's just that they have some fancy caching mechanism to get the best of both worlds. If only more systems were built in a pragmatic way instead of worrying about what they're "supposed" to be.
Re: (Score:2)
I take it that "Works great because it's not "Web 2.0"" means that it's fast and dynamic, whereas Web 2.0 generally means slow and dynamic.
Web 2.0 is a shorthand version of saying "dynamic pages served using Asynchronous JavaScript and XML (AJAX)". Now, if you reread the parent, you'll see that he says:
Most of Wikipedia is a collection of static [emphasis mine] pages. Most users of Wikipedia are just reading the latest version of an article... So all Wikipedia has to do for them is serve a static page.
In other words, the parent is saying that Wikipedia is effective because it avoids any sort of dynamism for the majority of use cases. Heck, even article editing isn't dynamic on Wikipedia. When you click the edit link, you're taken to a separate page which has a prepopulated form with the wikitext of the article. The only bit of dynamic code...
Nonsense. Wikipedia is THE web 2.0 (Score:5, Insightful)
Web 2.0 is not just about flashy Ajax or what not, it's about user generated dynamic content. WP's "everything is a wiki" architecture might /look/ a bit archaic compared to fancy schmancy dynamic rotating animated gradient-filled forums, but it's much more powerful.
Moreover, WP is not a collection of static pages; if you're logged in, at least, every page is dynamically generated, and every page's history is updated within a few seconds.
Re: (Score:2)
Moreover, WP is not a collection of static pages; if you're logged in, at least, every page is dynamically generated, and every page's history is updated within a few seconds.
That's not how it works. If you're just browsing Wikipedia, you're just looking at a collection of static pages that were generated earlier and cached. Only when you actually edit the page and save it is the page updated.
If Wikipedia had to freshly create every page for every user, even computational power on the order possessed by Google wouldn't be up to the task.
Confused by the title (Score:5, Insightful)
What does "Non-Profit Budget" mean, anyway? There are non-profits bigger than the company I work for. Non-profit isn't the same as poorly financed.
Re: (Score:3, Interesting)
Good point. Perfect example: the Bill and Melinda Gates Foundation has a budget of billions of dollars, easily exceeding the budget of many private corporations.
Link to wikipedia? (Score:5, Funny)
The summary was wrong to include a link to the Wikipedia homepage without a Wikipedia link about Wikipedia [wikipedia.org] in case you don't know what Wikipedia is. I myself had to Google Wikipedia to find out what Wikipedia was so I am providing the Wikipedia link about Wikipedia in case others were likewise in the dark regarding Wikipedia.
-l
P.s., Wikipedia.
Re:Link to wikipedia? (Score:5, Funny)
Wait, what's this Google thing you're talking about?
Re:Link to wikipedia? (Score:5, Funny)
Nevermind, found it:
http://www.google.com/search?q=google [google.com]
Re: (Score:2, Funny)
Re: (Score:2, Funny)
Distributed computing? (Score:2)
Servers and locations (Score:2, Informative)
According to http://meta.wikimedia.org/wiki/Wikimedia_servers [wikimedia.org] Wikimedia (and by extension, Wikipedia):
"About 300 machines in Florida, 26 in Amsterdam, 23 in Yahoo!'s Korean hosting facility."
also: http://meta.wikimedia.org/wiki/Wikimedia_partners_and_hosts [wikimedia.org]
Obviously if you're not in Silicon Valley (Score:2)
Obviously you can pay much less outside Silicon Valley. If you want investment capital & lots of customers you have to be physically in Silicon Valley and pay the millions of dollars. Even Wikipedia had to move its office to San Francisco & the data center is going to follow if they can get enough donations.
What about the Internet Archive (Score:5, Informative)
Wikipedia's pretty impressive, but how about the Internet Archive [archive.org]? Also a non-profit that doesn't run ads, and not only do they, like Google and Yahoo, "download the Internet" on a regular basis, but the Archive makes backups! Plus, they have huge amounts of streaming audio and video (pd or creative-commons). The first time I ever heard the word "Petabyte" being discussed in practical, real world terms (as in, "we're taking delivery next month") was in connection with the Internet Archive. Several years ago. And it was being used in the plural! :)
They may not have as much incoming traffic as Wikipedia, but the sheer volume of data they manage is truly staggering. (Heck, they have multiple copies of Wikipedia!) When I do download something from there, it's typically in the 80-150 MB range, and 1 or 2 GB in a pop isn't unusual, and I know I'm not the only one downloading, so their bandwidth bills must still be pretty impressive.
The fact that these two sites manage to survive and thrive the way they do never ceases to amaze me.
Where are the PHP/MySQL doom criers? (Score:3, Insightful)
I notice they are conspicuously absent in the comments. They tend to jump up and down in any other post about PHP and MySQL. This is such a great example of the scalability and performance of that stack WHEN USED CORRECTLY.
Re: (Score:2)
Re: (Score:3, Insightful)
Which is somehow different from any other open source project how?
Re: (Score:2)
Re:Some thoughts (Score:5, Insightful)
This is so true; I've always said, "you get what you pay for."
Do you want to pay for software, or do you want to pay for people?
Only one can create the other.
Re:Some thoughts (Score:5, Funny)
Re:What is the role of Open Source (Score:5, Interesting)
Re: (Score:3, Insightful)
I don't know what else but open source you could use, especially on the database side. You have only a few choices:
Microsoft ($$$) (approx. $50,000 per server per year in licensing costs since it's a public (unlimited CAL) enterprise-level site)
IBM ($$) (approx. $500,000 per year for leasing the whole operation, another load for support)
Oracle ($) (approx. $20,000 per backend and about 30 contractors for the next 5 years for the implementation)
Linux, MySQL, PHP (Free)
Not to mention, with Microsoft you'll need...
Re: (Score:2)
I'm not aware of any software that Wikipedia uses that isn't open source. They've got a very strong commitment to the free-content movement -- sometimes a little too strong: the only sound format they accept is Ogg Vorbis, and the only video format Ogg Theora.
Re: (Score:2, Funny)
Re: (Score:2, Informative)
Re:What amazes me... (Score:5, Interesting)
Slashdot is great at taking down sites on crappy shared hosting, but anything with a decently configured dedicated server will likely survive just fine.
Wikipedia's probably getting hit with hundreds of times the traffic Slashdot is at all times.
Re:What amazes me... (Score:4, Insightful)
Looking at some old data and extrapolating, I'd guess a modern slashdotting would peak at 200 pageviews/min, or ~3 pv/sec. Get mentioned on Good Morning America or Oprah, on the other hand, and you're looking at 20-200 pageviews/sec. I'd guess that getting on Digg's front page is somewhere in the 20-40 pv/sec range.
A slashdotting was a big deal back when every nerd used it and the Internet was mainly nerds. Neither is true anymore.
Re: (Score:2)
To be quite honest, I'd say that the Slashdot surge is probably a drop in the bucket as far as Wikipedia is concerned. I mean, they're the top result for loads of Google queries, and plenty of people go straight to Wikipedia when they need to look something up.
Re: (Score:2)
Correct [alexa.com]
Re: (Score:3, Insightful)
That said, I'm sure that the traffic to Wikipedia is probably several orders of magnitude higher than that of Slashdot.
Re: (Score:2)
Re: (Score:3, Informative)
It exists. It's called "validators". There are strong and weak validators. You can Vary on your validators, and thus have multiple copies of the same object but in different forms (so given a text document, you can have it in different languages, compressed/uncompressed, etc.).
Your browser will then quite happily ask the origin server (which may not be the "origin" origin) for an object and provide validators (Last-Modified -> If-Modified-Since; ETag -> If-None-Match), which the origin (or the cache in front of it) can use to answer 304 Not Modified instead of resending the whole object.
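As a concrete example of that round-trip, here's a short Python sketch using the third-party requests library (the URL is arbitrary and the server's validator support is assumed): fetch once, remember the validators, then revalidate and look for a 304.

    import requests

    url = "https://en.wikipedia.org/wiki/Main_Page"  # any cacheable resource

    first = requests.get(url)
    etag = first.headers.get("ETag")
    last_modified = first.headers.get("Last-Modified")

    # Build the conditional request from whichever validators were sent.
    conditional = {}
    if etag:
        conditional["If-None-Match"] = etag                # ETag -> If-None-Match
    if last_modified:
        conditional["If-Modified-Since"] = last_modified   # Last-Modified -> If-Modified-Since

    second = requests.get(url, headers=conditional)
    if second.status_code == 304:
        print("Not modified: the cached copy is still good, no body resent")
    else:
        print("Changed (or no validators offered): received a fresh copy")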