Huge Traffic On Wikipedia's Non-Profit Budget

miller60 writes "'As a non-profit running one of the world's busiest web destinations, Wikipedia provides an unusual case study of a high-performance site. In an era when Google and Microsoft can spend $500 million on one of their global data center projects, Wikipedia's infrastructure runs on fewer than 300 servers housed in a single data center in Tampa, Fla.' Domas Mituzas of MySQL/Sun gave a presentation Monday at the Velocity conference that provided an inside look at the technology behind Wikipedia, which he calls an 'operations underdog.'"
  • Impressive (Score:5, Insightful)

    by locokamil ( 850008 ) on Tuesday June 24, 2008 @12:19PM (#23920399) Homepage

    Given that their pages are generally in the top three results for any search engine query, the volume of traffic they're dealing with (and the budget that they have!) is very impressive. I always thought they had much beefier infrastructure than the article says.

  • by mnslinky ( 1105103 ) on Tuesday June 24, 2008 @12:22PM (#23920471) Homepage

    It would be neat to have a deeper look at their budget to see how I can save money and boost performance at work. It's always nice having the newest/fastest systems out there, but it's rarely the reality.

  • by Itninja ( 937614 ) on Tuesday June 24, 2008 @12:22PM (#23920477) Homepage
    From TFA: "But losing a few seconds of changes doesn't destroy our business."

    Our organization's databases (we're also a non-profit) get several thousand writes per second. Losing 'a few seconds' would mean potentially hundreds of users' record changes were lost. If that happened here, it would be a huge deal. If it happened regularly, it would destroy the business.
    • by robbkidd ( 154298 ) on Tuesday June 24, 2008 @12:37PM (#23920799)

      Okay. So pay attention to the sentence before the one you quoted which read, "I'm not suggesting you should follow how we do it."

    • by Anonymous Coward on Tuesday June 24, 2008 @12:47PM (#23921057)

      Don't be too harsh -- the standards are dependent on the application. Your application, by the nature of the information and its purposes, requires a different standard of reliability than Wikipedia does. You're certainly entitled to be proud of yourself for maintaining that standard.

      But don't let that turn into being derogatory about the Wikipedia operation. Wikipedia has identified the correct standard for their application, and by doing so they have successfully avoided the costs and hassle of over-engineering. To each his own...

      • by WaltBusterkeys ( 1156557 ) * on Tuesday June 24, 2008 @01:01PM (#23921379)

        Exactly. A bank requires "six nines" of performance (i.e., right 99.9999% of the time) and probably wants even better than that. Six nines works out to about 30 seconds of downtime per year.

        It seems like Wikipedia is getting things right 99% of the time, or maybe even 99.9% of the time ("three nines"). That's a pretty low standard relative to how most companies do business.

        • by Nkwe ( 604125 ) on Tuesday June 24, 2008 @01:21PM (#23921765)

          A bank requires "six nines" of performance (i.e., right 99.9999% of the time) and probably wants even better than that.

          Banks don't require "six nines"; banks require that no data (data being money), once committed, get lost. The "nines" rating refers to the percentage of time a system is online, working, and available to its users. It does not refer to the percentage of acceptable data loss. It is acceptable for bank systems to have downtime, scheduled maintenance, or "closing periods" -- all of these eat into a "nines" rating, but none of them lead to data loss.
          • The nines can refer to both.

            I agree that banks can't withstand data loss, but they can withstand data errors. If there's a 30-second period per year when data doesn't properly move, and that requires manual cleanup, that's acceptable.

            • by PMBjornerud ( 947233 ) on Tuesday June 24, 2008 @03:36PM (#23923997)

              If there's a 30-second period per year when data doesn't properly move, and that requires manual cleanup, that's acceptable.
              And if there is a one-hour downtime, EVER, you just blew through your downtime budget for the next 120 years.

              "Six nines" is meaningless. Unrealistic.

              It is a promise that you cannot be hit by a single accident, fuckup, pissed-off employee or act of god.

          • Re: (Score:3, Insightful)

            by Waffle Iron ( 339739 )
            Indeed. Some of us are old enough to remember the days of "banker's hours" and before ATMs, when banks used to make their customers deal with less than "one two" (20%) availability.
          • Re: (Score:2, Interesting)

            by Anonymous Coward

            Right, banks traditionally used techniques such as planned downtime to allow for maintenance. "Banker's hours" left a large period of time, daily, when little to no 'data' was changing in the system and the system could be 'balanced'.

        • Re: (Score:3, Insightful)

          by astrotek ( 132325 )

          That's amazing considering I get an error page on Bank of America around 5% of the time if I move too quickly through the site.
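
          For the record, here is the arithmetic behind the "nines" figures tossed around in this sub-thread, as a minimal Python sketch (an editorial illustration assuming a 365-day year, not something any commenter posted):

            SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # 31,536,000

            def downtime_per_year(availability_percent):
                # Allowed downtime, in seconds per year, for a given availability.
                unavailable_fraction = 1 - availability_percent / 100.0
                return unavailable_fraction * SECONDS_PER_YEAR

            for label, pct in [("two nines", 99.0),
                               ("three nines", 99.9),
                               ("six nines", 99.9999)]:
                secs = downtime_per_year(pct)
                print(f"{label}: {secs:,.1f} seconds/year (~{secs / 3600:.2f} hours)")

            # Prints (approximately):
            #   two nines: 315,360.0 seconds/year (~87.60 hours)
            #   three nines: 31,536.0 seconds/year (~8.76 hours)
            #   six nines: 31.5 seconds/year (~0.01 hours)
            #
            # So "six nines" really does allow only about 30 seconds of downtime per
            # year, and a single one-hour outage burns roughly 114 years of that budget.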

    • Losing 'a few seconds' would mean potentially hundreds of users' record changes were lost. If that happened here, it would be a huge deal.

      If you don't deal with financial data, it's likely that even your business would survive an event like that. Sure, if it happened all the time users would flee, but I haven't seen such problems at Wikipedia. He wasn't talking about doing it regularly, just that when disaster does strike, no pointy-haired guy appears to assign blame.

    • Re: (Score:2, Informative)

      Changes are never just lost. When an error does happen and the action cannot be completed, it is rejected and the user is notified so they can try what they were doing again. You have vastly overstated the severity of such issues.

    • Re: (Score:3, Interesting)

      by az-saguaro ( 1231754 )
      Your reasoning may be a bit specious. If your databases get "several thousand writes per second", it sounds like this may be massive underuse of your capacity - i.e. your servers or databases may be able to handle hundreds of thousands or millions of writes per second. If a few seconds were lost or the service went down, then the incoming traffic might get cached or queued, waiting for services to come back on line. Once the connection is re-established, the write backlog might take only a few seconds or a few frac
  • by imstanny ( 722685 ) on Tuesday June 24, 2008 @12:22PM (#23920481)
    Every time I Google something, Wikipedia comes near the top most of the time. Maybe that's why Google doesn't want to disclose its processing power; it may very well be a lot smaller than people assume.
    • by Bandman ( 86149 )

      Ever pay attention to the render times, though?

      Their infrastructure is scary-massive, according to almost every report [datacenterknowledge.com]

    • by Chris Burke ( 6130 ) on Tuesday June 24, 2008 @01:32PM (#23921969) Homepage

      I don't actually know anything about the total computing power Google employs, but I do know that they will purchase on the order of 1,000-10,000 processors merely to evaluate them prior to making a real purchase.

      • by kiwimate ( 458274 ) on Tuesday June 24, 2008 @04:14PM (#23924517) Journal

        You know what I thought was interesting? This story [cnet.com] (which was linked to from this /. story titled A Look At the Workings of Google's Data Centers [slashdot.org]) contained the following snippets.

        On the one hand, Google uses more-or-less ordinary servers. Processors, hard drives, memory--you know the drill.

        and

        While Google uses ordinary hardware components for its servers...

        But this was immediately followed by:

        it doesn't use conventional packaging. Google required Intel to create custom circuit boards.

        For some reason I'd always believed they used pretty much standard components in everything.

  • by Subm ( 79417 ) on Tuesday June 24, 2008 @12:23PM (#23920513)

    How hard can it be to increase the budget or add more servers?

    Just go to the Wikipedia page with those numbers and change them. You don't even need to have an account.

  • Maybe... (Score:3, Funny)

    by nakajoe ( 1123579 ) on Tuesday June 24, 2008 @12:28PM (#23920577)
    Datacenterknowledge.com might want to take lessons from Wikipedia as well. Slashdotted...
  • Note to self

    by Anita Coney ( 648748 ) on Tuesday June 24, 2008 @12:28PM (#23920591) Homepage

    If you ever find yourself in a flamewar on Wikipedia you cannot win, bomb Tampa, Florida out of existence.

    • by canajin56 ( 660655 ) on Tuesday June 24, 2008 @12:43PM (#23920949)
      That's your solution to everything.
      • Re: (Score:3, Funny)

        by TubeSteak ( 669689 )

        That's your solution to everything.
        I did ask if you wouldn't prefer a nice game of Chess.
        -WOPR
    • Re:Note to self (Score:5, Interesting)

      by Ron Bennett ( 14590 ) on Tuesday June 24, 2008 @12:48PM (#23921073) Homepage

      Or do a hurricane dance, and let nature do its thing...

      Having all their servers in Tampa, FL (of all places, given the hurricanes, frequent lightning, flooding, etc. there) doesn't seem too smart - I would have thought that, given Wikipedia's popularity, their servers would be geographically spread out across multiple locations.

      Though doing that adds a level of complexity and cost that even many for-profit ventures, such as Slashdot, likely can't afford / justify; Slashdot's servers are in one place - Chicago ... to digress a bit, I notice this site's accessibility has been spotty (i.e. more page-not-found errors / timeouts lately) since the server move.

      Ron

      • Re:Note to self (Score:5, Informative)

        by OverlordQ ( 264228 ) on Tuesday June 24, 2008 @01:22PM (#23921791) Journal

        They're not all in Tampa; they have a bunch in the Netherlands and a few more in South Korea.

      • by LWATCDR ( 28044 )

        Tampa hasn't been hit by many hurricanes. They don't have issues with flooding that I know about, and lightning is lightning. It can happen anywhere; just do your best to protect your systems from it.
        If you are a few miles inland in Florida, hurricanes are not that big of an issue. If you have a good backup generator, then it isn't that big of a problem.
        Oh, did I mention I was born, live, and work in Florida? My office was hit by Frances, Jeanne, and Wilma. Total damage to the office... nothing. Total damage to my

      • by skeeto ( 1138903 )

        Tampa is pretty safe from all that. I have grandparents who live in St. Petersburg (right next to Tampa) and they have never had any damage or been in danger from the weather. If Tampa had major flooding, then pretty much the whole state of Florida would be submerged too. At that point Wikipedia is low on the list of things to worry about.

      • by colfer ( 619105 )

        FutureQuest is a highly rated web host with its data center in Orlando, FL. It has never gone down, even in hurricanes. Very occasionally the network connections or upstreams fritz out, but not due to storms (usually it's BGP, etc.).

        If you recall, there was some heroic blogging out of New Orleans after Katrina. Some guys at an ISP in a tall building downtown kept themselves wired and described hard-core telecom types patrolling the streets. Surreal.

  • More importantly (Score:5, Interesting)

    by wolf12886 ( 1206182 ) on Tuesday June 24, 2008 @12:36PM (#23920755)
    I don't care how few servers they have; what's more interesting to me is that they run an ultra-high-traffic site, which they aren't having trouble paying for, and do it without adds.
    • Simplicity (Score:5, Interesting)

      by wsanders ( 114993 ) on Tuesday June 24, 2008 @01:01PM (#23921373) Homepage

      Although much of the MediaWiki software is a hideous twitching blob of PHP Hell, the base functionality is fairly simple, and it runs perpetually and scales massively as long as you don't mess with it.

      What spoils a lot of projects like this is the constant need for customization. MediaWiki essentially can't be customized (except via plugins, obviously, which you install at your own peril), and that is a big reason why it scales so massively.

      As for Wikipedia itself, I suspect it is massively weighted in favor of reads. That simplifies things a lot.

    • by DerekLyons ( 302214 ) <fairwater.gmail@com> on Tuesday June 24, 2008 @01:20PM (#23921751) Homepage

      Sure, they do without ad income. But they also do it without having to pay salaries, or co-location fees, or bandwidth costs... (I know they pay some of those, but they also get a metric buttload of in-kind contributions.)

      When your costs are lower, and your standard of service (and content) malleable, it is easy to live on a smaller income.

        But they also do it without having to pay salaries, co-location fees, or bandwidth costs...

        Well, as far as salaries go, yeah, they don't have to pay for a full team of developers and administrators for the business, but they do need to pay people to go and check on the servers, replace faulty hardware, etc. Also, as far as co-location costs go, I'd say that running your own data center (i.e. providing your own electricity, cooling, backup power supplies, etc.) can't be cheap either.

    • I don't care how few servers they have; what's more interesting to me is that they run an ultra-high-traffic site, which they aren't having trouble paying for, and do it without adds.
      I can do that too; I just emulate the adds. x+y is the same as x-(0-y). You have to be careful to use signed numbers for everything (or else have a lot of casting), but that's not really all that hard.
  • 300 servers housed in a single data center in Tampa, Fla.

    Did Wikipedia go down when hurricanes Charley etc. came through a few years ago?
    I lost power for about a week when that happened, and I only live about 15 miles from Tampa, right over the Courtney Campbell Causeway actually.
    • Re: (Score:2, Informative)

      by timstarling ( 736114 )

      We've never lost external power while we've been at Tampa, but if we did, there are diesel generators. Not that it would be a big deal if we lost power for a day or two. There's no serious problem as long as there's no physical damage to the servers, which we're assured is essentially impossible even with a direct hurricane strike, since the building is well above sea-level and there are no external windows.

  • by kiwimate ( 458274 ) on Tuesday June 24, 2008 @12:44PM (#23920963) Journal

    I.e. the promised follow-up to this story [slashdot.org] about moving to the new Chicago datacenter? You know, the one where Mr. Taco promised a follow-up story "in a few days" about the "ridiculously overpowered new hardware".

    I was quite looking forward to that, but it never eventuated, unless I missed it. It's certainly not filed under Topics->Slashdot.

  • Works great because it's not "Web 2.0"

    by Animats ( 122034 ) on Tuesday June 24, 2008 @12:45PM (#23921009) Homepage

    Most of Wikipedia is a collection of static pages. Most users of Wikipedia are just reading the latest version of an article, to which they were taken by a non-Wikipedia search engine. So all Wikipedia has to do for them is serve a static page. No database work or page generation is required.

    Older revisions of pages come from the database, as do the versions one sees during editing and previewing, the history information, and such. Those operations involve the MySQL databases. There are only about 10-20 updates per second taking place in the editing end of the system. When a page is updated, static copies are propagated out to the static page servers after a few tens of seconds.

    Article editing is a check-out/check-in system. When you start editing a page, you get a version token, and when you update the page, the token has to match the latest revision or you get an edit conflict. It's all standard form requests; there's no need for frantic XMLHttpRequest processing while you're working on a page.

    Because there are no ads, there's no overhead associated with inserting variable ad info into the pages. No need for ad rotators, ad trackers, "beacons" or similar overhead.
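
    To make the check-out/check-in idea concrete, here is a minimal Python sketch of optimistic concurrency with a revision token. It is an editorial simplification with hypothetical names (ArticleStore, EditConflict), not MediaWiki's actual code:

      class EditConflict(Exception):
          # Raised when the base revision no longer matches the latest one.
          pass

      class ArticleStore:
          def __init__(self):
              self.latest_revision = 0
              self.text = ""

          def start_edit(self):
              # "Check out": hand the editor the current revision as a token.
              return self.latest_revision, self.text

          def save_edit(self, base_revision, new_text):
              # "Check in": accept the edit only if nobody else saved in between.
              if base_revision != self.latest_revision:
                  raise EditConflict("page changed since you started editing")
              self.latest_revision += 1
              self.text = new_text
              return self.latest_revision

      store = ArticleStore()
      token, text = store.start_edit()          # editor A checks out revision 0
      store.save_edit(token, "first version")   # A saves fine -> revision 1
      try:
          store.save_edit(token, "stale edit")  # a second editor still holds revision 0
      except EditConflict as err:
          print("edit conflict:", err)          # -> edit conflict: page changed ...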

    • Oh really? Because O'Reilly seems to think it is [oreillynet.com], and I thought he was the main pusher of this terminology. Is the term Web 2.0 actually meaningful?

      • Re: (Score:2, Informative)

        by Tweenk ( 1274968 )

        If you haven't noticed, "Web 2.0" is a long-established buzzword [wikipedia.org] - which means it carries little meaning, but it looks good in advertising. Just like "information superhighway", "enterprise feature" or "user friendly".

    • I take it that "Works great because it's not "Web 2.0"" means it's fast and dynamic, whereas Web 2.0 generally means slow and dynamic.

      The technology behind it is irrelevant; if content is provided by users then it's Web 2.0 (as I understand the term), so Wikipedia definitely is Web 2.0. It's just that they have some fancy caching mechanism to get the best of both worlds. If only more systems were built in a pragmatic way instead of worrying about what they're "supposed" to be.

      • I take it that "Works great because it's not "Web 2.0"" means it's fast and dynamic, whereas Web 2.0 generally means slow and dynamic.

        Web 2.0 is a shorthand version of saying "dynamic pages served using Asynchronous JavaScript and XML (AJAX)". Now, if you reread the parent, you'll see that he says:

        Most of Wikipedia is a collection of static [emphasis mine] pages. Most users of Wikipedia are just reading the latest version of an article... So all Wikipedia has to do for them is serve a static page.

        In other words, the parent is saying that Wikipedia is effective because it avoids any sort of dynamism for the majority of use cases. Heck, even article editing isn't dynamic on Wikipedia. When you click the edit link, you're taken to a separate page which has a prepopulated form with the wikitext of the article. The only bit of dynamic co

    • Web 2.0 is not just about flashy Ajax or whatnot; it's about user-generated dynamic content. WP's "everything is a wiki" architecture might /look/ a bit archaic compared to fancy-schmancy dynamic rotating animated gradient-filled forums, but it's much more powerful.
      Moreover, WP is not a collection of static pages: if you're logged in, at least, every page is dynamically generated, and every page's history is updated within a few seconds.

        Moreover, WP is not a collection of static pages: if you're logged in, at least, every page is dynamically generated, and every page's history is updated within a few seconds.

        That's not how it works. If you're just browsing Wikipedia, you're just looking at a collection of static pages that were generated earlier and cached. Only when you actually edit the page and save it is the page updated.

        If Wikipedia had to freshly create every page for every user, even computational power on the order possessed by Google wouldn't be up to the task.
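
        The general pattern being described is a read-mostly cache with purge-on-save: anonymous readers get a pre-rendered copy, and a successful edit purges that copy so the next read is regenerated. Here is a toy Python sketch of the idea, with hypothetical names; it does not model Wikipedia's real stack (layers of Squid caches and memcached in front of MediaWiki and MySQL), only the principle:

          class PageCache:
              def __init__(self, render):
                  self.render = render   # function: title -> HTML
                  self.cached = {}       # title -> pre-rendered HTML

              def get(self, title, logged_in=False):
                  if logged_in:
                      # Logged-in users get freshly rendered pages (prefs, skins, etc.).
                      return self.render(title)
                  if title not in self.cached:
                      self.cached[title] = self.render(title)   # render once...
                  return self.cached[title]                     # ...serve many times

              def purge(self, title):
                  # Called after a successful save so readers see the new revision.
                  self.cached.pop(title, None)

          wikitext = {"Example": "old text"}
          cache = PageCache(lambda title: "<html>" + wikitext[title] + "</html>")

          print(cache.get("Example"))   # rendered and cached: <html>old text</html>
          wikitext["Example"] = "new text"
          print(cache.get("Example"))   # still the stale cached copy
          cache.purge("Example")        # an edit triggers a purge
          print(cache.get("Example"))   # re-rendered: <html>new text</html>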

  • by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Tuesday June 24, 2008 @12:52PM (#23921179) Homepage Journal

    What does "Non-Profit Budget" mean, anyway? There are non-profits bigger than the company I work for. Non-profit isn't the same as poorly financed.

    • Re: (Score:3, Interesting)

      by quanticle ( 843097 )

      Good point. Perfect example: the Bill and Melinda Gates Foundation has a budget of billions of dollars, easily exceeding the budget of many private corporations.

  • by Luyseyal ( 3154 ) <swaters@@@luy...info> on Tuesday June 24, 2008 @12:54PM (#23921229) Homepage

    The summary was wrong to include a link to the Wikipedia homepage without a Wikipedia link about Wikipedia [wikipedia.org] in case you don't know what Wikipedia is. I myself had to Google Wikipedia to find out what Wikipedia was so I am providing the Wikipedia link about Wikipedia in case others were likewise in the dark regarding Wikipedia.

    -l

    P.s., Wikipedia.

  • I'm kind of surprised there's not been more talk about a distributed computing effort for Wikipedia. Seems like it would be a good candidate. I'm more of an honorary geek than an actual hardcore tech-savvy person - does anyone know if a distributed computing effort could work? I don't really see any problem with data integrity, since it's not confidential and is open to editing by definition (except maybe user info?), so it'd basically be a big asymmetric RAID, right? I would worry more about it having f
  • by Anonymous Coward

    According to http://meta.wikimedia.org/wiki/Wikimedia_servers [wikimedia.org], Wikimedia (and by extension, Wikipedia) runs:

    "About 300 machines in Florida, 26 in Amsterdam, 23 in Yahoo!'s Korean hosting facility."

    also: http://meta.wikimedia.org/wiki/Wikimedia_partners_and_hosts [wikimedia.org]

  • Obviously you can pay much less outside Silicon Valley. If you want investment capital & lots of customers you have to be physically in Silicon Valley and pay the millions of dollars. Even Wikipedia had to move its office to San Francisco & the data center is going to follow if they can get enough donations.

  • by Xtifr ( 1323 ) on Tuesday June 24, 2008 @02:51PM (#23923321) Homepage

    Wikipedia's pretty impressive, but how about the Internet Archive [archive.org]? Also a non-profit that doesn't run ads, and not only do they, like Google and Yahoo, "download the Internet" on a regular basis, but the Archive makes backups! Plus, they have huge amounts of streaming audio and video (pd or creative-commons). The first time I ever heard the word "Petabyte" being discussed in practical, real world terms (as in, "we're taking delivery next month") was in connection with the Internet Archive. Several years ago. And it was being used in the plural! :)

    They may not have as much incoming traffic as Wikipedia, but the sheer volume of data they manage is truly staggering. (Heck, they have multiple copies of Wikipedia!) When I do download something from there, it's typically in the 80-150 MB range, and 1 or 2 GB at a pop isn't unusual, and I know I'm not the only one downloading, so their bandwidth bills must still be pretty impressive.

    The fact that these two sites manage to survive and thrive the way they do never ceases to amaze me.

  • by trawg ( 308495 ) on Tuesday June 24, 2008 @08:48PM (#23927599) Homepage

    I notice they are conspicuously absent from the comments. They tend to jump up and down in any other post about PHP and MySQL. This is such a great example of the scalability and performance of PHP and MySQL WHEN USED CORRECTLY.
