Twitter Not Rocket Science, but Still a Work in Progress
While it may not be rocket science, the Twitter team has been making a concerted effort to effect better communication with their community at large. Recently they were set upon by a barrage of technical and related questions, and the resulting answers are actually somewhat interesting. "Before we share our answers, it's important to note one very big piece of information: We are currently taking a new approach to the way Twitter functions technically with the help of a recently enhanced staff of amazing systems engineers formerly of Google, IBM, and other high-profile technology companies added to our core team. Our answers below refer to how Twitter has worked historically--we know it is not correct and we're changing that."
That's "effect", not "affect" (Score:5, Informative)
Re:And Twitter is... (Score:5, Informative)
Re:twitter hate... (Score:2, Informative)
Re:Twitter Google App Engine (Score:3, Informative)
Re:i wonder what they mean by.. (Score:2, Informative)
It's the algorithm, stupid (Score:5, Informative)
When I started irately writing this post, I wrote it in a tone that would have gotten me modded into oblivion. But then I realized that ignorance, not idiocy, drives the particular myth I'm debunking. Let me educate, not flame, those of you who haven't formally studied computer science.
It's become fashionable to blame Ruby for Twitter's problems, but that's wrong. The particular choice of language doesn't matter a bit when you're talking about scalability, whatever the problem.
First, sending a twitter message is an algorithm. An algorithm is just a recipe for doing something to some data. Although most computer science literature deals with more abstract and general algorithms, like those for sorting and searching, the same principles apply to even the most mundane processes, like what rm foo does to a file system, or how a database engine runs an INSERT.
One way we can talk about algorithms is to use something called Big-O notation [wikipedia.org], which describes the relationship between how much stuff an algorithm processes and how long it takes to run.*
It's easier to see things with examples. Say we have an algorithm and we give it three sets of data, D1, D2, and D3, each twice as large as the last, so that D2 is twice as large as D1, and D3 is four times as large as D1.
If we call the algorithm O(1), it will take the same amount of time to process D1, D2, and D3. If we instead say it's O(N), D2 will take twice as long to run as D1, and D3 will take four times as long.
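To make that concrete, here's a toy Python sketch (the names and data shapes are made up for illustration, not anything Twitter actually runs): the same user lookup written as an O(N) scan and as an O(1) hash lookup.

# Toy illustration, not production code: the same lookup done two ways.

def find_user_linear(users_list, user_id):
    """O(N): work grows in step with the number of users scanned."""
    for user in users_list:
        if user["id"] == user_id:
            return user
    return None

def find_user_constant(users_dict, user_id):
    """O(1) on average: a hash lookup costs about the same at any size."""
    return users_dict.get(user_id)

Double the number of users and the first function takes roughly twice as long; the second one barely notices.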
If N represents the number of users of a web application and we want to double N (twice as many users), we'd need twice as many web servers if the bottleneck algorithms are O(N). If the database is the bottleneck, we'd need a twice-beefier database server, or some partitioning.
Things start to get interesting with O(N^2). In that case, D2 takes four times longer to run than D1, and D3 takes four times longer than D2, which is sixteen times longer than D1.
That means that if we want to support twice as many users, we need four times as many web servers, or more likely, a four-times beefier database server.
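Here's a hypothetical Python sketch of how that happens (my own illustration, not Twitter's actual delivery code): if the average follower count grows along with the user base, a naive "copy every message to every follower's timeline" loop does O(N^2) work.

# Hypothetical sketch: naive fan-out delivery. If each of N authors has
# on the order of N followers, one pass over the tweets is O(N^2) copies.

def deliver_all(tweets, followers_of):
    """tweets: list of (author, text); followers_of: author -> list of followers."""
    timelines = {}
    for author, text in tweets:                  # up to N messages...
        for follower in followers_of[author]:    # ...each copied ~N times
            timelines.setdefault(follower, []).append((author, text))
    return timelines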
It can get a lot worse than O(N^2) too, especially if you're not paying attention to complexity. For example, many graph (think social networking) algorithms can easily become O(2^N), which is a lot worse than N to a constant power.
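For a flavor of how you land in exponential territory, here's an illustrative (made-up) example: a brute-force search for the largest group of users who all follow each other has to consider every subset of N users, and there are 2^N of those.

from itertools import combinations

# Illustrative only: brute-force "largest mutual-follow group" search.
# In the worst case it examines all 2^N subsets of the user list.

def largest_clique(users, follows):
    """follows: set of (a, b) pairs meaning a and b follow each other."""
    for size in range(len(users), 0, -1):          # try the biggest groups first
        for group in combinations(users, size):    # 2^N subsets in total
            if all((a, b) in follows or (b, a) in follows
                   for a, b in combinations(group, 2)):
                return list(group)
    return []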
When you try to scale a poorly-designed algorithm (pretty much anything worse than O(N)), you start running out of cores, rack space, electricity, and atoms in the universe.
One useful bit about big-O notation is that it lets us ignore piddly details that don't matter. Say we had an O(2N) version of the O(N) algorithm. Sure, the O(2N) algorithm might take twice as long to run, but it can still handle double the data with double the capacity or double the time. Even if it's O(10N), you don't start boiling the oceans to cool your data center when you want to increase your visit capacity a thousandfold.
This observation is why the choice of language doesn't matter. If a language implementation is slow, all it does is add a constant factor to any algorithms written in that language. A Python application might be ten times slower than one written in C, but its big-O complexity will be the same.
At worst, that means you'll need ten times as many servers as with the C web application. The increase in development efficiency writing in Python (or Ruby on Rails, or Lisp, or anything else) might make the trade-off worth it. You can deal with a constant factor slowdown.
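If you want to see the constant-factor-versus-complexity distinction in action, here's a throwaway Python timing sketch (the 10x overhead is simulated, purely for illustration): both functions are O(N), so doubling the input roughly doubles each one's runtime, slow constant or not.

import time

def fast_sum(data):
    return sum(data)                 # O(N), small constant factor

def slow_sum(data):
    total = 0
    for x in data:                   # O(N), roughly 10x the per-item work
        for _ in range(10):
            total += x
    return total // 10

for n in (1_000_000, 2_000_000):
    data = list(range(n))
    t0 = time.perf_counter(); fast_sum(data); fast = time.perf_counter() - t0
    t0 = time.perf_counter(); slow_sum(data); slow = time.perf_counter() - t0
    print(f"N={n:,}: fast={fast:.3f}s slow={slow:.3f}s")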
If, on the other hand, you code a wicked fast implementation of an O(N^3) algorithm in C, no amount of hardware will save you. You'll hit a number of users beyond which your servers slow to a crawl and you lose blagospheric karma. Even if you double your capacity, or buy a four-times-beefier database server, that only buys you a sliver of headroom before you're right back at the wall.
'team' (Score:3, Informative)
Psst, there's actually no "Twitter team." It's just one guy with like ten accounts.