Building a Scalable Apache Site?
bobm writes "I'm looking for feedback on any experience building a scalable site. This would be a database-driven site, not just a bunch of static pages. I've been looking for pointers to what other people have learned (either the easy way or the hard way). I would like to keep it Apache-based and am looking for feedback on the maximum number of child processes that you've been able to run, etc. Hardware-wise, I'm looking at using quad Xeons or even Sun E10K systems. I would like to stay non-clustered if possible."
Slashdot (Score:4, Funny)
Increase the threads etc. until it stays up.
Uptime (Score:1)
I agree with many of the things that the other posters here are telling you to do. I have a few suggestions, though. Put the money into areas where it will do you some good. Buy the cheaper servers. The real bottleneck on most servers is the PCI bus and the hard drive I/O anyway. Multiple smaller servers eliminate this issue, since the same load is spread across more PCI buses and read-write heads. It's nice to have the budget for an E10K, but we have two at work and I'm not terribly impressed, especially not for the price tag, never mind the footprint, weight, power usage, or massive BTUs.
My first suggestion concerns your network. You are far more likely to saturate your network than to run out of server capacity. Sites with page views in the thousands per second generally run multiple high-end connections, like DS-3s. You are going to need gear that can handle this kind of load, not to mention the redundancy and multipathing. Some serious gigabit gear is almost certainly in order.
I'd also suggest that you do some tweaking to your firewalls. Security is, or at least should be, a major concern for sites that take that kind of traffic. Just ask Wingspan Bank. You can build a great site, but if people get their information stolen, they won't be back.
My next suggestion would be to put a dedicated load-balancing device out in front of your web servers. Either use Squid or use an actual hardware device built for this kind of thing, a la Cisco's CSS products.
I'd also suggest sinking some of that cabbage into a real database. Understand that I'm not knocking PostgreSQL or any of the other open-source databases, but Oracle has roughly 85% of the high-end database market FOR A REASON. Once you shell out for the Oracle instance, shell out for someone to tune it as well. Tuning is VERY important to databases.
Anyway, that's my two cents' worth. This advice is worth exactly what you have paid for it.
Persistent Connections Are Your Friend (Score:5, Informative)
Re:Persistent Connections Are Your Friend - MAYBE (Score:4, Interesting)
However, persistent connections may be too much of a burden for an overworked DB server. If you're using PHP/MySQL, for example, mysql_pconnect may not be the way to go if you have a few front-end servers hitting the database. The PHP connection-pooling limit appears to be per process: if you have 100 Apache processes with a 10-connection limit each, across 10 web servers, that's a maximum of 10,000 DB connections!
One idea might be an intermediate "connection broker" on a per-server basis. We use something similar to this.
Apache's fork() model is great for stability, but it really hinders interprocess resource sharing. We're mostly Java based here, which allows us to use beans and such. Does mod_perl allow for resource sharing between processes?
Re:Persistent Connections Are Your Friend - MAYBE (Score:1)
Then use Apache 2.x in threaded mode -- that way you don't create a new process for every connection. A threaded server may be a good idea anyway.
---
Harrison's Postulate:
For every action, there is an equal and opposite criticism.
Re:Persistent Connections Are Your Friend - MAYBE (Score:1)
With Perl 5.8, Apache 2, and mod_perl 2, you can share resources between threads. With earlier multi-process versions you can use shared memory or the disk. On Linux, sharing via disk is very fast, since frequently accessed files are kept in memory by the kernel.
Cache, Cache, Cache (Score:5, Insightful)
I've built a site that's able to handle 1-2 million dynamic page views per day. There's not a single static page on the whole site except for the 404 page.
One trick that we currently use is a little daemon that runs on our app servers (a custom Java app). It's essentially a TCP socket interface to a hashtable with per-entry expiration timestamps.
Of course, the real weak point of the system (without clustering) is the database. Make sure that your data is indexed properly and that your queries are optimised. We have 2 tables with over a million rows each that get hit all the time. Proper data layout, quick queries and the local caches help our puny dual P3-733 (NON-Xeon) with a paltry 1GB of RAM dish out well over a million dynamic pages per day.
Re:Cache, Cache, Cache (Score:1)
Re:Cache, Cache, Cache (Score:2)
The local cache is simply time-based. Each element in the cache has its own expiration time, and part of the API allows you to specify a TTL for each element. The element's timestamp is checked against its TTL with every request - if it's expired, the daemon deletes the element and simply reports that it couldn't find the object.
Another reactive behavior of the daemon is that it will call a trim() (which walks through the hashtable and purges any expired objects that simply haven't been requested since they turned "sour") whenever the hashtable grows to a specified max size. There's some additional logic that keeps trim() storms from occurring.
On the proactive side, the daemon does some housekeeping of its own: after X seconds (we have it set to about 2 hours) it trim()s itself.
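For anyone who wants to picture it, here is a minimal sketch of the expiring hashtable at the heart of a daemon like that (the class and method names are made up for this example; the real thing adds the TCP socket front end and the trim()-storm protection on top): per-element TTLs, lazy expiry on lookup, and a trim() that walks the table.

    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;

    // Minimal expiring-cache sketch (hypothetical names), in the spirit of the
    // daemon described above: per-entry TTL, lazy expiry on get(), and a trim()
    // that purges entries which expired without ever being requested again.
    public class ExpiringCache {
        private static class Entry {
            final Object value;
            final long expiresAt;          // absolute expiry time in millis
            Entry(Object value, long ttlMillis) {
                this.value = value;
                this.expiresAt = System.currentTimeMillis() + ttlMillis;
            }
            boolean expired() { return System.currentTimeMillis() > expiresAt; }
        }

        private final Map<String, Entry> table = new HashMap<String, Entry>();
        private final int maxSize;

        public ExpiringCache(int maxSize) { this.maxSize = maxSize; }

        // Store an object with its own TTL; trim first if we've grown too large.
        public synchronized void put(String key, Object value, long ttlMillis) {
            if (table.size() >= maxSize) trim();
            table.put(key, new Entry(value, ttlMillis));
        }

        // Returns null ("not found") if the entry is missing or has gone sour.
        public synchronized Object get(String key) {
            Entry e = table.get(key);
            if (e == null) return null;
            if (e.expired()) {             // expired: delete and report a miss
                table.remove(key);
                return null;
            }
            return e.value;
        }

        // Walk the table and purge anything that expired without being requested.
        public synchronized void trim() {
            for (Iterator<Entry> it = table.values().iterator(); it.hasNext();) {
                if (it.next().expired()) it.remove();
            }
        }
    }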
Re:Cache, Cache, Cache (Score:1)
Clearly you have to index the cache based on a composite of the data inputs the page requires to process.
Re:Cache, Cache, Cache (Score:1)
Re:Cache, Cache, Cache (Score:3, Informative)
When optimising any system, relaxing granularity is something that you should look at. Do I really need the latest version of the news story up to this very second, or can I deal with one that's a minute or more old? In our case, the news stories are edited and reviewed before they're published, so it doesn't matter if the story is 1 minute old or 10 days old.
In an emergency, we can forcibly expire an element.
There are cases on our site where we can't cache the data - we *need* the live data. Those cases are scrutinized thoroughly before we actually make a live call to the DB, to see if there's some way to get around it. However, most of our data is cacheable and we have a hit rate of ~80%.
Re:Cache, Cache, Cache (Score:2)
If you were to log into our site, your name is displayed on *each* page - how would you cache that whole page effectively?
We cache objects and have the web servers assemble the objects in a page on the fly. So, a news story is an object, a poll box is an object, etc.
One reason for this is different TTLs. Our news stories don't change that often - if ever - once they've been published. A news story TTL may be set to 1 or 2 hours, while our polls are constantly changing and need a much shorter TTL.
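To make the fragment idea concrete, here is a hedged sketch (the class and fragment names are made up; the cache is any get/put-with-TTL store, such as the ExpiringCache sketched earlier in this thread): long-lived story HTML and short-lived poll HTML are cached separately, the per-user greeting is rendered fresh every time, and each page view just concatenates whatever is current.

    // Sketch of assembling a page out of independently cached fragments with
    // different TTLs. Names are hypothetical; "cache" is any get/put-with-TTL
    // store, e.g. the ExpiringCache sketched earlier in this thread.
    public class PageAssembler {
        private final ExpiringCache cache;

        public PageAssembler(ExpiringCache cache) { this.cache = cache; }

        public String frontPage(String userName, int storyId) {
            // The greeting is per-user, so it is rendered fresh on every request;
            // only the shared fragments below are cached.
            return "<html><body>Hello, " + userName
                    + storyFragment(storyId)   // rarely changes: long TTL
                    + pollFragment()           // changes constantly: short TTL
                    + "</body></html>";
        }

        private String storyFragment(int storyId) {
            String key = "story:" + storyId;
            String html = (String) cache.get(key);
            if (html == null) {
                // Cache miss: in reality this is where the database gets hit.
                html = "<div>...story " + storyId + " from the database...</div>";
                cache.put(key, html, 2 * 60 * 60 * 1000L);   // 2-hour TTL
            }
            return html;
        }

        private String pollFragment() {
            String html = (String) cache.get("poll:current");
            if (html == null) {
                html = "<div>...current poll results from the database...</div>";
                cache.put("poll:current", html, 30 * 1000L); // 30-second TTL
            }
            return html;
        }
    }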
Our main goal with this design was to ease our database load. Scaling a database up is *expensive* (Oracle quoted us $250,000 - for the one box) and complex once you start moving to clusters, etc. Scaling the front end is simple - add another server behind the load balancer. We currently have 6 web servers for redundancy and general zippiness, but our load could be handled by 2 or 3 of them.
Re:Cache, Cache, Cache (Score:1)
Re:Cache, Cache, Cache (Score:1, Informative)
Re:Cache, Cache, Cache (Score:1)
The CGI looks to see if the page has already been generated. If it exists, it just dumps the pre-created page out to the user. If it doesn't, it creates the page.
Ugly as it may be, we have a cron job which deletes any files older than x minutes. It works very well. There are 2 machines handling the CGIs for that area, and a few other lesser functions. There is one database machine which is actively used, and a backup machine which is never hit unless the primary dies. The multiple machines are for redundancy, should I want to do something silly like take one down and play with the hardware.
It definitely reduced the load when we started caching the results. It's much easier for a Perl script to dump out an HTML file than to do several SQL queries and generate the HTML from the results.
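For consistency with the other examples in this thread, here is the same check-the-file-first pattern sketched in Java rather than Perl (names are hypothetical, and the freshness check is done inline instead of by the cron job): serve the pre-generated file if it exists and is still fresh, otherwise run the expensive queries, write the result out, and serve that.

    import java.io.File;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // File-based page cache sketch (hypothetical names). The original setup is a
    // Perl CGI plus a cron job deleting files older than X minutes; here the age
    // check happens inline. Assumes pageId has already been sanitized.
    public class CommentPageCache {
        private final Path cacheDir;
        private final long maxAgeMillis;

        public CommentPageCache(Path cacheDir, long maxAgeMillis) throws IOException {
            this.cacheDir = Files.createDirectories(cacheDir);
            this.maxAgeMillis = maxAgeMillis;
        }

        public String getPage(String pageId) throws IOException {
            Path file = cacheDir.resolve(pageId + ".html");
            File f = file.toFile();
            boolean fresh = f.exists()
                    && System.currentTimeMillis() - f.lastModified() < maxAgeMillis;
            if (fresh) {
                // Cache hit: just dump the pre-created page back to the user.
                return new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
            }
            // Cache miss: do the expensive work, store the result, serve it.
            String html = renderFromDatabase(pageId);
            Files.write(file, html.getBytes(StandardCharsets.UTF_8));
            return html;
        }

        // Stub for the expensive part: several SQL queries plus HTML generation.
        private String renderFromDatabase(String pageId) {
            return "<html><body>comments for " + pageId + "</body></html>";
        }
    }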
To get an idea of the section I'm talking about, go to voyeurweb.com, look at a set of pictures, scroll to the bottom, and click "Leave a Comment for this Contri".
Look in the right place (Score:5, Informative)
It would be helpful to know, for example, what portion of the traffic (both number of requests and bytes) is static and what is dynamic (including images). What is the peak (say, 98th percentile) expected traffic? What are typical page sizes, and how compressible are they with gzip? And so on.
Apache itself doesn't really handle dynamic content - its modules or an underlying app server do that. That is probably where you will have to do the most work.
As another poster mentioned, persistent database connections are essential. You may want to look into a "real" app server. JBoss is open source and just won some awards at JavaOne. If that is too much complexity, at least be sure to use persistent connections in whatever other technology you select.
Persistent connections have a down side. Don't forget that your underlying database must be able to handle both your number of requests and your number of connections. If you just increase Apache processes you may find that the database is unable to manage that many simultaneous connections efficiently. Opening/closing connections for each request kills you. Maintaining hundreds of open connections kills you. This is one of the real strengths of any technology that can handle connection pooling - you will probably find that you only need a handful of connections to handle lots of front-ends and connection pooling allows you to do it efficiently. It can also help you scale by distributing connections to multiple database servers for you when your needs dictate.
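To make the pooling argument concrete, here is a minimal sketch (a hypothetical class; in practice you'd use the pooling built into your app server or driver rather than rolling your own): a small, fixed set of JDBC connections is shared by all request handlers, so the database never sees more than that fixed number no matter how many front-end processes or threads you run.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Minimal connection-pool sketch (hypothetical; real deployments should use
    // the pooling that ships with the app server or JDBC driver). The point is
    // that many request handlers share a small, fixed set of DB connections.
    public class SimpleConnectionPool {
        private final BlockingQueue<Connection> idle;

        public SimpleConnectionPool(String url, String user, String pass, int size)
                throws SQLException {
            idle = new ArrayBlockingQueue<Connection>(size);
            for (int i = 0; i < size; i++) {
                idle.add(DriverManager.getConnection(url, user, pass));
            }
        }

        // Borrow a connection; blocks if all of them are currently in use,
        // which is what keeps the database's connection count bounded.
        public Connection borrow() throws InterruptedException {
            return idle.take();
        }

        // Hand the connection back for the next request to reuse.
        public void release(Connection conn) {
            idle.offer(conn);
        }
    }

Each request wraps its queries in borrow()/release(); with, say, ten web servers each running a pool of five connections, the database manages fifty connections instead of the ten-thousand worst case described earlier in the thread.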
The faster you can dispense with a request the better. This includes not only all your processing time but the transmission time to the client. A process/thread can't move on till the client has the data. Therefore...
Design your pages to give yourself a fighting chance. For example: if you have any static images be sure to set your http headers to prevent browsers from reloading them. Even the request overhead to the server to determine that the cached image is up-to-date is more than the size of the image itself so set a LONG expiration.
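If your static images happen to be served through a Java app server (several posters here run Java stacks), a trivial filter like the sketch below sets those headers; with plain Apache you'd normally reach for mod_expires instead. The class name and the one-year figure are just for illustration.

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    // Far-future expiry for static images, as a servlet filter (illustrative only;
    // with plain Apache the same effect comes from mod_expires).
    public class LongExpiryFilter implements Filter {
        private static final long ONE_YEAR_MILLIS = 365L * 24 * 60 * 60 * 1000;

        public void init(FilterConfig config) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletResponse response = (HttpServletResponse) res;
            // Tell browsers the image is good for a year, so they never even send
            // the "is my cached copy still fresh?" request mentioned above.
            response.setHeader("Cache-Control", "public, max-age=31536000");
            response.setDateHeader("Expires", System.currentTimeMillis() + ONE_YEAR_MILLIS);
            chain.doFilter(req, res);
        }
    }

You'd map the filter (in web.xml) to your image paths only, so dynamic pages keep their normal caching rules.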
Trim unnecessary whitespace, use short names (e.g. i/x.png instead of buttonimages/left_page_arrow_top.png), and so on.
If the pages are large enough and the clients slow enough then you may want to use gzip (mod_gzip) to compress the data. It will cost you processing time to compress dynamic content but will save you transmission time. If you pay for bandwidth you can see a 50-80% reduction in your bandwidth usage as well.
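As a rough sanity check on that 50-80% figure, here is a toy measurement you can point at one of your own HTML files; repetitive markup usually lands in that range.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.zip.GZIPOutputStream;

    // Toy measurement of how much gzip shrinks a page. Point it at any HTML file
    // of yours; repetitive markup typically compresses by well over half.
    public class GzipRatio {
        public static void main(String[] args) throws IOException {
            byte[] page = Files.readAllBytes(Paths.get(args[0]));

            ByteArrayOutputStream compressed = new ByteArrayOutputStream();
            GZIPOutputStream gz = new GZIPOutputStream(compressed);
            gz.write(page);
            gz.close();   // finish the gzip stream so the output is complete

            double saved = 100.0 * (page.length - compressed.size()) / page.length;
            System.out.printf("%d -> %d bytes (%.0f%% smaller)%n",
                    page.length, compressed.size(), saved);
        }
    }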
Note: if your spec of "non-clustered" and scalable still allows multiple machines, and if you do have images or other static content, you may want to move that content to a separate machine. The Linux kernel HTTP server screams on static content (of course, the static-content load on your server may be such a small percentage that it isn't worth the effort).
Try Apache 2.x first. One problem with 1.x on most (all??) platforms is the "thundering herd" problem. You may try to increase performance by running lots of processes but when a request comes in, all sleeping processes are awoken (the thundering herd) and although only one will end up servicing the request, the effect of waking up huge numbers of sleeping processes can be "bad".
Be sure to test with clients of varying speed. We discovered we could crash a site faster with slow clients than with fast ones. Once, while testing a ColdFusion/IIS site, it seemed like we could really get some screaming throughput when testing on the LAN. Unfortunately, when the server had to keep threads/connections alive long enough to service slow clients, it wasn't so pretty. When we ran the simulation that way we could crash the server in 2 seconds.
Give me more specifics and I may be able to give better advice.
Re:Look in the right place (Score:5, Informative)
Re:Look in the right place, but what is Clustering (Score:2)
Like I say, there are different ways of doing this, and really you ought to browse through a good bookstore or two to get more details. One strategy that's easy to implement might be to split up your content so that plain HTML is on www.site.com while your images are on img.site.com, your CGI scripts are on cgi.site.com, and your data is housed on db.site.com (which probably shouldn't be web-accessible, by the way -- this protects you!). This is a vertical split. Or you can go horizontal by placing everything behind a load balancer that redirects incoming requests to one of several web servers -- each of which can be getting content from a single shared NFS partition. Or you can do a mix of those: maybe all the front-end web servers communicate with dedicated database and other back-end boxes behind them (which, again, would not be otherwise Internet-accessible).
On a Linux or Unix system, NFS is a pretty easy way to mirror content across all your servers. For a Win2k-served site, you could probably get away with CIFS Windows shared drives. Or, if you want to be really cutting edge, WebDAV might be able to meet similar needs. Less clever -- but arguably easier -- ways to do it might involve rsync'ing content from a master content server to a set of web-facing server "clients". A variation on that idea ends up being more or less identical to content proxy caching, as one big expensive app server in the back gets its data cached onto a pool of cheap web-facing proxies.
But, like I said at the beginning, the devil is in the details and you really ought to pick up a couple of good books if you want to learn more about this. Strategies for e.g. database "clustering" can vary widely depending on the RDBMS being used: I doubt the method would be the same for MySQL as it would for PostgreSQL, Oracle or SQLServer, for example. Some of those might be able to do this work almost transparently, while others would involve more manual planning & setup.
Re:Look in the right place (Score:1)
Works fine with me. I have been running PHP 4.2.0 on Apache 2.0.35 on Linux 2.4.19-pre7 and I have not experienced any problems whatsoever.
---
A computer without COBOL and Fortran is like a piece of chocolate cake
without ketchup and mustard.
Re:Look in the right place (Score:4, Interesting)
The site will be mostly serving dynamic content, with the average page being about 60-120k of code and around 10k of images. And yes, that's a lot of code, but the site is serving up reports and whatnot. There are small pages between reports, plus the usual login screens, etc.
The real purpose of the question was to see how different tuning approaches are being used in the real world; as the web has matured, there has to be some interesting information out there on keeping systems up 24/7, etc.
For example, we're looking into a replicated database with just the important info (and I know that "important" is a real fuzzy term) for periods when we need to bring the primary database down.
What would also be interesting is the proactive analysis (when do you add more hardware, etc.) that is done on a live running system.
Thanks
Re:Look in the right place (Score:1)
It's more processing on the server side but you'll save bandwidth and the user-perceived performance will be better.
Re:Look in the right place (Score:2)
While the rest of your post contains many good points, I find this comment bizarre. The overhead of a few extra bytes is insignificant compared to the benefit of having maintainable code.
Re:Look in the right place (Score:1)
You need to provide way more info (Score:3, Informative)
Anyway, you need to provide way more information in order to get help. There is no magic way to make a site scalable. It just depends on the answers to all the above questions and more.
Re:You need to provide way more info (Score:4, Interesting)
Dynamic: currently mod_perl but open to something faster (if there is a proven faster technology).
Apache: currently 1.3.x; we'll move to 2.0.x when it's ready for prime time.
OS/Hardware: open, currently Solaris/Sun, open to quad Xeon/Linux if it has the performance.
The reason for asking about a single machine vs. multiple machines is that I wanted to get a handle on what one box can do, as opposed to the gut reaction to just keep adding servers.
Although I'm not expecting magic, I didn't want to get too specific because I'm interested in feedback from across the board - for example, how do Orbitz or Yahoo or the *New York Times* maintain uptime? I haven't found anywhere that discusses places like that.
Re:You need to provide way more info (Score:2)
Services provided by Digital Island, Mirror Image and Akamai will distribute your content to a node as close to the client as possible. We use those services for our images (only static content we have), but Akamai (at least) is pushing a new distributed processing model. You give them a Java WAR file or a
Re:You need to provide way more info (Score:1, Informative)
I've recently started with a NYT sibling company. Suffice it to say, our network design -- while far from perfect and entirely too arcane in a lot of ways (like, say, the daily munging of data from old VAX & PDP mainframes into a web-presentable form) -- works. I'm told we had emails from visitors telling us that on 9/11 last year, our site was the only major one that a lot of people could get to: when NYT, CNN, MSNBC, etc. were inaccessible for several hours, our site was able to handle it.
Now granted, even with that traffic spike we still might not have been at their level on an average day, but still -- the ability of the system to withstand sudden shocks like that day (or, say, a Slashdotting earlier this week) has been well proven. And in an abstract way, the points being raised in this thread -- by several posters -- are all design aspects incorporated into the site I work for.
Gimme an email address & I might elaborate. I don't want to go into detail on Slashdot... :-/
How about... (Score:3, Informative)
The database on the Xoom side was an E450 IIRC. Snap used much burlier hardware because they were basically a silver-spoon project of CNET/NBC.
The lesson for scalability is simple: cache like a motherfucker and make everything you can static. And run DSR (direct server return).
If you decouple the database from the webservers, you need to make extra sure that you proxy the high-traffic requests, either by running a static-file-dumping daemon process (for content) or a proxy daemon (for authentication). My moderately-low-traffic site at my current job can handle two saturated DS3s' worth of traffic with 1024 Apache child processes running on each of 2 dual-PII boxes w/512MB RAM, plus the database running on a dual PII w/1GB RAM. It doesn't even break a sweat. Postgres (the database) runs 1024 child processes with a lot of buffers, the NFS caches are pretty good-sized (if your front-end webservers are Suns, you can use cachefs aggressively - I would), and overall it just took some serious tuning to make sure that nothing fazes it.
I'm working on a couple of "community" sites with similar demands (~1 million visitors/month); mod_throttle + caching will solve one's problems, and the other is where I stole the throttling idea from.
For the whiners, Xoom failed in the end because it lost sight of the cheap-ass principles that made it a good stock market scam. Right up until the end, performance on the member servers was sub-4 second per page on average.
Re:You need to provide way more info (Score:1)
The only thing that's truly proven to be faster than mod_perl is custom-coded Apache modules written in C. You can do that, but it will take you a long time.
Re:You need to provide way more info - mod_perl (Score:1)
http://www.fastcgi.com/
Session Management (Score:2, Interesting)
Another option is to store session data in your top-level frame on the client, but this can be messy and hard to debug. Storing sessions in your database is elegant and easy to debug, but it can increase the hits on your database to a prohibitive degree. Adding database bandwidth in the future is difficult and expensive; adding web servers to your system is comparatively cheap and easy.
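A minimal sketch of the database-backed option (the table and column names here are hypothetical): the session is looked up by its cookie ID on every page view, which is exactly the extra per-request database hit being warned about above.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Database-backed session store sketch (hypothetical schema:
    //   sessions(session_id VARCHAR PRIMARY KEY, data TEXT)).
    // Every page view costs at least one extra SELECT, which is the load
    // on the database this comment is cautioning against.
    public class DbSessionStore {
        private final Connection conn;

        public DbSessionStore(Connection conn) { this.conn = conn; }

        public String loadSessionData(String sessionId) throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT data FROM sessions WHERE session_id = ?");
            try {
                ps.setString(1, sessionId);
                ResultSet rs = ps.executeQuery();
                return rs.next() ? rs.getString("data") : null;
            } finally {
                ps.close();
            }
        }

        public void saveSessionData(String sessionId, String data) throws SQLException {
            // Simple portable "upsert": remove any old row, then insert the new
            // one. In production, wrap both statements in a transaction.
            PreparedStatement del = conn.prepareStatement(
                    "DELETE FROM sessions WHERE session_id = ?");
            del.setString(1, sessionId);
            del.executeUpdate();
            del.close();

            PreparedStatement ins = conn.prepareStatement(
                    "INSERT INTO sessions (session_id, data) VALUES (?, ?)");
            ins.setString(1, sessionId);
            ins.setString(2, data);
            ins.executeUpdate();
            ins.close();
        }
    }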
Re:Session Management (Score:1)
Everybody likes to make fun of Java - the poor thing, it was created to be a run-anywhere client language, but its true calling was serving up applications!
A good article (Score:4, Informative)
Kegel's site (Score:5, Informative)
Re:Kegel's site (Score:4, Insightful)
Re:Kegel's site (Score:1)
Code maintenance (Score:1)
Don't be short-termist just because the person generating the business requirements thinks like that. After you're up and running, things may still change. By using good design patterns [c2.com] you'll find it easier to add new functionality or change the system behaviour.
This person is lucky to have a job (Score:1)
How does someone who's obviously never done this, let alone thought about it for more than a few minutes, have a job DOING this? Maybe it's just my area, but there are NO web-architect-type jobs around here, and this numbskull is having Slashdot do their job for them....
Life is like so fair!
never spend more than $2,000 on a web server (Score:1)
Get a single- or dual-processor Intel/AMD rackmount system for your web servers; spending the extra money on a quad system isn't worth it. You don't need SCSI for them either.
Sun's idea of a web server is a $20,000 E280R. Their Netra T1s are only single 500 MHz UltraSPARC IIe machines at roughly the same price as a dual-processor Intel/AMD machine, and they don't really compare performance-wise running as web servers.
For the backend, you can go with the huge systems to run the database. I wouldn't recommend running MySQL; it won't scale. You probably need something like Oracle if it's going to be a heavily trafficked/high-transaction site.
Re:never spend more than $2,000 on a web server (Score:1)
ACS/OpenACS (Score:1)
Check out http://www.openacs.org/. It is a toolkit derived from the original ACS, but instead of Oracle, it works with PostgreSQL. It is under active development. Greenspun's website survived a Slashdotting, and so did OpenACS. That certainly says something about the scalability.
The problem with Apache 1.x/PHP/mod_perl/MySQL/PostgreSQL is that the so-called persistent database connection is per-process. There is no guarantee that requests for a particular site will always be served by the same process with the appropriate DB connections. Then there is the problem of running out of DB connections for any particular process. It ends up requiring a lot of fine-tuning work, a lot of memory, and a lot of CPU power. Apache 2.0 is likely to be better in this respect, but I still think that AOLServer is cleaner.
Re:ACS/OpenACS (Score:2, Informative)
And how is this a problem exactly? If your server is handling only dynamic pages (your static stuff should be split onto another server) you will almost certainly need a database handle on every request. Connection pooling is only useful if your application spends a lot of time NOT using the database.
Then there is the problem of running out of db connections for any particular process.
Why would a particular process need more than one database connection? Each process only handles one request at a time.
Apache 2.0 is likely to be better in this respect, but I still think that AOLServer is cleaner.
Apache 2 provides full support for threading, so it can use the same approach as AOLServer. It doesn't sound like you know very much about it, so maybe you should check it out before you tell everyone it's no good.
You can read how we did it (Score:1)
got cash? (Score:1)
Great article on Scaling your DB (Score:2, Interesting)
- H
Whoa... (Score:1)
However, as a sane person, you would realize that would be dumb. Alright, look - if you've got the loot to spare, why not try this approach:
Get yourself a solid database machine. Any kind of multiprocessor (2 or 4 CPUs should be fine) Sun, IBM, or HP box would do - though not the E10K or that Regatta from IBM. I recommend that you get a true Unix system, because you'll end up with amazing uptime and it can take lots of load. Now remember this fact for the database:
It's the I/O, stupid.
Well, it's the I/O and RAM. Get the best disks you can get your hands on, and lots of them. And get a gig or two of RAM if you can. Now, your database may have specific requirements for RAID; I was told once that with DB2, RAID 5 was adequate (but that's hearsay).
Whereas I have seen many Oracle people recommend RAID 0+1. What I understand to be true is that RAID 5 is good for data warehouses, and RAID 0+1 for OLTP. You need to decide; most websites are probably in the realm of OLTP, but your situation may differ. Please remember: get at least SCSI, or better if you can afford it. You would be better off getting a Penguin Computing system and a Fibre Channel drive array than going with a high-priced Unix vendor and plain old SCSI (IMHO). Now, if you're just going to be using MySQL or Postgres, don't be dumb - get a dual Xeon and run Linux, but the same rules apply.
As for replication I happen to work with a database cluster system with IBM's HACMP, if we need to take down one system we can fail the database over to the second machine, the process takes about two minutes and the database is back up and running. You then switch it back over when you're done doing whatever.
OK, now that I've addressed your database server, let's evaluate your website needs.
First, Apache (of course) allows you to handle multiple websites as virtual hosts. It does this very well. Now, I'm not sure what kind of scalability you're talking about, but that's one kind right there.
I would suggest that if you've got the means, you may want to look at an application server that's based on Apache, such as Oracle's or IBM's WebSphere. It may make administering a ton of web addresses easier.
You want to stay away from clustering? Why? To save on administration costs? But yet you're entertaining an E10k.... I don't get it.
Since I don't think your server consolidation is a smart move, I'm going to propose this:
IBM, Sun, and HP (HP just came out with a really inexpensive rack-mounted 1U Unix machine) all have small 1U web servers. But honestly, those systems are great for databases, where you may want the extra performance of specialized hardware; you could do quite well with a rack full of dual-processor Xeons running Red Hat. (This would be where an app server would be a good idea, because a lot of them feature cloning, so you don't have to copy all the new HTML over with each change.) With all the modules and the native ability to compile Apache from scratch, I think you're better off.
What you would probably do then is load balance them with round-robin DNS (there are some expensive hardware load balancers that I'm not too familiar with that you can buy, too). Look it up - it's very simple IP-based load balancing done at the DNS.
Remember also, that if you're looking to get 10 mil hits a month you're going to need bandwidth (I'd guess a DS3?) to support that.
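As a back-of-the-envelope check on that guess (every input below is an assumption, using the ~100k pages the original poster describes elsewhere in the thread): 10 million page views a month works out to only a few Mbit/s on average, but peaks of 10x or more over the average are common, which is what pushes you toward DS3-class (45 Mbit/s) connectivity.

    // Back-of-the-envelope bandwidth estimate (all inputs are assumptions).
    public class BandwidthEstimate {
        public static void main(String[] args) {
            double viewsPerMonth = 10_000_000;        // "10 mil hits a month"
            double bytesPerView  = 100 * 1024;        // ~60-120k of HTML + ~10k images
            double secondsPerMonth = 30 * 24 * 3600.0;

            double avgMbps = viewsPerMonth * bytesPerView * 8 / secondsPerMonth / 1e6;
            double peakMbps = avgMbps * 10;           // assume peaks ~10x the average

            System.out.printf("average ~%.1f Mbit/s, peak ~%.0f Mbit/s "
                    + "(a DS3 is ~45 Mbit/s)%n", avgMbps, peakMbps);
        }
    }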
This is of course my opinion. I have administered and set up the web servers and databases for several companies, and I've never figured out a really good answer to this problem of scalability, except to keep open the possibility of future growth (although there are some pretty specific formulas for capacity planning for databases). You're making a huge hardware investment, and you need to keep that in mind too: if you need 2 CPUs now, get the capability for four, etc. Keep in mind that companies grow (hopefully), and the CXOs who are approving this purchase won't like it if next week you come down and say, "Oh sorry, we need to get the E15K - the E10K wasn't enough" (but they won't dislike it so much if it's 'only' a $5k Xeon system you're getting). Also, let me say this: try to get a professional opinion from someone, say, not on Slashdot (the problem, obviously, is where - vendors will lie, and so will consultants). If you're spending megabucks, you may have to defend your suggestion at some point, and telling them, "Um, this guy on Slashdot said it would be cool," never really pans out.
Re:Whoa... (Score:1)
Good Resource (Score:1)
Re:Good Resource (Score:1)
NFS vs. rsync? (Score:2, Insightful)
Re:NFS vs. rsync? (Score:1)
Short answer: NFS is simpler to set up and manage (though not by much), and people are more familiar with it.
There are a few reasons I can see for using NFS over rsync (I have an e-mail spool shared over NFS to a bunch of front-end email servers [using NFS-aware maildir - really, it's OK ;)]), but most of them are pretty thin.