Best Solution For HA and Network Load Balancing?
supaneko writes "I am working with a non-profit that will eventually host a massive online self-help archive and community (using FTP and HTTP services). We are expecting 1,000+ unique visitors / day. I know that having only one server to serve this number of people is not a great idea, so I began to look into clusters. After a bit of reading I determined that I am looking for high availability, in case of hardware fault, and network load balancing, which will allow the load to be shared among the two to six servers that we hope to purchase. What I have not been able to determine is the 'perfect' solution that would offer efficiency, ease-of-use, simple maintenance, enjoyable performance, and a notably better experience when compared to other setups. Reading about Windows 2003 Clustering makes the whole process sound easy, while Linux and FreeBSD just seem overly complicated. But is this truly the case? What have you all done for clustering solutions that worked out well? What key features should I be aware of for a successful cluster setup (hubs, wiring, hardware, software, same servers across the board, etc.)?"
1000+ a day isn't very much (Score:5, Insightful)
1000+ unique visitors is nothing. Even if they all hit the site at lunchtime (1 hour window), and look at 30 pages each (very high estimate for a normal site) that's only 8 requests a second. That isn't a lot. A single server could cope easily, especially if it's mostly static content. As an example, a forum I run gets a sustained 1000+ users an hour and runs fine on one server.
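The arithmetic above can be checked with a quick back-of-envelope script (the numbers are just the estimates from this comment, nothing more):

```python
# Worst case: every daily visitor arrives in the same one-hour window.
visitors_per_day = 1000   # the submitter's estimate
pages_per_visit = 30      # deliberately high for a normal site
window_seconds = 3600     # one lunch-hour peak

requests_per_second = visitors_per_day * pages_per_visit / window_seconds
print(round(requests_per_second, 1))  # prints 8.3
```

About 8 requests a second, which a single box serving mostly static content handles without breaking a sweat.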
As for "high availability", that depends on your definition of "high". If the site being down for a morning is a big problem then you'll need a redundant failover server. If it being down for 15 minutes is a problem then you'll need a couple of them. You won't need a load balancer for that because the redundant servers will be sitting there doing nothing most of the time (hopefully). You'll need something that detects the primary server is offline and switches to the backup automatically. You might also want to have a separate database server that mirrors the primary DB if you're storing a lot of user content, plus a backup for it (though the backup DB server could always be the same physical machine as one of the backup webservers).
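The "detect the primary is offline and switch" step is conceptually simple. Here is a minimal sketch of the decision logic; the hostnames are placeholders, and a production setup would typically move a shared virtual IP (e.g. with heartbeat or keepalived) rather than picking names in application code:

```python
# Failover decision sketch: route to the first server whose health probe passed.
def choose_server(probe_results, servers):
    """Return the first server in priority order that is healthy."""
    for host in servers:
        if probe_results.get(host):
            return host
    raise RuntimeError("no healthy server available")

# Primary down, backup healthy: traffic should go to the backup.
status = {"web1.example.org": False, "web2.example.org": True}
print(choose_server(status, ["web1.example.org", "web2.example.org"]))
# prints web2.example.org
```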
Whoever told you that you'll need as many as 6 servers is just plain wrong. That would be a waste of money. Either that or you're just seeing this as an opportunity to buy lots of servers to play with, in which case buy whatever your budget will allow! :)
Is It Mission Critical? (Score:5, Insightful)
budget? (Score:5, Insightful)
There are also more of them than you can poke a stick at, and prices are very reasonable. Places like Rackspace will host this kind of thing for $100/mo.
the other advantage is you don't need to pony up for the hardware.
Plan or Implementation? (Score:5, Insightful)
Why are you purchasing six or so servers before you even have one online?
You say that you expect "1,000+ a day" visitors which frankly is nothing. A single home PC with Apache would handle that.
This entire post strikes me as either bad planning or no planning. You're working from vague, out-of-thin-air projections that are likely impossible to make at this stage.
Have a plan in place for how you will scale your service *if* it becomes popular, or as it becomes popular, but don't go wasting the charity's money just in case your load jumps from 0 to 30,000+ in 24 hours.
Re:Is It Mission Critical? (Score:3, Insightful)
Well, a dedicated server requires maintenance. All my customers come to me saying that they will eventually get 100,000 visitors per day. I work out the monthly cost for them: $100 for a decent dedicated server, plus $250 for a sysadmin, etc.
Eventually they all settle for shared hosting except when privacy is an issue.
1000+ a day is trivial have you thought of amazon? (Score:5, Insightful)
Let's be more blunt. Depending on what you are doing, and whether you want to worry about failover, 1000 a day is bugger all. A simple setup of Apache and Tomcat (if using Java) with round-robin load balancing will give you pretty much what you need.
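As a sketch of what that Apache front end might look like, assuming mod_proxy/mod_proxy_ajp and two Tomcat backends on placeholder hostnames (mod_jk is an equally common choice):

```apache
# Illustrative httpd fragment: round-robin AJP balancing across two Tomcats.
<Proxy balancer://tomcatcluster>
    BalancerMember ajp://app1.example.org:8009
    BalancerMember ajp://app2.example.org:8009
    ProxySet lbmethod=byrequests
</Proxy>
ProxyPass /app balancer://tomcatcluster/app
```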
If, however, you really are worried about scaling up and down, then have a look at Amazon Web Services, as that will probably be more cost-effective for coping with a peak IF it occurs than buying 6 servers to do bugger all most of the time.
2 boxes for hardware failover will do you fine. If you are worried about HA, then it's the COST of downtime that matters (i.e. being down for an hour exceeds $1000 in lost revenue), and that is what justifies the solution. Don't drive availability to five nines just because you feel it's cool; do it because the business requires it.
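To put "five nines" in perspective, here is the downtime budget each availability level actually allows per year, as a quick back-of-envelope script:

```python
# Downtime allowed per year at each availability level.
minutes_per_year = 365 * 24 * 60  # 525,600

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    allowed = minutes_per_year * (1 - availability)
    print(f"{label} ({availability:.3%}): {allowed:.1f} minutes/year")
```

Five nines leaves you roughly five minutes of downtime a year; that level of engineering has to be paid for by the cost of the outages it prevents.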
KISS (Score:3, Insightful)
Sites which get slashdotted typically use a badly structured and under-resourced database to directly feed external queries. If you must use a database, put some kind of simple proxy between it and the outside world. You could use squid for that, or a simple directory of static HTML files.
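If squid is used as that proxy, a minimal accelerator ("reverse proxy") configuration might look like this sketch, assuming the application server listens locally on port 8080 (the site name is a placeholder):

```
# squid.conf fragment: accept public traffic, serve from cache,
# fetch misses from the local application server.
http_port 80 accel defaultsite=www.example.org
cache_peer 127.0.0.1 parent 8080 0 no-query originserver
```

Cached pages are then served without touching the database at all, which is what saves you when the traffic spike arrives.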
Re:You will be OK (Score:4, Insightful)
Round-robin DNS with 2 or 3 Apache Boxes (Score:2, Insightful)
I learned that "managed" is actually a hosting company euphemism for "shared", and performance was seriously degraded during "prime time" every day.
We eventually overcame our network latency issues by ditching the provider's load balancer and using round-robin DNS to point our domain name at all three servers.
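Round-robin DNS is just multiple A records for the same name; in a BIND zone file it looks something like this (addresses are from the documentation range, not our real ones):

```
; Resolvers rotate through these answers, spreading clients across servers.
www     IN  A   192.0.2.10
www     IN  A   192.0.2.11
www     IN  A   192.0.2.12
```

Note that round-robin DNS spreads load but does not detect failures by itself, which is why the per-server failover described below still matters.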
I was using Apache + JBoss + MySQL, and on each server I configured Apache's mod_jk loadbalancer to failover using AJP over stunnel to the JBoss instances on the other 2 servers. I also chose to configure each JBoss instance to talk to a MySQL instance on each box, these being configured in a replication cycle with the other MySQL instances for hot data backup.
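A mod_jk workers.properties for that kind of setup might look like the sketch below; hostnames and ports are placeholders, and the stunnel tunnelling mentioned above is omitted for brevity:

```
# workers.properties sketch: one logical "lb" worker balancing two AJP nodes.
worker.list=lb
worker.node1.type=ajp13
worker.node1.host=app1.example.org
worker.node1.port=8009
worker.node2.type=ajp13
worker.node2.host=app2.example.org
worker.node2.port=8009
worker.lb.type=lb
worker.lb.balance_workers=node1,node2
```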
For our load, we've never had any problems with this. The biggest source of downtime was JBoss (usually administrative updates), but Apache would seamlessly switch over to a different JBoss instance.
One of the servers was hosted with a different provider in a different site.
You don't need high availability (Score:4, Insightful)
First, I suggest you read and think deeply about Moans Nogood's essay "So Few Really Need Uptime" [blogspot.com].
Key quote:
And that corresponds pretty well to my experience: the more effort people make to duplicate hardware and build redundant failover environments the more failures and downtime they experience. Consider as well the concept of ETOPS and why the 777 has only two engines.
sPh
One more thing. (Score:5, Insightful)
Re:budget? (Score:3, Insightful)
The problem being that you're paying $100 per month in perpetuity. Sometimes you get awarded capital to spend in a lump sum, whereas an ongoing revenue commitment may not be possible.
At the spend rates you mentioned, that's a basic server per year. Say the server is expected to last 5-8 years, that'll be an outlay of at least $6000-$9600+, with more to spend if you want to keep things running.
That would cover the cost of a couple of generations worth of hardware, depending on how it was implemented.
If there's no skill around (and there definitely won't be), then by all means, the revenue-based datacentre rental is a great move; but if there is skill around to perform the task, then you gain far greater flexibility by DIY.
Guess a fair bit of this comes down to whether it's possible to get at least $6k+ allocated to revenue spend over the next 5 years (at today's prices), or if it has to be capital.
Google (Score:4, Insightful)
Use Google. Why spend all that money buying up equipment for a non-profit when it could be spent on your REAL mission?
Do it in Google Sites and dump the data center. I even think Google offers Google Apps for free to non-profits.
1000 FTP Users is not 1000 HTTP users (Score:4, Insightful)
Re:1000+ a day isn't very much (Score:3, Insightful)
I think it's sort of fortunate that the submitter was vague. This way, I get to read about all sorts of HA solutions, whereas if he really wanted 2 Apache servers and a hot/cold MySQL instance, I'd have been way more bored ;-)
Re:budget? (Score:3, Insightful)
Your math does not appear to account for the other components necessary to meet similar uptimes to a hosted environment. Eg: multiple internet connections, redundant networking equipment, multiple power feeds, UPSes, disaster recovery site, etc.
Re:1000 FTP Users is not 1000 HTTP users (Score:3, Insightful)
No, not really. Any new server should be able to handle at least 300 Mbit/s.
(And most likely also handle a full 1 Gbit/s, though that might require a dual-CPU system with a fast disk subsystem.)
The only way that 1000 users/day can require more than one server to handle the load is if each user request requires multiple complicated database queries to answer.
(Or if the design/implementation deserves to be featured on "The Daily WTF".)
Re:STOP. You have no idea what you're doing. (Score:1, Insightful)
It's funny that you care about offending this guy. Let me paraphrase his post:
"Help! I obviously have no experience running a web server but have managed to convince a non-profit that I'm the right guy for the job. I've used my social science degree to good effect and done lots of reading to discover the right buzzwords and where all the tech geeks hang out. I think that with $8-24k of servers that you guys could set this up really well for me."
Anyone actually qualified to do this would have already done it. The scary part is what he hasn't talked about - security and backups. He's talking about an FTP repository. How long before that gets pwned?