Clustering vs. Fault-Tolerant Servers
mstansberry writes "According to SearchDataCenter.com fault-tolerant server vendors say the majority of hardware and software makers have pushed clustering as a high-availability option because it sells more hardware and software licenses. Fault-tolerant servers pack redundant components such as power supply and storage into a single box, while clustering involves the networking of multiple, standard servers used as failover machines." Perhaps some readers on the front lines can shed a bit more light on the debate based on both proprietary and Linux-based approaches.
It depends on what you want to do. (Score:3, Interesting)
Also, the one thing the article mentions is that clustering is just as expensive as fault-tolerance due to software licensing. Last I checked, if it's one copy of Debian + Apache + MySQL + Perl or 200 copies, it's going to cost me the same price (free). And Windows doesn't support clustering yet -- in any decent way, shape, or form -- so I don't see the problem here.
Re:It depends on what you want to do. (Score:2, Insightful)
Re:It depends on what you want to do. (Score:3, Informative)
http://www.microsoft.com/windowsserver2003/techno
Clustering (NOT performance clustering mind you, which is NOT the topic at hand anyway) has been around in Windows NT as far back as I can remember. With NT4, you needed to have Enterprise Edition, but it was there.
Re:It depends on what you want to do. (Score:5, Interesting)
Yes, Windows has supported clustering since NT4 (Wolfpack), and per the GP, it SUCKED BOLLOCKS. I had to deal with that shite every damn day for almost 3 years (1997-2000). We used active-active failover, and the joke around the company was that MS were halfway there: the "fail" worked just fine.
-paul
Re:It depends on what you want to do. (Score:2)
It is possible, though, that I am in the minority.
Re:It depends on what you want to do. (Score:5, Insightful)
Some things have changed; for example, Windows Server 2003 came out, and MSCS is now quite a decent HA solution.
(BTW, the grandparent post didn't say that Microsoft's own clustering solution was lame, he made a general statement about all clustering software for the Windows platform).
Re:It depends on what you want to do. (Score:3, Informative)
Since I was in support I didn't see a cross-section, I only saw the failures. That said, there were a LOT of installations out there that would have had better availability with a beige box, and MUCH better availability with a single fault-tolerant server.
It didn't help that sales constantly sold invalid configurations and set unreasonable expectations.
B
Re:It depends on what you want to do. (Score:5, Insightful)
Anyway, I do agree that I've seen DB clustering solutions cause more trouble than they prevent...
A cluster adds complexity to the environment, and complexity == cost, even without the expensive software.
a Java layer that load balances??!!??!!?!? (Score:2, Insightful)
Microsoft Windows Server DOES support clustering (Score:2)
If you want to see the latest Microsoft offering on clustering services, check out this site http://www.microsoft.com/windowsserver2003/techno
Re:Microsoft Windows Server DOES support clusterin (Score:2)
Interestingly, one of them also runs a Linux cluster and an HP cluster, says they were much easier, and is moving their code base to Linux only.
Re:It depends on what you want to do. (Score:5, Informative)
Windows Server 2003 actually supports two different types of clustering. One is called network load balancing, which enables up to 32 clustered servers to run a high-demand application to prevent a single server from being bogged down. If one of the servers in the cluster fails, then the other servers instantly pick up the slack.
Network load balancing has been most often used with Web servers, which tend to use fairly static code and require little data replication. If a clustered web site needs more performance than what the cluster is currently providing, additional servers can be instantaneously added to the cluster. Once the cluster reaches the 32-server limit, you can further expand the cluster by creating a second cluster and then using round-robin DNS to divide traffic between the two clusters.
The other type of clustering that Windows Server 2003 supports by default is often referred to simply as clustering. The idea behind this type of clustering is that two or more servers share a common hard disk. All of the servers in the cluster run the same application and reference the same data on the same disk. Only one of the servers actually does the work. The other servers constantly check to make sure that the primary server is online. If the primary server does not respond, then the secondary server takes over.
This type of clustering doesn't really give you any kind of performance gain. Instead, it gives you fault tolerance and enables you to perform rolling upgrades. (A server can be taken offline for an upgrade without disrupting users.) In Windows 2000 Advanced Server, only two servers could be clustered together in this way (four servers in Windows 2000 Datacenter Edition). In Windows Server 2003, though, the limit has been raised to eight servers. Microsoft offers this as a solution for long-distance fault tolerance when used in conjunction with the iSCSI protocol (SCSI over IP).
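Stripped of the shared-disk plumbing, the active/passive pattern described above reduces to a heartbeat loop: the standby polls the primary, and promotes itself after enough consecutive misses. A minimal Python sketch (class and parameter names are invented for illustration; real MSCS is far more involved):

```python
class FailoverPair:
    """Active/passive failover sketch. check_primary and
    promote_secondary are hypothetical callables supplied by the caller."""

    def __init__(self, check_primary, promote_secondary, max_missed=3):
        self.check_primary = check_primary          # returns True if the primary answered
        self.promote_secondary = promote_secondary  # called once the primary is declared dead
        self.max_missed = max_missed                # consecutive misses before failover

    def tick(self, missed):
        """One heartbeat: return the updated missed-count, failing over
        once max_missed consecutive heartbeats go unanswered."""
        if self.check_primary():
            return 0                                # primary alive; reset the counter
        missed += 1
        if missed >= self.max_missed:
            self.promote_secondary()
        return missed
```

Tuning max_missed is exactly the hair-trigger problem commenters complain about below: too low and you get false takeovers, too high and real outages linger.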
Re:It depends on what you want to do. (Score:2)
Ignoramus (Score:5, Informative)
1. (because I have yet to meet a clustering DB solution that didnt suck).
Where do you live? In Rwanda?
Perhaps you have heard of Oracle RAC. And there are other very good clustering solutions for DBMS.
2. one copy of Debian + Apache + MySQL + Perl or 200 copies
MySQL isn't enterprise-reliable even in a stand-alone configuration, let alone clustered. I can't believe this...
3. And windows doesnt support clustering yet - in any decent way shape or form, I dont see the problem here.
Hah, hah! Enough said.
And also - what's it to you? If Microsoft (in your view) had a good clustering solution, you'd lose sleep over that?
When you're biased like that, no wonder you can't have a quality, unbiased opinion on this topic.
Re:Ignoramus (Score:2)
Re:It depends on what you want to do. (Score:5, Informative)
Let me preface this by saying I'm the Enterprise IT Manager for a large, Big-10 University. "Enterprise" means I am responsible for all servers that run the University, not just a small department. My userbase is 70,000+ students, and somewhere between 15,000-20,000 faculty and staff.
We run a variety of hardware platforms, including a large Linux deployment. Yes, it really does depend on what you want to do with that server, before you can decide to go with a bunch of servers behind a load balancer v. a larger, fault-tolerant server.
For our production web servers (PeopleSoft, web registration, etc.) we run a bunch of cheap servers running Red Hat Enterprise Linux, and we distribute them across two data centers (for redundancy.) We run a load balancer in front of them, so that users access one URL, and the load balancer automagically distributes traffic to the servers on both data centers. For a lightly-used application, we may only run 2 web servers. For heavily-used applications (web registration) we run 5 web servers. Those are IBM x-series now, but we are in the process of moving to IBM BladeCenters.
With multiple servers in production, I can lose any single web server and not experience downtime on the application. We usually only have a single PSU in each server, because there's no point in the extra expense when we have redundancy at the server level. And because we've split our web servers across two data centers, I can actually lose an entire data center and only experience slow response time on the application. (Note to the paranoid: while the data centers are only 1.4 miles apart, they are on separate power grids, etc. The other back-end infrastructure is also split between data centers.) We run a lot of sites behind load balancers, so we can afford to have a separate load balancer pair at each site (which can provide backup to each other.)
However, for large applications we may use a single fault-tolerant Linux server. For example, we used to do this with a database server. Multiple power supplies, multiple network connections, RAID storage, etc. To be honest, though, we tend to run databases on "big iron" hardware such as Sun SPARC (E25000, V890, etc.) and IBM p-series. We don't have any Linux database servers left, but that's not because Linux wasn't up to the task (our DBAs preferred to have the same platform for all databases, to make debugging and knowledge-sharing easier.)
In a few cases, we have a third tier. If the application is low-priority (i.e. a development server) and/or low-volume (i.e. a web site that doesn't get much traffic), we run a single server for that. The server is a cheap IBM x-series box running Red Hat Enterprise Linux, usually with no built-in redundancy.
Yes, for us Linux has been able to play along quite nicely with the "big iron" UNIX systems. We've run Linux at the Enterprise level since 1998 or 1999, and Linux is definitely considered part of our Enterprise solution.
Re:It depends on what you want to do. (Score:5, Insightful)
The usual clustering I've seen is "Hot Spare" clustering. The primary runs until it goes kaput, then the secondary takes over. For database clustering, the two boxes usually share the same disks. I think I've seen more outages from false takeovers by the secondary than real failures of the primary.
The other problem with clustering is that all of your software applications have to be cluster tolerant. If the user app keeps a database connection open and a failover occurs, the connection state doesn't and can't roll over with it. To a client system, a cluster failover looks like a server reboot. Don't underestimate the difficulty of this problem. A new application has to be designed with that in mind. Retro-fitting it in later is hard - and costly, even with free platforms.
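Since a failover looks like a server reboot to the client, cluster-tolerant client code has to be prepared to reconnect and redo its work from scratch. A hedged sketch of that retry pattern (connect and op are stand-ins for whatever database API is actually in use):

```python
def with_reconnect(connect, op, retries=3):
    """Run op(conn); if the connection drops (e.g. a cluster failover),
    reconnect and retry. Because no connection or transaction state
    survives the failover, op must be safe to re-run from scratch --
    which is exactly the retrofitting cost described above."""
    conn = connect()
    for attempt in range(retries):
        try:
            return op(conn)
        except ConnectionError:
            if attempt == retries - 1:
                raise             # out of retries; surface the failure
            conn = connect()      # failover looks like a reboot: start over
```

The hard part isn't the loop; it's making op idempotent so that re-running a half-finished transaction is safe.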
Another issue that can't be solved with clustering is application failure or application limits. You may recall the airline system failure last Christmas? Some 80% of Slashdot readers asked where the backup was (there was one) and why they weren't running Unix (they were). The box (an RS/6000) and operating system (AIX) kept running just fine. A hundred-computer cluster couldn't solve the real problem: the application couldn't handle the volume of information it was required to hold, and they were at the mercy of a proprietary source-code vendor.
More clustering benefits (Score:2, Insightful)
Re:It depends on what you want to do. (Score:3, Insightful)
Re:It depends on what you want to do. (Score:2, Interesting)
Fault tolerant is the
Re:It depends on what you want to do. (Score:3, Informative)
There are "Best Practices" for doing this sort of thing that take the religion out of server-farm design.
First thing to work out:
(1) How many minutes of APPLICATION downtime are acceptable
(2) How much money will I lose for each minute the application is down.
Multiply (1) by (2), and you have a rough idea of your budget. Ideally, this should be the last thing - you work out your needs and then pay for them - and that was true
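As a toy example of that rule of thumb (all numbers invented):

```python
def ha_budget(downtime_minutes, cost_per_minute):
    """Rough HA budget ceiling per the rule above: what the tolerated
    downtime would cost is roughly what it is worth spending to prevent it."""
    return downtime_minutes * cost_per_minute

# e.g. 60 tolerable minutes per year at $500/minute of lost revenue
# gives a ballpark of $30,000/year to spend on availability
```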
I don't see why anybody would use their own server (Score:2, Funny)
Re:I don't see why anybody would use their own ser (Score:2)
Oh the irony (Score:2, Funny)
I know where this is going (Score:2, Funny)
Software vendors (Score:5, Insightful)
Fault tolerant hardware is not the solution (Score:2, Insightful)
Re:Fault tolerant hardware is not the solution (Score:3, Informative)
When hardware fails, you bring up the required packages on a different physical host, and other applications access it using the virtual IP. Going this route allows you to do
So the choice is between... (Score:3, Funny)
And the moral of the story is..... (Score:2)
Re:So the choice is between... (Score:5, Funny)
Re:So the choice is between... (Score:4, Informative)
Re:So the choice is between... (Score:2)
Not the same. (Score:5, Informative)
Absolutely right (Score:4, Informative)
Clustering provides you with fault-tolerant OS/applications. A single server with tons of redundant bits doesn't help you if the OS or the applications it serves get borked.
This is dead-on correct. For example, if a CGI hits a problematic state where it eats a lot of memory, putting the server into a state where it's swapping, then it takes longer to service each http transaction, which means more httpd transactions queue up, which means more memory gets allocated, which means more swapping... rendering the machine useless for a little while (until a sysadmin or a bot notices the state and either restarts the httpd or kills a few select processes). If we were running this on one mammoth server with lots of redundant bits, then 100% of our web service capacity would be down in the interim. But since we run a pool of ten http servers under keepalived/IPVS, we only lose 10% of our capacity during that time.
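The arithmetic behind that 10% figure, generalized (a sketch assuming equal-weight backends and a health checker that evicts a sick node promptly):

```python
def remaining_capacity(pool_size, failed):
    """Fraction of service capacity left when `failed` equal-weight
    backends are evicted from a load-balanced pool. With pool_size == 1
    (the one-mammoth-server case), any failure takes out everything."""
    if failed >= pool_size:
        return 0.0
    return (pool_size - failed) / pool_size
```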
Other reasons I've traditionally preferred clustering: easy to incrementally scale up infrastructure (no big buy-in in the beginning to get the server which can be expanded), fully parallel resources (an independent memory bus, an independent IO bus, two independent CPU's, an independent network card, and a few independent disks for each server, as opposed to a mammoth shared bus on a leviathan crossbar, which will inevitably run into contention), and more flexibility in how resources are divided amongst mutually exclusive tasks.
One of those reasons is getting less relevant -- point-to-point bus technologies like HyperTransport and PCI-Express are inexpensively replacing the "one big shared bus" with a lot of independent buses, transforming the server into a little cluster-in-a-box. It is a positive change IMO, and shifts the optimal setup away from the huge cluster of relatively small machines, and towards a more moderately-sized cluster of medium-sized cluster-in-a-box machines.
The price of licenses is, IME, rarely an issue (in my admittedly limited career -- I don't doubt that it's relevant to many companies) because the places I've worked for have tended to use primarily free-as-in-beer (and often free-as-in-speech) open source solutions. What is more of an issue, IME, is the necessity of staffing yourself with cluster-savvy sysadmins and software engineers. Those of that ilk tend to be a bit rare and expensive, and difficult to keep track of. It takes a distributed systems professional to look at a distributed system and understand what is being seen, and this makes it easy to bend the spec or juggle the schedule on the sly, or run skunkworks projects outright. By contrast, the insanely redundant, mondo-expensive uberserver was created and programmed by very smart hardware and software specialists so that your IT staff doesn't need to be so specialized. This makes useful talent easier to acquire, and understanding the system closer to the reach of mere mortals.
Just my two cents
-- TTK
Since information wants to be free (Score:5, Funny)
More about the cost of hardware? (Score:5, Interesting)
Real world example and cost (Score:3, Interesting)
The agency bought a pair of dual proc Dells with lots of RAM and a full software stack (Windows Server, SQLServer, and ColdFusion Server). Total cost: ~$57,000.
That's right, nearly 60k.
Now, I've read that Google buys their white boxes at $1k each for their server farm. And I couldn't help but think what they (or I) would do with 57 box
Clustering (Score:3, Insightful)
Apples and Oranges (Score:4, Funny)
Why are clusters better? (Score:3, Interesting)
Anyone make a case for clusters for high-uptime situations?
Catastrophic loss (Score:3, Insightful)
--LWM
You should use both (Score:3, Insightful)
Clustering is safer (Score:2, Interesting)
Re:Clustering is safer (Score:2)
If you're willing to lay out the cash, you can get a server that will let you swap out bad cards, memory, and even CPUs while the thing is running without missing a beat.
Re:Clustering is safer (Score:2)
I don't know, I've replaced almost every component on my production servers without taking them down... Of course, that's a $1 million server...
Re:Clustering is safer (Score:2)
I think you don't quite understand the concept of "fault tolerant servers".
The entire point of a fault-tolerant server is that you don't have to power it off to open the case or replace a part.
It all depends (Score:2, Insightful)
Clusters can be in different server racks, buildings, cities, even countries.
It depends what the goal is. Fault tolerance, scalability, disaster recovery, etc.
They both have their uses, let's not discount one or the other, just use them properly.
**Typically, the goal is a mix of the ones I enumerated, hence I typically choose clusters. However, I always re-evaluate every time a new requirement comes in.
Clustering is really... (Score:2)
If you've got the space for the extra servers, clusters are great; if you don't have that kind of excess space, then fault tolerance is top of the mark.
Not to be too snide... (Score:2)
If the people that pay me are willing to invest in the extra HW and SW to make a critical app available, then we do it.
Not either/or (Score:5, Interesting)
Maybe if they want to write an article, they should spend some time in the real world and see how the HA industry works instead of making up some arbitrary demarcation line to hang a preconception on.
Re:Not either/or (Score:2, Insightful)
Re:Not either/or (Score:3, Interesting)
Actually, that's not really fair; the problem is that the term "clustering" has become overloaded. What Google does would be more completely described as "shared nothing" distributed computing. They use cheap-as-chips iron because nobody cares if a transaction fails; no data is lost, and the end user just pushes refresh. Similarly, the various grid-compute "clusters" (SETI, Folding@Home, etc.) can recover from a lost unit of work by sending it out for rep
Flexibility (Score:2)
For certain tasks, clustering will certainly offer a performance advantage from a scalability standpoint. Yet a fully fault-tolerant hardware system like those from Stratus [stratus.com] offers just a touch more reliability than a fault-tolerant software system.
The Good, The Bad, and The Ugly (Score:3, Insightful)
Clustering Potentially Solves More Problems (Score:4, Informative)
Clustering gets you a set of services that keep running in the face of hardware failures and maintenance. The switchover time can range from negligible to huge depending on the application involved.
However, clustering also helps you to solve other problems, including scaling, software failures, software upgrades, A-B testing (running different versions side by side), major hardware upgrades, and even data center relocations.
Clustering tends to require a lot more local knowledge to get right.
So if you narrow the problem definition to hardware only, they solve the same class of problems. But when you broaden it to the full range of what clustering offers you find a greater opportunity for cost savings - because one technique is covering multiple needs.
False dichotomy (Score:2)
SneakerNet * (Score:5, Interesting)
We desperately need a better way to access data in a corporate network.
My favorite customers are those architects and engineers who avoid networking except for the Net. Seriously, sneakernet and peer-to-peer have shown the least downtime I've seen.
I think p2p networks will see a comeback if a torrent-like protocol can grow to be speedy. My customers are not banks, but they need 100% uptime as every day is a beat-the-deadline day.
If someone can extend and combine an internal torrent system with a decent file cataloging and searching system, they'll see huge money. I have some 150 user CAD networks just waiting for it.
What would a hive network need?
* Serverless
* Files hived to 3+ workstations
* Database object hiving
* File modification ability (save the new file in the hive, rename the previous file as an old version, delete really old versions after a user-configurable number of changes)
* "Wayback Machine" feature from old versions
* PCs disconnected from hive will self correct upon reconnection
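The save/rename/expire bullet above amounts to simple version rotation. A sketch (the hive itself, replication, and file naming are all hand-waved; this only shows the retention logic):

```python
def rotate_versions(versions, new_file, keep=5):
    """Prepend the newly saved file to the newest-first version list,
    keep the most recent `keep` versions, and return the rest for
    deletion. `keep` is the user-configurable retention setting."""
    versions = [new_file] + versions
    return versions[:keep], versions[keep:]   # (kept, to_delete)
```

This also gives you the "Wayback Machine" feature for free: the kept list is the history.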
It is very complex right now, but my bet is that the P2P network will trump client-server for the short run. The "client is the server" vs "the server is the client"?
Re:SneakerNet * (Score:3, Insightful)
Speaking of which, what about freenet [sf.net]? The only thing it's missing is "guaranteed availability of critical business data", eh? And I hear it might have some performance problems.
--Robert
Re:SneakerNet * (Score:2)
The war is on:
A. huge megaservers online serving thin/dumb terminals over high speed network connections (renting processors and storage and even apps all on demand with backups)
B. P2P with cheap clients and cheap shared in-client storage
I don't know which way is better. High bandwidth will get cheaper and more available every day.
For now, I'm betting on DumbClient/MonsterServer being the cheapest both initially and in the long run when 10Mb connection
Re:SneakerNet * (Score:2)
Re:SneakerNet * (Score:2)
Re:SneakerNet * (Score:2)
It IS possible to chuff things up, mainly by making administrative errors.
Re:SneakerNet * (Score:2)
I may try to torrent a corporate network if I can find a good file "explorer" or file access subsystem that integrates into Windows.
Re:SneakerNet * (Score:2)
* backups
* authentication/permissions
* simultaneous use of the same file
etc...
These are problems that have already been addressed in most corporate LANs. Fault tolerance is an issue, yes, but if I had to trade the few items above for the extra tolerance that a P2P network gives me, I'd stay with the regular 'ol client-server model.
I'm not saying that P2P isn't a potential solution for the future, but for this application, it's not ready yet. In my experience, the problem i
Re:SneakerNet * (Score:2)
Authentications/permissions can be realized by using a registry-like Address/Key/Source structure. The address of a chunk in the hive designates what data it is, the key can be 0 for public or an encryption key known to client apps permitted to access the chunk. Source is the data (encrypted or otherwise).
Since the client node is responsible for reassimilating chunks it hived out, the encryption is twofo
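The Address/Key/Source record sketched above might look like this (field choices are entirely hypothetical):

```python
from dataclasses import dataclass

@dataclass
class HiveChunk:
    """Hypothetical chunk record for the Address/Key/Source scheme above."""
    address: str   # designates which piece of data this chunk belongs to
    key: int       # 0 = public; otherwise identifies the encryption key
    source: bytes  # the chunk payload, encrypted whenever key != 0

    def is_public(self):
        return self.key == 0
```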
Re:SneakerNet * (Score:2)
Re:SneakerNet * (Score:2)
Why couldn't something like SVK work for this?
Simple (Score:2)
I've struggled with this one myself... (Score:2)
No difference, just a matter of packaging. (Score:4, Informative)
Having built both true high-reliability fault-tolerant devices and clustered systems, I don't see any fundamental theoretical difference. In both cases, you have redundant hardware capacity in place, theoretically to allow you to tolerate the failure of a certain amount of your hardware (and, sometimes, your software) for a certain amount of time. Neither option guards you against failures outside of the cluster or FT system box. Neither one is a panacea. Both are sold as snake-oil insurance against "badness".
In a single fault-tolerant box, you generally have environmental monitoring, careful attention to error detection, and automatic failover. You also have customer-replaceable units for failure-prone components, utilities for managing all of the redundancy, and a fancy nameplate. In exchange for that, you have more complexity, more cost, serious custom hardware and software modifications, and often (but not always) performance constraints.
In a clustered system, you treat each individual server as a failure unit. Good fault detection is a challenge, especially for damaging but non-catastrophic failure, but it's much easier to configure a given level of redundancy and it's easier to take care of environmental problems like building power (or water in the second floor) -- you just configure part of the cluster a longer distance away.
Where clustering is inadequate is when you have a single mission-critical system where any failure is disaster (like flight-control avionics or nuclear power plant monitoring). There are applications where there's no substitute for redundant design, locked-clock processors and "voting" hardware, and all of the other low-level safeguards you can use.
For Web applications, however, where a certain sloppiness is tolerable, and where load balancing, off-the-shelf hardware and software, and system administration that doesn't require an EE with obsessive-compulsive disorder all count as advantages, clusters are the natural solution.
The fact that you get to sell more licenses for the software is just gravy.
Don't forget Load Balancing (Score:2)
On the web side of things, clustering (actual clustering) sure hasn't come up much in my world. But I use nativ
Standard and up to date hardware and software (Score:2)
The main problem is that building a fault-tolerant server is an arduous task. It takes a lot of engineering and testing. This slows you down, and your product cycles get long. When you bring your new machine to the market, it will look old and slow compared to 'standard' competitors. In addition, your database will be a specialized, proprietary version which does not work with any tool, and the admin staff needs special education to manage and operate it.
Clusters are different. Just take your latest and greatest
Fault tolerance only goes so far (Score:2, Insightful)
The big disadvantage (Score:2)
Re:The big disadvantage (Score:2)
Clustering has a MAJOR problem going with it. Clustering requires applications to be written specifically to support clustering. All sorts of libraries have been written to "make this process easier", but one thing's for sure : it will require a recompile
This is not true at all for many of the most-common cluster applications. Framework software exists which "gangs together" a pool of servers, each of which can run ordinary, non-cluster-aware software. No need to write code, no need for a recompile.
F'ed up terminology.... (Score:2)
Fault-Tolerant Servers: several systems with failover load balancers in front.
I get frustrated when people use the latter and call it the former. True, you could have fault-tolerant servers in a single box, but why? In fact, I'm rolling out infrastructure of the latter in large doses.
This is how Google dunnit. Very well, in fact.
Re:F'ed up terminology.... (Score:2)
Fault-Tolerant Servers: several systems with failover load balancers in front.
I get frustrated when people use the latter and call it the former.
Not necessarily:
High Availability Clusters are what we're talking about here.
You're talking about High Performance Clusters, which is NOT what we're talking about...
For firewalls and/or routers (Score:3, Informative)
http://www.openbsd.org/faq/pf/index.html [openbsd.org]
http://www.openbsd.org/faq/faq6.html#CARP [openbsd.org]
This is not an XOR (Score:2)
BTW, SQL Server does not require that you buy li
Availability vs. Reliability (Score:3, Insightful)
Fault-tolerant servers are nice; even the simplest true server should offer some fault tolerance to a degree (i.e. RAID drives). This is handy, but may not help your availability in the event that you have an SLA promising xx% uptime and then find yourself needing to take the server down to apply service packs or other patches.
Clustered servers allow you to increase the availability of your machines, because when you need to take one down for some updates, you can simply fail over all your traffic to the other server in the cluster accordingly. Clustering may increase the availability of the services those machines are offering, but it doesn't help the reliability of the machines themselves.
Therefore, I personally choose to start with fault tolerant machines initially (RAID and dual power supplies at a minimum). It makes for a good base. If the services on that machine are 'mission critical', then cluster that machine with other fault tolerant machines.
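The layering argument above can be put in rough numbers. Assuming independent node failures and instant failover (both optimistic assumptions), a failover cluster is down only when every node is down at once:

```python
def cluster_availability(node_availability, nodes):
    """Combined availability of `nodes` independent failover nodes:
    the service fails only if all of them fail simultaneously."""
    return 1 - (1 - node_availability) ** nodes
```

Two 99% nodes give roughly 99.99% in this idealized model; real failover delays, correlated failures, and false takeovers eat into that, which is why the fault-tolerant base matters too.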
Capacity Planning (Score:2)
With a cluster, you simply add another machine to the cluster when you need more computing power. You can also take a single machine off the cluster for upgrades, hardware troubleshooting, or to reallocate the single machine to do something else.
As other posters have said, a l
All about the cost/benefit. (Score:2)
A cluster can, if properly designed, protect against all sorts of failures: disc, power supply, controller, motherboard, CPU, backplane, cable, network, some designs can even deal with physical disaster like a fire in one of your
Fault Tolerance (Score:2)
Google as an example (Score:3, Interesting)
Google built massive clusters of thousands of machines out of very cheap, unreliable hardware. They have tons of hardware failures due to the extremely cheap components (and the sheer number of machines), but everything is redundant (and fully fault tolerant).
They did this, again, using dirt cheap hardware.
Re:Google as an example (Score:3, Informative)
Google does not have to worry about ACID compliance in their database. From what I've read about the Google File System, cluster nodes lazily share new data amongst themselves. Serving up old data is explicitly allowed.
To cluster something like an OLTP database, every node has to be immediately informed about updates to the data, and they all have to report back that they have said data intact before the transaction commits. This can be something of a problem when you have hundreds of thousands of updates
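In outline, the all-nodes-must-acknowledge requirement looks like this (a toy sketch, not a real two-phase commit: no timeouts, no coordinator recovery, and the replica objects are hypothetical):

```python
def synchronous_commit(replicas, update):
    """All-or-nothing replicated write in the spirit described above:
    the transaction commits only if every replica acknowledges the
    update; otherwise everything that was applied is rolled back."""
    acked = []
    for replica in replicas:
        if not replica.apply(update):   # replica failed to take the write
            for r in acked:
                r.rollback(update)      # undo on everyone who already acked
            return False
        acked.append(replica)
    for r in acked:
        r.commit(update)                # unanimous ack: make it durable
    return True
```

With hundreds of thousands of updates, every write waits on the slowest replica, which is exactly why this gets painful at scale.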
NetWare and Windows Clustering... (Score:2)
In my experience, the Windows Cluster Nodes will fail into some sort of "undead" state, in which the dead node isn't quite dead yet and the live node never quite picks up the slack, so you end up having to reboot both of them...
The NetWare Cluster Nodes have such a hair-trigger with the default settings that they seem to fail-ove
Scalability (Score:2)
For my business needs I usually see clustered systems as a much greater solution than fault tolerance. When dealing with systems that require fault tolerance, you mostly are concerned with keeping the data they store available (database servers, file systems, etc). When dealing with systems where high avali
Re:Scalability (Score:2)
Usually a clustered server solution is very comparable in cost to a fault-tolerant solution. In general, your clustered boxes are pretty cheap off-the-shelf deals, while fault-tolerant machines are not. When a critical error occurs with a fault-tolerant system, the cost to repair is much greater than in a clustered solution, and downtime can be exponentially higher.
Clustered solutions are designed to maintain uptime even when there is failure.. FT solutions are designed not
How much load? (Score:2)
This especially goes for multiple services, and you may want to mix-and-match. For a CGI+SQL combo, you may prefer to split the web load over a cluster, but you may want to forego the complexity of a clustered database and put your SQL serv
One word. (Score:3, Insightful)
Where did you get the idea SMP is cheaper? (Score:2)
Well, let's see (Score:3, Informative)
http://store.sun.com/CMTemplate/CEServlet?process
For around $20,000 you could build a PC cluster that includes:
20+ x Intel P4 D820 at ~$500 ea.
20+ x AMD64 X2 3800+ at ~$750 ea.
You could almost get a cluster of 40 Intel PCs, each with a dual-core chip running at 2.8 GHz. Or almost 30 AMD64 PCs, each with a dual-core chip running at 1.8 GHz. If you shop smart you can get gigabit ethernet on the motherboard and have a fault-tolerant / redundant system with over 10 times the performance of the Sun system.
I don't know about you, but I would take the cluster of AMD X2s. The Intels might beat 'em on price/performance, but the X2s might be a lil bit nicer to work on.
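The arithmetic behind those node counts, ignoring switches, racks, and shipping:

```python
budget = 20_000
intel_nodes = budget // 500   # ~$500 per dual-core P4 D820 box -> 40 nodes
amd_nodes = budget // 750     # ~$750 per Athlon 64 X2 3800+ box -> 26 nodes
```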
Clustered FT hardware is the proper solution (Score:3, Insightful)
Take a server: Hot-swappable mirrored OS disks, N+1 power supplies, dual NICs (which support failover), dual cards initiating separate paths to your storage (through independent switches, if fibre-attached), ECC RAM with on-system logic to take out a failing DIMM. Oh yeah, and multiple CPUs, again with logic to remove one from active use if need be. (chipkill sort of stuff.)
Now take another identical server (or two) and cluster them. By cluster, I mean add the heartbeat interconnects and software layer to monitor all of the mandated hardware and application resources, and fail over as necessary, or take other appropriate actions. Gluing a pile of machines together in a semi-aware grid is NOT a cluster, and does not properly address the same problem!
Now once you've got this environment in place, add the most crucial aspect: Highly competent sysadmins, and a strict change control system. The former will cost you a fair sum of money in salary, and the latter will likely necessitate duplicating your entire cluster for dev/test purposes, before rolling out changes.
That's the beginning of an HA environment. Still up for it?
Re:And what's so difficult about... (Score:2)
Re:And what's so difficult about... (Score:5, Funny)
Never build systems on a core of failure. (Score:3, Informative)
It is not a good idea to build a system out of parts that you know will fail, and then proceed to design the system around such failure. A far better idea is to spend some money, and design a system that will work. Of course you do take into account hardware failure, and you build i
Re:Never build systems on a core of failure. (Score:5, Informative)
Re:Queue... (Score:3, Informative)
If you are one of those people, stop. A Beowulf cluster is a performance cluster, but it is not a replacement for an SMP system. You more or less have the master node delegate actual comput
Re:This brings to mind Google's strategy. (Score:5, Interesting)
Other interesting tidbits that I remember:
-over 300,000 x86 machines make up the network, with clusters all over the place, which make searches return in under
-commodity hardware (Maxtor, Western Digital, whatever is available) is used.
-over a thousand machines fail daily. Most are automatically rebooted, and it sounded like admins only come into play when a machine needs to be replaced.
-the longest uptime of a single machine has been 7 years
-they use a heavily modified Red Hat distro.
-real-time stats of the entire network can be seen at any moment
I'm sure there were more interesting facts, but that's all I can regurgitate at the moment.