Networking Software Linux

High-Performance Linux Clustering 129

An anonymous reader writes "High Performance Computing (HPC) has become easier, and two reasons are the adoption of open source software concepts and the introduction and refinement of clustering technology. This first of two articles discusses the types of clusters available, uses for those clusters, reasons clusters have become popular for HPC, some fundamentals of HPC, and the role of Linux in HPC."
  • Imagine (Score:5, Funny)

    by commodoresloat ( 172735 ) on Wednesday September 28, 2005 @08:46PM (#13672529)
    Single-processor implementations of this!

    *ducks*

  • Geek (Score:5, Interesting)

    by mysqlrocks ( 783488 ) on Wednesday September 28, 2005 @08:48PM (#13672540) Homepage Journal
    With Linux and other freely available open source software components for clustering and improvements in commodity hardware, the situation now is quite different. You can build powerful clusters with a very small budget and keep adding extra nodes based on need.

    Yea, I'd like to build one but I'm not sure what I'd use it for. Does that mean I'm a geek?
    • Re:Geek (Score:3, Interesting)

      Use it to crack some passwords with Cisilia [securiteam.com].
    • Re:Geek (Score:1, Funny)

      by Anonymous Coward
      geek (gk) pronunciation
      n. Slang.

      1. a. A person regarded as foolish, inept, or clumsy.
         b. A person who is single-minded or accomplished in scientific or technical pursuits but is felt to be socially inept.
      2. A carnival performer whose show consists of bizarre acts, such as biting the head off a live chicken.

      No, it would seem, unless you eith
      • b) took it on the road and used it to perform mind control on chickens

        Well, with a powerful enough cluster I'm sure I could simulate the inner workings of a chicken's mind. Not exactly mind control, but it could help predict and manipulate the behaviour of chickens. I'm sure there would be some scientific use in this besides, of course, biting the heads off of chickens.
    • A real geek would know what to use it for.
      • A real geek would know what to use it for.

        I can think of a hundred things to use it for. The only problem is that none of them have any practical application to my life.
    • Re:Geek (Score:5, Informative)

      by burnin1965 ( 535071 ) on Wednesday September 28, 2005 @09:54PM (#13672805) Homepage
      Do you watch DVDs? Do you dream of squeezing all your DVDs onto a hard drive and streaming them to a media PC attached to your TV?

      You could copy the DVDs at ~8GB each to some large hard drives, or you could transcode them to much smaller formats with all the garbage removed and go from ~8GB/movie to less than 4GB/movie. But to do this you need lots of processing power. A cluster works very well for this and the software is already there for you:

      http://www.exit1.org/dvdrip/doc/cluster.cipp [exit1.org]

      For the cost of some overpriced Dell crap video editing PC you could build a decent diskless cluster. Who needs hard drives, monitors, video cards, keyboards, mice, etc.? At least, more than one set of them. ;)

      burnin
    • Yea, I'd like to build one but I'm not sure what I'd use it for. Does that mean I'm a geek?

      Doesn't matter. Just slap a "Type-R" sticker on it and you'll have the world's largest artificially inflated e-penis. :)
    • I'd like to build a cluster consisting entirely of 486 and Pentium I machines, each with between 4MB and 32MB of RAM, to use as a single desktop PC equivalent to a 3GHz machine with 1GB of RAM...

      Somehow I think, despite my collection of such systems, that it's not entirely practical to assemble that cluster.
  • by Work Account ( 900793 ) on Wednesday September 28, 2005 @08:50PM (#13672549) Journal
    We spent $849,000 on an Itanium cluster and have recently found ourselves SOL since it's a dying architecture.

    You can't even run Java on them.
    • You can't even run Java on them.

      What do you mean? I thought 1.4.2 and up had support for Itanium. Check this white paper [sun.com] (search for Itanium). Are their claims false, or are you running an older version of the JRE?
    • You can't even run Java on them.

      And you would want to run a low-performance, non-scalable application development base on them why?

      Or you just purchased the HPC with hopes of getting good performance out of JAVA? :*)

      This is like saying, "We heated the syrup to 400 degrees F. so it would come out of the jar faster, but now realize this won't work cause the jar keeps breaking."

      Who in the heck would be trying to use something like JAVA on an HPC in the first place, and WHY?

      Do you realize how much performance you
      • I completely agree, but you'd be surprised how many scientists (computer scientists even!) will complain that XYZ algorithm is too computationally intensive...and then you find out they implemented it in Java.

        It's like "DUH!" C or Fortran. Pick one. Or C++, even. But Not Java.

        Of course, when you broach the topic, you will hear things like 'C is old. Fortran is older!' Wtf cares? It's F A S T.

        • You could be the best programmer in the world in FORTRAN or C but don't think that because of that you understand Java. The vast majority of complainers and those who knock Java haven't written a line of code in Java ever in their life. Their experience may be with an applet or two in the browser, if that.

          That being said, go read this article and then report back. I doubt you'll post because it's hard to dispute raw facts from an unbiased research team :)

          ---------------

          Performance of Java versus C++
          J.P.L
          • I'm sorry, but you're wrong.

            Java performance depends entirely on the JVM. If the JVM sucks, then the Java program runs slow. The JVM may be good on PIV x86, but what about Power series? What about Itanium? Clusters don't often use x86's because they consume lots of power and require lots of cooling. And new computing architectures require ... a new JVM! So when the Cell processors are available for scientific computing, C and Fortran compilers will be written to take advantage of their amazing capabilities.
            • Intel *does*, however, make a C compiler and a Fortran compiler, as well as an MPI implementation. They all work pretty darned nicely on a cluster of Itaniums. Granted, I'm not getting the java compiler to work right now (I seem to have gcj on here), but then, I'm not positive that I even installed the whole Java setup since I don't really care about Java on this (or any) cluster. :)
              • *Exactly*. And Intel's C/Fortran compiler is F A S T. (I use it on Itaniums as well. Not that Itaniums are all that fast, but who cares if you have 100+ of them. ;) )

                For Linux, the Fortran compiler is also free right now (as in beer, for non-commercial use), which allows me to code and test from home on my x86 and then recompile it on Itanium. Easy!

          • Thank you for the link. I enjoyed reading the paper. Very well written, and the spiel about "psychology and mythology" in Comp Sci people was great.

            I couldn't help noticing, though:

            Your company shelled out close to a million on hardware. You use java, and the article does specifically discuss numeric benchmarks .... so either you're from the NSA, or (I think more likely) doing some kind of financial/market prediction/analysis.
      • I would wager you've never used Java. I say that not as an insult, but because you simply have yet to realize that a change made back in 1996 made the average piece of Java code run as fast as the average piece of C/C++ code.

        ---

        Five composite benchmarks listed below show that modern Java has acceptable performance, being nearly equal to (and in many cases faster than) C/C++ across a number of benchmarks.

        1. Numerical Kernels

        Benchmarking Java a
        • Java performance was very reasonable compared to C (e.g., 20% slower),

          Right, except the entire point of high-performance computing is, well, high-performance. 20% means that a run that takes 5 days would now take 6. Or it means buying 120 computers instead of 100.

          Many users of HPC try to eke out every possible bit of speed. "If I use network card X instead of network card Y it shaves 2% off the run-time!" 20% is massive to them.
    • Don't worry, Debian will support it. :)

      Although I'm being slightly humorous, I really do appreciate this aspect of Debian, and I was a little sad (although I realize it was completely necessary) when they finally got around to dropping some of the lesser-used architectures. But it is nice to know that if I ever get a masochistic urge to run Linux on a Motorola 68LC040 that I know right where to get it.

      I think IA64 will probably linger around for a while. Eventually it will just become a question of how much
    • Smokin Crack.. (Score:3, Interesting)

      by tempest69 ( 572798 )
      We spent $849,000 on an Itanium cluster and have recently found ourselves SOL since it's a dying architecture.

      You can't even run Java on them.

      OK, where to begin...

      First, spending a million bucks on a machine that doesn't meet your needs. I hope there is an accountant ready to spank someone over this.

      Second, using Java in a massively parallel fashion... Last I knew there wasn't an MPI or PVM port that used Java, plus it kinda defeats the purpose of having big hardware running a slower language (yes I k
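
      For the curious, this is roughly what explicit message passing looks like in C with MPI (a minimal sketch, assuming an MPI implementation such as MPICH or LAM/MPI is installed; compile with mpicc and launch with mpirun):

          /* sum_pi.c - each rank integrates a slice of 4/(1+x^2); rank 0 collects the total */
          #include <stdio.h>
          #include <mpi.h>

          int main(int argc, char **argv)
          {
              int rank, size;
              long i, n = 10000000;                 /* total number of intervals */
              double h, local = 0.0, pi = 0.0;

              MPI_Init(&argc, &argv);
              MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* who am I?            */
              MPI_Comm_size(MPI_COMM_WORLD, &size); /* how many ranks total */

              h = 1.0 / (double)n;
              for (i = rank; i < n; i += size) {    /* interleave the work across ranks */
                  double x = h * ((double)i + 0.5);
                  local += 4.0 / (1.0 + x * x);
              }
              local *= h;

              /* combine the partial sums on rank 0 - one small message per rank */
              MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

              if (rank == 0)
                  printf("pi is approximately %.12f\n", pi);

              MPI_Finalize();
              return 0;
          }

      Run it as, say, "mpirun -np 8 sum_pi" and each node does an eighth of the loop; the only network traffic is one tiny reduction message per rank.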

    • Why can't you run Java?

      The TeraGrid has several large IA-64 clusters (mostly running SuSE Linux); as far as I know, Java works just fine on them.
    • We spent $849,000 on an Itanium cluster and have recently found ourselves SOL since it's a dying architecture.
      I will make your frustration go away and pay you $849.00!!
    • I wouldn't call Itanium a dying architecture.

      We run a cluster of dual Itanium 2 servers and they work great - Java servers and all. They can handle massive loads compared to the Xeon-based servers they replace.
  • So easy today (Score:1, Redundant)

    Clusters are very easy to implement today because there is a lot of software that can configure itself and connect cluster nodes, like OpenMosix [sourceforge.net]
  • Imagine! (Score:1, Interesting)

    by Comatose51 ( 687974 )
    Imagine a Beowulf Clus... oh.

    Jokes aside, when people say Linux cluster, do they usually mean Beowulf? Or are there other clusters and how do they compare? How difficult is it to set up a Beowulf cluster?

    • Re:Imagine! (Score:5, Informative)

      by maswan ( 106561 ) <slashdot2.maswan@mw@mw> on Wednesday September 28, 2005 @09:11PM (#13672647) Homepage
      Beowulf is a specific project/software for doing clusters. In reality, it is not that popular. There are lots of different "whole clustering solutions", and Beowulf is one of those. Even more common in the HPC world are probably homegrown solutions, based on common components.

      /MattiasWadenstein - HPC sysadmin during weekdays

      • Thanks. By components, do you mean software components or hardware? What are some of these common components? What I'm trying to get at is whether there is some way of kick-starting a cluster computing project other than Beowulf (or I assume that's why it's well known), even if it means you have to do some in-house development?
        • Re:Imagine! (Score:3, Informative)

          by maswan ( 106561 )
          Well, other than Beowulf, there is NPACI (sp?) Rocks and a few others like that. I don't have personal experience with those, though, so I've probably missed a lot. Then you have the turn-key clusters from a vendor: you pay IBM or Penguin Computing or whoever to do all this for you before startup; after that, of course, the maintenance is up to you.

          By components I mean software, since hardware is basically just a bunch of servers (or desktops), with optionally faster than commodity n

        • In addition/supplement to what the other poster mentioned, there's Oscar: http://oscar.openclustergroup.org/ [openclustergroup.org] and there's the C3 stuff: http://www.csm.ornl.gov/torc/C3/index.html [ornl.gov]. There's also ROX, which we're not using (not because it's bad, but because we used something else, and I can't find a URL now anyway).

          We're using parts of those first two systems here, combined with some in-house stuff (which we're planning to release when it reaches an acceptable maturity level).
    • Re:Imagine! (Score:3, Interesting)

      by burnin1965 ( 535071 )
      There are other cluster solutions, i.e. http://warewulf.lbl.gov/pmwiki/ [lbl.gov]

      But you can also roll your own. I did mine with Fedora by taking a fresh Fedora install, duplicating the common parts into a common NFS share, duplicating the distinct parts into a template and subsequent node NFS shares, compiling a custom NFSroot Fedora kernel, then setting up a DHCP and TFTP server for the diskless nodes to PXE boot from.

      burnin
    • How did this get modded interesting? It is REDUNDANT.

      A moronic question (the answer is in the fucking article) that wasted other readers' time and created nothing but glut (since the answer is at the URL given in the story - http://www-128.ibm.com/developerworks/linux/library/l-cluster1/ [ibm.com]).

      • Modding my question up was probably the wrong thing to do but I don't think it was a redundant or even a moronic question since many of us have absolutely no experience or knowledge about clustering, other than hearing "Beowulf" being repeated over and over here. Given that, the question asks if Beowulf is the dominant cluster in the Linux world and if there are others, how do they compare? I don't see where in the article these questions are answered. Given that the first real reply to my question was m
  • A Thought (Score:3, Interesting)

    by Crusader7 ( 916280 ) on Wednesday September 28, 2005 @09:00PM (#13672601) Journal
    Okay, so I'd really enjoy trying something like the clustered model, just for academic kicks, but a relevant question comes to mind, at least for me.

    Where do people get the commodity systems cheap enough to be able to play around with this? I hardly want to spend two thousand bucks on some old P2s just to play around. Anyone have some hot tips where you can find real cheap (dare I dream... free) commodity systems to build a low-end cluster for kicks?

    Also, I'm a Windows guy by trade. Will making a Linux cluster make me instantly cool? :)
    • Also, I'm a Windows guy by trade. Will making a Linux cluster make me instantly cool? :)

      No, that makes you a geek....of course, that means you're in good company here ;)

      As far as spending 2k on some P2's, the biggest concern I would have is power consumption for all those machines...but hell, let em rip! :)
    • Also, I'm a Windows guy by trade. Will making a Linux cluster make me instantly cool? :)

      You'll have to at least brush off most of that sulphur first...

    • The club I belong to on my college campus built a Mosix cluster a while back. It was one of our major projects. The way we got our computers was mostly by donation (due to the fact that we were academic in nature). Really, all the computers we got were basically going into the garbage, so we say we picked them out of the garbage. From what I hear though, that's not all that uncommon or unbelievable. Take a look around perhaps and ask around.
    • Re:A Thought (Score:3, Interesting)

      by burnin1965 ( 535071 )
      Linux cluster on the cheap:

      Go with a diskless cluster.
      Buy all-in-one motherboards with onboard video and Ethernet.
      Cases are pretty cheap, but you can save by creating a custom rack solution.
      Spend a little extra on 80% efficiency power supplies ( http://www.seasonic.com/co/index.jsp [seasonic.com] ).

      With that route you could build a decent little cluster for under $2k (USD).

      Will it make you cool? I doubt it, but the path to the solution will teach you many lessons.

      burnin
    • Re:A Thought (Score:4, Interesting)

      by Procyon101 ( 61366 ) on Wednesday September 28, 2005 @10:18PM (#13672982) Journal
      I make it known to friends and relatives that I will set up their new computers in exchange for taking their old ones off their hands... I transfer the data over, make sure it's configured, etc., then take my new cluster node home. :) I get some pretty nice systems this way; since running XP on less than 1 GHz/512 MB of RAM is pretty painful nowadays, people upgrade in droves.
    • If you just want to try clustering out, try using virtual machine software such as VMWare Workstation [vmware.com]. Every VM instance you run is a node in your cluster.

      I've used VMWare very successfully to test a site which was to be hosted on physically load-balanced servers. We needed to know that if a node failed, or a user was redirected to another server in the cluster, their session information would be retrievable without the need to log on again.

      It worked perfectly. The other nice thing about using VMWar

  • by composer777 ( 175489 ) * on Wednesday September 28, 2005 @09:04PM (#13672622)
    From everything I've seen, MOSIX is having some issues right now. Unfortunately, MOSIX is one of the easiest, most flexible ways to set up an HPC, and ever since they forked, development has been slow. I did research about 2 months ago to look into setting up a small MOSIX cluster with a few computers. My main goal was to get my feet wet in setting up a cluster using a few desktop and laptop computers. I figured that setting up a cluster with my Athlon 64 x2, Athlon 64 3500+, and a few laptops would speed up compile times by quite a bit. But, it appears that the 2.6 version of MOSIX is still beta and won't support the kernel I need for my Athlon 64 x2 (versions before 2.6.9 don't support powernow with the x2, and also tend to be flaky). So, I have the choice of running a cluster with slower PC's, or waiting for better support. If you look at the year on some of those whitepapers, only one was written this year, and I'd be willing to bet they are describing how to use MOSIX with the 2.4 kernel, not 2.6. I finally gave up on the idea, as running the latest kernel is more important to me.
    • For distributed compiling use distcc [samba.org].
      Regards,
      Steve
      • Thanks, I've been there, done that. Distcc works fairly well, but there was some other stuff I wanted to do in addition to compiling the kernel which I didn't mention. I work in bioinformatics, and quite a bit of the software I write scales nicely across multiple processors, so I wanted to see how well it would run on a mosix cluster. I figured that it could help speed up some of my testing when I am working at home. We already have clusters at work, I was mainly wanting to set it up as a learning exper
      • I just thought I would add, distcc also breaks gdb, which is a pretty big drawback if you ask me (it doesn't produce debug executables properly). However, my guess is that compiling code on a cluster that is set up with MOSIX probably would produce debug executables properly, since gcc would just see it as one big machine, rather than using a distcc front-end.

    • MOSIX is really more of a halfway-point between a traditional cluster and a "single system image" sort of cluster. Unfortunately, some aspects of clustered computing are still extremely difficult to abstract away into an ssi type of implementation. I had hoped over the years that the MOSIX work would get folded in with mainstream Linux's NUMA scheduling and memory allocation, essentially treating non-local cpu and memory resources (other nodes) like a second layer of NUMA with even less connectivity than
      • The key problem OpenMOSIX has for me right now, is that threaded applications do not migrate.

        That's a real killer if your number 1 CPU eater is called Matlab, if Matlab uses a separate thread for nothing else than its licensing heartbeat (it does so by default), and if you can't afford the number of licenses you'd have to buy (check out the commercial prices for Matlab: they're horrible (we could easily buy our current cluster hardware several times over with nothing else than our annual Matlab maintainan

    • I mean, what do you expect?
      First, it sounds like you don't really need a cluster (speeding up compile times - do you really do that much compiling?). If you did, you wouldn't use laptops.
      Second, there are both commercial and professionally supported open source solutions for compile farms - why not just buy the right software (or support) and focus on coding or other tasks instead?
      • I expect it to work.

        We use pbs at work, and our bladecenter has several hundred processors, which is nice, but you have to share, and sometimes I like to work at home. Unfortunately, working remotely doesn't always work that well, since some of the applications we use to view the finished data are X11 apps that render to large bitmaps, and running them over the net doesn't work that well. So, my solution so far has been to set up my own pipeline at home, and wait a couple of hours when I am testing
    • I'm curious. Are you talking about MOSIX or OpenMOSIX? You mentioned the fork, but not which branch (Or both?) you were looking at. I've got a small (much smaller than yours) cluster of computers at home and on my Project List is investigating an OpenMOSIX cluster since most of my machines sit idle and it would be nice to use the extra CPU cycles and RAM for various things.
  • by Erris ( 531066 ) on Wednesday September 28, 2005 @09:07PM (#13672633) Homepage Journal
    Cool, IBM on software. Add that to this hardware from a year ago [slashdot.org] and you are off to the races. Of course, you could just build the system as designed. Performance does not have to suck electricity and heat your home.

    I'm wanting to build one of these, but I really don't need it. Time may change that.

  • Aggregate.org (Score:5, Informative)

    by PAPPP ( 546666 ) on Wednesday September 28, 2005 @09:11PM (#13672648) Homepage
    For some very good information on F/OSS based clustering, check out aggregate.org [aggregate.org]. They have really neat ideas that are reasonably well documented and freely implementable/usable. I built a little cluster (AFAPI on a WAPERS switch) with them for my high school senior project, and it was a great experience.
  • by flaming-opus ( 8186 ) on Wednesday September 28, 2005 @09:15PM (#13672668)
    Though MPPs are kind of like clusters, and the boundary between the two is vague, I think there's definitely a distinction. In many MPPs, nodes share access to memory, just at a performance penalty. Often the scientific binary is written using a message-passing tool like MPI, but the OS is often run with direct memory access. Definitely from a systems-administration point of view, an MPP is different from a cluster. In an MPP you don't have 4000 root hard drives and 4000 power supplies to replace when they break. An MPP may be like a (fast) cluster from the programmer's point of view, but they are a lot simpler to deploy and manage. (Blue Gene, XT3, Altix)

    I also contest some of the distinctions drawn about vector processor systems. The two vector systems currently on the market, the Cray X1 and the NEC SX-8, are clusters. Each node just happens to be a vector SMP. The Earth Simulator is a 640-node cluster of 8-way SMP boxes, where each of the processors in the SMP is a vector CPU. However, the predominant programming method even on these boxes is explicit message passing like MPI. Co-array Fortran and Unified Parallel C are faster, but slow to catch on.

    Good summary of the common case though.
    • It's very nice to see someone who gets it. One of the more interesting links off the article is Sandia's reevaluation of Amdahl's Law. If you read Sandia's other research involving the Red Storm project, they point out that Amdahl's Law doesn't have to be as big a killer as previously thought on truly massive systems when paired up with a system that has good balance in memory, network, and processor bandwidth. Most clusters have drastically worse balance than custom MPPs and it causes many algorithms to
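
      For reference, Amdahl's Law bounds the speedup of a job in which only a fraction P of the work parallelizes: speedup(N) = 1 / ((1 - P) + P/N). A quick C sketch of how hard the serial fraction bites at large node counts (the numbers here are illustrative, not taken from the Sandia paper):

          /* amdahl.c - print the bounded speedup for a few node counts */
          #include <stdio.h>

          static double amdahl(double p, int n)
          {
              /* serial fraction (1-p) never shrinks; parallel fraction p is divided across n nodes */
              return 1.0 / ((1.0 - p) + p / (double)n);
          }

          int main(void)
          {
              const double p = 0.95;                    /* assume 95% of the work parallelizes */
              const int nodes[] = { 8, 64, 512, 4096 };

              for (int i = 0; i < 4; i++)
                  printf("%5d nodes -> speedup %6.1fx (ceiling %.0fx)\n",
                         nodes[i], amdahl(p, nodes[i]), 1.0 / (1.0 - p));
              return 0;
          }

      Even with 95% of the code parallel, 4096 nodes buy less than a 20x speedup, which is why the balance of memory, network, and processor bandwidth matters so much on big machines.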
    • In many MPPs, nodes share access to memory, just at a performance penalty.

      These are typically called Distributed Memory systems. Each node and CPU can address any memory in the machine, but locality is visible to the programmer.

      Unfortunately, there's no standard taxonomy for HPC systems. I tend to think of MPPs as a tightly-coupled machine with lots of processors. They can support shared-memory, distributed-memory, and message-passing programming models. Some have all three, others just one or two
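
      To make the programming-model distinction concrete, here is the shared-memory flavour of a parallel loop, using OpenMP in C (a sketch, assuming a compiler with OpenMP support; build with something like gcc -fopenmp). Threads share one address space instead of exchanging messages:

          /* omp_sum.c - shared-memory parallel sum over one address space */
          #include <stdio.h>
          #include <omp.h>

          int main(void)
          {
              const long n = 10000000;
              double sum = 0.0;
              long i;

              /* the reduction clause gives each thread a private partial sum
                 and combines them at the end - no explicit messages anywhere */
              #pragma omp parallel for reduction(+:sum)
              for (i = 0; i < n; i++) {
                  double x = (i + 0.5) / (double)n;
                  sum += 4.0 / (1.0 + x * x);
              }

              printf("pi is approximately %.12f (with %d threads)\n",
                     sum / (double)n, omp_get_max_threads());
              return 0;
          }

      On a true shared-memory box or MPP node this scales with the processors in the box; spread the same work across a cluster and you are back to message passing (MPI) or a distributed-memory model.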


      • This part of the article makes it plain that the author really doesn't know much about HPC systems. A vector machine uses vector processors. That's it. It says nothing about whether the machine is a cluster or MPP and what kinds of programming models it supports (distributed memory, shared memory, etc.).

        The SX-6/8 and X1/X1E are not clusters.


        Now that's a bit inconsistent. First you say that there's no agreed upon taxonomy of HPC systems, and then you blast the parent poster for not using YOUR definition of
        • There is no solid definition of what differentiates a cluster, but it's a bit misleading and confusing to refer to an architecture that doesn't have any of the characteristics of a traditional cluster as a cluster. Most clusters have a star or bus network topology of Infiniband, Myrinet, or Ethernet. An X1 uses a 3D torus of redundant, high-bandwidth links. Each node is a board in a backplane, not just a rackmount system. While defining a cluster as a machine where each node has its own OS is a convieni

          • There is no solid definition of what differentiates a cluster


            Exactly my point.


            but it's a bit misleading and confusing to refer to an architecture that doesn't have any of the characteristics of a traditional cluster as a cluster.


            But then, what are the traditional characteristics of a cluster? And why haven't those become the normal definition then?


            Most clusters have a star or bus network topology of Infiniband, Myrinet, or Ethernet. An X1 uses a 3D torus of redundant, high-bandwidth links.


            I don't think netw
      • As I said, the distinction of cluster is a soft and squishy one. One must distinguish between a "commodity cluster" and a cluster. The SX-8 (or SX-6, or Earth Simulator) uses 8-CPU shared-memory bus nodes, and connects them with the IXS crossbar switch. Alternately, one can get these nodes connected with HIPPI for a lower cost. While they support distributed memory operations, NEC suggests using message-passing methods between nodes, as this often performs the best. I don't know if this recommendation is ba
  • by Frumious Wombat ( 845680 ) on Wednesday September 28, 2005 @09:29PM (#13672717)
    But their links could at least have mentioned OSCAR http://oscar.openclustergroup.org/ [openclustergroup.org] or my personal favorite, ROCKS http://www.rocksclusters.org/ [rocksclusters.org], as these are more prevalent than xCAT systems.

    Personally, I like Rocks, as I ran three parallel architectures (i386/AMD64/IA64) on the same base distribution, just with each tuned to its particular processor. It comes with SGE and Myrinet support out of the box, and there are Rolls, i.e. custom software assemblages, for OpenPBS, for those who prefer it, as well as PVFS. It's easy to set up, and easy to administer, as the nodes are presumed to be interchangeable and disposable. When you reboot a node, it's obliterated and a fresh OS and supplementary package repository are laid down on a clean disk. No questions about version skew.

    They now have a custom roll to help you build a visualization wall, but I never had a chance to try that one. (try convincing your boss that you want 4 digital projectors and a big room to play with)

    The downside to the above distributions is that they presume batch-queue environments, which is appropriate for most of my work, but less so for many people trying to simulate owning an SMP without paying SMP prices.

    Other people assure me that the current version of OSCAR is solid as well, but they seem to lag in the multiple-architecture support area (Itanium is always behind), and don't currently support AMD64 natively. On the other hand, they build on top of several RedHatish linuces, as opposed to Rocks where you get CentOS (RHEL), period.
    • Yeah, Rocks is a nice Cluster In a Box(tm): just install it and an instant cluster will appear :-)
    • Rocks definitely has its flaws though. Its documentation is lacking too, especially in troubleshooting.

      Try making a separate home partition during the install. You'll end up with a borked kickstart server due to the /export/home automount to /home failing. So you reinstall

      Then you'll decide to add openpbs/torque after you've got your cluster up.. D'oh! That roll can only be installed during the initial install... so you reinstall again.

      Then you decide to make some changes to your systems.. say se
  • Rocks Clusters (Score:5, Informative)

    by lheal ( 86013 ) <{moc.oohay} {ta} {9991laehl}> on Wednesday September 28, 2005 @09:51PM (#13672797) Journal

    Rocks [rocksclusters.org] has a great system for making high-performance clusters from similar machines. A Rocks cluster consists of a front-end ("master") node and a bunch of compute nodes (and I think special-purpose nodes).

    The master gets a full Linux (RedHat-based) install. It's an NFS/DHCP/Kickstart server for the compute nodes, and runs whatever other services you want the compute nodes to use. The master has two network cards and acts as a firewall (NAT optional).

    The compute nodes boot via DHCP and Kickstart, downloading their kernel and whatever other OS files you want to their local disk. You decide how much NFS or local disk to use.

    Job queueing is handled by, e.g., Sun Grid Engine [sunsource.net] (an Open Source queueing package) or some other queueing software.

    Here's the neat thing: to make a change to a compute node setup, you change the Kickstart config and reboot all the compute nodes (as they finish whatever queued work they're doing, or immediately if you want). That makes the sysadmin's life easy, while still maintaining the speed of having the OS on the local disk.

  • How do I create a database of 8 billion records with 100k size each?
  • For those interested, there is a new website on clusters called ClusterMonkey [clustermonkey.net]. It just got started and has plenty of good free content (and more is coming).

  • Anyone remember Transmeta? Well check out what they do now!

    http://orionmulti.com/ [orionmulti.com]
  • by cmholm ( 69081 ) <cmholm&mauiholm,org> on Wednesday September 28, 2005 @11:00PM (#13673239) Homepage Journal
    I had originally posted this 'way back during the Ask Donald Becker [slashdot.org] call for comments. AFAIK, we never got a Donald Becker Replies, but life goes on. I should note that my shop is loaded to the gunwales with IBM clusters, some of the nodes for which are 32 cpu SMPs.

    [Donald's] work in making the "piles of PCs" approach to high performance computing a reality with Beowulf has been responsible for vastly expanding the construction and use of massively parallel systems. Now, virtually any high school - never mind college - can afford to construct a system on which students can learn and apply advanced numerical methods.
    In retrospect, however, it would seem that the obvious cost benefits of Beowulf very nearly killed the development and use of large SMP and vector processing systems in the US. My understanding of the situation is this:

    * Before Beowulf, academics had a very hard time getting time on hideously expensive HPC systems.

    * When Beowulf started to prove itself, particularly with embarrassingly parallel problems using MPI, those academics who happened to sit on DARPA review panels pushed hard to choke off funding for other HPC architectures, promising that they could make distributed memory parallel systems all singing, all dancing, and cheap(er).

    * They couldn't really deliver, but in the meantime, Federal dollars for large shared memory and vector processing systems vanished, and the product lines and/or vendors with it.... at least in the US.

    * Eight years later, only Fujitsu and NEC make truly advanced vector systems [top500.org], and Cray is only now crawling back out of the muck to deliver a new product. Evidently someone near the Beltway needs a better vector machine, and Congress ain't paying for anything made across the pond.


    Cutting to the chase, did [Donald Becker] advance a "political" stand among [his] peers within the public-funded HPC community, or [was he] just trying to get some work done with the budget available at NASA?

    • Cutting to the chase, did [Donald Becker] advance a "political" stand among [his] peers within the public-funded HPC community, or [was he] just trying to get some work done with the budget available at NASA?

      C. None of the above. Clusters are about economics and the effect commodity hardware has on the market. Don did what any good engineer does: he asked "what if?"

    • Nah, it's all about using the right tools for the job.

      Clusters are a good thing, as they provide a very cost-effective platform for running codes with modest communication requirements. Just like running a communication-intensive code on a cluster will limit performance, running a code with little communication on a "real" supercomputer is a waste of money.

      The sad thing about the current "HPC crisis" is not the rise of clusters, but the use of clusters for tasks which they are ill suited for (typically, "gr
  • from a business point of view. And try to sell it across the business community; companies are still on Windows. They need to not just encourage such endeavours (from their high towers) but also adopt them in order to help small-scale and medium-scale businesses take full advantage of HPC on Linux.
  • my experience (Score:3, Interesting)

    by netjiro ( 632132 ) on Thursday September 29, 2005 @03:24AM (#13674140)
    I have deployed several clusters throughout the years, mainly for research in academic environments and small companies, and I can say that clustering makes a lot of things so much easier.

    Diskless SSI clustering makes maintenance a breeze, and ensures that all systems are always in sync and up to date. All nodes can run the same system image, whether they are servers, dedicated compute nodes, or regular desktop machines.
    Of course you can still have local hard disks if you want, and for some apps it is recommended, but the system boots from the servers nonetheless.

    OpenMosix dynamic distribution makes it possible to use heterogeneous hardware, and handles highly dynamic computational load quite well. The applications just wander off to whatever physical machine will run them the fastest.
    This also makes simple parallel implementations of code a lot easier: just fork and forget (see the sketch at the end of this comment), and you pay a small overhead for the benefit of having good load balancing automagically.

    Dynamic distribution also makes it possible to use regular desktops as cluster nodes along with the dedicated compute nodes.
    Need Windows dual-boot on some nodes? No problem: when you shut them down to boot Windows, the processes that used to run on those machines just migrate to another node. When you go back to Linux, the processes come back.

    Need explicit parallelism? No probs: MPI/PVM etc. work fine together with the dynamic distribution and complement it for applications that are already well parallelized.

    Scaling? This has never been an issue as long as the network infrastructure is up to speed. A decent 100Mb or gigabit network has proven to be good enough for just about everything I've seen.

    High availability? How about having several servers that can run as hot or cold spares for each other, and which can function as compute nodes as well... Nice when a server motherboard catches fire (yes, I've had that, and lost as much as a few minutes of work time: the time for someone to walk to the server room, unplug the smoking machine, and restart a running (cold spare) backup server). Most of the people at the lab didn't even notice the hiccup.

    Batch/job queues? No probs: use Sun Grid Engine, write your own, or whatever. Simple as cake.

    I have mainly used Gentoo Linux for the flexibility and ease of maintenance, and I can highly recommend it. It is all fairly simple to implement on Gentoo. Just read up on Gentoo system administration, pxelinux, tftp, openmosix, and whatever you feel you need to use it for.

    The main problem right now is the lack of good openmosix support for the 2.6 series of kernels. But I'm sure that some or all of this can be built with any or all of the other dynamic distribution systems out there.

    If you have off-list questions please contact me at my nick at gmail.com.
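
    As a footnote on the "fork and forget" style above: under a single-system-image layer like OpenMosix the parallelism really is just plain Unix processes, which the cluster is then free to migrate. A minimal sketch in C (the worker function is a hypothetical stand-in; nothing here is OpenMosix-specific):

        /* forkforget.c - fork one worker per chunk and let the cluster balance them */
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/wait.h>

        /* stand-in for the real per-chunk computation */
        static void crunch(int chunk)
        {
            double x = 0.0;
            for (long i = 0; i < 100000000L; i++)
                x += 1.0 / (double)(i + 1 + chunk);
            printf("chunk %d done (%.6f)\n", chunk, x);
        }

        int main(void)
        {
            const int chunks = 16;

            for (int c = 0; c < chunks; c++) {
                pid_t pid = fork();
                if (pid == 0) {          /* child: do one chunk and exit */
                    crunch(c);
                    _exit(0);
                } else if (pid < 0) {
                    perror("fork");
                    return 1;
                }
                /* parent: just keep forking - "fork and forget" */
            }

            while (wait(NULL) > 0)       /* reap the children as they finish */
                ;
            return 0;
        }

    Each child is an ordinary process, so a migrating SSI kernel can shove it onto whichever node is least loaded; on a plain SMP box the same code simply uses the local CPUs.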
  • Ten years ago Informix supported MPP configurations in which it would sit on top of MPP servers like the IBM SP2 (Deep Blue - the chess computer). In this configuration you'd have 20-200 separate nodes, each with its own memory & disk and Informix would be responsible for spreading its data across the nodes, running a query across nodes, and joining the results.

    The SP2 was originally intended for scientific computing, but then most were sold as giant database servers. Around 1996 (I think) DB2 also su
  • OpenVMS clustering, arguably the most mature and most flexible clustering available, was somehow omitted from IBM's view of the clustering universe. Why didn't they address hot/hot[/hot[/hot...]] configurations? How about Single System Image (every member boots from the same system disk) configurations? (These two are not mutually exclusive).

    These two are the Holy Grail of clustering capabilities. Um, no wonder IBM didn't mention them. And only the grey-haired /.ers remember VMS anyway.
  • off the front page.

    My company (and me specifically) designed/built/runs a Windows 2000 cluster. It's not as affordable as a Linux cluster, but our simulation engine is a Windows-only product and there does not exist anything close for other platforms (I wish!!!). We have a huge efficiency rating with our in-house designed cluster system. A simulation that takes 8 minutes on a single serial processor takes less than 1 minute on an 8-computer cluster. Yes, you read that right, we are more efficient in a c
    • Eventually I suspect that 128-bit systems will remove that inherent problem; we have simulations that easily take 16+ GB of memory to complete that we just can't run on a single system. Until then, clusters are the way to go.


      You can currently get up to 32GB of RAM on a dual Opteron, 64 on a quad, or 128 on an 8-way. This is using 4GB (expensive) DIMMs. 2GB DIMMs are much cheaper nowadays though, and 16/32/64 are still respectable numbers!
