Truly Off-The-Shelf PCs Make A Top-500 Cluster

SLiDERPiMP writes: "Yahoo! News is reporting that HP created an 'off-the-shelf' supercomputer, using 256 e-pc's (blech!). What they ended up with is the 'I-Cluster,' a Mandrake Linux-powered [Mandrake, baby ;) ] cluster of 225 PCs that has benchmarked its way into the list of the top 500 most powerful computers in the world. Go over there to check out the full article. It's a good read. Should I worry that practically anyone can now build a supercomputer? Speaking of which, anyone wanna loan me $210,000?" Clusters may be old hat nowadays, but the interesting thing about this one is the degree of customization that HP and France's National Institute for Research in Computer Science did to each machine to make this cluster -- namely, none.
  • by Andorion ( 526481 ) on Thursday October 04, 2001 @04:14PM (#2389888)
    Can you imagine a Beowulf cluster of these... erm... clusters?

    -Berj
  • "We had to face heterogeneity by spreading it over Linux and Windows too," she said. "It's not scientific, but technically it's good experience." its good to see the two getting along...
  • I've got an old POS Compaq with a Pentium 1. I say we all get our old POS's out and make our own cluster to get in that top-500 list :)
  • by UserChrisCanter4 ( 464072 ) on Thursday October 04, 2001 @04:56PM (#2389907)
    Was when the HP-powered cluster started assimilating some of the Compaq multi-Alpha machines as its own.
  • $210,000 ?? (Score:5, Interesting)

    by a.out ( 31606 ) on Thursday October 04, 2001 @04:58PM (#2389914)
    How about $0? Baldric [baldric.uwo.ca], a student-run Beowulf at the University of Western Ontario, was built on hardware donations. It's not exactly top 500, but it still kicks ass.
    • I'm wondering: if you use many old CPUs (486, early Pentiums) vs. not so many recent ones (PIII/Athlon, ~1GHz), wouldn't you pay more for your electricity bill than you saved on the hardware?

      Is there anything like a MIPS/Wh rating for CPUs? (Would thermodynamics dictate a certain minimum?)

      With a separate power supply and hard disk per CPU (i.e. a complete box), I would imagine that old PCs generate a *lot* of heat per CPU cycle.

      Has anybody done measurements/calculations on this?

      • Actually, the best MIPS/Wh is probably with the slower versions of the current laptop chips. Maybe portable G3/G4?

        Also, I don't think you'd get much useful work done with early Pentiums and 486s. Consider that a P4 2 GHz has 20 times the clock speed and probably does twice as much per cycle, so it's ~40x faster. Now, if you connect 40 P100s together, unless your problem is completely parallel (like breaking keys, as opposed to most linear algebra), you're going to lose at least a factor of 2 there. This means that in order to equal 1 P4 @ 2 GHz, you'll need almost 100 Pentium 100 MHz machines, and that 10 P4s would be like a thousand Pentiums. At those numbers, it's going to cost so much in networking and power...

        I'd say (pure opinion here) the slowest you'd want today is something like a Duron 1 GHz, and the best MIPS/$ is probably a bunch of dual Athlon 1.4 GHz boxes (a dual isn't cheaper than 2 singles, but you get more done because of parallelism issues). The sketch below runs the scaling numbers.
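
        For what it's worth, the scaling loss being hand-waved here is just Amdahl's law. A minimal sketch in C, assuming a made-up 95% parallel fraction (the 40x P4-vs-P100 figure and the factor-of-2 loss above are estimates, not measurements):

        /* Diminishing returns under Amdahl's law.  The parallel
           fraction p is an illustrative assumption, not a measured
           property of any real workload. */
        #include <stdio.h>

        /* Speedup on n equal nodes when a fraction p of the runtime
           parallelizes perfectly. */
        static double amdahl(double p, int n)
        {
            return 1.0 / ((1.0 - p) + p / (double)n);
        }

        int main(void)
        {
            const double p = 0.95;              /* assumed parallel fraction */
            const int nodes[] = { 1, 10, 40, 100, 1000 };

            for (int i = 0; i < 5; i++)
                printf("%5d nodes -> %6.2fx effective speedup\n",
                       nodes[i], amdahl(p, nodes[i]));
            /* At p = 0.95 the speedup caps at 20x no matter how many
               P100-class boxes you add, which is why the networking
               and power costs stop paying off. */
            return 0;
        }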
        • My experience doesn't suggest that the P4 does twice as much per cycle. I'm seeing P4s do a fair bit less than the P3 per cycle, and the P3, P2, and PPro cores didn't seem *that* much faster per clock than the original Pentiums. My gut tells me that the P4 doesn't do any more than the original Pentiums per clock cycle, and the only thing they have going for them is Intel's ability to manufacture them at high clock speeds.

          If you really want a cpu that does a lot in a single cycle, look at the IBM POWER series. IIRC, on the floating point side, a 2xx MHz POWER III isn't too far from an Alpha 21264 at 733 MHz. And now there are 1.1GHz and 1.3GHz POWER IV chips in the new IBM p690 machines. I don't know how they compare to the POWER III per cycle, though, because the POWER IV opens a whole new (good) can of worms.

          -Paul Komarek
          • I'm seeing P4s do a fair bit less than the P3 per cycle, and the P3, P2, and PPro cores didn't seem *that* much faster per clock than the original Pentiums.

            For "unoptimized" applications, that may be approximatly true. However, if you're going to build a cluster, you're also going to optimize your code for it. What kills the P4 is branch misprediction. However, by carefully writing your code, you can avoid most of these problems. Also, most of the big clusters are for numerical code, for which branch predictions does well (plus you can do lots of loop unrolling).

            Another thing that the P4 (and PIII) has is SSE. On a P4 2 GHz, you can theoretically do 8 gflops. In practice, if you write good code, you'll get between 1 and 2 gflops. On a plain Pentium, the FPU is not pipelined, so a P100 (I'm guessing here) probably has a theoretical maximum of ~25 mflops with a performance for real code around ~10 mflops. That means the P4 is probably 100-200 times faster at floating point than a P100.

            Of course, you're right in saying that other architectures are probably faster than the P4. ...and by the way, I'm not saying that the P4 is great... but if you're doing numerical stuff and using SSE, it's VERY fast (in my experience, 3DNow! has been faster than SSE at the same clock rate, but 2GHz is too far above the fastest Athlon's clock).
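
            For reference, the 8 gflops figure is just clock rate times SIMD width, a rough theoretical peak assuming one 4-wide single-precision SSE operation retired per cycle: $2\,\mathrm{GHz} \times 4\ \mathrm{floats} = 8\ \mathrm{GFLOP/s}$. The 1-2 gflops real code gets is then 12-25% of peak.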
            • My experience with optimizing research code for a particular platform: it will never happen, unless you're starting from scratch and expect your code to have a short lifetime. We've got libraries that are several years old, written to be portable across various unix and Windows platforms, running on MIPS, Alpha, x86, SPARC and PA-RISC. These libraries aren't optimized for any particular platform, and nobody has time to mess with platform-specific optimizations.

              I've never tried optimizing for SSE, but someone in lab did once. He reported higher performance when doing his computations element-at-a-time than vector-at-a-time. His conclusion, for his particular application, was that memory latency was killing SSE. He was better off doing lots of work on a few numbers, than he was doing fancy stuff to a lot of numbers with SSE. On the other hand, some people have had some luck with SSE optimized FFTs, or so I've heard.

              At 2GHz, I'll bet that you're better off doing element computations than vector computations because of the radical difference in memory-versus-processor performance, if the P4's L1 takes more than a cycle to feed the registers. Otherwise, do whatever fits in L1 and can be prefetched -- like elementwise computations on long vectors. Anyone have any real or anecdotal evidence to refute or support this theory?

              In the end I think that platform-specific optimizations are a waste of time for research code. I seem to remember some people eventually including hooks in BLAS or LAPACK to allow the user to specify cache sizes; and FFTW does some runtime experiments for optimization. But my guess is that, overall, SSE, 3DNow!, and even AltiVec are irrelevant to most computer researchers. I'll bet they're highly relevant to most embedded engineers and many robotics researchers, i.e. people targeting specific hardware.

              -Paul Komarek
              • I've never tried optimizing for SSE, but someone in lab did once. He reported higher performance when doing his computations element-at-a-time than vector-at-a-time. His conclusion, for his particular application, was that memory latency was killing SSE.

                Maybe his problem was really special, but most likely he didn't know how to write SSE code. First, if you write your code correctly, the worst you can do is a bit better than the x87, because you can use SSE with scalars and take advantage of the linear register model (as opposed to a stack).

                The only time I've converted some code to SSE, I got a 2-3x improvement in speed. There is one thing you really have to be careful about when writing SSE code: ALIGN EVERYTHING TO 16 BYTES. That's very important. The "movaps" (move aligned packed single) is more than twice as fast as the "movups" (unaligned) when data is unaligned (movups on aligned data is not too bad). That makes all the difference. Also, sometimes you need to change the way you do your computations. (There's a minimal sketch of this after this post.)

                For the case I had (optimizing a speech recognition engine), just by changing the order of the loops (inner vs. outer), we got a 2-3x improvement (still with x87) because of the increased L1 hit rate. Then when switching to SSE, it ended up at a 5x improvement over the original code. Had I just re-written the code in SSE (without the cache optimization), the gain would have been around 25%, because memory would still be the bottleneck.

                As for libraries not being optimized, just look at FFTW (www.fftw.org) or ATLAS (www.netlib.org/atlas) and you might change your mind.
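
                A minimal sketch of the alignment point, in C with SSE intrinsics rather than hand-written assembly: _mm_load_ps compiles to movaps (and faults on unaligned data), while _mm_loadu_ps compiles to the slower movups. The array size and the y += 2x computation are arbitrary illustrations, not code from the engine above.

                /* SSE alignment sketch: everything is 16-byte aligned so
                   the aligned load/store forms (movaps) can be used. */
                #include <stdio.h>
                #include <xmmintrin.h>

                #define N 1024        /* multiple of 4 for the 4-wide loop */

                int main(void)
                {
                    /* _mm_malloc guarantees the 16-byte alignment movaps needs. */
                    float *x = _mm_malloc(N * sizeof(float), 16);
                    float *y = _mm_malloc(N * sizeof(float), 16);
                    for (int i = 0; i < N; i++) { x[i] = (float)i; y[i] = 1.0f; }

                    __m128 a = _mm_set1_ps(2.0f);         /* broadcast 2.0 */
                    for (int i = 0; i < N; i += 4) {
                        __m128 xv = _mm_load_ps(&x[i]);   /* aligned load: movaps */
                        __m128 yv = _mm_load_ps(&y[i]);
                        /* y += 2*x, four floats per iteration */
                        yv = _mm_add_ps(yv, _mm_mul_ps(a, xv));
                        _mm_store_ps(&y[i], yv);          /* aligned store */
                    }

                    printf("y[10] = %f (expect 21.0)\n", y[10]);
                    _mm_free(x);
                    _mm_free(y);
                    return 0;
                }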
  • by Pulzar ( 81031 ) on Thursday October 04, 2001 @04:58PM (#2389915)
    Should I worry that practically anyone can now build a supercomputer?

    Unless "practically anyone" has the funds, the storage room, and the manpower to maintain this monstrosity, there is nothing to worry about.

    And even if anyone could build a supercomputer, what's there to worry about? We don't live in the "War Games" world where supercomputers play chess, tic-tac-toe, and start nuclear wars for fun.

    • Actually, I worry that people like Saddam Hussein might get ahold of enough parts and knowledge to build a system specifically for modeling atomic weapons designs. He's tried it before, when he attempted to get hundreds of PS2's when they first came out.
      • Actually, the worry about the PS2 machines was that their imaging capabilities are strong enough to be used in missile guidance systems. I think he never actually attempted to get any of them, but the US blocked shipments to Iraq just in case.

      • My guess is that people who understand how to use computers to do modeling for nuclear weapons design would be somewhat harder to come up with than the appropriate degree of computing power.

        Knowing nothing about it, I would nonetheless guess that it's rather non-trivial.

        Keep in mind that nukes were invented without the aid of a Beowulf or a Cray.
        • It is true that the first nukes were developed without "a Beowulf or a Cray"- BUT, to develop good nukes without doing lots of tests (and the U.S. led the world in sheer number of tests) you need fast computers. To develop small nukes you definitely need fast computers... Hence the paranoia over supercomputers in the Wrong Hands(TM).
    • I think I'd be more worried about someone building a cluster using AMD Athlons, and thus reducing everything within a 500 meter radius to a smoking pile of ash.
    • >Should I worry that practically anyone can now build a supercomputer?

      Unless "practically anyone" has the funds, the storage room, and the manpower to maintain this monstrosity, there is nothing to worry about.

      It's only 256 machines; how hard could it be to steal 256 computers? Rip off a couple of Gateway (are they still in business?) and Dell trucks, maybe rob a few schools. Now that you've gotten (by fair means or foul) a supercomputer that is rated in the top 500, what would you like to do with it? How about any brute-force crack that you feel like (it's not like you need some neat algorithms with that much firepower)? How about generating insane encryption levels that the FBI and CIA can't crack?

      Of course, 256 people with laptops and wireless modems, all running Mandrake, sitting in the train station solving tough mathematical problems would just annoy people....

  • by nairnr ( 314138 ) on Thursday October 04, 2001 @04:59PM (#2389921)
    Well, it seems like super clusters are becoming very easy to build hardware-wise. If you throw enough commodity hardware at a problem, it becomes easier. I would think the biggest problem with supercomputers is no longer the hardware itself, but the networking and the programming to take advantage of the hardware. These computers still only really work for something that distributes easily. The biggest factors are now the ability to distribute and schedule work for each node. The more nodes you engage, the more you hope your problem is CPU-bound, so it will scale better. (There's a minimal sketch of the easy case after this post.)

    Data transfer and message passing are such a big issue that I believe the most important developments are in the networking topologies and hardware for these environments.

    That said, I still want one in my basement :-)
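
    A minimal sketch of the "distributes easily" case, in C with MPI: each node computes an independent slice of a made-up partial sum, and the nodes communicate exactly once, at the end. Toy workload; the MPI calls are standard.

    /* Embarrassingly parallel sum: no communication until the final
       reduce, which is the shape that scales well on cheap networks. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank takes every size-th term of the (made-up) series. */
        long long local = 0;
        for (long long i = rank; i < 100000000; i += size)
            local += i % 7;

        long long total = 0;
        MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("total = %lld, computed on %d nodes\n", total, size);
        MPI_Finalize();
        return 0;
    }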
    • That said, I still want one in my basement :-)

      Yeah. Me too.

      This even more strongly underscores the current state of the art: we have astonishing hardware. OK, now what do we do with it? Other than grumble about Windows XP memory requirements? :-)

      I wouldn't mind finding out. I bet it will be cool.

      ...laura

    • That said, I still want one in my basement

      That'd be good in the winter, but, hey, it's going to be really hot at your place next summer!
  • by bstrahm ( 241685 ) on Thursday October 04, 2001 @05:01PM (#2389926) Homepage
    How powerful standard desktop computers are. There are only two orders of magnitude between a normal desktop computer (I refuse to call a Pentium III 733 outdated) and a mainframe computer.


    Now all we need are ways of getting local connections significantly faster (Did someone say Gig Ethernet) to allow faster communication between the nodes and we will be able to scale beyond several hundred and break the top 100. I hear 1gig NICs will be falling in price to under $100 US retail soon...


    How fast do you connect to your cluster?

    • No... What really is needed is not more bandwidth, but lower latency (more bandwidth IS nice, however)... That way we could have hard real-time distributed supercomputers. Wouldn't mind one or two of those myself...
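
      The standard first-order model of why: the time to move an n-byte message is $t = \alpha + n/\beta$, where $\alpha$ is the per-message latency and $\beta$ the bandwidth. For the small messages typical of tightly coupled codes, the $\alpha$ term dominates, so doubling $\beta$ barely helps. (The symbols are the usual textbook ones, not anything from the article.)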

      • Well, here I thought we were staying with commodity hardware, but if we are going to go beyond commodity hardware into stuff that is engineered to provide low latency and high bandwidth, let's look at technologies like Infiniband [infinibandta.org], which is engineered to avoid almost all OS latencies by delivering data directly to applications from hardware with OS bypass (almost no software between your app and the hardware anymore), and at 2 Gb/sec it isn't too bad on the performance scale. However, I am betting that when this stuff comes out it will be a little pricey and go slightly over the $210,000 budget.
    • I hear 1gig NICs will be falling in price to under $100 US retail soon...

      I hear Gigabit switches won't be...
    • by BrentN ( 90935 ) on Thursday October 04, 2001 @09:01PM (#2390677)
      The problem with Ethernet in clustering isn't the bandwidth, it's the latency.

      The real issue is how parallel-efficient your algorithms are. We do molecular dynamics (MD) on large clusters, and we can get away with slow networks because each node of the cluster has data that is relatively independent of all other nodes -- only neighboring nodes must communicate (see the sketch after this post). If you have a case (and most cases are like this) where every node must communicate with every other node, it becomes a more difficult problem to manage. To deal with this, you need a high-speed, low-latency switch like the interconnects in a Cray. The only real choice for that is a crossbar switch, like Myrinet.

      And Myrinet is très expensive.
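
      A minimal sketch of that nearest-neighbor pattern, in C with MPI: each rank swaps only its boundary ("halo") values with adjacent ranks, so per-node traffic stays constant as the cluster grows. 1-D decomposition and toy data; this is not the poster's actual MD code.

      /* 1-D halo exchange: rank i talks only to ranks i-1 and i+1. */
      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);
          int rank, size;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          /* Edge ranks talk to MPI_PROC_NULL, which is a no-op. */
          int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
          int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

          double slab[100];                 /* this rank's chunk of the domain */
          double halo_l = -1, halo_r = -1;  /* ghost cells from neighbors */
          for (int i = 0; i < 100; i++) slab[i] = rank;

          /* Pass right edges rightward, then left edges leftward. */
          MPI_Sendrecv(&slab[99], 1, MPI_DOUBLE, right, 0,
                       &halo_l,   1, MPI_DOUBLE, left,  0,
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          MPI_Sendrecv(&slab[0],  1, MPI_DOUBLE, left,  1,
                       &halo_r,   1, MPI_DOUBLE, right, 1,
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);

          printf("rank %d sees neighbor values %g and %g\n",
                 rank, halo_l, halo_r);
          MPI_Finalize();
          return 0;
      }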

      • I'm curious. Given that neighbor communications are the most important, how do you network the machines? I mean, it seems that a useful design might be to have extra network cards in all the machines, to make overlapping network topologies reminiscent of the physical dependencies.

        Or is the latter too problem-dependent to make this practical?
  • worry? (Score:3, Funny)

    by motherhead ( 344331 ) on Thursday October 04, 2001 @05:02PM (#2389932)

    Should I worry that practically anyone can now build a supercomputer?

    Yes, you should probably worry that practically anyone can build a supercomputer. But you could mitigate all that fear with the fact that not practically anyone can whip up software that takes full advantage of it.

    Thank god there isn't any off-the-shelf "missile trajectory" software in the CDW catalog. You would hope that any society that can whip together motivated coders to write such code already has access to some pretty spiffy kit.

    (yeah i said "kit"... and I'm from Chicago... I feel like such a wanker.)

    • As we all know, "kit" is a british slang term for computer hardware. What many people may not know is that it is also the secret weapon in a British campaign of cultural assimilation.

      Yes, you heard me right. Cultural assimilation. The brits are sick of seeing Mickey Mouse and Donald Duck and the sexy chick from Enterprise on TVs all over the world, and they're going to do something about it.

      The British invented the English language, and in many circles certain British accents are perceived as more sophisticated or upper-class. They're capitalizing on that by inventing slang terms - "kit" being among the forerunners - that other English-speaking peoples appropriate. Thus it begins.

      Soon, British TV will move off of PBS, where it belongs. British computer games and hardware will surpass American in popularity. And there is nothing - absolutely nothing - we can do about it.

      (In case you hadn't realized it, yes this is a joke. And yes, I know it's offtopic and will be moderated as such. But this was fun to write.)
      • Kit == Rig (Score:2, Informative)

        by kindbud ( 90044 )
        As we all know, "kit" is a british slang term for computer hardware.

        No it isn't. That's just the only context in which you've heard it used (translation: you read too much Slashdot, and should get out more often). "Kit" is the British equivalent of the American "rig" when used in this context. It is not used specifically to refer to computers.
        • You're right, I should get out more.

          Of course, you know that makes the threat presented by the word even more insidious. If non-techies can use it well - I shudder to think of the potential for linguistic infiltration!
      • Your plan is futile.


        Like most Americans, when I hear the word "kit" used in a technological context I instantly think of the popular 80's television show "Knight Rider" starring David Hasselhoff [davidhasselhoff.com].



        Now if you were German...

  • by bstrahm ( 241685 )
    I have not had the chance to play with Beowulf clusters at all. Do I still get a local desktop on certain clustered computers?


    The ultimate Linux selling tool: every Linux box in your company is a node in a cluster; add a few servers for extra speed, add a few computers to provide file I/O and backup capability, and you have one of the fastest supercomputers available to your company without having to spend an extra dime (everyone needs a desktop anyway). Can you imagine the extra cycles available for simulation and whatnot when people start going home at 5 PM?

    • And can you imagine the damage to your data set when some dipshit spills coffee all over his/her desktop/node, or tries to open the latest Outlook virus? :)
      • Absolutely none, I would hope... The dataset resides on a centrally managed server, and because they are running a Linux desktop I get to laugh at what trojan horses and virii can do to a user account on a Unix box. This can also be removed as a problem by putting a keyboard, mouse and monitor on the desktop and locking the PC into a cabinet under each desk... What the user can't touch, the user can't screw up.


        That is a serious problem, though, and one I assume Beowulf clusters will take care of: what if a node goes down in the middle of processing? How does the cluster respond to it?

        • Software such as Platform's [platform.com] LSF takes care of this magically... it even allows for checkpointing, assuming your task allows it. Because my render software didn't really do checkpointing, I had to add that into my wrapper code (the generic idea is sketched after this post).

          We do use desktops at night to work with our render farm. Platform has some cool tools to work with for such environments. I have never tried LSF in conjunction with PVM or MPI but they have support for it, so I imagine it does pretty well.
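
          I can't speak to LSF's actual checkpointing API, but the generic application-level trick described above (wrapper code around a renderer) looks roughly like this sketch; the file name, granularity, and work loop are made-up placeholders.

          /* Application-level checkpointing: persist the loop counter so
             a killed job resumes where it left off. */
          #include <stdio.h>

          static long load_checkpoint(void)
          {
              FILE *f = fopen("ckpt.dat", "r");
              long i = 0;
              if (f) { fscanf(f, "%ld", &i); fclose(f); }
              return i;                  /* 0 if no checkpoint exists yet */
          }

          static void save_checkpoint(long i)
          {
              FILE *f = fopen("ckpt.dat", "w");
              if (f) { fprintf(f, "%ld\n", i); fclose(f); }
          }

          int main(void)
          {
              for (long i = load_checkpoint(); i < 1000000; i++) {
                  /* ... render frame i / crunch work unit i here ... */
                  if (i % 1000 == 0)     /* checkpoint every 1000 units */
                      save_checkpoint(i);
              }
              remove("ckpt.dat");        /* job finished; clear checkpoint */
              return 0;
          }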
  • http://www.theregister.co.uk/content/2/15584.html [theregister.co.uk]

    You'd think a top computer manufacturer would be able to beat out a part-time dictator from the third-world in gigaflops, but I'm thinking it was more a demonstration by HP that they're getting set to embrace Linux and shelve HP-UX.
  • Yahoo! news (Score:3, Funny)

    by Anonymous Coward on Thursday October 04, 2001 @05:09PM (#2389967)
    shouldn't that be Yahoo! Serious News now?
  • They mention in the end that they are working with Microsoft to support this approach. They also suggest using spare cycles. Unlike SETI@home, where you download some stuff, work on it, send it back, this appears to be a system where the power scales linearly with nodes.

    Windows support makes a difference. Take a large company (10,000+ in a single location) that has some intensive projects. In this case, they could just drop the $210,000 (call it $750,000 with installation, support, etc.) and put it in a room.

    However, a smaller shop, say 50-250 employees, could install this software on the staff's machines. They rarely use their computers to capacity, and can probably contribute 90% of the CPU 90% of the time. This approach could let people doing giant calculations do so cheaply.

    The real question, however, is who needs that kind of horsepower. And for those that do, are the savings from off-the-shelf components meaningful?

    It's a tremendous accomplishment, and I wonder how much of the changes were new (vs. the Beowulf clusters that we always hear about). However, if this fills a need, congratulations; it's an impressive accomplishment regardless.

    Alex
  • by Anonymous Coward
    Uh, I was doing this in the early 90's (as were many others). The idea of using idle cycles from your workstations is beyond old. Is it somehow newsworthy because HP did it? The article makes it sound like a revelation. I'm willing to bet what I was doing was more sophisticated. My processes would relocate themselves whenever a regular user logged in and would even save the system state to prevent any lost work. Hmmm ... sounds like a nasty virus! And while I'm at it, Beowulf was nothing more than rehash as well. How far back does PVM date? Guess it is just because the name sounds cool.
  • Argh. Is it possible that when news articles come listed from Yahoo that a non-Yahoo source could be used instead, or at least added as a secondary link?

    As soon as I see a blur (pop-behind) ad I quickly close the page I'm on in the naive hope that they track when people stop visiting their sites and don't leave via an ad... ;)
  • i'll give you $210,000 so you can do exactly what with your new supercomputer?

    also, who will pay your power bills?

    i don't get this "drool factor" thing some people have for supercomputers... sure, they're cool and all, but they can do exactly nothing you would want or need to do on a day-to-day basis...
    • Photo-realistic first-person shooters at 75 frames per second.

    • The combined vibration of the cooling fans would make timothy's pr0n browsing that much better.

      THAT'S the drool factor, so to speak.
    • Kinda depends on what you do for a living, doesn't it? I'd like to see you predict the weather accurately (well, I guess I should say "as accurately as the standard weather forecast") over the coming week or two without a supercomputer. Or design an aerodynamic car. Or an airplane. Or a new medical drug. Or a spaceship. Or any number of defense/military applications. There is plenty that a supercomputer can do that people do on a day-to-day basis. You just have to be in the right line of work. (And by the way, a supercomputer *can* do everything a desktop machine can do, it'd just be rather pointless to use a machine that big as a desktop...)
  • by Saint Aardvark ( 159009 ) on Thursday October 04, 2001 @05:17PM (#2390006) Homepage Journal
    Sez the cost was $210k US w/o cabling...why the qualification? What *would* cabling for 225-odd boxen cost?
  • The individual machines that made up the I-Cluster are now out of date, each running on 733MHz Pentium III processors with 256MB of RAM and a 15GB hard drive. HP introduced a faster version at the beginning of this month and will launch a Pentium 4 e-PC by the end of the year.

    this kind of hardware is out of date? unless i'm mistaken HP markets these e-PCs toward home users looking for light processing power, such as the ability to view web pages, read emails, and play solitaire. this looks more like a power-user rig, or something a gamer would have as a decent Q3A machine. how in the world could this hardware be obsolete? i guess i should replace the pentium III 933 i'm running because lord knows it just won't hold up to today's high powered apps! man it's almost a year old, i should start worrying...

    pxg
    • It's probably out of date because processors that speed are either already unavailable or will be shortly. They could presumably underclock, but it makes more sense to just tweak the model number slightly.
  • by CormacJ ( 64984 ) <cormac DOT mcgaughey AT gmail DOT com> on Thursday October 04, 2001 @05:23PM (#2390030) Homepage Journal
    The latest top 500 list is here : http://www.top500.org/list/2001/06/ [top500.org]

    The cluster is at #385

  • What I'd like to see is a shot at a distributed supercomputer cluster utilizing the spare cpu cycles of computers on high-speed internet connections (cable or DSL). Since efficiency would be remarkably degraded by slow communication times and the fact that many of these computers would be running Office (ahem), you'd have to scale up at least one order of magnitude.

    Technically I can't see why this wouldn't be feasible. It would be beyond SETI and protein folding in that the 'control center' could change what problem was being worked on at any time. It may not be incredibly practical compared to setting up specific machines in a single large room, but it would be free and have a potential user base in the hundreds of thousands or millions.

    Imagine: instead of the same SETI screen output time and again, you'd get a message on your screensaver saying "Would you like to see what your computer is working on right now? How about high-pressure fluid dynamics in environment x?"

    Max
    • Actually, this is exactly a project I've been working towards this last 6 months. Granted, I'm not an expert in the arena of parallel algorithms, but I've done some reading and have some legitimate exposure to load balancing complex jobs across workstations in my current employment environment.

      That being said, my approach has been to design a C/C++-like language that intrinsically understands multithreading and remote processing, but has a minimal standard library and is intrinsically secure by design. People running the client software can allow default security privileges for anyone to run under, or can set up specific execution profiles for certain people or programs, controlling CPU load and even restricting the hours they share out their machine, etc. Basically, think SETI@Home, except with any program that might be written in the language automatically taking advantage of all available clients.

      Trust me, there are a _lot_ of technological wrinkles to iron out in the language to make it safe to run on another person's machine without authorization, as this is the heart and soul of the project. Rather than reinvent the distributed processing technology for many different purposes and try to get many people to run several clients, it's better to have a single client that works for all sorts of problems. I think the time is now to prepare to capitalize on spare processing power. As we plunge further into the information age of cheap hardware and don't seem to require much more power to do the same tasks (unless you run Windows XP), the ratio of spare cycles to used ones will only increase over time.

      The topology of a network of distributed clients over (potentially) slow connections makes it undesirable for solving some kinds of problems, though. What it's _not_ good for are things like converting video files from one format to another, like the HP I-Cluster is. Making it good at such tasks means slackening the security aspect a bit... And even then, slow transfers can invalidate even a fast remote CPU.

      Of course, I'm always looking for funding. I'd love to make this a full time job in the future. Know anybody with loose purse strings? Didn't think so. :-/
  • Richard said that supercomputing power could come in handy for certain tasks, like converting large video files from one format to another, that currently take a good amount of patience.

    Anyone else notice that this seems to be a very elaborate (and expensive) project just to bootleg encrypted DVDs?
  • Power Usage (Score:2, Interesting)

    by Xunker ( 6905 )
    I wasn't able to get hard facts about this, so I'm going to throw out the question for general "gee whiz" value.

    I was pondering the computrons per watt of a cluster such as this versus a real honest-to-Bob supercomputer (something from Cray/Tera/SGI, for example). We can assume that each machine in HP's cluster uses probably 60-80 watts (because they're sans monitor), so you're looking at somewhere between 15 and 20 kilowatts to power this thing (arithmetic below). I'm not sure what a Cray T3E [cray.com] uses, but I have to think it's nowhere near that, because of all the redundancy that PC clusters carry (one power supply, chipset, etc. per node).

    Though, I'm sure if you can afford either a Cray or 256 PCs, you can afford the power bills, too. If you have to ask how much it will cost you, you can't afford it. But while CIP (Cluster of Inexpensive PCs) is cheaper, is it as efficient?
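
    For the record, the power arithmetic, taking the 256 boxes above and 70 W as a middle-of-the-road guess per box: $256 \times 70\,\mathrm{W} \approx 18\,\mathrm{kW}$, i.e. the 15-20 kW range above, before you even count switches and cooling.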
    • Crays use rather significant voltage, but what gets you is not the power. It's the heat...


      Here's some info: http://www.cray.com/products/systems/t3e/ [cray.com]


      Regarding efficiency, in a word: no.
      These systems, much like an OS/390, are written and built around a whole different theory of architecture. They are also often custom-made for specific solutions. The system for 'missile guidance simulations' would have different cards, different chips, and way different software from the system for 'global thermo-nuclear warfare simulations'.


      Cool to play on, though.


      -WS

    • There is a certain amount of storage redundancy as well.

      With an efficient cooling system and network-booting PCs (no hard drives, no moving parts except for fans), you have a bank of systems that is not only incredibly cheap and off the shelf, but efficient as well.
  • what about... (Score:2, Interesting)

    by Phalkin ( 413055 )
    using a bunch of those 1U dual athlon rackmount boxes for this? seems like it would reduce the overall footprint by several orders of magnitude, as well as easily doubling (if not tripling) the power. comments, anyone?
    • Rackmount boxes aren't usually considered off the shelf components... sure they raise the power and reduce the size, but they also raise the cost significantly.

      Unless you know where you can get rackmount cases and power supplies for less than $50 each... you can't technically make a Beowulf cluster with them.
  • Hey remember all those completely and hopelessly out of work Russian PhD CS grads sitting around and starving and writing strong crypto software for the Russian Mafia? You might even have heard that the Russian Mafia is always looking to explore new business ideas and strategy.

    Well hell, wouldn't this be a great business opportunity for both of them? Call it RMBM (Russian Mafia Business Machines), and then build cheap super-clusters and turnkey code for "specialized" clients. The possibilities are endless.

    This is where you get them now: Support. You sell them the machines at a 25% markup and then charge a ridiculous annual service agreement.

    From the presentation:
    "Using "borrowed" Post-CCCP Mi-8TV assault/commando choppers RMBM support staff can be deployed to your corner of the desert in a matter of hours! Lets see IBM match that! Not even Larry Ellison and his personal Mig can touch that! (canned laugh track)"

    I don't know, maybe not.

  • I guess this is what you do with all of that extra inventory. Clusters coming from Gateway and Dell next.
  • Notice (Score:2, Funny)


    Anyone who posts a comment containing the word "Beowulf" will be shot.

    Including me.

    Uh-oh.
  • I'm a first time poster, so please bear with me.

    Anyway... I just got a job working with the NCSA as part of a project called OSCAR [sf.net]. It's basically an open-source solution to the problem of creating a cluster. I'm part of the team working on documentation and training materials for people trying to implement OSCAR. I can say, from my own experience, that installing a cluster (even with only 4 PCs) is not a simple task. OSCAR is still young as far as software development goes, but it will do the job well once it is finished.

    For those interested, OSCAR 2.0 is on its way soon.
  • Yahoo! didn't report this story, they just licensed it. Give credit where it's due [cnet.com] . :-)
  • amd for less (Score:2, Interesting)

    by Jaeger- ( 63372 )
    i think i could build a better supercomputer
    for less money with amd procs/mobos/etc

    1gig tbird $100
    decent cheap amd mobo w/integrated vid/snd/net $100
    256meg ram $25
    15gig ide $50
    floppy drive (needed??) $15
    cdrom (needed??) $25
    decent nic $20
    cheap case $40

    total $375
    subtract 10% (due to quantity purchase) gives less than $350 total each

    pay a bunch of college kids $10/hour
    they'd build 2 machines/hour
    so 125hrs total to build comps is $1250

    $350 x 250 machines is $87,500
    add in (8) good quality 32 port switches @ $200 each and you're up another $1.6k
    add in 250ish cat5 cables for another $1k (who wants to make them, buy for $3-4 ea)

    your total cost is way under $100k

    or even better
    use the new SMP durons, 1gig each

    not much more $$ since durons are cheap
    add like $50 for the 2nd proc (total $150 for 2 duron 1gig smps, unsure if that's reasonable pricing) and another $50 to mobo cost for a dual smp mobo

    thats $450 ea box
    250 x $450 gives us $112.5k for the boxes
    add in networking stuff etc
    less than $125k prob

    man i want to do this
    need someone with $$ =P

    --wayne =)
    • why in god's name would you want a cd and a floppy for each node?

      Also, why have a mobo with integrated sound (and video)? this is a beowulf cluster, not a John Cage album...

      graspee
  • Maybe I've got a different view on things, but I find it pretty funny when a guy gets all excited about calling a Mandrake cluster a supercomputer, then gives a blech! when he announces they used e-pc's. Does it matter?

    So maybe by using e-pc's the peak performance went down some, but anytime you tie anything together in a clustered environment, the sustained performance dies (not just takes a hit) too. The only way to make it hit close to peak is to assign each node a process that doesn't require any interaction between processes/nodes. In that case, you wouldn't need to tie them together to make a top-500 cluster. Just assign some IPs, cross-mount their filesystems, and call it a morning.

    Besides, government agencies (and scientific companies) are beginning to realize that when you cluster 500 boxes together, you're still administering 500 boxes. When real supercomputer companies make real supercomputers, you've only got to administer one computer. Maybe that's why the term "Supercomputer" isn't plural.

    If clusters keep on being called supercomputers, you might as well call "the internet" a supercomputer too, since it's an environment where lots of computers are connected and running processes that don't depend on one another. "Wow! Look at the sustained power of all 5000 Counter-Strike servers out there! It's a super-counter-strike-computer!"

  • I'm a student at a vocational college in New Hampshire, and I'd like to get a Beowulf cluster set up here. The obvious question, though, is how do I convince the local administration to go for it? Any suggestions? I'm thinking of having it be on a donation basis, although of course any support the school can give will be appreciated. ;D I'd deeply and sincerely appreciate any suggestions. Dave
  • Yawn. Another One. (Score:3, Insightful)

    by 4of12 ( 97621 ) on Thursday October 04, 2001 @06:41PM (#2390351) Homepage Journal

    You know this Beowulf business is getting to be pretty staid and routine by now.

    In fact, I'd almost say it would be newsworthy if there were any organization (university, company, govt lab) that had not yet built "a supercomputer from COTS components".

    What I'd like to see now is more metrics (some of which the article does, admittedly, reveal).

    1. hardware cost per FLOP (everyone already tells you this)
    2. FLOPS per human time to build
    3. FLOPS per sysadmin time to maintain
    4. FLOPS per kilowatt of electricity
    5. FLOPS per cubic foot of rack space
    6. can it run smoothly if Bad Andy goes behind the rack and unplugs a few network connections, a few power cords to some nodes?
    Everyone knows that you can spend your own time scouring dumpsters for cast-off computers and coaxing them to life, bringing up an old 486 with an ISA 10bT card as a member of your cluster. Unless you're doing it for your own educational benefit, it's just not worth it.

    Don't get me wrong. I love these clusters and want to use them. It's just that, in 2001, their mere existence is no longer as exciting as it was in the mid 1990s.

    Nowadays, I care more about ease of use and ease of maintenance, taking the low cost of a Beowulf cluster as a given.

    With the size of these clusters going up and the ratio of hardware cost to human time constantly decreasing, I'd be more impressed to see how a system with many hundreds of nodes was brought up in a short time and never rebooted for a year, even as 13 of the nodes developed various problems and became unproductive members of the cluster.

  • A more daunting task might be taking the model to a consumer environment, which, Richard pointed out, is full of often dormant processors like those in printers and DVD players.

    HP imagines "clouds" of devices, or "virtual entities," which could discover and use the resources around a user.


    Anyone else reminded of A Deepness in the Sky by Vernor Vinge? In that story, IIRC, one of the protagonists controlled a ubiquitous cloud of invisible compute "particles". Each particle was networked to the rest through its neighbors floating in the air around it.

    Ok, so I thought it was a cool idea.
  • I fail to see what is impressive about this.
    It looks like the wheel reinvented several times.
    For cluster installs on several machines, use system imager [systemimager.org] .
    For using and controlling a cluster of machines for various tasks, use LSF [platform.com] .

    The number of machines is pathetic too ... 225 @ 733 MHz? That only makes it to #385?
    How sad. I need to benchmark our render farm (200+ boxes, 120 of which are dual 1GHz) and see what we can come up with. I know it is higher than that... and we have a smaller install for the industry.

    I looked for info to spec our machines, but I couldn't find any... any help?

  • Yahoo! News is reporting ...

    No, it isn't. Yahoo! News is repeating a story which, if you'd bother to read the byline they wrote, was

    By CNET News.com Staff [news.com]

    The article [cnet.com] on CNET's site should be getting the Slashdot treatment, don't you think?

  • Since you can now build a supercomputer with off-the-shelf stuff (well, I'm sure you could before today as well), will that make software like this (read: Beowulf) fall under export laws preventing exportation to countries like Iraq and China?

    Any insights?
  • by segfaultcoredump ( 226031 ) on Thursday October 04, 2001 @09:13PM (#2390701)
    No, a beowulf cluster is the last thing that one would use for nuclear simulation.

    While great at highly parallel tasks that require very little synchronization between threads (think code cracking), nuclear testing (and almost all other fluid dynamic problems) generally requires all of the cpu's to have high speed access to all of the memory. So one needs a huge shared memory system (think Cray or Sun StarCat).

    And for this reason, I find the top 500 list to be a bit misleading in these days of massively parallel systems. It's great as a test of how many flops the system can crank out, but it does not take into account the memory bandwidth between the CPUs, and that is often more important than raw CPU horsepower (see the sketch after this post).
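
    A minimal sketch of the memory-bandwidth point, in C: a STREAM-style triad does 2 flops for every 24 bytes moved, so sustained flops on most machines are set by the memory system rather than the FPU. Array size and repeat count are arbitrary.

    /* STREAM-style triad: measures sustained memory bandwidth, which
       usually caps the flops long before the CPU's peak rating does. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 22)     /* 4M doubles per array, far bigger than cache */

    int main(void)
    {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        for (int i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        clock_t t0 = clock();
        for (int rep = 0; rep < 10; rep++)
            for (int i = 0; i < N; i++)
                a[i] = b[i] + 3.0 * c[i];   /* 2 flops, 3 doubles touched */
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

        double gb = 10.0 * N * 3 * sizeof(double) / 1e9;
        printf("~%.2f GB/s sustained (a[0] = %g)\n", gb / secs, a[0]);
        free(a); free(b); free(c);
        return 0;
    }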
    • Ahh... Somebody else who gets it...

      I find too that people assume that an "X" type of cluster will solve all problems, regardless of what they are. Each cluster type serves a purpose. Cray and then SGI spent time developing the Cray Link for a reason. Sun, IBM, HP and others have gotten into the game as well. Sometimes you need a ton of procs with access to the same memory, sometimes the task divides well.

      I see this from almost the opposite side of the spectrum with rendering. To render a shot, you can divide the task amongst ignorant machines. They just need to know what they are working on. The cleverness goes into the management of these machines. A place where the massively parallel machines would be nice is rendering a single frame. After the renderer's initial load of the scene file and prepping for render, the task can be divided amongst many processors on the same machine. To divide it Beowulf-style would throttle the network with the memory sharing over the ethernet ports.

      So from my experience:

      big data, long time ... massively parallel machine
      big data, short time ... generic cluster with smart master
      little data, long time ... beowulf style cluster
      little data, short time ... generic cluster with smart master
