Technology

Truly Off-The-Shelf PCs Make A Top-500 Cluster

SLiDERPiMP writes: "Yahoo! News is reporting that HP created an 'off-the-shelf' supercomputer, using 256 e-pc's (blech!). What they ended up with is the 'I-Cluster,' a Mandrake Linux-powered [Mandrake, baby ;) ] cluster of 225 PCs that has benchmarked its way into the list of the top 500 most powerful computers in the world. Go over there to check out the full article. It's a good read. Should I worry that practically anyone can now build a supercomputer? Speaking of which, anyone wanna loan me $210,000?" Clusters may be old hat nowadays, but the interesting thing about this one is the degree of customization that HP and France's National Institute for Research in Computer Science did to each machine to make this cluster -- namely, none.
  • by nairnr ( 314138 ) on Thursday October 04, 2001 @05:59PM (#2389921)
    Well, it seems like super clusters are becoming very easy to build hardware-wise: if you throw enough commodity hardware at a problem, it becomes easier. I would think the biggest problem with supercomputers is no longer the hardware itself, but the networking, and the programming needed to take advantage of the hardware. These machines still only really work for something that distributes easily. The biggest factors are now the ability to distribute and schedule work for each node. The more nodes you engage, the more you hope your problem is CPU-bound, so that it keeps scaling (Amdahl's law, sketched at the end of this comment, puts a hard cap on that).

    Data transfer and message passing are such a big issue that I believe the most important developments are in the networking topologies and hardware for these environments.

    That said, I still want one in my basement :-)
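
    A minimal sketch of that scaling cap, in Python, using Amdahl's law. The 5% serial fraction is purely an assumed figure for illustration, not a measurement of any real workload:

        # Amdahl's law: speedup is capped by whatever fraction of the
        # job cannot be distributed. serial_fraction is an assumption.
        def amdahl_speedup(nodes, serial_fraction=0.05):
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / nodes)

        for n in (2, 16, 225):
            print(f"{n:4d} nodes -> at most {amdahl_speedup(n):.1f}x speedup")
        # even 5% serial work caps 225 nodes at roughly 18x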
  • by Pulzar ( 81031 ) on Thursday October 04, 2001 @06:11PM (#2389975)
    Actually, the worry about the PS2 machines was that their imaging capabilities were strong enough to be used in missile guidance systems. I don't think he ever actually attempted to get any of them, but the US blocked shipments to Iraq just in case.

  • by CormacJ ( 64984 ) <cormac@mcgaughey.gmail@com> on Thursday October 04, 2001 @06:23PM (#2390030) Homepage Journal
    The latest top 500 list is here: http://www.top500.org/list/2001/06/ [top500.org]

    The cluster is at #385

  • Re:Technicly.. (Score:2, Informative)

    by flegged ( 227082 ) <anything @ my third level domain> on Thursday October 04, 2001 @06:29PM (#2390062) Homepage
    Yes, we all saw the Apple ads claiming the G4 is capable of 1 GFLOPS. What you didn't see was that the Pentium III 500 was capable of ~2 GFLOPS -- and that can now run at 1GHz. You also didn't see that AMD's Athlon, having a superscalar FPU, is faster than a P3, and those now run at 1.6GHz. The P4 has new instructions to speed up certain types of multimedia processing as well. By contrast, the G4 is only now approaching 1GHz. Go figure (as you Americans say.. :o) (Some back-of-the-envelope peak-FLOPS arithmetic is sketched at the end of this comment.)

    An Apple is not a supercomputer.

    RISC does not mean faster. It allows for a simpler design, which can lead to increased speed, but as we have seen, Apple have consistently failed to compete with Intel and AMD (not that they even make their own chips...). CISC is actually a good idea: with the huge speed differential between CPU and memory, and the introduction of cache, the bottleneck in any system is memory bandwidth. Think for a moment: why did Intel add instructions to the x86 architecture in every iteration? Because it's faster to have one instruction do something complex than many simple ones, simply because of the reduced frequency of memory access. In today's computers, RISC doesn't mean much, since memory, storage and network bandwidth are the bottleneck.

    The moral of this story:
    1: Don't believe Apple's advertising.
    2: Don't believe what a Mac Zealot will tell you about RISC or some other claptrap.
    3: Get ppc Mandrake if you're unfortunate enough to have actually bought a G4.

    Yes, I use Macs. Daily. And I hate Apple. But my PHB is a Mac zealot. It frightens the hell out of me seeing all our company's work being stored on a Mac running OS 9 (no pre-emption, no memory protection, no RAID, no journalling, nothing you would want in a server...).
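
    For reference, peak-GFLOPS claims like the ones above come from simple arithmetic: clock rate times floating-point ops retired per cycle. A hedged sketch in Python, where the ops-per-cycle figures are illustrative assumptions rather than vendor-verified numbers:

        # Theoretical peak = clock (MHz) x FP ops per cycle / 1000.
        # Marketing peaks say nothing about sustained throughput,
        # which is where memory bandwidth bites.
        def peak_gflops(mhz, flops_per_cycle):
            return mhz * flops_per_cycle / 1000.0

        print(peak_gflops(500, 4))  # 2.0 for an assumed 500 MHz SIMD part
        print(peak_gflops(450, 2))  # 0.9 for an assumed 450 MHz part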
  • Re: RISC (Score:2, Informative)

    by Turq ( 319326 ) on Thursday October 04, 2001 @06:38PM (#2390103) Homepage
    While I agree with much of what flegged said, his/her post implies that modern Intel/AMD CPUs -are- largely CISC devices. This simply isn't the case. Both (AMD more so) make heavy use of RISC-style design and techniques.

    RISC does matter, or Intel and AMD wouldn't be using it.
  • Kit == Rig (Score:2, Informative)

    by kindbud ( 90044 ) on Thursday October 04, 2001 @07:11PM (#2390236) Homepage
    As we all know, "kit" is a british slang term for computer hardware.

    No it isn't. That's just the only context in which you've heard it used (translation: you read too much Slashdot, and should get out more often). "Kit" is the British equivalent of the American "rig" when used in this context. It is not used specifically to refer to computers.
  • by BrentN ( 90935 ) on Thursday October 04, 2001 @10:01PM (#2390677)
    The problem with Ethernet in clustering isn't bandwidth, it's the latency.

    The real issue is how parallel-efficient your algorithms are. We do molecular dynamics (MD) on large clusters, and we can get away with slow networks because each node of the cluster holds data that is relatively independent of all the other nodes -- only neighboring nodes must communicate. If you have a case (and most cases are like this) where every node must communicate with every other node, it becomes a much more difficult problem to manage. To deal with it, you need a high-speed, low-latency switch like the interconnects in a Cray. The only real choice for that is a crossbar switch, like Myrinet. (A toy latency model follows below.)

    And Myrinet is très expensive.
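
    A toy model of why latency, not bandwidth, dominates for the small messages typical of tightly coupled codes. The latency and bandwidth figures are rough assumptions for hardware of this era, not vendor specs:

        # message time t = latency + bytes / bandwidth
        # (1 MB/s == 1 byte/us, so the units work out to microseconds)
        def message_time_us(nbytes, latency_us, bandwidth_mb_s):
            return latency_us + nbytes / bandwidth_mb_s

        for name, lat, bw in (("100 Mb Ethernet", 100.0, 12.5),
                              ("Myrinet", 10.0, 250.0)):
            small = message_time_us(64, lat, bw)        # tiny halo packet
            large = message_time_us(1_000_000, lat, bw)
            print(f"{name}: 64 B -> {small:.0f} us, 1 MB -> {large:.0f} us")
        # 64 B messages are almost pure latency on Ethernet (~105 us)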

  • by jmv ( 93421 ) on Friday October 05, 2001 @12:04AM (#2390850) Homepage
    Actually, the best MIPS/Wh is probably with the slower versions of the current laptop chips. Maybe portable G3/G4?

    Also, I don't think you'd get much useful work done with early Pentiums and 486s. Consider that a 2 GHz P4 has 20 times the clock speed and probably does twice as much per cycle, so it's ~40x faster. Now, if you connect 40 P100s together, unless your problem is completely parallel (like breaking keys, as opposed to most linear algebra), you're going to lose at least a factor of 2 there. So in order to equal one P4 @ 2 GHz, you'll need almost 100 Pentium 100 MHz machines -- meaning 10 P4s would be like a thousand Pentiums. At those numbers, it's going to cost so much in networking and power... (The arithmetic is spelled out below.)

    I'd say (pure opinion here) the slowest you'd want today is something like a 1 GHz Duron, and the best MIPS/$ is probably a bunch of dual 1.4 GHz Athlons (a dual is not cheaper than two singles, but you get more done because of parallelism issues).
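
    The estimate above, made explicit; the ratios are the poster's rough assumptions, not benchmarks:

        clock_ratio = 2000 / 100    # 2 GHz P4 vs 100 MHz Pentium
        ipc_ratio = 2.0             # assumed: P4 does ~2x the work per cycle
        serial_speedup = clock_ratio * ipc_ratio       # ~40x one P100

        parallel_efficiency = 0.5   # assumed communication/sync losses
        nodes_needed = serial_speedup / parallel_efficiency
        print(f"~{nodes_needed:.0f} P100s per 2 GHz P4")  # ~80, "almost 100"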
  • by Paul Komarek ( 794 ) <komarek.paul@gmail.com> on Friday October 05, 2001 @12:14AM (#2390877) Homepage
    My experience doesn't suggest that the P4 does twice as much per cycle. I'm seeing P4s do a fair bit less than the P3 per cycle, and the P3, P2, and PPro cores didn't seem *that* much faster per clock than the original Pentiums. My gut tells me that the P4 doesn't do any more than the original Pentiums per clock cycle, and the only thing they have going for them is Intel's ability to manufacture them at high clock speeds.

    If you really want a CPU that does a lot in a single cycle, look at the IBM POWER series. IIRC, on the floating-point side, a 2xx MHz POWER III is not too far from an Alpha 21264 at 733 MHz. And now there are 1.1GHz and 1.3GHz POWER IV chips in the new IBM p690 machines. I don't know how they compare to the POWER III per cycle, though, because the POWER IV opens a whole new (good) can of worms.

    -Paul Komarek
  • by tolldog ( 1571 ) on Friday October 05, 2001 @12:30AM (#2390914) Homepage Journal
    Ahh... Somebody else who gets it...

    I find too that people assume that an "X" type of cluster will solve all problems, regardless of what they are. Each cluster type serves a purpose. Cray and then SGI spent time developing the Cray Link for a reason. Sun, IBM, HP and others have gotten into the game as well. Sometimes you need a ton of procs with access to the same memory, sometimes the task divides well.

    I see this from almost the opposite side of the spectrum with rendering. To render a shot, you can divide the task amongst ignorant machines; they just need to know what they are working on. The cleverness goes into the management of these machines. A place where the massively parallel machines would be nice is rendering a single frame: after the renderer's initial load of the scene file and prepping for the render, the task can be divided amongst many processors on the same machine. To divide it Beowulf-style would throttle the network with memory sharing over the Ethernet ports.

    So from my experience (a toy "smart master" sketch follows the list):

    big data, long time ... massively parallel machine
    big data, short time ... generic cluster with smart master
    little data, long time ... Beowulf-style cluster
    little data, short time ... generic cluster with smart master
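
    A minimal sketch of the "generic cluster with smart master" case: the master only hands out frame numbers and collects results as workers finish, and the workers stay ignorant. render_frame is a hypothetical stand-in for invoking a real renderer:

        from multiprocessing import Pool

        def render_frame(frame):
            # hypothetical stand-in for launching the real renderer
            return f"frame {frame:04d} done"

        if __name__ == "__main__":
            # one worker per CPU/node; imap_unordered hands back results
            # as soon as any worker finishes, like a smart master would
            with Pool(processes=4) as pool:
                for result in pool.imap_unordered(render_frame, range(16)):
                    print(result)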
