Supercomputing Education Hardware

Student and Professor Build Budget Supercomputer 387

Luke writes "This past winter, Calvin College professor Joel Adams and then-Calvin senior Tim Brom built Microwulf, a portable supercomputer with 26.25 gigaflops of peak performance that cost less than $2,500 to construct, making it the most cost-efficient supercomputer anywhere that Adams knows of. 'It's small enough to check on an airplane or fit next to a desk,' said Brom. Instead of a bunch of researchers having to share a single Beowulf cluster supercomputer, each researcher can now have their own."
This discussion has been archived. No new comments can be posted.

  • heat buildup issues? (Score:3, Interesting)

    by toQDuj ( 806112 ) on Friday August 31, 2007 @04:00AM (#20421871) Homepage Journal
    And it looks like they'll run into heat buildup issues. Mere convection (beyond the tiny on-board fans) is often not enough; an enclosure ventilated by one or two desktop fans would have provided sufficient cooling. The Sun E450s were well-ventilated machines, with a clear air path running from front to back. The temperature monitors (ambient, CPU (x4), PSU (x3)) were useful as well. One was used for a long time at Stack (www.stack.nl) as a room temperature monitor.

    B.
  • the google way (Score:5, Interesting)

    by arabagast ( 462679 ) on Friday August 31, 2007 @04:11AM (#20421909) Homepage
    This seems pretty similar to the way Google builds their racks, with just motherboards and no cabinets. What would have been really cool is if someone made some kind of network driver for a PCI Express slot, with external cables. Is it possible to use a dedicated PCI Express slot as an interface to another computer, skipping the network bottleneck?
  • by Kantana ( 185308 ) on Friday August 31, 2007 @04:28AM (#20421993)
    I see a few people making the expected "It's just four motherboards wired together with Gig E"-comments. While I won't object to that, I'd say this is not about a groundbreaking evolution in hardware, more a case of demonstrating what's possible today with COTS parts. Adding to that the compact packaging, and the ability to run off of a single power cord, it's a nice setup IMHO.

    While it does not have the interconnect of "true HPC" hardware (a bit of a fleeting distinction, but bear with me) it'll surely be suitable for a lot of the simpler, yet still compute-intensive tasks out there ("simple" here meaning not needing a lot of intra-node communication).

    On the flip side, it might fuel the "hell, I'll just build my own cluster"-mentality going around these days. I work in the HPC group at a university, running Linux clusters, IBM "big iron" and a couple of small, old SGI installations, and we certainly see a bit of that going around. The problem is that while the hardware is cheap and affordable, getting it to run in a stable and sensible manner without spending large amounts of time just keeping the thing together is a challenge, mainly due to the immature state of clustering software. As many researchers are not exactly keen on spending time solving problems outside their specific field, they're usually better off letting somebody else administer things, so they can just log on and run their stuff.

    But for individuals and small groups of people who are computer savvy enough to handle it, things like these are definitely a "good thing" (TM).
  • by Solra Bizna ( 716281 ) on Friday August 31, 2007 @04:30AM (#20421997) Homepage Journal

    The more computing power is available in the world, the less it will be used to its potential. If everyone had an Earth Simulator in their basement, how much of that power would be wasted?

    Not saying that proliferation of computers is bad, just food for thought.

    -:sigma.SB

    P.S. SETI@home, Folding@home, etc. are cheating. :P

  • by bundaegi ( 705619 ) on Friday August 31, 2007 @04:31AM (#20422001)
    Sure, nothing beats off-the-shelf components... but powering 4 motherboards with 4 separate PSUs sounds like a waste!

    Look at this design: http://www.mini-itx.com/projects/cluster/ [mini-itx.com]. It uses DC-DC converters on each motherboard (mini-ITX, so low power), a single 12V PSU and a UPS for regulation:

    The DC-DC converters require a clean, well-regulated 12VDC source. I chose to use a heavy duty 12VDC switching power supply capable of delivering 60 amperes peak current, which I ordered from an online electronics test equipment supplier. Since badly conditioned AC power is potentially damaging to expensive computing equipment, I use a 1 KVA UPS purchased at an office supply store to make sure the cluster can't be "bumped off" by power line glitches and dropouts.
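A quick back-of-the-envelope for sizing that single 12 V rail. The per-board wattage here is an assumption for illustration, not a figure from the project:

```python
# Sizing a single 12 V rail for a four-board cluster.
# NOTE: watts_per_board is a guessed peak draw for a low-power mini-ITX board.
watts_per_board = 150
boards = 4
amps_needed = watts_per_board * boards / 12.0
print(amps_needed)  # 50.0 -> a 60 A supply leaves some headroom
```

With those assumed numbers, the 60 A supply quoted above would leave roughly 20% margin.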
  • GigaFlops (Score:5, Interesting)

    by jma05 ( 897351 ) on Friday August 31, 2007 @04:32AM (#20422011)
    Is 26 gigaflops significant anymore? I hear that the PS3 can do 20-25 from Folding@home people, and it is only about a fifth the price. But I hear so many different numbers that I can no longer make sense of them. Why do they bother comparing with Deep Blue, a more-than-10-year-old supercomputer? Can anyone with a PS3 report what their machine is doing under Yellow Dog Linux? And what are the numbers for the latest desktop processors? Any recommendations on software to benchmark my own computers in flops?
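For a rough home benchmark, timing a dense matrix multiply gives a usable FLOPS estimate (a poor man's LINPACK). NumPy is assumed here, and the result reflects your BLAS library as much as the raw silicon:

```python
# Rough FLOPS estimate via a timed dense matrix multiply.
# An n x n matmul costs about 2*n^3 floating-point operations.
import time
import numpy as np

n = 1000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
a @ b
elapsed = time.perf_counter() - start

gflops = (2 * n**3) / elapsed / 1e9
print(f"~{gflops:.1f} GFLOPS (double precision)")
```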
  • Wussywulf? (Score:3, Interesting)

    by MikeFM ( 12491 ) on Friday August 31, 2007 @04:53AM (#20422101) Homepage Journal
    I'm too lazy to run the numbers tonight to compare actual speeds, but our dual-CPU quad-core Xeon (8 cores total) servers cost around $2500 each to build. Looking at their specs, I doubt they could be doing much better, and they require special clusterish programming.
  • by Anonymous Coward on Friday August 31, 2007 @05:11AM (#20422201)
    You can get single chips that outperform this. Specifically Intel's quad-core Xeon (Clovertown), which has a peak performance of 4 flops per cycle per core. Clocked at 3 GHz, that's 4 flops times 4 cores times 3 GHz, or 48 gigaflops. This "supercomputer" is very unimpressive.
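The arithmetic behind that peak number, for anyone following along (this is theoretical peak only; sustained performance is always lower):

```python
# Theoretical peak = flops per cycle per core * cores * clock in GHz.
def peak_gflops(flops_per_cycle, cores, clock_ghz):
    return flops_per_cycle * cores * clock_ghz

# Quad-core Clovertown at 3 GHz, 4 double-precision flops/cycle:
print(peak_gflops(4, 4, 3.0))  # 48.0 GFLOPS
```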
  • by Anonymous Coward on Friday August 31, 2007 @06:34AM (#20422521)
    So 1 Hz equals 1 flop? And a 3.2 GHz CPU can do 3.2 gigaflops, right? So how are they getting more than 3.2*4 = 12.8 gigaflops out of four CPUs? Can they execute multiple flops per tick, then? And do we care that these will bottleneck at the rather limited bus (even forgetting about the switch)?

    If the bus speed is 1 GHz x 32 bit, doesn't that mean that the whole computer is limited to about 1.3 gigaflops at best (you need to move at least 96 bits to perform a flop), or even less if a lot of data has to travel over the 1 Gbit Ethernet?

    I know I am clueless, sorry, but that's how I learn. Thanks for your help.
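To the parent's questions: yes, modern CPUs retire multiple flops per cycle (SIMD units process several operands at once), and caches and registers mean most operands never touch the bus at all. If every flop really did need fresh data from a hypothetical 1 GHz x 32-bit bus, the ceiling would look like this:

```python
# Worst case: every flop moves 96 bits (two inputs + one result) over the bus.
bus_bytes_per_sec = 1e9 * 4   # hypothetical 1 GHz x 32-bit bus = 4 GB/s
bytes_per_flop = 12           # 96 bits
gflops_ceiling = bus_bytes_per_sec / bytes_per_flop / 1e9
print(round(gflops_ceiling, 2))  # 0.33 -- which is exactly why caches matter
```

Real workloads reuse operands from cache many times per bus transfer, so sustained performance lands far above this pessimistic bound.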
  • by vrmlguy ( 120854 ) <samwyse&gmail,com> on Friday August 31, 2007 @06:37AM (#20422535) Homepage Journal

    Others have pointed out that this is useful for tasks where the interconnect speed doesn't matter. I'll point out that the first "node" only costs $765, and the next seven are $564 each (then you need a bigger switch). Of course, the 8-way version won't fit in an airplane's overhead luggage compartment anymore. You might want to add a UPS.
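Using the per-node figures above, a tiny cost model (prices taken straight from this comment; the bigger switch needed past node 8 is not included):

```python
def cluster_cost(nodes, first_node=765, extra_node=564):
    # The first node carries the shared parts; additional nodes are cheaper.
    return first_node + (nodes - 1) * extra_node

print(cluster_cost(4))  # 2457 -- under the $2,500 quoted in the story
print(cluster_cost(8))  # 4713 for the overhead-bin-unfriendly version
```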

    I seem to recall a post earlier this year about some other university building something similar using two quad-core CPUs on each motherboard. Their version, too, wouldn't fit over your seat, as it stood about six feet tall. Hmmm, neither Slashdot nor Google can find anything, but I thought it used a frame built of pine 2x2s.

    BTW, is there a benchmark you have to pass to get called a supercomputer? Why couldn't someone grab a bunch of three-year-old desktops that are due to be junked and tie them together for a shot at the title of cheapest supercomputer? Do those ad hoc arrays that the animation studios re-build for every movie count?

  • by DrXym ( 126579 ) on Friday August 31, 2007 @07:50AM (#20422897)
    IBM have already done just that. For example, they have a demo [youtube.com] of a cluster of PS3s rendering a scene with real-time ray tracing.

    I expect the design is very well suited to clustering. The PPUs handle all the data dispatching & balancing with the SPUs left to do the leg work.

  • by Savantissimo ( 893682 ) on Friday August 31, 2007 @08:03AM (#20422993) Journal
    Good paper - it also says that by using mixed precision (iterated 32-bit math for rough matrix factorization then fine-tuning the precision in 64-bit) the double-precision matrix performance is up to 155 Gflops.
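The mixed-precision trick sketched in Python, assuming NumPy. (Real HPL-style implementations factor the matrix once in 32-bit and reuse the LU factors for every refinement step; this sketch calls solve repeatedly for brevity.)

```python
import numpy as np

def refine_solve(A, b, iters=3):
    """Solve Ax = b: cheap float32 solves, polished with float64 residuals."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x  # residual computed in full double precision
        x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100)) + 100 * np.eye(100)  # well conditioned
b = rng.standard_normal(100)
x = refine_solve(A, b)
print(np.max(np.abs(A @ x - b)) < 1e-8)  # True: near double-precision accuracy
```

The payoff on hardware like the Cell is that 32-bit units are several times faster than 64-bit ones, and the refinement loop recovers the lost accuracy for well-conditioned systems.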
  • Re:the google way (Score:3, Interesting)

    by Petaris ( 771874 ) on Friday August 31, 2007 @08:34AM (#20423209)
    Some other students and I (back when I was in college) played with doing this via PCI SCSI cards. It worked, to a point, but wasn't quite the same, as all you were really doing was providing SCSI access to each system's HDDs. Still, it would have allowed quite fast data sharing if configured correctly. As we had no real goal, it was just one of those "I wonder if we can do it" times; we didn't play further than the HDD connections and copying files across, which was very fast. :)
  • Re:On an airplane? (Score:3, Interesting)

    by arivanov ( 12034 ) on Friday August 31, 2007 @08:58AM (#20423397) Homepage
    You are overestimating the amount of EM noise emitted by a motherboard outside the case. Very few computer components are noisy, and the ones that are, like some modems, wireless cards, etc., feature additional individual shielding.
  • Re:the google way (Score:3, Interesting)

    by Stultsinator ( 160564 ) on Friday August 31, 2007 @09:22AM (#20423597)
    (Commenting rather than modding)

    I've often wondered the same myself. Sure, you can get some speed optimizations by running a slimmed-down wire protocol over the Ethernet, but it's intuitive that any additional hardware between nodes adds latency. Unless NIC hardware is essential for something like buffering, I'd think some sort of PCI bridging driver would be much better suited for this sort of setup.

    If anyone's heard of anything like this please share. I'm off to do some more Googling for it myself.

    -S
  • Re:the google way (Score:3, Interesting)

    by dave420 ( 699308 ) on Friday August 31, 2007 @11:21AM (#20425197)
    You'd have to implement some sort of switching, as the motherboards in question only had 1 PCIE slot. You'd have to find a motherboard with as many PCIE slots as computers wanting to speak to each other to act as a switch, or have them all talking over one connection, which would diminish performance greatly.
  • by Traa ( 158207 ) on Friday August 31, 2007 @11:43AM (#20425479) Homepage Journal
    I thought the hip thing was GPU based supercomputing. NVidia even has a dedicated GPU based, desktop sized, scalable supercomputer line called Tesla.

    The basic Tesla unit, the C870, delivers 518 gigaflops for ~$1300.
    The Tesla S870 delivers 2 teraflops for ~$12000 (still desktop size).

    NVidia Tesla [nvidia.com]
  • by ILongForDarkness ( 1134931 ) on Friday August 31, 2007 @11:55AM (#20425623)
    Since when is a four-CPU node a supercomputer? I remember when a new Apple system came out, I believe it was the dual-CPU G4, and they touted it as a supercomputer on the desktop because it could do 1 GFLOP single precision.

    I code for systems with 800 four-CPU Opteron nodes, a 10GB/s interconnect, and a couple hundred terabytes of SAN-attached storage. That is a supercomputer :) Well, sort of; a lot of people in the HPC community consider it just a cluster, as some programs need 64+ CPUs in SMP mode, so any loosely coupled memory model would be considered a serial farm :)

    Also, note that high-end platforms would have redundant power, redundant high-end interconnects, redundant hot-swap drives, etc. There would also be enough of them to need high-end switches, blowers, power conditioners, air circulators, and various other room coolers. Of course a custom-built workstation without a graphics card, monitor, or even a case is going to beat the pants off HPC architecture on price per flop. Good work to the group, but hardly newsworthy in the HPC community.
