Supercomputing Education Hardware

Student and Professor Build Budget Supercomputer 387

Luke writes "This past winter Calvin College professor Joel Adams and then-Calvin senior Tim Brom built Microwulf, a portable supercomputer with 26.25 gigaflops peak performance that cost less than $2,500 to construct, making it the most cost-efficient supercomputer Adams knows of. 'It's small enough to check on an airplane or fit next to a desk,' said Brom. Instead of a bunch of researchers having to share a single Beowulf cluster supercomputer, now each researcher can have their own."
This discussion has been archived. No new comments can be posted.

  • Imagine... (Score:4, Funny)

    by Glowing Fish ( 155236 ) on Friday August 31, 2007 @02:54AM (#20421819) Homepage
    A beowulf cluster full of these!

    (Okay, now back to responsible mature posting)
    • by Anonymous Coward on Friday August 31, 2007 @03:20AM (#20421959)

      (Okay, now back to responsible mature posting)

      You forgot to provide a link to that...
    • Re:Imagine... (Score:4, Insightful)

      by Jonner ( 189691 ) on Friday August 31, 2007 @04:58AM (#20422405)
      In this case, I think it's a somewhat serious idea. This design has only four nodes, so connecting several in a modular fashion might make sense, and retain some of the advantages in portability and cost. You could move the individual Microwulfs around, but bring them together for really big problems. Think of it as a LAN party for scientists.
      • Re: (Score:3, Informative)

        by mikael ( 484 )
        That's probably what would happen if a dozen of these systems were made. Instead of a system in each office, they would probably be placed in a lab, if not in a server room somewhere with remote access through a thin client.
    • Re: (Score:2, Informative)

      by Anonymous Coward
      Hmmm....

      NCSU's Computer Science Dept. has a PS3 cluster topping out at 218 GFLOPS using 8 PS3s. PS3s are now about $500 each, so that's quite a bit better in terms of bang for the buck. It's even better than the reduced-price PC from Newegg.

      http://moss.csc.ncsu.edu/~mueller/cluster/ps3/ [ncsu.edu]

      http://moss.csc.ncsu.edu/~mueller/cluster/ps3/coe.html [ncsu.edu]
    • by Bluesman ( 104513 ) on Friday August 31, 2007 @12:03PM (#20426483) Homepage
      (Okay, now back to responsible mature posting)

      No, stay with us on Slashdot!
  • by toQDuj ( 806112 ) on Friday August 31, 2007 @02:56AM (#20421833) Homepage Journal
    It's just four motherboards sitting in a single frame, connected by an Ethernet switch.

    True supercomputing machines (Sun, IBM) have somewhat better interconnectivity between components than a mere 1Gb/s line. It can still serve its purpose, though; VASP will run wonderfully on it, and GAMESS probably will as well.

    B.
    • Re: (Score:3, Informative)

      by dbIII ( 701233 )
      Mobs like Verari were selling something similar a while ago - not cheap, though, and I can't see it on their web page now. What is nice now from other places is things like 2 x 8-core machines in 1U (maxtron and probably a few others). The relatively small Supermicro boards in that thing would mean you could put a few in a server case - not cheap, though.
    • Anyone know if there has been a Top500-compatible measurement of a PS3? If a PS3 costs about $500, one could build a "ps3wulf" with four nodes and some network equipment for about $2,500. Anyone have any idea how it would compare with the Microwulf?
      • by MacroRex ( 548024 ) on Friday August 31, 2007 @04:20AM (#20422243)
        Sorry for replying to myself, but I found an interesting paper [netlib.org] about the subject. Seems that a PS3 should have an Rpeak of 14 Gflop/s for double-precision floating point operations. Sounds to me like, with a proper clustering solution, a four-node PS3 cluster would be significantly faster than Microwulf. And it would probably be smaller, too :)
        • by bhima ( 46039 )
          Someone has already clustered the PS3.

          I'm surprised you didn't see that in your search.
        • by Savantissimo ( 893682 ) on Friday August 31, 2007 @07:03AM (#20422993) Journal
          Good paper - it also says that by using mixed precision (iterated 32-bit math for a rough matrix factorization, then fine-tuning to full precision in 64-bit) the double-precision matrix performance is up to 155 Gflops.
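          For anyone wondering what that mixed-precision trick looks like in code, here is a toy sketch of my own (not code from the paper, and heavily simplified: a small random diagonally dominant matrix and no pivoting): do the O(n^3) factorization and triangular solves in single precision, then recover accuracy with a few O(n^2) refinement steps whose residuals are computed in double precision.

          /* Toy mixed-precision solver: LU-factor and solve in single precision,
           * then iteratively refine with residuals computed in double precision.
           * No pivoting (the matrix is made diagonally dominant so that's safe). */
          #include <stdio.h>
          #include <stdlib.h>

          #define N 64

          static void lu_float(float a[N][N])            /* in-place LU, no pivoting */
          {
              for (int k = 0; k < N; k++)
                  for (int i = k + 1; i < N; i++) {
                      a[i][k] /= a[k][k];
                      for (int j = k + 1; j < N; j++)
                          a[i][j] -= a[i][k] * a[k][j];
                  }
          }

          static void lu_solve_float(float a[N][N], const double r[N], double dx[N])
          {
              float y[N];
              for (int i = 0; i < N; i++) {              /* forward substitution */
                  y[i] = (float)r[i];
                  for (int j = 0; j < i; j++)
                      y[i] -= a[i][j] * y[j];
              }
              for (int i = N - 1; i >= 0; i--) {         /* back substitution */
                  float s = y[i];
                  for (int j = i + 1; j < N; j++)
                      s -= a[i][j] * (float)dx[j];
                  dx[i] = s / a[i][i];
              }
          }

          int main(void)
          {
              static double A[N][N], b[N], x[N];
              static float Af[N][N];

              for (int i = 0; i < N; i++) {              /* random, diagonally dominant */
                  b[i] = rand() / (double)RAND_MAX;
                  for (int j = 0; j < N; j++)
                      A[i][j] = (i == j) ? N : rand() / (double)RAND_MAX;
              }
              for (int i = 0; i < N; i++)
                  for (int j = 0; j < N; j++)
                      Af[i][j] = (float)A[i][j];
              lu_float(Af);                              /* all O(n^3) work in float */

              for (int it = 0; it < 5; it++) {           /* cheap O(n^2) refinement */
                  double r[N], dx[N], rnorm = 0.0;
                  for (int i = 0; i < N; i++) {
                      r[i] = b[i];
                      for (int j = 0; j < N; j++)
                          r[i] -= A[i][j] * x[j];
                      rnorm += r[i] * r[i];
                  }
                  printf("iteration %d: |r|^2 = %.3e\n", it, rnorm);
                  lu_solve_float(Af, r, dx);
                  for (int i = 0; i < N; i++)
                      x[i] += dx[i];
              }
              return 0;
          }

          The residual norm should drop toward double-precision levels even though all the heavy lifting happened in float - that is the idea being exploited on the Cell's fast single-precision SPEs.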
      • Not too well, considering it can only run certified software (unless you're into violating the DMCA, which isn't recommended for scientific and enterprise applications)
    • Re: (Score:3, Interesting)

      by vrmlguy ( 120854 )

      Others have pointed out that this is useful for tasks where the interconnect speed doesn't matter. I'll point out that the first "node" only costs $765, and the next seven are $564 each (then you need a bigger switch). Of course, the 8-way version won't fit in an airplane's overhead luggage compartment anymore. You might want to add a UPS.

      I seem to recall a post earlier this year about some other university building something similar using two quad-core CPUs on each motherboard. Their version, too, wou

      • Which is fine (Score:4, Informative)

        by Sycraft-fu ( 314770 ) on Friday August 31, 2007 @10:57AM (#20425655)
        But at that point you aren't really a supercomputer, you're a cluster. These days the line is more blurred than in the past, but the difference is more or less interconnect speed. In a real supercomputer there are very high-speed interconnects, so you can run things that rely heavily on one part communicating with another, like particle simulations. That's why the US Department of Energy buys so many of them rather than clusters. They do things like weather simulation and simulation of nuclear weapons, where every node has to be able to talk to every other node with essentially no penalty.

        Now if you have a job that doesn't use a lot of inter-node communication, like, say, 3D rendering, then a cluster is a better answer: normal hardware with Ethernet interconnects. It works great and is cheap since you can use commodity parts. But don't confuse that cluster with a real supercomputer; throw one of those communication-intensive inter-node problems at it and it'll fall over because the interconnects are too slow.

        Unfortunately, these days people really blur the distinction. You'll see systems on the Top 500 list that are really questionable. It'll be commodity hardware connected with something like InfiniBand. OK, great, that is faster (both more bandwidth and less latency) than Ethernet, but it still isn't necessarily up to what you'd get from a real supercomputer.

        However, in the case of this thing: no, not a supercomputer. It's a small cluster, and they're just calling it a supercomputer as marketing, effectively.
  • On an airplane? (Score:4, Insightful)

    by biocute ( 936687 ) on Friday August 31, 2007 @02:56AM (#20421841)
    It's small enough to check on an airplane

    With security concerns nowadays, it's the number of cables coming out of it that would worry an airline, not the size or weight of this machine.
    • by arivanov ( 12034 )
      Design-wise it is not very different from my first office computer back in 1993. That one had all of its components spread out and screwed to a desk so you could easily unplug or plug any one of them and replace it with the component you were testing (we ran a small hardware design shop that also did computer repairs). This was long before the days of windowed cases and was quite unusual for the time. I saw all kinds of reactions: fascination, fear (will it spark?), interest, etc.

      I agree with your main point t
  • by boaworm ( 180781 ) <boaworm@gmail.com> on Friday August 31, 2007 @02:56AM (#20421843) Homepage Journal
    It looks rather fragile, quite like the iRack (http://www.youtube.com/watch?v=xcjLEwZqcQI), and I don't think it would survive being checked on an airplane, given how some suitcases look at baggage claim.

    Cool achievement nevertheless.
  • by Anonymous Coward
    They just linked 4 motherboards together. My cat could do that.
    • Re: (Score:3, Funny)

      by MrNaz ( 730548 )
      Lisa, I'd like to buy your cat!
    • by CaptDeuce ( 84529 ) on Friday August 31, 2007 @03:51AM (#20422085) Journal

      They just linked 4 motherboards together. My cat could do that.

      Sure. But then your cat would have to moonlight as a mouser, run errands for the neighborhood dogs, and -- worst of all -- give up catnip; all in order to pay for the project.

      I would not want to live in the same house as a sleep-deprived cat going through catnip withdrawal.

    • Wussywulf? (Score:3, Interesting)

      by MikeFM ( 12491 )
      I'm too lazy to run the numbers tonight to compare actual speeds, but our dual-CPU quad-core Xeon (8 cores total) servers cost around $2,500 each to build. Looking at their specs, I doubt they could be doing much better, and they require special clusterish programming.
      • Re: (Score:2, Informative)

        by Draconian ( 70486 )

        they require special clusterish programming

        So? On an SMP machine you need special SMP-ish programming. Great fun if your memory bandwidth runs out...

        Some problems run naturally on distributed systems, some on shared-memory systems. It's a matter of choosing the right machine for the task at hand. Programming in MPI isn't that hard, and unless you are network-bound (either bandwidth or latency) it scales well. That is the equivalent of an SMP machine not being memory-bound (bandwidth, latency, coherency, ...)
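        To make the "MPI isn't that hard" point concrete, here is about the smallest useful MPI program I can think of - a sketch of my own, not code from TFA: every rank sums an interleaved share of a series locally and MPI_Reduce combines the partial sums on rank 0. Build it with mpicc and launch with mpirun -np <number of cores>.

        /* Minimal MPI sketch: each rank sums an interleaved share of the series
         * sum(1/i^2), then MPI_Reduce combines the partial sums on rank 0.
         * Compile with mpicc, run with e.g. "mpirun -np 8 ./series". */
        #include <stdio.h>
        #include <mpi.h>

        int main(int argc, char **argv)
        {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            double local = 0.0, total = 0.0;
            for (long i = rank + 1; i <= 100000000L; i += size)
                local += 1.0 / ((double)i * (double)i);

            MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (rank == 0)
                printf("sum = %.12f (approaches pi^2/6 = 1.644934...)\n", total);

            MPI_Finalize();
            return 0;
        }

        The only communication is a single reduce at the end, which is exactly the kind of workload that scales fine over plain gigabit Ethernet.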

    • by dbIII ( 701233 ) on Friday August 31, 2007 @03:56AM (#20422121)
      Doubt it. You think you can hook up gigabit ethernet without at least five cats eh?
    • by maroberts ( 15852 ) on Friday August 31, 2007 @04:21AM (#20422247) Homepage Journal
      They just linked 4 motherboards together. My cat could do that.

      Would your cat be alive at the end of the process? We wouldn't be sure till we opened the case.
    • ***They just linked 4 motherboards together. My cat could do that.***

      Sure, and Fluffy could probably mount a jet engine on a bicycle too. But could she make either the motherboard farm or the jetsicle actually do anything non-lethal? For more than 15 seconds?

      I think it's an impressive accomplishment and worth noting. Doesn't look like it would fit in any of my suitcases though. Not without disassembly, at any rate.

  • heat buildup issues? (Score:3, Interesting)

    by toQDuj ( 806112 ) on Friday August 31, 2007 @03:00AM (#20421871) Homepage Journal
    And it looks like they'll be running into heat buildup issues. An enclosure ventilated by one or two desktop fans would have provided sufficient cooling; mere convection (outside of the tiny on-board fans) is often not enough. The Sun E450s were well-ventilated machines, with a clear air path going from the front to the back. The temperature monitors (ambient, CPU (x4), PSU (x3)) were useful as well. One was used for a long time at Stack (www.stack.nl) as a room temperature monitor.

    B.
  • Great! (Score:5, Funny)

    by Colin Smith ( 2679 ) on Friday August 31, 2007 @03:05AM (#20421879)
    Now Microsoft have their next development target for Office.
     
  • But (Score:4, Funny)

    by phalse phace ( 454635 ) on Friday August 31, 2007 @03:08AM (#20421891)
    is it powerful enough to run Windows Vista?
  • Lame. (Score:4, Insightful)

    by Anonymous Coward on Friday August 31, 2007 @03:09AM (#20421899)
    I am impressed with how amazingly lame this story is. It should have been entitled, "College Senior and Professor discover Ethernet, MicroATX, and PXE boot. Funding dried up before paying for cases. News at 3 am because we can't find anything else to report."

    Honestly, our whole research lab is filled with PXE booting MicroATX computers connected via ethernet. And I guarantee that four "nodes", aka Linux PCs, are cheaper than $2500. Whoop-de-freaking-do.
    • Re:Lame. (Score:4, Informative)

      by GreatBunzinni ( 642500 ) on Friday August 31, 2007 @05:24AM (#20422485)

      And I guarantee that four "nodes", aka Linux PCs, are cheaper than $2500.

      Indeed. After I saw the component prices I was left dumbfounded. I mean, AMD Athlon 64 X2 3800+ processors at 165 dollars a pop? A Kingston 1GB DDR2-667 stick of RAM at 124 dollars? Are they on drugs? I mean, I've just bought an Athlon 64 X2 4000+ EE for 68 euros (the 3800+ was selling for 59 euros) and each Kingston 1GB DDR2-800 stick for 46 euros. Where did all the rest of the money go?

      • Re:Lame. (Score:4, Funny)

        by dreamchaser ( 49529 ) on Friday August 31, 2007 @06:06AM (#20422659) Homepage Journal
        They probably padded the budget and spent the remaining money on hookers and blow. That would explain how they got delusions of grandeur and thought they built something new and innovative when all they did was link 4 motherboards via cheap gig-ethernet.

        This story is literally a 'nothing to see here, move along' one.
  • the google way (Score:5, Interesting)

    by arabagast ( 462679 ) on Friday August 31, 2007 @03:11AM (#20421909) Homepage
    This seems pretty similar to the way Google builds their racks, with just motherboards and no cabinets. What would have been really cool is if someone made some kind of network driver for a PCI Express slot that could use external cables. Is it possible to use a dedicated PCI Express slot as an interface to another computer, skipping the network bottleneck?
    • Re: (Score:3, Interesting)

      by Petaris ( 771874 )
      Myself and some other students (back when I was in college) played with doing this via PCI SCSI cards. It worked to a point, but it wasn't quite the same, as all you were really doing was providing SCSI access to each system's HDDs. Still, it would have allowed quite fast data sharing if configured correctly. As we had no real goal - it was just one of those "I wonder if we can do it" times - we didn't go further than just the HDD connections and copying files across, which was very fast. :)
    • Re: (Score:3, Interesting)

      (Commenting rather than modding)

      I've often wondered the same myself. Sure, you can get some speed optimizations by running a slimmed-down wire protocol over the Ethernet, but it's intuitive that any additional hardware between nodes adds latency. Unless NIC hardware is essential for something like buffering, I'd think some sort of PCI bridging driver would be much better suited for this sort of setup.

      If anyone's heard of anything like this please share. I'm off to do some more Googling for it myself.

      -S
    • Re: (Score:3, Interesting)

      by dave420 ( 699308 )
      You'd have to implement some sort of switching, as the motherboards in question only have one PCIe slot. You'd have to find a motherboard with as many PCIe slots as there are computers wanting to talk to each other to act as a switch, or have them all talking over one connection, which would diminish performance greatly.
  • by fgodfrey ( 116175 ) <fgodfrey@bigw.org> on Friday August 31, 2007 @03:17AM (#20421947) Homepage
    ...this is *hardly* a supercomputer. This is 152.57 times slower than entry number 500 on the Top 500 List [top500.org]. There isn't a nice neat definition of what a supercomputer is anymore, but "capable of running Beowulf" isn't it. Leaving aside the more custom machines that the company I work for (and a few others) build, there are plenty of Linux clusters that *do* qualify. The fastest one seems to be number 8 on the current Top 500 list (a Dell Infiniband cluster at NCSA).
    • Just judging from the performance, it's clearly not a supercomputer; you can get more than 26.5 GFLOPS with a single (expensive) Xeon CPU in a standard PC, and it will not necessarily cost more than $2,500. But this is a student project; I guess the idea was designing and building a supercomputer, not just building a fast computer. And this is clearly *designed* as a supercomputer - it's just not fast - but don't let that cloud your judgment.
      • by Jonner ( 189691 )
        Microwulf is designed as a Beowulf cluster, but how does that make it a supercomputer? To put it another way, what is super about a computer that isn't fast?
    • Looks like this Top500 list would do nicely for a definition of a supercomputer.
    • Eh, it's got about double the GFlops of Deep Blue...

    • 1999 called (Score:3, Informative)

      by Sangui5 ( 12317 )
      They want their slowest Top 500 machine back...

      List of #500 on the TOP500 by year
      Year    - Rpeak       | Machine's owner and country | Make & Model
      06/1998 - 15.0 GFLOPS | Southwestern Bell, USA      | HPC 6000, Sun
      11/1998 - 20.5 GFLOPS | Koeln Universitaet, Germany | HPC 10000, Sun
      06/1999 - 34.2 GFLOPS | CIEMAT, Spain               | T3E900, Cray
      11/1999 - 38.4 GFLOPS | Bank, United States         | HPC 10000 400 MHz, Sun
      06/2000 - 51.2 GFLOPS | EDS, United States          | HPC 10000 4
  • by Colin Smith ( 2679 ) on Friday August 31, 2007 @03:23AM (#20421971)
    One of the problems with supercomputers is that there aren't really very many of them, because of the size and cost. It means that the tools you use to run your supercomputing applications are similarly unusual. The skills to use and develop on parallel systems are then equally scarce. Access to a supercomputer isn't exactly common.

    Microwulf could make all of the above common. For the price of a high spec PC. The commodity nature of it could bring super computing and super computing applications to the masses.

    Then you can scale your application from microwulf to miniwulf to superwulf with little more effort than installing it on the bigger machine.

    Course, they'd have to produce a commodity pre-built system.
     
    • Re: (Score:2, Interesting)

      by Solra Bizna ( 716281 )

      The more computing power is available in the world, the less it will be used to its potential. If everyone had an Earth Simulator in their basement, how much of that power would be wasted?

      Not saying that proliferation of computers is bad, just food for thought.

      -:sigma.SB

      P.S. SETI@home, Folding@home, etc. are cheating. :P

    • One of the problems with supercomputers is that there aren't really very many of them, because of the size and cost. It means that the tools you use to run your supercomputing applications are similarly unusual. The skills to use and develop on parallel systems are then equally scarce. Access to a supercomputer isn't exactly common.


      Revolutionary? Everything old is new again...

      http://www.mini-itx.com/projects/cluster/ [mini-itx.com]
      http://news.taborcommunications.com/msgget.jsp?mid=494184&xsl=story.xsl [taborcommunications.com] -- 8 way parallel cluster that fits on an airplane for under 3 grand
      http://www-03.ibm.com/systems/bladecenter/ [ibm.com] -- a 7U chassis that holds 14 blades, and is a bit spendy, but not completely unreasonable for some situations
      http://www.linuxjournal.com/article/8177 [linuxjournal.com] -- My personal favorite, this page talks about several small portable miniclusters that have been made over the last six or seven years...

      Yes, 8 cores of Athlon64 are faster than 8 cores of low-power VIA CPUs from several years ago, but the concept isn't revolutionary, and there isn't a lot of headline-worthy engineering that goes into a project like this... I'm sure it's a very handy tool, and I'm not suggesting it shouldn't have been built, or that it was entirely trivial to build, but in the end, it's just four ordinary motherboards and Ethernet.
      • but in the end, it's just four ordinary motherboards and ethernet.

        Sure, and I've built similar (bigger & faster) custom systems. But I'm expensive and the knowledge I have is uncommon. Your average Windows admin wouldn't have a clue. This could be a cheap, drop-in commodity supercomputer.

        Hell, the IBM SP was a commodity pre-built supercomputer. This is much cheaper.

        but the concept isn't revolutionary

        No, the concept hasn't been revolutionary for decades, the effect might be though.

      • by MooUK ( 905450 )
        It's still interesting to many of us, simply *because* we could probably build one ourselves - not in spite of it.
        A lot of what we humans do in life is "because we can". This doesn't appear to be any different.

        (It slightly amused me that the captcha to log in to post this post was "differer".)
  • by Kantana ( 185308 ) on Friday August 31, 2007 @03:28AM (#20421993)
    I see a few people making the expected "it's just four motherboards wired together with Gig-E" comments. While I won't object to that, I'd say this is not about a groundbreaking evolution in hardware; it's more a case of demonstrating what's possible today with COTS parts. Add to that the compact packaging and the ability to run off a single power cord, and it's a nice setup IMHO.

    While it does not have the interconnect of "true HPC" hardware (a bit of a fleeting distinction, but bear with me) it'll surely be suitable for a lot of the simpler, yet still compute-intensive tasks out there ("simple" here meaning not needing a lot of intra-node communication).

    On the flip side, it might fuel the "hell, I'll just build my own cluster" mentality going around these days. I work in the HPC group at a university, running Linux clusters, IBM "big iron" and a couple of small, old SGI installations, and we certainly see a bit of that going around. Problem is, sure, the hardware is cheap and affordable, but getting it to run in a stable and sensible manner without spending large amounts of time just keeping the thing together is a challenge, mainly due to the immature state of clustering software. As many researchers are not exactly keen on spending time solving problems outside their specific field, they're usually better off letting somebody else administer things so they can just log on and run their stuff.

    But for individuals and small groups of people who are computer-savvy enough to handle it, things like these are definitely a "good thing" (TM).
  • by bundaegi ( 705619 ) on Friday August 31, 2007 @03:31AM (#20422001)
    Sure, nothing beats off-the-shelf components... but powering 4 motherboards using 4 separate PSUs sounds like a waste!

    Look at this design: http://www.mini-itx.com/projects/cluster/ [mini-itx.com]. It uses DC-DC converters on each motherboard (mini-ITX, so low power), a single 12V PSU, and a UPS for regulation:

    The DC-DC converters require a clean, well-regulated 12VDC source. I chose to use a heavy-duty 60-ampere 12VDC switching power supply capable of delivering 60 amperes peak current, which I ordered from an online electronics test equipment supplier. Since badly conditioned AC power is potentially damaging to expensive computing equipment, I use a 1 KVA UPS purchased at an office supply store to make sure the cluster can't be "bumped off" by power line glitches and dropouts.
    • I was going to post the same thing. It was the first thing that popped into my head after reading the headline.

      Another group is producing much the same thing commercially, in a nice case and all. A 4-node Core 2 1.8GHz with 1GB of RAM per node and 2x 250GB of storage is about $7,000 (USD).

      (Wonder how that stacks up to what he built speed/cost wise, though I'd bet the Via cluster beats all in power use (140W max load))

      See the link at Mini-ITX
      http://www.mini-itx.com/2007/02/26/the-octimod-mini-itx-cluster [mini-itx.com]

      Company s
      • Thank you for completing my post. Yes, that octimod setup looks sweet.

        For a more hands-on approach, maybe these 200W+ DC-DC converters will do: http://www.mpegcar.com/acatalog/200w_and_above_PSU.html#aD220PSU [mpegcar.com]. The 220W version is rated at 95% efficiency... can't go wrong with that!

      • About the speed: sorry, forget it. The VIA cluster dies a horrible death in most FPU-intensive computations. Even a normal Core 2 Quad will crush that cluster.

        A single core of a Core 2 at 2GHz is about 8 times faster in FPU stuff like rendering than a 1.4GHz C7. The integer side is more competitive, though.
  • GigaFlops (Score:5, Interesting)

    by jma05 ( 897351 ) on Friday August 31, 2007 @03:32AM (#20422011)
    Is 26 GigaFlops significant anymore? I hear that the PS3 can do 20-25 from the Folding@Home people, and it's only about a fifth of the price. But I hear so many different numbers that I can no longer make sense of them. Why do they bother comparing with Deep Blue, a supercomputer that's over 10 years old? Can anyone with a PS3 report what their PS3 with Yellow Dog Linux is doing? And what are the numbers for the latest desktop processors? Any recommendations on software to benchmark my own computers in flops?
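    On the last question: the comparable numbers (the ones the TOP500 uses) come from the HPL/Linpack benchmark, so that is the thing to run. For a very rough single-core sanity check you can also just time a tight floating-point loop yourself; a toy sketch of my own follows (it badly undercounts peak, since it is one serial dependency chain with no SIMD and no threads):

    /* Very rough single-core FLOPS estimate: time N multiply-adds on doubles.
     * The serial dependency chain means this measures FP latency, not peak;
     * real rankings use HPL/Linpack. Compile with optimization, e.g. -O2. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        const long n = 200000000L;              /* 200M iterations, 2 flops each */
        double a = 1.000000001, x = 1.0;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < n; i++)
            x = x * a + 1e-9;                   /* one multiply + one add */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("x = %g, ~%.2f GFLOPS (one core, no SIMD)\n", x, 2.0 * n / secs / 1e9);
        return 0;
    }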
    • Re: (Score:2, Informative)

      by skulgnome ( 1114401 )
      Are your numbers on single-precision computation, or double-precision? Because the PS3's Cell only does amazingly quick floating-point on single-precision values. Double precision is six, seven times as slow.
    • Indeed, the main thrust of their claim is "Newer components are faster and cheaper than old ones." Gasps of surprise.
    • For current Intel Core 2, to get the Rpeak, take the total number of cores * clock speed * 4. A single quad-core at 2.0 GHz gets you to 32 GFLOPS already. You can readily build a rig with a single 2.0 GHz quad-core for less than $2,500. This is a non-event. It serves as a handy demonstration, to people not in the industry, of how supercomputers are roughly architected today, but the price/flop is nothing special at all.
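      Spelled out as a back-of-the-envelope calculator (a sketch of my own; the 4-flops-per-cycle figure for Core 2 is the parent's, and the 2-per-cycle figure I use for an 8-core Athlon 64 X2 setup is an assumption - measured Linpack numbers always come in below these peaks):

      /* Back-of-the-envelope Rpeak: cores x clock x flops-per-cycle-per-core.
       * The 4/cycle figure for Core 2 is the parent's; the 2/cycle figure used
       * for the Athlon 64 X2 nodes is an assumption. Measured Linpack numbers
       * (Rmax) always come in below these peaks. */
      #include <stdio.h>

      static double rpeak_gflops(int cores, double ghz, int flops_per_cycle)
      {
          return cores * ghz * flops_per_cycle;
      }

      int main(void)
      {
          printf("Quad-core Core 2 @ 2.0 GHz:     %.1f GFLOPS peak\n",
                 rpeak_gflops(4, 2.0, 4));
          printf("8 cores @ 2.0 GHz, 2 per cycle: %.1f GFLOPS peak\n",
                 rpeak_gflops(8, 2.0, 2));
          return 0;
      }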
  • gigaflops? (Score:3, Insightful)

    by apodyopsis ( 1048476 ) on Friday August 31, 2007 @04:02AM (#20422147)
    gigaflops, schmigaglops.

    this is /.

    i thought performance was measured in fps?

  • Did anyone else notice the poster of Monty Python and the Holy Grail behind him?
  • I thought one part of the definition of a supercomputer was that the cost exceeds a million (used to be that cost exceeds ten million). Dollars or Pounds, doesn't really matter even with the current exchange rate :-)
  • Wouldn't that be a Beowulf Cub?
  • by Dogun ( 7502 )
    In '97-98, I had a little 4-5 node Beowulf cluster on a cart with wheels. While it wasn't quite as cost-effective as this, that's the nature of pricing in the computing world.

    On that note... hard drives are good to have for all nodes, imo, since you may be doing things that make 'fetch/store data over the network' a bad strategy.

    The entire point of the Beowulf model is that it's cheap, easy, and fun. While it's great to see people building cute little clusters like this one, I wouldn't exactly call this a brea
  • This has 4 dual-core CPUs - 8 cores. That's the same as a Mac Pro or a dual quad-core Xeon PC, whose cores are more powerful and which have much better communication between CPUs. And they have cases ;)

    So is a dual quad-core Xeon a supercomputer too?
  • A striking resemblance for a box of bits. I wonder if it's got the same surly attitude.
  • Am I missing something here? The SiSoft Sandra MFLOPS measurement for a top-end Intel Core 2 is 47 GFlops http://www.tomshardware.co.uk/overclocking-intel,review-2395-28.html/ [tomshardware.co.uk]. OK, admittedly this is a synthetic measurement, but it's a ballpark figure, right?

  • They attain gigaflops and call it a supercomputer? I thought you had to at least reach teraflops these days...

    From Wikipedia: Supercomputer

    The speed of a supercomputer is generally measured in "FLOPS" (FLoating Point Operations Per Second), commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (10^15 FLOPS, pronounced petaflops).

    It's not exactly a good quote, but it looks to me like we're bumping the lower edge of the petaflop scale these days. That's six orders of magnitude, people.

  • The University of Kentucky (where he is coincidentally going to grad school) beat his price point years ago on a "real" supercomputer. This supercomputer [aggregate.org] was built for about $84 per GFLOP in 2003, and it made the Top500 list when it was built. The Aggregate team at UK is one of the tops in the field when it comes to supercomputers on the cheap.
  • Seems like 2GB per (dual-core) node is a little on the low side for practical usage. Not surprisingly, though, RAM is the biggest cost of the system ($992 total), and switching to 2GB or 4GB modules would raise the system price considerably. It would still be cheap, though.
  • I mean, it wouldn't even be enough to run Jita, let alone the whole EVE-Online cluster. :)
  • Wow!

    I repeat, wow!

    How exactly does this qualify as newsworthy?

    This is almost as bad as the time some goose bought a Mac mini and, before the sales launch was a week old, went and ripped the guts out and stuck them in a frickin' PC minitower case so he could "run a cheap server". What a dingbat.

    On second thought, the Mac mini destroyer's effort was *much* worse than this; at least there is some merit to what these guys did, and they didn't go and wreck a nice piece of kit in the process. It's just not exa
  • The cluster depends on gigE for the interconnect, which means data transfers are going to be slow, and have a high latency. He'd be better off spending a little more and using Infiniband equipment.
  • by seven of five ( 578993 ) on Friday August 31, 2007 @07:59AM (#20423407)
    Beowulf is a good idea for a very limited number of number-crunching applications, or as a learning tool for comp sci students or related studies. Yeah, we built one of those a couple of years ago; the professors ended up picking up Intel quad-core machines that were faster (no effing network latency). The Beowulf is gathering dust.

    Oh, and try writing your own LAM/MPI code sometime...
  • by Traa ( 158207 ) on Friday August 31, 2007 @10:43AM (#20425479) Homepage Journal
    I thought the hip thing was GPU-based supercomputing. NVIDIA even has a dedicated GPU-based, desktop-sized, scalable supercomputer line called Tesla.

    The basic Tesla unit, the C870, is 518 gigaflops for ~$1,300.
    The Tesla S870 is 2 teraflops for ~$12,000 (still desktop-sized).

    NVidia Tesla [nvidia.com]
  • PS3Wulf (Score:3, Informative)

    by Doc Ruby ( 173196 ) on Friday August 31, 2007 @11:17AM (#20425905) Homepage Journal

    Also in 2003, the University of Illinois at Urbana-Champaign's National Center for Supercomputing Applications built the PS2 Cluster for about $50,000.

    The PS3 comes out of the box with a Cell uP [wikipedia.org] that gets something like 20 GFLOPS [stanford.edu] per $500 PS3. It's already being networked into clustered supercomputing [wikipedia.org] like this Microwulf.

    A $500 PS3 has 20 of the 26.5 GFLOPS the $2,800 Microwulf has. Microwulf runs Ubuntu, which can also run on the PS3 [psubuntu.com]. If people can port Linux libraries like Mesa/OpenGL/X to the PS3's SPEs, where most of the power lies, then we'd be looking at $25/GFLOPS, not the $94/GFLOPS of the Microwulf (rough arithmetic below).

    And while taking a break, you can play Gran Turismo 5, and 40 more games you can afford with the money you save on HW.
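
    For what it's worth, the price-per-GFLOPS arithmetic being thrown around in this thread, spelled out (the prices and GFLOPS figures are the ones quoted in the summary and the comments, not verified specs):

    /* $/GFLOP using figures quoted in the summary and this thread
     * (commenters' claims, not verified specs). */
    #include <stdio.h>

    static void per_gflop(const char *name, double dollars, double gflops)
    {
        printf("%-24s $%6.0f / %5.1f GFLOPS = $%5.1f per GFLOP\n",
               name, dollars, gflops, dollars / gflops);
    }

    int main(void)
    {
        per_gflop("Microwulf", 2500.0, 26.25);       /* roughly the $94-$95 figure */
        per_gflop("Single PS3", 500.0, 20.0);        /* ~ $25 */
        per_gflop("8x PS3 cluster (NCSU)", 4000.0, 218.0);
        return 0;
    }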
