Supercomputing IBM IT

Supercomputer On the Cheap

jbrodkin writes "You don't need Ivy League-type cash to get a supercomputer anymore. Organizations with limited financial resources are snatching up IBM supercomputers now that Big Blue has lowered the price of Blue Gene/L. Alabama-Birmingham and other universities that previously couldn't afford such advanced technology are using supercomputers to cure diseases at the protein level and to solve equally challenging problems. IBM dropped the price of the Blue Gene/L to $800K late last year before releasing a more powerful model, Blue Gene/P, last month. Sales of Blue Gene/L have more than doubled since then, bringing supercomputing into more corners of the academic and research worlds."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • From TFA (Score:5, Funny)

    by DaveCar ( 189300 ) on Wednesday August 01, 2007 @07:06AM (#20068911)
    At its highest price, the Blue Gene/L cost $1.3 million per rack

    Pamela Anderson eat your heart out!
    • Re: (Score:3, Funny)

      by IBBoard ( 1128019 )
      And knowing most super computers, both would be far too large, ugly, and filled with silicon.
      • Re:From TFA (Score:5, Funny)

        by mwvdlee ( 775178 ) on Wednesday August 01, 2007 @07:27AM (#20069043) Homepage
        On the plus side, most supercomputers are fully hot-swappable; try doing that with women.
        • Re: (Score:3, Funny)

          by Gazzonyx ( 982402 )

          On the plus side, most supercomputers are fully hot-swappable; try doing that with women.

          My experience says the hot swap turns into a dual cold shoulder; it has something to do with malloc failing to allocate enough room to store the correct name, or a null pointer being dereferenced when trying to remember it. Oh well. There's still hope.

          while (1)
          {
              myGirl = myGirl->cuteFriend;
              delete myGirl->last;
          }
          myGirl->isHappyEnding = !(myGirl->isHappyEnding);

          • Re: (Score:3, Funny)

            by suggsjc ( 726146 )
            It appears that you have some problems with your logic, as you will just stay in that while loop indefinitely (yeah right). Here is the updated code, and man is it complicated! But if you learn one thing from this, it's that it only gets more expensive...

            // initialize variables
            myGirl = arg[0];
            acctbal = arg[1];
            girl_count = 1;

            while (!screwed) {
                date_cost = 20;
                while (!bored && !dumped) {
                    date_success = date(date_cost);

                    if (date_success) { date_cost = date_cost * 1.75; }
                    else { dumped = true; }

            • Haha, thanks, I needed that laugh. And, yes, the infinite loop was there on purpose - just beyond the function's reach, 1 line away might have been a million ;)
        • by hitmark ( 640295 )
          Sorry, I value my life too much for that.
    • Let's not forget that with Pam, you only got one rack. IBM gives you *several*.

      Of course, they're all blue, but picky, picky.
    • I'm sure Pamela's rack has gotten her way more than $1.3 mil.
    • I know there has to be a joke regarding the bit about "snatching up supercomputers" and it would belong in this thread.
  • "Supercomputer" (Score:3, Insightful)

    by pzs ( 857406 ) on Wednesday August 01, 2007 @07:07AM (#20068917)
    Anybody can have a supercomputer on the cheap because the definition of supercomputer changes every 3 seconds.

    Peter
    • Re:"Supercomputer" (Score:5, Interesting)

      by Ginger Unicorn ( 952287 ) on Wednesday August 01, 2007 @07:13AM (#20068959)
      I think my PS2 is a supercomputer, isn't it? Wasn't the US government going to restrict exports on them because they were considered munitions or something daft like that? Same thing for the old Mac G5, as I recall. Might be a stupid urban myth though.
      • Re: (Score:3, Funny)

        by cerberusss ( 660701 )

        I think my PS2 is a supercomputer, isn't it?
        Hah! These old and heavy IBM PS/2s are deadly weapons once loaded into my bolt thrower. Add a Blue Gene and my goal of World Domination comes even nearer.
      • Re:"Supercomputer" (Score:4, Informative)

        by morgan_greywolf ( 835522 ) on Wednesday August 01, 2007 @07:43AM (#20069163) Homepage Journal

        Might be a stupid urban myth though.


        Nope, at least on the PS2 count (I don't know about Mac G5s). Back in 2000, Saddam Hussein was purchasing Sony PS2s by the thousands [freerepublic.com], which were then banned from export after being classified as munitions.
        • AltiVec on the G5 processor could do quite a lot of calculation; no wonder if they were banned for that performance.
          • Not so much as a supercomputer. They were classified as a munition and export-controlled by ITAR. Funny thing is, Motorola was fabbing them overseas. I did sysadmin for the design-for-test group. Of course, encryption was once controlled by ITAR too. I have the "this t-shirt is a munition" shirt with the code to do the actual encryption, along with pictures of me wearing it around the world.
        • by Wavicle ( 181176 )
          Oh come on, that WND article doesn't pass muster. Most notably, Sony is not an American company, and Saddam could easily have acquired his PS2s in Japan, Taiwan or Europe. They didn't need a special ban on PS2 transfers to Iraq; Iraq was already an export-controlled country. The article is laughable... Iraqi weapons scientists need PS2s to browse the web? How could they have published that with a straight face?

          As to whether this was motivated by the PS2 being a supercomputer... rubbish. The w
      • >they were considered munitions
        Certainly true that firing a PS2 out of a big gun was about the best thing you could do with them.
        • Actually it was their components, which were supposedly usable in missile systems; not the game system as a weapon.
      • I remember (Score:3, Informative)

        by Solandri ( 704621 )
        Way back when I was in jr. high around 1980, my friends and I were going ga-ga over the latest issue of Byte magazine at the library. It had a chart listing various computers (processors) and their FLOPS [wikipedia.org]. The 6502 (Apple II) and 8088 (IBM PC) were listed at less than 1000 FLOPS (they didn't do floating point so it had to be emulated in software). We were drooling over the Cray Supercomputer which was listed at 1 million FLOPS, or 1 MFLOPS.

        A 2.4 GHz Core 2 Duo rates around 500 MFLOPS. An nVidia 8600GT

        I think my PS2 is a supercomputer, isn't it? Wasn't the US government going to restrict exports on them because they were considered munitions or something daft like that? Same thing for the old Mac G5, as I recall. Might be a stupid urban myth though.

        FYI Apple computers can be used to make a supercomputer. The MACH5 [top500.org] is number 50 in the TOP500.

        Supercomputers are cheap, but many other products aren't getting cheaper at the same rate. A pack of toilet paper might be practically free if prices dropped as fast. However, who
    • Re: (Score:2, Interesting)

      by BrianHursey ( 738430 )
      I think supercomputers are a thing of the past. Nowadays clusters are the way to go. Much cheaper and more flexible.
      • Re:"Supercomputer" (Score:5, Insightful)

        by jcgf ( 688310 ) on Wednesday August 01, 2007 @09:28AM (#20070439)
        Every time there's an article on supercomputers someone brings up clusters. As has been pointed out before, a cluster only works for easily parallelizable problems, where you can divide your problem into many subproblems distributed amongst your nodes. This is not an issue with supercomputers, as you have much faster communication amongst processors (i.e. they're not just cheaply connected with Cat5 Ethernet cable like a Beowulf cluster), and thus you can solve such problems much faster on a supercomputer.

        Supercomputers aren't going anywhere fast.
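A toy cost model (my own sketch, with made-up constants, not from the article or the thread) of the point above: add a per-node communication penalty to Amdahl-style speedup and a cheaply interconnected cluster loses badly on tightly coupled jobs.

```python
# Amdahl-style speedup with a linear communication penalty:
# p = parallel fraction, n = nodes, c = per-node communication
# overhead as a fraction of serial runtime. All constants invented.

def speedup(n, p=0.95, c=0.0):
    """Speedup over one node for an n-node run."""
    return 1.0 / ((1.0 - p) + p / n + c * n)

# Fast interconnect (tiny overhead) vs cheap Ethernet (larger overhead):
fast = speedup(64, c=0.0001)
cheap = speedup(64, c=0.01)
print(round(fast, 1), round(cheap, 1))  # -> 14.0 1.4
```

Same 64 nodes, two orders of magnitude apart in communication cost, roughly a tenfold difference in delivered speedup; that gap is what the supercomputer's interconnect is buying.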

        • You are correct: a cluster needs to be used for parallelizable preprocessing problems. However, it can also be used for clustered database operations via fibre over a SAN (Storage Area Network) environment. This gives the flexibility to add more nodes in a business environment to expand storage and preprocessing power. One example is Oracle RAC cluster software with OCFS, the Oracle Cluster File System. With logical volume managers and a flexible storage array like an EMC Symmetrix (high end) or an EMC CLARiiON
        • by Rhys ( 96510 )
          Clusters are supercomputers. Supercomputers are clusters. The only question is what purpose your machine is built for. Some problems aren't tightly coupled (or your machine is small) and so an interconnect like Gig-E is fine. Some problems are and you want one of the specialty interconnects: Myrinet, Infiniband, Quadrics Elan, or possibly even 10Gig-E.

          Any machine you see in the top500 list that mentions any of those interconnects is a cluster. I don't really think you can classify #7 and #8 on that list as
          • by cerelib ( 903469 )

            Clusters are supercomputers. Supercomputers are clusters.

            It might be nitpicking, but I think it would be more appropriate to say, "Clusters can be supercomputers. Supercomputers can be clusters." Because I can make a cluster that is not a supercomputer and make a supercomputer that is not a cluster.
        • by suggsjc ( 726146 )

          Supercomputers aren't going anywhere fast.
          Isn't that an oxymoron?
        • Clusters of independent workstations do not preclude you from using a fast interconnect. I've designed and purchased (or helped my employer purchase) any number of clusters, including one that made it on the Top500 list when it was sited next to ASCI BlueGene/L. The most recent cluster I'm purchasing has an InfiniBand 4X DDR interconnect, sufficient for many supercomputing applications. You are in no way restricted to embarrassingly parallel problems with cluster computing.
      • I think supercomputers are a thing of the past. Nowadays clusters are the way to go. Much cheaper and more flexible.

        Yes, no, maybe.

        I guess the best definition of a cluster vs a "real" supercomputer is distributed memory connected via some kind of interconnect vs a large shared-memory SMP. A Blue Gene is a distributed memory system connected via interconnects. The Cray XT4 and XT3 are distributed memory systems connected via interconnects. Actually, I think that SGI is the only guy that really makes large
    • Re: (Score:2, Interesting)

      by solevita ( 967690 )
      I think you mean that anybody can have an old supercomputer. The ever-changing definition means staying on top requires a little more than $400,000.
    • by f97tosc ( 578893 )

      Anybody can have a supercomputer on the cheap because the definition of supercomputer changes every 3 seconds.


      Although presumably the definition is revised up in terms of performance... It is not like everyone is sitting on old slow computers which suddenly become supercomputers by definition.
      • It is not like everyone is sitting on old slow computers which suddenly become supercomputers by definition.


        Well there go my plans to bring forth my dark legions of 486s and P75s...

        Anyone know where I can get a bunch of ISA NICs and 10 Mbit hubs?

  • Beowulf! (Score:3, Funny)

    by rgravina ( 520410 ) on Wednesday August 01, 2007 @07:10AM (#20068933)
    Imagine a Beowulf cluster of these!
    • Wow moderators, since when are old lame jokes redundant? (He's the first to post our beloved Beowulf-phraseme in this discussion.)

      And he's even right, clusters are the most frequent architecture in the TOP500 [wikipedia.org]:

      373 systems are labeled as clusters, making this the most common architecture in the TOP500 with a stable share of 74.6 percent.
      • Since I was modded "offtopic" but I really feel *on*topic let me elaborate my point:

        It's nice that the price of the Blue Gene/L is now considerably lower, but the low-budget "supercomputer" is a cluster of inexpensive computers. That is probably shown by their number (373 of 500) in the TOP500 [wikipedia.org] and by the fact that Google runs several clusters [wikipedia.org] of x86 PCs.

        Distributed computing is a different solution to the same task that a Blue Gene/L can solve, both have their strengths and weaknesses.

    • I'm just waiting for tomshardware to publish an article on overclocking one of these. "We get awesome framerates in Quake!"
    • I was trying to think of the configuration of an $800K Beowulf cluster; what would the performance comparison be to the IBM Blue Gene?
  • Are there any Ivy League schools that actually have one of these? I don't recall seeing any Blue Gene systems very high up on the top500.org list at any of the eight Ivies.
    • Re:ivy league cash? (Score:4, Informative)

      by necro81 ( 917438 ) on Wednesday August 01, 2007 @08:25AM (#20069611) Journal
      If you search through the whole top500 [top500.org] list, you'll find these Ivy Leaguers with Blue Gene computers:

      #93 Harvard
      #382 Princeton

      But, there are plenty of other US schools on the list with Blue Gene computers (and many outside the U.S. as well):

      #5 SUNY Stony Brook
      #7 Rensselaer Polytechnic
      #63 California-San Diego
      #374 Boston University
      #376 Iowa State
      #379 MIT
      #383 Alabama-Birmingham
      • I went to ISU, and man was that ever celebrated when we got that computer. One of the things we used it for (which I was involved with) was putting John the Ripper on it and using it to crack passwords at ISU's national cyber defense competition. That was fun :D....
    • by Andy Dodd ( 701 )
      In most cases, they bought supercomputers before IBM started making "cheap mass market" units. i.e. they don't have BlueGenes because they have custom one-offs.

      I know Cornell Theory Center has a few supercomputers that were top of the line when installed, but I think they're getting a bit old nowadays.

      Yup, Cornell has dropped off of the top500 for now. They held the #6 rank in 1995, were last on the top500 list with a ranking of 496 in 2006, and last held a top 100 ranking of 49 back in mid-2003. Just li
  • Normal business... (Score:3, Insightful)

    by IBBoard ( 1128019 ) on Wednesday August 01, 2007 @07:15AM (#20068969) Homepage
    Isn't this just normal business? "We're about to bring out the P series, so let's sell off the L series 'cheap'".

    Having said that, I don't suppose nearly half price is that bad an offer, even if $800K isn't exactly 'cheap'!
    • Re: (Score:2, Insightful)

      by DaveCar ( 189300 )
      Maybe it's supercomputerspam (tag anyone?).

      What with the IBM Saves $250M Running Linux On Mainframes [slashdot.org] story earlier it looks like IBM is pimping out their wares here on Slashdot.

      They are probably behind the milfy bewbs too (is it too hard to put those two words into a lameness filter?)
    • by Whiteox ( 919863 )
      Yeah, well, I bet it doesn't come with a monitor.
      I've seen these sorts of deals before; they'll only sell you the box, and the monitor, keyboard and mouse are all extra.
      What a bummer!
  • But will it run Linux?
  • by bblboy54 ( 926265 ) on Wednesday August 01, 2007 @07:34AM (#20069091) Homepage
    Stanford still has the best idea [stanford.edu].
    • by bunratty ( 545641 ) on Wednesday August 01, 2007 @07:37AM (#20069123)
      Distributed processing is fine for "embarrassingly parallel" problems where the compute nodes don't need to communicate with each other. However, many problems solved by supercomputers or large clusters need communication between the compute nodes, so aren't amenable to distributed solutions.
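By contrast, an embarrassingly parallel job maps cleanly onto independent workers with no cross-talk at all; a minimal sketch using Python's standard multiprocessing module (score_protein is a made-up stand-in for a real work unit):

```python
from multiprocessing import Pool

def score_protein(seed):
    """Stand-in for one independent work unit (e.g. a folding trajectory)."""
    return sum((seed * i) % 97 for i in range(1000))

if __name__ == "__main__":
    # No inter-worker communication: each input is processed on its own,
    # which is exactly the property Folding@home-style projects exploit.
    with Pool(4) as pool:
        results = pool.map(score_protein, range(8))
    print(len(results))  # -> 8
```

The moment the work units need to exchange intermediate results every step, this pattern stops paying off and interconnect latency starts to dominate.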
      • by Nefarious Wheel ( 628136 ) * on Wednesday August 01, 2007 @08:35AM (#20069721) Journal
        A friend of mine once ran the massively parallel supercomputer centre at La Trobe. He told me how the transputer-based Connection Machine would run blindingly fast in parallel, only to have the lights slowly wink out until one small corner of the display was the only thing lit. He said it was disappointing, and rather funny, how parallel jobs tended to go linear over time.

        Yep, sometimes you just need a few processors running very fast cycles.

        Sigh... we miss you, Seymour Cray. Wish you hadn't taken your Jeep out that day.

        • That's the exact opposite end of the spectrum from embarrassingly parallel problems. In embarrassingly parallel problems you have so little data dependency that tasks can run independently or nearly independently. In your friend's case, the tasks were so interdependent that all of them were waiting on one task to finish, so there was nearly no speedup from adding more processors.

          The bottom line is that the best solution to some problems is a grid of loosely connected computers. The best solution to others

        • Re: (Score:2, Informative)

          by Beatlebum ( 213957 )
          The Connection Machine used up to 64K 1-bit processing elements configured to work lock-step with a single control unit (SIMD).

          The transputer was something completely different. It was a 32-bit processor with four high-speed connections to other transputers. This could be used to implement a MIMD processing network.

          The CM scaled well on data-parallel applications; the transputer was more suited to coarse-grained parallelism.

      • That's why it's a pity that SGI (much as I dislike Irix) got so thoroughly thumped. Those 8-proc O2000 nodes, connected into a distributed shared memory system that looked like a single, flat address space for many purposes, were nice machines and relatively simple to program. Affordable Blue Genes, Altixes, and E25Ks (affordable being a relative term) are still useful due to their simpler-to-use shared-memory design, compared with purely distributed code.
        • by mikael ( 484 )
          That's why it's a pity that SGI (much as I dislike Irix)

          That's why they got thumped. Every UNIX vendor had a slightly different flavour of UNIX. It meant that developers had to maintain separate builds of their application for each platform. The platforms which were the hardest/most expensive to develop for were the ones that fell off the bottom of the annual budget.
          • The confused business plan which placed them squarely in the path of bigger, more established, and better funded competitors didn't help either. Then there was buying Cray, and selling the SMP Sparc based system to Sun, because they couldn't figure out how to make money off it. Sun called it an E10K, and made a mint (and continued to develop it). SGI became roadkill.
            • by mikael ( 484 )
              Then there was buying Cray, and selling the SMP Sparc based system to Sun,

              I thought SGI were forced to split Cray up in order to conform to anti-monopoly legislation.

              Though, I still find it hard to believe that SGI's original corporate headquarters (Building 20) became a
              computer history museum [computerhistory.org].

              When Sun said they were going to make SGI history, they weren't kidding.
              • I'll have to look into that, as I always heard, including from SGI-types, that they didn't want the underpowered Sparc system in their lineup, so they dumped it for cash.

                Sun can be aggressive and boastful, but periodically right.
      • Why are people equating "distributed" with "no communication"? Distributed computing certainly allows internode communication and goes far beyond embarrassingly parallel problems. The architectures of grid-based computing reduce the ability to do cross-task communication. But there's nothing about a distributed architecture that would preclude an algorithm that requires parallel communication. In fact, I've deployed many machines and software systems on distributed architectures that require collective
  • by freedom_india ( 780002 ) on Wednesday August 01, 2007 @07:51AM (#20069219) Homepage Journal
    Supercomputers and Mainframes are for totally different purposes.
    A supercomp will do one and only one job in parallel, to finish it off much faster than any other computer.
    An M/F can handle multiple jobs at the same time at lower speed, but with considerable stability.

    For many companies, one S/390 running OS/390 or even an AS/400 (not related) is more than enough for their entire Notes setup.

    A supercomputer cannot be used to do that 24/7.

    They are fast racecars which cannot race outside the circuit.

    • Re: (Score:1, Funny)

      by blake3737 ( 839993 )
      They are fast racecars which cannot race outside the circuit.
      Are you trying to tell me I can take my Mainframe offroading? If I do this, can YOU be the one to tell my admin why his precious is covered in mud and burned clutch smell?
      • by jd ( 1658 )
        You'll be fine. Just remember, fuel should use a reliable delivery protocol and not UDP, and on no account use roofnet for a rollcage.
    • by Rhys ( 96510 )
      What are you talking about, "only one job"? We run an average of 30-40 jobs on the local supercomputer I manage at any given time. Each job has its own chunk of nodes to run on. We run the machine at an average of 85% load all the time (actually more like 90-95% load when used, plus a few hours of 0% use for maintenance each week). We do have failures in individual nodes fairly often, but what do you expect when you've got over 6k DIMMs in a room?

      You're right they are different, but your impression of how a
  • These days, an $800K supercomputer is going to be snapped up by financial institutions far faster than by academic and research outfits. Didn't Mitsubishi just close its research plant? Banks and financial companies DEVOUR data; they're the real customers for this sort of thing. It's nice to speculate on the Folding@Home numbers you'd get, but these things are going to be used to make real money.
    • by locokamil ( 850008 ) on Wednesday August 01, 2007 @08:46AM (#20069895) Homepage
      If the price goes even lower, perhaps they will. I find it difficult to see this happening though: the financial firm I work for has swung from supercomputers to Linux clusters, and is showing no signs of going back. The TCO for a bunch of Linux blades is just so much lower than for a supercomputer... and because banks are so conscious of their bottom lines, they usually don't improve things if they are already working.
      • Pfft, the amount of money I've seen wasted in a financial organisation I have done work for is... well, embarrassing really. Their spending on IT is pretty impressive, because they're looking for all manner of degrees of performance and resilience.

        No, the real reason they won't go for one of these is that the project manager signing the approvals doesn't understand why a shared memory supercomputer isn't the same as a big stack of server blades. And what he'd do with one if they got one.

        • I agree that even well-run banks waste money at times, but that wastage (at least at my firm) is on the client-relations side. Backend departments like IT, however, aren't nearly as profligate because in the end they're cost centers, while client relations can potentially lead to additional revenue for the firm.

          That said, if the need for supercomputer-level parallelization and power crops up, I (gasp) actually trust my bosses to know exactly how and why to use them: one of them worked with Cray Research ba
      • Probably a stupid question; what do financial institutions need the processing power for?
  • The previous one was Blue Gene/L and the current one is Blue Gene/P. Was there a Blue Gene/M through Blue Gene/O, and they just weren't released as production configurations?
    • by 777v777 ( 730694 )
      L stood for "Light", as in not quite the full heavy BlueGene we will build in the future. It is not just a letter.
  • by armodude ( 1133725 ) on Wednesday August 01, 2007 @08:09AM (#20069455)
    FOR RUNNING VISTA the way it was meant to be run!!!
  • by E++99 ( 880734 ) on Wednesday August 01, 2007 @08:10AM (#20069463) Homepage
    If $800,000 is still too pricey for you, you can get a Cray supercomputer on eBay for $800:
    http://cgi.ebay.com/Cray-J90-Supercomputer-1-CPU-2-Memory-Modules-J-90_W0QQitemZ8816248638QQihZ014QQcategoryZ162QQrdZ1QQssPageNameZWD1VQQcmdZViewItem [ebay.com]
    • by afidel ( 530433 )
      And here's [ebay.com] two whole ones currently listed for $51 (though the reserve's not met). Shipping on 3 tons has to be pretty expensive, not to mention powering them =)
  • by peter303 ( 12292 ) on Wednesday August 01, 2007 @09:04AM (#20070131)
    This has been a marketing ploy for decades: calling a supercomputer from a few years ago a cheap supercomputer. Well, it's no longer a supercomputer.

    In the early 1980s a 60 megaflop Cray-1 defined "supercomputer" and the video processing in my cell phone is faster than that.

    The new prize is a petaflop, with anything within an order of magnitude of that a true super, at least for this year.
  • I've been to several universities and I think it's safe to say all had some form of "supercomputer" for their CompSci department. A lot of it has to do with how you define a supercomputer. Granted, 15+ years ago "supercomputer" meant Cray or IBM, but that changed with Beowulf and the concept of using inexpensive parts to form clusters. MOSIX/PVM/MVP, etc. are all technologies that let people use the old labs' computers as a cluster, or get funding for some new hardware that becomes a cluster.

    It's a lot ch

    • There's a reason for that. It's because a cluster is just not the same as a shared memory supercomputer.

      It's all very well to dig up OpenMP, PVM, MOSIX and the like, but the fact remains that they're only suitable for certain classes of problem.

      Processor cycles are cheap, but that's not why your supercomputer is expensive. The reason it's expensive is because of the internal communication needed to run a tightly coupled compute job. Myrinet, Infiniband, Scali etc. provide some rather impressive intercon

    • by mgblst ( 80109 )
      It's a lot cheaper to buy 20-30 regular computers than a $1 million Cray or IBM "supercomputer".

      Or even better, set up a grid on the computers in the various pools around the department/university. Setting up Condor on these various machines will get you a very powerful grid in very little time, and the extra cost can be really small (if the computers are already left on).
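As a concrete illustration, farming work out over idle lab machines with Condor needs little more than a submit description file; the executable name and job count here are placeholders:

```
universe   = vanilla
executable = analyze.sh
arguments  = $(Process)
output     = out.$(Process).txt
error      = err.$(Process).txt
log        = jobs.log
queue 100
```

Running `condor_submit` on this queues 100 instances, and Condor scatters them across whatever pool machines happen to be idle.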
  • Alabama-Birmingham and other universities that previously couldn't afford such advanced technology are using supercomputers to cure diseases at the protein level

    Does anyone have any examples of specific diseases that Alabama-Birmingham, or any other university, have actually cured "at the protein level" using these BlueGene supercomputers?

    Not just doing research that will "eventually contribute to treatments". I want to hear which diseases have these BlueGene supercomputers being pimped in this Slashdot sto

    • by SIIHP ( 1128921 )
      No, but I have a great example of you being a bigot in my sig.

      Now insult me because it's true and you can't do anything about it.
      • All this shows is that you're obsessed with me. You're like a stalker caught in the headlights.

        What a sick freak you are. Now go say I'm calling you a sick freak because you're gay. When the reason is that you're gay for me, though I've turned you down so many times.
  • Wow those sound really cheap! I'll have to pick one up next time I'm out at the mall shopping for 24kt gold t-shirts.
  • SiCortex [sicortex.com] still seems to be the best bet to me. About $1.5 mil for the 5832-core 18KW system, $200k for the 648-core 2KW system, in base configurations according to internet hearsay. 1GFLOPS per core, and an interconnect that's incredible. They've apparently demoed the 648-core system at SC'07 now and are slated to ship "this summer".
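Taking the parent's hearsay numbers at face value, the price-per-GFLOPS arithmetic is easy to check:

```python
# Back-of-envelope $/GFLOPS for the two SiCortex configurations
# mentioned above; the prices and the ~1 GFLOPS/core figure are the
# parent post's internet hearsay, not vendor data.
for cores, price in [(5832, 1_500_000), (648, 200_000)]:
    gflops = cores * 1.0
    print(cores, "cores:", round(price / gflops), "$/GFLOPS")
```

That works out to roughly 257 $/GFLOPS for the big box and 309 $/GFLOPS for the small one.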
