
First 96-Node Desktop Cluster Ships

Panaphonix writes "The Register reports that Orion Multisystems is shipping the first 96-node desktop cluster. 'With the new, larger system, customers get pretty much the most powerful computer around that can plug into a standard electrical socket.' According to the spec sheet, the DS-96 runs Fedora Core 2 and gets 110 GFlops sustained, 230 GFlops peak."
  • by Anonymous Coward on Wednesday May 04, 2005 @02:24AM (#12429329)
    Oh, never mind.

    I FAIL IT!

  • My uni recently got a 12-system Dell cluster that came loaded with Red Hat. Mmm; parallelizing is fun :)
  • dup (Score:2, Informative)

    by pg133 ( 307365 )
  • Question (Score:4, Interesting)

    by ErichTheWebGuy ( 745925 ) on Wednesday May 04, 2005 @02:30AM (#12429363) Homepage
    I am not familiar with the architecture of clusters, so I am a little surprised by the more than 100% difference between sustained and peak GFlops. I know what a GFlop is and all that, I just don't immediately see why there is such a huge difference.

    Can someone summarize why there is such a huge difference?
    • I don't know much about it, but my guess is that it would need time to allow some of the processors to cool down.
    • Re:Question (Score:5, Informative)

      by bobbozzo ( 622815 ) on Wednesday May 04, 2005 @02:36AM (#12429392)
      Finite bandwidth between processors makes it impossible to sustain anywhere near peak performance for most real-world applications.

      Linpack is what is usually used to measure sustained performance on HPC systems.
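As a rough illustration of the parent's point (the per-node figure here is just inferred from the quoted 230 GFlops peak, not a vendor spec), peak is simple addition across nodes, while sustained is whatever a Linpack-style run actually achieves:

```python
# Back-of-the-envelope peak vs. sustained GFlops for a 96-node cluster.
# The per-node figure is an assumption inferred from the quoted numbers.
nodes = 96
gflops_per_node_peak = 230.0 / 96        # ~2.4 GFlops per node, hypothetical

peak = nodes * gflops_per_node_peak      # theoretical peak: just add up the nodes
sustained = 110.0                        # what the benchmark run reportedly gets

efficiency = sustained / peak
print(f"peak = {peak:.0f} GFlops, sustained = {sustained:.0f} GFlops")
print(f"efficiency = {efficiency:.0%}")  # roughly 48% of peak
```

An efficiency around half of peak is unremarkable for a gigabit-Ethernet interconnect; tightly coupled machines with low-latency fabrics routinely do much better on Linpack.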
    • Re:Question (Score:4, Funny)

      by katana ( 122232 ) on Wednesday May 04, 2005 @02:36AM (#12429394) Homepage
      Sustained: Riding down the street.
      Peak: Taking it off some sweet jumps.
    • Re:Question (Score:3, Interesting)

      by Ruie ( 30480 )
      You are right - on pure number crunching with little I/O, the cluster's peak and sustained performance for the same program should be identical.

      It might be that their peak number is derived assuming code particularly favorable to the processor architecture in use - say using SSE to do the floating point math. This can easily produce the factor of 2 difference.

    • Re:Question (Score:5, Informative)

      by brsmith4 ( 567390 ) <brsmith4@gmail. c o m> on Wednesday May 04, 2005 @02:42AM (#12429416)
      The theoretical max gives a rough estimate of the raw floating-point power of all of the processors in the system. You pretty much add up the GFlops potential for each node (not exactly, but pretty much). The sustained, demonstrated GFlops of the cluster is based on the Linpack benchmark. The huge difference between the two numbers can be the result of a few factors. 1) The interconnect is GigE, and Linpack makes heavy use of message-passing comms, which are adversely affected by the latencies of GigE connections (Myrinet would have been a good choice, but I suppose it was probably impossible to squeeze that into that case). 2) Memory speed is also a factor, as pushing floating-point numbers around involves memory. This cluster isn't using anything fancy when it comes to memory, and I suspect this may be another cause.

      When they say that this line of clusters can "make or break" Orion, I am, right now, leaning toward broke. For the cost of this machine, one can get a real cluster with a lot more performance. I know this thing is nice because of the power requirements and the fact that you don't need a dedicated server room to store it, but for $100,000 you can get Microway to build you a pimptacular cluster with dual-Opteron nodes, high-speed memory and a phat interconnect with either Myrinet or InfiniBand. You will get a lot more work done for the same price.
      • Re:Question (Score:5, Insightful)

        by Daniel Phillips ( 238627 ) on Wednesday May 04, 2005 @03:07AM (#12429518)
        I know this thing is nice because of the power requirements and the fact that you don't need a dedicated server room to store it, but for $100,000 you can get Microway to build you a pimptacular cluster with dual-Opteron nodes, high-speed memory and a phat interconnect with either Myrinet or InfiniBand. You will get a lot more work done for the same price.

        You forgot a couple of things:

        * HVAC costs
        * Real estate costs

        Remember, this is a deskside cluster. Try that with your dual-Opteron cluster. And try adding up all the costs.
        • What?! (Score:5, Funny)

          by Anonymous Coward on Wednesday May 04, 2005 @03:52AM (#12429663)
          I can't hear you, the guy two cubicles over just fired up his new Opteron cluster. I'm just trying to hold on to my desk!
        • Re:Question (Score:2, Insightful)

          by Anonymous Coward
          Space (i.e. real estate) and energy (electricity) are at a premium in places such as Japan. It could be interesting to see how sales develop in such places.
        • Your $100K cluster will require a $40K cooling unit and a $40K UPS on top of the power costs.

      • Re:Question (Score:5, Insightful)

        by glesga_kiss ( 596639 ) on Wednesday May 04, 2005 @04:35AM (#12429798)
        He's also forgotten about hiring at least two extra guys to maintain the extra machines. A selling point of this box will be that "it just works". Pay for a support contract and whammo, you've got a cheap, low-maintenance cluster. For people working on top-secret stuff (who else needs clusters? ;-), hiring people is a risk and the vetting process is expensive.
        • Re:Question (Score:3, Insightful)

          by Tim C ( 15259 )
          The vetting process isn't just expensive, it's time consuming too. I don't know about the States, but here in the UK getting security cleared (to "normal" level, I forget the correct term) takes about 3 months. That's not 3 months of filling in forms, of course, but it is three months during which the person can't work on the project while you wait.
        • Re:Question (Score:3, Insightful)

          by brsmith4 ( 567390 )
          You don't need two extra guys. I manage 5 clusters myself on a regular basis. Granted, it would be a hell of a lot of work if I couldn't enlist the help of a few people every once in a while, but for the most part, it's a one-man job. As well, to the poster claiming that I didn't take into account the cost of storage ("I know this thing is nice because of the power requirements and the fact that you don't need a dedicated server room to store it"), I'm fairly sure that I roughly addressed that. BTW, we need
        • Re:Question (Score:3, Interesting)

          by femtoguy ( 751223 )
          This point is a biggy in the scientific computing world. It is easier to get capital equipment money than it is to get salary money. (This is because equipment money is overhead-free, while salary money incurs a 50% overhead rate.) We just had a donor give us many millions to buy a cluster, but he wouldn't commit to long-term money for sysadmin support. We ended up including a lot of vendor support in the bid for the contract in order to turn support money into capital equipment money. Considering tha
      • Give or take a few megabytes of cache per node, it wouldn't be too hard for ATLAS to do something about the weird choice of interconnect. For some problem sizes. But I guesstimate that these nodes don't have 4MB of level 2 cache.
        I for one wouldn't buy one of Orion's desktop clusters. I'd say almost all research on the latency effects of running GbE (not really that interesting) could be done on regular workstations running LAM (the desktop cluster in question also uses LAM).
        I'm not against Linux clusters. I'm using t
        • I'm not against Linux clusters. I'm using those for my studies. I'm just sceptical of using GbE as interconnect in anything that costs more than $10,000.

          It uses 10GigE on the backplane. For good measure it has something like 80 GB of disk on each node.
          • I'm not against Linux clusters. I'm using those for my studies. I'm just sceptical of using GbE as interconnect in anything that costs more than $10,000.

            It uses 10GigE on the backplane. For good measure it has something like 80 GB of disk on each node.

            Even with "10GigE" I wouldn't expect an MPI barrier to take less than 10 us. Ethernet just isn't designed for low-latency applications. Most parallel applications send lots of messages (Like Cannon's algorithm for parallel matrix multiplication sending
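For readers who haven't met it, Cannon's algorithm is a standard example of why latency matters: it tiles an n×n matrix multiply across a q×q grid of nodes, and each of its q rounds ends with every node passing blocks to a neighbour. The block movement can be sketched serially in NumPy (no MPI, purely illustrative):

```python
import numpy as np

def cannon_matmul(A, B, q):
    """Multiply A @ B by simulating Cannon's algorithm on a q x q node grid."""
    n = A.shape[0]
    b = n // q  # block size; assumes q divides n evenly
    Ab = [[A[i*b:(i+1)*b, j*b:(j+1)*b].copy() for j in range(q)] for i in range(q)]
    Bb = [[B[i*b:(i+1)*b, j*b:(j+1)*b].copy() for j in range(q)] for i in range(q)]
    Cb = [[np.zeros((b, b)) for _ in range(q)] for _ in range(q)]

    # Initial alignment: shift row i of A left by i, column j of B up by j.
    for i in range(q):
        Ab[i] = Ab[i][i:] + Ab[i][:i]
    for j in range(q):
        col = [Bb[i][j] for i in range(q)]
        col = col[j:] + col[:j]
        for i in range(q):
            Bb[i][j] = col[i]

    # q rounds: local multiply-accumulate, then shift A left / B up by one.
    # On a real cluster every shift is a message, so each round pays latency.
    for _ in range(q):
        for i in range(q):
            for j in range(q):
                Cb[i][j] += Ab[i][j] @ Bb[i][j]
        for i in range(q):
            Ab[i] = Ab[i][1:] + Ab[i][:1]
        for j in range(q):
            col = [Bb[i][j] for i in range(q)]
            col = col[1:] + col[:1]
            for i in range(q):
                Bb[i][j] = col[i]
    return np.block(Cb)

rng = np.random.default_rng(0)
A = rng.random((6, 6))
B = rng.random((6, 6))
print(np.allclose(cannon_matmul(A, B, 3), A @ B))
```

With q communication rounds per multiply, even a 10 µs shift or barrier adds up quickly once the per-node blocks get small, which is the parent's complaint about Ethernet.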

            • Oh, and about 80GB of storage per node: how long would it take to do a simple checksum on that?

              Huh?
            • Sorry if I ruin someone's dream. It might be that this desktop cluster really has a valid application, but instead of seeing it, I see someone trying to ship oil across continents by transporting oil-filled balloons in VW Beetles...

              Don't worry, you did not ruin anybody's dream. You bleated about 10 usec network latency and otherwise don't seem to have a clue.
    • I'm surprised by the more-than-200% difference between the price of the cluster and the equivalent in separate desktop computers.
  • by crottsma ( 859162 ) on Wednesday May 04, 2005 @02:45AM (#12429431)
    With power requirements quintupling those of a standard desktop computer, I'd probably have to use it at my local coffee shop, or only turn it on briefly to scare away songbirds.
  • .. Computing?

    I mean, does Blender run on it at least? Can I do anything interesting from an 'immediate-personal' perspective with 96 nodes, and I don't just mean run Quake, or fire up "make -j 96" and such things..

    What sort of interesting modelling software is around? Could I use it to design stuff on a personal, non-hard-core science perspective? What are the practical uses for personal cluster computing?
  • Needs silencing! (Score:2, Interesting)

    by Slowleggs ( 604433 )
    From TFA: "Sound power 55 bels"

    550 dB of noise? Perhaps the producers should look into Metal cooling [slashdot.org]? :)

    ...and/or put the box in another room.
    • Your observation is correct; that obviously must be a mistake. Among sounds heard by humans, the firing of military rifles can reach 150 dB. Sounds that powerful can apparently break bones in the ear, and so must be near the loudest sounds that we can hear. 550 dB would be just ever so slightly over the line.
      • Re:Needs silencing! (Score:3, Informative)

        by hcdejong ( 561314 )
        The loudest undistorted sounds possible (at 1 bar ambient air pressure) are about 194 dB. At that level, the pressure minimum of the sound wave is 0 bar (i.e. vacuum).
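That ceiling is easy to sanity-check: sound pressure level is 20·log10(p/p0) against the standard 20 µPa reference, and a wave whose trough just touches vacuum at 1 atm has a peak amplitude of one atmosphere (these constants are textbook values, not from the article):

```python
import math

P_ATM = 101_325.0  # 1 atm ambient pressure, in pascals
P_REF = 20e-6      # standard SPL reference pressure (20 micropascals)

# Largest undistorted peak amplitude at 1 atm: the trough can't go below vacuum.
spl_peak = 20 * math.log10(P_ATM / P_REF)
print(f"max SPL at 1 atm ambient: {spl_peak:.0f} dB (peak)")  # about 194 dB
```

The 194 dB figure is for peak amplitude; quoting RMS for a sine wave shaves off about 3 dB, so slightly lower figures also get quoted.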
    • Re:Needs silencing! (Score:3, Informative)

      by gnuman99 ( 746007 )
      From TFM:
      Sound pressure level [wikipedia.org] 50dBA at operator position
      Sound power [wikipedia.org] 55 bels

      There is a difference.

  • FPS (Score:3, Funny)

    by strider44 ( 650833 ) on Wednesday May 04, 2005 @03:05AM (#12429507)
    How many FPS can you get on Doom 3? I've got to plan my future purchasing decisions.
  • Inefficient ? (Score:2, Insightful)

    by zymano ( 581466 )
    Merrimac 2 terraflop workstation for $20,000 [weblogs.com]

    General CPUs just don't have the punch that special-purpose or FPGA processors do.
    • Re:Inefficient ? (Score:2, Insightful)

      by Slashcrap ( 869349 )
      General CPUs just don't have the punch that special-purpose or FPGA processors do.

      And FPGAs or special-purpose CPUs don't have the generality that normal CPUs have. There's also the small point about the Merrimac system not actually existing.

      PS. Thanks for linking to Roland Piquepaille's fucking blog. He doesn't get nearly enough links on Slashdot in my opinion.
    • Wha...? (Score:3, Insightful)

      by raehl ( 609729 )
      General purpose processors have *WAY* more punch. Especially punch per dollar, as FPGAs are fairly expensive.

      They're just general purpose, whether they be scalar (CPU) or vector (GPU), so an FPGA that is specifically optimized for a specific problem will kick the general purpose processor's butt - in that specific problem.

      But try running Quake III on an FPGA - it will be killed by the CPU in processing and killed by the GPU in graphics. Assuming you can even cram everything you need to be a CPU or GPU i
    • Re:Inefficient ? (Score:3, Insightful)

      by timeOday ( 582209 )
      That's not even a product, it's just a schematic. Talking about building a computer that only has $20K of parts, and running an actual business by selling those computers for $20K each, are two very different things.
  • picture here [orionmulti.com]
  • Suspicious (Score:3, Insightful)

    by cyberfunk2 ( 656339 ) on Wednesday May 04, 2005 @04:41AM (#12429816)
    I gotta say.. I'm a tad suspicious here.. there seems to be a lot of marketing flash (no pun intended) and scarce detail.

    What kind of CPUs are we talking about? I'm assuming we're talking non-shared memory here, and therefore nodes that "retain" their own identities. But then isn't each CPU running its own kernel? That is.. this ISN'T SMP, right?

    I think the details could be a lot clearer here. The lack of tech specs or simple explanations, and the excessive use of business-speak ("efficiency", "unprecedented power", etc.) makes me a tad nervous.
    • by Anonymous Coward
      All the current Orion systems, including this one, use Transmeta Efficeon CPUs. Not surprising since Orion was founded by a Transmeta co-founder.

      Actually, Efficeon performance is quite good on the type of repetitive loop-based code this system is intended for. It may not surpass an equivalent Athlon 64 or P4 based system, but in terms of bang per watt, it's not bad.
  • When they get these down to say 30-40k, and maybe beef up the processing power of the chips another generation or two (all entirely within the realm of possibility), I could see some of the big animation studios slapping them in deskside and clearing out the big renderfarm racks.

    - Greg

  • http://en.wikipedia.org/wiki/Blue_Gene [wikipedia.org]

    Now I just need a desk big enough, and a power lead heavy enough to let me class this as a desktop machine.
    • Yeah, until the Department of Homeland Security wants to know why your power consumption has jumped and why you have a 96-node machine running. Could you imagine having to justify one of these things for personal use to a federal agent? Somehow explaining that you want to compile KDE really fast, or play Tux Racer at full frame rate, might not fly.
  • Too expensive (Score:4, Interesting)

    by Buzh ( 74397 ) on Wednesday May 04, 2005 @06:58AM (#12430264) Homepage
    Although having 96 nodes in a single box makes it quite cute, from what I can interpret from the specs, you would get more bang for your $100K by getting what the beowulf crowd likes to call MMCOTS (mass-market common-off-the-shelf, i.e. mass-produced computers from Dell or the like), hooked together with a specialty high-bandwidth, low-latency interconnect like InfiniBand, Myrinet or SCI. Running a free beowulf cluster OS like, for instance, ROCKS [rocksclusters.org] would mean that a normal Linux admin could maintain it quite effectively.

    I expect this thing to be marketed towards scientists in small or medium businesses that aren't employing many/any IT staff, who use commercial computer models to do things like theoretical chemistry (Gaussian, ADF etc), bioinformatics (Phase, BLAST, Paralign etc), fluid dynamics, statistics, crypto, you name it. I don't expect to see any of these types of systems used in normal supercomputing sites, where people write their own (parallel) code and skilled staff maintain the cluster.

    • This system is not designed to deliver the most FLOPS per dollar. It aims to address the heat dissipation, space, and noise concerns that arise using lots of MMCOTS boxes. Factor in the Watts and it starts to look really good.

      My big concern is that Transmeta recently announced that it was getting out of the chip making business. Unless another company licenses Transmeta's silicon design, Orion is going to run into serious supply-line shortages.
    • Their choice of video really gets me.
      "ATI Mobility(TM) Radeon(TM) 9000--64MB integrated DDR" What??? This is not a workstation-class card. Why not an nVidia Quadro card? I am sure heat and power are an issue, but if it's supposed to be an all-in-one desktop machine it seems like a poor choice.
  • Altitude (Score:4, Interesting)

    by hey ( 83763 ) on Wednesday May 04, 2005 @08:04AM (#12430485) Journal
    It has Altitude restrictions:

    Altitude -300 meters to +3000 meters
    -1000 feet to +10,000 feet

    I've never seen that before.
    So you can't use it on a plane.
    • Fairly standard disclaimer text there - I've seen that text in everything from HD to stereo equipment manuals.

    • Re:Altitude (Score:3, Informative)

      by omega9 ( 138280 )
      From the site itself [orionmulti.com]:

      They're fully scalable so you can add performance as your needs expand. It can be used on site: in the office, the laboratory, on a boat, or even aloft in a plane.

      Ain't that sumpin.
    • Re:Altitude (Score:3, Funny)

      by Locke2005 ( 849178 )
      So you can't use it on a plane.
      Heck, it probably wouldn't fit on the seat-back "table" anyway...
    • Re:Altitude (Score:3, Informative)

      by flaming-opus ( 8186 )
      Hard drive heads are tiny airfoils. They depend on a certain barometric pressure to keep from crashing into the platter. I had a device that measured weather characteristics in the mountains of Antarctica. We had to use magneto-optical drives, because the heads on a disk drive kept crashing. That was many years ago though. I'd think it has improved since then. Maybe not.
  • And? (Score:4, Interesting)

    by daVinci1980 ( 73174 ) on Wednesday May 04, 2005 @08:27AM (#12430584) Homepage
    Who cares? Modern graphics cards are capable of (sorry, it's a PDF, it was all I could find) 40 GFLOPS [nvidia.com]. That's not even in SLI mode, which actually does push you to about 98% over a single card (in terms of raw processing power).

    Why would you buy a 96-CPU setup when you could buy a 6-GPU setup and match the same theoretical performance? (All jokes aside about the costs being roughly equivalent, they're nowhere near the same.) 6 top of the line 6800s would run you about $3600. Even if you added top of the line parts for the rest of the system, you'd be looking at about $1600 per system. Add $0 for the linux distribution to drive the whole thing, and you're at a grand total of $10K.

    I'm not impressed.
    • Re:And? (Score:5, Interesting)

      by sjwaste ( 780063 ) on Wednesday May 04, 2005 @08:43AM (#12430682)
      Serious question here. Does production software exist to drive arbitrary computation across a GPU? I've seen articles about software on its way, etc. Does it exist, either as an application or integrated into some OS? Man, if I could push some of my statistical computing off to the GPU...
      • Right now, I do not believe there are any general-purpose software packages available. That being said, the article I referenced is a discussion on exactly that topic.

        Which means that right now, you have to do a little bit of leg work yourself in terms of getting the data to and from the GPU (in textures). I can find out if there are any toolkits later today and let you know, though.
      • Re:And? (Score:2, Informative)

        by ravenwing_np ( 22379 )
        Look for GPGPU [gpgpu.org]. They are trying to use the graphics processor for general-purpose operations. It runs as any Cg script would run. Just realize that it is focused more on parallel math operations than procedural ones. Please note that I have nothing to do with this project and haven't tried it yet.
      • Re:And? (Score:3, Informative)

        by timeOday ( 582209 )

        Serious question here. Does production software exist to drive arbitrary computation across a GPU?

        No, because graphics hardware cannot do arbitrary computation. At least not at anything like the FLOPS it achieves doing graphics.

        I've attended a workshop on using graphics hardware for accelerating other computation and it's mostly hype IMHO. It amounts to rendering images of your problem, then doing feature extraction on the image. So the *effective* FLOPs, i.e. the amount dedicated to *your* task ra

  • by Danzigism ( 881294 ) on Wednesday May 04, 2005 @08:27AM (#12430589)
    Yea, I agree that Fedora was definitely an odd choice.. Well, I can trust that the kind of person who can build a 96-node supercomputer makes very educated decisions.. I'm glad to hear that companies are still involved in making these clusters. It's a great way to build something powerful for a cheap price without having to lean towards Crays etc.. I worked for a company called Patmos International for the longest time, and we never shipped a single cluster.. We had tons of investors that seemed interested of course, but after 2 years of continuous development and no sales, the investors simply stopped investing, therefore my job was done for.. We advertised the "$99,000" supercomputer that would supposedly be in "everyone's" garage one day.. Of course that was just a saying because of how cheap we could offer a 32-node system with all of our custom applications and Linux operating system. Pretty sweet setup.. it sucks to see the big guys go down sometimes.. to this day, it was the best job I've ever had.. You can still read about Patmos if you search for James Gatzka on Google.. They tried their hardest to bring some damn technology and culture to the podunk town of the Eastern Shore of Maryland..
  • What would be the target market for this kind of thing? Genomics and biochemistry? Engineering workstations for the department? Rendering? How about to run a company's desktops? Seems like it might be useful for CAVE-like environments and videoconferencing throughout a distributed office.. also maybe for a service provider offering virtual linux pcs?
  • by mindpixel ( 154865 ) on Wednesday May 04, 2005 @09:31AM (#12431049) Homepage Journal
    One of my best friends just bought a tiny little house in downtown Toronto for $377,000. I left Toronto last November and moved to Santiago, Chile, where I live downtown; my rent is $260/month for quite a nice, though small, place in an excellent area.

    So, if I spend $100K on the Orion DS-96, that leaves me more than enough for a 250 channel geodesic EEG system [egi.com] which would allow me to compute self-organizing maps of the human mind based on flashing the 1.6 million mindpixels [mindpixel.com] I have collected over the past five years to various volunteers [english teachers], AND still have 56.73 years worth of rent left!

    Too bad no bank will loan me $377,000 for a computer and an EEG system and the time to play with it...
  • by 3Suns ( 250606 ) on Wednesday May 04, 2005 @10:49AM (#12431733) Homepage
    Warning: mysql_pconnect(): Too many connections in /home/www/php/functions/executequery.php on line 21

    How blissfully ironic!
