
Cray's CX1 Desktop Supercomputer, Now For Sale

ocularb0b writes "Cray has announced the CX1 desktop supercomputer. Cray teamed with Microsoft and Intel to build the new machine that supports up to 8 nodes, a total of 64 cores and 64Gb of memory per node. CX1 can be ordered online with starting prices of $25K, and a choice of Linux or Windows HPC. This should be a pretty big deal for smaller schools and scientists waiting in line for time on the world's big computing centers, as well as 3D and VFX shops."
This discussion has been archived. No new comments can be posted.

  • Nice Specs (Score:5, Funny)

    by mythandros ( 973986 ) on Tuesday October 21, 2008 @08:15AM (#25452305)
    Will it get Crysis up over 15 fps?
    • Re: (Score:3, Funny)

      by u38cg ( 607297 )
      Sixty replies, and still no one has speculated on the possibility of a Beowulf cluster? Changed days...
      • Yawn (Score:2, Flamebait)

        by Plekto ( 1018050 )

        Sixty replies, and still no one has speculated on the possibility of a Beowulf cluster? Changed days...

        Seriously - is this a slow news day or what? It's a blade server-in-a-box. BT, DT; nothing new (and actually quite overpriced for what it is). The OP obviously didn't understand what they were looking at, and neither did the person who okayed this as being newsworthy.

        • Overpriced? (Score:4, Funny)

          by Joce640k ( 829181 ) on Tuesday October 21, 2008 @09:54AM (#25453649) Homepage

          That thing looks mean! I'd pay 25k to be the only person in the office with one of those.

        • Well, is each node a cluster of 1Us, or SMP? If SMP, I don't know of anything quite like it. I am surprised the max memory is only 1GB per core though (hopefully they didn't really mean Gb like they wrote?).
        • Yeah, this is incredibly uninteresting. If it were a 64 processor *shared memory* system, I'd be impressed. But it's just a bunch of nodes.

          I have a hard time finding clusters interesting. Yeah, it's a bunch of computers. Big deal! What really impresses me is single nodes that have massive numbers of CPUs and memory. That takes technical ingenuity to implement. Clusters do not.

          I feel bad for the CRAY name. It's being used in such a wrong manner.

        • When you mentioned this, I started Googling around but couldn't find any sources for them. Got any favorites?

    • How the hell did you beat me? Like by nanoseconds... GET OUT OF MY BRAIN! You're gonna kill my karma rating ;P
  • Yet... (Score:2, Funny)

    by hyperz69 ( 1226464 )
    It still can't play Crysis Maxed!
  • by thered2001 ( 1257950 ) on Tuesday October 21, 2008 @08:16AM (#25452317) Journal
    35 inches deep and weighing in at 136 lbs. fully loaded. My desktop would not be able to sustain that!
  • The question is, is it more "oxy" or "moron"?

  • by Lazy Jones ( 8403 )
    Those boxes are just blade systems with up to 8 blades with up to 2 quad-core CPUs each, so a total of 64 cores per blade system. Certainly not "64 cores per node", where Cray calls a blade a "node".
    • Re: (Score:3, Informative)

      by Anonymous Coward

      "supports up to 8 nodes, a total of 64 cores and 64Gb of memory per node"

      8 [nodes] x (2 [cpu] * 4 [cores]) = 64 total cores.

      I do not see where it says 64 cores per node.

      • by glwtta ( 532858 )
        "supports up to 8 nodes, (a total of 64 cores) and (64Gb of memory per node)"

        This is why I like the "Oxford comma".
        • "64Gb of memory per node" is wrong also ...
          • Yeah, lots of systems support 64 Gb of RAM per system. You can get PC motherboards that do that for under $100 on Newegg. I'm betting they mean GB.

          • by Amouth ( 879122 )

            No it isn't; if you go through and price one out, you can also configure the options on each node you add, and 64GB was an option on the one I selected.

      • There are two relevant ways to parse that fragment. There's one where the "and" in "64 cores and 64Gb of memory per node" creates a single coordinated constituent, such that it can be paraphrased as "there are 64 cores per node and there are 64 Gb per node." There's a second, the one that I think you favor and that seems correct pragmatically, which may be paraphrased as "there are 64 total cores, and each node in the machine can have 64 Gb."

        Structural ambiguity happens all the time in natural language.

        • The nodes don't support 64GB each, so both ways to parse that fragment yield wrong information. I think it's safe to assume that the author of the summary just thought that a node was the whole system with 8 blades.
          • I was merely pointing out the language problem in the two posts, specifically the syntactic ambiguity. The parse itself yields no information about anything but the formal structure of the statement, and says nothing about the result you obtain by evaluating the propositions with respect to facts in the world.

            Cheers.

    • by evanbd ( 210358 )
      Also, I see no way to configure more than 32GB of memory per node (so 256GB max in the box).
    • by mcelrath ( 8027 )

      So what does that mean practically? This looks like a cluster-in-a-box, connected internally with gigabit Ethernet or InfiniBand. As in, I have to use MPI code to utilize all the processors. If I run "top" I will see at most 8 CPUs on the current node. So processes cannot be automatically migrated to another "node"? Do I have to ssh into the second "node" to access the 8 CPUs sitting there?

      This seems...not that clever.

      Please correct me if I have misunderstood what this thing is. And ditto from a
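
      A minimal sketch of the programming model the parent is describing, for anyone unfamiliar with it: on a cluster-in-a-box, the CPUs on other blades are reached through explicit message passing (e.g. MPI), not through threads or process migration. This is illustrative only and assumes a standard MPI toolchain (mpicc/mpirun), nothing Cray-specific.

          /* hello_cluster.c -- each MPI rank is a separate process, typically
           * running on one blade/node; memory is NOT shared between ranks. */
          #include <mpi.h>
          #include <stdio.h>

          int main(int argc, char **argv)
          {
              int rank, size, len;
              char node[MPI_MAX_PROCESSOR_NAME];

              MPI_Init(&argc, &argv);
              MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
              MPI_Comm_size(MPI_COMM_WORLD, &size);   /* processes across all blades */
              MPI_Get_processor_name(node, &len);

              /* "top" on any one blade only shows the ranks running locally. */
              printf("rank %d of %d on %s\n", rank, size, node);

              /* Nothing is shared automatically; data moves only when you send it. */
              int value = rank;
              MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* rank 0 -> everyone */

              MPI_Finalize();
              return 0;
          }

      Launched with something like "mpirun -np 64 ./hello_cluster", the 64 processes get spread across the blades; none of them migrates between nodes on its own, which is exactly the limitation the parent is pointing out.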

      • by jedidiah ( 1196 )

        It's a turnkey cluster, something that Linux vendors were doing 10 years ago.

        The "blade" version of this idea is also old news has been done by everybody.

        • by ajlitt ( 19055 )

          All of Cray's systems made in the last 15 years or so have been turnkey clusters. The reason people pay the big dollars for them over a few racks of HP 1U systems is their crazy fast interconnect and software environment.

          I'm positive I can explain this with a car analogy but I'm sure someone else will do that for me.

  • by AliasMarlowe ( 1042386 ) on Tuesday October 21, 2008 @08:21AM (#25452399) Journal
    When they package this as a notebook or netbook (at an attractive price), I'll be interested.
    • by rbanffy ( 584143 ) on Tuesday October 21, 2008 @08:48AM (#25452727) Homepage Journal

      Well... My netbook has 2 GB of memory, 160 GB of storage, gigabit networking and thinks it has two 32 bit cores. It's a veritable late 80's, early 90's supercomputer that fits in my backpack. And I bought it cheap.

      • by gstoddart ( 321705 ) on Tuesday October 21, 2008 @09:45AM (#25453455) Homepage

        Well... My netbook has 2 GB of memory, 160 GB of storage, gigabit networking and thinks it has two 32 bit cores. It's a veritable late 80's, early 90's supercomputer that fits in my backpack.

        Even in the mid 90's, GHz processors and gigs of RAM/hard disk were still largely uncommon. I think you're talking late 90's before that started to become relatively common.

        I continue to be stunned at what you can buy as an entry level box nowadays for a really cheap dollar amount. My local "white box" PC store will sell you a dual-core 5GHz (or whatever) 64-bit AMD machine for under $300 -- add a little RAM and disk space and you've got a helluva system for not very much money.

        How many home PCs nowadays have TBs of storage? I know several people who do -- I remember when home users didn't have gigabytes; terabytes would have been unimaginable.

        Cheers

  • Well, Microsoft had to do something to create demand for the next version of Windows. Not much of a market for an OS where people need to book time at their neighborhood super collider when they need to edit a document.

    Probably makes one hell of a spam node too!

    • Re:Horsepower (Score:5, Insightful)

      by Ralph Spoilsport ( 673134 ) on Tuesday October 21, 2008 @08:42AM (#25452643) Journal
      Well, with all the sloppy inefficient programming, feature bloat, and generally craptastic work that goes into the ongoing, illogical, disuseful nightmare that is MS Word, you will need one of these puppies just to run Word and Windows 7 anyway.

      Vista's MINIMUM memory requirement is 512 megs.

      Windows 2000's recommended minimum was 64 megs.

      Personally, I don't find Vista any more useful than Win2k. More stable, yes, but I don't see how upping the RAM req by an order of magnitude was required to make Win2k more stable. All it needed was better programming and better testing.

      I think what we have going now is the kind of thing that happened when gas was cheap: SUVs. When gas is expensive (viz Europe and Japan) the average car gets Really Small and Efficient. When RAM was really expensive, programming was tight and efficient. Now that RAM is measured in gigs and drives in terabytes, there is no incentive to do efficient programming or wrangle in feature creep and bloatware.

      Eventually we will hit some physical / cost limit on RAM, and then good programming will become a requirement. Of course, by then, there won't be anyone left who knows how to do that...

      RS

      • Re:Horsepower (Score:4, Insightful)

        by mollymoo ( 202721 ) on Tuesday October 21, 2008 @09:46AM (#25453491) Journal

        Well, with all the sloppy inefficient programming, feature bloat, and generally craptastic work that goes into the ongoing, illogical, disuseful, nightmare that is MS Word [...]

        Feature bloat for sure, but how do you know it's sloppily and inefficiently programmed? Have you seen the source? From what I recall of people commenting on leaked Microsoft code the quality was generally considered pretty good.

      • Yeah because RAM is so damn expensive these days
      • Re: (Score:3, Funny)

        by westlake ( 615356 )
        Vista's MINIMUM memory requirement is 512 megs.
        Windows 2000's recommended minimum was 64 megs.

        The real-world hardware requirements for a Windows OS have always been those of a mid-priced system at the time of its release.

        Tell me why an OS shouldn't be making use of resources as they become available and cheap.

        I have never understood the Geek's obsession with RAM.

        You would think he had been raised under the warm glow of a vacuum tube and threaded core memory for his Mom as a child.

        The 8 GB 64 Bit Vista Pre

  • Is there a reason Microsoft would be the preferred OS for this type of machine? I would think the type of people requiring such hardware would be quite capable of running some kind of *nix OS to perform their operations and see the advantages in doing so, like a familiar OS. I imagine MS has invested a decent amount of cash to be the logo broadcast on the Cray site; is there a reason why they want this market? This seems like it would be a very niche market for them.
    • Is there a reason Microsoft would be the preferred OS for this type of machine?

      Yes there is. Microsoft are desperate to get into the cluster computing market, and they hope this will get them a foothold.

      I don't think it will though. The simple fact is that this level of supercomputing can be achieved for less cost by buying off-the-shelf components and building your own. It won't be as pretty, but we are talking possibly ten thousand cheaper if you want to match the performance of this system. Using Windows also imposes a serious drag factor. I'm not against using Microsoft software j

    • Re: (Score:2, Informative)

      FTA: the CX1 is trying to push down into a market where newbies in life sciences, digital rendering, financial services, and other fields are playing around with supers for the first time.

      $25,000 seems like a lot of cash to fork out for something that you don't know how to use.

      It's a fact: Windows HPC Server 2008 (HPCS) combines the power of the Windows Server platform with rich, out-of-the-box functionality to help improve the productivity and reduce the complexity of your HPC environment. Win
      • Lots of cluster customers buy preconfigured clusters from someone like MicroWay. Even if an organization has the skills internally to build it, their personnel might be busy running the existing systems and networks rather than building new capacity. There certainly are places where the staff builds the cluster on site, but it's not everywhere that a cluster could be useful.

    • Windows HPC has actually gotten decent reviews, probably because their programmers didn't have to listen to marketing demanding "backwards compatibility" and "make it idiot proof". We can always hope that Windows 8 will be a port of HPC to the desktop, just like XP was NT reworked.
      • XP was Windows 2000 with a new theme and some bundled software. Even now about the only software I run into that has trouble on Windows 2000 is software that specifically checks for the OS it's running on and refuses to run on anything less than XP.

        It was NT4.0 where Microsoft really worked over NT, culling subsystems and doing things like putting GDI in the kernel to let it run games at the cost of stability.

  • by ehaggis ( 879721 ) on Tuesday October 21, 2008 @08:37AM (#25452581) Homepage Journal
    Perhaps to enhance their marketing, they can offer the computer in CrayOn colors (like Apple's iMac colors): Cray Gray, Big Iron Gray, Super Computing Gray, Gray, Gray Passion, etc.

    Remember, you can order any color - as long as it is gray.
  • Why do I get a 404 error when trying to configure my CX1? I'll just wait until Psystar comes out with a knockoff anyway.
  • Comment removed based on user account deletion
  • Yeah, so? (Score:3, Interesting)

    by LibertineR ( 591918 ) on Tuesday October 21, 2008 @08:40AM (#25452619)
    I suspect Flash player will still kick its ass.
  • by rzei ( 622725 ) on Tuesday October 21, 2008 @08:49AM (#25452747)

    Does, for example, Blender [blender.org]'s renderer scale on a system like this? Of course something like MentalRay might scale easily, but does anyone have any hands-on experience?

    One might argue that if you are throwing away $25,000 on a system like that you might as well use software that costs money, but then again, Blender has made tremendous progress these last years...

    • by ducomputergeek ( 595742 ) on Tuesday October 21, 2008 @09:54AM (#25453645)

      Blender has made a lot of progress, but it is still way behind Maya and even Lightwave. I've not used Blender in the past couple of releases, but it used to have some issues on my quad-core Power Mac with using more than 4GB of RAM. I think this has been addressed now, though. But I've never run into RAM or processor speed being the problem; it's video RAM when modeling an object. I have created scenes that will grind even a decent 256MB video card into the ground. Sure, it would be nice to render a bit faster, but for $20 - $60 a month, I do as much rendering as I want at Respower.

      But let's look at cost. For $25k I can buy about 75 commodity boxes that are dual-core with 2GB of RAM each, plus networking gear. That's about 150 cores and 150GB of RAM. Put Linux on there and you can run ScreamerNet (you get to put the LW rendering engine on 999 machines per license) or one of a number of Maya distributed rendering programs. The end result is more frames being processed at one time (for animation).

      If I went the Mac Mini route, that's about 40 Mac Minis, which is still 80 cores and 80GB of RAM total, and with ScreamerNet or Xgrid....

      Now the downsides are that 40 - 80 computers take up a lot of space and probably would eat up more in power/cooling costs. But then again, if a couple of boxes kick the bucket or hiccup, the other 35 - 75 are still processing. You only lose a percentage of total output.

      Where it may be nice is for folks who are rendering a single frame, like for a large poster. The 64 cores would make quick work of most jobs, but for animation, you're better off going with a farm.
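
      Just to make the parent's back-of-the-envelope comparison concrete, here is a tiny calculation using the figures quoted above; the prices and box counts are the poster's rough estimates, not vendor numbers.

          /* cost_per_core.c -- rough comparison built from the parent comment's figures. */
          #include <stdio.h>

          int main(void)
          {
              const double budget = 25000.0;           /* CX1 starting price */

              int cx1_cores  = 8 * 8;                  /* 8 blades x 2 quad-core CPUs */
              int farm_boxes = 75;                     /* commodity dual-core boxes */
              int farm_cores = farm_boxes * 2;
              int farm_ram   = farm_boxes * 2;         /* 2GB RAM each */

              printf("CX1:  %3d cores, ~$%.0f per core\n", cx1_cores, budget / cx1_cores);
              printf("Farm: %3d cores, %dGB RAM, ~$%.0f per core\n",
                     farm_cores, farm_ram, budget / farm_cores);
              return 0;
          }

      With those numbers the commodity farm works out to roughly $167 per core against roughly $390 per core for a fully loaded CX1; what the calculation leaves out is the space, power, and cooling overhead the parent also mentions.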

      • If you're having troubles using more than 4GB of RAM, you've probably got a 32-bit bottleneck somewhere.

    • One might argue that if you are throwing away $25,000 on a system like that you might as well use software that costs money, but then again, Blender has made tremendous progress these last years...

      I thought the future of desktop 'super'computing was going to revolve around GPUs/Cell processors, not clusters of quadcore CPUs.

      TFA mentions Nvidia's Tesla [nvidia.com] (GPU supercomputing), but Cray's configurator doesn't make any mention of it at all.

    • Re: (Score:3, Informative)

      by delt0r ( 999393 )
      I have used Blender on a 16-processor machine without problems. If you have big renders it should not be a problem, since there is not really any interprocess communication.
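
      A sketch of why that works: frame rendering is embarrassingly parallel, so each node can take a disjoint slice of frames and never needs to talk to the others. The render command below is a placeholder, not Blender's or any particular renderer's actual CLI; substitute whatever your renderer uses for single-frame batch rendering.

          /* frame_split.c -- static round-robin frame assignment across MPI ranks.
           * No communication between ranks is needed once the work is divided. */
          #include <mpi.h>
          #include <stdio.h>

          int main(int argc, char **argv)
          {
              int rank, size;
              const int first_frame = 1, last_frame = 240;   /* example shot length */

              MPI_Init(&argc, &argv);
              MPI_Comm_rank(MPI_COMM_WORLD, &rank);
              MPI_Comm_size(MPI_COMM_WORLD, &size);

              /* Rank r renders frames r+1, r+1+size, r+1+2*size, ... */
              for (int f = first_frame + rank; f <= last_frame; f += size) {
                  char cmd[256];
                  snprintf(cmd, sizeof cmd,
                           "render_one_frame --frame %d scene_file", f);  /* placeholder */
                  printf("rank %d would run: %s\n", rank, cmd);
                  /* system(cmd);  -- enable once a real renderer is installed */
              }

              MPI_Finalize();
              return 0;
          }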
  • Just a company that bought the name.

    • Cray Research merged with SGI (Silicon Graphics, Inc.) in February 1996. In August 1999, SGI created a separate Cray Research business unit to focus exclusively on the unique requirements of high-end supercomputing customers. Assets of this business unit were sold to Tera Computer Company in March 2000.

      Tera Computer Company was founded in 1987 in Washington, DC, and moved to Seattle, Washington, in 1988. Tera began software development for the Multithreaded Architecture (MTA) systems that year and hardware

  • I guess the MS execs want to avoid another "suitable for Vista" debacle :-)

    I bet it will still take a bloody week to boot..

  • I suppose this is good news for those that don't want to get their hands dirty building their own cluster. You could just network several servers together and simply install Rocks [rocksclusters.org] or UniCluster [univaud.com] or any number of other cluster packages.
    • You could just network several servers together and simply install Rocks [rocksclusters.org] or UniCluster [univaud.com] or any number of other cluster packages.

      Yet you don't mention Beowulf. Imagine that...
      • Yeah, clusters used to be called Beowulfs. Now that term is reserved for those posters on Slashdot that continue to repeat that stupid meme.
  • Power Cord (kit of 2): $110.00. Keyboard and Mouse: $188.00. Yep...
  • With this new computer, you can:

    Send email if you are not John McCain.
    Calculate the value of Pi farther than anyone really cares.
    Run Vista and Crysis - but not at the same time.
    Set up a self-aware VM cluster....
    Create your own spam botnet
    Heat your computer room
    Be the coolest guy at the next flashmob computing meet

    or... you could .... Watch pr0n

  • Obligatory (Score:2, Funny)

    by ozbon ( 99708 )

    Imagine a Beowulf cluster of these!

  • As a side note, Cray has always had a flair for designing machines that are not only powerful but also have a design that conveys this power.

    I wouldn't mind having a box like that. I'd wait 'til it runs something other than MS though.

  • According to the Cray website, each CX1 node can have at most 8GB of RAM, not 64GB as stated in the original Slashdot post. You can have at most 8 nodes/blades, so the CX1 can have a total of 64GB of RAM across all nodes, which is pretty thin on memory for a supercomputer.
  • There's no need to buy a Ferrari if you use it twice a year; just rent it. Most of the supercomputing locations where I worked are very shy about their occupancy rates. I think it is probably very low except at very active universities. All other places are wasting their money buying hardware which will become useless while it is not used. See Powua http://www.powua.com/ [powua.com] as a general implementation or PurePowua http://www.purepowua.com/ [purepowua.com] as a more specialized one, in this case XSI rendering.
