
Cray's CX1 Desktop Supercomputer, Now For Sale

ocularb0b writes "Cray has announced the CX1 desktop supercomputer. Cray teamed with Microsoft and Intel to build the new machine that supports up to 8 nodes, a total of 64 cores and 64GB of memory per node. CX1 can be ordered online with starting prices of $25K, and a choice of Linux or Windows HPC. This should be a pretty big deal for smaller schools and scientists waiting in line for time on the world's big computing centers, as well as 3D and VFX shops."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Anonymous Coward on Tuesday October 21, 2008 @09:24AM (#25452435)

    "supports up to 8 nodes, a total of 64 cores and 64GB of memory per node"

    8 [nodes] x (2 [cpu] * 4 [cores]) = 64 total cores.

    I do not see where it says 64 cores per node.

  • Re:Bit steep (Score:3, Informative)

    by EvilRyry ( 1025309 ) on Tuesday October 21, 2008 @09:28AM (#25452469) Journal

    Or you could just buy the Cray for the same price and forget about the extra overhead of 8 separate boxes.

    BTW, you can also order these from the factory with RHEL.

  • Re:Gaming? (Score:4, Informative)

    by evanbd ( 210358 ) on Tuesday October 21, 2008 @09:43AM (#25452657)

    A number of modern games can make use of 2+ cores, but 8 isn't going to happen with any efficiency. Note also that this is a cluster in a single box -- those 8 nodes are separate computers on a very fast local network. That means a different OS image per node, and each process on its own node. For lots of supercomputing applications, this is the norm -- each node does its share of the work and they talk over the network. But no games support this; they all expect to run on a single computer.

    Also, for gaming performance, I imagine you'd want dual graphics cards -- which this box doesn't support. (It does include "visualization node" options, which have a single Quadro FX card each.)

    Still, for something like a desktop render farm, this might make sense -- except I imagine the customers for such would be more interested in options with better price/performance.
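    The "each node does its share and they talk over the network" model described above can be sketched in miniature with Python's multiprocessing module standing in for MPI across real nodes (the work function here is a made-up stand-in, not anything the CX1 ships with):

    ```python
    from multiprocessing import Pool

    def node_work(chunk):
        # Each "node" independently processes its own slice of the problem.
        return sum(x * x for x in chunk)

    def run_cluster(data, nodes=8):
        # Scatter: split the input into one chunk per node.
        chunks = [data[i::nodes] for i in range(nodes)]
        # Each node computes on its chunk with no shared state.
        with Pool(nodes) as pool:
            partials = pool.map(node_work, chunks)
        # Gather/reduce: combine the partial results.
        return sum(partials)

    if __name__ == "__main__":
        data = list(range(1000))
        print(run_cluster(data))  # matches the serial sum of squares
    ```

    In a real cluster the "scatter" and "gather" steps cross the interconnect (e.g. via MPI collectives) rather than staying on one box, but the shape of the computation is the same.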

  • by malignant_minded ( 884324 ) on Tuesday October 21, 2008 @09:45AM (#25452667)
    FTA: the CX1 "is trying to push down into a market where newbies in life sciences, digital rendering, financial services, and other fields are playing around with supers for the first time."

    $25,000 seems like a lot of cash to fork over for something you don't know how to use.

    It's a fact: Windows HPC Server 2008 (HPCS) combines the power of the Windows Server platform with rich, out-of-the-box functionality to help improve the productivity and reduce the complexity of your HPC environment. Windows HPC Server 2008 can efficiently scale to thousands of processing cores and provides a comprehensive set of deployment, administration, and monitoring tools that are easy to deploy, manage, and integrate with your existing infrastructure. http://www.microsoft.com/hpc/en/us/default.aspx [microsoft.com]

    So this is meant for people who need a rendering farm or some calculations performed but have no idea how to build a cluster. Again, how big is this market?
  • Re:Bit steep (Score:1, Informative)

    by Anonymous Coward on Tuesday October 21, 2008 @09:48AM (#25452717)
    The Cray is just a blade chassis, so for $25k you could buy a chassis, 8 nodes, and your interconnect. Perhaps I should clarify: for LESS THAN $25k you can buy a rack of 8 nodes and the interconnect.

    BTW, you can also order these from the factory with RHEL.

    Yeeees. Hence my comment.

  • by Peter Simpson ( 112887 ) on Tuesday October 21, 2008 @09:51AM (#25452771)

    Just a company that bought the name.

  • From their website (Score:2, Informative)

    by Peter Simpson ( 112887 ) on Tuesday October 21, 2008 @09:55AM (#25452821)

    Cray Research merged with SGI (Silicon Graphics, Inc.) in February 1996. In August 1999, SGI created a separate Cray Research business unit to focus exclusively on the unique requirements of high-end supercomputing customers. Assets of this business unit were sold to Tera Computer Company in March 2000.

    Tera Computer Company was founded in 1987 in Washington, DC, and moved to Seattle, Washington, in 1988. Tera began software development for the Multithreaded Architecture (MTA) systems that year and hardware design commenced in 1991. The Cray MTA-2 system provides scalable shared memory, in which every processor has equal access to every memory location, greatly simplifying programming because it eliminates concerns about the layout of memory.

    The company completed its initial public offering in 1995 (TERA on the NASDAQ stock exchange), and soon after received its first order for the MTA from the San Diego Supercomputer Center. The multiprocessor system was accepted by the center in 1998, and has since been upgraded to eight processors.

    Upon the merger with the Cray Research division of SGI in 2000, the company was renamed Cray Inc. and the ticker symbol was changed to CRAY.

  • Re:Gaming? (Score:3, Informative)

    by X0563511 ( 793323 ) on Tuesday October 21, 2008 @09:56AM (#25452839) Homepage Journal

    A modification to an engine (this has already been done to quake 3 and 4) to use raytracing, would lend itself well to this hardware. Raytracing is very SMP-friendly.
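    Raytracing parallelizes well because each pixel can be traced independently of every other pixel. A toy sketch of that per-pixel independence (not a real renderer; the shading function is invented for illustration):

    ```python
    from multiprocessing import Pool

    WIDTH, HEIGHT = 64, 48

    def trace_pixel(xy):
        # Stand-in for a real ray trace: each pixel depends only on its own
        # ray, so there is no shared mutable state between pixels.
        x, y = xy
        return (x * 31 + y * 17) % 256  # fake brightness value

    def render():
        pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]
        # Pixels can be handed to any number of cores in any order,
        # which is what makes raytracing so SMP-friendly.
        with Pool() as pool:
            return pool.map(trace_pixel, pixels)

    if __name__ == "__main__":
        image = render()
        print(len(image))  # WIDTH * HEIGHT pixel values
    ```

    The same structure scales from one multicore box to a cluster: hand each node a tile of the screen and stitch the results together.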

  • by spectrokid ( 660550 ) on Tuesday October 21, 2008 @10:01AM (#25452899) Homepage
    That is 62 kg.
  • Re:Gaming? (Score:3, Informative)

    by mpsmps ( 178373 ) on Tuesday October 21, 2008 @10:04AM (#25452947)

    Not even close. The heavy lifting for 3D games is done on the GPU, and I'm not aware of any games (except perhaps games that utilize multiple monitors, like flight simulators) that can make use of more than one GPU.

    So a single game could potentially drive many monitors, but not do more visually on a single display.

    Actually, you can configure the Cray CX-1 with "visualization nodes" [cray.com] that contain GPUs, not just CPUs.

  • Power Cord (kit of 2): $110.00
    Keyboard and Mouse: $188.00

    Yep...
  • for Britain (Score:5, Informative)

    by TubeSteak ( 669689 ) on Tuesday October 21, 2008 @10:39AM (#25453387) Journal

    That's 9 stone 8 lbs

  • by branchingfactor ( 650391 ) on Tuesday October 21, 2008 @10:55AM (#25453657)
    According to the Cray website, each CX1 node can have at most 8GB of RAM, not 64GB as stated in the original Slashdot post. You can have at most 8 nodes/blades, so the CX1 tops out at 64GB of RAM across all nodes, which is pretty thin on memory for a supercomputer.
  • by delt0r ( 999393 ) on Tuesday October 21, 2008 @11:14AM (#25454053)
    I have used Blender on a 16-processor machine without problems. If you have big renders, it should not be a problem, since there is not really any interprocess communication.
  • Re:for Britain (Score:5, Informative)

    by frieko ( 855745 ) on Tuesday October 21, 2008 @11:41AM (#25454519)
    (12.22 in) * (17.5 in) * (35.5 in) = 0.521657047 hogsheads
  • Re:for Britain (Score:2, Informative)

    by Anonymous Coward on Tuesday October 21, 2008 @12:21PM (#25455123)

    That's 9 stone 8 lbs

    I believe that's 9 stone 10 lbs ;)

  • by brsmith4 ( 567390 ) <.brsmith4. .at. .gmail.com.> on Tuesday October 21, 2008 @06:57PM (#25461395)
    There are projects that provide a unified process space and inter-node IPC, like Mosix, bproc, etc. Generally, these aren't used much in HPC. Having a "bunch of individual machines networked together" works pretty well when you also consider that the network might be 20Gb/s 4x DDR InfiniBand sending frames from point to point at ~2us. I'm just saying... Chances are, the GP built an HPC cluster and used a typical SPMD approach with something like MPI or PVM for communications and a centralized job manager/scheduler for executing his jobs and those of others he was working with.

    I'm also not sure what you mean by "useful software view". There are lots of tools like Ganglia, or even Nagios with PNP, that are good for keeping track of utilization, memory usage, etc. over a large number of machines.

    In HPC, there is very little need to see a cluster of machines as one coherent machine, except to introduce further overhead in coordinating threads across a huge cluster. A simple (yet sophisticated) job scheduler handles this just fine: a light-weight daemon spawns tasks on your compute nodes when they get the call from a central scheduler. The daemons monitor some performance attributes and aggregate them back to the central scheduler. This keeps things simple and the overhead low, so that CPUs can be put to work crunching numbers instead of handling mundane OS tasks.
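    The scheduler-plus-lightweight-daemon pattern described above can be sketched with a shared task queue. This is a toy stand-in for a real batch scheduler (PBS, Slurm, etc.); the function names and the squaring "job" are invented for illustration:

    ```python
    from multiprocessing import Process, Queue

    def compute_daemon(tasks, results):
        # Light-weight worker: pull a job from the central scheduler, run it,
        # report the result back. Workers never coordinate with each other.
        while True:
            job = tasks.get()
            if job is None:          # sentinel: scheduler has no more work
                break
            job_id, n = job
            results.put((job_id, n * n))  # stand-in for a real compute job

    def schedule(jobs, nodes=4):
        tasks, results = Queue(), Queue()
        workers = [Process(target=compute_daemon, args=(tasks, results))
                   for _ in range(nodes)]
        for w in workers:
            w.start()
        for job in jobs:             # central scheduler hands out work
            tasks.put(job)
        for _ in workers:
            tasks.put(None)          # one shutdown sentinel per worker
        out = dict(results.get() for _ in jobs)  # aggregate results centrally
        for w in workers:
            w.join()
        return out

    if __name__ == "__main__":
        print(schedule([(i, i) for i in range(10)]))
    ```

    Everything flows through the central scheduler's queues; the compute processes stay dumb and cheap, which is the low-overhead property the parent comment is describing.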
