Intel Supercomputing Hardware Linux

Cray Unveils XC30 Supercomputer

Nerval's Lobster writes "Cray has unveiled the XC30, a supercomputer capable of handling high-performance computing workloads of more than 100 petaflops. Originally code-named 'Cascade,' the system relies on Intel Xeon processors and the Aries interconnect chipset, paired with Cray's integrated software environment. Cray touts the XC30's ability to utilize a wide variety of processor types; future versions of the platform will apparently feature Intel Xeon Phi coprocessors and Nvidia Tesla GPUs based on the Kepler GPU computing architecture. Cray leveraged its work with DARPA's High Productivity Computing Systems program to design and build the XC30. Cray's XC30 isn't the only supercomputer aiming for the 100-petaflop crown: China's Guangzhou Supercomputing Center recently announced the development of Tianhe-2, a supercomputer theoretically capable of 100 petaflops, but that system isn't due to launch until 2015. Cray also faces significant competition among supercomputer makers: it built only 5.4 percent of the systems on the Top500 list, compared with 42.6 percent for IBM and 27.6 percent for Hewlett-Packard."

  • Re:details, details (Score:3, Informative)

    by whistl ( 234824 ) on Thursday November 08, 2012 @05:49PM (#41924781)

    The Cray website (http://www.cray.com/Products/XC/XC.aspx) has more details: 3,072 cores (66 TFLOPS) per cabinet initially, and the pictures make it look like they have 16 cabinets, for 49,152 cores total (see the arithmetic sketch below). Amazing.
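
    A minimal sketch of that back-of-the-envelope arithmetic, assuming the figures quoted above (3,072 cores and 66 TFLOPS per cabinet, 16 cabinets estimated from the photos); the per-core number is derived here, not taken from Cray:

        # Scale the quoted per-cabinet figures up to the estimated cabinet count.
        CORES_PER_CABINET = 3072    # from the Cray product page, per the comment
        TFLOPS_PER_CABINET = 66     # peak TFLOPS per cabinet, per the comment
        CABINETS = 16               # estimated from the product photos

        total_cores = CORES_PER_CABINET * CABINETS
        total_tflops = TFLOPS_PER_CABINET * CABINETS
        gflops_per_core = TFLOPS_PER_CABINET * 1000 / CORES_PER_CABINET

        print(f"Total cores: {total_cores:,}")          # 49,152
        print(f"Peak:        {total_tflops:,} TFLOPS")  # 1,056 (~1 petaflop)
        print(f"Per core:    {gflops_per_core:.1f} GFLOPS")  # ~21.5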

  • by Anonymous Coward on Thursday November 08, 2012 @06:23PM (#41925157)
    It is, however, the same group of very clever engineers.
  • by Anonymous Coward on Thursday November 08, 2012 @07:31PM (#41926055)

    Well, with supercomputers the benchmark for the TOP500 is LINPACK, which reports a measured rate of double-precision floating-point operations. The theoretical peak performance is GHz × cores × floating-point ops/cycle = GFLOPS. A regular CPU-only supercomputer should never fall below 80% of its theoretical peak; if it does, something is wrong. A well-tuned CPU cluster can get over 95% of theoretical peak, a well-tuned GPU cluster around 60% (there's a worked example after this comment).
    Staying at a small scale (12 TFLOPS), a real GPU cluster would need around 30 cards these days. That is just scaling up NUDT's 3 kW cluster from this year's ISC student cluster competition, which got 2.6 TFLOPS with 9 cards. Also in that race was a pure CPU cluster that got 2.4 TFLOPS.

    However, the LINPACK score does not actually mean that your application will run well. Maybe your application has too much communication, or the wrong kind of algorithms, to run efficiently on a GPU. Or maybe your application is really well suited for GPUs; that is usually the case for anything with large matrices and little communication, just like LINPACK. Because GPUs have such a tight focus on rendering images, i.e. doing many 4D vector ops, they are really bad at anything that is not vector parallel.
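
    To make that formula concrete, here is a minimal sketch assuming a hypothetical machine (the 2.6 GHz clock, 49,152-core count, 8 FLOPs/cycle, and measured LINPACK number are illustrative values, not figures for the XC30 or any real system):

        # Theoretical peak: GHz * cores * floating-point ops per cycle, in GFLOPS.
        def peak_gflops(ghz: float, cores: int, flops_per_cycle: int) -> float:
            return ghz * cores * flops_per_cycle

        # LINPACK efficiency: measured Rmax divided by theoretical peak Rpeak.
        def linpack_efficiency(measured_gflops: float, rpeak_gflops: float) -> float:
            return measured_gflops / rpeak_gflops

        # Hypothetical example: 2.6 GHz, 49,152 cores, 8 double-precision FLOPs/cycle.
        rpeak = peak_gflops(2.6, 49_152, 8)   # ~1,022,362 GFLOPS (~1 PFLOPS)
        rmax = 870_000.0                      # made-up measured LINPACK GFLOPS
        print(f"Rpeak: {rpeak / 1e6:.2f} PFLOPS")
        print(f"Efficiency: {linpack_efficiency(rmax, rpeak):.0%}")  # ~85%

    By the comment's rule of thumb, an efficiency like that is healthy for a CPU-only cluster; anything much below 80% would point to a tuning or interconnect problem.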
