Intel Supercomputing

TACC "Stampede" Supercomputer To Go Live In January 67

Nerval's Lobster writes "The Texas Advanced Computing Center (TACC) plans to go live on January 7 with "Stampede," a ten-petaflop supercomputer expected to be the most powerful Intel-based supercomputer in the world when it launches. Stampede should also rank among the top five systems on the TOP500 list when it goes live, TACC director Jay Boisseau said at the Intel Developer Forum on Sept. 11. Stampede was announced a bit more than two years ago. Specs include 272 terabytes of total memory and 14 petabytes of disk storage. TACC said the compute nodes would include "several thousand" Dell Stallion servers, each with dual 8-core Intel Xeon E5-2680 processors and 32 gigabytes of memory. In addition, Stampede will include a special pre-release version of the Intel MIC ("Knights Corner") architecture, which has been formally branded as Xeon Phi. Interestingly, the thousands of Xeon compute nodes should contribute only about 2 petaflops of that performance, with the remaining 8 petaflops coming from the Xeon Phi coprocessors, which provide highly parallel computational power for specialized workloads."
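A quick sanity check on that split: the per-node spec above (dual 8-core E5-2680) implies roughly 350 double-precision GFLOPS per node, so a few thousand such nodes land near the quoted 2 petaflops. The sketch below works that out; the 2.7 GHz clock, the 8-FLOPs-per-cycle AVX figure, and the 6,000-node count are illustrative assumptions, since the summary only says "several thousand" servers.

```python
# Back-of-the-envelope peak for the Xeon (non-Phi) side of Stampede.
# Per-node spec from the summary: dual 8-core Xeon E5-2680, 32 GB RAM.
# The clock speed, FLOPs/cycle, and node count below are assumptions
# for illustration, not TACC-published figures.

SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 8
CLOCK_HZ = 2.7e9            # assumed E5-2680 base clock
DP_FLOPS_PER_CYCLE = 8      # assumed: 256-bit AVX, 4 adds + 4 multiplies per cycle
NODES = 6_000               # hypothetical; the summary only says "several thousand"

peak_per_node = SOCKETS_PER_NODE * CORES_PER_SOCKET * CLOCK_HZ * DP_FLOPS_PER_CYCLE
aggregate_peak = peak_per_node * NODES

print(f"peak per node : {peak_per_node / 1e9:.1f} GFLOPS")    # ~345.6 GFLOPS
print(f"aggregate peak: {aggregate_peak / 1e15:.2f} PFLOPS")  # ~2.07 PFLOPS
```

Under those assumptions the Xeon side alone lands just above 2 petaflops, consistent with the summary's claim that the Xeon Phi coprocessors supply the remaining ~8 petaflops of the 10-petaflop total.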
Comments Filter:
  • by Jane Q. Public ( 1010737 ) on Thursday September 13, 2012 @02:20AM (#41320853)
    "Petaflops" is not representative of the power of modern supercomputers, many of which use massively parallel integer processing to perform their duties. Sure, you can say that simulating floating point operations with the integer units amounts to the same thing, but it actually doesn't. We have discovered that there are a great many real-world problems for which parallel integer math works just fine, or even better (more efficient) than floating point. And for those, flops is a completely meaningless metric.

    We need a standard that actually makes sense.
  • by Anonymous Coward on Thursday September 13, 2012 @04:08AM (#41321197)

    That's 2 GB per core (quick arithmetic below), a fine amount for supercomputer problems requiring compute density and bandwidth. No virtualization there, and the compilers, middleware, and programmers are probably well enough versed to know how to split the problem.
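For reference, the 2 GB/core figure follows directly from the node spec in the summary: 32 GB split across two 8-core sockets. A trivial check:

```python
# Memory per core implied by the summary's node spec.
MEM_PER_NODE_GB = 32
CORES_PER_NODE = 2 * 8     # dual 8-core E5-2680
print(MEM_PER_NODE_GB / CORES_PER_NODE, "GB per core")  # 2.0 GB per core
```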
