
10-Petaflops Supercomputer Being Built For Open Science Community

An anonymous reader tips news that Dell, Intel, and the Texas Advanced Computing Center will be working together to build "Stampede," a supercomputer project aiming for peak performance of 10 petaflops. The National Science Foundation is providing $27.5 million in initial funding, and it's hoped that Stampede will be "a model for supporting petascale simulation-based science and data-driven science." From the announcement: "When completed, Stampede will comprise several thousand Dell 'Zeus' servers with each server having dual 8-core processors from the forthcoming Intel Xeon Processor E5 Family (formerly codenamed "Sandy Bridge-EP") and each server with 32 gigabytes of memory. ... [It also incorporates Intel 'Many Integrated Core' co-processors,] designed to process highly parallel workloads and provide the benefits of using the most popular x86 instruction set. This will greatly simplify the task of porting and optimizing applications on Stampede to utilize the performance of both the Intel Xeon processors and Intel MIC co-processors. ... Altogether, Stampede will have a peak performance of 10 petaflops, 272 terabytes of total memory, and 14 petabytes of disk storage."
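
As a quick sanity check of the announced figures (the node count falls out of the memory numbers; the 2.7 GHz clock is purely an assumed value for illustration, since Intel had not published E5 clock speeds at the time):

    #include <stdio.h>

    int main(void)
    {
        /* Figures from the announcement: 32 GB per server, 272 TB total
           memory, 10 PF peak.  The 2.7 GHz clock is an assumption. */
        double nodes  = 272e3 / 32.0;                    /* ~8,500 servers */
        double cpu_pf = nodes * 16 * 8 * 2.7e9 / 1e15;   /* 16 cores/node,
                                                            8 DP flops/cycle (AVX) */

        printf("implied node count       : %.0f\n", nodes);
        printf("CPU-only peak            : %.1f PF\n", cpu_pf);
        printf("left for MIC coprocessors: %.1f PF\n", 10.0 - cpu_pf);
        return 0;
    }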
  • by LordAzuzu ( 1701760 ) on Friday September 23, 2011 @01:48PM (#37494100)

    Not a supercomputer

    • In there a distinct difference between the two?

      • Don't ask me how I hit the N on the other side of the keyboard -_-

      • Because the best available CPUs are only so fast, and logic boards only so large, both supercomputers and clusters end up being lots-and-lots-of-cards-connected-with-some-mixture-of-backplanes-and-cables at some point.

        There's a smooth-ish order of progression in terms of interconnect speed and latency (i.e. SETI@home is a cluster, but inter-node bandwidth is tiny and latency can be in the hundreds of milliseconds; a cheapo commodity cluster using the onboard GigE ports has better bandwidth and lower latency; and dedicated interconnects like InfiniBand push both further still).
        • SETI@home, although an embarrassingly parallel task, is not a cluster. Each client processes independent, discrete data irrespective of the results of any other client. There is no MPI, so all you have is a bunch of machines running the same serial software on different data. Clusters can be used for such a thing, but it's a horrible waste of money on interconnects since there is no message passing. It's like saying a computer lab with the same software on all the machines is a "cluster" because all the machines happen to run the same programs.
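
          For what it's worth, the coupling being described here looks roughly like the minimal MPI sketch below: ranks exchange partial results mid-run, which is exactly what SETI@home clients never do. Purely illustrative; nothing in it is specific to any real workload.

            #include <mpi.h>
            #include <stdio.h>

            int main(int argc, char **argv)
            {
                int rank, size;
                double mine, sum = 0.0;

                MPI_Init(&argc, &argv);
                MPI_Comm_rank(MPI_COMM_WORLD, &rank);
                MPI_Comm_size(MPI_COMM_WORLD, &size);

                mine = (double)rank;   /* stand-in for a locally computed partial result */

                /* The message-passing step: every rank needs everyone else's
                   partial result before it can continue. */
                MPI_Allreduce(&mine, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

                printf("rank %d of %d sees global sum %g\n", rank, size, sum);
                MPI_Finalize();
                return 0;
            }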
      • Yes, the network.
    • I know you're trolling, but most supercomputers these days are computing clusters.

    • by Anonymous Coward

      From Wikipedia (http://en.wikipedia.org/wiki/Supercomputer)

      Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and coprocessors like NVIDIA Tesla GPGPUs, AMD GPUs, IBM Cell, FPGAs. Most modern supercomputers are now highly-tuned computer clusters using commodity processors combined with custom interconnects.

    • Imagine a beowulf cluster of these clusters!

    • "Looks like a cluster. Not a supercomputer."

      Are you saying this because it is not a single-system-image, shared-memory machine, or because you just don't think distributed-memory clusters are supercomputers?

      I ask because I have built supercomputers and I find your comment puzzling, at best.

  • by hawguy ( 1600213 ) on Friday September 23, 2011 @01:54PM (#37494170)

    The article mentions that it's using Dell 'Zeus' servers, but the only information I can find about those servers online is that they are being used to build this cluster.

    What is a Dell 'Zeus' server?

    • A server with the new Sandy Bridge Xeons and the Larrabee-derived MIC coprocessors.
    • Judging from the name, it's a server that shoots sparks and sleeps around a lot.
    • by Anonymous Coward

      It's a codename for a server based on the Xeon E5 processors, which aren't yet announced or generally available.

  • While I applaud (and always do) advances in supercomputers, it raises the question of what happens to the previous generation(s). I'd love to get my hands on even one of the blade-based boxes in your usual configuration. They might not be good for the projected tasks in modern proposals, but they would be more than good enough for my modest needs. Anyone know how the surplus process works?

    • I wouldn't be surprised if they are destroyed, especially if they have ever been used for any kind of military computing. Or maybe the main scientists have some seriously kick-ass home computers.
    • The old computers probably just get sent to a scrap yard in China.

      Actually, that makes you wonder what happens when they land there...

    • by S-100 ( 1295224 )
      There is a market for used "supercomputers". Yale recently purchased one: http://dailybulletin.yale.edu/article.aspx?id=8382 It was number 146 on the list of the top 500 supercomputers, and they got it for a fraction of the cost when new.
  • Will it come with its own nuclear power plant to provide the necessary energy to power it? :)
    • 8,500 computers at, let's high-ball it, 1,000 watts each (maybe they're running SLI'd Quadros or something for visualization) comes to 8.5 megawatts. Considering the site it's at will probably be the size of a small neighborhood, that's not a huge amount.
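
      Spelling the arithmetic out (both the 8,500-node count and the 1 kW/node figure are the poster's high-ball guesses, not published numbers):

        #include <stdio.h>

        int main(void)
        {
            int    nodes          = 8500;     /* guessed node count      */
            double watts_per_node = 1000.0;   /* deliberately high-balled */

            printf("worst-case draw: %.1f MW\n", nodes * watts_per_node / 1e6);
            return 0;
        }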
  • Sounds like a sweet machine to run BOINC apps on.

  • by flaming-opus ( 8186 ) on Friday September 23, 2011 @03:44PM (#37495504)

    By 2013, 10 petaflops will be a competent but not astonishing system. Probably top-10-ish on the Top500 list.

    The interesting part here will be the MIC parts from Intel, to see if they perform better than the graphics cards everyone is putting into supercomputers in 2011 and 2012. The thought is that the MIC (Many Integrated Core) design of Knights Corner is easier to program. Part of this is because the cores are x86-based, though you get little performance out of them without using vector extensions. The more likely advantage is that the cores are more similar to CPU cores than what one finds on GPUs. Their ability to deal with branching code and scalar operations is likely to be better than GPUs', though far worse than contemporary CPU cores'. (The MIC cores are derived from the Pentium P54C pipeline.)

    In the 2013 generation, I don't think the distinction between MIC and GPU solutions will be very large. The MIC will still be a coprocessor attached to a fairly small pool of GDDR5 memory and connected to the CPU across a fairly high-latency PCIe bus. Thus, it will face most of the same issues GPGPUs face now; I fear that this will only work on codes with huge regions of branchless parallel data, which is not many of them. I think the subsequent generation of MIC processors may be much more interesting. If they can base the MIC core on Atom, then you have a core that might be plausible as a self-hosting processor. Even better would be placing a large pool of MIC cores on the same die as a couple of proper Xeon cores. If the CPU cores and coprocessor cores could share the memory controllers, or even the last cache level, one could reasonably work on more complex applications. I've seen some slides floating around the HPC world which hint at Intel heading in this direction, but it's hard to tell what will really happen, and when.
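
    For reference, the kind of code that works well under either model today is the big branch-free data-parallel loop below. Plain OpenMP is shown, not the Intel-specific MIC offload pragmas, so treat it as an illustrative sketch only:

      #include <stddef.h>

      /* y = a*x + y over n elements: every iteration is independent and
         branch-free, so it maps onto wide vector units and lots of simple
         cores equally well. */
      void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
      {
          #pragma omp parallel for
          for (size_t i = 0; i < n; i++)
              y[i] = a * x[i] + y[i];
      }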

    • by Anonymous Coward

      This is what AMD is doing, lol. Once again, Intel gets scooped by a few years by a company that knows how to plan ahead.

  • They claim they will use 56-gigabit InfiniBand. Has anyone tested Mellanox's FDR adapters and switches? From what I understand, that is 14 gigabit per lane over 4x cabling. I remember all the problems just getting 10 gigabit to work over 4x 2.5-gigabit copper. I imagine this must use fiber to get any distance from the server to the switch.

    Their ASIC seems to support only 36 ports. Building a 2,000-node network with 36-port switches will take a lot of interconnected switches. I wonder what topology they are going to use (rough sizing below).
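
    Rough non-blocking fat-tree sizing with radix-36 switch ASICs (textbook folded-Clos formulas, nothing TACC-specific), which is why roughly 2,000 hosts pushes past two switch tiers:

      #include <stdio.h>

      int main(void)
      {
          int radix = 36;   /* ports per switch ASIC */

          /* Standard non-blocking folded-Clos (fat-tree) host capacities */
          int two_tier   = radix * radix / 2;            /*   648 hosts */
          int three_tier = radix * radix * radix / 4;    /* 11664 hosts */

          printf("2-tier max hosts: %d\n", two_tier);
          printf("3-tier max hosts: %d\n", three_tier);
          return 0;
      }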

    • The 36-port part is the ASIC. The switch boxes have a lot more ports.
    • We had no problem getting (at the time) the largest 10-gigabit InfiniBand installation running at VT in 2003 for System X. Fabric optimization was the hardest part, but we worked with a couple of vendors and were able to get an optimized fabric manager in place within a few months. I think the copper limit is still between 15 m and 20 m. The best cables we got were from Gore. We were using 64-port switches throughout to begin with and then moved to smaller leaf switches (24 port) and larger backbone switches (288 port).
      • by soldack ( 48581 )

        I know about this... I worked on SilverStorm's fabric manager while I was there. I remember going into the VT System X room and seeing piles of bad cables from the earlier setup. If I remember correctly, the very first network had more switch ASICs than hosts... both were around 2,000 or so. I think the first switches used 8-port ASICs internally. We made massive improvements to our fabric scan time and reaction time to moving cables, nodes going down, etc. This was a good thing because the non-SilverStorm ...

  • Can we play NetHack on it?
