Breaking Supercomputers' Exaflops Barrier

Nerval's Lobster writes "Breaking the exaflops barrier remains a development goal for many who research high-performance computing. Some developers predicted that China's new Tianhe-2 supercomputer would be the first to break through. Indeed, Tianhe-2 did pretty well when it was finally revealed — knocking the U.S.-based Titan off the top of the Top500 list of the world's fastest supercomputers. Yet despite sustained performance of 33 petaflops to 35 petaflops and peaks ranging as high as 55 petaflops, even the world's fastest supercomputer couldn't make it past (or even close to) the big barrier. Now, the HPC market is back to chattering over who'll first build an exascale computer, and how long it might take to bring such a platform online. Bottom line: It will take a really long time, combined with major breakthroughs in chip design, power utilization and programming, according to Nvidia chief scientist Bill Dally, who gave the keynote speech at the 2013 International Supercomputing Conference last week in Leipzig, Germany. In a speech he called 'Future Challenges of Large-scale Computing' (and in a blog post covering similar ground), Dally described some of the incredible performance hurdles that need to be overcome in pursuit of the exaflops barrier."


  • How is exaflop a barrier? Is there some atypical difficulty in exceeding an exaflop?
    • RTFA

      • Re:Barrier? (Score:4, Insightful)

        by holmstar ( 1388267 ) on Tuesday June 25, 2013 @10:25PM (#44108377)
        I'm sure the same sort of things were said about a petaflop machine, back in the day. Doesn't make exaflop a barrier. Just an engineering challenge, like every other bleeding edge supercomputer has been.
        • by Anonymous Coward

          Yes, it's the quantitative carrot... when I was learning parallel computing, teraflops were the fantasy milestone we'd reach some day and terabytes was the crazy storage you imagined existed in some NSA datacenter, rather than in your cousin's USB drive. People feel like brilliant strategists every time they point out the next 500-1000x milestone and declare that as the thing that matters to differentiate themselves from all the myopic folk working on today's problem.

        • It is a barrier, but that being said it just means no one has done it yet. It doesn't mean it's impossible. A barrier is something to strive to overcome and in spite of all the striving, it feels like a fully blown case of Zeno's paradox, for a while. Only now that we're so much closer to the day that an exaflops will be reached, it seems that we must all chatter about it lest no one will have enough motivation to actually make it happen.

      • Re: (Score:3, Interesting)

        by Zargg ( 1596625 )

I'm pretty sure the parent is questioning why the word "barrier" is used instead of something like "milestone", which I would have chosen. A barrier implies there is something special stopping you that you need to work around or resolve; a milestone is just a convenient number to mark, as in this case. I see no difference between passing an exaflops and, say, 0.9 exaflops, since both require "a really long time, combined with major breakthroughs in chip design, power utilization and programming", so it makes little sense to single the round number out.

    • yeah, strange harmonics and shit, and word around the cooler is that it would require an infinite amount of energy as well... that or set the atmosphere on fire or some shit.

    • It's just a figure of speech. It's a milestone. It's not difficult to exceed one exaflops (the name is short for floating-point operations per second; the final "s" is not a plural) once you've got to, say, 0.99 exaflops. Scientists like to talk in orders of magnitude. Right now we are in the tens of petaflops, but haven't yet reached hundreds. Tianhe-2 peaks at 55 pflops, but its sustained speed is a bit more than half of that.

      The problem is much more about how to get there. It's not just machinery; it's how to actually write and debug programs at that scale.

    • by AmiMoJo ( 196126 ) *

      There is nothing special about reaching the exaflops level, unlike, say, the sound barrier, where there are real physical forces that make it difficult to pass.

      Scaling is a challenge of course, but the difference between say 0.9 exaflops and 1.1 exaflops is basically just money.

  • by storkus ( 179708 ) on Tuesday June 25, 2013 @10:22PM (#44108359)

    My takeaway from reading this and the blog post is that, while NVIDIA may consider graphics to be their bread & butter, it looks like they're taking this space (HPC) very seriously in the long term--perhaps they even think they can dominate it. This is a big difference from the other players: IBM isn't bothering to throw POWER at it, and AMD/ATI is only present on older machines; ATI in particular seems more interested in going after the mobile space rather than HPC. I don't know what to make of Intel other than they know they're the choice for the non-GPU side and are at the top of their game.

    One problem I see is that NVIDIA is still a fabless house and has performance limitations tied to whatever fab they partner with; perhaps this is why they downplay process gains in the blog post.

    Of course, if the conspiracy theorists are to be believed, NSA and friends already have this 10-years-into-the-future technology...

    • Of course, if the conspiracy theorists are to be believed, NSA and friends already have this 10-years-into-the-future technology...

      I heard 20 years - they're still learning stuff from the Roswell crash.

    • Of course, if the conspiracy theorists are to be believed, NSA and friends already have this 10-years-into-the-future technology...

      With a nearly unlimited budget, no need to sell a product or make a profit, some of the best and brightest talent in the world (they especially like math majors), and the ability to spy on and thus learn from nearly anyone ... well, they'd be pretty damned incompetent if they somehow aren't ahead of the mainstream. Make no mistake, "national security" is a very high-stakes game, these are people who play to win, and "winning" means superiority.

      That is a conspiracy theory? Usually those involve aliens or something.

    • Nvidia's Performance Is Good Enough For Me
    • by HuguesT ( 84078 )

      Actually, Intel is pretty much the king of the hill at the moment for HPC. They don't have a "GPU" solution, but they do have a massively parallel CPU + PCIe compute card available, called the "Xeon Phi". The naming is extremely confusing, yet this is what the current fastest supercomputer uses.

      Xeon phi is easier to deal with than Nvidia's solution for GPU, essentially because it is currently much easier to program.


  • Hmm, Mr. Fusion is due in a couple of years...

  • Imagine a Beowulf cluster of... What? All supercomputers are basically Beowulf clusters now? Umm... OK, is Natalie Portman still topical?
  • Does anyone have an idea of what these extremely expensive systems are even for? And don't say password cracking/NSA, because both of those tasks are "embarrassingly parallel", so that you can use a cloud of separate computers rather than a tightly interlinked network like a supercomputer.

    Are there real world problems right now where another 100x more CPU power would make real, practical differences? (versus making the algorithm more efficient, etc)

    • CFD simulation. Lattice Boltzmann simulations of fluid dynamics are one such application. Folks at the various DOE national laboratories have a pretty keen interest in this kind of simulation.
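For the curious, the collide-and-stream structure of a lattice Boltzmann solver can be sketched in a few lines. This is a toy 1D diffusion model (D1Q2 lattice, full relaxation), nothing like a production CFD code, which would use a 3D lattice such as D3Q19 distributed across thousands of nodes:

```python
# Toy 1D lattice Boltzmann sketch (D1Q2, pure diffusion), for
# illustration only -- real solvers are 3D and massively parallel.
import numpy as np

N = 100                                  # lattice sites (periodic)
f_plus = np.zeros(N)                     # population moving right
f_minus = np.zeros(N)                    # population moving left
f_plus[N // 2] = f_minus[N // 2] = 0.5   # unit mass at the centre

for _ in range(200):
    rho = f_plus + f_minus               # local density
    # Collide: relax fully to equilibrium (tau = 1); for pure
    # diffusion the equilibrium splits the density evenly.
    f_plus = 0.5 * rho
    f_minus = 0.5 * rho
    # Stream: shift each population one site in its direction.
    f_plus = np.roll(f_plus, 1)
    f_minus = np.roll(f_minus, -1)

rho = f_plus + f_minus
print(rho.sum())   # total mass stays ~1.0
print(rho.max())   # the initial spike has diffused out
```

The appeal for HPC is that both steps are purely local (collision is per-site, streaming is nearest-neighbour), which is exactly the kind of communication pattern that scales to very large machines.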

    • Exascale computers would be helpful for climate modeling. Right now climate models don't have the same resolution as weather models, because they need to be run for much longer periods of time. This means that they don't have the resolution to simulate clouds directly, and resort to average statistical approximations of cloud behavior. This is a big bottleneck in improving the accuracy of climate models. They're just now moving from 100 km to 10 km resolution for short simulations. With exascale they could run at that resolution routinely.

  • we all know Chinese numbers represent a value exactly 14% less than what the rest of the world agrees on.

  • Moore's law predicts that the "factor-of-33" will be bridged in about 10 years. There is only a factor of 20 to the "peak performance", so about a year before that, peak performance might topple the exaflops "barrier".
    (Some people plug in different constants in Moore's law. I use factor-of-1000 for every 20 years. That's 30 every 10, 2 every 2, and about 5 every five. This has never failed me: it always works out).
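Under that rule of thumb the arithmetic works out as claimed; a quick sketch, plugging in the factor-of-1000-per-20-years constant from above:

```python
# Back-of-the-envelope check of the "1000x every 20 years" rule
# of thumb -- a sanity check, not a forecast.
import math

def years_to_gain(factor, rate=1000.0, period=20.0):
    # Solve rate ** (t / period) == factor for t.
    return period * math.log(factor) / math.log(rate)

print(round(years_to_gain(33), 1))   # sustained gap: ~10.1 years
print(round(years_to_gain(20), 1))   # peak gap: ~8.7 years
```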

    • A more pessimistic estimate would say Moore's law only gets you a doubling every 3 years nowadays, so a factor of 32 would take 15 years to work out. See the trouble TSMC, for example, had moving to 28nm, and is now having at 20nm.
      An exaflops supercomputer would still be possible, with a 10x boost from Moore's law over 10 years and building a 3x bigger supercomputer.
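The pessimistic arithmetic above, spelled out (assuming the doubling-every-3-years figure from the comment):

```python
# Pessimistic scenario: doubling every 3 years, with a 3x larger
# machine making up part of the difference.
import math

doubling_years = 3.0
print(doubling_years * math.log2(32))      # 32x from process alone: 15.0 years

process_gain = 2 ** (10 / doubling_years)  # ~10.1x in a decade
print(round(3 * process_gain, 1))          # plus a 3x bigger machine: ~30.2x
```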

    • I think the DOE was predicting last year that their first exascale system will come online in 7 to 9 years.

  • We're at 5.4% of exaflop scale. Somehow I don't think this is a 2013 / 2014 goal ;)

  • Some developers predicted that China's new Tianhe-2 supercomputer would be the first to break through.

    Wait... *what* uninformed developer(s) predicted that? The previous record (six months ago) was set by Titan, at 17.59 Petaflop/s. So to pass the exaflop barrier this time around would require over a fifty-fold improvement -- something never before seen in the history of the Top500 list. Did someone *really* make this prediction, or is author Kevin Fogarty just making shit up?

  • Unless the expensive Exadata box we just bought isn't capable of the exa-stuff they promised.
