Japan Supercomputing

Japan Aims To Win Exascale Race

dcblogs writes "In the global race to build the next generation of supercomputers (exascale), there is no guarantee the U.S. will finish first, but the stakes are high for the U.S. tech industry. Today, U.S. firms (Hewlett-Packard, IBM, and Intel in particular) dominate the global high-performance computing (HPC) market. On the Top 500 list, the worldwide ranking of the most powerful supercomputers, HP now accounts for 39% of the systems, IBM for 33%, and Cray for nearly 10%. That lopsided U.S. market share does not sit well with other countries, which are busy building their own chips, interconnects, and high-tech industries in the push for exascale. Europe and China are deep into efforts to build exascale machines, and now so is Japan. Kimihiko Hirao, director of the RIKEN Advanced Institute for Computational Science of Japan, said Japan is prepping a system for 2020. Asked whether he sees the push to exascale as a race between nations, Hirao said yes. Will Japan try to win that race? 'I hope so,' he said. 'We are rather confident,' he added, arguing that Japan has the technology and the people to achieve the goal. Jack Dongarra, a professor of computer science at the University of Tennessee and one of the academic leaders of the Top 500 supercomputing list, said Japan is serious and on target to deliver a system by 2020."

Comments Filter:
  • Waste of money (Score:2, Insightful)

    by SoftwareArtist ( 1472499 ) on Saturday November 23, 2013 @03:30PM (#45502409)

    I think the exascale race will turn out to be a dead end. Tightly coupled calculations simply don't scale. To use even current-generation supercomputers effectively, you need to scale to thousands of cores, and there just aren't many codes that can do that. Exascale computers will require scaling to millions of cores, and I don't see that happening. For all but a handful of (mostly contrived) problems, it won't be possible.
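
    To put rough numbers on that, here is a back-of-the-envelope Amdahl's law sketch in Python (the 0.1% serial/communication fraction is a made-up figure for illustration, not a measurement from any real code):

        # Amdahl's law: even a tiny serial or communication fraction caps
        # speedup long before a million cores. The 0.1% figure is hypothetical.
        def amdahl_speedup(cores, serial_fraction):
            """Ideal speedup on `cores` cores when `serial_fraction`
            of the work cannot be parallelized."""
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

        for n in (1_000, 100_000, 1_000_000):
            s = amdahl_speedup(n, serial_fraction=0.001)  # 0.1% serial work
            print(f"{n:>9,} cores -> {s:,.0f}x speedup")
        # Prints roughly 500x at 1,000 cores, 990x at 100,000 cores, and
        # 999x at 1,000,000 cores: past a few thousand cores, the serial
        # fraction dominates and the extra hardware is wasted.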

    So like it or not, we need to settle for loosely coupled codes that run mostly independent calculations on lots of nodes with only limited communication between them. And for that, you don't need these specially designed systems with super expensive interconnects. Any ordinary data center works just as well for a fraction of the cost.
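
    That loosely coupled style can be as simple as a process pool running independent tasks with no communication between them, along these lines (simulate() here is a hypothetical stand-in for one self-contained calculation):

        from multiprocessing import Pool

        def simulate(seed):
            """One self-contained run; it never talks to the other workers."""
            x = seed
            for _ in range(10_000):
                x = (1103515245 * x + 12345) % 2**31  # toy stand-in workload
            return x / 2**31

        if __name__ == "__main__":
            with Pool() as pool:
                # Tasks run independently; results are gathered only at the
                # end, so no expensive custom interconnect is needed.
                results = pool.map(simulate, range(100))
            print(f"mean of {len(results)} runs: {sum(results) / len(results):.4f}")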

"Ninety percent of baseball is half mental." -- Yogi Berra

Working...