Japan's 8-petaflop K Computer Is Fastest On Earth

Stoobalou writes "An eight-petaflop Japanese supercomputer has grabbed the title of fastest computer on earth in the new Top 500 Supercomputing List to be officially unveiled at the International Supercomputing Conference in Hamburg today. The K Computer is based at the RIKEN Advanced Institute for Computational Science in Kobe, Japan, and smashes the previous supercomputing records with a processing power of more than 8 petaflop/s (quadrillion calculations per second) — three times that of its nearest rival."
  • So it's faster than the Crays on the list, the nearest competitors?

    • Re: (Score:3, Funny)

      In other news, company announces the system will be used initially for Bitcoin mining. The payback period is expected to range between three weeks and two hundred years, depending on market conditions....

      • PRNewWires, 13 June 2011

        Today a spokeswoman for MEXT revealed that the bitcoin mining system for the K Computer was written using J2EE: "We have estimated that it will only take four and a half years to get the JVM up and running, but after that it will be 'faster than greased soba noodles'."
  • the NSA has its own chip fab

    • I imagine that their in-house fab churns out some very interesting niche designs; but it would be a real surprise if it has any terribly impressive capabilities in general-purpose compute applications. Staying on the bleeding edge of fabrication requires serious money, while just quietly gobbling up commodity stuff from Intel or Nvidia or whoever won't raise any eyebrows.

      They probably have some cool specialized crypto-crunchers based on cryptanalysis that hasn't officially been done yet, and I suspect t
      • Rubbish. They want the highest Prime95 and SuperPi scores and to find the next Mersenne prime.
      • I suspect that they are the chaps to talk to when you need a chip that absolutely hasn't been backdoored in China.

        Right on! You get a chip that has been backdoored in the good old US of A instead.

    • by durrr ( 1316311 )
      But the NSA only uses theirs for spying on its own and other nations' citizens and corporations, so it's not very relevant in the big picture. Think of it like building the world's tallest skyscraper in a pit deeper than the height of the building; then you put a lid on it, fill the pit with water, and house a family of goldfish in it. That's more or less what an NSA "supercomputer" is.
  • oblig. (Score:5, Funny)

    by mevets ( 322601 ) on Monday June 20, 2011 @08:31AM (#36499388)

    640K cores is enough for anyone.

  • Imagine a beowulf cluster of... wait. Are we still doing that? OK, now I feel old. In Slashdot years.
  • For supercomputers, it seems at least once a year something doubles. For desktop computers... Mine is 4 years old and still similar in specs to PCs that are being sold today.

    • by jpapon ( 1877296 ) on Monday June 20, 2011 @08:57AM (#36499688) Journal
      Two things: first, your 4-year-old desktop is nowhere near as fast as the new i7s. It just seems that way because you're not doing number crunching on it; for normal applications, you'd probably see a bigger boost by switching to an SSD than to a new CPU. Second, these supercomputers are massively parallel, so while the processors themselves do get faster, the real increase in speed comes from adding lots more cores.
      • What if your 4-year-old desktop is a dual quad-core (8-core) machine? Sure, it benches slower than some i7s, but it's as good as an i5.
        • by eqisow ( 877574 )
          If that's the case, then it was hardly a fair comparison to begin with.
        • When he says "the new i7s", he means the 2000-series i7s, not the 800/900 series.

          There have been a lot of improvements in the four years since then. Check out AVX, for example: Sandy Bridge has 256-bit vector operations. Sure, if you can crank integer math along at 4.2 GHz overclocked, you can probably beat the pants off a 3.4 GHz machine, but... when that 3.4 GHz machine includes operations that take the same amount of time to execute but process twice as much data, or reduce a multiple-instru

    • by Yvanhoe ( 564877 )
      How many cores? Have you considered the GPU as well? These days, it is the form factor and the price that are halved every year or so. That will continue until we find a use for the ridiculous power we have today.

      Smartphones are now dual-core; mainstream computers DO continue to improve.
    • by fuzzyfuzzyfungus ( 1223518 ) on Monday June 20, 2011 @09:02AM (#36499766) Journal
      I'm not sure that you are being entirely fair to the desktops:

      On the one hand, since a vast percentage of desktops are sold to budget-conscious users with fairly defined needs, the bottom end of the desktop market moves fairly sluggishly (of course, the bottom end of 'supercomputers' also moves more sluggishly; but nobody bothers to talk about the "250,000th fastest supercomputer!!"); on the other, the top end has been moving at a reasonably steady clip.

      Back in mid 2007, a Core2 quad was Pretty Serious Stuff, with maybe a Geforce 8800 or 9800 and 4-8 gigs of RAM if you were hardcore like that.

      That will still go head to head with a contemporary budget-to-midrange box; but if you spent the same money today that you would have had to spend on that, you could be talking a high-end i7, a markedly more powerful graphics card (or three of them), and two or three times the RAM. Plus the now-reasonably-cost-effective-even-when-large-enough-to-be-useful SSD that will drive your I/O numbers through the roof.

      Apathy and diminishing returns keep the desktop market boring; but if those are no object, you can still go nuts.
    • by Nadaka ( 224565 )

      The big advancements in personal computing these last few years have mostly been in graphics cards. Though density has improved, the benefits have gone more toward power efficiency than raw speed.

      • It is true that GPUs have seen a lot of the excitement (particularly with their transition toward being general-purpose); but part of that is arguably definitional:

        Because (contemporary, that is; I'm sure SGI and Sun were doing cool stuff back when I was knee-high to a grasshopper...) multi-GPU tech started its life as a hardcore gamer feature, you can get "desktop" motherboards with support for 3 or 4 16x (physical, usually 8x electrical) PCIe graphics cards. The moment you add a second CPU socket, though, it becom
    • by Bengie ( 1121981 )

      An i7 quad is about twice as fast as a Core2Quad. The Sandy Bridge i7 is about 30% faster per core than the original i7 and has 50% more cores; Ivy Bridge, coming out next year, is about 20% faster per core than Sandy Bridge and has another 50% more cores.

      Assuming you use a Core2Quad (about 4 years old), current CPUs are about 2*1.3*1.5 = 3.9 times faster, and that's not including the new AVX instructions, which are about twice as fast as SSE. Add in AVX and you're talking about very large performance differen
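
      A quick back-of-the-envelope check of that compounding, as a Python sketch (the 2x, 1.3x, and 1.5x factors are the poster's ballpark estimates, not measured benchmarks):

        # Compound the per-generation gains quoted above (all rough estimates).
        core2_to_i7 = 2.0   # i7 quad vs Core2Quad
        sandy_ipc   = 1.3   # Sandy Bridge vs original i7, per core
        sandy_cores = 1.5   # 50% more cores
        print(core2_to_i7 * sandy_ipc * sandy_cores)  # ~3.9, before counting AVX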

  • Surely the Singularity and computer sentience cannot be far off now. Yay, a faster computer. I guess this is news for today, but this is 'dog bites man' stuff. Tomorrow the sun will rise again and there will be a faster computer than this one. Yawn.
  • by 1sockchuck ( 826398 ) on Monday June 20, 2011 @09:05AM (#36499798) Homepage
    Here are some images of the system [datacenterknowledge.com], which currently fills 672 cabinets and draws about 10 megawatts of power. The K system is more powerful than the next five systems combined. It's a big-ass system.
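
    For scale, dividing the quoted figures gives a rough power efficiency (a back-of-the-envelope sketch; it ignores whether the 10 MW includes cooling):

      peak_flops = 8e15   # ~8 petaflop/s, as reported
      power_w    = 10e6   # ~10 megawatts, as reported
      print(peak_flops / power_w / 1e9)  # ~0.8 gigaflop/s per watt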
  • Can anyone tell us the most recent accepted figure for human brain emulation in petaflops and terabytes of memory?
    • If you are interested, try giving this page a read.

      http://www.transhumanist.com/volume1/moravec.htm

      I googled it, so I can't verify its accuracy, but it looked reasonable.
    • by Ecuador ( 740021 )

      Sorry, but we are so far from figuring out how the brain works that your question has no meaning at all.
      If you are thinking something along the lines of a big "neural network" emulating the brain, I would have to tell you that the artificial neuron is a very useful math construct that is only related to a biological neuron via a crude abstraction. Replacing biological neurons with artificial neural networks is similar to replacing a fisherman with a perfect sphere in a math problem: useful in some context, no

    • I research brains and haven't ever seen good numbers. It's really hard to equate brain processing with computer processing. In some ways brains are far superior and in others, far inferior. There seems to be no upper limit on what our brains can learn so essentially we have unlimited hard drive storage. However, for active processes, we're quite limited in what we can do in parallel - sort of. We do many things in parallel but not many consciously (i.e., our sensory systems are always going and our motor an
    • by kyle5t ( 1479639 )
      The EU's Human Brain Project has an estimate of 1000 times the current fastest supercomputer (probably written about a year ago), so maybe around an exaflop.

      "Today, simulating a single neuron requires the full power of a laptop computer. But the brain has billions of neurons and simulating all them simultaneously is a huge challenge. To get round this problem, the project will develop novel techniques of multi-level simulation in which only groups of neurons that are highly active are simulated in detail.
  • It is using SPARC CPUs and no GPUs. I wonder if Oracle is watching? It will be interesting to see, since they now own the Zombie formerly known as Sun.
    So when are we going to see nVidia get into this game with an ARM+GPU-based supercomputer?

    • Since SPARC is an open spec, why should Oracle care?
      • by LWATCDR ( 28044 )

        Because Sun developed SPARC, and if for no other reason than PR.
        "The world's fastest computer is powered by SPARC" makes a great lead-in for selling SPARC-based servers.

    • by joib ( 70841 )
      This one uses SPARC chips designed and fabbed (IIRC?) by Fujitsu. Sun/Oracle has nothing to do with it. AFAICT the politics behind this machine is that a few years ago NEC pulled out of the project to design a next-generation vector chip for use in a Japanese Earth Simulator follow-up. Hence the project resorted to the Fujitsu SPARC chips, which are not really designed for HPC but are still a domestic design. I wouldn't expect this machine design to become popular outside Japan.
  • So there are faster computers on some other planet somewhere?

    I might think about ordering one of those instead, except shipping cost (and time) would be a problem, and my credit card would expire before they got the order (the speed of light is a bitch).

  • The Top 500 is based upon the Linpack benchmark, which is not really a good reflection of 'how fast' a supercomputer really is. Newer benchmarks, such as graph500 [graph500.org] and the NAS parallel benchmarks [nasa.gov], try to make the benchmark more real-world. But if all you plan to do is solve linear equations then I guess Linpack is your thing.

    • But if all you plan to do is solve linear equations then I guess Linpack is your thing.

      You are right, but for the wrong reasons. An awful lot of HPC work solves linear equations; it forms the inner loop of many PDE solvers, for instance. However, LINPACK is dense linear algebra, whereas many problems with linear equations in the inner loop involve large, sparse systems. That said, there are many inner blocks which are solved as dense problems.

      The Graph500 notes that the graph problems are ill-suited to machines whi
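
      To make the dense-versus-sparse distinction concrete, here is a minimal NumPy/SciPy sketch; the matrix size and density are arbitrary illustration values:

        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import spsolve

        n = 1000
        b = np.random.rand(n)

        # Dense solve: the kind of problem LINPACK times; O(n^3) work.
        A = np.random.rand(n, n) + n * np.eye(n)  # boosted diagonal keeps it well-conditioned
        x_dense = np.linalg.solve(A, b)

        # Sparse solve: closer to the inner loop of many PDE-based codes.
        S = sparse.random(n, n, density=0.01, format='csc') + n * sparse.identity(n, format='csc')
        x_sparse = spsolve(S, b)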

  • What I don't quite get - and maybe someone can enlighten me - is how they keep 80K compute nodes going. Even with very reliable hardware, several of these nodes will fail each day. The massively parallel codes I work with (MD) can't deal with a compute node going out. Do other massively parallel codes have a way to deal with this sort of thing? This seems to be a big challenge for parallel computing. When you have a code and problem that can use several thousand nodes, hardware failure will be a daily

    • by Junta ( 36770 )

      Frequently, job restart. Long-running jobs have checkpoint and restore. Generally, a fault is isolated to a job, so yes, on 80,000 systems you'll have a failure, but if you were running 800 large jobs, you only lose one, and the other 799 didn't even know something went wrong. Generally something like this runs a few benchmarks across the whole machine at the very beginning, and never again does the whole thing work as one toward a single task.

      • by Junta ( 36770 )

        Oh, and that one 'lost' job gets restarted shortly thereafter, with the user maybe realizing that it took longer than he thought it should.

      • Frequent checkpoints would be onerous for billion atom systems.

    • Mostly, systems fail at the start because of manufacturing defects, or after a few years due to mechanical failure. At least in one image I saw they were using water cooling on their boards, which should reduce the mechanical failure issue of fans; I would also assume they are using some fault-tolerant storage, which is the other major failure route. If you take a, possibly optimistic, failure rate of 1%, that is about one machine failing a day over a 3-year life span. Being as I would make them as similar as possibl
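
      Sanity-checking that estimate (the 1% lifetime failure rate and the ~80K node count are figures from this thread, not published numbers):

        nodes         = 80_000    # roughly the node count discussed above
        fail_fraction = 0.01      # assumed: 1% of nodes fail over the machine's life
        lifespan_days = 3 * 365
        print(nodes * fail_fraction / lifespan_days)  # ~0.73, about one failure a day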

    • by joib ( 70841 )
      Most codes can't deal with node failure. So far the solution seems to be to checkpoint frequently (say, once per hour). There's not much else that is sensible; e.g., running pairs of nodes in lockstep is more expensive than an I/O subsystem capable of the checkpointing.
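
      A minimal sketch of that checkpoint/restart pattern (the file name, state layout, and single-process loop are hypothetical; real MPI codes typically write per-rank checkpoints to a parallel filesystem):

        import os, pickle, time

        CHECKPOINT = "state.pkl"   # hypothetical checkpoint file
        INTERVAL = 3600            # seconds; "once per hour", as suggested above

        def load_or_init():
            if os.path.exists(CHECKPOINT):
                with open(CHECKPOINT, "rb") as f:
                    return pickle.load(f)   # resume after a node failure
            return {"step": 0}              # fresh start

        def save(state):
            tmp = CHECKPOINT + ".tmp"
            with open(tmp, "wb") as f:
                pickle.dump(state, f)
            os.replace(tmp, CHECKPOINT)     # atomic rename: no half-written checkpoint

        state = load_or_init()
        last_save = time.time()
        while state["step"] < 1_000_000:
            state["step"] += 1              # stand-in for one step of real work
            if time.time() - last_save >= INTERVAL:
                save(state)
                last_save = time.time()
        save(state)                         # final checkpoint on completion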
  • penismeasuringcontest
  • Hah - I just started a ~10000 proc job on the machine sitting in position 99....

    I also regularly run jobs on Jaguar (#3)....

    The advances in supercomputing in the last year have been simply astounding. GPUs are changing the game, of course... but the density of CPUs is getting insane. Being able to plug four 12-core processors into a 1U mobo is getting crazy. Can't wait to see where it goes in the next year!

  • .. as opposed to the fastest in space?

    If they mean the fastest computer we've ever made to date, why don't they just say that, instead of using a qualifier like "on earth", which implies a specifically limited scope on account of an awareness of something faster elsewhere?

    ... or maybe I'm just too damned literal.

  • Even the story seems to think that you're cheating if you use a GPU - when in fact, depending on the problems you're trying to solve, GPUs can make your computer more power-efficient, less expensive, and faster. OK, sure, if you want to argue that not every algorithm will map efficiently to a GPU, I'll accept that argument. But then you have to grant me the reverse argument: not every algorithm maps efficiently to CPUs. The problem in thinking here is just that the CPU is the "correct" way to do thin

  • Wasn't there an article recently about China's world's-fastest supercomputer, with no competition in sight for the next several years? Or did I fall asleep at my seat for several years?

  • The fastest computers are 100,000x faster than 25 years ago. But the typical problem in my discipline (seismic) is a 4D grid, and increasing the side of a grid 18x, raised to the fourth power, is what gets you to 100,000. I was at a conference earlier this month where people were still talking about tricks like custom hardware (GPUs) and data compression to squeeze ever-larger problems into supercomputers. This aspect has not changed since I went to grad school a quarter century ago.
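
    A quick check of that fourth-power arithmetic: a 4D grid's cell count scales as the fourth power of its side, so a 100,000x compute gain only buys about an 18x larger side.

      flops_gain = 100_000              # ~25 years of supercomputer progress
      dims = 4                          # a 4D seismic grid
      print(flops_gain ** (1 / dims))   # ~17.8: each grid side grows by barely 18x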
  • although it should give SkyNet a run for its money.
  • The K computer is able to do work more efficiently than GPUs because it uses a very power-efficient core, the Fujitsu SPARC64 VIIIfx. If you peel the onion, it seems like the real reason for the energy efficiency is the special-purpose units and the HPC-ACE instructions. I did a quick investigation of what this core has (and what it doesn't) that makes it so energy efficient. It may be an interesting read for some of you, so I'm leaving a link here: http://bit.ly/kTvvDE [bit.ly]
