Sequoia Supercomputer Sets Record With 'Time Warp' 39
Nerval's Lobster writes "The 'Sequoia' Blue Gene/Q supercomputer at the Lawrence Livermore National Laboratory (LLNL) has set a new HPC record, helped along by a new 'Time Warp' protocol and benchmark that detects parallelism and automatically improves performance as the system scales out to more cores. Scientists at the Rensselaer Polytechnic Institute and LLNL said Sequoia topped 504 billion events per second, breaking the previous record of 12.2 billion events per second set in 2009. The scientists believe that such performance enables them to reach so-called 'planetary'-scale calculations, enough to factor in all 7 billion people in the world, or the billions of hosts found on the Internet. 'We are reaching an interesting transition point where our simulation capability is limited more by our ability to develop, maintain, and validate models of complex systems than by our ability to execute them in a timely manner,' Chris Carothers, director of the Computational Center for Nanotechnology Innovations at RPI, wrote in a statement."
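For anyone wondering what 'Time Warp' actually does: it is an optimistic parallel discrete-event simulation protocol, where each logical process runs ahead on the events it already has and rolls its state back when a "straggler" event arrives with a timestamp in its past. Below is a minimal Python sketch of that rollback idea; the LogicalProcess class and all the event times are invented for illustration and are not the ROSS code that actually ran on Sequoia.

import heapq

class LogicalProcess:
    """Toy Time Warp logical process: processes events optimistically and
    rolls back its state when a straggler event arrives in its past."""

    def __init__(self):
        self.lvt = 0.0       # local virtual time
        self.state = 0       # toy state: count of events handled
        self.processed = []  # (timestamp, saved_state) history for rollback
        self.pending = []    # min-heap of known future events

    def schedule(self, timestamp):
        if timestamp < self.lvt:
            self._rollback(timestamp)   # straggler: undo optimistic work
        heapq.heappush(self.pending, timestamp)

    def _rollback(self, timestamp):
        # Restore the newest saved state older than the straggler and
        # re-queue every event that was processed too eagerly.
        while self.processed and self.processed[-1][0] >= timestamp:
            ts, saved_state = self.processed.pop()
            self.state = saved_state
            heapq.heappush(self.pending, ts)
        self.lvt = self.processed[-1][0] if self.processed else 0.0

    def run_available(self):
        # Optimistically process everything currently queued.
        while self.pending:
            ts = heapq.heappop(self.pending)
            self.processed.append((ts, self.state))  # save state before applying
            self.state += 1                          # "handle" the event
            self.lvt = ts

lp = LogicalProcess()
for t in (1.0, 2.0, 5.0):
    lp.schedule(t)
lp.run_available()
lp.schedule(3.0)    # straggler arrives: the work done at t=5.0 gets rolled back
lp.run_available()
print(lp.lvt, lp.state)   # prints "5.0 4": four events applied in the correct order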
Planetary scale computing? (Score:5, Funny)
"enough to factor in all 7 billion people in the world, or the billions of hosts found on the Internet."
It's all fun and games until someone puts in a hyperspace bypass...
This sounds familiar... (Score:5, Informative)
http://tech.slashdot.org/story/13/05/02/2119214/llnlrpi-supercomputer-smashes-simulation-speed-record [slashdot.org]
Yup.
Re: (Score:3)
The system's so efficient it managed to travel back in time by 23 hours.
Re: (Score:2)
Re: (Score:2)
I could've sworn I heard about this less than 23 hours ago somewhere... http://tech.slashdot.org/story/13/05/02/2119214/llnlrpi-supercomputer-smashes-simulation-speed-record [slashdot.org]
Maybe they could use 0.000000001% of its processing power to look for duplicate stories on here.
Spoiler (Score:4, Funny)
You live in a computer simulation.
Re: (Score:2)
You live in a computer simulation.
And so does this supercomputer...
That raises the question of how a supercomputer can simulate what our species would do if it had access to a supercomputer that could simulate what our species would do if it had access to a supercomputer that could simulate...
Out of memory [core dump]
% rm -rf *
Which of course begs the question...
Clearly the big bang was a giant core dump (Score:2)
You live in a computer simulation.
And so does this supercomputer...
That raises the question of how a supercomputer can simulate what our species would do if it had access to a supercomputer that could simulate what our species would do if it had access to a supercomputer that could simulate...
Out of memory [core dump]
% rm -rf *
Which of course begs the question...
Questions I've posed to astrophysicist friends that have never gotten good answers:
1) Why should we not think of galaxies as simply accretion disks, i.e., we're all circling the giant ~4-million-solar-mass black hole drain at the center of the galaxy?
2) Why is it irrational for me to think of the big bang as basically the opposite of a black hole - and how do we know it's not continuing to spew matter?
Psychohistory (Score:2)
Really? (Score:2)
The time warp? Again?
Re: (Score:1)
Re: (Score:3)
10 LET JUMP = "to the left"
Re: (Score:1)
20 LET STEP = " to the right"
Re: (Score:2)
Maybe laserdiscs [wikipedia.org] are making a comeback for some reason?
so.. how much power does it draw? (Score:3)
Other factors: based on the parallelism model used, and the current state of electronics, what is the maximum number of cores that can be included before performance degrades as more nodes are added?
(E.g., it takes x time to transmit data over a bus (any bus). How many cores can you add before the time penalty for transmitting the data over the bus to the allocated processor becomes greater than the penalty for just waiting for a processor to become free?)
There *must* be an upper bound on parallel computing potential before we need pure unobtanium semiconductors.
I am curious what that limit is, and how close we are to it.
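One way to make that crossover concrete is a toy cost model: total time is compute work that shrinks with core count plus a coordination penalty that grows with it. The constants below are invented purely for illustration and have nothing to do with Sequoia's actual interconnect or workload.

# Toy model of the question above: compute shrinks as 1/p, coordination grows with p.
def run_time(p, work=1_000_000.0, comm_per_core=0.5):
    return work / p + comm_per_core * p

best = min(range(1, 10_000), key=run_time)
print(best, run_time(best))
# ~1414 cores under these made-up constants: past that point, adding nodes
# costs more in communication than it saves in computation.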
Re: (Score:1)
It's most likely dependent on the problem size / nature. Apparently, they were able to achieve super-linear scaling on their particular problem because more and more of the data structures they used fit into CPU cache. For problems that use less memory or more inter-processor communication, I imagine the sweet spot would be very different.
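A back-of-the-envelope sketch of that cache effect, with invented cache, dataset, and latency numbers (none of them come from the article or from Blue Gene/Q specs): once the per-core slice of the data drops below the assumed cache size, each access gets cheaper, so the speedup over one core can exceed the core count for a while.

CACHE_BYTES   = 32 * 2**20            # assumed per-core cache (hypothetical)
DATASET_BYTES = 8  * 2**30            # assumed total working set (hypothetical)
NS_IN_CACHE, NS_IN_DRAM = 2.0, 20.0   # assumed per-access costs (hypothetical)

def time_on(p):
    per_core = DATASET_BYTES / p
    cost = NS_IN_CACHE if per_core <= CACHE_BYTES else NS_IN_DRAM
    return per_core * cost

for p in (64, 256, 512):
    print(p, time_on(1) / time_on(p))   # speedup vs. a single core
# 64 -> 64x (the slice still misses cache), 256 -> 2560x, 512 -> 5120x:
# super-linear as soon as the per-core slice fits in cache.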
Re: (Score:1)
Re: (Score:2)
That would be close to running on pure unobtanium. (Energy costs would become untenable to maintain the quantum states of many thousands of entangled particles, and keep them cold.) Also, quantum computing can only efficiently serve a subset of parallelizable tasks, and is a poor fit for general parallelism as is. (Improvements may fix this in time, however.)
The brick walls that I see looming are:
Energy requirements vs. what can feasibly be delivered
Thermal waste generation rate vs. rate of removal
Computational
I for one... (Score:2)
I for one welcome our Matrix overlord... (or MCP... or Skynet, whichever achieves sentience first...)
I was hoping to.... (Score:2)
Simulate people? (Score:2)
Hard to do, with free will and all [wikia.com]...
Now We Wait... (Score:2)
For the Russians to complete Guardian!
Colossus: The Forbin Project [wikipedia.org]
Re: (Score:2)
So shut up and make an AI already (Score:2)
You've got the tools. You've got the know-how (sort of). First one to intelligence wins the world, more or less.
So MOVE!
Mind (Score:2)
Google should invest in such things. Data is worthless if you can't do something useful with it. Then Google in the best case could become something like a Mind [wikipedia.org] or at least an embryo of one. In the worst case... Facebook... no! We're doomed!
The Answer is... (Score:1)
Asimov's Foundation? (Score:1)
A toast! (Score:2)