Has Supercomputing Hit a Brick Wall?
anzha writes "Horst Simon, Deputy Director of Lawrence Berkeley National Laboratory, has stood up at conferences of late and said the unthinkable: supercomputing is hitting a wall, and no one will build an exaFLOPS HPC system by 2020. That is defined as a system that passes Linpack with a sustained performance of one exaFLOPS or better. He's even placed money on it. You can read the original presentation here."
Clarke's Three Laws (Score:5, Interesting)
Clarke's Three Laws are three "laws" of prediction formulated by the British writer Arthur C. Clarke. They are:
1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
3. Any sufficiently advanced technology is indistinguishable from magic.
Re:Ha, not the first (Score:4, Interesting)
building a supercomputer means getting thousands of CPUs to cooperate which is a much harder challenge.
Looking at his presentation, that seems to be his point. He concludes that power efficiency is going to become the limiting factor driving design decisions, and that since the power cost of increasing FLOPS has been so much lower than the power cost of moving larger quantities of data, we're heading into an era where connectivity costs will so dominate the cost of cycles that cycles will be essentially free.
He's then basically arguing that it won't be cost-effective to build data transmission architectures that can effectively utilize an exaFLOPS of compute, so no one will bother to build an exaFLOPS machine.
He didn't state it, but if the rest of his arguments are correct, perhaps we're going to see the definition of a new metric for HPC, one that somehow captures the ability of a machine to distribute data to its computation nodes.
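The argument above can be made concrete with a back-of-the-envelope power comparison. The picojoule figures below are order-of-magnitude illustrative assumptions, not numbers from Simon's presentation:

```python
# Illustrative sketch: at exascale, the energy to move data can dwarf
# the energy to compute with it. Both pJ figures are assumed,
# order-of-magnitude values for illustration only.
PJ_PER_FLOP = 1.0          # assumed: energy per double-precision FLOP
PJ_PER_BIT_MOVED = 10.0    # assumed: energy to move one bit between nodes

def power_watts(exaflops_sustained, bytes_moved_per_flop):
    """Return (compute watts, data-movement watts) for a sustained rate."""
    flops = exaflops_sustained * 1e18
    compute_w = flops * PJ_PER_FLOP * 1e-12
    movement_w = flops * bytes_moved_per_flop * 8 * PJ_PER_BIT_MOVED * 1e-12
    return compute_w, movement_w

compute_w, movement_w = power_watts(1.0, bytes_moved_per_flop=1.0)
# Under these assumptions, moving one byte per FLOP draws ~80x the power
# of the FLOPs themselves -- the sense in which cycles become "free".
```

Under these (hypothetical) numbers, one sustained exaFLOPS costs about a megawatt of arithmetic but tens of megawatts of data movement, which is exactly why a bytes-per-FLOP-aware metric would tell you more than Linpack does.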
The Nanosecond (Score:5, Interesting)
Back in the early '80s I got the opportunity to hear Grace Hopper speak. One of the stories she liked to tell at her talks was about the time she was having trouble visualizing a nanosecond. Eventually she sent a memo to her engineers which said, "Please send up one nanosecond." She waited, curious as to how they would respond. After a couple of days a response came back in the form of a metal rod 11-3/4 inches in length with a note attached, "One Nanosecond," and no other explanation. After puzzling over the metal rod she called down to the engineering department and asked, "I give up, what is it?" "That's the distance light travels in a nanosecond," was the response. Later, she sent another memo to the engineers with the request, "Please send up one picosecond." The engineers immediately responded with a memo instructing her to "put the nanosecond in a pepper grinder and you can make picoseconds all over your desk."
Grace Hopper's humorous anecdote underscores the serious problems researchers face when they push the boundaries. In her case, it was a very real constraint: how far a signal can travel at the speed of light. I have no idea whether that has any direct bearing on the exascale problem, but it illustrates the kind of physical limits designers run into.
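The engineers' rod checks out, by the way. A quick sketch of the arithmetic:

```python
# Distance light travels in one nanosecond, the length of Hopper's rod.
C = 299_792_458       # speed of light in vacuum, m/s (exact by definition)
NANOSECOND = 1e-9     # seconds

meters = C * NANOSECOND       # ~0.2998 m
inches = meters / 0.0254      # ~11.8 inches
# The 11-3/4 inch rod is a light-nanosecond, rounded down slightly.
```

In a real machine the signal moves through copper or fiber at a fair bit less than c, so the practical "nanosecond" is even shorter.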
Re:It is tough (Score:3, Interesting)
Why not just make a Beowulf cluster?
Can you imagine?