Warning At SC13 That Supercomputing Will Plateau Without a Disruptive Technology
dcblogs writes "At this year's supercomputing conference, SC13, there is worry that supercomputing faces a performance plateau unless a disruptive processing tech emerges. 'We have reached the end of the technological era' of CMOS, said William Gropp, chairman of the SC13 conference and a computer science professor at the University of Illinois at Urbana-Champaign. Gropp likened the supercomputer development terrain today to the advent of CMOS, the foundation of today's standard semiconductor technology. The arrival of CMOS was disruptive, but it fostered an expansive age of computing. The problem is 'we don't have a technology that is ready to be adopted as a replacement for CMOS,' said Gropp. 'We don't have anything at the level of maturity that allows you to bet your company on.' Peter Beckman, a top computer scientist at the Department of Energy's Argonne National Laboratory, and head of an international exascale software effort, said large supercomputer system prices have topped off at about $100 million 'so performance gains are not going to come from getting more expensive machines, because these are already incredibly expensive and powerful. So unless the technology really has some breakthroughs, we are imagining a slowing down.'"
Carbon nanotube-based processors are showing promise, though (Stanford project page; the group is at SC13 giving a talk about its MIPS CNT processor).
So what? (Score:2, Interesting)
So what? Much of supercomputing is a tax-supported boondoggle. There are few supercomputers in the private sector. Many things that used to require supercomputers, from rocket flight planning to mould design, can now be done on desktops. Most US nuclear weapons were designed on machines with less than 1 MIPS.
Supercomputers have a higher cost per MIPS than desktop machines. If you need a cluster, Amazon and others will rent you time on theirs. If you're sharing a supercomputer, rather than using hours or days of time on single problems, you don't need one.
Does disruptive mean affordable? (Score:5, Interesting)
We've had silicon-germanium (SiGe) CPUs that can scale to 1000+ GHz for years. Graphene is another interesting possibility.
The question is: at what price can you make that power affordable?
For 99% of people, computers are good enough. For the other 1% they never will be.
SOLUTION for CMOS "band gap" (Score:3, Interesting)
1. The silicon bandgap of CMOS is higher than TTL's.
2. Gate length is more fabricable. (Fabricate the gates in Mexico; say they were made in the USA.)
3. The drain has "quantum clogging" problems in TTL but not CMOS.
4. Dopant levels make GaAs less commercially feasible.
5. Wafer sizes are still dominated by silicon technology. It is not cheaper to move to more e-toxic and alien technologies; it is far cheaper to stick with the wafers commercially produced today. GaAs and indium phosphide wafers are like communion wafers!
6. Investors. We need to keep money at the forefront. A global depression is imminent. We must make cheap and readily available components with WHAT WE HAVE already!
TTL: I think it's a good idea.
-KD
Re:Work smarter, not harder. (Score:4, Interesting)
I wonder if the next breakthrough will be using FPGAs and configuring the instruction set for the task at hand. For example, a core gets a large AES encryption task, so it is set to an instruction set optimized for array shifting. Another core gets a different job, so it shifts to a set optimized for handling trig functions. Still another core deals with large amounts of I/O, so it ends up with a lot of registers to help with transforms, and so on.
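To make the idea concrete, here is a minimal Python sketch. Everything in it is hypothetical: the bitstream names, the Core class, and the program() call all stand in for whatever partial-reconfiguration API a real FPGA toolchain would expose.

# Map each task class to the bitstream whose "instruction set" suits it.
BITSTREAMS = {
    "aes":  "shift_heavy.bit",    # barrel shifters / S-box lookups for crypto
    "trig": "cordic.bit",         # CORDIC units for trig-heavy kernels
    "io":   "register_rich.bit",  # extra registers for streaming transforms
}

class Core:
    def __init__(self, core_id):
        self.core_id = core_id
        self.loaded = None

    def program(self, bitstream):
        # Stand-in for a real partial-reconfiguration call to the FPGA fabric.
        if self.loaded != bitstream:
            print(f"core {self.core_id}: loading {bitstream}")
            self.loaded = bitstream

def dispatch(core, task_type):
    # Reconfigure the core to suit the incoming task before running it.
    core.program(BITSTREAMS.get(task_type, "general.bit"))
    # ... the task itself would execute on the freshly configured core here ...

core0 = Core(0)
dispatch(core0, "aes")   # reconfigures the core for shift-heavy crypto work
dispatch(core0, "trig")  # reconfigures it again for trig-heavy kernels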
Of course, fiber from chip to chip may be the next thing. This isn't new tech (the PPC 603 had this), but it might be what is needed to let CPUs communicate closely coupled without signal path lengths being as big an engineering issue. The same goes for the CPU and RAM.
Then there are other bottlenecks. We have a lot of technologies that are slower than RAM but faster than disk. Those can be used for virtual memory or as a cache to speed things up, or at least to get data into the pipeline to the HDD so the machine can move on to other tasks, especially if a subsequent read can fetch data no matter where it lies in that I/O pipeline.
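A toy model of that tiering idea (the latency figures below are rough orders of magnitude, not measurements): acknowledge a write as soon as it lands in the fast intermediate tier, and let destaging to the HDD happen later.

# Rough per-write latencies for each tier, in microseconds (illustrative only).
TIER_LATENCY_US = {"ram": 0.1, "nvm": 10.0, "hdd": 10_000.0}

def ack_latency_us(tiers_before_ack):
    # Perceived write latency = the slowest tier the data must reach
    # before the application gets its acknowledgement.
    return max(TIER_LATENCY_US[t] for t in tiers_before_ack)

print(ack_latency_us(["ram", "nvm", "hdd"]))  # 10000.0: the ack waits for the disk
print(ack_latency_us(["ram", "nvm"]))         # 10.0: the disk write is destaged later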
Long term, photonics will be the next breakthrough that propels things forward. That and the Holy Grail of storage -- holographic storage, which promises a lot but has left many a company (Tamarack, InPhase) on the side of the road, broken and mutilated without mercy.
Re:So what? (Score:5, Interesting)
Of course these people are talking about supercomputers and what's relevant to supercomputers, but you'd have to be pretty daft not to see the implications for everything else. In recent years almost all the improvements have been in power states and frequency/voltage scaling; if you're doing something at 100% CPU load (and it isn't a corner case that benefits from a new instruction), power efficiency has been almost unchanged. Top-of-the-line graphics cards have gone steadily upwards and are pushing 250-300W, and even Intel has Xeons pushing 150W, not to mention AMD's 220W beast, though that's a special oddity. The point is that we need more power to do more, and for hardware running 24x7 that's a non-trivial part of the cost, and it's not going down.
We know CMOS scaling is coming to an end; maybe not at 14nm or 10nm, but by the end of this decade we're approaching the size of silicon atoms and lattices. There's no way we can sustain the current rate of scaling in the 2020s. And it wouldn't be the end of the world: computers would run at roughly the same speed as they did ten or twenty years earlier, the way cars and jet planes do now. Your phone would never become as fast as your computer, which would never become as fast as a supercomputer again. We could get smarter at using the power we have, of course, but fundamentally hard problems that require a lot of processing power would go nowhere, and it won't be terahertz processors, terabytes of RAM, and petabytes of storage for the average man. It was a good run while it lasted.
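A quick back-of-the-envelope check on "approaching the size of silicon atoms": the silicon lattice constant is about 0.543 nm, so you can count how many unit cells fit across a given feature size.

SI_LATTICE_NM = 0.543  # silicon lattice constant, ~0.543 nm

for feature_nm in (14, 10, 7, 5, 2):
    cells = feature_nm / SI_LATTICE_NM
    print(f"{feature_nm:>2} nm feature ~ {cells:.1f} lattice constants")
# At ~2 nm a feature spans only about 3-4 unit cells, which is why
# conventional scaling can't run far into the 2020s.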
Re:So what? (Score:2, Interesting)
Actually, the sort-of sad reality is that, outside the top few supercomputers in the world, the "top500"-type lists are completely bogus, because they don't include commercial efforts that don't care to register. Those public top-cluster lists are basically where tax-supported boondoggles show off; outside the top 5-10 entries (which are usually uniquely powerful in the world), the rest of the list is bullshit. There are *lots* (I'd guess thousands) of clusters out there that would easily make the top-20 or top-50 of the public lists but are just never documented publicly. So yes, "supercomputing"-level clusters are in wide commercial use. I know for a fact I've worked at two different companies in this situation. One had a bit over 10K Opterons in a single datacenter, wired up with InfiniBand and doing MPI-style parallelism, and this was back in like ... I want to say about 2005? They were using it to analyze seismic data to find oil. It never showed up on any list of supercomputers anywhere, like almost all commercial efforts.
Re:Work smarter, not harder. (Score:4, Interesting)
"Of course, fiber from chip to chip may be the next thing. This isn't new tech (the PPC 603 had this), but it might be what is needed to allow for CPUs to communicate closely coupled, but have signal path lengths be not as big an engineering issue. Similar with the CPU and RAM."
Fiber from chip to chip is probably a dead end, unless you're just primarily taking advantage of the speed of serial over parallel buses.
The problem is that you have to convert the light back to electricity anyway. So while fiber is speedier than wires, the delays (and expense) introduced at both ends limit its utility. Unless you go to actual light-based (rather than electrical) processing on the chips, any advantage to be gained there is strictly limited.
Probably more practical would be to migrate from massively parallel to faster serial communication, like the move from old parallel printer cables to USB. Granted, these inter-chip lines would have to be carefully designed and shielded (high freq.), but so do light fibers.
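A rough sketch of the parallel-bus problem alluded to here (the skew figures are invented for illustration): every line of a parallel bus must be sampled in the same bit window, so the bus clock is capped by the worst line-to-line skew, which a single serial lane simply doesn't have.

# Hypothetical per-line arrival-time mismatches on an 8-bit bus, in ps.
line_skew_ps = [0, 35, 80, 120, 60, 95, 150, 40]
skew_window_ps = max(line_skew_ps) - min(line_skew_ps)  # 150 ps spread

# Demand the data eye stay open at least 300 ps on top of the skew window.
min_bit_time_ps = 300 + skew_window_ps
max_clock_ghz = 1000 / min_bit_time_ps
print(f"skew window {skew_window_ps} ps -> max bus clock ~{max_clock_ghz:.2f} GHz")
# A serial lane has no inter-line skew to budget for, which is one reason
# USB/PCIe-style links clock far higher than parallel printer cables did.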
A lot of supercomputing motivated by bad science! (Score:4, Interesting)
There are plenty of algorithms that benefit from supercomputers. But it turns out that a lot of the justification for funding super computer research has been based on bad math. Check out this paper:
http://www.cs.binghamton.edu/~pmadden/pubs/dispelling-ieeedt-2013.pdf
It turns out that a lot of money has been spent to fund supercomputing research, but the researchers receiving that money were demonstrating the need for it with the wrong algorithms. The paper points out several highly parallelizable O(n²) algorithms that researchers have used. These people seem to lack an understanding of basic computational complexity, because there are O(n log n) approaches to the same problems that can run much more quickly, using a lot less energy, on a single-processor desktop computer. But those approaches aren't sexy, because they're not parallelizable.
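The paper's point in miniature (idealized numbers, ignoring all communication and synchronization costs): even a perfectly parallelized O(n²) method loses to an O(n log n) method on a single core once n gets large.

import math

def ops_nlogn(n):
    return n * math.log2(n)   # one core, O(n log n)

def ops_n2_parallel(n, cores):
    return n * n / cores      # ideal speedup, zero communication cost

for n in (10**4, 10**6, 10**8):
    serial = ops_nlogn(n)
    parallel = ops_n2_parallel(n, cores=10_000)
    print(f"n={n:>9}: n log n = {serial:.2e} ops, n^2 / 10k cores = {parallel:.2e} ops")
# At n = 10^4 the parallel machine still wins; by n = 10^8 the O(n^2) method
# is hundreds of times slower even with 10,000 perfectly efficient cores.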
Perhaps some honest mistakes have been made, but it trends toward dishonesty as long as these researchers continue to use provably inferior methods.
Re:on the nature of disruptive... (Score:5, Interesting)
my intuition tells me that disruptive technologies are precisely that because people don't anticipate them coming along nor do they anticipate the changes that will follow their introduction. not that people can't see disruptive tech ramping up, but often they don't.
Arguably, there are at least two senses of 'disruptive' at play when people talk about 'disruptive technology'.
There's the business sense, where a technology is 'disruptive' because it turns a (usually pre-existing, even considered banal or cheap and inferior) technology into a viable, then superior, competitor to a nicer but far more expensive product put out by the fat, lazy incumbent. This comment, and probably yours, was typed on one of those (or, really, a collection of those).
Then there's the engineering/applied science sense, where it is quite clear to everybody that "If we could only fabricate silicon photonics/achieve stable entanglement of N QBits/grow a single-walled carbon nanotube as long as we want/synthesize a non-precious-metal substitute for platinum catalysts/whatever, we could change the world!"; but nobody knows how to do that yet.
Unlike the business case (where the implications of 'surprisingly adequate computers get unbelievably fucking crazy cheap' were largely unexplored; before it happened, people would have looked at you like you were nuts if you'd told them that, in the year 2013, we'd have no space colonies, people would still live in mud huts and fight bush wars with slightly-post-WWII small arms, but people with inadequate food and no electricity would have cell phones), the technology case is generally fairly well planned out. Practically every vendor in the silicon compute or interconnect space has a plan for, say, what the silicon-photonics-interconnect architecture of the future would look like, but nobody ships silicon photonics interconnects; we have no quantum computers of useful size, but computer scientists have already studied the algorithms we might run on them if we had them. Application awaits some breakthrough in the lab that hasn't come yet.
(Optical fiber is probably a decent example of a tech/engineering 'disruptive technology' that has already happened. Microwave waveguides, which can be tacked together with sheet metal and a bit of effort, were old news, and the logic and desirability of applying the same approach to smaller wavelengths was clear; but until somebody hit on a way to make cheap, high-purity glass fiber, that was irrelevant. Once they did, the microwave-based infrastructure fell apart pretty quickly; but until they did, no amount of knowing that 'if we had optical fiber, we could shove 1000 links into that one damn waveguide!' made much difference.)
Re:Does disruptive mean affordable? (Score:2, Interesting)
Light-carrying fiber is slower than copper (about 5 ns/m vs 4 ns/m for copper) -- it sort of has to be, as the higher refractive index goes hand in hand with the need for total internal reflection at the boundary of the core. Optical helps with bandwidth per strand, not with latency.
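Working through those per-meter figures (taken straight from the numbers above) makes the point plain: fiber never wins on raw propagation latency at any distance.

FIBER_NS_PER_M = 5.0    # ~c divided by the core's refractive index (~1.5)
COPPER_NS_PER_M = 4.0   # typical transmission line, signals at ~0.7c

for d_m in (1, 10, 100, 10_000):
    fiber_ns = d_m * FIBER_NS_PER_M
    copper_ns = d_m * COPPER_NS_PER_M
    print(f"{d_m:>6} m: fiber {fiber_ns:>8.0f} ns, copper {copper_ns:>8.0f} ns")
# The gap only grows with distance; what fiber buys is enormously more
# bandwidth per strand, not lower latency.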
I think the next decade of advances will be very much about power efficiency and very little about clock rate on high-end CPUs. That will benefit both mobile and supercomputers, as both are power-constrained (supercomputers by the heat rather than the raw power draw, but it works out to the same thing).