Has Supercomputing Hit a Brick Wall? 185
anzha writes "Horst Simon, Deputy Director of Lawrence Berkeley National Laboratory, has stood up at conferences of late and said the unthinkable: supercomputing is hitting a wall, and we will not build an exaFLOPS HPC system by 2020, defined as one that passes Linpack with a sustained performance of one exaFLOPS or better. He's even placed money on it. You can read the original presentation here."
It is tough (Score:3)
You can't really make factor 10 improvements indefinitely. Eventually the numbers overwhelm you and you hit roadblocks. The only real solution will ultimately be new computing technology, such as quantum computers.
Re: (Score:3)
Re: (Score:2)
Re: (Score:3, Interesting)
Why not just make a Beowulf cluster?
Can you imagine?
Re: (Score:3)
Memory latency. Beowulf clusters are good for things that are highly parallel *and* have a high degree of memory locality, i.e., you rarely need to make memory calls between boxes.
True supercomputers use high-speed interconnects between systems for this reason, usually something like InfiniBand or a weird proprietary system, and usually with a network topology that provides numerous inter-system links. This gives them much lower latency when one system uses data in memory on another system.
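To put rough numbers on that latency gap, here's a minimal sketch; the latency figures are assumed orders of magnitude for illustration, not measurements of any particular machine or interconnect:

# How many cycles a core stalls waiting on different kinds of memory access.
# The latencies below are ballpark assumptions, not benchmarks.
CLOCK_HZ = 3e9  # assumed 3 GHz core

latencies = {
    "local DRAM":         100e-9,  # ~100 ns
    "InfiniBand hop":     1.5e-6,  # ~1-2 microseconds
    "commodity Ethernet": 50e-6,   # tens of microseconds
}

for name, seconds in latencies.items():
    cycles = seconds * CLOCK_HZ
    print(f"{name:20s} ~{seconds * 1e9:8.0f} ns = ~{cycles:>9,.0f} cycles stalled")

Even the fast interconnect costs thousands of cycles per remote access, which is why topology and locality matter so much.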
The wall got smacked in the 1980's. (Score:2)
Memory latency. Beowulf clusters are good for things that are highly parallel *and* have a high degree of memory locality, i.e., you rarely need to make memory calls between boxes.
True supercomputers use high-speed interconnects between systems for this reason, usually something like InfiniBand or a weird proprietary system, and usually with a network topology that provides numerous inter-system links. This gives them much lower latency when one system uses data in memory on another system.
True supercomputers can solve non-highly-parallel problems.
The wall got smacked in the 1980's.
Re:It is tough (Score:4, Funny)
What?! Someone was wrong on the Internet?
Re: (Score:2)
Saying that "all laws in science are observations" is untrue. Ultimately, laws in science are based on observations, but to say they're the same thing shortcuts and denigrates the steps needed to get from observation to law.
No? (Score:5, Informative)
"Japan to develop new exaflop computer by 2020" [japandailypress.com] ... why not? And if it's even a few microseconds into 2021 I suppose that supercomputing has failed, will pack up, and go home.
Re:No? (Score:5, Informative)
Power consumption and MTBF: power consumption (high operating costs) could perhaps be solved by a larger budget, but the mean time between failures (MTBF) means that the machine will fail before it can compute anything meaningful. Right now the machines we build, and even more importantly the software we build, rely on all parts of the machine functioning. If even a single node fails, the data it holds becomes inaccessible and the rest of the compute job collapses like a house of cards.
This can be remedied by taking frequent snapshots and then restarting from the last snapshot, but the time needed for checkpoint/restart has kept growing with each new generation of systems. No one really expects exascale systems to do a full-system checkpoint/restart in a reasonable time frame. They'd spend more time taking snapshots than actually computing.
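To see why, here's a minimal sketch using Young's approximation for the optimal checkpoint interval; the checkpoint time and MTBF below are made-up illustrative numbers, not measurements of any real system:

import math

# Young's approximation: checkpoint every tau ~ sqrt(2 * C * M) seconds,
# where C is the time to write one checkpoint and M is the system MTBF.
# Overhead ~ C/tau (writing checkpoints) + tau/(2*M) (work redone per failure).
# (The approximation assumes C << M; it's only a rough guide here.)
def checkpoint_overhead(checkpoint_s, mtbf_s):
    tau = math.sqrt(2 * checkpoint_s * mtbf_s)
    return tau, checkpoint_s / tau + tau / (2 * mtbf_s)

# E.g. a 10-minute full-system snapshot on a machine that fails about hourly:
tau, overhead = checkpoint_overhead(checkpoint_s=10 * 60, mtbf_s=60 * 60)
print(f"checkpoint every ~{tau / 60:.0f} min, ~{overhead:.0%} of runtime lost")

With those assumed numbers more than half the runtime goes to checkpointing and rework, which is the sense in which full-system checkpoint/restart stops being viable as MTBF shrinks.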
Source: I'm doing my PhD in supercomputing.
Re: (Score:2)
Re: (Score:2)
Control units don't heartbeat individual nodes? They aren't designed to monitor and restart the unreported work of a failed node?
Frankly, I'm shocked.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
You are misinterpreting and misapplying the data on metastability to computer systems. Once data is inside a synchronous system that is clocked slowly enough for signals to fully settle at flip-flop inputs, slowing the system further isn't going to enhance reliability. (Similarly, if you're running a synchronous system much too fast, not even a single instruction will execute properly.)
Metastability is a concern for asynchronous inputs. There are techniques for dealing with it, although it becomes tricky as dat
Re: (Score:2)
What about the idea that was popular a few years ago about making stuff fail gracefully, both at the hardware level and the software level, so that the system could swallow the error and go on calculating without completely ruining the result? Could failures be reduced to essentially just another source of error?
Software is the problem/solution (Score:2)
Yes, in a way. We'll probably never be able to improve the hardware far enough that we can simply rely on it to fail gracefully (i.e. announce its impending death a few seconds in advance). The reason is that ATM our systems contain approx. 20k nodes. Exascale systems will likely push this to 200k. Even if you assume a node will live 10 years on average, you can estimate that every ~53 minutes one node of the system will fail.
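The back-of-the-envelope model behind that kind of estimate is simple; this sketch assumes independent node failures and a fixed per-node MTBF, with the node counts taken from the discussion above:

HOURS_PER_YEAR = 24 * 365.25

# Expected time between node failures across the whole machine, assuming
# independent nodes: system MTBF ~ node MTBF / number of nodes.
def system_failure_interval_min(n_nodes, node_mtbf_years=10):
    return node_mtbf_years * HOURS_PER_YEAR / n_nodes * 60

for n in (20_000, 100_000, 200_000):
    print(f"{n:>7,} nodes -> one node failure every "
          f"~{system_failure_interval_min(n):.0f} minutes")

The exact interval depends on the node count and per-node MTBF you plug in, but at exascale node counts you end up with failures many times per day no matter how optimistic the assumptions.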
My money is on the software: we'll need some kind of redundancy (e.g. a simul
Re: (Score:2)
Re: (Score:2)
The thing about scientific computing is that scientists like to write MPI and Fortran. They just love that shit. And they are traditionally really resistant to any new programming model. So when you tell them they need to start using XYZ instead of MPI so their programs can actually complete at exascale *before* hardware failure, they get unhappy and instead implement things like checkpoint/restore that takes 70% of the runtime. Source: I work in HPC.
Changing from FORTRAN would require us to actually try to comprehend decades worth of scientific data stored in FORTRAN data files. Ain't no one got time fo' that.
Re: (Score:2)
For good checkpoint/restore, you probably need a custom node design that would accommodate it efficiently, but I can't see why it'd have a 70% overhead. Doing a copy of your memory contents to another memory that has same bandwidth and capacity, and then lazily moving that off-node without the main CPU being involved is no biggie. You probably could implement the memory bridge and the recover CPU on a simple FPGA. The main, fast CPU is crunching numbers, then stops, the FPGA takes a memory copy, the main CP
Re: (Score:3)
You might need to broaden your research beyond what is available in the academic literature. Google handles redundancy. When they do a map/reduce, the clusters are self forming. If a cluster leader/master goes down, the cluster reelects a new master. They trust the integrity of nothing. Not even DRAM. They checksum everything. The actual architecture of Google's data centers is a closely guarded trade secret, but from what [little] I've been able to glean, they're light years ahead of "big iron" vend
Re: (Score:2)
When they do a mapreduce, each node might take minutes or hours to do work.
I'm sure they have processes that require fine grained, millisecond parallelization, but mapreduce is not one of them.
Google is not a Supercomputer (Score:2)
Whenever someone on /. likens Google's network to a supercomputer, God kills a Pokemon. But honestly: the reason Google can cope with these massive outages is that they're doing totally different computations from supercomputers. Google's compute jobs are loosely coupled. They do data mining. That is fundamentally different from supercomputing, where all compute jobs are tightly coupled. To give you a car analogy:
Re: (Score:2)
Presumably no matter what the memory size is on any node, it could be doubled, and presumably the bandwidth on that memory is such that duplicating the contents of one half of the memory to the other half would take a reasonable amount of time (say 0.1-1s). You can then dump the second copy over a dedicated bus without slowing down the computations. Even if the bus wasn't dedicated, the bandwidth will be curtailed by the hard drive array you use for long-term snapshot storage - so it may, say, eat 10% of yo
Re: (Score:3)
Sure, but they're one of many. Even if one of the many doesn't accomplish this, surely another will. If not by (or before!) 2020, then sometime later. People aren't just going to give up if it doesn't happen by some arbitrary date. This is my real point.
These days, how much is really revolutionary anyway? So many new supercomputing announcements are "we threw N parts at this, so it's Yflops".
Re: (Score:2)
Yes, and that there is a "brick wall". First, the article may be wrong; exascale might hit by (or before) 2020. They've got 7 years. That's a long time in terms of technology; the first teraflops supercomputer was 1996, merely 17 years ago. Speed increase can't happen indefinitely, but we're not talking about indefinitely, just exaflops. Even if this is not achieved by 2020, they have not hit a "brick wall", because development will continue until it is achieved. There is nothing even slightly theoret
I didn't RTFA, nor am I technical... (Score:2)
...but I can pretty much guess where this is going. If you look at the massive parallelization improvements we've witnessed among supercomputers over the past couple decades, you can predict that at some point, most of the low hanging fruit would eventually be picked at which point the underlying latency between interconnects would start to become a limiting factor. Couple that with the fact that there's been a complete lack of significant performance improvement in desktop/server CPU space in say the past
RTFP (Score:2)
He doesn't say it's not possible, rather that we can't get there by just extending current technology. So by extension, 2020 is too soon to expect exaflops. He also presents arguments for why exaflops is important and why work to get there should continue.
If you ignore the best news in supercomputing ... (Score:2)
... I guess you may be excused to think it hit a brick wall. Alternative technology has fortunately already matured, and is commercially available [wp.me].
Re: (Score:2)
Lie you ignored the article? And nothing in that link shows they are using it to build a supercomputer.
Re: (Score:2)
I have problems parsing your question "Lie you ignored the article?"
Was that supposed to be "Like"?
My point is that conventional super-computing is indeed facing a crisis, but that non CMOS based technologies may save the day.
Re: (Score:2)
Re: (Score:2)
Fair enough, I used CMOS as sloppy shorthand for all current silicon based field effect transistor integrated circuit technology. (See how much longer that is?)
Re: (Score:3, Insightful)
Re: (Score:2)
Think you miss the bigger picture here, in that they are pioneering non-silicon-based LSI circuits that operate adiabatically (no heat production). This technology could very well be extended to include conventional logic in parallel with their quantum circuitry.
Clarke's Three Laws (Score:5, Interesting)
Clarke's Three Laws are three "laws" of prediction formulated by the British writer Arthur C. Clarke. They are:
1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
3. Any sufficiently advanced technology is indistinguishable from magic.
Re: (Score:2, Insightful)
So, since Freeman Dyson said "Faster-than-light travel is rubbish" [slashdot.org] that means he's probably wrong, and we'll be warping around the galaxy soon enough?
Re: (Score:2, Funny)
he should stick to building vacuum cleaners.
Re: (Score:2)
You don't seem to understand the "concept" behind "warp."
You are not exceeding the speed of light, you are just not traveling the linear distance between the two points.
Re:Clarke's Three Laws (Score:4, Funny)
You don't seem to understand the "concept" behind "warp."
You are not exceeding the speed of light, you are just not traveling the linear distance between the two points.
That's like saying that he doesn't understand the concept behind a Stargate. Made up is made up is made up.
You can't have an honest discourse on the speed of light when you're trying to involve fiction. You might as well go full Star Trek and say that thetalon radiation transmorphs subspace and changes the value of c, but only in the presence of an extradimensional rift, and if-and-only-if you have a humpback whale.
Re: (Score:2)
You don't seem to understand the "concept" behind "warp."
You are not exceeding the speed of light, you are just not traveling the linear distance between the two points.
Ah, the good old Futurama theory! You know it's a joke, right?
Re: (Score:2)
If you exist on one side of the galaxy at one moment, and on the other the next, this does not mean you traveled a path between the two points in three-dimensional space as we understand it. Should there be a means to complete that positional change without passing through the space between, then acceleration and velocity do not come into play at all, and so neither does special relativity.
Re: (Score:2)
The ability to transfer information faster than light, by any means (such as yelling "Look! A diversion!" whenever a physicist wanders by), either violates special relativity or enables time travel (or both). Personally, if it should be possible, I'm betting on enabling time travel.
Re: (Score:2)
3 - I have always hated that one, because it's wrong. It's been wrong since the scientific method was put into place.
I can see a floating disk and know it's science and engineering that created it. In fact, we could use that to figure out how it works.
Not understanding something doesn't mean it's magic.
Re: (Score:2)
3 - I have always hated that one, because it's wrong. It's been wrong since the scientific method was put into place.
I can see a floating disk and know it's science and engineering that created it. In fact, we could use that to figure out how it works.
Not understanding something doesn't mean it's magic.
With sufficiently advanced technology I could emulate your brain and this planet within a much larger supercomputer. What if I confound your scientific method and make the observable universe as crazy as the rules of magic? A being within said simulation could be given access to the voice-activated debugger mode, and when he spoke the 'magic' words, you could suddenly find yourself crawling about as a newt.
So, no, you're wrong. Clarke's third law is correct, and always shall be.
Re: (Score:2)
I disagree.
Because you believe there is no magic, you say that.
On the other hand, if we wanted to be scientific, we would assume there is magic, and we would try to distinguish this new technology from magic, either finding that it is technology or finding that it is magic.
As long as you have not a single hint of what kind of technology it is, your scientific method won't help you formulate any theory or execute any tests of that theory. Therefore it is not distinguishable from magic. Plain and simple.
However when you have
Original presentation here. (Score:2)
A 30 MB Google Docs document. Oh joy. It even appears to break my iPad. Yes, it's worth reading, but would it kill you to write an interesting summary? Even a pithy one, such as "by 2020, the energy costs associated with moving bits around will exceed the costs of actually processing them."
Re: (Score:2)
It's not their fault you use an inferior device.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
This.
Re: (Score:2)
Inferior device? Nonsense. I happen to prefer Apple's "walled garden" to Google's "walled garden", but the world beyond those walls is often more interesting than what's contained within.
It's just his slides, anyway. You can watch Simon give his lecture here [illinois.edu].
Re: (Score:2)
The audio quality leaves something to be desired,
Re: (Score:2)
" I happen to prefer Apple's "walled garden" to Google's "walled garden" "
And both suck versus FreeBSD's walled garden.
Re: (Score:2)
FreeBSD has a walled garden? Where?
The problem here is that PDF is an open, standardized format. Google Docs is not. Yes, I have a Google Docs account. A Slashdot story should not require me to
1. Log into my third party account, or create one.
2. Discover that there is "No Preview" available.
3. Download a 30 megabyte file
All to comment intelligently on a Slashdot story consisting of two sentences.
Re: (Score:2)
A 30 MB google docs document.
61 pages of rather blah presentation slides.
And we used to think PowerPoint was a bloated format.
Re: (Score:2)
Your computer does not support this comment
To view the comment, please download the slashdotcomment app from the app store [apple.com]
interconnect (Score:2)
I wish there was more discussion on the interconnect and routing challenge of these systems. I used to work on an InfiniBand SubnetManager. Exascale will require more complex topologies and more complex routing. Does anyone think today's systems are up to the task?
Re: (Score:2)
I just woke up... (Score:4, Funny)
And still a little fuzzy headed, but the first thing I thought of was arranging the racks for shortest maximum path, instead of one big football-field-sized room, stacking the datacenter into a cube shape... Then I thought, "That's probably why Borg ships are cubes."
Re: (Score:2)
Re: (Score:2)
Sphere.
Anyhow, there were reasons some Crays were shaped the way they were.
Re: (Score:2)
A sphere of cubical modules. Individually, spheres are probably not the optimal shape for CPUs or for each blade of the supercomputer, but across a large supercomputer a spherical layout minimizes the worst-case distance from center to edge, and thus the maximum latency.
The Nanosecond (Score:5, Interesting)
Back in the early 80's I got the opportunity to hear Grace Hopper [wikipedia.org] speak. One of the stories she used to like to tell at her talks was about the time that she was having trouble visualizing a nanosecond. Eventually she sent a memo to her engineers which said, "Please send up one nanosecond." She waited, curious as to how they would respond. After a couple of days a response came back in the form of a metal rod 11-3/4 inches in length with the note attached, "One Nanosecond", and no other explanation. After puzzling over the metal rod she called down to the engineering department and asked, "I give up, what is it?" "That's the distance light travels in a nanosecond", was the response. Later, she sent another memo to the engineers with the request, "Please send up one picosecond." The engineers immediately responded with a memo instructing her to, "put the nanosecond in a pepper grinder and you can make picoseconds all over your desk."
Grace Hopper's humorous anecdote underlines the serious problems faced by researchers when they push the boundaries. In her case, it was a real concern over how far a bit can travel at the speed of light. I have no idea if that has any bearing on the exascale problem, but it might illustrate the kinds of problems they might be running into.
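For the curious, the arithmetic behind the 11-3/4-inch rod is a one-liner; this sketch uses c in vacuum, and signals in copper or fiber are somewhat slower still:

# Distance light travels in vacuum per unit time: the "Grace Hopper nanosecond".
C_M_PER_S = 299_792_458

for label, seconds in [("1 nanosecond", 1e-9), ("1 picosecond", 1e-12)]:
    meters = C_M_PER_S * seconds
    print(f"{label}: {meters * 100:.3g} cm (~{meters / 0.0254:.3g} inches)")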
so what? (Score:5, Insightful)
I'm an HPC professional, and I do not see much value in these "hero" machines. Yes, you can go on all you want about the march of progress and tier-1 and grand challenges, but you're just reiterating an unquestioned, manifest-destiny-based view of history. Why do we need an exaflop machine? Is it because some particular set of applications needs it? Where is the threshold for those applications at which the compute facility will be fast enough to achieve some breakthrough?
It's hard to find areas that are primarily limited by compute facilities. For instance, genetics/proteomics/metabolomics/whatever are *not* compute-limited, especially at the high end. They're laboratory-limited, in the same way that weather simulations are good and getting better but can't outrun the quality of their input data.
We need more compute in general, but not necessarily in one machine. A single exaflop machine will cost much more than a thousand petaflop machines. Letting a thousand flowers bloom is much prettier than one excruciatingly beautiful flower...
And no, hero machines do not provide an efficient way to improve the tech of lesser or later machines. They have to be justified by their own need.
Re: (Score:3)
You are silly. Systems-biology modeling of cells will require exascale computing, and so will chemistry simulations of millions or more atoms for a hundredth of a second or more. Lattice simulations in physics are demanding it too.
Re: (Score:2)
Systems biology modeling of cells will require exascale computing
No, it won't because we won't be modeling objects as large as cells at the atomic level. Instead, we will use lots of coupled coarser models, saving the finer ones for parts where "interesting" things are happening (e.g., at membrane interfaces). People are already doing this sort of thing, but at a very coarse scale and with only very limited numbers of fine simulations.
Of course, I happen to think that the really interesting things happen when you scale up to modeling a whole tissue, or a whole organ, or
Re:so what? (Score:4, Insightful)
I'm an HPC professional too.
I don't totally disagree with your premise, but what the heck are you doing talking about genetics and proteomics in reference to giant supercomputers? If you know anything about proteomics codes, you know that the commonly used search engines like SEQUEST and Mascot were never designed to run on systems like that. Hell, they barely run on small clusters, and yet people are getting enough science done that they just don't care. That doesn't mean it's hard to find problems that need supercomputers, though.
If you want to talk about the really big systems, you are talking about things like nuclear weapons simulations, astrophysics, molecular dynamics, and quantum mechanics. There are only a handful of guys that will actually make really good use of those systems and scores of folks that would otherwise be perfectly fine running on significantly smaller ones. Having smaller jobs backfill on the big machines when the really hardcore guys are off doing something else isn't such a bad situation though. It lets you get the big science done and still keep the machines being used efficiently in the interim.
Beyond that, just because some researchers aren't scaling their codes to those levels yet doesn't mean we should give up on big systems. There will always be people pushing the envelope and others playing catch-up. Our job is to help the slow guys scale their codes when possible so they can do even better and more intensive science. Yes, not all problems require the big systems, but there are many that do, many that can be made to scale even when they don't appear to at first, and others that can serve as backfill to keep the systems busy. They have their place, just as smaller clusters, cloud resources, and big data resources do.
Re: (Score:2)
If you want to talk about the really big systems ... There are only a handful of guys that will actually make really good use of those systems and scores of folks that would otherwise be perfectly fine running on significantly smaller ones.
That's what they all say. Don't worry, that's plenty for me! (until next year). Five computers are enough for the world. 640k ought to be enough for anybody. Of course I suppose a logical progression would be to get an Exaflop machine running before figuring out how to make one for the high school science lab.
Re: (Score:2)
Considering how much more powerful my phone is than supercomputers of 30 years ago, I can only imagine that in 2043 the iPhone 17QX will require multi-petaflop performance to create holographic picture and sound and touch. (And you'll have to hope it is available through Mobile Safari, 'cause Apple still won't allow porn apps.)
Re: (Score:2)
I can only imagine that in 2043 the iPhone 17QX will require multi-petaflop performance to create holographic picture and sound and touch.
Considering how faster hardware always seems to lead to less efficient software, it'll probably need 1 petaflop just to flash an LED.
Re: (Score:2)
Some problems literally can't be parallelised.
Re: (Score:3)
The problem itself can't. However, you can solve many problems of the same kind at the same time in parallel. (That is actually what most supercomputers do these days.)
Re: (Score:2)
Why do we need an Exaflop machine?
If you build it, they will come.
"640K ought to be enough for anyone." etc.
Really? (Score:3)
Re: (Score:3)
I'm pretty sure at one point, someone stood up in a meeting and said "No one will ever make a 1MB memory chip" or "No one will ever achieve a 64 bit processor", so how about sit down and just wait.
The author of the presentation didn't say we'd never get to exaflops, just that it might take longer than anticipated. Second, the fact that some technologies have scaled incredibly well doesn't mean that all technologies do, or that there are no limits. Chips are perhaps history's greatest example of a technology that scales well. However, we were also supposed to have flying cars and visit Jupiter by 2001. Sometimes the limits are practical rather than strictly technical. SSTs were designed in the 6
Re: (Score:3)
You seem to be forgetting about the laws of physics. In fact, we are already hitting them. You can't shrink transistors much more or you get slapped with Schrödinger's cat. The interconnects are already using fiber optics. You can only put machines so close to one another. So on and so forth.
When people have made claims before, it was due to either their idea of market forces or the limits of the current technology. Now, the actual physical limits are beginning to present roadblocks. Even if quantum computi
Re: (Score:2)
Distributed computing (Score:2)
I don't see anything about this in the PDF, so I'll ask the Hive Mind here:
How does this affect distributed computing efforts such as Folding@Home and the BOINC project?
These have very little node-to-server and zero node-to-node communication. With F@H already on the petaFLOP scale I wouldn't think it all that unlikely that it would reach exaFLOP level in less than a decade if interest keeps up.
Re:Ha, not the first (Score:5, Insightful)
Moore's law only talks about transistor counts. Building a supercomputer means getting thousands of CPUs to cooperate, which is a much harder challenge.
Anyone (with a large wallet) can stick an exaflop worth of CPUs in a large room. By 2020 you'll be able to do that with a not-so-large wallet. But that does not result in a useful exaflop computer.
Re: (Score:3)
Re:Ha, not the first (Score:5, Insightful)
It's a particular nuisance because the speed of light is pretty strictly enforced...
Even if you went full-on-nuts and replaced fiber interconnects with little tubes full of hard vacuum, to squeak out that slight improvement over the speed of light in glass or air, you'll still see latency that meaningfully hinders the cooperation of multi-GHz CPUs and RAM across systems of any nontrivial size.
For loosely coupled problems, that barely matters; but not all problems are loosely coupled.
Re: (Score:2)
Re:Ha, not the first (Score:5, Insightful)
I'm no expert on the refined world of supercomputers; but my money would be on latency. If you are made of money, bandwidth is a problem that you can substantially brute force. Not 100% efficiently; and layout gets to be a real headache; but if the state of the art in serial interconnects isn't good enough, you can bolt a bunch of them together and have a parallel interconnect(it'll be harder to do board layout for, the wiring will suck more, and it'll cost more; but the major sticking point is money).
If you want to cut latency, even the most exotic photonics-on-die-with-hollow-fiber arrangement imaginable still gives you surprisingly short distances before you start losing CPU cycles to waiting for the return photon.
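A quick sketch of that distance budget, using assumed clock rates and an assumed machine-room scale just to show the order of magnitude:

C = 299_792_458  # m/s, speed of light in vacuum (the best case; fiber is slower)

# How far a signal can travel in a single clock cycle.
for clock_ghz in (1, 3):
    print(f"{clock_ghz} GHz: ~{C / (clock_ghz * 1e9) * 100:.0f} cm of travel per cycle")

# Round trip across an assumed 50 m machine room, counted in 3 GHz cycles.
round_trip_s = 2 * 50 / C
print(f"50 m round trip: ~{round_trip_s * 1e9:.0f} ns = "
      f"~{round_trip_s * 3e9:.0f} cycles at 3 GHz")

At a few GHz the per-cycle budget is on the order of ten centimeters, so any cross-room round trip costs hundreds to thousands of cycles even before switches and protocol overhead.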
Latency not as important as expected (Score:4)
Actually, latency isn't so much of an issue: the #1 systems of the last ~3 years all had torus networks (all the Blue Genes, all the Crays, the K computer too). These networks only perform well for nearest-neighbor communication, which is fine, since most codes running on these machines are simulation codes and that is the only type of communication they need. If you scale up the system, you'll typically also scale the size of the simulation instance (this is known as "weak scaling").
This means that your program can still spend the same time waiting for the network as it could on a smaller machine. The cables do not need to become shorter.
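Here's a toy model of the weak-versus-strong scaling distinction; all the numbers are illustrative assumptions, not benchmarks of any real code:

# Per-step time = local compute + a fixed nearest-neighbour halo exchange.
def step_time(total_cells, n_nodes, s_per_cell=1e-7, halo_s=1e-3):
    return (total_cells / n_nodes) * s_per_cell + halo_s

BASE_CELLS, BASE_NODES = 1_000_000_000, 1_000
for n in (1_000, 10_000, 100_000):
    strong = step_time(BASE_CELLS, n)                    # fixed global problem
    weak = step_time(BASE_CELLS * (n // BASE_NODES), n)  # problem grows with the machine
    print(f"{n:>7,} nodes   strong: {strong*1e3:6.1f} ms   weak: {weak*1e3:6.1f} ms")

Under weak scaling the per-node work and the neighbor exchange both stay put as the machine grows, which is why the network's share of the step time doesn't blow up the way it does in the strong-scaling column.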
Re: Latency not as important as expected (Score:2)
Re: (Score:2)
Re: (Score:2)
It's a particular nuisance because the speed of light is pretty strictly enforced...
Obviously most physicists have a police state mentality, but the anarchist physicists say we should use wormholes!
Re: Ha, not the first (Score:2)
Not to mention data storage and transmission. That's the thing that's currently causing headaches: petabyte /scratch systems that don't go down when you look at them sideways and then similar systems to hold all that crap you just dumped on /scratch!
Re:Ha, not the first (Score:4, Interesting)
building a supercomputer means getting thousands of CPUs to cooperate which is a much harder challenge.
Looking at his presentation, that seems to be his point. He concludes that power efficiency is going to become the limiting factor driving design decisions, and that since the power cost of increasing FLOPS has been so much lower than the power cost of moving larger quantities of data we're heading into an era where connectivity costs will so dominate the cost of cycles that cycles will be essentially free.
He's then basically arguing that it won't be cost-effective to build data transmission architectures that can effectively utilize exaflops, so no one will bother to build an exaflop machine.
He didn't state it, but if the rest of his arguments are correct, perhaps we're going to see the definition of a new metric for HPC, one that somehow captures the ability of a machine to distribute data to its computation nodes.
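The flavor of the argument is easy to reproduce with a back-of-the-envelope power budget; the per-operation and per-byte energies below are assumed orders of magnitude of the kind quoted in such talks, not vendor figures:

EXAFLOP = 1e18   # floating-point operations per second
PJ = 1e-12       # one picojoule, in joules

flop_energy_pj = 10    # assumed energy per double-precision operation
bytes_per_flop = 0.2   # assumed off-chip bytes moved per operation
byte_move_pj = 100     # assumed energy to move one byte off-chip

compute_mw = EXAFLOP * flop_energy_pj * PJ / 1e6
movement_mw = EXAFLOP * bytes_per_flop * byte_move_pj * PJ / 1e6
print(f"arithmetic:    ~{compute_mw:.0f} MW")
print(f"data movement: ~{movement_mw:.0f} MW")

Even with these modest assumptions the data movement term dominates the arithmetic, which is the core of the "cycles become free, wires don't" argument.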
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Moore predicted an increase in transistor count by a factor of 2 every 1.5 years (originally every 2 years, I believe).
This only scales on a processor or on a memory chip, not in a cluster of nodes.
Just because every node in your cluster is 2x faster (or 100 times in this case) does not make the whole cluster similarly faster.
Re: (Score:2)
Re: (Score:2)
At Moore's faux Law,
RTFA. His point is that flops keep getting cheaper, but data links will be the bottleneck.
Re: (Score:3)
If Intel can cut the power to its 'big iron' CPUs (the 4/6/8-core chips), then just increasing the number of processors in supercomputers from 10,000 to 100,000 will give you a 10x increase in speed while using the same or less power. ... An 80x increase at the same size/power as what we have now puts us into exaflops range.
RTFA. Flops are easy. The scaling problem is data links between nodes.
Re: (Score:2)
Re: (Score:2)