US DOE Sets Sights On 300 Petaflop Supercomputer 127
dcblogs writes: U.S. officials on Friday announced plans to spend $325 million on two new supercomputers, one of which may eventually be built to support speeds of up to 300 petaflops. The U.S. Department of Energy, the major funder of supercomputers used for scientific research, wants to have the two systems – each with a base speed of 150 petaflops – possibly running by 2017. Going beyond the base speed to reach 300 petaflops will take additional government approvals. If the world stands still, the U.S. may conceivably regain the lead in supercomputing speed from China with these new systems. How adequate this planned investment will look three years from now is an open question. Lawmakers weren't reading from the same script as U.S. Energy Secretary Ernest Moniz when it came to assessing the U.S.'s place in the supercomputing world. Moniz said the awards "will ensure the United States retains global leadership in supercomputing." But Rep. Chuck Fleischmann (R-Tenn.) put U.S. leadership in the past tense. "Supercomputing is one of those things that we can step up and lead the world again," he said.
Let us see what else is in the past tense (Score:1, Offtopic)
Lawmakers weren't reading from the same script as U.S. Energy Secretary Ernest Moniz when it came to assessing the U.S.'s place in the supercomputing world. Moniz said the awards "will ensure the United States retains global leadership in supercomputing." But Rep. Chuck Fleischmann (R-Tenn.) put U.S. leadership in the past tense. "Supercomputing is one of those things that we can step up and lead the world again," he said
Let us see what else is in the past tense ...
How many microwave ovens / teevee sets are being made inside the United States of America?
How many American jobs have been "outsourced" to places like India or the Philippines?
How many top American companies are using foreigners as their CEOs? ... and the list continues ...
Re: (Score:2)
Sorry, the NSA needs all those cycles to process everyone's phone calls. Remember, it's only illegal for a person to listen in on your calls.
Ehhh Meh (Score:2)
Reminds me a little of Soviet-era "build the biggest thing you can" projects. I could see it if they have a particular problem that needs either faster turnaround or higher resolution than currently available performance provides (weather forecasting comes to mind). But building big just to build big? The interesting part of high performance computing is all in the architecture and the software that makes use of it. This strikes me as a little wasteful.
Re: (Score:1)
Nowadays you inject silicone into the pig's hips and ass.
Re: (Score:3)
Re:Ehhh Meh (Score:5, Informative)
There are plenty of things that can use all the computing power you can throw at it these days. As you mentioned, weather forecasting - though more generally, climate science. Somebody from one of the National Labs mentioned at a college recruiting event that they use their supercomputer for (among other things) making sure that our aging nukes don't explode while just sitting in storage. There are thousands of applications, from particle physics to molecular dynamics to protein folding to drug discovery... Almost any branch of science you can find has some problem that a supercomputer can help solve.
Additionally, it's worth noting that these generally aren't monolithic systems; they can be split into different chunks. One project might need the whole machine to do its computations, but the next job to run after it might only need a quarter - and so four different projects can use the one supercomputer at once. It's not like the smaller computing problems end up wasting the huge size of the supercomputer. After all, many of these installations spend more in electricity bills over the 3- or 5-year lifetime of the computer than they do to install the computer in the first place, so they need to use it efficiently, 24/7.
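For the curious, that splitting is handled by a batch scheduler: each project asks for a slice of the machine and a wall-clock limit, and the scheduler packs jobs together. A minimal sketch of such a job script, assuming a Slurm-style scheduler (the job name, node count, and program are made up for illustration):

    #!/bin/bash
    #SBATCH --job-name=climate_run    # illustrative name
    #SBATCH --nodes=1024              # a quarter of a hypothetical 4096-node machine
    #SBATCH --time=24:00:00           # wall-clock limit lets the scheduler pack jobs tightly
    #SBATCH --exclusive               # whole nodes, not shared within the allocation

    srun ./climate_model input.nml    # launch across the allocated nodes

The remaining three quarters of the machine stay free for other projects' jobs the whole time.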
Re: (Score:3)
Well, you could just actually test old and unrefurbished nukes to see just what all those decay products accumulating beneath their shells do, or you could just simulate it. No wait, the politicians have sworn off all actual testing; you can only simulate. Back in the 2000s, supercomputers were all we had to tell us what was in the decommissioned former Soviet nukes they were asking us to open up and get the plutonium out of - some were seven to ten years behind scheduled maintenance and nobody was sure just what was in them.
Re: (Score:3)
There are plenty of things that can use all the computing power you can throw at it these days. As you mentioned, weather forecasting - though more generally, climate science. Somebody from one of the National Labs mentioned at a college recruiting event that they use their supercomputer for (among other things) making sure that our aging nukes don't explode while just sitting in storage. There are thousands of applications, from particle physics to molecular dynamics to protein folding to drug discovery... Almost any branch of science you can find has some problem that a supercomputer can help solve.
True enough; the rub is that developing solutions for those problems that effectively use supercomputing resources is as big a problem as the problem itself. It's more than likely you are reading this on a multiprocessor with a vector acceleration unit that has more potential compute power than any supercomputer from 15+ years ago. The question is just what your utilization is, and what speedup you actually see from all the extra compute resources.
Re: (Score:1)
Oops - that was me. Forgot to log in (Score:2)
Re: (Score:2)
You have missed the point. Those were developed as research into new architectures and new techniques, not by taking an existing system and making it bigger.
Re: (Score:2)
It's probably more like 20 years, we've been pushing against a wall for a while now.
Re: (Score:2)
There are plenty of things that can use all the computing power you can throw at it these days. As you mentioned, weather forecasting - though more generally, climate science. Somebody from one of the National Labs mentioned at a college recruiting event that they use their supercomputer for (among other things) making sure that our aging nukes don't explode while just sitting in storage. There are thousands of applications, from particle physics to molecular dynamics to protein folding to drug discovery... Almost any branch of science you can find has some problem that a supercomputer can help solve.
Additionally, it's worth noting that these generally aren't monolithic systems; they can be split into different chunks. One project might need the whole machine to do its computations, but the next job to run after it might only need a quarter - and so four different projects can use the one supercomputer at once. It's not like the smaller computing problems end up wasting the huge size of the supercomputer. After all, many of these installations spend more in electricity bills over the 3- or 5-year lifetime of the computer than they do to install the computer in the first place, so they need to use it efficiently, 24/7.
You forgot encryption key searching. Got an encrypted file you want to read? Let's use this beast to determine the encryption key and read the xxx contents.
Re: (Score:2)
As supercomputers grow larger, the pool of problems that benefit by using them gets smaller.
Re: (Score:2)
The number of floating point operations per second (FLOPS) performed by a next-generation game console outranks early supercomputers like the Cray.
Cray-2 = 1.9 GFLOPS
http://www.dcemu.co.uk/vbullet... [dcemu.co.uk]
Dreamcast | CPU: 1.4 GFLOPS | GPU: 0.1 GFLOPS | Combined: 1.5 GFLOPS
PS2 | CPU: 6 GFLOPS | GPU: 0 GFLOPS | Combined: 6 GFLOPS
Xbox | CPU: 1.5 GFLOPS | GPU: 5.8 GFLOPS | Combined: 7.3 GFLOPS
Wii | CPU: 60 GFLOPS | GPU: 1 GFLOPS | Combined: 61 GFLOPS
Xbox360 | CPU: 115 GFLOPS | GPU: 240 GFLOPS | Combined: 355 GFLOPS
PS3 | CP
Re:Ehhh Meh (Score:4, Interesting)
The number of floating point operations per second (FLOPS) performed by a next-generation game console outranks early supercomputers like the Cray.
Sure, but do they have the system capability / bandwidth to actually do anything with those numbers, and is their raw speed offset by not being vector processors like the Cray 2 (which processes an entire array of data in one instruction)? I'm not a hardware geek, but I was an administrator for the Cray 2 at the NASA Langley Research Center back in the mid 1980s and, among other things, wrote a proof-of-concept program in C to perform Fast Fourier transforms on wind tunnel data in near real time - it probably would have been faster had I been a FORTRAN geek - and the system could pump through quite a bit of data, at least for the '80s.
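For flavor, here's roughly what that kind of proof-of-concept looks like today - a minimal sketch using the modern FFTW library rather than anything that ran on the Cray 2, with a synthetic tone standing in for wind tunnel samples:

    #include <fftw3.h>
    #include <math.h>
    #include <stdio.h>

    #define N 1024  /* samples per data block */

    int main(void) {
        fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * N);
        fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * N);

        /* Plan once, execute per block - planning is where FFTW tunes itself. */
        fftw_plan p = fftw_plan_dft_1d(N, in, out, FFTW_FORWARD, FFTW_ESTIMATE);

        for (int i = 0; i < N; i++) {       /* fake input: a 50-cycle tone */
            in[i][0] = sin(2 * M_PI * 50 * i / N);
            in[i][1] = 0.0;
        }

        fftw_execute(p);                    /* one transform per incoming block */

        printf("bin 50 magnitude: %g\n",
               sqrt(out[50][0] * out[50][0] + out[50][1] * out[50][1]));

        fftw_destroy_plan(p);
        fftw_free(in);
        fftw_free(out);
        return 0;
    }

(Compile with something like gcc fft.c -lfftw3 -lm.)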
And the Cray 2 was way prettier than a PS3/4 or Xbox, though the Fluorinert immersion used for cooling is a bit cumbersome and expensive :-)
Re: (Score:3)
Sure, but do they have the system capability / bandwidth to actually do anything with those numbers, and is their raw speed offset by not being vector processors like the Cray 2 (which processes an entire array of data in one instruction)?
Nope. The vector unit, with its crazy chaining and entire-array computations initiated by a single instruction, was exactly the set of tricks required to get the CRAY to be as fast as it was. With all those tricks, the CRAY-2 peaked at about 2 GFLOPS or so. Bear in mind the relative of Vector process
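To make the vector idea concrete, the canonical example is the axpy loop below. On a Cray, the multiply and add were single vector instructions, with the multiply results chained straight into the adder; a modern compiler gets a smaller-scale version of the same effect by auto-vectorizing the loop into packed SIMD multiply-adds (a sketch - actual code generation depends on compiler and flags, e.g. gcc -O3 -march=native):

    /* y = a*x + y over whole arrays: one vector op per line on a Cray,  */
    /* packed SIMD instructions a few elements at a time on current CPUs. */
    void saxpy(int n, float a, const float *x, float *y) {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }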
Re: (Score:3)
Not sure what point you're trying to make here, but newer supercomputers are very different from those early supercomputers, in far more ways than one. The parallelism is much higher (supercomputers now have millions of nodes, with exascale computers expected to have tens of millions or more), for instance. It's extremely hard to program for them. Interconnects have not been improving very much and so data flow between cores has to be managed carefully.
Re: (Score:2)
By 'nodes' I mean 'cores'. Typo.
Re: (Score:2)
Re: (Score:3)
With tens of millions of nodes, data logistics pretty much always is a problem, even for supposedly embarrassingly parallel problems. Either the nodes communicate with only a few neighbours, in which case you have to carefully design the layout of the computations to make sure every node can communicate efficiently with its neighbours, and there probably is also some kind of global clock that has to be maintained. Alternatively you have some kind of farmer-worker setup where each worker node is happily chomping away on its own piece of the problem.
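The "few neighbours" pattern described above is usually coded as a halo (ghost cell) exchange. A minimal 1-D sketch in MPI - real codes do this in 2-D or 3-D and with non-blocking calls:

    #include <mpi.h>

    /* u[0] and u[n+1] are ghost cells; u[1..n] is this rank's chunk.  */
    /* Each rank swaps edge values with its left and right neighbours. */
    void halo_exchange(double *u, int n, MPI_Comm comm) {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);
        int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
        int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

        /* send my right edge right, receive my left ghost from the left */
        MPI_Sendrecv(&u[n], 1, MPI_DOUBLE, right, 0,
                     &u[0], 1, MPI_DOUBLE, left,  0,
                     comm, MPI_STATUS_IGNORE);
        /* send my left edge left, receive my right ghost from the right */
        MPI_Sendrecv(&u[1],     1, MPI_DOUBLE, left,  1,
                     &u[n + 1], 1, MPI_DOUBLE, right, 1,
                     comm, MPI_STATUS_IGNORE);
    }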
Some stuff is set and forget without chatter (Score:2)
Re: (Score:2)
I was trying to explain that there is a vast number of applications using classic supercomputer-type technology, ranging from academic research down to multiplayer games. A modern game console now uses multiple cores, vector processors, vector chaining, kernels (if you consider vertex, fragment and geometry shaders as kernels), and client-server communication to update players' moves. Even geometry data is streamed across the network, as some MMORPG game worlds are so vast that all the data couldn't be stored on one device.
Re: (Score:2)
Re: (Score:2)
Supercomputers are designed to scale out almost without limit. Everything is duplicated, from the cores on a single chip die to the boards, racks, rack-frames, aisles of rack-frames and the interconnect fabric. The only limits to the size of a supercomputer are financial: component cost, office space lease and electricity bills. Usually it's the last one that's the problem. The slowest processing nodes can be pulled out and replaced with more powerful ones as time goes by.
Re: (Score:2)
Supercomputers are designed to scale out almost without limit. Everything is duplicated, from the cores on a single chip die to the boards, racks, rack-frames, aisles of rack-frames and the interconnect fabric. The only limits to the size of a supercomputer are financial: component cost, office space lease and electricity bills. Usually it's the last one that's the problem. The slowest processing nodes can be pulled out and replaced with more powerful ones as time goes by.
That's meaningless if your software doesn't scale or has serial bottlenecks.
Re: (Score:2)
That's why many simulations are still written in Fortran - the compilers were optimized to handle multi-dimensional grid arrays, which is what fluid dynamics and other solvers use.
Re: (Score:2)
Really ?
I always thought it was the incredible abundance of numeric and simulation libraries for Fortran and the amount of testing they have undergone; there is also the inertia of so many scientists and engineers learning Fortran as their first language, or just knowing the language.
Re: (Score:3)
...also there is the inertia of so many scientists and engineers...
Sounds like words of a youngster who doesn't know that newer isn't always better.
Re: (Score:2)
FORTRAN is an excellent language for that sort of thing even though the standards people seem hell bent on screwing that up lately.
C is great for many things but it's too easy to have bugs that crash it in hard to diagnose ways. Interpreted languages have their place too, but not when absolutely maximum performance is a requirement.
Re: (Score:2)
FORTRAN is an excellent language for that sort of thing even though the standards people seem hell bent on screwing that up lately.
C is great for many things but it's too easy to have bugs that crash it in hard to diagnose ways. Interpreted languages have their place too, but not when absolutely maximum performance is a requirement.
So much in this it's almost impossible to reply to.
Fortran is certainly a good language for numerical codes of any type. I wasn't aware this was in dispute.
C is not the only other choice for a compiled language.
Is there an inherent performance increase for precompiled code vs. interpreted or just-in-time compiled code on massively parallel systems? Dunno. I'd pass that off to someone doing their doctoral thesis, and they'd still be likely to get a wrong/incomplete answer. On the other hand, if we want "Leadership in supercomputing"...
Re: (Score:2)
You seemed to feel that inertia was a big factor.
No JIT will be as fast as compiling once in advance for exactly the hardware it will run on, especially given the chance to do timing (and correctness) trials with various optimizations first. Interestingly, JIT and scripted languages make a lot more sense for small to medium clusters, particularly if they would see idle time anyway. In those, the pressure to get value from every cycle tends to be a bit lower, such that saving development time and debugging effort is worth it.
Re: (Score:2)
You seemed to feel that inertia was a big factor.
Certainly do, haven't seen anyone make a plausible case it isn't
No JIT will be as fast as compiling once in advance for exactly the hardware it will run on
You really don't understand the nature of the question, do you? Because if you did, you would never make such a blanket statement as an answer.
Re: (Score:2)
And so I claimed that it actually is a good language to use, not just there by inertia.
In what way is a JIT going to run faster than a binary pre-compiled after careful (and automated) profiling and tuning? JIT's advantage is in cases where the end user can't do a custom compile.
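For what it's worth, the ahead-of-time tuning being argued about looks something like this with gcc (the flags are real; the program name is made up):

    gcc -O3 -march=native -fprofile-generate solver.c -o solver   # instrumented build
    ./solver test_input                                           # trial run records a profile
    gcc -O3 -march=native -fprofile-use solver.c -o solver        # rebuild using the profile

-march=native targets exactly the hardware the binary will run on; the profile-guided rebuild bakes in measurements from a real run.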
Re: (Score:2)
No JIT will be as fast as compiling once in advance for exactly the hardware it will run on
vs
In what way is a JIT going to run faster than a binary pre-compiled after careful (and automated) profiling and tuning? JIT's advantage is in cases where the end user can't do a custom compile.
You're babbling.
Re: (Score:2)
More likely you can't answer the question.
Re: (Score:2)
More likely you can't answer the question.
I believe I initially claimed that the answer to that question would likely be someone's thesis topic.
Troll harder
Re: (Score:2)
No troll here. Just someone who apparently is a lot more likely to write that paper one day than you are. If you think I'm talking gibberish, it's because the argument is over your head. Had you been more polite about it, I might be more polite here.
Re: (Score:2)
No troll here. Just someone who apparently is a lot more likely to write that paper one day than you are. If you think I'm talking gibberish, it's because the argument is over your head. Had you been more polite about it, I might be more polite here.
LOL. I'll worry about it when you can go a few paragraphs without contradicting yourself.
Re: (Score:2)
Those numeric and simulation libraries were optimized in conjunction with the Fortran compiler to take advantage of the hardware. The most obvious example: having fixed-size multi-dimensional arrays as global variables. For regular grids, the compiler can then decide which way to slice that data block up so that every processing node gets assigned a chunk of data. Since each function is not more than a few loop counters reading previous and current state for each grid cell, those get optimized into parallel code.
Re: (Score:2)
Those numeric and simulation libraries were optimized in conjunction with the Fortran compiler to take advantage of the hardware.
So you are trying to say that having millions of lines of code already in place that do things like finite element analysis has nothing to do with it?
Re: (Score:2)
The most obvious example: having fixed-size multi-dimensional arrays as global variables.
You mean like these ?
http://www.phy.ornl.gov/csep/p... [ornl.gov]
Fortran 90 has three varieties of dynamic arrays. All three allow array creation at run time with sizes determined by computed (or input) values. These three varieties of dynamic arrays are: automatic arrays, allocatable arrays, and pointer arrays.
Oh, I wouldn't hold my breath waiting for the compiler to parallelize those; it has to be able to determine it's safe to do so, and more often than not a programmer will have to tell it to with a doall.
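A "doall" today is typically an OpenMP directive: the programmer, not the compiler, asserts that the loop iterations are independent. A minimal sketch in C (in Fortran the equivalent would be an !$omp parallel do on the loop; compile with -fopenmp):

    /* Jacobi-style grid sweep: every cell of 'next' is computed from */
    /* 'prev' only, so iterations are independent - but the compiler  */
    /* can't always prove that, so the programmer asserts it.         */
    void sweep(int nx, int ny, const double *prev, double *next) {
        #pragma omp parallel for collapse(2)
        for (int i = 1; i < nx - 1; i++)
            for (int j = 1; j < ny - 1; j++)
                next[i * ny + j] = 0.25 * (prev[(i - 1) * ny + j]
                                         + prev[(i + 1) * ny + j]
                                         + prev[i * ny + j - 1]
                                         + prev[i * ny + j + 1]);
    }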
Re: (Score:2)
Just because a language has a feature doesn't mean you have to use it, especially when there are performance tradeoffs. And there are still performance advantages to having a static dimension within a multidimensional array, even if other dimensions are dynamic.
Hmm, I must have missed where Fortran is the only language with statically dimensioned arrays, or where it is better to have a compiler decide what is safe to parallelize based on compile-time analysis vs. having the programmer decide what should be parallelized based on his understanding of the problem vs. dynamically making that decision at run time.
Oh, just an aside: that is one of the many reasons it's damn near impossible to make blanket statements about how things perform on massively parallel systems.
Re: (Score:2)
It seems you are unaware of just how much software there is out there that can scale to orders of magnitude more parallel processing units than we have now. Just in the field of plasma physics that I work in, there are numerous sets of software that can use any size of supercomputer available today, because the algorithms involved are incredibly parallel. It is pretty common to hear in conference talks that the resolution of some solver or simulation was limited by the computer time budget they had, and even then they only got one big run, when there would be value in doing multiple runs to gather statistics on certain behavior. And for many problems, the extra resolution is not about just smoothing out a solution, but because of multiscale problems where the phenomena under study occur at very different scales, and many computational physicists have to extrapolate from numeric work that greatly abbreviates those scales.
LOL, so your answer is to do what hasn't been working for you, but bigger?
Re: (Score:2)
Nice derail.
The question here though is not about plasma physics but U.S. leadership in supercomputing.
Re: (Score:2)
And you still seem to have no clue what these computers get used for... a large slice of DoE's computer resources are used for computational plasma physics. If you don't know what these things are being used for, any thoughts you have on current demand, needs, and shortcomings of such systems are baseless. On top of that, your lack of reading comprehension results in every post of yours in this chain to be a derailment. So either congrats on trolling efforts, or sorry that you can't keep up.
You haven't understood a single thing I have said. All your replies have been of the form "I am interested in X, this helps me with X, therefore I like it."
Re: (Score:2)
Just what do you think this does to advance U.S. supercomputing?
There's no advance in the architecture.
There isn't anything to advance software development.
The problems it solves are the same problems that have run fine on parallel machines since they were invented.
All it does is throw money at building a bigger version of things we already have and have had for a long time.
The reason I haven't said anything about this and U.S. leadership in supercomputing is that it doesn't do anything to help it.
Play again. Maybe next time.
Re: (Score:2)
And guess what: when the government orders a new hammer for a project, they probably are not going to do anything to advance the design of hammers.
And they aren't going to spend 300 million dollars on it saying it's to maintain U.S. leadership in hammers.
It would be a waste of money if it was being built to just study supercomputers,
Shame the people making the law don't look at it the way you do.
"will ensure the United States retains global leadership in supercomputing."
And lastly
but that isn't the point; these things are intended to be used to further other research, and nothing you've said suggests they are unable to do so.
If the other research goals can justify the cost, let them; just don't go around saying its goal is to advance supercomputing.
Re: (Score:2)
Re: (Score:3)
You forgot the Tsar Bomba http://en.wikipedia.org/wiki/T... [wikipedia.org]
But then again there were so many. It's kind of mind-numbing that we have to borrow stupid from the former Soviet Union.
P.S. The Soviet era is Lenin to the breakup - the life of the Soviet Union.
Re: (Score:2)
I think something like this would be a case of "if you build it, they will come".
Re: (Score:3)
These systems will use IBM Power CPUs and Nvidia's Volta GPU, a chip still in development.
Re: (Score:3)
On the contrary, modern supercomputers are designed for energy and thermal efficiency that rivals and exceeds that of smartphones. Granted, you wouldn't want to put one of these NVidia chipsets in a smart phone, but in terms of compute power per watt, they're far more efficient than general purpose computers.
That said, they do consume a lot of power. But that's precisely why they're engineered for efficiency -- when you're getting the bill for such a monster, that extra 10W/core adds up big time.
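Back-of-the-envelope, with assumed numbers: a 10 MW system running around the clock draws 10,000 kW x 8,760 h ≈ 88 million kWh a year; at an industrial rate of, say, $0.07/kWh, that's roughly $6 million a year, or about $30 million over a five-year lifetime - the same order of magnitude as the hardware itself. Hence the obsession with FLOPS per watt.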
Re: (Score:2)
Nice and all, but where's the beef? (Score:4, Interesting)
I remember back in the '80s all the excitement about building faster and faster supercomputers to solve all sorts of grand challenge problems, and how a teraflop would just about be nirvana for science. Around 2000, the teraflop came and went, and then the petaflop became the new nirvana for science at which we would be able to solve grand challenge problems. Now the exaflop is the new nirvana that will solve grand challenge science problems once again. It seems raw computing power hasn't given us the progress in science we predicted. Sure, it's been used for stuff, but it hasn't helped us crack nuclear fusion, for instance - one of its often-hyped goals.
Where's the score card on how much progress has been made because of super computing? I know drug design is one very useful application, but what are other areas that have been transformed?
Re: Nice and all, but where's the beef? (Score:1)
Faster machines just run shitty code faster. Without theory it's all a waste.
Re: (Score:3)
A large proportion of the science that has been done with supercomputers is about nuclear weapons and is thus classified. There's no real way for us to know if supercomputers have helped in that direction or not. Presumably they have, otherwise LLNL wouldn't be getting the latest shiniest toy every few years (they often get the very first make of a new supercomputer that is developed). Or they haven't and it's all a big waste of money.
Re: (Score:2)
Got it in two.
Re: (Score:1)
They've done public research on supernova simulations. Short-term weather forecasts have gained from higher resolution grids.
Re: (Score:3)
Re: (Score:2)
Sure it's been used for stuff, but it hasn't helped us crack nuclear fusion for instance, one of its often hyped goals.
At some point they'll need a nuclear fusion reactor to power the "next big thing in supercomputers", so someone working at some non-cutting-edge compute facility will figure it out so that they can get the grant money for that "next big thing in supercomputers." It's all about funding :-)
Re:Nice and all, but where's the beef? (Score:5, Informative)
There are countless problems solved only as a result of supercomputers. Setting aside for a minute the minority of problems that are classified (e.g. nuclear stockpile stewardship, etc), supercomputers benefit both academia and industry alike. You'll be hard pressed to find a Fortune 500 company that doesn't have at least one if not multiple supercomputers in house.
For example, here is a list of case studies of specific manufacturing problems that have been solved http://www.compete.org/publica... [compete.org] which include things as mundane as shipping pallets, golf clubs, and washing machines.
The organization I work for, the Ohio Supercomputer Center, annually publishes a research report listing primarily academic projects that benefit from our supercomputers: https://www.osc.edu/sites/osc.... [osc.edu] These range from periodontal disease and photovoltaic cells to forest management and welding.
TL;DR: "HPC Matters" in many ways. Here are some short blinky flashy videos: http://www.youtube.com/channel... [youtube.com]
Re: (Score:3)
P.S. - OSC is going to be doing a reddit AMA on Monday at 7:30PM Eastern. Feel free to hop on and ask us some questions!
“We will be answering questions about running a Supercomputer Center, High Performance Computing (HPC) and anything else. Our current systems have a total performance of 358 TeraFLOPS, and consist of 18,000 CPUs, 73 TB of RAM and 4 PB of storage, all connected to a 100 Gbps statewide network (yes, it will run Crysis, just barely ;). We will be holding the AMA in conjunction with the SC14 conference.”
Re: (Score:1)
Re: (Score:2)
> Setting aside for a minute the minority of problems that are classified (e.g. nuclear stockpile stewardship, etc)
Nuclear simulations aren't a 'minority'. Both of the US' top supercomputers (Titan and Sequoia) are at DOE facilities (ORNL and LLNL). Most of the time on Sequoia is reserved for nuclear simulations. Titan does more varied stuff but nuclear still takes up a sizable share of its time.
Re: (Score:2)
Re:Nice and all, but where's the beef? (Score:4, Funny)
Become a researcher in a field that makes use of lots of computing power, then specialize in the math modeling and simulation subfields. Surprisingly often it's quite easy to get time on a system if you apply as a post-doc or even a grad student. Becoming part of a research group that develops simulation tools for others to use can be an especially good way.
Or, get an advanced degree in numerical analysis or similar and get hired by a manufacturer or an organization that builds or runs supercomputers. On one hand that'd give you a much more permanent job, and you'd be mostly doing coding, not working on your research; on the other hand it's probably a lot harder to get.
But ultimately, why would you want this? They're not especially magical machines. Especially today, when they're usually Linux based, and the system developers do all they can to make it look and act like a regular Linux system.
If you want to experience what it's like, try this: Install a 4-5 year old version of Red Hat on a workstation. Install OpenMP and OpenMPI, and make sure all your code uses either or both. Install an oddball C/C++ compiler. Access your workstation only via SSH, not directly. And add a job queue system that will semi-randomly let your app run after anything from a few seconds to several hours.
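And if you want to complete the simulation, the day-to-day workflow is about three commands (cluster details made up; mpicc/mpirun come with any MPI installation, qsub with a PBS-style queue):

    mpicc -O2 hello.c -o hello      # compile against the MPI library
    mpirun -np 4 ./hello            # quick local sanity check with 4 ranks
    qsub -l nodes=8 run_hello.sh    # submit to the queue, then wait... and wait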
Re: (Score:2)
I think it's a bit like IT: nobody notices when it just works. More and more bad designs die on the drawing board because we run detailed simulations. For example, if you buy a car today I expect the deformation zones have gone through plenty of simulated crashes. Perhaps you've even stepped that up another notch and let the computer try to design what the optimal deformation zone looks like within certain requirements. Thousands of adjustments times thousands of scenarios at different angles, speeds and loads.
Re: (Score:2)
Sure it's been used for stuff, but it hasn't helped us crack nuclear fusion for instance, one of its often hyped goals.
Speak for yourself [wired.com], bucko. ;)
Re: (Score:2)
Take a look, there is some neat stuff going on with Blue Waters: https://bluewaters.ncsa.illino... [illinois.edu]
Most science is not breakthroughs; it's usually slow progress, with many failed experiments.
These computers are facilitating a lot of good science, and increases like this in our computational infrastructure for research are great news. I do wonder how they are going to power this beast and what kind of hardware it will be made of. 300 PFlops is pretty unreal with today's technology.
Re: (Score:1)
petaflop porn filter (Score:2)
Re: (Score:2)
lol.. I love that series.
regain the lead in supercomputing speed from China (Score:1)
The subject implies that the NSA publicizes the capabilities of their rigs. I would be willing to bet they have nearly the computing power of China all by themselves.
Conservative design (Score:2)
For 20+ years, HPC systems have relied on the same conservative design: compute separated from storage, connected by InfiniBand. Hadoop kind of shook up the HPC world with its introduction of data locality, especially as scientific use cases have involved larger data sets that distributed data storage is well-suited for. The HPC world has been wondering aloud how best and when to start incorporating local data storage for each node. Summit introduces a modest 800GB of non-volatile storage per node for caching.
Re: (Score:2)
InfiniBand is the only Petascale-proven, standard interconnect solution, connecting 24 out of the 48 Petaflop capable systems on the [TOP500] [mellanox.com]
On enough with the whining (Score:2)
I hate this attitude that if you don't have the top spot, you are crap. The idea that the US somehow lost something by not having the first spot on the Top500 list is just silly.
I mean, for one thing, the Chinese computer is more specialized than the big US supercomputers. It gets its performance using Intel Xeon Phi GPGPU-type processors. Nothing wrong with that, but they are vector processors hanging off the PCIe bus. They work a lot like graphics cards. There are problems that they are very fast at, and problems that they are not.
Lies, damn lies, and claims from China == same (Score:1)
If the world stands still, the U.S. may conceivably regain the lead in supercomputing speed from China with these new systems
It's kind of hard to regain something you didn't truly lose to China.
Re: (Score:2)
Nonsense. The US is a technology backwater now. Even if it "regains" the larger number, it does not have the people to actually use this infrastructure efficiently, making it a meaningless stunt.
another idea (Score:1)
Here's a thought. The companies that make chips that are actually good/fast are 100% American. So just don't sell to any other countries in bulk for supercomputer use. Win by cutting off the supply.
Re: (Score:2)
AMD produces CPUs in Dresden, memory and chipsets are fully produced outside the US, ARM is British, the CPUs for China's supercomputer are made there, etc. These are global companies, sometimes non-US domestic ones, but never US-only companies. Your mindless patriotism blinds you to reality.
The result of your proposed move would be that the US would not get components, not the other way round.
Higher numbers are not "leadership"... (Score:2)
A factor of 10 is pretty meaningless in supercomputing. Software quality makes much more of a difference. Of course, politicians are not mentally equipped to understand that and instead want "the larger number" like the most stupid noob PC buyer.
Right before Nov'14 Top500 list in SC'14 (Score:2)
So, they make this announcement right before the new Top500 list is unveiled at the SuperComputing conference... Which clearly means that once again there will be no US system in the Top 1 position, right?
Re: (Score:2)
Chess is "easy" compared to Go. While Chess requires more bits to store the board the search space for Go is **exponentially* larger. i.e. A a single state of the go board is 2^(19*19) = 2^361 positions = 46 bytes.
Links of interest:
* http://codegolf.stackexchange.... [stackexchange.com]
* http://en.wikipedia.org/wiki/B... [wikipedia.org]