IEEE Says Multicore is Bad News For Supercomputers 251
Richard Kelleher writes "It seems the current design of multi-core processors is
not good for the design of supercomputers. According to IEEE: 'Engineers at Sandia National Laboratories, in New Mexico, have simulated future high-performance computers containing the 8-core, 16-core, and 32-core microprocessors that chip makers say are the future of the industry. The results are distressing. Because of limited memory bandwidth and memory-management schemes that are poorly suited to supercomputers, the performance of these machines would level off or even decline with more cores.'"
Time for vector processing again (Score:5, Insightful)
Sounds like it's time for supercomputers to go their own way again. I'd love to see some new technologies.
Re:Time for vector processing again (Score:5, Interesting)
I've always felt there was something odd about the recent trend of supercomputers using common hardware components. They have really lost their way in supercomputing by just making a beefed-up PC and running a version of a common OS that can handle it, or clustering a bunch of PCs together. Multi-core technology is good for desktop systems, as it is meant to run a lot of relatively small apps that rarely take advantage of more than 1 or 2 cores per app. In other words, it allows multitasking without a penalty. We don't use supercomputers that way. We use them to run 1 app that takes huge resources - one that would take hours or years on your PC - and spit out results in seconds or days. Back in the early-to-mid '90s we had different processors for desktops and supercomputers. Yes, it was more expensive for the supercomputers, but if you were going to pay millions of dollars for a supercomputer, what's the difference if you need to pay an additional $80,000 for more custom processors?
Re: (Score:2)
Yes, I agree, and have the same odd feeling. The first time I read an article where (I think) Los Alamos was ordering a supercomputer with 8192 Pentium Pro processors in it, I was like, WTF?
I miss the days when supercomputers looked like alien technology or something out of Raiders of the Lost Ark.
Re:Time for vector processing again (Score:5, Insightful)
Looks are deceptive.
The problem with multicores relates to the fact that the cores are processors, but their relationship to other cores and to memory isn't fully 'crossbar'. Sun did a multi-CPU architecture that's truly crossbar (meaning there are no dirty-cache problems or semaphore latencies) among the processors, but the machine was more of a technical achievement than a decent workhorse for day-to-day use.
Still, cores are cores. More cores aren't necessarily better until you fix what they describe. And it doesn't matter what they look like at all. Like any other system, it's what's under the hood that counts. Esoteric-looking shells are there for marketing purposes and cost-justification.
Re:Time for vector processing again (Score:5, Interesting)
well, supercomputing has always been about maximizing system performance through parallelism, which can only be done in three main ways: instruction level parallelism, thread level parallelism, and data parallelism.
ILP can be achieved through instruction pipelining, which means breaking down instructions into multiple stages so that CPU modules can work in parallel and reduce idle time. for instance, in a classic RISC pipeline you break an instruction down into 5 stages: instruction fetch, instruction decode, execute, memory access, and register write-back.
so while the first instruction is still in the decode stage the CPU is already fetching a second instruction. thus if fully-pipelined there are no stalls or wasted idle time, and a new instruction is loaded every clock cycle, resulting in a maximum of 5 parallel instructions being processed simultaneously.
then there are superscalar processors, which have redundant functional units--for instance, multiple ALUs, FPUs, or SIMD (vector processing) units. and if each of these functional units is also pipelined, the result is a processor with an execution rate far in excess of one instruction per cycle.
thread level parallelism OTOH is achieved through multiprocessing (SMP, ASMP, NUMA, etc.) or multithreading. this is where multicore and multiprocessor systems come in handy. multithreading is generally cheaper to achieve than multiprocessing since fewer processor components need to be replicated.
lastly, there's data level parallelism, which is achieved in the form of SIMD (Single Instruction, Multiple Data) vector processors. this type of parallelism, which originated from supercomputing, is especially useful for multimedia applications, scientific research, engineering tasks, cryptography, and data processing/compression, where the same operation needs to be applied to large sets of data. most modern CPUs have some kind of SWAR (SIMD Within A Register) instruction set extension like MMX, 3DNow!, SSE, AltiVec, but these are of limited utility compared to highly specialized dedicated vector processors like GPUs, array processors, DSPs, and stream processors (GPGPU).
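to make the SIMD idea concrete, here's a minimal sketch using x86 SSE intrinsics (a sketch under assumptions: an SSE-capable compiler, 16-byte-aligned arrays, and a length divisible by 4):

    #include <stddef.h>
    #include <xmmintrin.h>  /* SSE intrinsics */

    /* c[i] = a[i] + b[i], four float lanes per instruction.
       assumes n % 4 == 0 and 16-byte-aligned pointers. */
    void vec_add(const float *a, const float *b, float *c, size_t n)
    {
        for (size_t i = 0; i < n; i += 4) {
            __m128 va = _mm_load_ps(a + i);           /* load 4 floats */
            __m128 vb = _mm_load_ps(b + i);
            _mm_store_ps(c + i, _mm_add_ps(va, vb));  /* 4 adds at once */
        }
    }

the same loop on a dedicated vector processor would operate on much longer vectors per instruction, which is where the really big wins come from.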
Re: (Score:2)
Certainly sales efforts are justified. But it's what the machine does, rather than its esoteric facade, that makes a difference. CEOs like Lambo looks. Nerds understand that it's how much you can actually productively crunch that makes the difference. Gimme crunch, as the aesthetics are somewhat meaningless. These are computers.
Re: (Score:2)
Absolutely. A supercomputer's good looks are there to help the buyers make a purchasing decision. The people actually using the supercomputers don't necessarily even see them on a regular basis.
Re:Time for vector processing again (Score:5, Insightful)
It's very simple. Intel & AMD spend about $6bn/year on R&D. The total supercomputing market is on the order of $35bn (out of a global IT market on the order of $1000bn) and a big chunk of that is spent on storage, people, software, etc., rather than processors. That market simply isn't large enough to support an R&D effort which will consistently outperform commodity hardware at a price people are willing to pay. Even if a company spent a huge amount of money developing a breakthrough architecture which dramatically outperformed existing hardware, the odds are that the commodity processors would catch up before that innovator recouped its development costs. Certainly they'd catch up before everyone rewrote their software to take advantage of the new architecture. The days when Seymour Cray could design a product which was cutting edge & saleable for a decade are long gone.
Re:Time for vector processing again (Score:5, Interesting)
It may be true that "that market simply isn't large enough to support an R&D effort which will consistently outperform commodity hardware at a price people are willing to pay," but that's not quite tantamount to saying "there is no possible rational justification for a larger supercomputer budget." There are considerable inflection points and external factors to consider.
The market doesn't allocate funds the way a central planner does. A central planner says, "there isn't room in this budget to add to supercomputer R&D." The way the market works is that commodity hardware vendors beat each other down until everybody is earning roughly similar normal profits. Then somebody comes along with a set of ideas that could double the rate at which supercomputer power is increasing. If that person is credible, he is a standout investment - not despite the fact that there is so much money being poured into commodity hardware, but because of it.
There may also be reasons for public investment in R&D. Naturally the public has no reason to invest in commodity hardware research, but it may have reason to look at exotic computing research. Suppose that you expected to have a certain maximum practical supercomputer capability in twenty years' time. Suppose you figure that once you have that capability you could predict a hurricane's track with several times the precision you can today. It would be quite reasonable to put a fair amount of public research funds into supercomputing in order to have that ability in five to ten years' time.
Re: (Score:3, Interesting)
Thus the "commodity" is the IP design, not the finished chip. If everybody else is doing a chip with 128 cores and one interconnect, they'll be happy to fab you a chip with one core and 128 interconnects.
Re: (Score:2)
The problem is that no idea doubles the rate at which supercomputers advance. Most of the ideas out there jump forward, but they do it once. Vectors, streams, reconfigurable computing - all of these buzzwords were once the next big thing in supercomputing. Today everyone is talking about GPGPUs. None of them go very far. How much engineering goes into the systems? How long does it take to get to market? How difficult is it to rewrite all the algorithms to take advantage of the new machine? What proportion o
Re: (Score:2)
I'm talking about the scenario TFA proposes: that directions in technology cause supercomputing advancement to stall. In that case, expressed as a ratio, any advancement at all would be infinite. However, I don't expect improvements in supercomputing will go all the way down to zero.
Now, why couldn't the rate of improvement double over some timescale from what it is now? I think it is because investors don't care a rat's ass about the rate of technological advance; they care about having something to sel
Re:Kill all engineering then! (Score:4, Insightful)
The phrase "By logical extension" is just another way of saying "This is a straw man argument"
I believe that the point he was making was not that it's pointless to go beyond X86 hardware, but that it's more cost-effective to use consumer hardware. Consumer hardware is not necessarily X86 hardware. See IBM's Roadrunner, presently the fastest supercomputer in the world, which uses an advanced version of the PS3's processor (the PowerXCell 8i).
In time, we'll probably see demand in consumer hardware for breaking past the boundaries and bottlenecks of multi-core processing, and so supercomputers will follow.
Re:Time for vector processing again (Score:4, Informative)
There are really only 2 options for modern systems when it comes to memory: you can have lots of cores and a tiny cache, like GPUs, or lots of cache and fewer cores, like CPUs (ignoring type-of-core issues, on-chip interconnects, etc.). So there is little advantage to paying 10x per chip to go custom versus using more cheaper chips, when they can build supercomputers out of CPUs, GPUs, or something in between like the Cell processor.
Re:Time for vector processing again (Score:5, Interesting)
A problem related to the speed of memory access is its energy efficiency. According to an IEEE Spectrum Radio [ieee.org] interview with Peter Kogge, current supercomputers can spend many times more energy shuffling bits around than operating on them. Today's computers can do a double-precision (64-bit) floating-point operation using about 100 picojoules. However, it takes upwards of 30 pJ per bit to get the 128 bits of operand data loaded into the floating-point math unit of the CPU, and then to move the 64-bit result elsewhere.
Actual math operations consume 5-10% of a supercomputer's total power, while moving data from A to B approaches 50%. Most optimization and innovation in the past few decades has gone into compute algorithms in the CPU core, and very little has gone into memory.
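Taking those (admittedly rough) figures at face value, the imbalance is stark: moving the operands in and the result out costs about (128 + 64) bits × 30 pJ/bit = 5760 pJ, versus roughly 100 pJ for the floating-point operation itself - data movement burning more than 50 times the energy of the math it feeds.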
Re: (Score:2)
Modern CPUs have 8+ megabytes of L2/L3 cache on chip, so RAM is only a problem when your working set is larger than that.
Unfortunately, most modern apps require a far larger working set than that! The crappy .NET app I use at work had a working set of 700 MB today.
The other issue is that HPC applications generally perform small amounts of processing on lots and lots of snippets of data - i.e., highly parallel processing. This means that memory bandwidth is a very significant bottleneck.
you can have lot's of cores and a tiny cache like GPU's
Incidentally, GPUs have a lot of cache - my graphics card has 512 MB of RAM.
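To see why memory bandwidth, not compute, is the wall for such codes, here's a minimal sketch of a STREAM-style triad kernel (the array size is an arbitrary choice, just large enough to defeat any cache):

    #define N (32 * 1024 * 1024)  /* 256 MB per double array: far larger than any cache */

    /* STREAM-style triad: one multiply-add per 24 bytes moved, so the
       memory bus, not the ALUs, limits how fast this can possibly run.
       Extra cores running this loop mostly just fight over that bus. */
    void triad(double *a, const double *b, const double *c, double s)
    {
        for (long i = 0; i < N; i++)
            a[i] = b[i] + s * c[i];
    }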
Re: (Score:2, Insightful)
This is Slashdot; our professions are computer-related, not literature-based. You're on the wrong website.
Re: (Score:2, Funny)
Because I'm sure inserting a random apostrophe into your code would make it run just fine...
Re: (Score:2)
It's a shame we don't compile written word. ...
Programming is not literature, it's machine instructions.
Re: (Score:3, Informative)
Yes, that is what I want: a supercomputer designed by an English major...
Please get over yourself. This is Slashdot, not something important like a resume or a will.
Re: (Score:3, Insightful)
Language is a tool, and everyone who uses the tool needs to use it properly. HTML is a tool, and there are proper use standards for it. Some, however, choose not to use those standards, and it only makes a mess for everyone else who does use them. If you're going to use a tool, you need to learn to use it correctly; language is no exception.
Re: (Score:3, Informative)
Bad English isn't something you can keep locked out of sight in the back closet of society; it's like a termite infestation. Allow it a foothold and it'll spread everywhere. It's a higher-entropy state.
There's a world of difference between someone who's just writing casually (and goofing up) and someone who is completely unable to grasp the tenets of grammar. The former are perfectly capable of writing well on a resume, as you say; the latter are functionally illiterate, and they should be told when thei
Re:Time for vector processing again (Score:4, Funny)
"I disapprove of what you say, but I'll defend to the death your right to say it" -Voltaire
... as long as you spell it right :)
Re:Time for vector processing again (Score:5, Funny)
Hey dipshit. When you mock someone's grammar, you'd sure as fuck better not mis-spell 'apostrophe'
Idiot.
I'll paste it a few times so you can look at your grotesque failure more:
aprostrophe
aprostrophe
aprostrophe
aprostrophe
See how stupid that looks?
Re: (Score:2)
Lots of people think so.
Re:Time for vector processing again (Score:5, Insightful)
My supercomputing tasks are computation-limited. Multicores are great because the cores share memory, which saves me the overhead of porting my simulations to distributed-memory multiprocessor setups. I think a better summary of the study is:
Faster computation doesn't help communication-limited tasks. Faster communication doesn't help computation-limited tasks.
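As a minimal sketch of why that shared memory is so convenient (assuming an OpenMP-capable compiler), a compute-bound loop parallelizes across cores with a single pragma - no explicit data distribution or message passing required:

    #include <omp.h>

    /* One Jacobi relaxation sweep: reads src, writes dst, so splitting
       the iterations across cores is race-free. Every core sees the
       same shared arrays; nothing has to be partitioned by hand. */
    void jacobi_sweep(const double *src, double *dst, int n)
    {
        #pragma omp parallel for
        for (int i = 1; i < n - 1; i++)
            dst[i] = 0.5 * (src[i - 1] + src[i + 1]);
    }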
Re: (Score:2)
Faster computation doesn't help communication-limited tasks. Faster communication doesn't help computation-limited tasks.
I thought the same thing. Years ago, with the massively parallel architectures, you could have said that massively parallel architectures don't help inherently serial tasks.
The other thing I wonder is how server and desktop tasks will drive the multi-core architecture. It may be the case that many of the common server and desktop tasks have massive I/O needs (gaming?). The current memory a
Re: (Score:3, Insightful)
Faster computation doesn't help communication-limited tasks. Faster communication doesn't help computation-limited tasks.
Computation is communication. It's communication between the CPU and memory.
The problem with multicore is that, as you add more cores, the increased bus contention causes the cores to stall so that they cannot compute. This is why many real supercomputers have memory local to each CPU. Cache memory can help, but just adding more cache per core yields diminishing returns. SMP will only get you so far in the supercomputer world. You have to go NUMA for performance, which means custom code and algorith
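A minimal sketch of that memory-local-to-each-CPU idea on Linux, using libnuma (an assumption: the library is installed; link with -lnuma). Each thread runs on one node and allocates from that node's memory, so its accesses never cross the interconnect:

    #include <numa.h>
    #include <stdio.h>

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this system\n");
            return 1;
        }
        int node = 0;
        numa_run_on_node(node);                    /* pin execution to this node */
        size_t sz = 64UL * 1024 * 1024;
        double *buf = numa_alloc_onnode(sz, node); /* memory physically on the same node */
        if (!buf)
            return 1;
        /* ... compute on buf: every access is node-local, no contention
           with cores on other nodes ... */
        numa_free(buf, sz);
        return 0;
    }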
Re: (Score:3, Insightful)
Yeah, if you buy Intel chips. Despite the fact that they are slower clock-for-clock than the new Intel chips, AMD's architecture was and is the way to go, which is of course why Intel has copied it (i7). If you properly architect the chips to contain all of the "proper" plumbing, then this becomes less of a problem. Unfortunately, Intel has for the past few years simply cobbled together "cores" that are nothing more than processors linked via a partially adequate bus. So when contention goes up they
Re: (Score:3, Informative)
http://www.cray.com/products/XMT.aspx
Rest assured, there are still people who know how to build them. They're just not quite as popular as they used to be, now that a middle manager who has no idea what the hell they're talking about can go to an upper manager with a spec sheet that's got 8 thousand processors on it and say, "Look! This one's got a whole ton more processors than that dumb Cray thing!"
Re:Time for vector processing again (Score:5, Informative)
Sorry, but that's not entirely correct: most supercomputers work on highly parallel problems [earthsimulator.org.uk] using numerical analysis [wikipedia.org] techniques. By definition the problem is broken up into millions of smaller problems [bluebrain.epfl.ch] that make ideal "small apps"; a common consequence is that the bandwidth of the communications between the 'small apps' becomes the limiting factor (see the sketch at the end of this comment).
"Back in the early-mid 90's we had different processors for Desktop and Super Computers."
The Earth Simulator was referred to in some parts as 'Computenik'; its speed jump over its nearest rival and its longevity at the top marked the renaissance [lbl.gov] of "vector processing" after it had been largely ignored during the '90s.
In the end a supercomputer is a purpose-built machine; if cores fit the purpose then they will be used.
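As promised above, a minimal sketch of how those 'small apps' talk in practice: a 1-D domain decomposition with MPI, where each rank owns a slice of the array and the per-step communication is only the boundary "halo" cells (the function and variable names here are illustrative, not from any particular code):

    #include <mpi.h>

    /* u[0] and u[n+1] are ghost cells; u[1..n] is this rank's slice.
       left/right are neighbor ranks (MPI_PROC_NULL at the domain edges,
       which turns the calls into no-ops). */
    void halo_exchange(double *u, int n, int left, int right, MPI_Comm comm)
    {
        /* send my first real cell left, receive my left ghost cell */
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                     &u[n + 1], 1, MPI_DOUBLE, right, 0,
                     comm, MPI_STATUS_IGNORE);
        /* mirror image on the right */
        MPI_Sendrecv(&u[n], 1, MPI_DOUBLE, right, 1,
                     &u[0], 1, MPI_DOUBLE, left, 1,
                     comm, MPI_STATUS_IGNORE);
    }

As you add ranks, each slice's compute shrinks while the halo traffic per rank stays constant, which is exactly how communication bandwidth ends up as the limiting factor.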
Re: (Score:2)
Back in the 90s, there were custom super-computer processors (both vector and scalar), that were faster than desktop processors for all supercomputing tasks. This hit a wall, as the desktop processors became faster than the custom processors, at least for some tasks. If you can get a processor that's faster for some tasks and slower for others, but costs 1/10th the price of the other, you're probably going to go with the cheap one. The world has petaflop computers because of the move to commodity parts. Noo
Re: (Score:2)
Early supercomputers were built from custom chips designed for specific applications along with a custom network topology. This may have reduced the energy demands of the system, but meant that the system was good for one application only.
Also, different supercomputers would have different network topologies depending upon the application. It became immediately obvious that a single bus shared between a group of CPUs wasn't going to achieve peak performance, so different architectures were developed for ea
Re: (Score:2, Funny)
That does it...I'm not buying a supercomputer this Christmas!
Re: (Score:2)
>> I'd love to see some new technologies.
Yeah, it would be nice to see quantum computing ( http://en.wikipedia.org/wiki/Quantum_Computer [wikipedia.org] ) finally add a couple of arbitrary integers. Despite the many publications on the subject, it smells like the superstring theory of computing. I hope that's not the case.
Re: (Score:2)
Sounds like it's time for supercomputers to go their own way again. I'd love to see some new technologies.
While supercomputers might have to come up with unique architectures again, vector processing isn't it. The issue here is the total bandwidth to a single shared view of a large amount of memory. Off-the-shelf PC cores with their SIMD units are already too fast for the available memory bandwidth; swapping those cores out for vector units won't do anything to solve the problem. (Especially if you were to go back to the original Cray approach of streaming vectors from main memory with no caches, which would just
Re: (Score:3, Informative)
Cray did not stream vectors from memory. One of the advances of the Cray-1 was the use of vector registers as opposed to, for example, the Burroughs machines which streamed vectors directly to/from memory.
We know how to build memory systems that can handle large vectors. Both the Cray X1 and Earth Simulator demonstrate that. The problem is that those memory systems are currently too expensive. We are going to see more and more vector processing in commodity processors.
Re: (Score:2)
So it had a tiny 4 kilobyte, manually allocated cache (aka vector registers). But yes, they still had to be juggled by streaming vectors to and from memory. That approach still requires more memory bandwidth than current cache architectures, and it wouldn't solve the memory issues any more effectively than today's common designs, which just happen to stream cache blocks instead of vectors.
There are still vector processors out there. (Score:3, Insightful)
NEC still makes the SX-9 vector system, and Cray still sells X2 blades that can be installed into their XT5 supers. So vector processors are available; they just aren't very popular, mostly due to cost per flop.
A vector processor implements an instruction set that is slightly better than a scalar processor at doing math, considerably worse than a scalar processor at branch-heavy code, but orders of magnitude better in terms of memory bandwidth. The X2, for example, has 4 25-gigaflop cores per node, which share 64 cha
Well doh (Score:5, Insightful)
If you make a simulation like that while keeping the memory interface constant, then of course you'll see diminishing returns. That's why we're no longer running plain old FSBs: AMD has HyperTransport, Intel has QPI, the AMD Horus system expands it up to 32 sockets / 128 cores, and I'm sure something similar can and will be built as a supercomputer backplane. The headline is more than a little sensationalist...
Re:Well doh (Score:5, Insightful)
There are limits, however, to what you can do.
It's not like multi-processor systems, where each CPU gets its own RAM.
Re:Well doh (Score:5, Interesting)
You have failed to notice that AMD is already on top of this and can add more memory channels to their processors as needed for the application. This may increase the number of pins the processor has, but that is to be expected.
You may not have noticed, but there is a difference between AMD Opteron and Phenom processors beyond just the price. The base CPU design may be the same, but AMD and Intel can make special versions of their chips for the supercomputer market and have them work well.
In the worst case, with support from AMD or Intel, a new CPU with extra pins (and an increased die size) could add as many channels of memory support as required for the application. This is another area where spinning off the fab business might come in handy.
And yes, this might be a bit expensive, but have you seen the price of a supercomputer?
Re: (Score:2)
No, but by using dual or quad cores with a crapload of RAM each, you do get a benefit.
A 128-processor quad-core supercomputer will be faster than a 128-processor single-core supercomputer.
You get a benefit.
Re: (Score:3, Informative)
Actually, that is part of the problem. Most architectures have core-specific L1 cache, and unless a particular thread has its affinity mapped to a particular core, a thread can jump from a core where its data is in the L1 cache to a core where its data is not present, and is forced to undergo a cache refill from memory.
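A minimal sketch of pinning a thread's affinity on Linux (pthread_setaffinity_np is a glibc extension, hence the _GNU_SOURCE define), so the thread can't wander away from the core whose cache holds its data:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    /* Pin the calling thread to one core so its cached working set
       stays hot instead of being refilled after every migration. */
    int pin_to_core(int core)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }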
Also, regardless of whether a system is multi-processing within a chip (multi-core) or on a board (multi-CPU), the number of communication channels required to avoid communication bott
Re: (Score:2)
So yes, we are probably seeing the beginning of the end of performance gains from general-purpose CPU interconnects, and we'll have to go back to vector processing - unless we are somehow able to jump the heat-dissipation barrier and start raising GHz again.
That's what the superconducting FETs [newscientist.com] are for, just wait a few years / couple decades for them to get something that can be made on an IC and works at liquid nitrogen temperatures.
Re: (Score:3, Informative)
The summary mentions that the path to the memory controller gets clogged.
There is only so much bandwidth to go around.
Re: (Score:2)
With any pool of memory, there is some limit to latency and throughput. Thus, the more processing elements you throw at the problem the more they compete for this resource.
Now, if you can separate this memory into discrete pools associated with each processing element, you have less contention (locally) hence the possibility of lower latency and higher throughput (locally).
If you can design your algorithms such that multiple writers (and hopefully readers) to the same location happen infrequently, then you
Re: (Score:2, Informative)
There's no real way to split the banks for each core, so the net effect is that you have 4-32 cores sharing the same lanes for memory.
Re: (Score:3, Interesting)
A multi-channel memory controller is my response to this. Remember how going to a dual-channel memory controller increased the available bandwidth to memory? Support for even 32 banks of memory could be implemented if the CPU design and connections are there.
You are thinking along the lines of current computers, not of the applications. People keep quoting the old statement that 640 KB should be enough memory for anyone, but then repeat the same mistake they quote. Quantity of memory not only go
Re: (Score:2)
A dual-channel memory controller used to be the exception, not the rule, but now the idea is very common. In time, a 32 channel memory controller will be the standard even in an average home computer. How those channels are used to talk to memory of course remains to be seen, but you get the idea.
Where are you going to find enough pins for that, or the space for the DIMM slots?
I'd expect either (1) completely on-die or in-package memory (and fixed memory per core) or (2) some sort of stackable chips (how would this interact with heatsinks?) where your CPU has a grid of contacts on top and the memory has a grid on the bottom, which should allow for much higher speeds because you're not driving some hugely long set of wires.
Unganged channels = already non shared lanes today (Score:5, Insightful)
The issue is with a single processor that has multiple cores.
There's no real way to split the banks for each core, so the net effect is that you have 4-32 cores sharing the same lanes for memory.
No, sorry. That's how Phenom processors *already* work.
Each physical CPU package has two 64-bit memory controllers, each controlling a separate bank of 64-bit DDR2 memory chips (each of the two banks in a dual-channel motherboard).
Phenoms have two modes of operation:
- Ganged: both memory controllers work in parallel, as if they were one huge 128-bit memory connection. That's how dual channel has worked since it was invented.
That's good for systems running a few very bandwidth-hungry applications (for example: benchmarks).
- Unganged: each memory controller works on its own. Thus you have two completely separate 64-bit memory channels accessible at the same time. By laying out the applications in memory correctly, thanks to a NUMA-aware OS (anything better than Windows Vista), two separate applications can each access their own memory at the exact same moment, although at only half the bandwidth *per process* (but still the same total bandwidth across all processes running at the same time on a multi-core chip).
This is perfect for systems running lots of tasks in parallel, and is the default mode on most BIOSes I've seen.
This gives a tremendous boost to heavily multi-tasked workloads (a busy database server, for example), and it's what TFA's authors are looking for.
Probably at some point in the future Intel will follow the same trend with its QPI processors.
Also, the future trend is to multiply the memory channels on the CPU: Intel has already planned triple-channel DDR3 for their high-end server Xeons (the first crop of QPI chips), and AMD has announced 4 memory channels for their future 6- and 12-core chips targeting the G34 socket.
So the net effect of unganged dual channel is that today you already have 4 cores with a choice of 2 sets of memory lanes, and within a year you'll have 6 to 12 cores sharing 4 sets of memory lanes.
By the time you reach 32 cores per CPU, probably almost every socket will have its own dedicated memory channel (probably with the help of some technology that communicates serially over fewer lines, like FB-DIMM), or even weirder memory interfaces (who knows? maybe DDR-6 will be able to serve several simultaneous accesses to the same memory module).
So, well, once again, it proves that running stupid simulations without taking into account that other technologies will improve besides the number of cores* yields stupid, unrealistic results.
Shame on TFA's author, because the trend toward increased bandwidth has already started. A little more background research would have avoided this kind of stupidity.
But on the other hand, they would have missed the opportunity to publish an alarmist article with an eye catching title.
--
*: Although, yes, the number of cores you can slap inside the same package seems to be the "new megahertz" in the manufacturers' race, with some, like Intel, trying to increase this number faster without putting much effort into the rest.
Re: (Score:2)
This really is a problem that doesn't exist. The issue at hand is that if you have all cores cranking away, you run out of bandwidth. Simple solution: don't run all cores, and continue to scale horizontally as they currently do. So if an 8-core CPU has the bandwidth you need, only buy 8-core CPUs. If your CPUs run out of bandwidth at 16 cores (or whatever), then only buy up to 16-core CPUs, passing on the 32-core CPUs.
Wow, that's a hard problem to solve. Next.
Yeah! (Score:5, Funny)
Once we get to 32- or 64-core CPUs that cost less than $100 (say, five years), I'd HATE to have a Beowulf cluster of those!
So what does it mean for PCs? (Score:4, Insightful)
>>>"After about 8 cores, there's no improvement," says James Peery, director of computation, computers, information, and mathematics at Sandia. "At 16 cores, it looks like 2 cores."
>>>
That's interesting, but how does it affect us, the users of "personal computers"? Can we extrapolate that buying a CPU with more than 8 cores is a waste of dollars, because it will actually run slower?
Re: (Score:2)
I've only ever used a QuadCore PC once in my life:
- Core 1 was 100% utilized.
- Core 2 was only 25%.
- Cores 3 and 4 were sitting idle doing nothing.
It was clocked at 2000 megahertz, and based upon what I observed, it doesn't look like I'm "hurting" myself by sticking with my "single-core" 3100 megahertz Pentium. The multicores don't seem to be used very well by Windows, and my single-core Pentium might actually be faster for my main purpose (web browsing/watching TV shows).
Re: (Score:2)
Image processing was a bad example for you to use, as it lends itself well to multi-threaded operations.
Re: (Score:2)
I think that depends upon the person and how they use their machine. I got a huge increase in performance going from 2 to 4 cores, but I often have some heavy multitasking going on.
Re: (Score:2)
Make it a car analogy: you transport cargo, and each car = a core. You can only transport 2 cargo loads down a 2-lane road; adding a 3rd car makes them all go slower... and thinking the engineers aren't considering expanding the road is idiotic.
Adding c
It's so obvious... (Score:4, Interesting)
That to remove the 'memory wall', main memory and CPU will have to be integrated.
I mean, look at general-purpose computing systems past and present: there is a somewhat constant relation between CPU speed and memory size. Ever seen a 1 MHz system with a GB of RAM? Ever seen a GHz CPU coupled with a single KB of RAM? Why not? Because, with very few exceptions, heavier compute loads also require more memory space.
Just like the line between GPU and CPU is slowly blurring, it's just obvious that the parts with the most intensive communication should be the parts closest together. Instead of doubling the number of cores from 8 to 16, why not use those extra transistors to stack main memory directly on top of the CPU core(s)? Main memory would then be split up into little sections, with each section on top of a particular CPU core. I read somewhere that semiconductor processes that are suitable for CPUs aren't that good for memory chips (and vice versa) - I don't know if that's true, but if so, let the engineers figure that out.
Of course, things are different with supercomputers. If you have 1000 'processing units', where each PU would consist of, say, 32 cores and some GBs of RAM on a single die, that would create a memory wall between 'local' and 'remote' memory. The on-die section of main memory would be accessible at near CPU speed; main memory that is part of other PUs would be 'remote', and slow. Hey, wait, that sounds like a compute cluster of some kind... (so scientists already know how to deal with it).
Perhaps the trick would be to make access to memory found on one of the other PUs transparent, so that programming-wise there's no visible distinction between 'local' and 'remote' memory, with some intelligent routing to migrate blocks of data closer to the core(s) that access them. Maybe that could be done in hardware; maybe it's better done at the software level. Either way, the technology isn't the problem - it's an architectural/software problem.
Re: (Score:2)
P.S. Your idea of putting memory on the CPU is certainly workable. The very first CPU to integrate memory was the 80486 (8 kilobyte cache), so the idea has been proven sound since at least 1990.
Re: (Score:2)
I seem to recall the 68020 (1984?) having an instruction cache. (Though a lot smaller than 8k, if I recall.)
Re: (Score:2)
You are correct. The Motorola 68020 had 1/4 kilobyte of memory onboard, and was also a true 32-bit processor in 1984.
I should have known. Motorola CPUs were always more advanced than Intel's. Of course I'm biased, since I always preferred Amigas and Macs. ;-)
Re: (Score:2)
Or maybe you're not biased, and preferred those machines because of their more advanced tech. :-)
Re:It's so obvious... (Score:4, Informative)
You mean something like a CPU cache? I assume you know that every core already has its own cache (L1) on multi-core [wikipedia.org] systems, and shares a larger cache (L2) among all cores.
The problem is that on/near-core memory is damn expensive, and your average supercomputing task requires significant amounts of memory. When the bottleneck for high performance computing becomes memory bandwidth instead of interconnect/network bandwidth you have something a lot harder to optimize, so I can understand where the complaint in IEEE comes from.
Perhaps this will lead to CPUs with large L1 caches specifically for supercomputing tasks, who knows...
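In the meantime, the standard software answer when memory bandwidth is the wall is to restructure loops so data is reused while it still sits in cache. A minimal sketch of loop tiling (the block size is an arbitrary guess; in practice you tune it to the cache):

    #define BLOCK 64  /* block edge: an arbitrary guess, tune to the cache */

    /* Tiled matrix transpose: walk the matrix in BLOCK x BLOCK tiles so
       each cache line pulled from src is fully used before eviction. */
    void transpose(double *dst, const double *src, int n)
    {
        for (int ii = 0; ii < n; ii += BLOCK)
            for (int jj = 0; jj < n; jj += BLOCK)
                for (int i = ii; i < ii + BLOCK && i < n; i++)
                    for (int j = jj; j < jj + BLOCK && j < n; j++)
                        dst[j * n + i] = src[i * n + j];
    }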
Re: (Score:2)
Perhaps this will lead to CPUs with large L1 caches specifically for supercomputing tasks, who knows...
Even discounting price concerns, L1 caches can only grow a certain amount. As the capacity increases, so does the search time for the data, until you find yourself with access times equivalent to the next level down the cache hierarchy, thus negating the use of L1. L1 needs to be /quite/ fast for it to be worthwhile.
Re: (Score:3, Insightful)
Of course, things are different with supercomputers. If you have 1000 'processing units', where each PU would consist of, say, 32 cores and some GBs of RAM on a single die, that would create a memory wall between 'local' and 'remote' memory. The on-die section of main memory would be accessible at near CPU speed; main memory that is part of other PUs would be 'remote', and slow. Hey, wait, that sounds like a compute cluster of some kind... (so scientists already know how to deal with it).
It also sounds like you are describing the Cell processor setup. Each SPU has local memory on-die but cannot do operations on main memory (remote). Each SPU also has a DMA engine that will grab data from main memory and bring it into its local store. The good thing is you can overlap the DMA transfer and the computation, so the SPUs are constantly burning through computation.
This does help against the memory wall, and it's a big reason why Roadrunner is so damn fast.
Re: (Score:2)
Are you sure that wasn't a megabyte of RAM?
Multiple CPUs? (Score:5, Insightful)
This doesn't quite make sense to me. You wouldn't replace a 64-CPU supercomputer with a single 64-core CPU; you would instead use 64 multicore CPUs. As production switches to multicore, the cost of producing multiple cores will be about the same as for the single-core CPUs of old. So eventually you'll get 4 cores for the price of 2, then 8 cores for the price of 4, then 16 for the price of 8, etc. So the extra cores in the CPUs of a supercomputer are like a bonus, and if software can be written to utilize those extra cores in some way that benefits performance, then that's a good thing.
The problem allegedly being.. (Score:5, Informative)
For a given node count, we've seen increases in performance. The claimed problem is that for the workloads that concern these researchers, nobody is projecting significant enhancements to the fundamental memory architecture that would keep pace with the scaling of multi-core systems. So you buy a 16-core-chip system to upgrade your quad-core-based system and hypothetically gain little despite the expense. Power efficiencies drop, and getting more performance requires more nodes. Additionally, who is to say that clock speeds won't fall if programming models in the mass market change such that distributed workloads are common and single-core performance isn't all that important?
All that said, talk beyond 6-core/8-core is mostly grandstanding at this time. Since memory architecture for the mass market is not considered intrinsically exciting, I would wager there will be advancements that no one talks up. For example, Nehalem leapfrogs AMD memory bandwidth by a large margin (by about a factor of 2). That means that if Shanghai parts are considered satisfactory today, memory-wise, to support four cores, then Nehalem, by that particular metric, supports 8 equally satisfactorily. The whole picture is a tad more complicated (i.e., latency, numbers I don't know offhand), but that one metric is a highly important one in the supercomputer field.
For all the worry over memory bandwidth, though, it hasn't stopped supercomputer purchasers from buying into Core 2 all this time. Despite improvements in its chipset, Intel's Core 2 still doesn't reach AMD's memory performance; yet people spending money to get into the Top500 have still chosen to put their money on Core 2 in general. Sure, the Cray and IBM supercomputers in the top two spots used AMD, but from the time of its release, Core 2 has decimated AMD's supercomputer market share despite an inferior memory architecture.
as expected (Score:3, Funny)
"A supercomputer is a device for turning compute-bound problems into I/O-bound problems."
-Ken Batcher
Simple, if it doesn't work, don't use it. (Score:2)
What's distressing here? That they have to keep building supercomputers the same way they always have? I worked with an ex-IBMer from their supercomputing algorithms department, and he and I BSed about future chip performance a lot in the late-2006 to early-2007 timeframe. We were both convinced that the current approaches to CPU design were going to top out in usefulness at 8 to maybe 16 cores, due to memory bandwidth.
I guess the guys at Sandia had to do a little more than BS about it before they published
Ok. soooo.... (Score:2)
Because of limited memory bandwidth and memory-management schemes that are poorly suited to supercomputers, the performance of these machines would level off or even decline with more cores.
So increase the memory bandwidth to something more suited to supercomputers, then. Design and build a supercomputer for supercomputer purposes. You are scientists using supercomputers, not kids begging mom for a new laptop at Christmas. Make it happen.
Well, duh.... (Score:3, Insightful)
It's hardly any secret that CPU speed, even for single-core processors, has been running ahead of memory bandwidth gains for years - that's why we have cache, and ever-increasing amounts of it. It's also hardly a revelation that if you're sharing your memory bandwidth between multiple cores, then the bandwidth available per core is less than if you weren't sharing. Obviously you need to keep the amount of cache per core and the number of cores per machine (or, more precisely, per unit of memory-subsystem bandwidth) within reasonable bounds to keep it usable for general-purpose applications, or else you'll end up in GPU-CPU (e.g., CUDA) territory, where you're totally memory-constrained and applicability is much less universal.
For cluster-based ("supercomputer") applications, partitioning between nodes is always going to be an issue in optimizing performance for a given architecture, and available memory bandwidth per node and per core is obviously part of that equation. Moreover, even if CPU designers do add more cores per processor than is useful for some applications, no one is forcing you to use them. The cost per CPU is going to remain approximately fixed, so extra cores per CPU essentially come for free. A library like pthreads, and different implementations of it (coroutine- vs. LWP-based), gives you flexibility over the mapping of threads to cores, and your overall across-node application partitioning gives you control over how much memory bandwidth per node you need.
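A minimal sketch of that thread-to-core mapping with plain pthreads (the worker function is a hypothetical placeholder; the OS, or explicit affinity calls, decides the final placement):

    #include <pthread.h>
    #include <unistd.h>

    /* Hypothetical placeholder for the per-thread share of the work. */
    static void *worker(void *arg) { (void)arg; return NULL; }

    int main(void)
    {
        long n = sysconf(_SC_NPROCESSORS_ONLN); /* cores visible to the OS */
        if (n < 1 || n > 256) n = 1;
        pthread_t tid[256];
        /* one thread per core: match the parallelism to the hardware */
        for (long i = 0; i < n; i++)
            pthread_create(&tid[i], NULL, worker, (void *)i);
        for (long i = 0; i < n; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }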
Multicore is a fallback plan (Score:2)
It's worth noting that multicore CPUs are just a plan B technology. What the market really wants is faster CPUs, but the current old technology can't deliver them, so CPU makers are trying to convince people that multicore is a good idea.
And now for the local news (Score:2)
This just in:
* Intel sucks at making zillion-dollar computers
* AMD sucks at everything
* Supercomputer engineers are worried for their jobs
I realize these people have a legitimate complaint, but quite frankly, if you're worried about a certain processor affecting your code, maybe you suck at programming?! So what if the internal bandwidth is ho-hum? These old dogs need to stop complaining and learn to adapt, or else their overpaid jobs will be given to others who can.
Re: (Score:2)
Well, we are talking about CPU-to-RAM, not the hard drive, but it's a similar process: the RAM is orders of magnitude slower than the CPU. When the CPU talks to the RAM, it goes over the bus to the RAM and back through the bus to the CPU. With single-core CPUs you can have a bus for each core, which is like adding more lanes to a highway: it allows more traffic, so the CPU, while it may still be waiting on the RAM, will be faster, as you are not waiting for your bits because another core requested some
Re: (Score:3, Interesting)
So you're saying that next-generation processors need a gig of cache, plus 4 gigs of RAM.
I think what is really needed is new OS designs - something no longer tied quite as closely to the hardware, so that new hardware ideas can be tried.
Re: (Score:2)
Like on the 386 zero-wait-state computers.
Re: (Score:2)
Would optical get around such a barrier?
Physical space seems to be one of the major hurdles in CPU design today, due to leakage with the ever shrinking processes.
And I think it is about damn time that new silicon laser receiver thing (I forget the details) was put into implementation and testing.
IBM is already working on it. Stay tuned.