Next-Gen Processor Unveiled
A bunch of readers sent us word on the prototype for a new general-purpose processor with the potential to reach trillions of calculations per second. TRIPS (obligatory back-formation given in the article) was designed and built by a team at the University of Texas at Austin. The TRIPS chip is a demonstration of a new class of processing architectures called Explicit Data Graph Execution. Each TRIPS chip contains two processing cores, each of which can issue 16 operations per cycle with up to 1,024 instructions in flight simultaneously. The article claims that current high-performance processors are typically designed to sustain a maximum execution rate of four operations per cycle.
I want one (Score:4, Insightful)
Hm... (Score:5, Insightful)
But it seems to me that we called this great new invention "vector processors" 15 years ago, and there's a reason they aren't around anymore.
"Many instructions in flight"=="huge pipeline flushes on context switches"+"huge branching penalities" anybody?
Next-Gen Business Model Unveiled (Score:3, Insightful)
2. Make sure google ads show up at the top of the page
3. Submit blog to slashdot
4. Profit
Gets rid of the register-file (Score:5, Insightful)
but... (Score:4, Insightful)
They point out that current multi-core architectures put a huge burden on the software developer. This is true, but their claim that this technology will relieve that burden is dubious. They mention, for example, that current processing cores can typically only perform 4 simultaneous operations per core, and imply that this is some kind of weakness. They completely fail to mention that the vast majority of applications running on those processors don't even use the 4 scheduling resources available in each core. In other words, the number of applications that would benefit from executing more than 4 simultaneous instructions in the same core is vanishingly small. This is why most current processors have stopped at 3 or 4: not because nobody has thought of pushing beyond that, but because it is expensive and yields very little return on the investment. Very few real-world users would see any performance benefit if the cores on the market were any wider than 3 or 4, and most of those users aren't even using the 4 that are currently available.
Certainly the ability to do 1,024 operations simultaneously in a single core is impressive. But it is not an ability that magically solves any of the current bottlenecks in multi-threaded software design. Most software application developers have difficulty figuring out what to do with multiple cores. Those same developers would have just as much (if not more) difficulty figuring out what to do with the extra resources in a core that can execute 1,024 simultaneous operations.
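The width argument above really comes down to data dependencies. A hedged sketch in Haskell (names and constants are illustrative, nothing from the article): a fold where each step needs the previous result can only retire one useful operation at a time no matter how wide the core is, while a map over independent elements is exactly the shape wide-issue hardware is hoping for.

```haskell
-- A serial dependency chain: step n needs the result of step n-1,
-- so extra issue width buys nothing here.
serialChain :: Int -> Int
serialChain n = foldl (\acc x -> acc * 31 + x) 0 [1 .. n]

-- Independent work: every element can in principle be computed
-- at the same time, which is what a wide core can exploit.
independent :: Int -> [Int]
independent n = map (\x -> x * 31 + 7) [1 .. n]

main :: IO ()
main = do
  print (serialChain 4)
  print (independent 4)
```

Most real code looks a lot more like the first function than the second, which is the parent's point.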
Re:TRIPS (Score:4, Insightful)
Re:Ix86 (Score:5, Insightful)
something which doesn't ultimately look like an x86.
Re:This is cool! (Score:3, Insightful)
Re:Better support for concurrency in Languages (Score:2, Insightful)
A lot of this is due to the fact that most popular languages right now do not support concurrency very well. Most common languages are stateful, and state and concurrency are rather antithetical to one another. The solution is to gradually evolve toward languages that solve this either by forsaking state (Haskell, Erlang) or by using something like transactional memory for encapsulating state in a way that is easy to deal with (Haskell's STM, Fortress (I think), maybe some others).
Concurrency is not that hard to do well in the right setting.
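For the curious, here's roughly what encapsulated state looks like with Haskell's STM. This is a toy account-transfer sketch (the names are mine, not from any real codebase): the transaction either commits atomically or retries on its own, so there are no locks to get wrong.

```haskell
import Control.Concurrent.STM

-- Move an amount between two shared balances atomically.
-- If another thread interferes mid-transaction, it simply retries.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  readTVarIO a >>= print  -- 70
  readTVarIO b >>= print  -- 30
```

Composing two transfers into one bigger atomic action is just `atomically (transfer a b 30 >> transfer b c 10)`, which is the part that's genuinely hard to do with locks.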
And before anyone claims that Haskell and Erlang are impractical, there are many examples of "real world" programs written in them.
A few nice and very useful ones are Yaws [hyber.org] (for Erlang) and Darcs [abridgegame.org] (for Haskell). There are many others (even Quake clones), which I won't bother listing, but people can find them easily if they look.
Regarding concurrency and its ease of use in these languages: I'm taking a machine learning class at the moment where most of the problems are computationally intensive and could benefit from multiple cores. I do all of my assignments in Haskell, and not only are my solutions often shorter than those of my classmates (and they often work fine the first time they compile), but it's usually trivial to let my application scale nicely to as many cores as I can throw at it. It's worth mentioning, by the way, that most algorithms in these classes are presented under the assumption that people are using imperative languages, and even then it's still easy. It takes a while to learn how to approach problems differently without mutable state, yes, but it's not as hard as some people make it out to be. I think it has more to do with the fact that people just don't like to learn anything new unless they're absolutely forced to, which is a pity.
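To make "scale to as many cores as I can throw at it" concrete, the usual trick is a parallel evaluation strategy from Control.Parallel.Strategies: the algorithm stays a plain map, and only the strategy changes (`expensive` below is a stand-in for real per-item work).

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- Stand-in for an expensive, independent per-item computation.
expensive :: Int -> Int
expensive n = sum [1 .. n]

main :: IO ()
main = do
  -- Same meaning as `map expensive`, but elements are evaluated in
  -- parallel across however many cores the runtime is given (+RTS -N).
  let results = parMap rdeepseq expensive [10000, 20000, 30000]
  print (sum results)
```

Swapping `map` for `parMap rdeepseq` is the whole change; purity guarantees the items really are independent, so the compiler and runtime don't have to guess.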
By the way, there is a nice presentation [mit.edu] from Tim Sweeney on what he would like future programming languages to look like, and there's a lot in there about functional programming, concurrency, and expressive (read: dependent) types.
Compilers Are Not Magic, or Why IA64 Didn't Work (Score:3, Insightful)
And it's true. You build a sea of ALUs and you sic some folks on hand coding all sorts of things to the machine, and you end up with some spectacular results.
The problem is that we still can't get a compiler to do a good job at it, for the most part. We thought we could, and we threw every bell and whistle into IA64 for a compiler-controlled architecture, and you've seen what we've ended up with. Many years later, the situation is still pretty much the same: the compiler can't do all that great a job with these sorts of machines.
Don't get me wrong, there are lots of good ideas in TRIPS and the various other academic projects like it, but I've yet to be convinced that it's useful in any kind of real codebase that's not hand-coded by an army of graduate students. For some tasks, that's an acceptable model -- it's been the model in the world of signal processing for quite a while (though becoming less so daily) -- but for most mainstream applications it just won't fly.
That, and it's hard for compilers to have knowledge about history. It's terribly important for optimization, and it's just hard to get into the compiler (though relatively easy to get into a branch predictor).