Supercomputing Hardware

Next-Gen Processor Unveiled

A bunch of readers sent us word on the prototype for a new general-purpose processor with the potential of reaching trillions of calculations per second. TRIPS (obligatory back-formation given in the article) was designed and built by a team at the University of Texas at Austin. The TRIPS chip is a demonstration of a new class of processing architectures called Explicit Data Graph Execution. Each TRIPS chip contains two processing cores, each of which can issue 16 operations per cycle with up to 1,024 instructions in flight simultaneously. The article claims that current high-performance processors are typically designed to sustain a maximum execution rate of four operations per cycle.
  • I want one (Score:4, Insightful)

    by Normal Dan ( 1053064 ) on Tuesday April 24, 2007 @03:36PM (#18861013)
    But when are they likely to be ready?
    • by ackthpt ( 218170 ) * on Tuesday April 24, 2007 @03:51PM (#18861275) Homepage Journal

      But when are they likely to be ready?



      • You know they'll be ready when Intel places large orders for aluminium for heatsinks.
      • You know they'll be ready when there's a sudden drop in prices of the current Hot CPUs, which are all proven but suddenly look like last month's pizza from under the couch.
      • You know they'll be ready when AMD hasn't said anything and they are suddenly shipping them, while Intel tells you in 9 mos. then suddenly says 3 mos. (and you can hear the whips cracking through the walls.)
      • You know they'll be ready when Microsoft doesn't have an operating system ready, but there are a dozen Linux distros good to go.
  • Hm... (Score:5, Insightful)

    by imsabbel ( 611519 ) on Tuesday April 24, 2007 @03:37PM (#18861039)
    The article contains little more information than the blurb.
    But it seems to me that we called this great new invention "vector processors" 15 years ago, and there is a reason they aren't around anymore.
    "Many instructions in flight" == "huge pipeline flushes on context switches" + "huge branching penalties", anybody?
    • Re:Hm... (Score:5, Interesting)

      by superpulpsicle ( 533373 ) on Tuesday April 24, 2007 @03:42PM (#18861121)
      Come on now. It's a capitalist market. You can't just innovate your way to fame. Just like the list of 5 million other patents that never see the light of day.
    • Re:Hm... (Score:5, Informative)

      by volsung ( 378 ) <stan@mtrr.org> on Tuesday April 24, 2007 @03:42PM (#18861133)

      The vector processors never went away. They just became your graphics card: 128 floating point units at your command [nvidia.com]

      BTW, here is a real article on TRIPS [utexas.edu].

    • Re: (Score:3, Informative)

      by Anonymous Coward
      Actually, it is more like the dataflow architectures from the 70s. Vector processors are a totally different kind of thing (SIMD).

      The idea is simple: instead of discovering instruction-level parallelism by checking dependencies and anti-dependencies through global names (registers), define the dependencies directly, by relating instructions to one another.
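
      For what it's worth, here's a toy Python contrast of the two encodings (the field names are invented for illustration; see the TRIPS papers for the real block format):

        # Register-named dependencies: the hardware must rediscover that
        # ops 1 and 2 depend on op 0 by matching the name "r3" at runtime.
        risc = [
            {"id": 0, "op": "add", "in": ("r1", "r2"), "out": "r3"},
            {"id": 1, "op": "mul", "in": ("r3", "r4"), "out": "r5"},
            {"id": 2, "op": "sub", "in": ("r3", "r6"), "out": "r7"},
        ]

        # Dataflow-style encoding: op 0 names its consumers directly, so
        # the dependence graph arrives pre-built in the instruction stream.
        edge = [
            {"id": 0, "op": "add", "targets": [(1, "left"), (2, "left")]},
            {"id": 1, "op": "mul", "targets": []},  # result leaves the block
            {"id": 2, "op": "sub", "targets": []},
        ]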

      > "Many instructions in flight"=="huge pipeline flushes on context switches"+"huge branching penalities" anybody?

      That equality does not exist. It is a wid
    • Also, with the move towards multi-core there is the potential that special tasks could stay on one processor for their entire lifetime. This would renew the case for vector processing in main CPUs.
    • You can read more about it here [utexas.edu]...

      Actually, from what I can tell it's more like a VLIW with its program chopped up into horizontal and vertical microcode "chunks" for more efficient register forwarding, than a vector processor...

      I figure that it chops up the code into 128-instruction chunks (or smaller if there are branch dependencies that can't be handled with predicates) and schedules it horizontally (the classic wide VLIW microcode which feeds independent instruction pipelines), and vertically (the sequenc
    • by naoursla ( 99850 )
      I only have a passing familiarity with TRIPS but I think that one of the goals was to get rid of the huge costs for pipeline flushes.

      Doug Burger [utexas.edu] is one of the main PIs on this project (which is around seven years old at this point). I'm sure you can find more information there if you are interested.
    • I'd say it looks more like VLIW or EPIC. Instructions grouped into blocks with no looping or data dependence, running in parallel. It looks like a 16-wide Itanium, with all the same problems actually generating code that will run on it very well.
      • by MORB ( 793798 )
        There is detailed information available, including the ISA of their prototype.

        http://www.cs.utexas.edu/~trips/publications.html [utexas.edu]

        Instructions are grouped, but WITH data dependencies, which are explicitly encoded in the instruction stream; that means generating code that runs well on this thing doesn't sound that difficult. IANA compiler expert, but I think a compiler able to generate good code for this out of regular, scalar code sounds quite plausible.
    • by rbanffy ( 584143 )
      You can get away with little to no penalty on context switches by having a context file around. This way you keep the most-used contexts on chip and only hit memory when you swap a context to/from memory - and even that can have reduced impact if it happens while you are running other on-chip contexts.

      You could also avoid some switching by keeping micro-contexts - separating the context of the various units and letting the software deal with them independently. This way, if you only have use for 5 pro
    • Thank you for telling me that Earth [top500.org] Simulator [wikipedia.org] went away 15 years ago!
    • Perhaps the "next-gen" is specialized coprocessors?
    • Re:Hm... (Score:4, Informative)

      by SiJockey ( 702903 ) on Tuesday April 24, 2007 @10:47PM (#18865553) Homepage
      The big difference in TRIPS is that stuff flying around out in memory can be squashed easily. The machine has aggressive branch prediction, efficient predication support in the ISA, and data dependence prediction. So, the 1024 instructions don't need to be long vectors streaming from memory. Squashing a mispredicted branch and restarting down the right path takes on the order of 10-20 machine cycles. Thanks for your comments and interest. -DB
    • "But it seems to me that we called this great new invention "vector processors" 15 years ago, and there is a reason they arent around anymore."

      I'm willing to bet that you typed that on a machine with a vector processor. What happened is that they became integrated into general-purpose CPUs. The AltiVec unit in my Mac's PowerPC chip is a vector processor, as is the SSE unit in Intel CPUs.
  • by Anonymous Coward on Tuesday April 24, 2007 @03:38PM (#18861043)
    1. Copy some university press release to your blog
    2. Make sure google ads show up at the top of the page
    3. Submit blog to slashdot
    4. Profit
  • Marketing hype? (Score:5, Informative)

    by faragon ( 789704 ) on Tuesday April 24, 2007 @03:40PM (#18861085) Homepage
    Each TRIPS chip contains two processing cores, each of which can issue 16 operations per cycle with up to 1,024 instructions in flight simultaneously. Current high-performance processors are typically designed to sustain a maximum execution rate of four operations per cycle.

    Is it me, or are they paraphrasing, euphemistically, out-of-order execution [wikipedia.org]?
    • Re:Marketing hype? (Score:5, Informative)

      by Aadain2001 ( 684036 ) on Tuesday April 24, 2007 @04:02PM (#18861459) Journal
      Based on the article, "TRIPS" is nothing more than an Out-Of-Order (OOO) superscalar processor. So unless the article is grossly simplifying (possible), this is nothing but a PR stunt. And based on the quote from one of the professors about building it on "nanoscale" technology (um, we've been doing that for years now), my vote is pure PR BS.

      And as an aside, the reason modern CPUs are designed to "only" issue 4 instructions per cycle instead of 16 is that after years of careful research and testing on real-world applications, 4 instructions is almost always the maximum number any program can concurrently issue, due to issues like branches, cache misses, data dependencies, etc. Makes me question just how much these "professors" really know.

      • Re:Marketing hype? (Score:4, Interesting)

        by Doches ( 761288 ) <Doches@gma[ ]com ['il.' in gap]> on Tuesday April 24, 2007 @04:17PM (#18861713)
        Branches are no problem for TRIPS -- in the EDGE architecture, both execution paths resulting from a branch are computed, unlike in classic architectures where the processor blocks (8086), skips ahead a single instruction before blocking (MIPS), or chooses a path using a branch predictor and executes it, possibly only to discard all instructions issued since the branch if the predictor turns out to be wrong. EDGE architectures still lag on cache misses (or any memory hit) -- but that's fundamentally a problem with memory, not with the processor. Don't read the article, read the UT pdf.
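
        A minimal sketch of the compute-both-sides idea in Python (illustrative only; this is predication in miniature, not EDGE's actual mechanism):

          def branchy(x):
              if x > 0:            # conventional: predict, speculate, maybe flush
                  return x * 2
              return x + 10

          def predicated(x):
              p = x > 0            # the predicate is computed like any other value
              t_taken = x * 2      # both paths execute unconditionally...
              t_not_taken = x + 10
              return t_taken if p else t_not_taken   # ...and p selects one; nothing to flush

          assert all(branchy(x) == predicated(x) for x in range(-5, 6))
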
        • hmmmmm.... sounds almost exactly like IA64, something Intel has had since the turn of the century.
          • by renoX ( 11677 )
            No; in IA64, as in VLIW, the instructions are scheduled by the compiler (which works well on very regular code, poorly everywhere else), whereas in TRIPS the resources of each execution unit are used dynamically.

            From the paper "Scaling to the End of Silicon with EDGE Architectures", TRIPS ISAs are hardware-dependent though, which means that you'd have to recompile your applications each time you use a new CPU. If I understood correctly, this is a significant problem (that and the memory wall).
        • Re:Marketing hype? (Score:5, Interesting)

          by smallfries ( 601545 ) on Tuesday April 24, 2007 @06:12PM (#18863151) Homepage
          Multiway branching is ancient, and it's not used much because it's very inefficient. At least half of the instruction stream after a branch will be canceled; two branches deep it is 75%, and so on. No matter how much parallelism you throw at this there are only marginal gains to be made (an exponential increase in the number of execution units for a linear increase in depth). It still doesn't get around data dependencies, which will be the major bottleneck if looking that far ahead in the instruction stream.
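
          A quick back-of-the-envelope of that decay, assuming both sides of every branch are issued:

            for d in range(1, 6):
                paths = 2 ** d
                print(f"depth {d}: {paths} paths in flight, {1 - 1 / paths:.1%} of issued work cancelled")
            # depth 1: 2 paths, 50.0% cancelled ... depth 5: 32 paths, 96.9% cancelled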

          Having read the articles that were easy to get to, and the abstract of the PhD student: this is buzzword bollocks. There is no innovation in what they have done. As other people have pointed out this is a vector / datastream architecture. It's not a very good one at that. Although it has the "potential" to scale to teraflops, so does my toothbrush. On a 130nm process they can fit 2 cores with 32-wide dispatch clocked at 500MHz. My 7800 is fab'ed on a 130nm process with 24*4*4 = 384 operation wide vector dispatch. This prototype would hit about 16 billion ops/sec, versus 180 Gflops on the 7800. This is a long way from teraflops, and doesn't convince me that it can scale.

          As the 7800 is close to a systolic model there is a limited class of programs that can be executed; but those that are in that class exhibit (near)perfect parallelism and so have zero hit from memory access costs. Actually the internal bandwidth on the 7800 is a bottleneck for some computations but I'm just going for coarse detail here.

          EDGE appears to mix and match ideas from several parallel designs, every one of which suffers from hard code-generation problems. I suspect that the only sample applications that hit 32 ops / cycle are media apps (or dataflow problems, as they used to be called), which normal architectures run at high speed anyway.

          Interesting research, as it's always good to see people explore different designs, but it sounds overhyped and I believe that it has zero commercial appeal. Finally, as a sidenote, you are right about cache latencies being a memory defect rather than processor but there are ways around it. If you are willing to limit yourself to a certain class of applications (roughly the same one that executes well on most parallel architectures such as this, or GPUs) then you can completely avoid the latency. This provides a much bigger performance hike than any other technique as memory latency is a dominating factor in most runtimes. The only snag is that it is very hard to do, requires different fabrication technology (largely solved now), and lots of compiler advances... If you're interested then google for intelligent ram. It's about a decade of research now...
          • Re:Marketing hype? (Score:4, Interesting)

            by David Greene ( 463 ) on Tuesday April 24, 2007 @11:26PM (#18865891)

            ...this is buzzword bollocks.

            No, it isn't. The TRIPS group has done some really interesting things with compilers, for example. They've managed to have the compiler break up code into packets and schedule them on the processor array so that dependencies flow nicely across the grid. That is not an easy problem to tackle. This is very good research.

            I believe that it has zero commercial appeal.

            That's not the point of research. The point of research is to explore problems no one has tackled before, of course always with an eye toward future technology trends.

            • Apart from your selective misquoting you haven't really said anything. From your other posts I would guess that you are (loosely) associated with the project / people on it. Please back up your claim about breaking code up into packets (preferably with a paper citation), because if they have done that I would like to read it.

              That's not the point of research

              From the way that you misquoted me and then attacked a strawman, either you don't understand what research is about, or you do know but scoring points is

          • As the 7800 is close to a systolic model there is a limited class of programs that can be executed; but those that are in that class exhibit (near)perfect parallelism and so have zero hit from memory access costs. Actually the internal bandwidth on the 7800 is a bottleneck for some computations but I'm just going for coarse detail here.

            I had no idea Atari's third 8-bit console was so powerful. It's too bad they had to shelve the system when the market crash hit, and never gained back the developer or user
      • Re:Marketing hype? (Score:4, Informative)

        by SiJockey ( 702903 ) on Tuesday April 24, 2007 @10:56PM (#18865625) Homepage
        Actually, there is much more parallelism (more than 4 ops/cycle) available in many of these applications, but you correctly observe that many of these ancillary effects (branch mispredictions, cache misses, etc.) chip away at the achieved parallelism. The TRIPS ISA and microarchitecture (which is, as you correctly point out, a variant of an OOO "superscalar" processor) has numerous features to try to mitigate many of these effects ... up to 64 outstanding cache misses from the 1,024-entry window, aggressive predication to eliminate many branches, a memory dependence predictor, and direct ALU-ALU communication for making data dependences more efficient. The most important difference is in the ISA, which allows the compiler to express dataflow graphs directly to the hardware, which will work best (compared to convention) in ultra-small technologies where the wires are quite slow. To get a similar dependence graph in a RISC or CISC ISA, a superscalar processor must reconstruct it on the fly, instruction by instruction, using register renaming and issue window tag broadcasting. Thanks for reading.
      • by Quantam ( 870027 )
        "Based on the article, "TRIPS" is nothing more than a Out-Of-Order(OOO) SuperScalar based processor. So unless the article is grossly simplifying (possible), this is nothing but a PR stunt."

        I'd call it OOE perfected. EDGE allows OOE on scales orders of magnitude larger than current architectures can (and a few other benefits), using less (and less power-hungry) hardware. This is accomplished through a rather interesting paradigm inversion (I haven't seen anything like it before, though I don't exactly sc
    • Yes, TRIPS is an out-of-order superscalar processor. But it's bigger and better: by eliminating centralized structures, a TRIPS core can issue more instructions per cycle out of a bigger instruction window. It's not just more of the same; it's a qualitative improvement that allows much bigger (and thus higher-performance) cores to be built, yet with lower power and design costs.
    • by ghoul ( 157158 )
      Actually, TRIPS is somewhere between OOO and VLIW (Itanium). The explicit dataflow information embedded in the instruction enables much larger instruction windows than are possible in traditional OOO. And since instructions are not just loaded into the window in a dumb manner, it reduces the chances and costs of pipeline flushes.
  • TRIPS (obligatory back-formation given in the article)
    Is that to make people RTFA (Read The F[ine] Article), or because "Tera-op, Reliable, Intelligently adaptive Processing System" was 13 more characters than the submitter wanted to copy and paste?
  • by xocp ( 575023 ) on Tuesday April 24, 2007 @03:41PM (#18861101)
    A link to the U of Texas project website can be found here [utexas.edu].

    Key Innovations:

    Explicit Data Graph Execution (EDGE) instruction set architecture
    Scalable and distributed processor core composed of replicated heterogeneous tiles
    Non-uniform cache architecture and implementation
    On-chip networks for operands and data traffic
    Configurable on-chip memory system with capability to shift storage between cache and physical memory
    Composable processors constructed by aggregating homogeneous processor tiles
    Compiler algorithms and an implementation that create atomically executable blocks of code
    Spatial instruction scheduling algorithms and implementation
    TRIPS Hardware and Software
    • Re: (Score:3, Informative)

      by xocp ( 575023 )
      DARPA is the primary sponsor...
      Check out this writeup at HPC wire [hpcwire.com].

      A major design goal of the TRIPS architecture is to support "polymorphism," that is, the capability to provide high-performance execution for many different application domains. Polymorphism is one of the main capabilities sought by DARPA, TRIPS' principal sponsor. The objective is to enable a single processor to perform as if it were a heterogeneous set of special-purpose processors. The advantages of this approach, in terms of scalability and simplicity of design, are obvious.

      To implement polymorphism, the TRIPS architecture employs three levels of concurrency: instruction-level, thread-level and data-level parallelism (ILP, TLP, and DLP, respectively). At run-time, the grid of execution nodes can be dynamically reconfigured so that the hardware can obtain the best performance based on the type of concurrency inherent to the application. In this way, the TRIPS architecture can adapt to a broad range of application types, including desktop, signal processing, graphics, server, scientific and embedded.

    • With individual instructions no longer spitting results out to registers and, to quote you, "compiler algorithms and an implementation that create atomically executable blocks of code", does this not mean they can finally hide keys from us in the die of a general-purpose processor?
      • I doubt that DARPA cares much about DRM, and if AMD and Intel wanted to they could have already hidden encryption keys on their CPU's.
        • and if AMD and Intel wanted to they could have already hidden encryption keys on their CPU's.

          not true, otherwise they would not be general purpose because they would not run every piece of x86 software thrown at them.

          with the current architecture the key has to be in plaintext in one of the registers, which can then be dumped.

          in this proposed architecture it can be passed to the next instruction through huge contiguous blocks of code w/o touching a register.

          this also brings up the related issue of debugging

          • I disagree. They could quite easily add extensions to the microcode and architecture that would allow harder to break DRM (there is no such thing as impossible to break). The key(s) would leak due to human nature though so it would be a futile effort.
    • by faragon ( 789704 )
      Explicit Data Graph Execution (EDGE) instruction set architecture [wikipedia.org]?
      Scalable and distributed processor core composed of replicated heterogeneous tiles [wikipedia.org]?
      Non-uniform cache architecture and implementation [wikipedia.org]?
      ...

      Well, very disappointing when compared to other [wikipedia.org] modern [wikipedia.org] microprocessor [wikipedia.org] architectures [wikipedia.org]. Don't get me wrong, I love computer architecture, and the design seems interesting, but the "over-hype" is discouraging.
      • Explicit Data Graph Execution (EDGE) instruction set architecture?

        Exactly. As is explicitly stated in the PDF [utexas.edu] linked from this [slashdot.org] comment by volsung, TRIPS is an implementation of EDGE.
  • by jimicus ( 737525 ) on Tuesday April 24, 2007 @03:42PM (#18861129)
    Imagine a beowulf cluster of these!
  • So I assume it's software-compatible with 90% of the code that the 'general public' uses?

    It did say 'general purpose', and if you try to create something better but different, you get slapped down eventually (like PowerPC Apples).
    • Re:Ix86 (Score:5, Insightful)

      by convolvatron ( 176505 ) on Tuesday April 24, 2007 @03:59PM (#18861409)
      you are absolutely right. no one should ever do any research into something which doesn't ultimately look like an x86.
      • by nurb432 ( 527695 )
        I never meant that, I only meant that anything else seems to be a commercial dead end due to the market dominance.

        Having a compatibility layer will help prevent it from being a doomed project/product.
        • Somewhere in their pdfs, it says that the EDGE architecture should be better at emulating x86 than a VLIW or Itanic processor. If they can get some dynamic recompilation going, they should be good. (Though they will still have to scale beyond 500MHz.) It does seem pretty interesting having 16 ALUs per core, and not using registers for intermediate values.
    • by ghoul ( 157158 )
      I took Prof Burger's course and we studied the TRIPS processor. There are two parts to the project: one is the chip team, and the other is the software team led by Dr Katherine McKinley. The software team has developed emulators so that current code can run on the TRIPS processor. Of course emulation is never as good as native execution, but it does provide an upgrade path. The key thing to notice is that the upgrade path has been part of the thinking from the beginning.
    • There are only a few things that need a 10-500x increase in speed.

      Video transcoding.
      Rendering farms - need a $500 solution that can outdo a $35,000 solution, i.e. 10 x $35 chips on a $20 card + profit margin and a yearly software license.
      Folding-type apps.
      Nuclear/sci sims.

  • by DrDitto ( 962751 ) on Tuesday April 24, 2007 @03:47PM (#18861221)
    The EDGE architecture gets rid of relying on a single register file to communicate results between instructions. Instead, a producer-consumer ISA directly sends results to one of 128 instructions in a superblock (sort of like a basic block, but larger). In this way, hopefully more instruction-level parallelism can be extracted because superscalars can't really go beyond 4-wide (8-wide is a stretch...DEC was attempting this before Alpha was killed). Nice concept, but it doesn't solve many pressing problems in computer architecture, namely the memory wall and parallel programmability.
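
    Roughly, the firing rule looks like this in Python (a simplified sketch; the block format and slot names here are assumptions, not the real TRIPS encoding):

      OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

      # Computes (3 + 4) * (1 + 1). Each instruction lists the consumer
      # slots its result feeds; no shared register file sits between them.
      block = {
          0: {"op": "add", "slots": [3, 4], "targets": [(2, 0)]},
          1: {"op": "add", "slots": [1, 1], "targets": [(2, 1)]},
          2: {"op": "mul", "slots": [None, None], "targets": []},
      }

      ready = [i for i, ins in block.items() if None not in ins["slots"]]
      while ready:
          i = ready.pop()
          result = OPS[block[i]["op"]](*block[i]["slots"])
          for tgt, slot in block[i]["targets"]:     # forward straight to consumers
              block[tgt]["slots"][slot] = result
              if None not in block[tgt]["slots"]:   # a consumer fires once its slots fill
                  ready.append(tgt)
          print(f"instruction {i} fired -> {result}")
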
    • That's because they designed it in 2004, built the prototype in 2005, and some blogspam idiot is publicizing it in 2007.
    • Well, it gets rid of the isa-visible register file. That doesn't mean there aren't SRAM cells in there holding onto data. Don't confuse architecture and implementation.
      • by Tokerat ( 150341 )

        Well, it gets rid of the isa-visible register file.
        Anything that makes technical jargon sound less like Jar-Jar Binks is a win in my book.
      • by DrDitto ( 962751 )
        Fine. It gets rid of the complex, non-scalable register-bypass logic in the instruction window of an out-of-order superscalar.
  • This is cool! (Score:4, Informative)

    by adubey ( 82183 ) on Tuesday April 24, 2007 @03:50PM (#18861261)
    The link has NO information.

    The PDF here has more information about EDGE [utexas.edu].

    The basic idea is that CISC/RISC architectures rely on storing intermediate data in registers (or in main memory on old skool CISC). EDGE bypasses registers: the output of one instruction is fed directly to the input of the next. No need to do register allocation while compiling. I'm still reading the PDF, but this sounds like a really neat idea.

    The only question is, will this be so much better than existing ISAs to eventually replace them? -- even if only for specific applications like high-performance computing.
    • Re: (Score:3, Insightful)

      by treeves ( 963993 )
      If it's so cool, why did it take three years for us to hear about it? I'm really asking, not just trolling.
      • Because you read the wrong publications. Try IEEE and ACM digital library.

        Doug Burger's work has been known to computer scientists for years...
        • by treeves ( 963993 )
          That's what I mean. The link given by the poster I replied to was to an article from IEEE Computer from Jul 2004. I'm not a computer scientist so I don't regularly read those journals. But the question is, is it really "news for nerds" if it's three years old?
          • by treeves ( 963993 )
            Sorry for replying to my own post, but I guess the answer is that they devised the architecture three years ago, but just now have the actual thing in silicon. I should read more carefully or not rely on short-term memory of 30 seconds ago!
      • by ghoul ( 157158 )
        It's been in development for a while. Papers were published 3 years back, but the actual working prototype just came out last year, and after testing and debugging it was presented to the public at an event this month. Just like AMD has been talking about Greyhound for 3 years but won't release it till June this year.
    • Hmm. Interesting.

      I wonder how this differs from the dataflow architectures of the early 90s?

    • The implementation still has to rely on some registers to hold the intermediate computations, whether they are exposed in the ISA or hidden in the instruction dispatch unit. This doesn't seem that new. Tomasulo [wikipedia.org] used a similar idea in the FP unit of the IBM 360/91 back in 1967. The dataflow idea was extensively mined at MIT in the eighties. This seems to be just an implementation of the latter that uses the former, with a large number of functional units and reservation stations thrown in. Most modern mi
    • The only question is, will this be so much better than existing ISAs to eventually replace them? -- even if only for specific applications like high-performance computing.

      Or running Java. Or CLR.

  • by Manos_Of_Fate ( 1092793 ) <link226@gmail.com> on Tuesday April 24, 2007 @03:51PM (#18861291)
    It seems like for every "realist" claiming that Moore's law will soon hit a ceiling, I see another ZOMG Breakthrough! Lately, the question I've been asking myself is, "Will we ever surpass it?"
    • Will we ever surpass it?
      Doubtful, unless we see a new hardware player burst onto the scene. AMD made quite a splash, but they certainly didn't have the potential, nor do they today, to outpace Moore's Law. Intel still drives the hardware market.
    • Moore's Law is about the transistor density on the chip. A new processor design may help get more performance from the same transistor density, but it certainly doesn't do anything to increase the transistor density.

      Since there's a finite atom density on a chip, the transistor density will inevitably stop growing eventually.
  • but... (Score:4, Insightful)

    by Klintus Fang ( 988910 ) on Tuesday April 24, 2007 @03:55PM (#18861353)
    The motivations for this technology provided in the article ignore some rather basic facts.

    They point out that current multi-core architectures put a huge burden on the software developer. This is true, but their claim that this technology will relieve that burden is dubious. They mention, for example, that current processing cores can typically only perform 4 simultaneous operations per core, and imply that this is some kind of weakness. They completely fail to mention that the vast majority of applications running on those processors don't even use the 4 available scheduling resources in each core. In other words, the number of applications that would benefit from being able to execute more than 4 simultaneous instructions in the same core is vanishingly small. This is why most current processors have stopped at 3 or 4: not because they haven't thought of pushing it beyond that, but because it is expensive, and because it yields very little return on the investment. Very few real-world users would see any performance benefit if the current cores on the market were any wider than 3 or 4. Most of those users aren't even using the 4 that are currently available.

    Certainly the ability to do 1,024 operations simultaneously in a single core is impressive. But it is not an ability that magically solves any of the current bottlenecks in multi-threaded software design. Most software application developers have difficulty figuring out what to do with multiple cores. Those same developers would have just as much (if not more) difficulty figuring out what to do with the extra resources in a core that can execute 1,024 simultaneous operations.
    • Re:but... (Score:4, Informative)

      by $RANDOMLUSER ( 804576 ) on Tuesday April 24, 2007 @04:15PM (#18861667)
      Two words: loop unwinding [wikipedia.org]. This critter is perfect to run all iterations of (certain) loops in parallel, which would be determinable at compile time.
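
      A sketch of what the unrolled shape buys you (Python is used here only for the shape; a compiler would do this to machine code). The four accumulations in the body are independent of one another, so a wide machine could issue them in the same cycle:

        def summed(a):
            t0 = t1 = t2 = t3 = 0
            n = len(a) - len(a) % 4
            for i in range(0, n, 4):   # unrolled by 4: four independent chains
                t0 += a[i]
                t1 += a[i + 1]
                t2 += a[i + 2]
                t3 += a[i + 3]
            return t0 + t1 + t2 + t3 + sum(a[n:])   # leftover elements

        assert summed(list(range(10))) == sum(range(10))
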
      • Re: (Score:3, Interesting)

        of course loop unwinding works fine... when you have a long loop. It does, though, have two problems: 1) it only works when you have very long loops with very few dependencies between consecutive iterations; 2) even when it does work, it makes the code footprint of the application much bigger, which means you end up putting a lot more stress on your cache pipeline, requiring bigger caches and a wider fetch engine. And that all aside, what about the vast majority of code s
    • The benefit per simultaneous operation is not necessarily monotonically decreasing.

      Consider a loop with a medium-sized body, and iterations mostly independent. If there are enough simultaneous operations allowed to schedule multiple iterations through the loop at once, the loop could potentially run that many times faster. Now, with current designs, there aren't that many slots, and even if there were, the ISA makes it difficult to express this in a way that's useful to the processor. All we can do is O
    • by Anonymous Coward

      A lot of this is due to the fact that most popular languages right now do not support concurrency very well. Most common languages are stateful, and state and concurrency are rather antithetical to one another. The solution is to gradually evolve toward languages that solve this either by forsaking state (Haskell, Erlang) or by using something like transactional memory for encapsulating state in a way that is easy to deal with (Haskell's STM, Fortress (I think), maybe some others).

      Concurrency is not that

      • When you can show me a distributed memory parallel weather forecasting or climate prediction code written in Haskell, i.e. something that runs and scales well on large Linux clusters, has high interprocessor communication needs (both in terms of latency and bandwidth), and does a metric assload of floating point computations, I'll start to get interested. If you don't want to go so far as to include all the physics that go into weather and/or climate, just show me a Navier-Stokes simulator that has all thos
    • The big thing that all the commenters have missed that I've read so far is the fact that OOO execution is difficult not because it's hard to make many ALU's on a chip (vector design, anyone?) but because in a general-purpose processor the register file and routing complexity grows as N^2 in the number of units. That's bad. Every unit has to communicate with every other unit (via the register file or, more commonly, via bypasses to an OOO buffer for every stage prior to writeback). The issue being address
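
    A two-line illustration of that quadratic growth (full bypassing lets every unit's output feed every unit's input):

      for n in (4, 16, 64):
          print(f"{n} units -> {n * n} forwarding paths")
      # 4 -> 16, 16 -> 256, 64 -> 4096
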
      • Re: (Score:3, Interesting)

        by knowsalot ( 810875 )
        Oh, and before someone points this out for me, you have to imagine that the routing requirements are VASTLY improved. Imagine a grid of ALUs each connected by a single bus (simple), rather than 128 bypass busses all multiplexed into each ALU (chaos! don't forget the MUX logic!). You map one instruction to one (virtual) ALU, rather than one result to a (virtual) register. Then you pipeline/march each instruction with its partial data down the grid until all the inputs come in. Instructions continually
    • by shmlco ( 594907 )
      "They completely fail to mention that the vast majority of applications running on those processors don't even use the 4 available scheduling resources in each core."

      Yes, but that's primarily because most of those resources are specialized. One or two of those are integer paths, one's a branch system, another is floating point, and so on. If the current code block doesn't include any of those specialized instructions, then those particular execution paths sit there unused.
  • nothing spectacular (Score:5, Informative)

    by CBravo ( 35450 ) on Tuesday April 24, 2007 @04:08PM (#18861557)
    Right, let me begin by saying that after reading ftp://ftp.cs.utexas.edu/pub/dburger/papers/IEEECOMPUTER04_trips.pdf [utexas.edu] it became a bit clearer what they were talking about.

    It might sound very novel if you are only accustomed to normal processors. Look at MOVE http://www.everything2.com/index.pl?node_id=1032288&lastnode_id=0 [everything2.com] to see what transport-triggered architectures are about. They are more power efficient, etc etc.

    Secondly, they talk about how execution graphs are mapped onto their processing grid. I don't think any scheduler has a problem with scheduling an execution graph (or whatever name you give it) to an architecture. Generally, it can either be scheduled in time (there is a critical path somewhere) or it is scheduled with a certain degree of optimality (generally > 0.9 efficient). I don't see the gain in efficiency there.

    Now here comes the shameless self-plug. If you want to gain efficiency in scheduling a node of an execution graph you have to know which node is more critical than the others. The critical nodes (the ones on the critical path) need to be scheduled to the fast/optimized processing units, and the others can be scheduled to slow/efficient processing units (and they can absorb some communication delays without penalty). Look here [tudelft.nl] for my thesis: http://ce.et.tudelft.nl/publicationfiles/786_11_dhofstee_v1.0_18july2003_eindverslag.pdf
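
    A toy Python version of that criticality-driven assignment (the graph and the two-unit split are invented for illustration):

      from functools import lru_cache

      graph = {           # dataflow graph: node -> dependents
          "a": ["b", "c"],
          "b": ["d"],
          "c": ["d"],
          "d": [],
      }

      @lru_cache(maxsize=None)
      def cp_length(node):
          # longest dependence chain from this node to the end of the graph
          return 1 + max((cp_length(s) for s in graph[node]), default=0)

      # Nodes on the critical path get the fast/optimized units; the rest
      # can take slow/efficient units and absorb some communication delay.
      ranked = sorted(graph, key=cp_length, reverse=True)
      fast, slow = ranked[:2], ranked[2:]
      print("fast units get:", fast, "| slow units get:", slow)
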
  • will the new name be captain trips?
  • by coldmist ( 154493 ) on Tuesday April 24, 2007 @05:15PM (#18862577) Homepage

    Here is the slashdot article from 2003 about this processor: link [slashdot.org]

    The specs have been updated to 1024 from 512, but that's about it.

    Another 3-5 years out?

  • Don't dismiss it (Score:5, Informative)

    by er824 ( 307205 ) on Tuesday April 24, 2007 @05:31PM (#18862757)
    I apologize if I butcher some of the details, but I highly recommend that anyone interested peruse the TRIPS website.

    http://www.cs.utexas.edu/~trips/ [utexas.edu]

    They have several papers available that explain the rationale for the architecture.

    The designers of this architecture believed that conventional architectures were going to run into physical limitations that would prevent them from scaling further. One of the issues they foresaw was that, as feature size continued to shrink and die size continued to increase, chips would become susceptible to, and ultimately constrained by, wire delay - meaning the amount of time it takes to send a signal from one part of a chip to another would constrain the ultimate performance. To some extent, the shift in focus to multi-core CPUs validates some of their beliefs.

    To address the wire delay problem the architecture attempts to limit the length of signal paths through the CPU by having instructions send their results directly to their dependent instructions instead of using intermediate architectural registers. TRIPS is similar to VLIW in that many small instructions are grouped into larger instructions (Blocks) by the compiler. However it differs in how the operations within the block are scheduled.

    TRIPS does not depend on the compiler to schedule the operations making up a block the way a VLIW architecture does. Instead, the TRIPS compiler maps the individual operations making up a large TRIPS instruction block to a grid of execution units. Each execution unit in the grid has several reservation stations, effectively forming a three-dimensional execution substrate.

    By having the compiler assign data-dependent instructions to execution units that are physically close to one another, the communication overhead on the chip can be reduced. The individual operations wait for their operands to arrive at their assigned execution unit; once all of an operation's dependencies are available, the operation fires and its result is forwarded to any waiting instruction. In this way the operations making up the TRIPS block are dynamically scheduled according to the data flow of the block, and the amount of communication that has to occur across large distances is limited. Once an entire block has executed, it can be retired and its results can be written to a register or memory.

    At the block level a TRIPS processor can still function much like a conventional processor. Blocks can be executed out of order, speculatively, or in parallel. They have also defined TRIPS as a polymorphous architecture, meaning the configuration and execution dynamics can be changed to best leverage the available parallelism. If code is highly parallelizable it might make sense to map bigger blocks. However, by performing these types of operations at the level of a block instead of for each individual instruction, the overhead is theoretically drastically reduced.

    There is some flexibility in how the hardware can be utilized. For some types of software with a high degree of parallelism you may want very large blocks, when there is less data level parallelism available it may be better to schedule multiple blocks onto the substrate simultaneously. I'm not sure how the prototype is implemented but the designers have several papers available where they discuss how a TRIPS style architecture can be adapted to perform well on a wide gamut of software.
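
    The placement idea reads roughly like this in Python (coordinates, cost model, and the brute-force search are all stand-ins for the real scheduler):

      import itertools

      edges = [("ld1", "add"), ("ld2", "add"), ("add", "st")]   # producer -> consumer
      nodes = ["ld1", "ld2", "add", "st"]
      tiles = [(x, y) for x in range(2) for y in range(2)]      # 2x2 grid of ALUs

      def routing_cost(placement):
          # grid hops (Manhattan distance) each result must travel
          return sum(abs(placement[a][0] - placement[b][0]) +
                     abs(placement[a][1] - placement[b][1]) for a, b in edges)

      # The instance is tiny, so brute force stands in for spatial scheduling:
      best = min((dict(zip(nodes, p)) for p in itertools.permutations(tiles)),
                 key=routing_cost)
      print(best, "-> total hops:", routing_cost(best))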

  • by Erich ( 151 ) on Tuesday April 24, 2007 @08:47PM (#18864381) Homepage Journal
    TRIPS, like many other projects optimized to produce the largest number of PhD students possible, starts out with a premise something like:

    So, I have this big array of CPUs/ALUs/Functional Units, all I need to do is program them and I can be the computingest damn thing you ever saw!

    And it's true. You build a sea of ALUs and you sic some folks on hand coding all sorts of things to the machine, and you end up with some spectacular results.


    The problem is that we still can't get a compiler to do a good job at it, for the most part. We thought we could, and we threw every bell and whistle into IA64 for a compiler-controlled architecture, and you've seen what we've ended up with. Many years later, the situation is still pretty much the same: the compiler can't do all that great a job with these sorts of machines.


    Don't get me wrong, there are lots of good ideas in TRIPS or any of the various other academic projects like it, but I've yet to be convinced that it's useful in any kind of real codebase that's not coded by hand by an army of graduate students. For some tasks, that's an acceptable model -- it's been the model in the world of signal processing for quite a while (though becoming less so daily) -- but for most mainstream applications it just won't fly.


    That, and it's hard for compilers to have knowledge about history. It's terribly important for optimization, and it's just hard to get into the compiler (though relatively easy to get into a branch predictor).
