Larrabee ISA Revealed

David Greene writes "Intel has released information on Larrabee's ISA. Far more than an instruction set for graphics, Larrabee's ISA provides x86 users with a vector architecture reminiscent of the top supercomputers of the late 1990s and early 2000s. '... Intel has also been applying additional transistors in a different way — by adding more cores. This approach has the great advantage that, given software that can parallelize across many such cores, performance can scale nearly linearly as more and more cores get packed onto chips in the future. Larrabee takes this approach to its logical conclusion, with lots of power-efficient in-order cores clocked at the power/performance sweet spot. Furthermore, these cores are optimized for running not single-threaded scalar code, but rather multiple threads of streaming vector code, with both the threads and the vector units further extending the benefits of parallelization.' Things are going to get interesting."
  • Bet they've got some serious CONTROL structures to keep things from getting too KAOTIC....

    "Would you believe a GOTO statement and a couple of flags?"
    • Bet they've got some serious CONTROL structures to keep things from getting too KAOTIC.... "Would you believe a GOTO statement and a couple of flags?"

      How about a while loop and a continue statement?

      • "Would you believe a GOTO statement and a couple of flags?"

        How about a while loop and a continue statement?

        In C, break and continue apply only to the innermost enclosing while or for loop. If you're in a triply nested loop, for example, you can't write "break break continue" to break out of two nested loops and go to the next iteration of the outer loop. You have to split your loop across multiple functions and eat a possible performance hit from calling a function in a loop. So if your profiler tells you the occasional goto is faster than a function call in a loop, there's still a place for a well-documented goto.
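
        For instance, a minimal sketch of such a well-documented goto (ni, nj, nk and the found() predicate are hypothetical):

            int found(int i, int j, int k);  /* hypothetical predicate */

            void search(int ni, int nj, int nk) {
                for (int i = 0; i < ni; i++) {
                    for (int j = 0; j < nj; j++) {
                        for (int k = 0; k < nk; k++) {
                            if (found(i, j, k))
                                goto next_i;  /* in effect, "break break continue" */
                        }
                    }
                next_i:;  /* empty statement so the label can end the i-loop body */
                }
            }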

        C++ code can use exceptions to break out of a loop. But statically linking libsupc++'s exception support bloats your binary by roughly 64 KiB (tested with MinGW for the x86 ISA and devkitARM for the Thumb ISA). This can be a pain if your executable must load entirely into a tiny RAM dedicated to a core, as seen in the proverbial elevator controller [catb.org], in multiplayer clients on the Game Boy Advance (which run without a Game Pak present, so they must fit into the 256 KiB of RAM), or even in the Cell architecture (which gives each SPE core 256 KiB of local store).

        • FWIW, I believe setjmp/longjmp are the closest C equivalents to exceptions.
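
          A minimal sketch of that idiom (illustrative names and error code; compiles as both C and C++):

            #include <setjmp.h>
            #include <stdio.h>

            static jmp_buf on_error;  /* saved context that longjmp returns to */

            static void deep_work(int i) {
                if (i == 3)
                    longjmp(on_error, i);  /* unwinds out of any depth of loops/calls */
            }

            int main(void) {
                int err = setjmp(on_error);  /* 0 on the first pass, i after a longjmp */
                if (err != 0) {
                    printf("bailed out at i = %d\n", err);
                    return 1;
                }
                for (int i = 0; i < 10; i++)
                    deep_work(i);
                return 0;
            }

          Note that longjmp skips C++ destructors on the way out, which is part of why real C++ code prefers exceptions.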

  • The story title conjured up images of the boxes of ISA cards I've still got sitting around. Ah, the joys of setting IRQs... good times.
    • Re: (Score:2, Informative)

      by 4181 ( 551316 )
      It's probably worth noting that although the actual article uses neither the acronym nor its expansion, ISA in the story title refers to Instruction Set Architecture [wikipedia.org]. (My first thoughts were of ISA [wikipedia.org] cards as well.)
      • Re: (Score:3, Funny)

        Yeah, I just got weird mental images of ISA cards jutting out of modern-day motherboards. It was disturbing.
        • Didn't you hear? As of PCIE 3.1, compliant PCIE controllers will support detection, pin remapping, and an entire emulated 8086, allowing valuable legacy 8-bit ISA cards to remain in use!
  • End of an era (Score:2, Interesting)

    by pkv74 ( 1524279 )
    This 300-watt monster, 8086/386/586/x86-64/MMX+SSE+SSE2+SSE3+whatever-SSE compatible mess represents (or should represent) the end of an era. Few people are asking for that kind of product; price and size are more important. It's just Intel trying to hold the market captive forever.
    • Re: (Score:3, Informative)

      Intel actually tried to build a different, leaner instruction set: IA64. The market rejected it.

      Via and AMD don't have much trouble implementing these instruction sets either, or adding their own, so this doesn't much represent a stranglehold move on Intel's part.

      If you really want cheap small processors with no extra instruction sets, Intel does still make Celerons; I dare you to run Vista on one.

      • Re: (Score:3, Informative)

        by Hal_Porter ( 817932 )

        Actually, the key patents on x86 probably run out soon. x64 has always been licensable from AMD. And an AMD or Intel x86/x64 chip has been at the top of the SpecInt benchmark for most of the last few years. Plus Itanium killed off most of the RISC architectures, and x64 looks likely to kill off or nicheify Itanium.

        Meanwhile NVidia are rumoured to be working on a Larrabee-like chip of their own. Via have a ten-year patent license, by which point the architecture is rather open. And Larrabee shows a chip with a

        • Re: (Score:2, Interesting)

          by ThePhilips ( 752041 )

          Plus Itanium killed off most of the RISC architectures, and x64 looks likely to kill off or nicheify Itanium.

          This is misinformed B.S. Itanium didn't kill anything.

          That was (and is) the triumphant march of Linux/x64 all along.

          It is true that Intel and HP made sacrificial lambs of PA-RISC and Alpha on Itanic's altar. Yet Itanic never caught up (and never will) to the levels where PA-RISC and Alpha stood in their day.

          I bet a Larrabee-like CPU would be great in a server too, and it's trivially scalable by changing the number of cores.

          Servers are I/O heavy - CPU parallelism is very secondary. I doubt Larrabee would make any dent in the server market. Unless of course OnLive or something similar catches on, or Intel adds something i

          • by julesh ( 229690 )

            Servers are I/O heavy - CPU parallelism is very secondary

            I take it you've never tried to run a large-scale J2EE app.

            • With the same success, I can run "for(;;);" in several threads and run all CPUs/cores into the ground.

              No matter what Java folks try to make of it, Java on servers is pretty niche - precisely because of its inefficient use of resources.

              Within the Java market as a whole, server Java is of course not so niche. But not the other way around.

          • Re: (Score:3, Informative)

            by forkazoo ( 138186 )

            This is misinformed B.S. Itanium didn't kill anything.

            That was (and is) the triumphant march of Linux/x64 all along.

            Itanium killed high-end MIPS years before anybody was talking about x64. You mentioned PA-RISC, and Alpha was dead in practice long before HP ever got around to officially declaring it dead. Itanium killed a lot of good architectures.

          • by LWATCDR ( 28044 )

            That will depend on the server. Encryption could benefit from a Larrabee-like system, as could things like software RAID. With extra CPU power available, software RAID and advanced file systems like ZFS could replace hardware RAID everywhere.

        • Re:End of an era (Score:5, Informative)

          by SpazmodeusG ( 1334705 ) on Saturday April 04, 2009 @07:55AM (#27456593)
          Look, I hate to be anal, but neither Intel nor AMD has been at the top of the SpecInt benchmark for a long time.
          The stock IBM Power6 5.0GHz CPU is the fastest CPU on the SpecInt benchmark on a per-core basis (and before that, the leader was the 4.7GHz model of the same CPU).

          http://www.spec.org/cpu2006/results/res2008q2/cpu2006-20080407-04057.html [spec.org]
          Search for: IBM Power 595 (5.0 GHz, 1 core)
          Which is telling, considering it's made on a larger process than the fastest x86 (the i7). It really shows there's room for improvement if you ditch the x86 instruction set.
      • Re:End of an era (Score:5, Insightful)

        by Anonymous Coward on Saturday April 04, 2009 @07:13AM (#27456459)
        IA64 wasn't rejected for being too lean - it's actually a horrendously complicated ISA that requires the compiler to do a lot of the work for it, but it turns out that compilers aren't very good at the sort of things the ISA requires (instruction reordering, branch prediction, etc.). It also turned out that EPIC CPUs are very complex and power-hungry things, and IA32/x86-64 easily caught up with and surpassed many of the so-called advantages that Intel had touted for IA64.

        The only reason Itanium is still hanging around like a bad smell is that companies like HP were dumb enough to dump their own perfectly good RISC CPUs on a flimsy promise from Intel, and now they have no choice.
        • by Hurricane78 ( 562437 ) <deleted&slashdot,org> on Saturday April 04, 2009 @07:48AM (#27456577)

          So that is where the term "EPIC FAIL" comes from...

        • by ebuck ( 585470 )

          It's not that HP is dumb; it's greedy. HP owns something on the order of 50% of the IP that goes into an Itanium. If they can effectively block you from buying anything else, you buy into their patents. Intel is the other major patent holder.

          Most of the patents on the Itanium are designed to make it impossible to produce an Itanium clone without violating them.

        • What I find interesting is that Intel tried this kind of thing before, with the iAPX 432 back in the '80s. It failed miserably back then; Itanium is only somewhat more successful now.

          Also, I think it was HP that approached Intel to make Itanium, not the other way around.

        • by ozbird ( 127571 )
          It also turned out that EPIC CPUs are very complex and power-hungry things, and AMD64 easily caught up with and surpassed many of the so-called advantages that Intel had touted for IA64.

          Fixed that for you.
      • Re:End of an era (Score:4, Informative)

        by turgid ( 580780 ) on Saturday April 04, 2009 @07:55AM (#27456595) Journal

        Intel actually tried to build a different, leaner instruction set: IA64. The market rejected it.

        It wasn't lean at all. It's typical over-complicated Intel junk. Just look at the implementations: Itanic. It's big, hot, expensive, slow...

        If you really want cheap small processors with no extra instruction sets, Intel does still make Celerons; I dare you to run Vista on one.

        The Celerons have all the same instructions as the equivalent "Core" processors; they just usually have less cache.

        This Larrabee thing doesn't sound much different from what AMD (ATI) and nVidia already have. A friend of mine has done some CUDA programming and, from what he says, it sounds just the same. Just like a vector supercomputer from 10 years ago.

      • Uh, I would not call IA64 leaner; VLIW is a huge mess and forces the compiler to do a lot of optimization, and if it can't do the optimization then performance sucks. Of course the market rejected it.
    • Re: (Score:2, Insightful)

      by Rockoon ( 1252108 )
      Some people buy 300-watt video cards..

      ..and some of them don't even do it for gaming, but instead for GPGPU.

      This is a real market, and as it matures the average joe will find that it offers things that they want as well.

      The fact is that as long as even a small market exists, that market can expand under its own momentum to fill roles that cannot be anticipated.

      I certainly wasn't thinking that there was a market for hardware-accelerated graphics 20 years ago, yet I'm sure to make sure that's in the syste
    • Comment removed based on user account deletion
      • Re: (Score:3, Informative)

        by godefroi ( 52421 )

        Here's a little secret:

        Lots of games (maybe all of them) already include graphics-vendor-specific rendering engines. It's just that nowadays your graphics API isn't your whole game development toolset (Glide), so it's easy to include support for both (all) vendors.

    • On the other hand, the high-end product lines make up a significant portion of the profits for chip companies. They charge a huge premium for perfect chips that run at high speeds and low (or relatively low) power and temperature. The chips that aren't as perfect get sold for much less to the masses, running at lower clocks, lower voltages, and even with features disabled (as in the case where a chip has a defect).

      Because of this, chipmakers will probably continue to have at least one product for the high-e
  • by GreatBunzinni ( 642500 ) on Saturday April 04, 2009 @06:00AM (#27456209)

    As a structural engineer in training who is starting to cut his teeth writing structural analysis software, I find these truly interesting times in the personal computer world. Technologies like CUDA, OpenCL and maybe also Larrabee are making it possible to put on any engineer's desk a system capable of analysing complex structures practically instantaneously. Moreover, it will push the boundaries of that sort of software further, making it possible, for example, to model composite materials such as reinforced concrete through the plastic limit - a task that involves simulating random cracks through a structure to find the lowest supported load, and that with today's personal computers takes hours just to run on a simple simply supported, single-span beam.

    So, to put this in perspective, this sort of technology will end up making construction projects cheaper, safer, and quicker to finish, all in exchange for a couple hundred dollars of hardware that a while back was intended for playing games. Good times.
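
    To make the parallelism concrete: the random-crack search described above is embarrassingly parallel, since each crack realization can be analysed independently. A minimal C++ sketch (analyse_crack_pattern() is a hypothetical stand-in for a real finite-element solve, not any actual package's API):

        #include <algorithm>
        #include <limits>
        #include <random>
        #include <thread>
        #include <vector>

        // Hypothetical stand-in for a real finite-element failure-load solve.
        double analyse_crack_pattern(unsigned seed) {
            std::mt19937 rng(seed);
            std::uniform_real_distribution<double> load(100.0, 200.0);
            return load(rng);  // placeholder failure load, in kN
        }

        // Monte Carlo over random crack patterns: each thread takes a strided
        // slice of realizations, and the global minimum is the governing load.
        double lowest_supported_load(int n_samples, int n_threads) {
            std::vector<double> worst(n_threads, std::numeric_limits<double>::max());
            std::vector<std::thread> pool;
            for (int t = 0; t < n_threads; ++t)
                pool.emplace_back([&worst, t, n_samples, n_threads] {
                    for (int s = t; s < n_samples; s += n_threads)
                        worst[t] = std::min(worst[t], analyse_crack_pattern(s));
                });
            for (auto& th : pool) th.join();
            return *std::min_element(worst.begin(), worst.end());
        }

    Each added core works through more realizations per second, which is exactly the scaling this class of hardware promises.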

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      As a seasoned structural engineer (and PhD in numerical analysis), I hate to say this, but this is partly wishful thinking. Even an infinitely powerful computer won't remove some of the fundamental mathematical problems in numerical simulations. I will not start a technical discussion here, but just take some time to learn about condition numbers, for instance. Or about the real quality of 3D plasticity models for concrete, and the incredibly difficult task of designing and driving experiments for measuring

    • modeling composite materials such as reinforced concrete through the plastic limit

      I wonder if that software could also improve animation, by making solid objects which look as if they actually have weight. Too many avatars seem to be hovering just above the ground because you don't see the forces being transmitted through their bodies.

      • Re: (Score:3, Interesting)

        That's a problem with the animator. You don't need complicated software to make good animation--Toy Story should be sufficient evidence of that. You just need talent. Less and less talent these days, actually: if you're playing a game where the avatars are floating, it's because the designers don't give a^H^H^H^H^H^H^H care enough to simulate motion properly.

        As an aside, realism is frequently not a goal in animation. You tend to run up against the uncanny valley: all the characters look like zombies. Realis

        • Hey, "A Scanner Darkly" was not painful to watch. Not for anyone except you. ^^
          I have seen it with many people, and most of them liked it. Some of them did find it a bit slow/boring. But nobody found it to be painful.

          So if you always presume you are talking just about your own views, then I apologize. But if not, please stop assuming everybody shares your point of view. :)
          Thank you. :)

    • by LWATCDR ( 28044 )

      Kind of scary, if you ask me. It sounds like you are trying to use simulation to reduce the margin of error you build into a structure. While that can be a good thing, it isn't always. It puts a much higher demand on quality control at the building site, which is often outside the control of the engineer.
      Kind of reminds me of some really nice homebuilt aircraft in the '80s. They used very low-drag laminar-flow airfoils. They were very fast and worked well. Soon some were falling out of the sky on takeoff. They

    • Re: (Score:3, Informative)

      by Pseudonym ( 62607 )

      As a structural engineer in training who is starting to cut his teeth writing structural analysis software, I find these truly interesting times in the personal computer world.

      There's one drawback to the current crop of CPUs, though. More cores per die means less cache per core. So depending on what you're doing, this could actually degrade performance (all other things being equal) over older SMP machines.

  • by jonwil ( 467024 ) on Saturday April 04, 2009 @06:15AM (#27456265)

    If Intel are smart they will release a chip containing one core (or 2 cores) from some kind of lower-power Core design and a pile of Larrabee cores on the one die, along with a memory controller and some circuits to produce the actual video output to feed to the LCD controller, DVI/HDMI encoder, TV encoder or whatever. Then do a second chip containing a WiFi chip, audio, SATA and USB (and whatever else one needs in a chipset). It would make the PERFECT 2-chip solution for netbooks if combined with a good OpenGL stack running on the Larrabee cores (which Intel are talking about already).

    Such a 2-chip solution would also work for things like media set-top boxes and PVRs (if combined with a Larrabee solution for encoding and decoding MPEG video). PVRs would just need 1 or 2 of whatever is being used in the current crop of digital set-top boxes to decode the video.

    As for the comment that people will need to understand how to best program Larrabee to get the most out of it, most of the time they will just be using a stack provided by Intel (e.g. an OpenGL stack or an MPEG decoding stack). Plus, it's highly likely that compilers will start supporting Larrabee (Intel's own compiler, if nothing else).

    • I was thinking that. Larrabee's vector unit looks like it could just replace SSE entirely.

      Which does raise a question - will Intel keep SSE around as yet another legacy feature if it adds in the Larrabee vector unit? I'm guessing it will (sigh).
      • by joib ( 70841 )

        Yeah, most x86_64 ABIs use SSE for scalar floating point, so it's too late to remove it. But hey, at least SSE is an improvement over x87.

    • Re: (Score:3, Insightful)

      by seeker_1us ( 1203072 )

      I don't think we will see this in notebooks for a while. We need to wait and see what the real product looks like (Intel hasn't released any specs), but Google for Larrabee and 300W and you will see the scuttlebutt is that this chip will draw very large amounts of power.

      • Re: (Score:3, Interesting)

        by smallfries ( 601545 )

        Oddly enough, your post ranks quite highly in that search. Drilling through the forums that show up reveals speculation that a 32-core Larrabee design will have a 300W TDP, or roughly 10W per core. There doesn't seem to be any justification for that number, although Larrabee looks like an Atom plus a stonking huge vector array. The Atom only uses 2W; it seems hard to believe that the 16-way vector array would use as much power for each FLOP as the entire Atom power budget to deliver that FLOP. Or perhaps it will, it'

  • The claim that this is the first time you can get "GPU class rendering in software"... with nothing more than a pixel sampler to help is somewhat dubious. Modern GPUs are, after all, a bunch of stream processors with a pixel sampler. So, really, modern GPU graphics is all in software except the sampling.

    Oh, hey, and anyone here remember the Voodoo? That was a big (for the time) sampler driven by an x86 CPU. Sound familiar?

    Sarcasm aside, I want one. The peak performance is high, and the programming model is we

    • by tepples ( 727027 )

      The claim that this is the first time you can get "GPU class rendering in software"... with nothing more than a pixel sampler to help is somewhat dubious. Modern GPUs are, after all, a bunch of stream processors with a pixel sampler. So, really, modern GPU graphics is all in software except the sampling.

      As I understand it, the key difference that makes the software running on Larrabee more like traditional "software" than NV's or ATI's offerings is that Intel is exposing these stream processors' instruction sets to let compiler writers compete on writing shader compilers.

  • by julesh ( 229690 ) on Saturday April 04, 2009 @07:44AM (#27456553)

    Article states that there's hardware support for transcendental functions, but the list of instructions doesn't include any. Anyone know what is/isn't supported in this line?

    • by gnasher719 ( 869701 ) on Saturday April 04, 2009 @09:40AM (#27457073)

      Article states that there's hardware support for transcendental functions, but the list of instructions doesn't include any. Anyone know what is/isn't supported in this line?

      "Hardware support" doesn't mean "fully implemented in hardware".

      What hardware support do you need for transcendental functions?
      1. Bit-fiddling operations to extract exponents from floating-point numbers. Check.
      2. Fused multiply-add for fast polynomial evaluation. Check.
      3. Scatter/gather operations to pick the coefficients of different polynomials depending on the range of the operand. Check.
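
      Put together, those three ingredients give you a table-free transcendental kernel. A minimal C++ sketch, using a truncated Taylor series for 2^x on [0,1) with illustrative (not production-quality) coefficients; a real kernel would add range reduction and tuned minimax coefficients:

          #include <cmath>

          // Approximate 2^x on [0, 1) with a cubic polynomial evaluated
          // Horner-style, so each step maps onto one fused multiply-add.
          double exp2_poly(double x) {
              // Taylor coefficients of 2^x: (ln 2)^k / k! -- illustrative only.
              const double c0 = 1.0, c1 = 0.693147, c2 = 0.240227, c3 = 0.055504;
              double r = c3;
              r = std::fma(r, x, c2);  // r = r*x + c2
              r = std::fma(r, x, c1);
              r = std::fma(r, x, c0);
              return r;  // within about 0.5% of std::exp2(x) on [0, 1)
          }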

    • by mikael ( 484 )

      I would guess that they would be the same transcendental functions supported by the other shader languages (Cg, GLSL and RenderMan): sine, cosine, tan, asin, acos, atan, sinh, cosh, tanh, and probably sincos as well. They are also going to need exp, log, exp2, log2, exp10 and log10. All of these will be required for statistical modeling of texture, 3D animation and image processing. Maybe they won't be vectorized, or maybe it will be possible to treat each 16-element vector as a matrix.

    • From the C++ prototype guide [intel.com], which is just the ISA made into a terribly complex C++ wrapper, they support these transcendental functions in the ISA:
      EXP2_PS - Exponential Base-2 of Float32 Vector
      LOG2_PS - Logarithm Base-2 of Float32 Vector
      RECIP_PS - Reciprocal of a Float32 Vector
      RSQRT_PS - Reciprocal of the Square Root of a Float32 Vector

      They also provide library functions that implement everything else you'd want (sin, cos, etc) in software, I assume using Newton-Raphson iteration.
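
      For what it's worth, the usual software refinement on top of a crude estimate like RECIP_PS is Newton-Raphson, where each step roughly doubles the number of correct bits. A scalar C++ illustration (the refinement code itself is illustrative, not from the guide):

          // Newton's iteration for f(x) = 1/x - a gives x1 = x0 * (2 - a*x0).
          // Each step roughly doubles the correct bits in x0 ~= 1/a.
          double refine_recip(double a, double x0) {
              return x0 * (2.0 - a * x0);
          }

          // e.g. a = 3.0, crude seed x0 = 0.3:
          //   step 1 -> 0.33,  step 2 -> 0.3333,  step 3 -> 0.33333333...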

  • by chrysrobyn ( 106763 ) on Saturday April 04, 2009 @09:33AM (#27457033)

    Article: "Things are going to get interesting."

    nVidia: "Define interesting."

    AMD: "Oh God, oh God, we're all gonna die?"

  • by saiha ( 665337 )

    wtf does the international school of amsterdam have to do with this?

  • Saying "given software that can parallelize across many such cores" is the same as saying "then a miracle occurs".

    Unless you are interested in a pretty small class of problems, the inherent parallelism of most applications continues to be somewhere in the range of 2.1 to 2.5 (i.e., you can speed them up by a little over 2x with the addition of more processors). Thus, in most real-world applications, most of those cores, vector units, and other "supercomputer" features will go unused.
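
    That ceiling is just Amdahl's law: with a parallelizable fraction p, the speedup on n cores is 1/((1-p) + p/n), which flattens out at 1/(1-p) no matter how many cores you add. A quick C++ illustration (p = 0.6 is chosen to match the ~2.5x figure above):

        #include <cstdio>

        // Amdahl's law: speedup on n cores when fraction p of the work parallelizes.
        double amdahl(double p, int n) {
            return 1.0 / ((1.0 - p) + p / n);
        }

        int main() {
            const int cores[] = {2, 4, 16, 1024};
            for (int n : cores)  // with p = 0.6 the limit is 1/(1 - 0.6) = 2.5x
                std::printf("%4d cores -> %.2fx\n", n, amdahl(0.6, n));
            return 0;
        }

    This prints 1.43x, 1.82x, 2.29x and 2.50x: past a handful of cores, the serial fraction dominates.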

    If anyone here

    • Well, virtualization: it's the driving force behind enterprise adoption of multicore technology today. Companies are eating up all the cores they can get. The appetite is so voracious that memory buses are well and truly stressed. Worse, no one really has any serious technological proposal for solving the memory bandwidth problem as we get to 16 cores or so.

      C//

    • I did a small project a few years ago with multi-core and multi-processor execution. The workload I used was a variety of parallel sorting algorithms. The fastest machines I had access to at the time were a dual-processor dual-core machine, and a dual-processor quad-core. That's 4 or 8 cores total, with two or four L2 caches (one for each pair of cores). I got close to perfect speedups when running on 2 cores; about 1.95x to 2x the speed of running on a single core. Going to 4 cores didn't give nearly as mu
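
      The shape of that result is easy to reproduce with even the simplest parallel sort: the per-core sorting scales, but the merge does not. A minimal two-core C++ sketch (illustrative; not the parent's actual code):

          #include <algorithm>
          #include <thread>
          #include <vector>

          // Sort each half on its own core, then merge. The sequential merge
          // (and shared-cache pressure) is one reason speedup falls off
          // as more cores are added.
          void parallel_sort2(std::vector<int>& v) {
              auto mid = v.begin() + v.size() / 2;
              std::thread left([&v, mid] { std::sort(v.begin(), mid); });
              std::sort(mid, v.end());
              left.join();
              std::inplace_merge(v.begin(), mid, v.end());
          }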
