
Larrabee ISA Revealed

David Greene writes "Intel has released information on Larrabee's ISA. Far more than an instruction set for graphics, Larrabee's ISA provides x86 users with a vector architecture reminiscent of the top supercomputers of the late 1990s and early 2000s. '... Intel has also been applying additional transistors in a different way — by adding more cores. This approach has the great advantage that, given software that can parallelize across many such cores, performance can scale nearly linearly as more and more cores get packed onto chips in the future. Larrabee takes this approach to its logical conclusion, with lots of power-efficient in-order cores clocked at the power/performance sweet spot. Furthermore, these cores are optimized for running not single-threaded scalar code, but rather multiple threads of streaming vector code, with both the threads and the vector units further extending the benefits of parallelization.' Things are going to get interesting."
  • by SlashWombat ( 1227578 ) on Saturday April 04, 2009 @05:44AM (#27456177)
    It appears that this could well improve the speed of lots of different operations. A definite boon for graphics-like operations, but a lot of DSP (audio/maths) stuff can benefit from these enhancements too. It would also appear that general code could easily be sped up; however, compiler writers need to get their collective arses into gear for this to happen (see the sketch after this comment).

    However, give the average developer more speed, and all that gets produced is more bloat with less speed. If you watch large teams of programmers, the management actually force the developers to write slow code, claiming that maintainability is more important than any other factor! (Smart code that actually executes quickly is generally too difficult for the dumb-arsed upper-level (management) programmers to understand, and is thus removed. Believe me, I've seen this happen many times!)
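    To make the DSP point above concrete, here is a minimal sketch (hypothetical code, not tied to any published Larrabee intrinsics) of the kind of kernel a wide vector ISA is built for:

        #include <stddef.h>

        /* A 4-tap FIR filter, a bread-and-butter DSP kernel.  Each output
           sample is an independent multiply-accumulate chain, so once a
           compiler (or programmer) vectorizes the outer loop, a 16-lane
           vector unit can produce many outputs per pass.
           'in' must hold n + 3 samples. */
        void fir4(const float *in, float *out, size_t n, const float coef[4])
        {
            for (size_t i = 0; i < n; i++) {
                float acc = 0.0f;
                for (int k = 0; k < 4; k++)
                    acc += coef[k] * in[i + k];
                out[i] = acc;
            }
        }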
  • by jonwil ( 467024 ) on Saturday April 04, 2009 @06:15AM (#27456265)

    If Intel are smart they will release a chip containing one core (or two cores) from some kind of lower-power Core design and a pile of Larrabee cores on the one die, along with a memory controller and some circuits to produce the actual video output to feed to the LCD controller, DVI/HDMI encoder, TV encoder or whatever. Then do a second chip containing a WiFi chip, audio, SATA and USB (and whatever else one needs in a chipset). That would make the PERFECT 2-chip solution for netbooks if combined with a good OpenGL stack running on the Larrabee cores (which Intel are talking about already).

    Such a 2-chip solution would also work for things like media set-top boxes and PVRs (if combined with a Larrabee solution for encoding and decoding MPEG video). PVRs would just need one or two of whatever is being used in the current crop of digital set-top boxes to decode the video.

    As for the comment that people will need to understand how best to program Larrabee to get the most out of it, most of the time they will just be using a stack provided by Intel (e.g. an OpenGL stack or an MPEG decoding stack). Plus, it's highly likely that compilers will start supporting Larrabee (Intel's own compiler, if nothing else).

  • by Anonymous Coward on Saturday April 04, 2009 @06:24AM (#27456297)

    As a seasoned structural engineer (and PhD in numerical analysis), I hate to say this, but this is partly wishful thinking. Even an infinitely powerful computer won't remove some of the fundamental mathematical problems in numerical simulations. I will not start a technical discussion here, but just take some time to learn about condition numbers, for instance. Or about the real quality of 3D plasticity models for concrete, and the incredibly difficult task of designing and driving experiments for measuring them. Etc.
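    A concrete illustration of the point about condition numbers (a minimal sketch, not from the parent comment): the 2x2 system below is nearly singular, and a one-part-in-a-billion nudge to the right-hand side swings the answer by a factor of ten. No amount of CPU power fixes that.

        #include <stdio.h>

        /* Solve the 2x2 system [a11 a12; a21 a22] (x y)^T = (b1 b2)^T
           by Cramer's rule.  When det is nearly zero the system is
           ill-conditioned and the solution is hypersensitive to b. */
        static void solve2x2(double a11, double a12, double a21, double a22,
                             double b1, double b2, double *x, double *y)
        {
            double det = a11 * a22 - a12 * a21;
            *x = (b1 * a22 - a12 * b2) / det;
            *y = (a11 * b2 - b1 * a21) / det;
        }

        int main(void)
        {
            double x, y;
            /* A = [1 1; 1 1+1e-10]: condition number on the order of 4e10. */
            solve2x2(1.0, 1.0, 1.0, 1.0 + 1e-10, 2.0, 2.0, &x, &y);
            printf("b = (2, 2):       x = %f, y = %f\n", x, y);  /* ~ (2, 0)    */
            solve2x2(1.0, 1.0, 1.0, 1.0 + 1e-10, 2.0, 2.0 + 2e-9, &x, &y);
            printf("b = (2, 2+2e-9):  x = %f, y = %f\n", x, y);  /* ~ (-18, 20) */
            return 0;
        }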

  • by ShakaUVM ( 157947 ) on Saturday April 04, 2009 @06:37AM (#27456351) Homepage Journal

    The programming languages that will benefit from Larrabee though will not be C/C++. It will be Fortran and the purely functional programming languages. Unless C/C++ has some extensions to deal with the pointer aliasing issue, that is.

    Intel has a lot of smart people in their compilers group, and they've done stuff like this before. I wouldn't be at all surprised if they released compiler extensions to allow quick loading of data into the processing vectors.
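    Intel already ships exactly this kind of extension for SSE, so it is a safe bet. A minimal sketch using the existing SSE intrinsics (a Larrabee flavor is pure assumption here; its registers would be 16 floats wide rather than 4):

        #include <stddef.h>
        #include <xmmintrin.h>   /* SSE intrinsics, supported by icc, gcc and MSVC */

        /* Add two float arrays four lanes at a time.  Each _mm_* intrinsic
           maps (nearly) one-to-one onto an SSE instruction.  Requires n to
           be a multiple of 4 and the pointers to be 16-byte aligned. */
        void vadd(const float *a, const float *b, float *out, size_t n)
        {
            for (size_t i = 0; i < n; i += 4) {
                __m128 va = _mm_load_ps(a + i);            /* load 4 floats */
                __m128 vb = _mm_load_ps(b + i);
                _mm_store_ps(out + i, _mm_add_ps(va, vb)); /* add and store */
            }
        }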

  • by Anonymous Coward on Saturday April 04, 2009 @06:40AM (#27456361)

    "Math is hard" - Barbie

  • by seeker_1us ( 1203072 ) on Saturday April 04, 2009 @07:00AM (#27456427)

    I don't think we will see this in notebooks for a while. We need to wait and see what the real product looks like (Intel hasn't released any specs), but Google for Larrabee and 300W and you will see the scuttlebutt is that this chip will draw very large amounts of power.

  • Re:End of an era (Score:5, Insightful)

    by Anonymous Coward on Saturday April 04, 2009 @07:13AM (#27456459)
    IA64 wasn't rejected because it was too lean. It's actually a horrendously complicated ISA which requires the compiler to do a lot of the work for it, but it turns out that compilers aren't very good at the sort of stuff the ISA requires (instruction reordering, branch prediction, etc.). It also turned out that EPIC CPUs are very complex and power-hungry things, and IA32/x86-64 easily caught up with and surpassed many of the so-called advantages that Intel had touted for IA64.

    The only reason Itanium is still hanging around like a bad smell is because companies like HP were dumb enough to dump their own perfectly good RISC CPUs on a flimsy promise from Intel, and now they have no choice.
  • Re:Not really x86 (Score:3, Insightful)

    by makomk ( 752139 ) on Saturday April 04, 2009 @07:38AM (#27456525) Journal
    Perhaps. As it stands, though, I don't think Larrabee can run all standard x86 code, since it doesn't support legacy instructions. Plus, even if it did, the performance would suck. For desktop use, it probably makes more sense to have some real x86 cores and a bunch of simpler graphics cores that don't have to be x86. To get the full benefit from Larrabee, the code has to be threaded anyhow, so there's not much point in being able to run it on the same cores as standard x86 code.
  • by serviscope_minor ( 664417 ) on Saturday April 04, 2009 @07:43AM (#27456545) Journal

    The programming languages that will benefit from Larrabee though will not be C/C++.

    Awwwww :-(

    It will be Fortran and the purely functional programming languages. Unless C/C++ has some extensions to deal with the pointer aliasing issue, that is.

    Oh. You mean like restrict, which has been in the C standard for ten years?

    GCC supports it for C++ too, as __restrict__. I'd be surprised if ICC and VS didn't as well.
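    For reference, this is all it takes. A minimal sketch, assuming a C99 compiler (with g++ or ICC, spell it __restrict__):

        #include <stddef.h>

        /* Without restrict, the compiler must assume x and y might alias and
           reload through the pointers each iteration.  With it, this loop is
           a textbook candidate for auto-vectorization (e.g. gcc -O3). */
        void saxpy(size_t n, float a, const float *restrict x, float *restrict y)
        {
            for (size_t i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }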

  • Re:End of an era (Score:2, Insightful)

    by Rockoon ( 1252108 ) on Saturday April 04, 2009 @08:43AM (#27456755)
    Some people buy 300-watt video cards...

    ...and some of them don't even do it for gaming, but for GPGPU.

    This is a real market, and as it matures the average Joe will find that it offers things that he wants as well.

    The fact is that as long as even a small market exists, that market can expand under its own momentum to fill roles that cannot be anticipated.

    I certainly wasn't thinking that there was a market for hardware-accelerated graphics 20 years ago, yet I make sure that's in any system I build today.

    I certainly wasn't thinking about multi-core computers 20 years ago, yet I wouldn't buy anything less than a quad-core today.

    I certainly wasn't thinking about going from 20-bit to 32-bit to 64-bit addressing 20 years ago. I was happy with 640K and some BIOS above that, yet today any system I build is going to have at least 8 gigs of memory.

    I couldn't even dream of filling the 40-meg hard drive I got with my first 386, yet now I am wondering whether I should clean up my 500GB drive or simply buy a new 1TB drive to slot right next to it.

    Yeah... people weren't asking for those kinds of products either, but now we want them because of those pesky unpredictable uses that come up and turn out to be genuinely attractive to us.
  • by gnasher719 ( 869701 ) on Saturday April 04, 2009 @09:48AM (#27457127)

    What I wonder is why they haven't attempted to release two versions: an x86 version, and a stripped down RISC version without the x86 decoder.

    If you look at what Intel has been doing recently, the RISC code that x86 is translated into has been slowly evolving. For example, sequences of compare + conditional branch become a single micro-op. Instructions manipulating the stack are often combined or not executed at all. So the perfect RISC instruction set today isn't the perfect RISC instruction set tomorrow. And Intel's RISC instruction set would likely be quite different from AMD's.
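    A sketch of what that fusion means at the source level (the exact rules vary by microarchitecture, so treat the assembly comment as illustrative):

        #include <stddef.h>

        /* The back edge of this loop compiles to roughly
         *     cmp  rax, rdx
         *     jne  .loop
         * and on Core 2 and later the decoder fuses that compare +
         * conditional branch pair into a single macro-op, one concrete
         * way the "ideal" internal instruction set keeps moving.
         */
        size_t count_zeros(const int *v, size_t n)
        {
            size_t zeros = 0;
            for (size_t i = 0; i != n; i++)
                if (v[i] == 0)
                    zeros++;
            return zeros;
        }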

  • Re:Duh (Score:3, Insightful)

    by fuzzyfuzzyfungus ( 1223518 ) on Saturday April 04, 2009 @11:03AM (#27457573) Journal
    We should remember that, at least at first, Larrabee is being pushed as a GPU, not a CPU, and all contemporary GPUs are already substantially parallel. If anything, Intel's is (unsurprisingly) by far the most conventionally CPU-like of the bunch.
  • by DogAlmity ( 664209 ) on Saturday April 04, 2009 @11:19AM (#27457687)
    I'm gonna go ahead and agree with management that maintainability is more important than any other factor. Having had to maintain a few ancient codebases in my day, I've seen way too many "clever" coders who do ridiculous tricks to save time or space. Well-designed (read: maintainable) code does not imply any significant performance hit.
  • by mdwh2 ( 535323 ) on Saturday April 04, 2009 @11:31AM (#27457771) Journal

    If you watch large teams of programmers, the management actually force the developers to write slow code, claiming that maintainability is more important than any other factor!

    I don't see why it should be one or the other: maintainability is important, and so is using optimal algorithms. Fast algorithms can still be written in a clear and understandable manner.
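    A trivial sketch of that claim: binary search is asymptotically optimal for lookup in a sorted array, and it loses nothing by being written plainly.

        #include <stddef.h>

        /* Return the index of key in the sorted array v[0..n-1], or -1.
           O(log n), and no "clever" tricks required to get there. */
        ptrdiff_t find_sorted(const int *v, size_t n, int key)
        {
            size_t lo = 0, hi = n;               /* search window: [lo, hi) */
            while (lo < hi) {
                size_t mid = lo + (hi - lo) / 2; /* avoids overflow of lo + hi */
                if (v[mid] < key)
                    lo = mid + 1;
                else if (v[mid] > key)
                    hi = mid;
                else
                    return (ptrdiff_t)mid;
            }
            return -1;
        }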

  • by Anonymous Coward on Saturday April 04, 2009 @12:48PM (#27458321)

    As a former Cray employee I find it interesting to see that Intel's previously unannounced deal with Cray is finally starting to deliver the goods. Intel should just get it over with and buy Cray. They've wanted back into the supercomputer business for a while now anyway.
