
AMD's OpenCL Allows GPU Code To Run On X86 CPUs

eldavojohn writes "Two blog posts from AMD are causing a stir in the GPU community. AMD has created and released the industry's first OpenCL development platform for x86 CPUs, which allows developers to write code against AMD's Stream compute framework (normally only used to target their GPUs) and run it on any x86 CPU. Now, as a developer, you can divide the workload between the two as you see fit instead of having to commit to either GPU or CPU. Ars has more details."
  • Nice (Score:5, Interesting)

    by clarkn0va ( 807617 ) on Thursday August 06, 2009 @12:41PM (#28975447) Homepage
    Good on them. Now how about an API that allows me to run GPU code on the GPU? The day I can play 1080p mkvs from a netbook on AMD/ATI hardware is the day I'll quit buying nvidia.
    • Re: (Score:2, Funny)

      by Anonymous Coward

      Good on them. Now how about an API that allows me to run GPU code on the GPU? The day I can play 1080p mkvs from a netbook on AMD/ATI hardware is the day I'll quit buying nvidia.

      *Head Explodes*

      • Re:Nice (Score:5, Informative)

        by clarkn0va ( 807617 ) on Thursday August 06, 2009 @12:56PM (#28975703) Homepage
        I suppose I could have been clearer. I'm talking about gpu decoding of HD video, conspicuously absent on AMD hardware in Linux, fully functional on NVIDIA. [slashdot.org]
        • Re:Nice (Score:5, Informative)

          by MostAwesomeDude ( 980382 ) on Thursday August 06, 2009 @01:37PM (#28976311) Homepage

          AMD/ATI only offers GPU-accelerated decoding and presentation through the XvBA API, which is only available to their enterprise and embedded customers. People seem to always forget that fglrx is for enterprise (FireGL) people first.

          Wait for the officially supported open-source radeon drivers to get support for GPU-accelerated decoding, or (God forbid!) contribute some code. In particular, if somebody would write a VDPAU frontend for Gallium3D...

        • Re:Nice (Score:4, Insightful)

          by Briareos ( 21163 ) * on Thursday August 06, 2009 @01:46PM (#28976495)

          I suppose I could have been clearer. I'm talking about gpu decoding of HD video, conspicuously absent on AMD drivers in Linux, fully functional on NVIDIA.

          Fixed that for you. Or does installing Linux somehow magically unsolder the video decoding part of AMD's GPUs?

          np: Death Cab For Cutie - Information Travels Faster (The Photo Album)

          • Re:Nice (Score:5, Funny)

            by clarkn0va ( 807617 ) on Thursday August 06, 2009 @05:01PM (#28979525) Homepage

            does installing Linux somehow magically unsolder the video decoding part of AMD's GPUs?

            I'm not going to lie to you; I don't know the answer to that question, and I'm not about to make any assumptions.

          • What the heck, this is /. so I can nitpick as much as I want.

            The OP you referred to said "decoding of HD video ... absent on AMD hardware in Linux" not "from". There's a difference and it's enough to understand his statement correctly (as he meant it).

            • by dwater ( 72834 )

              > decoding of HD video ... absent from AMD hardware in Linux

              eh? Doesn't make any sense to me.

              • by dwater ( 72834 )

                oh, now it does...when I put the right emphasis on it and fill in the '...' with the right words (something like 'is'), and group 'from' with 'absent' rather than with 'AMD'.

                never mind...

        • Re: (Score:3, Interesting)

          Comment removed based on user account deletion
           • We've come a long way in most respects, I'll give you that. Hardware-accelerated HD playback on Linux is happening too, but I want it now, see?

            When it comes to open source, I'm part of the pragmatist camp. Yeah, I totally prefer to use the stuff that's open, but then it has to be usable. AMD's video hardware is way more open than nvidia's if you believe the reports, yet time and again I'm disappointed by its poor real-world performance. As I implied earlier in this discussion, ATI has already won my heart,

    • Re: (Score:3, Insightful)

      by Bootarn ( 970788 )

      Damn, you beat me to it!

      The problem now is the lack of applications that let end users benefit from having a powerful GPU. This will be the case until there's a standard API which works across multiple GPU architectures. Having both CUDA and OpenCL is one too many.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      That's hilarious. Maybe you should quit buying nvidia hardware, then.

      Maybe I should be a little clearer: you should have quit buying nvidia hardware in September of 2008 [phoronix.com] , because hardware acceleration for video on Linux has been available since then, with the official AMD/ATI driver.

      • From your link:

        XvBA isn't yet usable by end-users on Linux

        The API for XvBA isn't published yet and we are not sure whether it will be due to legal issues. We're told by a credible source though that X-Video Bitstream Acceleration wouldn't be much of a challenge to reverse-engineer by the open-source community.

        Interesting, but not yet useful (unless you're able to reverse-engineer this type of code, and I'm not). I'm still looking forward to the day when ATI hardware is a viable alternative on Linux.

        • Re:Nice (Score:4, Interesting)

          by RiotingPacifist ( 1228016 ) on Thursday August 06, 2009 @11:47PM (#28982579)

          look back about a year: since AMD opened up specs & docs, the radeon drivers have become very usable for everyday stuff (maybe not HD video, compiz or games), but the stability blows any prop driver I have ever used (nvidia or fglrx) right out of the water.
          For years linux users/developers have been claiming that we don't want drivers, we just want open specs (without NDAs) and "we" would do the hard work. Well, AMD has opened the specs, but it turns out that when I say "we" I mean just the two guys who can be bothered. Fortunately these guys are pretty fucking awesome, so development is coming along smoothly, but it still lags behind what prop drivers offer (in terms of performance anyway). Perhaps radeon does not meet your needs, but it is definitely a viable alternative to nvidia for many uses!

  • In that memory on the card is faster for the GPU, and memory on the motherboard is faster for the CPU. Like, I know PCI Express speeds things up, but is it that fast that you don't have to worry about the bottleneck of the system bus?

    • If it was a problem then it wouldn't have been worth it to have a separate GPU in the first place.

      The GPU is there, now let's make it useful as often as possible. And if there is no GPU but two CPUs, then with OpenCL we can use the two CPUs instead.
    • Re: (Score:2, Interesting)

      by Eddy Luten ( 1166889 )

      IMO, the fundamental problem with OpenCL is the same as with OpenAL, which is that Operating System vendors don't provide a standard implementation as is done with OpenGL.

      (Bus) speed isn't an issue as creating a CPU or GPU context requires a specific creation flag, so one would know what the target platform is.
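      For readers who haven't looked at the spec: the creation flag mentioned above is the cl_device_type handed to clGetDeviceIDs (or clCreateContextFromType). A minimal sketch against the OpenCL 1.0 C API, with error handling mostly trimmed, showing the same host code asking for a CPU device and falling back to a GPU:

        /* Sketch: pick an OpenCL device by type. Error handling trimmed;
           assumes the OpenCL 1.0 headers and loader are installed. */
        #include <CL/cl.h>
        #include <stdio.h>

        int main(void)
        {
            cl_platform_id platform;
            cl_device_id   device;
            cl_int         err;

            clGetPlatformIDs(1, &platform, NULL);

            /* Ask the platform for a CPU device first... */
            err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
            if (err != CL_SUCCESS)
                /* ...and fall back to a GPU if it doesn't expose one. */
                err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
            if (err != CL_SUCCESS) {
                fprintf(stderr, "no usable OpenCL device\n");
                return 1;
            }

            cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
            /* ...build programs and enqueue kernels against ctx as usual... */
            clReleaseContext(ctx);
            return 0;
        }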

      • Re: (Score:3, Interesting)

        by iluvcapra ( 782887 )

        IMO, the fundamental problem with OpenCL is the same as with OpenAL, which is that Operating System vendors don't provide a standard implementation as is done with OpenGL.

        It's still pretty early to say, though Apple provides an API for this with Snow Leopard. I don't know if OpenAL is a bad comparison or not, but as someone who does audio coding, OpenAL is the biggest joke of an API yet devised by man. OpenAL has little support because it's an awful and useless set of resources and features.

      • by tyrione ( 134248 )

        IMO, the fundamental problem with OpenCL is the same as with OpenAL, which is that Operating System vendors don't provide a standard implementation as is done with OpenGL.

        (Bus) speed isn't an issue as creating a CPU or GPU context requires a specific creation flag, so one would know what the target platform is.

        http://www.khronos.org/registry/cl/ [khronos.org]

        Embrace and extend. So far I'm seeing C/C++ APIs and of course Apple extends their own with ObjC APIs.

        What's stopping you from using the C APIs?

        The Core Spec is akin to the OpenGL spec. The custom extensions for Intel, Nvidia and AMD will be based upon the design decisions they implement in their GPUs.

        However, the CPU specs for Intel and AMD are there to leverage with OpenCL.

        What else do you want?

    • Re: (Score:3, Informative)

      by ByOhTek ( 1181381 )

      So, you store the data the GPU is working on in the card's memory, and the data the CPU is working on in system memory.

      Yes, it is relatively slow to move between the two, but not so much that the one-time latency incurred will eliminate the benefits.
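      A rough illustration of that trade-off, assuming a context, queue and kernel already set up via the OpenCL C API (names here are hypothetical): pay the host-to-device copy once when the buffer is created, keep the data resident on the card across launches, and read it back once at the end.

        /* Sketch: one up-front host->device copy, many kernel launches,
           one read-back. ctx, queue and kernel are assumed to exist;
           error handling omitted. */
        #include <CL/cl.h>

        void run_passes(cl_context ctx, cl_command_queue queue,
                        cl_kernel kernel, float *host_data, size_t n)
        {
            cl_int err;
            cl_mem dev_buf = clCreateBuffer(ctx,
                                            CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                            n * sizeof(float), host_data, &err);

            clSetKernelArg(kernel, 0, sizeof(cl_mem), &dev_buf);

            for (int pass = 0; pass < 100; pass++)
                /* No per-pass transfer: the buffer stays in device memory. */
                clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL,
                                       0, NULL, NULL);

            /* One copy back when the work is done. */
            clEnqueueReadBuffer(queue, dev_buf, CL_TRUE, 0,
                                n * sizeof(float), host_data, 0, NULL, NULL);
            clReleaseMemObject(dev_buf);
        }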

      • Re: (Score:3, Interesting)

        by kramulous ( 977841 )

        I've found that an O(n^3) algorithm or less should be run on the CPU. The overhead of moving to GPU memory is just too high. Gen2 PCIe is faster, but that just means I do #pragma omp parallel for and set the number of processors to 2.

        The comparisons of GPU and CPU code are not fair. They talk about highly optimised code for the GPU but totally neglect the CPU code (only using -O2 with the gcc compiler and that's it). On an E5430 Xeon, with the Intel compiler and well-written code, anything O(n^3) or less is faster.
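        The CPU-side alternative being described is plain OpenMP; a minimal sketch, compiled with something like gcc -O2 -fopenmp, pinning the loop to the two cores mentioned above:

          /* Sketch: keep small problems on the host and let OpenMP spread
             the loop over two cores, as described above. */
          #include <omp.h>

          void scale(float *out, const float *in, float k, int n)
          {
              omp_set_num_threads(2);      /* "set the number of processors to 2" */
              #pragma omp parallel for
              for (int i = 0; i < n; i++)
                  out[i] = k * in[i];
          }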

      • Re: (Score:3, Interesting)

        by schwaang ( 667808 )

        Unless of course you have a device (like newer macbooks) with nvidia's mobile chipset, which shares system memory and can therefore take advantage of Zero-copy access [nvidia.com], in which case there is no transfer penalty because there is no transfer. A limited case, but useful for sure.

    • by sarkeizen ( 106737 ) on Thursday August 06, 2009 @01:14PM (#28975977) Journal
      It's difficult to actually figure out what you are talking about here. From what I see, this article is about writing code against the AMD Stream framework and having it target x86 (or AMD GPUs).
      If your concern is that shipping object code to a card to be processed may end up being so time-consuming that it's not worth it, then I'd say that most examples of this kind of processing I've seen are doing some specific highly scalable task (e.g. MD5 hashing, portions of h264 decode). So clearly you have to do a cost/benefit analysis like you would with any type of parallelization. That said, the cost of shipping code to the card is pretty small. So I would expect any reasonably repetitive task would afford some improvement. You're probably more worried about how well the code can be parallelized rather than the transfer cost.
      • by tjstork ( 137384 )

        If your concern is that shipping object code to a card to be processed may end up being so time-consuming that it's not worth it

        Not so much the code as the data. If you have a giant array of stuff to crunch, then yeah, shipping it to the card makes sense. But if you have a lot of tiny chunks of data, then it may not make as much sense to ship it all over to the card. That same problem is really what haunts multicore designs as well - it's like you can build a job scheduler that takes a list of jobs a

        • I suppose it depends on what you mean by "lots of tiny chunks". Clearly doing a single "burst" transfer is better than lots of small ones but if you are still planning to process all these "chunks" of data at the same time then there's no reason why you couldn't just ship them all together and process them individually. Perhaps even from shared memory.

          Unless of course we're talking about a bunch of chunks that are not going to be worked on simultaneously, which goes back to my statement about the degree o
    • I'm guessing we'll soon get with GPUs what happened with FPUs. Remember FPUs? Maths co-processors? 80387? A separate chip that handled floating point ops because the CPU didn't have those in the instruction set. Eventually merged into the main CPU chip. GPUs: initially on a separate card, but requiring an increasingly faster bus (GPUs have driven the development of high speed buses), now often on the mainboard (true, not top-of-the-line chips yet, but I suspect that has a lot to do with marketing rather than

  • The real benefit (Score:5, Insightful)

    by HappySqurriel ( 1010623 ) on Thursday August 06, 2009 @12:49PM (#28975585)
    Wouldn't the real benefit be that you wouldn't have to create two separate code-bases to create an application that both supported GPU optimization and could run naively on any system?
    • by Red Flayer ( 890720 ) on Thursday August 06, 2009 @01:12PM (#28975949) Journal

      to create an application that both supported GPU optimization and could run naively on any system?

      Yes, that's the solution. Have your code run on any system, all too willing to be duped by street vendors, and blissfully unaware of the nefarious intentions of the guy waving candy from the back of the BUS.

      Oh... you meant running code natively... I see.

  • by fibrewire ( 1132953 ) on Thursday August 06, 2009 @12:50PM (#28975591) Homepage

    Ironically, Intel announced that they are going to stop outsourcing the GPUs paired with Atom processors and include the GPU + CPU in one package, yet nobody knows what happened to the dual-core Atom N270...

    • by avandesande ( 143899 ) on Thursday August 06, 2009 @03:14PM (#28978045) Journal

      Microsoft wouldn't allow licensing dual cores on netbooks.

      • by Cycon ( 11899 )

        Microsoft wouldn't allow licensing dual cores on netbooks.

        As far as I can tell, that only regards Windows XP.

        See this article [overclockers.com] (which, admittedly, is talking about a "nettop" box, not a netbook):

        ...first thing you see is that it runs on Windows Vista - XP under Microsoft's licensing terms for netbooks limited it to single core CPUs.

        Got anything which specifically states that OSes besides XP (support for which they've been trying to drop for some time now) are restricted with regard to dual cores?

      • Re: (Score:3, Interesting)

        by PitaBred ( 632671 )
        If that's not monopoly control, I don't know what is. A single company essentially telling another one what it can or can't develop or release?
  • Makes sense (Score:4, Interesting)

    by m.dillon ( 147925 ) on Thursday August 06, 2009 @12:55PM (#28975687) Homepage

    Things have been slowly moving in this direction already, since game makers have not been using available CPU horsepower very effectively. A little z-buffer magic and there is no reason why the object space couldn't be separated into completely independent processing streams.

    -Matt

  • I haven't read too much about OpenCL (just a few whitepapers and tutorials), but does anybody know if you can use both the GPU and CPU at the same time for the same kind of task? For example, with a single "kernel" that I want run 100 times, can I send 4 instances to the quad-core CPU and the rest to the GPU? If so, this would be a big win for AMD.

    • by jerep ( 794296 )

      I am pretty sure these are details for the implementation of OpenCL, not for client code. It is the very reason why libraries such as OpenGL/CL/AL/etc exist, so you don't have to worry about implementation details in your code.

      From what I know of the spec, you would just create your kernel, feed it data, and execute it, and the implementation will worry about sharing the work between the CPU and GPU to get optimal performance.

      However, I don't think it would be optimal to have all 4 cores of the CPU running on
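      In the OpenCL 1.0 host API, at least, the split is left to the host: you enumerate devices and decide which command queue gets which slice of the work. A rough sketch of the 4-to-the-CPU, 96-to-the-GPU split from the question above, assuming a cl_platform_id "platform" that exposes both device types and a hypothetical kernel built for both devices that takes (buffer, base_index); error handling omitted:

        /* Sketch: divide 100 work-items by hand between a CPU device and a
           GPU device on the same platform. "platform" and "kernel" are
           assumed to exist; no error handling. */
        cl_device_id cpu_dev, gpu_dev;
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &cpu_dev, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &gpu_dev, NULL);

        cl_device_id devs[2] = { cpu_dev, gpu_dev };
        cl_context ctx = clCreateContext(NULL, 2, devs, NULL, NULL, NULL);
        cl_command_queue cpu_q = clCreateCommandQueue(ctx, cpu_dev, 0, NULL);
        cl_command_queue gpu_q = clCreateCommandQueue(ctx, gpu_dev, 0, NULL);

        size_t cpu_count = 4, gpu_count = 96;  /* 4 for the quad-core, 96 for the GPU */
        cl_uint cpu_base = 0, gpu_base = 4;

        /* Argument values are captured at enqueue time, so the same kernel
           object can be queued twice with different base indices. */
        clSetKernelArg(kernel, 1, sizeof(cl_uint), &cpu_base);
        clEnqueueNDRangeKernel(cpu_q, kernel, 1, NULL, &cpu_count, NULL, 0, NULL, NULL);

        clSetKernelArg(kernel, 1, sizeof(cl_uint), &gpu_base);
        clEnqueueNDRangeKernel(gpu_q, kernel, 1, NULL, &gpu_count, NULL, 0, NULL, NULL);

        clFinish(cpu_q);
        clFinish(gpu_q);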

  • Overhyped (Score:5, Informative)

    by TheRaven64 ( 641858 ) on Thursday August 06, 2009 @01:00PM (#28975777) Journal
    Compiling OpenCL code as x86 is potentially interesting. There are two ways that make sense. One is as a front-end to your existing compiler toolchain (e.g. GCC or LLVM) so that you can write parts of your code in OpenCL and have them compiled to SSE (or whatever) code and inlined in the calling code on platforms without a programmable GPU. With this approach, you'd include both the OpenCL bytecode (which is JIT-compiled to the GPU's native instruction set by the driver) and the native binary and load the CPU-based version if OpenCL is not available. The other is in the driver stack, where something like Gallium (which has an OpenCL state tracker under development) will fall back to compiling to native CPU code if the GPU can't support the OpenCL program directly.

    Having a separate compiler that doesn't integrate cleanly with the rest of your toolchain (i.e. uses a different intermediate representation preventing cross-module optimisations between C code and OpenCL) and doesn't integrate with the driver stack is very boring.

    Oh, and the press release appears to be a lie:

    AMD is the first to deliver a beta release of an OpenCL software development platform for x86-based CPUs

    Somewhat surprising, given that OS X 10.6 betas have included an OpenCL SDK for x86 CPUs for several months prior to the date of the press release. Possibly they meant public beta.
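    One way to picture the "ship both" approach described above: probe for an OpenCL device at run time and fall back to the precompiled native path when nothing is found. A sketch only; opencl_vector_add() and native_vector_add() are hypothetical stand-ins for the two builds of the same kernel, and a real loader would dlopen the OpenCL library rather than link it directly.

      /* Sketch: try the OpenCL path, fall back to the native (e.g. SSE)
         build if no platform or GPU device is available. The two helpers
         are hypothetical stand-ins. */
      #include <CL/cl.h>

      extern void native_vector_add(float *out, const float *a,
                                    const float *b, int n);
      extern int  opencl_vector_add(cl_device_id dev, float *out,
                                    const float *a, const float *b, int n);

      void vector_add(float *out, const float *a, const float *b, int n)
      {
          cl_platform_id platform;
          cl_device_id   device;
          cl_uint        count = 0;

          if (clGetPlatformIDs(1, &platform, &count) == CL_SUCCESS && count > 0 &&
              clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1,
                             &device, NULL) == CL_SUCCESS &&
              opencl_vector_add(device, out, a, b, n) == 0)
              return;                      /* GPU path worked */

          native_vector_add(out, a, b, n); /* precompiled CPU fallback */
      }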

    • Possibly they meant public beta.

      I assume so. OpenCL for ATI cards is heaven-sent, since ATI seems to be getting nowhere with their custom shader language solutions, unlike NVidia, which has made heavy inroads with CUDA on the video codec front.
      I am rather sick of having a powerhouse which rivals the best nvidia cards and yet all the codecs use CUDA for video acceleration!

    • Yeah, CUDA does this already as far as I know. Kernels you write in their version of restricted C can be transparently called as CPU code if you don't have an available physical CUDA device.
    • Re: (Score:3, Informative)

      by tyrione ( 134248 )

      AMD is the first to deliver a beta release of an OpenCL software cross development platform for x86-based CPUs

      Source: http://developer.amd.com/GPU/ATISTREAMSDKBETAPROGRAM/Pages/default.aspx [amd.com]

      Being able to target both Windows and Linux is something outside Apple's platform scope.

  • by realmolo ( 574068 ) on Thursday August 06, 2009 @01:18PM (#28976029)

    Now that we have CPUs with literally more cores than we know what to do with, it makes sense to use those cores for graphics processing. I think that within a few years, we'll start seeing games that don't require a high-end graphics card - they'll just use a couple of the cores on your CPU. It makes sense, and is actually a good thing. Fewer discrete chips are better as far as power consumption and heat, ease of programming and compatibility are concerned.

    • by Pentium100 ( 1240090 ) on Thursday August 06, 2009 @01:32PM (#28976221)

      A dedicated graphics processor will be faster than a general purpose processor. Yes, you could use an 8 core CPU for graphics, or you could use a 4 year old VGA. Guess which one is cheaper.

    • by Khyber ( 864651 ) <techkitsune@gmail.com> on Thursday August 06, 2009 @01:42PM (#28976415) Homepage Journal

      Hey, my nVidia 9800GTX+ has over 120 processing cores of one form or another in one package..

      Show me an Intel offering or AMD offering in the CPU market with similar numbers of cores in one package.

      • Technology fail.

    • by SpinyNorman ( 33776 ) on Thursday August 06, 2009 @01:50PM (#28976591)

      For some games that'll be true, but I think it'll be a long time, if ever, before we see a CPU that can compete with a high-end GPU, especially as the bar gets higher and higher - e.g. physics simulation, ray tracing...

      Note that a GPU core/thread processor is way simpler than a general purpose CPU core, so MANY more can fit on a die. Compare an x86 chip with maybe 4 cores to something like an NVidia Tesla (CUDA) card, which starts with 128 thread processors and goes up to 960(!) in a 1U format. I think there'll always be that 10-100x factor more cores in a high-end GPU vs a CPU, and for apps that need that degree of parallelism/power the CPU will not be a substitute.

      • Actually, ray tracing would be an area where a multi-core CPU would help. There's some progress, but in contrast with scanline rendering, ray tracing is very GPU unfriendly. So, for photo-realism, the future might still be with the CPU.
    • Not any time soon (Score:5, Insightful)

      by Sycraft-fu ( 314770 ) on Thursday August 06, 2009 @01:59PM (#28976755)

      I agree that the eventual goal is everything on the CPU. After all, that is the great thing about a computer. You do everything in software, you don't need dedicated devices for each feature, you just need software. However, even as powerful as CPUs are, they are WAY behind what is needed to get the kind of graphics we do out of a GPU. At this point in time, dedicated hardware is still far ahead of what you can do with a CPU. So it is coming, but probably not for 10+ years.

      • The question is why? Ideology should not make this determination. Assuming the current trajectories continue (or close enough to what we've seen so far), by the time the CPU can do what we want, the GPU will still be able to do it faster and with less waste. Energy costs aren't likely to drop in the next 50 years, and the GPU applications (e.g. 3D modelling/lighting) that we've done with a CPU based approach (ray tracing) usually require 10x the hardware. If one GPU (drawing, for example 200 watts) can
        • by Miseph ( 979059 )

          Simplicity and size. The fewer components we need, and the smaller they can be, the better. Ultimately, if programmers didn't NEED to split up their code to run on different processors, they wouldn't, because it just makes life harder. Having one chip that handles everything makes that so, and having an API that brings us closer to a place where that makes intuitive sense is a logical progression toward that end.

    • Re: (Score:3, Informative)

      There are only two ways to do that:

      1. Some of the cores are specialized in the same way that current GPUs are: You may lose some performance due to memory bottlenecks, but you'll still have the specialized circuitry for doing quick vectored floating point math.
      2. You throw out the current graphics model used in 99% of 3D applications, replacing it with ray tracing, and lose 90% of your performance in exchange for mostly unnoticeable improvements in the quality of the generated graphics.

      Of course, you're reading

    • Re: (Score:2, Funny)

      And so, the wheel [catb.org] starts another turn.

    • Now that we have CPUs with literally more cores than we know what to do with, it makes sense to use those cores for graphics processing. I think that within a few years, we'll start seeing games that don't require a high-end graphics card- they'll just use a couple of the cores on your CPU.

      LOL. That's funny, because this is about exactly the opposite -- using the very impressive floating point number crunching power of the GPU to do the work that the CPU used to do. OpenCL is essentially an API for being

    • If history tells us anything, it's quite the opposite. For years, graphics cards have been getting more and more cores and applications (especially games or anything 3D) have come to rely on them much more than the CPU. I remember playing Half-life 2 with a 5 year old processor and a new graphics card...and it worked pretty well.

      The CPU folk, meanwhile, are being pretty useless. CPUs haven't gotten much faster in the past 5 years; they just add more cores. Which is fine from the perspective of a multipr

    • Except that GPU architecture is pretty different from that of a CPU. IANAE(xpert), but from what I understand the GPU is very, very, parallel compared to a CPU thanks to how easily parallelized most graphics problems are. Though CPUs are gaining more cores, I think that the difficulty in parallelizing many problems places a practical limit on the CPU's parallelism.

      That's not to say though that a GPU-type parallel core can't be integrated into the CPU package, however. I believe NVIDIA is doing some of th

      • Since when has NVidia sold CPU's?

        Intel and AMD are doing this, and NVidia is going to be left in the dust. Why do you think they are shifting some of their focus to ultra-high end parallel processing tasks? NVidia is slowly moving away from the desktop market, or at least are building a safety net in case they get pushed out of it. Who knows, maybe they'll team up with VIA to produce a third alternative to the CPU/GPU combo.

    • "Now that we have CPUs with literally more cores than we know what to do with,"

      For many problems, multi-core CPUs aren't even close to having enough power; that's why there's all the interest in utilizing GPU processing power.

      They are different ends of a spectrum: CPU generally=fast serial processing, GPU generally=slow serial, fast parallel. Some problems require fast serial processing, some require fast parallel processing and some are in between. Both are valuable tools and neither will replace t
    • "Now that we have CPUs with literally more cores than we know what to do with, it makes sense to use those cores for graphics processing."

      This comment is always trotted out by people who have no clue about hardware.

      CPUs doing graphics are bandwidth-limited by main memory (not to mention general architecture). Graphics requires insane bandwidth. GPUs have had way more memory bandwidth than modern CPUs for a long time. There is simply no way CPUs will ever catch up to GPUs because the GP

    • by mikael ( 484 )

      It would be like going back to the era of early DOS game programming where you just had the framebuffer, a sound function (sound), two keyboard input functions (getch/kbhit), and everyone wrote their own rendering code.

  • What's the story? (Score:3, Informative)

    by trigeek ( 662294 ) on Thursday August 06, 2009 @01:29PM (#28976189)
    The OpenCL spec already allows for running code on a CPU or a GPU; it's just registered as a different type of device. So basically, they are enabling compilation of the OpenCL programming language to x86? I don't really see the story here.
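    That is exactly how it appears in the API: a CPU shows up as just another cl_device_id whose CL_DEVICE_TYPE is CL_DEVICE_TYPE_CPU. A small sketch that lists whatever the first platform exposes; error handling trimmed:

      /* Sketch: enumerate every device on the first platform and print
         whether it registers as a CPU, a GPU, or something else. */
      #include <CL/cl.h>
      #include <stdio.h>

      int main(void)
      {
          cl_platform_id platform;
          cl_device_id   devices[8];
          cl_uint        n = 0;

          clGetPlatformIDs(1, &platform, NULL);
          clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 8, devices, &n);

          for (cl_uint i = 0; i < n && i < 8; i++) {
              cl_device_type type;
              clGetDeviceInfo(devices[i], CL_DEVICE_TYPE,
                              sizeof(type), &type, NULL);
              printf("device %u: %s\n", i,
                     type == CL_DEVICE_TYPE_CPU ? "CPU" :
                     type == CL_DEVICE_TYPE_GPU ? "GPU" : "other");
          }
          return 0;
      }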
  • UniversCL (Score:2, Interesting)

    by phil_ps ( 1164071 )
    Hi, I am working on an OpenCL implementation sponsored by Google Summer of Code. It is nearly done supporting the CPU and the Cell processor. This news has come as a blow to me. I have struggled so much with my open source project and now a big company is going to come and trample all over me. boo hoo. http://github.com/pcpratts/gcc_opencl/tree/master [github.com]
    • ... You were doomed to fail for multiple reasons. 'nearly done supporting the CPU and the Cell'. ... which CPU? ARM, x86, SPARC, PPC? Are you ignoring all the other implementations that already support OCL on x86?

      If this comes as a blow to you, you didn't do any research before you started and I find it really hard to believe you haven't come across the other existing implementations in your research for your own project.

  • AMD obviously has a vested interest in making their scheme an industry standard, so of course they'd want to support Larrabee with their GPGPU stuff. Larrabee has x86 lineage (of some sort, I'm not clear on exactly what or how), so they'd have to have at least some x86 support to be able to use their scheme on Larrabee. It seems to me that if they were going to bake some x86 support in there, they may as well add regular CPUs in as well (if you already wrote 90% of it, why not write the other 10%?).

    I don'

  • http://everything2.com/index.pl?node_id=1311164&displaytype=linkview&lastnode_id=1311164

    Exactly the same thing.

    I said EXACTLY!

    [wanders off, muttering and picking bugs out of beard]
  • Is AMD cleverly trying to undermine Intel's Larrabee threat? If this code can run abstracted enough that it doesn't matter what CPU/GPU is under the hood, this knocks out Larrabee's main selling point: x86 code.

    (Ars makes a similar point:)

    the fact that Larrabee runs x86 will be irrelevant; so Intel had better be able to scale up Larrabee's performance

    If AMD is working on an abstraction layer that lets OpenCL run on x86, could the reverse be in the works, having x86 code ported to run on CPU+GPGPU as one combined processing resource? AMD may be trying to make its GPUs more like what Intel is trying to achi
