
Khronos Releases OpenCL Spec

kpesler writes "Today, the Khronos Group released the OpenCL API specification (which we discussed earlier this year). It provides an open API for executing general-purpose code kernels on GPUs — so-called GPGPU functionality. Initially bolstered by Apple, the API garnered the support of major players including NVIDIA, AMD/ATI, and Intel. Motivated by inclusion in OS X Snow Leopard, the spec was completed in record time — about half a year from the formation of the group to the ratified spec."

  • what does it DO? (Score:4, Interesting)

    by Bizzeh ( 851225 ) on Tuesday December 09, 2008 @07:09AM (#26045017) Homepage

    Is this simply a spec that people expect ATI and Nvidia to conform to? Or is this another API outside of CUDA and CAL that wraps the two up, so that a single API can execute code on all GPGPUs?

    • Re:what does it DO? (Score:5, Informative)

      by u38cg ( 607297 ) <calum@callingthetune.co.uk> on Tuesday December 09, 2008 @07:15AM (#26045051) Homepage
      No, it basically turns your graphics card into a general-purpose floating-point number cruncher, which is potentially useful for all sorts of things (although I predict Moore's Law will, in a few years, render it as obsolete as the maths co-processor).
      • Re:what does it DO? (Score:5, Interesting)

        by moogord ( 904702 ) on Tuesday December 09, 2008 @07:32AM (#26045139)
        It has applications beyond that: the SIMD architecture of GPUs makes them almost perfect as a hugely powerful, non-general-purpose processor. Do you want to use this to handle AI? No. Do you want to use this to enable millions of crates to go flying every which way when you fire a rocket? Yes. It's essentially what GLSL is to Nvidia's Cg, but instead of Cg it's an open (that's the important thing) CUDA replacement.
        • by deniable ( 76198 )
          If there are enough processing elements for SIMD, you could use it for the hard-core matrix operations in engineering analysis. I'm wondering what this could do for things like FEA.
          • Re:what does it DO? (Score:4, Informative)

            by volsung ( 378 ) <stan@mtrr.org> on Tuesday December 09, 2008 @09:56AM (#26045807)

            CUDA is already doing great things in molecular dynamics, which bears some similarity to FEA:

            HOOMD Benchmarks [ameslab.gov]

            A single 8800 GTX reaching 75% of the performance of a 32 node cluster isn't bad. I imagine the GTX 280 would easily beat the cluster.

          • by mrchaotica ( 681592 ) * on Tuesday December 09, 2008 @11:35AM (#26046999)

            The two major issues to be solved with that are that you need double-precision hardware (I can't remember if the Nvidia 9000 series supports that or not) and, more importantly, you need to write GPU algorithms for solving sparse matrices.
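
            For what it's worth, expressing the sparse part in the OpenCL kernel language isn't the hard bit; the hard bits are the memory access pattern and, as you say, the doubles. A naive sketch of a CSR sparse matrix-vector product, one work-item per row (the kernel name and the usual CSR array names here are just illustrative, not anything from the spec):

              /* y = A*x with A stored in compressed sparse row (CSR) form. */
              __kernel void spmv_csr(__global const float *val,    /* nonzero values               */
                                     __global const int   *col,    /* column index of each value   */
                                     __global const int   *rowptr, /* where each row starts in val */
                                     __global const float *x,
                                     __global float       *y,
                                     const int nrows)
              {
                  int row = get_global_id(0);
                  if (row < nrows) {
                      float sum = 0.0f;
                      for (int j = rowptr[row]; j < rowptr[row + 1]; ++j)
                          sum += val[j] * x[col[j]];
                      y[row] = sum;
                  }
              }

            An iterative solver (conjugate gradient or the like) built on top of something like this can keep the vectors on the card between iterations, which is where the win over the CPU would come from.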

            • Re: (Score:1, Informative)

              by Anonymous Coward

              I'm about 99% sure that the 8800/9800 series are single precision, and the GTX 260/280 have double precision. I'd guess everything beyond the 260/280 will have double precision, but you never know.

              • Re: (Score:1, Informative)

                by Anonymous Coward

                Oh, and you need floating point exceptions, not silent over/underflow.
                Otherwise you get nonsense out the other end and have to start over.

                I've yet to see a graphics card that's fully IEEE-compliant.

                • If Nvidia or ATI really wants to start pushing their GPUs as plug-in math coprocessors, they'll probably start designing them to cater for that use too.

        • Actually this would be GREAT for AI. Game AI? I have no idea. But, using a floating point co-processor like this you could do the calculations directly on, say, a robot, instead of having to send the data back to a mainframe for processing. Also much cheaper than buying a really fast CPU for the same tasks.

          This would be much faster than a general-purpose CPU for all sorts of machine learning concepts like hidden Markov models, computer vision, speech recognition... if only I had one of the cards...
        • Re:what does it DO? (Score:4, Informative)

          by chris_oat ( 5511 ) on Tuesday December 09, 2008 @11:33AM (#26046971) Homepage

          Do you want to use this to handle AI?

          It depends on what kind of AI you are talking about. Pathfinding actually maps nicely to the GPU. AMD released a demo that showcases this by running a pathfinding simulation on the GPU for several tens of thousands of agents. Read all about it in Chapter 3 of the Advanced Real-Time Rendering course notes [amd.com] from SIGGRAPH 2008. Demo and screenshots here: Froblins Demo [amd.com]

        • I plan on using it to model proteins for my Science Olympiad team without upgrading the processor on our limited team budget.
        • Actually, it could handle AI. If your AI project involves neural networks, this could make them run very fast. It could also do some rather simple everyday tasks well, such as transcoding media files, adjusting the color of images, and the first levels of processing for voice recognition. All of these tasks involve massive numbers of simple calculations.

        • An open spec for crunching a shitload of calculations on a GPU, on a card that handles dedicated raster graphics...

          Call it far-fetched, but... do I smell the opportunity of real-time ray-traced games running on top of a rasterised 3D desktop?

      • by malaba ( 9813 ) on Tuesday December 09, 2008 @09:54AM (#26045783)

        They just got integrated into the main chip

        by the 486 era, if I remember correctly.

        By that time there were enough transistors to just put everything inside the same silicon chip: faster, cheaper.

        Today, every CPU has an IEEE floating point unit.
        To say we don't have maths co-processors is misleading.

        • by Hatta ( 162192 )

          And eventually what happened with the FPU will happen with the GPU.

        • by SendBot ( 29932 )

          Intel had 486s without math coprocessors enabled (the 486SX), though the FPU was actually on the die but disabled (either intentionally or because the chip's math coprocessor was defective). They had a 487 you could couple with your 486SX that was still basically the same chip as the 486DX, and I think it may have just disabled the original 486SX and run everything on that one processor. From what I've heard, all that craziness was due to marketing rather than technical reasons.

      • by /ASCII ( 86998 ) on Tuesday December 09, 2008 @10:35AM (#26046257) Homepage

        The math co-processor wasn't made obsolete. It became so vital to system performance that Intel and friends started including it on the CPU proper. These days, they call it an FPU.

        • The math co-processor wasn't made obsolete. It became so vital to system performance that Intel and friends started including it on the CPU proper. These days, they call it an FPU.

          A more cynical view is that as math coprocessors became more popular, Intel started losing market share to alternative coprocessor vendors, which it ended by putting the co-processor onto the CPU, making it much harder for alternatives to compete.

      • by Creepy ( 93888 )

        Um, the math co-processor never became obsolete; they started building it on-die rather than in a separate package.

        And yes, it is similar to CUDA and CAL, but designed for any general-purpose parallel computing, not just GPUs, from what I can tell.

    • Re:what does it DO? (Score:5, Informative)

      by san ( 6716 ) on Tuesday December 09, 2008 @07:25AM (#26045107)

      Is this simply a spec that people expect ATI and Nvidia to conform to? Or is this another API outside of CUDA and CAL that wraps the two up, so that a single API can execute code on all GPGPUs?

      It's the latter: a single API + kernel language for any GPU. Because both NVIDIA and AMD are represented in the contributor list, it actually has a chance of being adopted.
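
      To make "API + kernel language" concrete, here's a minimal sketch of the usual pattern (not lifted from the spec; the kernel name vadd and the lack of error checking are mine): the kernel is written in a C dialect and compiled at runtime by whichever vendor's driver is present, and the host side is plain C calls into the OpenCL API.

        /* vadd.c -- minimal OpenCL 1.0 host program (assumes a vendor OpenCL SDK). */
        #include <stdio.h>
        #include <CL/cl.h>                    /* <OpenCL/opencl.h> on Mac OS X */

        /* The kernel language: C with address-space qualifiers and work-item queries. */
        static const char *src =
            "__kernel void vadd(__global const float *a,   \n"
            "                   __global const float *b,   \n"
            "                   __global float *c)         \n"
            "{                                             \n"
            "    int i = get_global_id(0);                 \n"
            "    c[i] = a[i] + b[i];                       \n"
            "}                                             \n";

        int main(void)
        {
            enum { N = 1024 };
            float a[N], b[N], c[N];
            for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

            cl_platform_id plat;  cl_device_id dev;  cl_int err;
            clGetPlatformIDs(1, &plat, NULL);
            clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

            cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
            cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

            /* Runtime compilation is the vendor-neutral part: the same source
               string gets built for whatever device was found above. */
            cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
            clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
            cl_kernel k = clCreateKernel(prog, "vadd", &err);

            cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(a), a, &err);
            cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(b), b, &err);
            cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(c), NULL, &err);

            clSetKernelArg(k, 0, sizeof(cl_mem), &da);
            clSetKernelArg(k, 1, sizeof(cl_mem), &db);
            clSetKernelArg(k, 2, sizeof(cl_mem), &dc);

            size_t global = N;                /* one work-item per array element */
            clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
            clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof(c), c, 0, NULL, NULL);

            printf("c[10] = %f\n", c[10]);    /* expect 30.0 */
            return 0;
        }

      The point being: nothing above names a GPU vendor anywhere, which is exactly what you can't say about CUDA or CAL code.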

      • Re:what does it DO? (Score:5, Informative)

        by mikrorechner ( 621077 ) on Tuesday December 09, 2008 @09:28AM (#26045611)

        It's the latter: a single API + kernel language for any GPU. Because both NVIDIA and AMD are represented in the contributor list, it actually has a chance of being adopted.

        According to heise.de [heise.de] (in German), nVidia says that OpenCL applications will run seamlessly on any GPU with a CUDA-compliant driver. Does anyone know if that applies to the proprietary Linux drivers?

        If this really takes off, how long until the hardworking people from the x264 or VLC or ffmpeg or MPlayer projects can write an H.264/AVC decoder that uses the GPU?

        • Re: (Score:3, Informative)

          by Anonymous Coward

          Yes, there is a CUDA driver and SDK for Linux on NVIDIA's site: http://www.nvidia.com/object/cuda_get.html

        • by pato101 ( 851725 )

          Does anyone know if that applies to the proprietary Linux drivers?

          The proprietary Linux drivers do CUDA, don't they? If I'm not wrong, Nvidia's proprietary Linux drivers don't lack any features... so why should they start lacking now?

        • Who cares about the decoder? A 2.0 GHz Core 2 Duo can handle decoding without breaking a sweat. Encoding 20 minutes of 1080p H.264 video, however, takes a fair chunk of time. Encoding is where we want to see some action. OK, it's a nice _extra_ to free up some CPU time when watching a video, but the typical use case is that while you're watching a video you don't have much else running at high realtime priority. So... encoding... make it faster, make it smooth, make me cum!
          • Who cares about the decoder? A 2.0 GHz Core 2 Duo can handle decoding without breaking a sweat.

            That's all well and good when we're talking about desktop systems. But think about MythTV media center PCs - if you could combine an Atom CPU and a passively cooled nVidia or AMD GPU, a super-silent, HDTV-capable home-grown set-top box would be possible.

            Of course, an OpenCL encoder would help, too, for this kind of setup - broadcast TV encoding, for example.

            • Re: (Score:2, Informative)

              This isn't necessary for that because modern GPUs already have dedicated hardware for video decode [slashdot.org].

              • by drdaz ( 994457 )

                That's an nVidia-only API...

                • by BLKMGK ( 34057 )

                  And it appears to be profile-dependent: my videos don't match their canned profile, nor do I see myself changing them. My fingers are crossed that they can expand that...

            • A good point, but I was thinking more along the lines that since encoding is an order of magnitude slower than decoding, having an order of magnitude more processing power would be put to good use.. ;)
          • Oddly, the XBMC folks have found that a 3 GHz dual-core Intel CPU is what it takes to do high-bitrate HD video. So I'm sorry to burst your bubble, but some GPU help would certainly be helpful! I'll agree that encoding takes far longer, but playback shouldn't require such beefy hardware either, and encoding need only be done once, as opposed to decoding, which happens on every machine whose user wishes to see the content. Have you looked at the sorts of speed-ups the NVIDIA code has been giving? When you see a m

        • by ConanG ( 699649 )
          There are already CUDA H.264 encoders and decoders. Don't know of any open source tools yet, though.

          http://www.badaboomit.com/?q=node/4 [badaboomit.com]
        • VLC is fine if you don't care about preserving the quality of the format, or if you're too braindead to install proper codecs. Or if you want your integrated subtitles to look like shit, unless you run a nightly build with a few tweaks.

          Honestly, I've found that Zoom Player's codec downloader and auto-configured silent install work the best for everyone, from the common person to the hardcore encoder to the obscure format enthusiast. It's a nice little stand-alone exe that, when run, will actually update your

        • Just out of curiosity, what about ATI?

          (9600GT still looks like a good deal, but the HD4670 might just be slightly better for me at the moment... But do these include something like PhysX? Is it even a consideration?)

        • I'm more interested in when Matlab or the rendering-engines for applications like Maya and Pro-Engineer will add support for OpenCL. =)

      • Microsoft isn't one of the companies listed, which is probably a good thing, since they have a tendency to hold back progress and then release their own proprietary version (à la OpenGL).

  • by Anonymous Coward on Tuesday December 09, 2008 @07:12AM (#26045037)

    There's no way I'm writing a single line of CUDA code when it only works with nVidia hardware, and I think there are a lot of other people like me. This could open up GPGPU programming to a much wider group of programmers.

  • by elh_inny ( 557966 ) on Tuesday December 09, 2008 @07:17AM (#26045059) Homepage Journal

    While I see quite a few members that I wasn't expecting (Creative Labs), my concern is that some companies that should definitely be participating in this aren't.
    By that I mean graphics chip makers such as Via or S3; for now it seems we're tied to the major players (nVidia, AMD, Intel) for desktop/laptop implementations, and that's never good for the consumer.

    Either way, the spec itself is a great initiative and I can't wait to get my hands on beta builds of Snow Leopard to try it out...

    • Re: (Score:3, Interesting)

      by elh_inny ( 557966 )

      Oops... I just noticed S3 is on the list; they managed to get a lot of companies on board after all.

      • Well, it doesn't mean a thing. You know, M$ was on the ODF OASIS board for quite some time too...

        To me, the litmus test for OpenCL would be an independent (of the video card vendors), portable implementation which runs on Linux.

        Participation doesn't say much about how the spec will develop.

        To put it another way: I'm waiting for a reaction from the Mesa and X.org folks. If that reaction is positive, the news will get me excited.

        • Re: (Score:3, Interesting)

          Open source means all of us. X.org and Mesa don't have a magic cow that shits code; it has to be written by people in their spare time (and X.org is stagnating due to a lack of developer interest). Nobody on the OpenCL list particularly gives a shit about Linux, and adoption will happen with or without Linux or open source. Instead of waiting for other people to tell you how to feel, maybe you should sit down and read the spec.

        • I don't think your litmus test is reasonable. Because the "big 3" video vendors signed on, virtually all modern Linux desktops will end up with OpenCL support, and in the case of Intel and perhaps AMD, with open-source drivers. nVidia will hold out on open-sourcing, but not many people will care because the performance will be good anyway.

          Having an independent implementation here is not important, and not at all useful, given that you'd be independent for the sake of it, rather than letting the vendor wri

        • What does this have to do with the Mesa and X.org teams? They're graphics guys, and this isn't for graphics. I'd like to see inclusion in GCC (auto-vectorization maybe?) though.

          • Access to the video card's resources has to be synchronized.

            Unless, of course, you are going to "init 1" just to run your suppa-puppa OpenCL application.

            You can't avoid X.org, because it is the driver of the video card.

            You will run into the Mesa folks, because they are closer than anybody else to what the Khronos Group is doing.

    • Does Via make anything with enough power for this to matter? All their graphics (and, for that matter, everything else) seem to be bare-minimum-or-less hardware that competes on price and power.

      Don't get me wrong, I love them* but I have a hard time imagining it'd be worth anything to exploit the power in their graphics.

      *Except that they don't make a server quality chipset to go with their processors. That pisses me off.

      • by tyrione ( 134248 )

        Does Via make anything with enough power for this to matter? All their graphics (and, for that matter, everything else) seem to be bare-minimum-or-less hardware that competes on price and power.

        Don't get me wrong, I love them* but I have a hard time imagining it'd be worth anything to exploit the power in their graphics.

        *Except that they don't make a server quality chipset to go with their processors. That pisses me off.

        They own S3.

    • Isn't it an open standard?
      Then those companies can implement support for their chips too...
      They just can't influence how the standard looks or works.

  • Great! (Score:4, Insightful)

    by johannesg ( 664142 ) on Tuesday December 09, 2008 @07:19AM (#26045077)

    Now, if only they could do the same for OpenGL, which is needed by a lot more people and is, in my opinion, a lot more important for anyone who wishes to be free of Windows.

    • Now, if only they could do the same for OpenGL.

      Care to elaborate?

      OpenGL is quite well supported on both Mac OS X and Linux. So with OpenGL you are already free.

      • Re:Great! (Score:5, Insightful)

        by robthebloke ( 1308483 ) on Tuesday December 09, 2008 @10:08AM (#26045953)
        I think the OP meant, "If they could finally get around to ratifying an OpenGL 3.1 specification in 6 months (instead of being 2 or 3 years late, as GL 3.0 was); turn it into a useful standard that people actually want to use (which GL 3.0 is not); and finally make good on all the things we were promised for 3.0, which they ended up ditching at the last minute. If that happens, Linux/Mac OpenGL developers around the world will feel less dirty than they do right now."

        He wasn't implying anything about Windows + GL as such, more making the observation that OpenGL is vital to Mac/Linux - and as such those OSes are very much at the mercy of the Khronos Group's actions (or, more accurately, no action at all, as was the case with GL3).
        • I think the OP meant, "If they could finally get around to ratifying an OpenGL 3.1 specification in 6 months (instead of being 2 or 3 years late, as GL 3.0 was); turn it into a useful standard that people actually want to use (which GL 3.0 is not); and finally make good on all the things we were promised for 3.0, which they ended up ditching at the last minute. If that happens, Linux/Mac OpenGL developers around the world will feel less dirty than they do right now."

          He wasn't implying anything about Windows + GL as such, more making the observation that OpenGL is vital to Mac/Linux - and as such those OSes are very much at the mercy of the Khronos Group's actions (or, more accurately, no action at all, as was the case with GL3).

          Thank you sir. That is indeed exactly what I meant, except that you phrased my frustrations a lot better than I did.

        • by tyrione ( 134248 )

          I think the OP meant, "If they could finally get around to ratifying an OpenGL 3.1 specification in 6 months (instead of being 2 or 3 years late, as GL 3.0 was); turn it into a useful standard that people actually want to use (which GL 3.0 is not); and finally make good on all the things we were promised for 3.0, which they ended up ditching at the last minute. If that happens, Linux/Mac OpenGL developers around the world will feel less dirty than they do right now."

          He wasn't implying anything about Windows + GL as such, more making the observation that OpenGL is vital to Mac/Linux - and as such those OSes are very much at the mercy of the Khronos Group's actions (or, more accurately, no action at all, as was the case with GL3).

          The 6 months for OpenCL builds on 3 or more years of Apple's work being fleshed out; those 6 months produced a fleshed-out spec that satisfies all of the big 3 GPU vendors and the more specialized vendors. Perhaps now that this is done, Apple will do more of the heavy lifting and accelerate OpenGL 3.1 so it is done soon as well, seeing how big a role it will play in Snow Leopard alongside OpenCL.

        • by jerep ( 794296 )

          The OpenCL spec is also way shorter than the OpenGL spec, and so are the header files. If you want to compare the time it takes to bring specifications out, you have to compare the resulting size of those specifications.

    • Actually, that's kinda the point. OpenGL needed to go much more general-purpose (read: push a load of vertices and then do general-purpose computing on them to turn them into colours). OpenCL is rather more flexible than CUDA in that it can read from vertex buffer objects and write to framebuffers, which means that it could sensibly be used for "software" graphics engines.
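
      To make the vertex-buffer point concrete, here's a rough host-side sketch of how that sharing works through the cl_khr_gl_sharing extension (the same cl_gl.h header mentioned elsewhere in this thread). It assumes a GL context and VBO already exist, and the kernel name "deform" plus the wrapper function are made up for illustration:

        #include <GL/gl.h>        /* glFinish */
        #include <CL/cl.h>
        #include <CL/cl_gl.h>     /* clCreateFromGLBuffer and friends */

        /* Assumes: ctx was created for sharing with the current GL context,
           q is a command queue on that context, and k is an already-built
           kernel (hypothetically named "deform") taking one __global float4*
           argument. vbo is an existing GL vertex buffer object holding
           nverts float4 positions. */
        void deform_vbo(cl_context ctx, cl_command_queue q, cl_kernel k,
                        unsigned int vbo, size_t nverts)
        {
            cl_int err;

            /* Wrap the GL buffer as a CL memory object -- no copy is made. */
            cl_mem verts = clCreateFromGLBuffer(ctx, CL_MEM_READ_WRITE, vbo, &err);

            glFinish();                                 /* GL must be done with the buffer */
            clEnqueueAcquireGLObjects(q, 1, &verts, 0, NULL, NULL);

            clSetKernelArg(k, 0, sizeof(cl_mem), &verts);
            clEnqueueNDRangeKernel(q, k, 1, NULL, &nverts, NULL, 0, NULL, NULL);

            clEnqueueReleaseGLObjects(q, 1, &verts, 0, NULL, NULL);
            clFinish(q);                                /* hand the buffer back to GL for drawing */

            clReleaseMemObject(verts);
        }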

    • You've got that exactly backwards.

      It's OpenCL that's trying to follow in the footsteps of OpenGL, not vice versa.

      OpenGL is an open specification that has many implementations taking rather good advantage of graphics hardware! ;-)

      OpenCL wants to establish a similar standard, but for GPU-based kernel execution rather than graphics rendering.

  • Looking at http://www.khronos.org/registry/cl/api/1.0/cl_gl.h [khronos.org], they are using the CL prefix. This will cause a huge headache for existing code that uses the ClanLib SDK. http://clanlib.org/docs/clanlib-0.9.0/reference/modules.html [clanlib.org]
  • Actually, I'm glad to hear this. With an IGP plus Crossfire/SLI with dual-GPU cards in a quad arrangement, one could have an inexpensive (relatively speaking) supercomputer under their desk. Throw in the upcoming quad-cores on 45nm fabrication, and now is a good time to be into computers.

  • Now, work on getting double-precision.

    • Re: (Score:1, Informative)

      by Anonymous Coward

      It's in the spec as an extension. The feature will appear when hardware implements it.
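
      For reference, that extension is cl_khr_fp64: a kernel opts in with a pragma, and the host can check whether the device advertises it before building. A small sketch (the daxpy kernel name and the crude strstr check are just illustrative):

        #include <string.h>
        #include <CL/cl.h>

        /* Kernel source: opting in to doubles only compiles on devices that support it. */
        static const char *src =
            "#pragma OPENCL EXTENSION cl_khr_fp64 : enable   \n"
            "__kernel void daxpy(double a,                   \n"
            "                    __global const double *x,   \n"
            "                    __global double *y)         \n"
            "{                                               \n"
            "    int i = get_global_id(0);                   \n"
            "    y[i] += a * x[i];                           \n"
            "}                                               \n";

        /* Host side: does the device claim the extension at all? */
        int device_has_fp64(cl_device_id dev)
        {
            char ext[4096];   /* extension string; 4 KB is plenty in practice */
            clGetDeviceInfo(dev, CL_DEVICE_EXTENSIONS, sizeof(ext), ext, NULL);
            return strstr(ext, "cl_khr_fp64") != NULL;
        }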

  • Plugins for ffmpeg, or any other codec for that matter, could provide a huge boost for Linux. What about Blender or YafRay support?

    There would be a huge uptake of Linux by video editors and 3D graphics artists.
