OpenGL 4.0 Spec Released

tbcpp writes "The Khronos Group has announced the release of the OpenGL 4.0 specification. Among the new features: two new shader stages that enable the GPU to offload geometry tessellation from the CPU; per-sample fragment shaders and programmable fragment shader input positions; drawing of data generated by OpenGL, or external APIs such as OpenCL, without CPU intervention; shader subroutines for significantly increased programming flexibility; 64-bit, double-precision, floating-point shader operations and inputs/outputs for increased rendering accuracy and quality. Khronos has also released an OpenGL 3.3 specification, together with a set of ARB extensions, to enable as much OpenGL 4.0 functionality as possible on previous-generation GPU hardware."
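
For the curious, here is a minimal sketch of what some of these new entry points look like from C. This is illustration only: it assumes a GL 4.0 context with entry points loaded by something like GLEW, and the program handle, uniform location, vertex count, and subroutine name are hypothetical placeholders.

```c
/* Sketch only: assumes a GL 4.0 context and a loader such as GLEW.
   prog, dloc, vertCount and "shadeDiffuse" are hypothetical names. */
#include <GL/glew.h>

void draw_patches(GLuint prog, GLint dloc, GLsizei vertCount)
{
    glUseProgram(prog);

    /* Tessellation: the two new shader stages consume patches, not triangles. */
    glPatchParameteri(GL_PATCH_VERTICES, 3);

    /* Shader subroutines: select a fragment-shader routine by name at runtime. */
    GLuint idx = glGetSubroutineIndex(prog, GL_FRAGMENT_SHADER, "shadeDiffuse");
    glUniformSubroutinesuiv(GL_FRAGMENT_SHADER, 1, &idx);

    /* 64-bit shader operations: set a double-precision uniform. */
    glUniform1d(dloc, 0.1234567890123456789);

    glDrawArrays(GL_PATCHES, 0, vertCount);
}
```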
  • by Thunderbird2k ( 1753946 ) on Thursday March 11, 2010 @12:26PM (#31439244)
    To give an idea to non-OpenGL developers: OpenGL 4.0 closes the feature gap with Direct3D 11. If you want to use OpenGL 4.0, you will need to wait a couple of weeks for drivers to come out. In the case of Nvidia, the drivers will launch together with their new GTX4*0 GPUs, which are the first Nvidia GPUs with Direct3D 11/OpenGL 4.0 support. AMD might release new drivers before Nvidia, since their hardware is already Direct3D 11 capable.
  • by binarylarry ( 1338699 ) on Thursday March 11, 2010 @12:28PM (#31439290)

    It's not really a huge problem in practice.

    All the major graphics IHVs provide that extension anyway. It would be nice if it were in GL's core spec, but since it's included on any device that matters, it's not a practical concern.
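
    To illustrate why this is workable in practice: a program can probe the extension list at runtime and fall back if the extension is missing. A minimal sketch, assuming a GL 3.0+ context; the S3TC name here is just an example, since the thread does not name the extension.

```c
/* Sketch: probe the extension list at runtime (GL 3.0+ style).
   Assumes a current context; the extension name is only an example. */
#include <string.h>
#include <GL/glew.h>

static int has_extension(const char *name)
{
    GLint i, n = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &n);
    for (i = 0; i < n; i++) {
        const GLubyte *ext = glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && strcmp((const char *)ext, name) == 0)
            return 1;
    }
    return 0;
}

/* usage: if (!has_extension("GL_EXT_texture_compression_s3tc")) use_fallback(); */
```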

  • Re:Hardware support (Score:3, Informative)

    by binarylarry ( 1338699 ) on Thursday March 11, 2010 @12:31PM (#31439334)

    It's most similar to D3D 11.

    DirectX is a larger set of development technologies and APIs (most of which have been deprecated). Direct3D is its direct analog to OpenGL.

  • by Again ( 1351325 ) on Thursday March 11, 2010 @12:38PM (#31439474)

    DirectX won, because it does sound and HID input handling, and because it's on every PC sold to every mouthbreathing, Best Buy shopping, banana eating customer.

    I wouldn't be so quick to say that DirectX won. The Xbox 360 is the only current-generation console that uses DirectX.

  • by Anonymous Coward on Thursday March 11, 2010 @12:51PM (#31439722)

    and you can use OpenAL if you want to have the same sound effects engine on windows

    Especially on Windows Vista and 7, since hardware DirectSound acceleration doesn't exist anymore LOL

  • by H4x0r Jim Duggan ( 757476 ) on Thursday March 11, 2010 @01:00PM (#31439878) Homepage Journal

    > it's not a practical concern.

    According to the references linked from that en.swpat.org page, it seems the developers of the free software Mesa project think it's indeed a practical concern.

  • by binarylarry ( 1338699 ) on Thursday March 11, 2010 @01:11PM (#31440074)

    How many products are shipped with Mesa as an important, primary component?

    If you're using OpenGL, 99% of the products are going to want real hardware acceleration, not Mesa.

    Mesa is a great project though, don't get me wrong.

  • by TheRaven64 ( 641858 ) on Thursday March 11, 2010 @01:25PM (#31440262) Journal
    Mesa is used in a lot of X.org drivers. It provides the OpenGL state tracker for Gallium3D, so it will be used a lot more in future.
  • by medv4380 ( 1604309 ) on Thursday March 11, 2010 @01:30PM (#31440344)
    Really?

    Given that the PC gaming market is a joke compared to the console market, I think DirectX is rather meaningless.

    When the Top 50 [vgchartz.com] selling games worldwide contain only three PC games (The Sims, World of Warcraft, and StarCraft), it's time to say that DirectX on the PC is overrated.

    Since the Wii and PS3 [wikipedia.org] use custom, modified versions of OpenGL for their hardware, I'd also have to side with OpenGL as at least being relevant to professional games.

  • by binarylarry ( 1338699 ) on Thursday March 11, 2010 @01:40PM (#31440484)

    What I meant is loading a picture and using it as a texture.

    Oh, so how does this even matter from a language/platform/execution standpoint? In the case of loading a texture from disk, you're going to be limited by IO wait anyway, which means even something like Bash would work for initiating the transfer and waiting until it finishes.

    Interesting. Still, you're going to need to read a model from a file or create its geometry.

    Again, you're talking about IO wait, which isn't really limited by your application's execution speed anyway. I'm sure you knew that though, you seem like a very experienced and capable programmer.

    Yeah, but I wouldn't advocate Java for 'real time' apps or the kind of geometry processing OpenGL requires (which is what you'll probably be doing apart from the OpenGL triangle demo)

    Your application doesn't usually "process" geometry, in any sane application you just send off a big chunk of data to the server and the OpenGL implementation handles it from there. Regardless, Java is fast, so if you're generating the geometry it's fine anyway. Java lets you use OO development techniques and still get great performance and it works fine for "real time" applications.

    Please type 'java floating point' into Google and check the first article

    I hate to be the one to break this to you, but floating point is fraught with these kinds of issues. It's not a data type to be used for any application requiring exact calculations, which is why fixed-point alternatives exist.

    Besides that, OpenGL is going to mangle your floats and turn them into yet another representation on the GPU server side anyway.

    I mean, look at SIMD with something like SSE on an x86 chip. It will also *mangle* your floating-point values, chopping off lots of data and returning skewed values. It's just the nature of floating point.
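
    The point generalizes beyond Java: rounding is a property of the IEEE 754 format itself, not of any one language, as a two-line C demo shows.

```c
/* Rounding is inherent to IEEE 754, not specific to any one language. */
#include <stdio.h>

int main(void)
{
    printf("%.17g\n", 0.1 + 0.2);          /* prints 0.30000000000000004 */
    printf("%.1f\n", (double)16777217.0f); /* prints 16777216.0: a 32-bit float
                                              cannot represent 2^24 + 1 exactly */
    return 0;
}
```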

  • by malloc ( 30902 ) on Thursday March 11, 2010 @02:07PM (#31440918)

    Yeah, but anyone using OpenGL with X is going to be using either the Nvidia proprietary drivers or ATI proprietary drivers.

    The OSS offerings do not provide nearly the same level of performance, unfortunately.

    So again, from a real world practical standpoint, Mesa isn't in use anyway.

    Unless you meant to say "OpenGL 3.0", this is absolutely not accurate. Has your "real world" been limited to workstation CAD and/or heavy gaming users? Those are the only groups where binary non-Mesa drivers are used almost universally, and they are a minority. Intel, which has over half the graphics market [ngohq.com], uses only Mesa. Your default Fedora and upcoming Ubuntu 10.04 installs use Mesa for both AMD and Nvidia chips. AMD actively supports the open driver and is working to make it the main driver.

    The continued development on Gallium points to Mesa gaining more traction. I think the trend is for binary drivers to become less and less common in the future.

    -Malloc

  • by Kjella ( 173770 ) on Thursday March 11, 2010 @02:38PM (#31441392) Homepage

    AMD actively supports the open driver and is working to make that the main driver.

    They are working on an open driver, yes, but they are not looking to replace the proprietary driver in the foreseeable future. For a number of reasons - not least the tons of optimization work going into the main driver - they expect open-source 3D performance to top out at 60-70% of Catalyst's by keeping a simple structure. That is still far better than the gap between accelerated and unaccelerated rendering, where the latter can deliver <1% of the performance.

  • Re:Hardware support (Score:3, Informative)

    by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Thursday March 11, 2010 @03:32PM (#31442280) Homepage Journal

    Nintendo requires a new Wii developer to have previously published commercial titles on another platform, as well as a "secure office facility" - a hurdle that most micro-ISVs cannot clear. 2D Boy pretty much had to cheat Nintendo in order to qualify for a Wii devkit without the overhead of leasing an office.

  • by Anonymous Coward on Thursday March 11, 2010 @04:40PM (#31443744)

    C++ a superior language? LOL WUT?

    Considering it does everything Java does and more, and faster, yes, it is.

  • by JesseMcDonald ( 536341 ) on Thursday March 11, 2010 @04:42PM (#31443796) Homepage

    I expect that the idea is that instead of calls like glClearBuffer(...) which take their context from the program's global environment, you'd have calls like glClearBuffer(context, ...). The point of this would be to make it easier for a given program to work with multiple contexts at once, e.g. for mixing render-to-texture with normal rendering. (Note: I am not an OpenGL expert, by any means.)

    I'm certain no one is suggesting that the GPU become stateless, just the API.
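
    A sketch of the contrast being described: glClearBufferfv is real (GL 3.0), while GLcontext and glClearBufferCtx are invented names, purely to illustrate the idea of a stateless API.

```c
#include <GL/glew.h>

/* Invented for illustration - no such type or entry point exists in OpenGL. */
typedef struct GLcontext GLcontext;
void glClearBufferCtx(GLcontext *ctx, GLenum buffer, GLint drawbuffer,
                      const GLfloat *value);

static const GLfloat black[4] = { 0.0f, 0.0f, 0.0f, 1.0f };

void clear_examples(GLcontext *ctx)
{
    /* Today: which context gets cleared is implicit, thread-global state. */
    glClearBufferfv(GL_COLOR, 0, black);

    /* A stateless API would take the context as an explicit argument. */
    glClearBufferCtx(ctx, GL_COLOR, 0, black);
}

/* Stub so the sketch links; a real implementation would live in the driver. */
void glClearBufferCtx(GLcontext *ctx, GLenum buffer, GLint drawbuffer,
                      const GLfloat *value)
{
    (void)ctx; (void)buffer; (void)drawbuffer; (void)value;
}
```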

  • by mojo-raisin ( 223411 ) on Thursday March 11, 2010 @06:27PM (#31445554)

    Incorrect. Proprietary drivers are no longer required for Real Work.

    I use the open-source Radeon drivers with Mesa 7.7 to do lots of work with PyMOL. It has all the performance I need and is stable and glitch-free.

    From what I've read, Mesa 7.8 is just around the corner, and will be even better.

    Open Source Accelerated 3D has arrived.

  • by Anonymous Coward on Thursday March 11, 2010 @07:20PM (#31446324)

    Most of the data in the GPU memory is textures, vertex buffers, and shader fragments. None of these have anything to do with state.

    The stateful parts of the OpenGL API are things like the current value set in glColor, the contents of the matrix stack, and lighting parameters. A lot of this stuff never even crosses into GPU memory in the first place, it is handled on the CPU by the OpenGL libraries. And a lot of it has been removed in newer versions of the API.
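
    For anyone who hasn't used the older API, this is the kind of client-side state being described - a sketch using the fixed-function calls (deprecated in 3.1+ core profiles).

```c
/* Sketch of classic stateful OpenGL; assumes a compatibility context. */
#include <GL/gl.h>

void draw_red_triangle(void)
{
    glColor3f(1.0f, 0.0f, 0.0f);   /* "current color" state: sticks until changed */
    glPushMatrix();                /* the matrix stack is tracked by the GL client */
    glTranslatef(0.0f, 0.5f, 0.0f);
    glBegin(GL_TRIANGLES);
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.0f,  0.5f);
    glEnd();
    glPopMatrix();
}
```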

  • Mod up! (Score:3, Informative)

    by n dot l ( 1099033 ) on Thursday March 11, 2010 @08:10PM (#31446880)

    I expect that the idea is that instead of calls like glClearBuffer(...) which take their context from the program's global environment, you'd have calls like glClearBuffer(context, ...). The point of this would be to make it easier for a given program to work with multiple contexts at once, e.g. for mixing render-to-texture with normal rendering. (Note: I am not an OpenGL expert, by any means.)

    I'm a game developer. I work with OpenGL. This is exactly what's meant when people bitch about OpenGL being stateful. That and the selector states, which make it annoying to write libraries that target OpenGL and work together nicely.
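
    A minimal sketch of the selector-state problem: glTexParameteri edits whatever texture happens to be bound, so a helper that binds its own texture silently changes what the caller's later calls affect. The function and variable names here are hypothetical.

```c
#include <GL/glew.h>

/* Hypothetical helper, e.g. from a third-party library. */
void library_tweak(GLuint scratch_tex)
{
    glBindTexture(GL_TEXTURE_2D, scratch_tex);  /* clobbers the selector */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    /* A well-behaved library must save and restore the previous binding. */
}

void app_code(GLuint my_tex, GLuint other_tex)
{
    glBindTexture(GL_TEXTURE_2D, my_tex);
    library_tweak(other_tex);
    /* Bug: my_tex is no longer bound; this edits other_tex instead. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
```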
