Graphics Technology

OpenGL 4.0 Spec Released 166

tbcpp writes "The Khronos Group has announced the release of the OpenGL 4.0 specification. Among the new features: two new shader stages that enable the GPU to offload geometry tessellation from the CPU; per-sample fragment shaders and programmable fragment shader input positions; drawing of data generated by OpenGL, or external APIs such as OpenCL, without CPU intervention; shader subroutines for significantly increased programming flexibility; 64-bit, double-precision, floating-point shader operations and inputs/outputs for increased rendering accuracy and quality. Khronos has also released an OpenGL 3.3 specification, together with a set of ARB extensions, to enable as much OpenGL 4.0 functionality as possible on previous-generation GPU hardware."
This discussion has been archived. No new comments can be posted.
  • by H4x0r Jim Duggan ( 757476 ) on Thursday March 11, 2010 @11:25AM (#31439238) Homepage Journal

    Any chance the patent problems of OpenGL 3 [swpat.org] have been fixed?

    • by binarylarry ( 1338699 ) on Thursday March 11, 2010 @11:28AM (#31439290)

      It's not really a huge problem in practice.

      All the major graphics IHVs provide that extension anyway. It would be nice if it were in GL's core spec, but since it's included on any device that matters, it's not a practical concern.

      • by H4x0r Jim Duggan ( 757476 ) on Thursday March 11, 2010 @12:00PM (#31439878) Homepage Journal

        > it's not a practical concern.

        According to the references linked from that en.swpat.org page, it seems the developers of the free software Mesa project think it's indeed a practical concern.

        • Re: (Score:3, Informative)

          How many products are shipped with Mesa as an important, primary component?

          If you're using OpenGL, 99% of the products are going to want real hardware acceleration, not Mesa.

          Mesa is a great project though, don't get me wrong.

          • by TheRaven64 ( 641858 ) on Thursday March 11, 2010 @12:25PM (#31440262) Journal
            Mesa is used in a lot of X.org drivers. It provides the OpenGL state tracker for Gallium3D, so it will be used a lot more in future.
            • Re: (Score:3, Insightful)

              Yeah, but anyone using OpenGL with X is going to be using either the Nvidia proprietary drivers or ATI proprietary drivers.

              The OSS offerings do not provide nearly the same level of performance, unfortunately.

              So again, from a real world practical standpoint, Mesa isn't in use anyway.

              • by malloc ( 30902 ) on Thursday March 11, 2010 @01:07PM (#31440918)

                Yeah, but anyone using OpenGL with X is going to be using either the Nvidia proprietary drivers or ATI proprietary drivers.

                The OSS offerings do not provide nearly the same level of performance, unfortunately.

                So again, from a real world practical standpoint, Mesa isn't in use anyway.

                Unless you meant to say "OpenGL 3.0", this is absolutely not accurate. Has your "real world" been isolated to workstation CAD and/or heavy gaming users? Those are the only groups where binary non-Mesa drivers are used almost universally, and they are a minority. Intel, which has over half the graphics market [ngohq.com], uses only Mesa. Your default Fedora and upcoming Ubuntu 10.04 installs use Mesa for both AMD and Nvidia chips. AMD actively supports the open driver and is working to make that the main driver.

                The continued development on gallium points to mesa gaining more traction. I think the trend is for binary drivers to become less and less common in the future.

                -Malloc

                • I hope you're right, I'm not for the proprietary drivers at all.

                  But gallium and the open source drivers aren't really ready for prime time, they're theoretical. I'm talking about practicalities. Right now, the open source drivers only exist to keep X running long enough to get the proprietary drivers installed.

                  No one is going to need S3TC compressed texture support for things like compiz anyway.

                  • Re: (Score:3, Interesting)

                    by malloc ( 30902 )

                    But gallium and the open source drivers aren't really ready for prime time, they're theoretical. I'm talking about practicalities. Right now, the open source drivers only exist to keep X running long enough to get the proprietary drivers installed.

                    [emphasis mine]

                    Again, except for the majority of users! The default mesa drivers let you run Quake 3, composite your desktop, and do whatever 90% of desktop users want. It's only the "I want to play the latest game with max fps" and the "I'm rendering 100 million vertexes / frame in CAD" people that need to change to a binary driver.

                    No one is going to need S3TC compressed texture support for things like compiz anyway.

                    (FYI, the latest Mesa actually supports S3TC, handled the same way MP3 is: through an external library, because of the patent. Too bad we have to wait till 2017 for patent expiry.)

                  • Gallium isn't ready yet, but for the classic stack it depends on which card you are talking about. Intel is OSS only, and works fine (discounting GMA500, which isn't really theirs), and a great many ATI cards (r100-r500) work just fine with full 3D on the OSS stack. Even r600/r700 is pretty good these days (for example, ioquake3 apparently works fine on r700 aka 4000 series).

                • by Kjella ( 173770 ) on Thursday March 11, 2010 @01:38PM (#31441392) Homepage

                  AMD actively supports the open driver and is working to make that the main driver.

                  They are working on an open driver, yes, but they are not looking to replace the proprietary driver in the foreseeable future. For a number of reasons - not least of which is the tons of optimization work going into the main driver - they expect the open source 3D performance to top out at 60-70% of Catalyst while keeping a simple structure. That is still much better than the difference between accelerated and unaccelerated, which can be <1% of the performance.

                  • by FedeTXF ( 456407 )

                    AMD's intentions when releasing the specs might not have been to replace the proprietary driver, but I don't see why a group of people from all over the world could not make the open driver as good as the already-not-so-good closed one. If the infrastructure is there, with the specs you can do the same kinds of tricks in both drivers.

              • by Ltap ( 1572175 )
                But it will be. Ultimately, once performance improves, the OSS drivers will supplant the proprietary ones. Then this will become a concern.
              • >Yeah, but anyone using OpenGL with X is going to be using either the Nvidia proprietary drivers or ATI proprietary drivers.

                I'm not. I use Intel drivers right now, and before that I was using ATI OSS drivers, and sometime this year (hopefully) I will be using ATI OSS drivers again (probably with a 5000 series). I know of plenty of other people that are using OSS 3D drivers.

                Also, Gallium3D should ultimately help performance a lot.

              • Re: (Score:3, Informative)

                by mojo-raisin ( 223411 )

                Incorrect. Proprietary drivers are no longer required for Real Work.

                I use the OpenSource Radeon Drivers with Mesa 7.7 to do lots of work with PyMol. It has all the performance I need, is stable and glitch free.

                From what I've read, Mesa 7.8 is just around the corner, and will be even better.

                Open Source Accelerated 3D has arrived.

          • How many products are shipped with Mesa as an important, primary component?

            If you're using OpenGL, 99% of the products are going to want real hardware acceleration, not Mesa.

            According to Linux From Scratch [linuxfromscratch.org], Mesa is used as the userspace component of OpenGL acceleration in X.org, at least with DRI drivers. In other words, if Mesa doesn't have it, FOSS drivers in Linux won't have it.

            Of course the real solution is to move the project over to software patent free part of the world, rather than meekly remove

          • Mesa is how you get hardware acceleration in Linux.

    • You mean: Microsoft’s problem of nobody taking them seriously, and everybody doing it anyway, without MS being able to do anything about what they “believe” they have? (Remember: the companies implementing and supporting OpenGL can simply shut off Windows from their cards, end cooperation, and kill MS in the blink of an eye.)

      Stop buying into every bit of shit a criminal makes up to gain power over you!

  • by Thunderbird2k ( 1753946 ) on Thursday March 11, 2010 @11:26AM (#31439244)
    To give an idea to non-OpenGL developers: OpenGL 4.0 closes the feature gap with Direct3D 11. If you want to use OpenGL 4.0 you will need to wait a couple of weeks for drivers to come out. In the case of Nvidia, the drivers will launch together with their new GTX4*0 GPUs, which are the first Nvidia GPUs with Direct3D11/OpenGL 4.0 support. AMD might release new drivers before Nvidia, since their hardware is already Direct3D11-capable.
    • Re: (Score:3, Interesting)

      by Hurricane78 ( 562437 )

      Only a loser would make it his goal to get on par with the competition, because by the time he reached that goal, the competition would already have moved on.

      I want them to put DirectX to shame! New! Revolutionary! Impressive! Putting MS in the position to catch up!
      Because when MS is in that position, they are known to fuck up. (They make the same error of not trying to surpass the competition.) ^^

      Design a spec, that is every graphics card designer’s, every game developer’s and every playe

  • by Eric Smith ( 4379 ) on Thursday March 11, 2010 @02:01PM (#31441702) Homepage Journal
    3.0 was supposed to introduce a stateless API, but didn't. Now 4.0 apparently hasn't either. Have they decided that it's a bad idea, or that it's too difficult, or what?

    Having the API retain state is a fundamentally bad idea. As one overview [wpi.edu] points out, "Nearly all of OpenGL state may be queried". (emphasis added)

    It would be much better if there were OpenGL context objects that encapsulated the state and were explicitly passed into API calls. I was completely dumbfounded when I first looked at the API and saw that it didn't work that way.

    • Re: (Score:3, Insightful)

      Have they decided that it's a bad idea, or that it's too difficult, or what?

      I am a bit fuzzy here on the idea of "stateless" API that deals with inherently state-oriented hardware such as GPUs with their frame buffers, pixel processors, massive texture memories and what not...

      It would be much better if there were OpenGL context objects that encapsulated the state, and were explicitly passed into API calls.

      So do you expect the entire (multi-hundred-megabyte) state of the GPU and its memory to be

      • Re: (Score:3, Informative)

        I expect that the idea is that instead of calls like glClearBuffer(...) which take their context from the program's global environment, you'd have calls like glClearBuffer(context, ...). The point of this would be to make it easier for a given program to work with multiple contexts at once, e.g. for mixing render-to-texture with normal rendering. (Note: I am not an OpenGL expert, by any means.)

        I'm certain no one is suggesting that the GPU become stateless, just the API.

        • Mod up! (Score:3, Informative)

          by n dot l ( 1099033 )

          I expect that the idea is that instead of calls like glClearBuffer(...) which take their context from the program's global environment, you'd have calls like glClearBuffer(context, ...). The point of this would be to make it easier for a given program to work with multiple contexts at once, e.g. for mixing render-to-texture with normal rendering. (Note: I am not an OpenGL expert, by any means.)

          I'm a game developer. I work with OpenGL. This is exactly what's meant when people bitch about OpenGL being stateful. That and the selector states, which make it annoying to write libraries that target OpenGL and work together nicely.
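          The pattern being described can be sketched as a toy model in plain C (none of these ctx* functions are real OpenGL; they are made-up names that just model the two API styles):

          ```c
          #include <assert.h>
          #include <stdio.h>

          typedef struct { float clear_r, clear_g, clear_b; } Context;

          /* Stateful style, as OpenGL works today: calls operate on a hidden
           * "current" context selected earlier. */
          static Context *g_current;

          void ctxMakeCurrent(Context *c) { g_current = c; }

          void ctxSetClearColor(float r, float g, float b) {
              g_current->clear_r = r;
              g_current->clear_g = g;
              g_current->clear_b = b;
          }

          /* Stateless style, as proposed above: the context is an explicit
           * parameter, so two libraries can each hold their own context
           * without trampling the other's "current" binding. */
          void ctxSetClearColorEx(Context *c, float r, float g, float b) {
              c->clear_r = r;
              c->clear_g = g;
              c->clear_b = b;
          }

          int main(void) {
              Context window = {0}, offscreen = {0};

              /* Stateful: if a library call between these two lines rebound
               * the current context, the second call would silently hit the
               * wrong target (the selector-state hazard mentioned above). */
              ctxMakeCurrent(&window);
              ctxSetClearColor(1.0f, 0.0f, 0.0f);

              /* Explicit: no hidden coupling between calls. */
              ctxSetClearColorEx(&offscreen, 0.0f, 1.0f, 0.0f);

              assert(window.clear_r == 1.0f);
              assert(offscreen.clear_g == 1.0f);
              printf("ok\n");
              return 0;
          }
          ```

          Direct3D takes roughly this second shape, with device/context interfaces held or passed explicitly rather than a process-global current context.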

  • Only 4.0! (Score:4, Funny)

    by johno.ie ( 102073 ) on Thursday March 11, 2010 @03:27PM (#31443496)

    DirectX goes all the way to 11.
