Graphics Software

OpenGL 2.0 White Papers 129

Timothy J. Wood writes "3DLabs has posted a series of white papers on OpenGL 2.0 covering topics such as improving parallelism, timing control, minimizing data movement, programmable pixel pack and unpack, and (most notably) a proposal for a hardware-independent shading language."
  • At least this way my friends can stop bugging me about how much better DirectX 8.0 is compared to OpenGL.
    I think it will be a fun upcoming year for graphics programming B-)
  • Meeting minutes (Score:3, Informative)

    by Mr Thinly Sliced ( 73041 ) on Wednesday November 21, 2001 @08:41AM (#2595407) Journal
    Also of interest are the meeting minutes [opengl.org], where the OpenGL panel discusses the implications of this leap and raises some interesting questions:

    > Bimal: Devil's Advocacy question: why do we want OpenGL to survive? If IHVs can't articulate this and drive progress, it won't survive.

    I'd be really sad to see OpenGL go. It's the only way I've been able to fart around with all that graphic loveliness since university, doing my bit with deformable objects.

    I hope they get their finger out and pull it off. Apple should be helping to sponsor this sort of thing really IMHO...
    • Re:Meeting minutes (Score:3, Insightful)

      by DGolden ( 17848 )
      OpenGL is unlikely to go - after all, DirectX only exists on Wintel/XBox. Scientific and industrial visualisation applications are pretty much all OpenGL, and are pretty much all designed for Unix and Linux (there are rickety ports to WNT from Unix); they depend on OpenGL's design for high poly counts and its easy C and FORTRAN compatibility. The fancy-texturing facilities of DirectX are largely irrelevant, and the requirement to use C++ or (bleurgh!) a COM interface in other languages makes it difficult to use DirectX for anything "serious".

      The margins for software and hardware vendors in this market are much larger, and more secure, than in the games software market (where most products barely break even) - you can get away with charging £20000 a year for a license to many specialised programs.
      • OpenGL is also built into OS X, I believe.
  • What about.. (Score:3, Insightful)

    by nervlord1 ( 529523 ) on Wednesday November 21, 2001 @08:42AM (#2595413) Homepage
    I really wish an open source group would release some sort of sound/graphics/animation library for Windows that directly competes with DirectX. Why would it rock?
    - Simple porting of Windows games to Linux
    - One less area Microsoft has a stranglehold on
    - Easier games development for open source programmers
    - A good games library for Linux
    We'd already have 3D and Direct3D covered with OpenGL; we just gotta get the rest of the stuff covered. But IMO this might happen sooner than you think: it is my firm belief that once the Xbox gains momentum, MS will drop support for DirectX on the PC in order to convince PC games developers to come develop for its Xbox. It would make a lot of business sense to do that. YMMV.
    • Something like SDL [libsdl.org] you mean?
        • Is it Windows compatible? I heard about this a long time ago; it doesn't look like it's Windows compatible. Isn't this what Loki use to port their games?
        • Re:What about.. (Score:2, Informative)

          by Anonymous Coward
          If you'd followed the link to libsdl.org you'd see it IS Win32 compatible, as well as a number of other platforms.
          • I had a look at the link, yes, but all of the files in the download section were .tar.gz, so I assumed it was Linux-only. My bad.
            • Quoting part of:
              http://www.libsdl.org/download-1.2.html
              """
              Runtime libraries:

              Linux:
              SDL-1.2.3-1.i686.rpm (Mandrake 8.1)
              SDL-1.2.3-1.ppc.rpm

              Win32:
              SDL-1.2.3-win32.zip

              BeOS:
              SDL-1.2.3-x86-beos.zip (BeOS 5.0)

              MacOS:
              SDL-1.2.3-PPC.sea.bin

              MacOS X:
              SDL-1.2.3.pkg.tar.gz
              """

              Notice the .zip extension.

              Check out the games section -- a number of them have precompiled Windows binaries. PyGame (a python interface to SDL) is also well worth checking out.
      • Re:What about.. (Score:3, Interesting)

        by Junks Jerzey ( 54586 )
        Something like SDL you mean?

        SDL is a 2D frame buffer and blitting library for the most part (to be pedantic, it also includes input, CD, and minimal sound libraries). The 3D side of SDL is just OpenGL. So if OpenGL goes away, so do 3D graphics in SDL.
        • And if C goes away, then SDL is completely useless!
        • SDL is a 2D frame buffer and blitting library for the most part

          I disagree - what about keyboard, joystick, sound stuff? OpenGL doesn't support that, and it's needed to write a game.
          I am porting the Atari800 emulator to SDL now, so I can compare it to other ports, like svgalib or X11. You need a lot of code to write something with pure X, and svgalib is messy too, because you have to write your own keyboard, sound, and joystick handling. SDL is a great library, no matter whether you do a 2D or 3D game. I think using SDL is the best way to write any Linux game. And you get Win/BeOS/Unices portability for free.
    • Well, yes and no. MS won't drop support for it. The marketplace isn't one unified group where you can easily get people to trade their PCs up for a console.

      They will appeal to both markets if they use both the console and the PC. Why? Because both makes money and dropping one would cost them too much in profit. Well this is the way the market is now...and has been for a while.

      Making games for consoles is hard; you can't be as free as you would like to be with all your resources. Yes, I know this is slowly changing, especially with the PS2 and the X-box. I may not like the X-box personally, but I recognize that it has potential.
    • I've heard of SDL (not even sure what the acronym stands for) being used for cross-platform development between Win32 and Linux-- what kind of functionality does this cover?
      • Keyboard, mouse, peripherals, and most importantly connecting opengl to the screen!

        Something that might seem a little odd about OpenGL is that it can't open a window, so you always need some platform-specific code (normally GLUT, which is already cross-platform) to do it. SDL is a nicer alternative.

        0.02
      • Re:What about.. (Score:3, Informative)

        *S*imple *D*irectmedia *L*ayer

        It's a very low level media abstraction library, together with a lot of small extensions which add support for image processing, truetype fonts, sound mixing, etc.

        http://www.libsdl.org/libraries.html
        shows what extension libraries to SDL exist.
      • Re:What about.. (Score:3, Informative)

        by DGolden ( 17848 )
        Simple Directmedia Layer. It, and associated add-on SDL_* libraries, provide simple, easy to understand and use, pure C APIs to the basics a game developer would want. It handles input/output, and acts as a gate to OpenGL for 3D.

        (from SDLsite:)

        Simple DirectMedia Layer is a cross-platform multimedia library designed to provide fast access to the graphics framebuffer and audio device. It is used by MPEG playback software, emulators, and many popular games, including the award winning Linux port of "Civilization: Call To Power." Simple DirectMedia Layer supports Linux, Win32, BeOS, MacOS, Solaris, IRIX, and FreeBSD.

        SDL is written in C, but works with C++ natively, and has bindings to several other languages, including Ada, Eiffel, ML, Perl, PHP, Python, and Ruby.


        Pretty much all commercial games on Linux use SDL, as do most new little games on Linux, and a fair proportion of them on Windows and MacOS.

        See here [libsdl.org] for more details.
        • Simple DirectMedia Layer supports Linux, Win32, BeOS, MacOS, Solaris, IRIX, and FreeBSD

          Does somebody know a subset of SDL running on PalmOS :-)
    • As far as DirectX video acceleration goes, using the Linux kernel framebuffer and XFree86 4.1 can get you very close. Direct support exists for Matrox, ATI, etc. cards, and unsupported cards can still use the standard VGA framebuffer. Using the included DRI support with X and the kernel framebuffer allows the X server to directly access the framebuffer, which accesses the hardware, bypassing some of the slower, intermediate X calls. It's not perfect, but it's a heck of an improvement in performance, and it still allows the kernel to control video access, rather than letting your programs stomp all over the OS/kernel/video hardware.
    • Check out GLUT [opengl.org]; it does a pretty decent job of covering the cross-platform graphics and input handling that DirectX does.
      • GLUT is very ugly to work with. It is not object-oriented. I use it somewhat like I use MFC: great to get things up and running quickly, but I wouldn't use it for production-level code.

        The input handling is annoying at best. It is similar to handling WM_KEYDOWN and associated messages. Who wants to do that? That is an inflexible solution, and it is why DirectInput was invented: to give an OO, abstracted view of the input devices connected to your machine.

        Just to add to that: I really hope SDL [libsdl.org] takes off. Without trying to be redundant, developers really need a cross-platform library that matches DirectX. I can't wait until the first major game is released using SDL or something similar. That's the better solution for getting games on Linux, rather than WineX or whatever other emulator you want. And contrary to what they claim, Wine is an emulator.

        You have to admit that Windoze is king in the PC gaming market. So let's develop a library that allows developers to make games on Windoze...and then get Linux, Mac, etc ports for free.

        • That is why DirectInput was invented. To give a OO, abstracted view of the input devices connected to your machine.

          Abstracted? OO? So, in fact, it's the exact opposite of "direct". :-) Sorry, just an obligatory dig at Microsoft's misleading naming schemes...
    • by Tord ( 5801 )
      I know others have already said this, but as a former game developer with more than 6 years of professional experience, I just want to add some weight to the argument that SDL is indeed what you are asking for.

      I've never used SDL professionally (I've used DirectX, Glide, PlayStation-specific APIs and some old in-house stuff for DOS), but I've toyed around with it in my spare time and I would have no trouble trusting it as the foundation for a high-quality cross-platform game (both 2D and 3D). In fact, I would rather use it than DirectX, since I find the API simpler and more straightforward, as long as I don't need some obscure DirectX feature for performance reasons (most games don't).

      The URL is www.libsdl.org if you want to check it out.
  • I'll get 2x the frame rate?

    :)
  • Shading Language (Score:5, Interesting)

    by bribecka ( 176328 ) on Wednesday November 21, 2001 @08:53AM (#2595452) Homepage
    It's about time they get a programmable shading language in OpenGL--that is the most lacking feature in my opinion. Probably 90% of the textures used in things like games could be eliminated and replaced with much higher quality shaders that not only get rid of the repeatability of textures, but also *gain* detail as the distance decreases.

    Can't wait! Hopefully they'll base it on something already well established, e.g. the RenderMan SL.
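To make the "gain detail as the distance decreases" point concrete, here is a rough Python sketch (the function names and patterns are mine, purely illustrative): a procedural shader is a function evaluated analytically at any (u, v), so unlike a bitmap texture it never runs out of resolution and never visibly tiles.

```python
import math

def checker(u, v, scale=8.0):
    """Procedural checkerboard: evaluated analytically at any (u, v),
    so it never pixelates as the camera moves closer."""
    return (int(math.floor(u * scale)) + int(math.floor(v * scale))) % 2

def smooth_stripes(u, frequency=10.0):
    """A RenderMan-SL-style stripe pattern: a smooth sine wave used to
    mix between a dark and a light 'surface color'."""
    t = 0.5 + 0.5 * math.sin(2.0 * math.pi * frequency * u)
    dark, light = 0.2, 0.9
    return dark + (light - dark) * t
```

Zooming in just means sampling the same functions at finer (u, v) spacing; there is no fixed grid of texels to exhaust.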
    • There's always SGI's OpenGL Shader, an evaluation of which is available for free for Linux 2.2, and IRIX 6.5 (currently at IRIX 6.5.14). Get it here:
      http://www.sgi.com/software/shader/ [sgi.com]
    • I hope not RenderMan. I'm finding that RenderMan shaders give too much freedom to the shader writer, which makes it hard to create a unified environment for realistic rendering effects such as inter-diffuse reflections and caustics. Special shaders have to be created in order to do this, but it's hard to unify them with your settings for, say, distributed ray tracing, bi-directional path tracing, and photon mapping. A shader in RenderMan simply returns color and transparency; with distributed ray tracing, path tracing, and photon mapping (advanced rendering techniques) you have to know the probability of a photon or ray being absorbed (transmitted) or reflected (specularly or diffusely), or the shader would at least need a second function which bounces photons or rays off it, in order to get high-quality graphics with the least amount of calculation.

      This also has its issues, because some shaders could never work in a realistic way, like cel shaders for non-photorealistic rendering. These can be implemented as secondary shaders that sit on top of realistic shaders: you do your calculations in a realistic scene, then convert that data to look non-photorealistic in a secondary shader/filter. But that again only works with a few shaders; a number of shaders are just not realistic at all. Phong highlights (don't confuse them with Phong shading) are another example of a shader not being real: for realism you would need a slightly rough surface, which makes the reflection of a light source appear soft and blurred. Phong highlights are a fake approximation of this effect, which will some day be replaced by real reflections off rough surfaces.

      Obviously I am looking far into the future when real time graphics are going to start looking at real time photo-realistic lighting. Hopefully some day these kinds of things...
      http://www.3dluvr.com/marcosss/
      http://jackie.sf.net/
      ...will be done in real time, but at those times there will be problems as the ones I describe above with shaders.
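The gap described above (a RenderMan-style surface shader returns color and opacity, while a path tracer or photon mapper needs to know how a ray scatters and with what probability) can be sketched in a few lines of Python. All names here are made up for illustration; a mirror bounce stands in for a full BRDF:

```python
def shade_renderman_style(base_color, light_intensity):
    """A RenderMan-style surface shader just returns color (Ci) and
    opacity (Oi); it says nothing about where a photon goes next."""
    r, g, b = base_color
    ci = (r * light_intensity, g * light_intensity, b * light_intensity)
    oi = 1.0
    return ci, oi

def sample_brdf(incoming, normal, reflectance, rng):
    """What a path tracer / photon mapper needs instead: decide
    (probabilistically) whether the photon is absorbed, and if it is
    reflected, return the outgoing direction."""
    if rng() >= reflectance:
        return None  # absorbed
    ix, iy, iz = incoming
    nx, ny, nz = normal
    d = ix * nx + iy * ny + iz * nz
    # Perfect mirror bounce: r = d - 2(d.n)n; a rough surface
    # would perturb this direction.
    return (ix - 2.0 * d * nx, iy - 2.0 * d * ny, iz - 2.0 * d * nz)
```

The second function is exactly the "second function which bounces photons or rays off it" that the color-and-opacity model lacks.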
  • Way too late. (Score:4, Interesting)

    by Otis_INF ( 130595 ) on Wednesday November 21, 2001 @08:59AM (#2595472) Homepage
    Microsoft, although a member of the ARB, has > 90% of the desktop market and is moving rapidly into the heavy workstation market. With this situation comes the fact that DirectX is THE platform to target when it comes to 3D-accelerated code.

    Another issue is that Microsoft has, up till now, refused to distribute an updated opengl32.dll with its operating systems. The current version is the old OpenGL v1.1-compatible version. SGI has said it delivered a v1.2 version to Microsoft, but for whatever reason it's not being distributed further to clients. This widens the gap between a non-uniform OpenGL platform still on v1.1 (forcing you to use non-standard stuff like vendor-specific extensions and vendor-specific OpenGL loading) on one side and the DirectX API on the other. Without Microsoft's help, OpenGL will never be in the front seat again on Windows systems, and because Microsoft is gaining a lot of market share in the workstation market, not in that typical OpenGL area either.
    • Re:Way too late. (Score:2, Insightful)

      by Anonymous Coward
      Speaking as someone in the "heavy workstation" area - MS is being squeezed OUT of that environment, not growing into it. Linux is popping up left, right and centre. Since the applications people run on "heavy workstations" tend to be OpenGL-on-IRIX applications, it's a no-brainer to use Mesa-on-Linux instead. Maya (CGI), Fluent (CFD), etc., are all shipping Linux versions, and are seeing a drop in takeup of WNT stuff.
    • " Microsoft, allthough they're a member of the ARB, has > 90% of the desktop market, and is moving forward with a rapid speed towards the heavy workstation market. With this situation comes the fact that DirectX is THE platform to target when it comes to 3D accelerated code. "

      Just because MS has this market share doesn't mean that it is *the* 3d API to use. Sony probably sells the most component receivers out there but that certainly doesn't mean that it is the best choice.

      There is no reason why OpenGL couldn't be used on Windows. You might just have to provide a recent OpenGL DLL. Same goes for Java.

      I will say that it might be easier to use the solutions that MS pushes (shoves) - but that doesn't make it better.
    • Re:Way too late. (Score:4, Informative)

      by Mike Connell ( 81274 ) on Wednesday November 21, 2001 @09:20AM (#2595554) Homepage
      Not really. Most people using windows for 3d graphics in the workstation area are using a high end graphics card. By that I mean GeForce3 or faster.

      They all come with OpenGL drivers. You don't even notice that MS doesn't ship them: install the video card's drivers, get OpenGL.

      MS is really in a position to lose market here to Linux because of this: Linux on a PC with fast 3d (via nvidia for example) is infinitely more like the workstation being replaced than NT on a PC is.

      At the higher workstation end (higher than GeForce3), people aren't yet looking at windows because the hardware isn't there anyway.

      I think it'll be a while before OpenGL dies, especially as in all markets people are finally moving up the ladder - to scenegraph API's like this one [openscenegraph.org].

      If the SG supports both DX and OGL backends then you don't even have to think about it.

      my random 0.02,
      Mike
      • Actually, a lot of people use real 3D workstation cards, like the Oxygen cards from www.3dlabs.com. Most of those cards (read: not all) do not really have much DirectX support at all. From what I can see, DirectX is seen as a set of technologies used for gaming; I don't think it's quite as precise when it comes to workstation/CAD/3D rendering as OpenGL is, but that's just my opinion...

    • The only people using that OpenGL DLL are the people using software rendering. How many people do you know who use OpenGL in software-rendering mode?
    • Re:Way too late. (Score:3, Interesting)

      by praedor ( 218403 )

      Hrmmm. Apple is up and coming (again) with its nice PPCs and MacOS X (UNIX!). This, and the fact that even MacOS 9 doesn't use DirectX, means that software that goes out for Macs and PCs, and seeks to stick with the growing (again) Apple market, will have to stick with OpenGL. The Mac doesn't do DirectX (thank GAWD!).


      Many people still do lots of graphics stuff on Macs, and this means OpenGL. Games on Macs will have to be OpenGL.


      Fortunately, since there is but a relatively minor difference between MacOS X and Linux/*BSD, support for Macs means easier time getting support for Linux/*BSD in this area. DirectX is not and has not killed OpenGL. It cannot.

  • I'll just rename all my opengl.dll to opengl2.quake and everything will seem to run faster.
  • by arQon ( 447508 ) on Wednesday November 21, 2001 @09:04AM (#2595498)
    Honestly, the only really annoying thing about working with OpenGL lately is the headaches that come from pixel/vertex shaders. We certainly need a vendor-independent way to support those, because damned if I'm going to rewrite mine for ATI cards - they'll simply be treated as "not supporting vertex programs".
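For what it's worth, the vendor-specific dance described above usually starts with parsing the GL_EXTENSIONS string at startup and picking a code path. Here's a minimal Python sketch of that logic (function names are mine; a real renderer would do this in C against glGetString):

```python
def has_extension(extension_string, name):
    """GL_EXTENSIONS is one space-separated string. Naive substring
    matching is a classic bug ('GL_EXT_texture' would also match
    'GL_EXT_texture3D'), so split into tokens and compare whole names."""
    return name in extension_string.split()

def pick_vertex_program_path(extension_string):
    """Choose a renderer code path: use the NVIDIA-specific vertex
    program path if present, otherwise fall back, which is how the
    post above says ATI cards end up being treated."""
    if has_extension(extension_string, "GL_NV_vertex_program"):
        return "nv_vertex_program"
    return "no_vertex_programs"
```

A vendor-independent shading language would collapse this per-vendor branching into one path, which is exactly the appeal of the 2.0 proposal.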

    The synchronisation stuff is pretty handy: certainly, NV_fence has been very useful over the past year or so, and again: vendor-specific paths BAD. :)

    Some of the changes seem to be as much about persuading somewhat-ignorant developers to use OpenGL over D3D - the "black box" aspects of OpenGL are one of the more DESIRABLE things about it. Changing those because some D3D guy is saying "I do xyz in D3D and I want the exact same concept to work well in GL because I'm too thick to actually use the right approach for that renderer" seems simply wrong to me.

    Uh-oh: UPS just kicked in. Yay mountain storms...

    "Pure" OpenGL2 is a terrible mistake. Give vendors the option NOT to support something, and they won't. Then all your old apps+games are up shit creek.

    Will finish later when I have stable power again...
    • Yes, well, the way OpenGL development works is that features first show up as a "vendor extension"; then, if they're any good, the Architecture Review Board sits down and folds them into the main specification. You're just seeing the middle of this process. But hey, the process works, and it's picking up speed again after a whole load of (largely MS-related) shenanigans involving the ARB and SGI (MS basically did the same to SGI as they did to IBM with OS/2 and WNT, with a project called "Fahrenheit", which wasted lots of SGI's time and money; SGI should have known better than to deal with MS).

      Since Apple and BSD, commercial unix and Linux are now all firmly on the OpenGL path, it's unlikely OpenGL will go away...
    • >"Pure" OpenGL2 is a terrible mistake. Give vendors the option NOT to support something, and they won't.

      I believe that the 'optional' part will be from the program's point of view - that means someone can't produce a pure OpenGL 2.0-compliant library that doesn't provide backwards compatibility (without failing the GL compliance tests, that is).

      If you want to use pure OpenGL2.0, you can, but your legacy apps will still run just fine.
    • Actually, I kind of like the idea of having a simpler, SMALLER subset of OpenGL.

      1. Implementors would be cutting their own throats by dropping legacy support. Not to mention they would fail conformance.

      2. Implementors can now point to a small subset of the API and say "this must be done the best/fastest" because future applications will be migrating toward it. Currently, implementors have to look and see what parts of OpenGL various applications are using, and pick and choose which subsets of the API to optimize.

      3. Legacy OpenGL has a lot of CRAP in it. Crap that is useful from an academic perspective, but not heavily used in many real apps. (Selection and TexGen come to mind).

      4. Immediate-mode/display lists/vertex arrays. Do you need three different ways to specify your scene? Chop out everything but vertex arrays and call it OpenGL Pure. Fewer code paths mean there are fewer ways implementors can screw up their drivers.
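To illustrate point 4, here's a small Python sketch (using the stdlib struct module; the vertex layout is a made-up example, not anything from the white papers) of the single submission style being argued for: one interleaved vertex buffer plus an index list, instead of one immediate-mode call per vertex:

```python
import struct

# Interleaved vertex format: position (3 floats) + texcoord (2 floats),
# the kind of single, well-defined layout a "pure" API could standardize on.
VERTEX_FORMAT = "3f2f"
VERTEX_STRIDE = struct.calcsize(VERTEX_FORMAT)  # bytes per vertex

def pack_vertices(vertices):
    """Pack (x, y, z, u, v) tuples into one contiguous buffer, the way a
    vertex-array submission works, instead of a call per attribute."""
    buf = bytearray()
    for v in vertices:
        buf += struct.pack(VERTEX_FORMAT, *v)
    return bytes(buf)

# Two triangles sharing an edge: 4 vertices + 6 indices, instead of the
# 6 vertices immediate mode would have to re-send.
quad = [(0, 0, 0, 0, 0), (1, 0, 0, 1, 0), (1, 1, 0, 1, 1), (0, 1, 0, 0, 1)]
indices = [0, 1, 2, 0, 2, 3]
```

One buffer, one stride, one index list: with a single submission path like this, there is only one code path for the driver to get right (and to optimize).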
    • Ah, the joys of electricity... :)

      So, yeah: the Stanford PS stuff would be a very nice fit, from the looks of things. Certainly, something along those lines is needed, and it might as well be something that's actually been thrashed around a bit.

      One thing that needs to be stressed wrt OpenML is that it's NOT necessarily what we want in OpenGL. While there's a clear overlap in some areas, the ARB needs to make sure that they don't end up kitchen-sinking it. The concern here is dependencies: in the same way that you try to keep classes as loosely coupled as possible, you also don't want your specs tied to each other so that changes in OpenML end up being mirrored slavishly in OpenGL irrespective of whether they actually add real value *from an OpenGL perspective* or not.

      I'm absolutely NOT going to buy into the "xyz existing feature should be cut out" approach. Are the non-indexed primitives fundamentally worthless for "real" work? Yeah, pretty much. But they're also an easy way for newbies to mess around with OpenGL and get instant results. Those of us who suffered through the POS that was D3D's execute buffers would prefer that people not need 80 lines of code just to draw a single triangle. Every language needs its "Hello World", and that's basically the function that immediate mode serves.
      Even display lists, which I haven't used in many years now, seem to have a place - I gather that on a fair number of non-"consumer class" systems they're still the way to go for a lot of things.

      The "focus on this stuff which must be done the best/fastest" argument is a load of bollocks. Developers and IHV's already know what the important fast path is, and have done for years: it's glDrawElements on fully-featured data in CVAs.

      The "fewer code paths mean there are fewer ways implementors can screw up their drivers" claim is fundamentally flawed, because we would certainly hope that any vendor who DOES drop "legacy" support goes out of business, so those "old" code paths will still be in every driver. Or at least, every driver for the cards *I'll* own...

      Basically, given a decent shading language, we're pretty much set in terms of features. That's BY FAR the most important part of this.

      GLsync (i.e. NV_fence extended and renamed) is a nice thing, although it's as much a matter of doing the Right Thing for the future as anything else, since we can EASILY saturate GeForce2-class cards ATM. Saving a few microseconds in setup/transform is all well and good, but since the true bottleneck for common apps (i.e. games) right now is memory bandwidth, the actual gains are basically non-existent. Still, there are other apps that DO benefit greatly from the improved parallelism, and like I say it's the Right Thing anyway.

      The pack/unpack features are nice for certain effects, but they're clearly an example of "add this predominantly useless feature just to shut D3D people up". While a pack processor MAY lend itself to something useful (though given what's available through the pixel/vertex shaders that's a bit iffy), the unpack aspects seem almost completely pointless to me. Basically, the amount of "useful" work you can do in software is close to nil, and ANY glReadPixels-style call is going to bog because you're stalling the pipeline. That the other costs of such an action can be minimised is all very lovely, but next to the stall they're absolutely trivial.

      The memory management is a *shrug* feature. Useful in some places, and I expect I'll whore certain aspects of it once it's available, but in reality it's nothing like as big a deal as some people seem to think - again, most of it is "D3D has this and we don't". Strangely enough though, this perceived lack doesn't stop my renderer or, say, Quake3's consistently outperforming any D3D-based one, does it? :)
    • We certainly need a vendor-independent way to support those, because damned if I'm going to rewrite mine for ATI cards - they'll simply be treated as "not supporting vertex programs".

      This illustrates another problem. ATI cards support 6 simultaneous textures. NVidia cards support 4. Kyro supports 8, but doesn't support pixel shaders. Code that will work with all of these has to use ordinary DX or OpenGL multitexture commands with only 4 textures - the lowest common denominator, which is the worst of all worlds. To get the best performance and image quality on all these cards, you need to write 3 different versions of the same code. We need a universal shader language with a good fallback mechanism.
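One plausible fallback mechanism of the kind being asked for here is multipass rendering: if an effect needs more simultaneous textures than the card has units, split it into several passes. A toy Python sketch (names are mine, purely illustrative):

```python
def split_into_passes(textures_needed, hardware_units):
    """Fallback scheme: break a list of texture stages into rendering
    passes of at most `hardware_units` textures each, so the same
    effect runs on a 4-, 6-, or 8-unit card (just in more passes)."""
    if hardware_units < 1:
        raise ValueError("need at least one texture unit")
    return [textures_needed[i:i + hardware_units]
            for i in range(0, len(textures_needed), hardware_units)]
```

A 6-texture effect becomes one pass on ATI or Kyro and two passes on a 4-unit NVidia card; a universal shader language could do this kind of splitting for you behind the scenes.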

      the "black box" aspects of OpenGL are one the more DESIRABLE things about it.

      Yep. Basic OpenGL is easy to program for. Tell it to transform an object, apply textures, light it, and it does it. No need to tell it whether it's using hardware for the T&L. Perhaps it would be nice to have more direct control over buffers, since a lot of programmers think in terms of chunks of data rather than state, but the basic framework works very well and doesn't take long to learn.
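The "black box" here boils down to: per vertex, the driver multiplies by the current modelview matrix, wherever and however it likes (CPU or hardware T&L). A tiny Python sketch of what a glTranslatef-style fixed-function transform amounts to (illustrative only, not real driver code):

```python
def translate(tx, ty, tz):
    """A 4x4 translation matrix, row-major - roughly what a
    glTranslatef call contributes to the modelview matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def transform(m, v):
    """Apply a 4x4 matrix to a point (x, y, z, 1). The application only
    sets the matrix state; where this multiply actually runs is the
    black box's business."""
    x, y, z = v
    p = (x, y, z, 1.0)
    return tuple(sum(m[r][c] * p[c] for c in range(4)) for r in range(3))
```

The app's view is pure state: set a matrix, hand over vertices, and the implementation decides whether software or hardware does the arithmetic.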
  • by codexus ( 538087 ) on Wednesday November 21, 2001 @09:07AM (#2595512)
    Here's a very nice article [hardwareaccelerated.com] about the future of OpenGL. It might be easier to read than the full OpenGL 2.0 white papers.
  • by Anonymous Coward
    How Rasterman address 2.0 in evas.matt
  • Humm... (Score:3, Interesting)

    by GISboy ( 533907 ) on Wednesday November 21, 2001 @09:49AM (#2595666) Homepage
    Some of the propositions are good, but I fail to see how they will help.

    Let me explain, and ignore the hardware issues to some extent:
    Looking back at a summary of 3d api's:
    Glide: Wickedly fast, easy to write for, obsessively proprietary (IIRC) a la 3dfx.

    OpenGL: Fast (Glide speed on compliant/correct hardware), moderately hard to write for initially but easier as time goes by. Open/closed? I honestly forget. Was it documented APIs and closed source?

    DirectX: Humm. Used to be "dog-ass slow". Moderately fast now, maybe medium speed, very compatible; used to be (maybe still is) hard as hell to write for (this may have changed).
    Eventually, if memory serves, it absorbed the "better aspects" of other 3D APIs, but also added another degree of difficulty/confusion to implementation.

    Now, realize I base this off of my: gaming, memory (oops) and discussion of merits I've read/heard/been privy to.

    I'll focus on gaming because that is where the rubber meets the road (or where the 'trons hit the tube).

    OpenGL: Quake/GLquake (...or was it GLquack? heh)
    The "software" mode was impressive and eye-opening, but GLquake put the "am" in {higher octave in voice} 'Daaaaaaaammmmmnn'.
    Fast and pretty (for its day).

    Glide: Quake2, software vs. Glide mode. No debate, once you can pick your jaw up off the floor and keep your mouth closed for more than 5 seconds
    (and stop drooling on my keyboard, dammit).
    Glide was the... damn, what is the word?... pinnacle, saviour, "schwing" that low- and mid-end hardware needed to be 'high-endish'.
    (Example: a friend of mine was playing Tomb Raider on a G3 333... I showed him TR on a P200 w/V2... 'fuck you, man' was the response {seg})

    DirectX...Thief: Glide vs DX{murmph-snort-bwahahahaha}...Hummmm: 30 fps in glide, .3 in directX? on the same hardware.
    Nowadays, the software has improved incrementally, but the hardware by leaps and bounds...making the s/w look good. Humm.
    Sort of the reverse of Glide--software made the h/w shine--here it is the other way around.

    I think OpenGL is trying to bring back the "make the hardware shine" days back.

    Good luck, because if I understand the way DX is now... it has "absorbed" the APIs of GL and Glide and whatever company made the software in the first place (damned if I can find the link... British company, I believe).

    Well, I've blathered on long enough. Not bashing, just offering my opinion and what I saw and heard in a nutshell.

    The eyes don't lie...that is the lips job. :\

    Cheers.

    • API's can't be faster or slower. They're just interfaces. The "speed" of your graphics depends on the driver implementation and bus speed (for tris/sec) and the graphics hardware underneath (for texels/sec).

      As for how easy one is to program, well, at this point anyone who knows one API and has a good grasp on 3D fundamentals can probably pick up another API in under a week. Immediate-mode 3D APIs aren't really that different anymore.
      • > API's can't be faster or slower. They're just interfaces.

        True, but API's can make a large difference in how well they lend themselves towards hardware acceleration. API's that require lots of data copying will leave you with slow-running applications.

        One of the biggest problems with the current OpenGL API is that there are too many ways to try to accelerate geometry rendering, and they don't work equally well on different hardware.

        In a way, it's kind of "nice" that DirectX totally rewrites itself every couple of versions. Now they have a single way of specifying primitives, and it's relatively simple to understand, and it lends itself well towards hardware acceleration.
    • it has "absorbed" the api's of GL and Glide and whatever company made the software in the first place (damned if I can find the link...british company I believe)

      Survey of Microsoft DirectX [depaul.edu]

      I quote:
      "What is now Direct3D was purchased by Microsoft in early 1995 from RenderMorphics, a British company founded in 1993 by Servan Keondjian"

    • (pushes glasses up nose)

      As Stiletto already noted, Direct3D and OpenGL don't really differ in performance anymore (if they ever did in the first place).

      Also, Quake 2 wasn't really Glide *per se*. It was OpenGL interfacing with a custom 3DFX "MiniGL" driver. Nevertheless, it was coded to use the OpenGL API. (You can still run Quake 2 on an Nvidia card in hardware mode, which obviously wouldn't be possible if the game were truly Glide-based.)
      • (correcting the corrections to my corrections... /me checks cigarette pack... nope, no crack.
        Too much caffeine, perhaps.)

        Sorry about that, I said "API" when "drivers" is what I meant. Thanks, Stiletto.

        What made Glide, and by extension 3dfx, "kick ass" vs. DirectX at the time was this:
        Glide was easy to program, closed, and most importantly meant for Voodoo cards only.

        If I am not mistaken, Nvidia hacked up or created a wrapper for Glide to work with their cards *and* use a "windowed" mode.
        3dfx "screamed bloody murder" because they did not want Glide "co-opted", "diluted" or "tainted"... or something along those lines.

        Also, Quake 2 wasn't really Glide *per se*. It was OpenGL interfacing with a custom 3DFX "MiniGL" driver. Nevertheless, it was coded to use the OpenGL API. (You can still run Quake 2 on an Nvidia card in hardware mode, which obviously wouldn't be possible if the game were truly Glide-based.)


        Ah, thank you, I was working to that. Sadly that was the mental jigsaw piece that did not fit into place...doh!

        I think (based on my own observations) that most of the animosity toward DX is based on the same premise as Glide's.
        The only difference is that Glide is/was hardware-specific and DX is OS-specific.

        OpenGL is the "saving grace", as it is hardware and software "agnostic", but the latest shiny object/whizz-bang features are not implemented yet.
        So the "perception" is that GL is behind the curve, even when some features have not been implemented in the drivers by the hardware manufacturers themselves.
        (Witness Nvidia, ATI and the company formerly known as 3dfx... they all did it to greater/lesser degrees.)

        I think it boils down to politics: companies wanted higher "fame-rates" and gamers want higher "frame-rates",
        making GL the "dark horse", so to speak.

        Sound about right?
  • At least at the rate things are currently moving. You still can't rely on OpenGL 1.2 being available on any given system, as drivers are few and far between.

    The problem is that both DirectX and OpenGL are getting very complex. It has been hard enough for video card makers to get stable drivers of any kind out in the past, and this is only going to make it much worse. I can see cutting OpenGL support becoming a common decision.
  • by DG ( 989 ) on Wednesday November 21, 2001 @10:08AM (#2595776) Homepage Journal
    Hey John, have you seen this spec yet?

    What do you think?

    DG
  • From the time Microsoft and SGI announced they were working together to improve 3D performance on Windows, I assumed it was a way for each of them to learn/borrow/steal ideas from the other.

    Unfortunately for SGI, there wasn't much to learn from Microsoft, and MS is better at deception. As others have mentioned, DX has borrowed/stolen APIs and ideas from other competing APIs. It's great for gamers, but bad for the competition.

    Is anyone really surprised Microsoft hasn't released new OpenGL drivers for Windows? Unless game development on other platforms gains momentum, DX will eventually win and OpenGL will fade away. It's a real shame, since OpenGL and Glide are better, but what does the average Joe care about? If it runs fast on their Windows box, no one really cares. At the current rate of performance improvement in GPUs, no one is really going to care about squeezing out the last ounce of power. The only exception I can think of is scientific applications that absolutely need every ounce of power. Doing realtime simulations/modeling requires insane power. But those are niche products.

    • Doing realtime simulations/modeling require insane power.

      That's very true. But I very much doubt there are many cases at all where optimising the drivers and shaving off cycles is going to help much.

      The only mental requirements for processing power I can think of are nuclear modelling and simulated medical trials. That is all precomputed and wouldn't really benefit from a per-pixel shading language. Also, considering Moore's law, this point is moot. Would a faster graphics API get me more completed units with SETI? No.

      I do agree with your comment about MS deception with Fahrenheit. But didn't we all see this coming?

      -J

    • Realtime simulation was one thing OpenGL was designed for. The idea being that you had some really big supercomputer doing the number crunching, while a smaller workstation-level computer did the graphics rendering. That's why OpenGL works over the network. At least the IRIX versions do; I don't know about the WinNT versions.
  • When OpenGL 1.0 was released in 1991, the API was ahead of the hardware and hardware had to catch up.

    Somewhere along the line, the hardware surpassed the API. That's when shit hit the fan with the crap-assed extension system.

    So OpenGL needs to be forward-thinking. Make the hardware push its limits to catch up to the API again. And that is exactly the reasoning behind the Shading Language.

    Good luck guys. Can't wait to see the unveiling next year at Siggraph!!

  • Fahrenheit (Score:3, Interesting)

    by codework ( 252361 ) on Wednesday November 21, 2001 @10:54AM (#2596041) Homepage
    I'm just pleased that OpenGL is progressing to the next level, especially after the Fahrenheit fiasco. It's my belief that MS used it to distract SGI/ARB from OpenGL so DirectX could progress unhindered. And look at the marketplace now: nearly 100% DirectX penetration if we're talking games, and nearly that for the typical workstation/CAD style products that OpenGL considered its home market.

    Don't get me wrong, though. I love OpenGL. Not only do I consider it the _only_ 3D API, I consider it one of the most professional and well-designed APIs out there. It's how an API should be designed.


    Per-pixel shading is a very welcome addition and should save some texture memory. Imagine all the Q3 shaders implemented in hardware... yum. And I'm sure you could implement some nice trees with it too.

    Although all these nice additions to an API won't stop inventive programming. There will still be a need for billboard trees and highlights..

    Even so, the additions to the API will inspire even more ingenious implementations. Lionhead's use of mipmapping for blurring distant objects was ingenious. Look at how far ModeX pushed the PC, or how Mode7 pushed the SNES. With a more powerful API the possibilities appear endless.

    Unfortunately I don't see drivers appearing for a long time..

    -J

  • by Mongoose ( 8480 ) on Wednesday November 21, 2001 @11:02AM (#2596092) Homepage
    I'd like to address everyone in these camps:

    a) OpenGL is dead!

    b) OpenGL is out of date

    c) Let's ditch OpenGL and do DirectX

    DirectX isn't the same thing as OpenGL; you can, however, compare D3D and OpenGL. DirectX covers sound, input, and rendering, not just rendering, kids.

    OpenGL will outlive D3D, since it's what big iron and the 'professionals' use for high-end graphics. Also, hardware vendors produce GL extensions well before D3D work has even started. GL can use extensions made *after* its release to support more features quickly and easily. (If you're in one of these camps, you've never done 3D development, or you think all computers are consumer PCs.)

    Also, if you use DirectX, you're limiting yourself needlessly. If you want the "latest and greatest", then you're not going to use an API that has no modular extension system to support hardware/ideas that came after the API's release. OpenGL can support hardware/algorithms that appeared *after* its release. OpenGL also runs on machines a lot more powerful than the Pentium 4 you bought at CompUSA.
    • DirectX isn't the same thing as OpenGL, however you can compare D3D and OpenGL

      You are correct... Direct3D is at version 8.0 and OpenGL is at 1.3. That means Direct3D is ~6.15384 times better than OpenGL. If you have been told otherwise then you have been misinformed.

    • 1) Right now, D3D features come out LONG before standardized (i.e. not vendor-specific*) extensions do.

      2) Who cares what OpenGL runs on? If you're doing consumer-level 3D, x86 is all you have to worry about.

      * I'm so surprised that a GPL-obsessed place like /. supports OpenGL extensions that are proprietary and lock developers into particular hardware!
  • by Anonymous Coward
    Nintendo claims they sold out of their initial shipment of 700,000 GameCubes. The GameCube uses OpenGL as its rendering library. I'm convinced Sony will use OpenGL in the PS3 (expected in 2-3 years) because there is no way they can use a proprietary API while their two competitors are using PC-compatible APIs. OpenGL has a really bright future.
  • I dunno - I quite like OpenGL, but for realistic rendering, raytracing is pretty cool... Imagine having HW capable of raytrace-rendering 100s of renderman frames per second, and using that to animate in real time.

    Now that would really be cool.
    • I dunno - I quite like OpenGL, but for realistic rendering, raytracing is pretty cool... Imagine having HW capable of raytrace-rendering 100s of renderman frames per second, and using that to animate in real time.

      The odd bit is that we're probably at the point where people could start using realtime raytracing engines. I suspect the reality is that if you were to dredge up some old raytracing engine from years ago and rig it to run in realtime, it still wouldn't look as impressive as the next-generation engines running on a GeForce3. Raytracing initially had an edge over rendering the facets directly because it provided lighting effects, shadows, and reflections. The current engines are already capable of doing this stuff, so why would you use raytracing?

      At one point I assumed that eventually computers would be so powerful that raytraced VR was inevitable. The achievements that have been accomplished with other rendering technologies make that future doubtful.

      • The ART RenderDrive (art-render.com) is a hardware raytracer which supports 3DS Max and (I think) Maya through plugins.

        Given a simple enough scene (under a few hundred thousand polys), I believe this hardware could just about do 30fps raytracing.

        Their PCI card claims a peak rate of 1.1 billion ray/triangle intersections per second, which (given ~5-pixel triangles) should correspond to around 220 million triangles per second.

        divide that by 30, and you would think that this card could render a 7.3 million polygon scene in realtime.

        This is a peak rate, so halve that. It also doesn't account for texturing etc., so halve it again. Take into account reflections, refractions, transparency etc. - each ray likely intersects more than one triangle, so halve it again.

        However, it would appear, based on the manufacturer's specs, that ART's PURE 3D accelerator card could handle raytracing a 900 thousand polygon scene at 30fps, which is 'realtime' as far as I am concerned.

        This doesn't address the PCI bus bottleneck, so the card would probably choke moving scene description data to the card and image data off the card fast enough - so while it could theoretically render this many polys, it most likely can't handle the I/O requirements of realtime rendering.

        It also doesn't address the performance bottleneck of getting your CPU to transform those 900,000 polygons.

        Of course, realtime interactive raytracing is not what this product is designed for, as far as I can see, but the capability is there to put a chipset like the one used in this product behind some super-fast data path on an AGP bus and do realtime raytracing.

        It would be expensive, but it would probably work.
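        The back-of-envelope arithmetic above can be checked in a few lines. Note that the 5-pixel-triangle figure and the three "halve it" derating factors are this comment's assumptions, not ART's published numbers:

```python
peak_intersections = 1.1e9     # claimed ray/triangle tests per second
pixels_per_triangle = 5        # assumed average triangle footprint
fps = 30

tris_per_sec = peak_intersections / pixels_per_triangle
tris_per_frame = tris_per_sec / fps

# derate three times: peak vs. sustained, texturing cost, multi-bounce rays
realistic = tris_per_frame / 2 / 2 / 2

print(f"{tris_per_sec / 1e6:.0f} M triangles/sec")      # 220 M
print(f"{tris_per_frame / 1e6:.2f} M triangles/frame")  # 7.33 M
print(f"{realistic / 1e3:.0f} K triangles/frame")       # 917 K, i.e. roughly 900K
```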

  • OK, walk into a design shop. What do they have? Some SGIs, some NT boxen and a smattering of Macs for Photoshop.

    What do the SGIs and the NT boxen do?

    They run the CAD tools.

    Where do you think 3DLabs gets its money from?

    Yes, that CAD market. Now, CAD tool vendors actually HAVE to provide versions for evil versions of Unix (HP-UX), so they say
    "we need to write a complex graphics product that works on **ix, ***UX and win****; where are those OpenGL docs?"

    But now they say fsck that, they all run with OpenGL extension bob, so we shall use bob's features.

    3DLabs' problem is that they, and many others, don't have bob, and NVidia has until now kept theirs proprietary (once ATI offered their extensions free and open to the ARB, NVidia changed their minds).

    Interesting, but I wonder how long till silicon?

    regards

    john jones
  • Comment removed based on user account deletion
  • All I want from OpenGL is the ability to save portions of the color and depth buffers offscreen and restore those portions later. This is easy in DirectX, yet it is absolutely impossible with OpenGL, and no extension exists that does it.

    The closest thing OpenGL has is the KTX_BUFFER_REGION extension, but unfortunately that extension's specification is broken; it fails to allow this feature properly because it says that any copy operation can erase data from previous copy operations -- totally freakin' stupid.
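    As a sketch of the operation being asked for, here's a toy model of buffer-region save/restore in plain Python. The "depth buffer" is just a list of lists, and nothing here maps to an actual OpenGL call; hardware support for exactly this is the whole point of the complaint:

```python
# Toy "depth buffer": H rows of W floats, seeded with distinct values.
W, H = 8, 6
depth = [[float(x + y * W) for x in range(W)] for y in range(H)]

def save_region(buf, x, y, w, h):
    # copy out a w*h rectangle, like the hoped-for buffer-region save
    return [row[x:x + w] for row in buf[y:y + h]]

def restore_region(buf, x, y, saved):
    # write the rectangle back, leaving the rest of the buffer untouched
    for dy, row in enumerate(saved):
        buf[y + dy][x:x + len(row)] = row

saved = save_region(depth, 2, 1, 3, 2)
depth[1][2] = -1.0               # something scribbles over the region
restore_region(depth, 2, 1, saved)
print(depth[1][2])               # 10.0 -- the original value is back
```

    The crucial property is that each saved region stays independent: restoring one must never clobber data from another save, which is exactly the guarantee the KTX_BUFFER_REGION spec fails to make.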
