OpenGL 2.0 White Papers
Timothy J. Wood writes "3DLabs has posted a series of white papers on OpenGL 2.0 covering topics such as improving parallelism, timing control, minimizing data movement, programmable pixel pack and unpack, and (most notably) a proposal for a hardware-independent shading language."
Sounds good. (Score:1)
I think it will be a fun upcoming year for graphics programming B-)
Needs 3D hardware support (Score:1)
Meeting minutes (Score:3, Informative)
> Bimal: Devil's Advocacy question: why do we want OpenGL to survive? If IHVs can't articulate this and drive progress, it won't survive.
I'd be really sad to see OpenGL go. It's the only way I've been able to fart around with all that graphic loveliness since university, doing my bit with deformable objects.
I hope they get their finger out and pull it off. Apple really should be helping to sponsor this sort of thing, IMHO...
Re:Meeting minutes (Score:3, Insightful)
The margins for software and hardware vendors in this market are much larger, and more secure, than in the games software market (where most products barely break even) - you can get away with charging £20,000 a year for a license to many specialised programs.
Re:Meeting minutes (Score:1)
Re:Meeting minutes (Score:1)
You mean stick around for the next 40 years?
Re:Meeting minutes (Score:4, Insightful)
(b) As a mechanical engineer and computational fluid dynamicist, I assure you, the workstations are not "dominated by windows" - most people are still on SGIs, and the majority of those who aren't are moving to Linux, not Windows NT, on the advice of the application vendors, who find supporting their apps on Linux much less of a pain than on WinNT.
Unix clusters aren't going away either. Just because you can do on one computer what took a cluster two years ago doesn't mean that people like me won't just find more complex problems to solve. Depending on the application, there's a spectrum of cost/performance solutions that may be worthwhile - if you're simulating a nuclear explosion and CPUs get more powerful, you don't necessarily downsize; you might make the simulation more accurate by using roughly the same number of computers to do much more. Humans AREN'T able to simulate the physical world with complete accuracy - but the more calculations, the better (assuming perfect programming), at least until you hit quantum limits, and then it takes EVEN MORE power to do probabilistic predictions via Monte Carlo or sum-over-histories....
Re:Meeting minutes (Score:2)
Bruce's Law: Every 18 months plus one day, the size of computational problems doubles.
;-)
What about.. (Score:3, Insightful)
Re:What about.. (Score:3, Informative)
Re:What about.. (Score:1)
Re:What about.. (Score:2, Informative)
Re:What about.. (Score:1)
Re:What about.. (Score:1)
Re:What about.. (Score:1)
Re:What about.. (Score:1)
http://www.libsdl.org/download-1.2.html
"""
Runtime libraries:
Linux:
SDL-1.2.3-1.i686.rpm (Mandrake 8.1)
SDL-1.2.3-1.ppc.rpm
Win32:
SDL-1.2.3-win32.zip
BeOS:
SDL-1.2.3-x86-beos.zip (BeOS 5.0)
MacOS:
SDL-1.2.3-PPC.sea.bin
MacOS X:
SDL-1.2.3.pkg.tar.gz
"""
Notice the Win32 entry in that list.
Check out the games section -- a number of them have precompiled Windows binaries. PyGame (a python interface to SDL) is also well worth checking out.
THATS WHERE I HEARD IT (Score:1)
Re:What about.. (Score:3, Interesting)
SDL is a 2D frame buffer and blitting library for the most part (to be pedantic, it also includes input, CD, and minimal sound libraries). The 3D side of SDL is just OpenGL. So if OpenGL goes away, so do 3D graphics in SDL.
If OpenGL goes away? (Score:3, Funny)
Re:What about.. (Score:1)
I disagree - what about keyboard, joystick, and sound stuff? OpenGL doesn't support any of that, and it's needed to write a game.
I am porting the Atari800 emulator to SDL now, so I can compare it to other ports, like the svgalib or X11 ones. You need a lot of code to write something with pure X, and svgalib is messy too, because you have to write your own keyboard, sound, and joystick drivers. SDL is a great library - no matter whether you're doing a 2D or 3D game. I think using SDL is the best way to write any Linux game. And you get Win/BeOS/Unices portability for free.
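To give an idea of how little code that plumbing takes, here's a rough sketch of an SDL 1.2 input loop (not code from the actual port; the 336x240x8 mode is just an Atari-ish example):
"""
#include <SDL/SDL.h>

int main(int argc, char *argv[])
{
    /* one init call replaces the hand-rolled svgalib/X11 keyboard+joystick code */
    if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_JOYSTICK) < 0)
        return 1;
    SDL_Surface *screen = SDL_SetVideoMode(336, 240, 8, SDL_SWSURFACE);
    if (!screen) { SDL_Quit(); return 1; }
    if (SDL_NumJoysticks() > 0)
        SDL_JoystickOpen(0);            /* joystick events need an opened stick */

    int running = 1;
    while (running) {
        SDL_Event ev;
        while (SDL_PollEvent(&ev)) {
            if (ev.type == SDL_QUIT)
                running = 0;
            else if (ev.type == SDL_KEYDOWN &&
                     ev.key.keysym.sym == SDLK_ESCAPE)
                running = 0;
            /* SDL_KEYUP, SDL_JOYBUTTONDOWN etc. are handled the same way */
        }
        /* ...draw the emulated frame into screen->pixels here... */
        SDL_Flip(screen);
    }
    SDL_Quit();
    return 0;
}
"""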
Re:What about.. (Score:1)
They will appeal to both markets if they use both the console and the PC. Why? Because both make money, and dropping one would cost them too much profit. Well, this is the way the market is now... and has been for a while.
Making games for consoles is hard; you can't be as free as you would like with all your resources. Yes, I know this is slowly changing, especially with the PS2 and the X-box. I may not like the X-box personally, but I recognize that it has potential.
Re:What about.. (Score:2)
Re:What about.. (Score:2)
Something that might seem a little odd about OpenGL is that it can't open a window, so you always need some platform-specific code (normally GLUT, which is already cross-platform) to do it. SDL is a nicer alternative.
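For example, opening a GL-capable window with SDL looks roughly like this (a sketch; the 640x480x32 mode and the two-second pause are arbitrary):
"""
#include <SDL/SDL.h>
#include <GL/gl.h>

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    /* SDL_OPENGL makes SDL create the GL context for us */
    if (!SDL_SetVideoMode(640, 480, 32, SDL_OPENGL)) {
        SDL_Quit();
        return 1;
    }
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    SDL_GL_SwapBuffers();   /* buffer swaps are platform-specific too; SDL hides that */
    SDL_Delay(2000);
    SDL_Quit();
    return 0;
}
"""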
0.02
Re:What about.. (Score:3, Informative)
It's a very low level media abstraction library, together with a lot of small extensions which add support for image processing, truetype fonts, sound mixing, etc.
http://www.libsdl.org/libraries.html
shows what extension libraries to SDL exist.
Re:What about.. (Score:3, Informative)
(from SDLsite:)
Simple DirectMedia Layer is a cross-platform multimedia library designed to provide fast access to the graphics framebuffer and audio device. It is used by MPEG playback software, emulators, and many popular games, including the award winning Linux port of "Civilization: Call To Power." Simple DirectMedia Layer supports Linux, Win32, BeOS, MacOS, Solaris, IRIX, and FreeBSD.
SDL is written in C, but works with C++ natively, and has bindings to several other languages, including Ada, Eiffel, ML, Perl, PHP, Python, and Ruby.
Pretty much all commercial games on Linux use SDL, as do most new little games on Linux, and a fair proportion of them on Windows and MacOS.
See here [libsdl.org] for more details.
Re:What about.. (Score:1)
Does somebody know of a subset of SDL running on PalmOS?
Re:What about.. (Score:1)
Re:What about.. (Score:1)
Re:What about.. (Score:2, Interesting)
The input handling is annoying at best. It is similar to handling WM_KEYDOWN and the associated messages. Who wants to do that? It's an inflexible solution. That is why DirectInput was invented: to give an OO, abstracted view of the input devices connected to your machine.
Just to add to that: I really hope SDL [libsdl.org] takes off. Without trying to be redundant, developers really need a cross-platform library that matches DirectX. I can't wait until the first major game is released using SDL or something similar. That's a better solution for getting games on Linux than WineX or whatever other emulator you want. And contrary to what they claim, Wine is an emulator.
You have to admit that Windoze is king in the PC gaming market. So let's develop a library that allows developers to make games on Windoze...and then get Linux, Mac, etc ports for free.
Re:What about.. (Score:1)
Abstracted? OO? So, in fact, it's the exact opposite of "direct".
SDL is just that... (Score:3, Insightful)
I've never used SDL professionally (I've used Direct-X, Glide, PlayStation-specific APIs and some old in-house stuff for DOS), but I've toyed around with it in my spare time and I would have no trouble trusting it as the foundation for a high-quality cross-platform game (both 2D and 3D). In fact, I would rather use it than Direct-X, since I find the API simpler and more straightforward, as long as I don't need some obscure Direct-X feature for performance reasons (most games don't).
The URL is www.libsdl.org if you want to check it out.
does this mean (Score:2, Funny)
:)
Shading Language (Score:5, Interesting)
Can't wait! Hopefully they'll base it on something already well established, i.e. RenderMan SL.
Re:OpenGL is losing ground to DX8 (Score:2)
Agreed. Unfortunately, there are barely any games *now* that use OpenGL (not that OGL is going away--it is still the only choice for more technical/scientific viz).
Not that DX8 is a bad API--it's just not portable, which is considered bad by the people on the non-ported end.
Re:OpenGL is losing ground to DX8 (Score:1)
Re:Shading Language (Score:1)
http://www.sgi.com/software/shader/ [sgi.com]
Re:Shading Language (Score:1)
Obviously I am looking far into the future when real time graphics are going to start looking at real time photo-realistic lighting. Hopefully some day these kinds of things...
http://www.3dluvr.com/marcosss/
http://jackie.sf.net/
...will be done in real time, but at that point there will be problems like the ones I describe above with shaders.
Way too late. (Score:4, Interesting)
Another issue is that Microsoft has, up till now, refused to distribute an updated opengl32.dll with their operating systems. The current version is the old OpenGL v1.1-compatible one. SGI has said it delivered a v1.2 version to Microsoft, but for whatever reason it was never distributed onward to customers. This widens the gap between a non-uniform OpenGL platform still stuck on v1.1 (forcing you to use non-standard stuff like vendor-specific extensions and vendor-specific OpenGL loading) on one side, and the DirectX API on the other. Without Microsoft's help, OpenGL will never be in the front seat again on Windows systems, and because Windows is gaining a lot of marketshare in the workstation market, not in that typical OpenGL area either.
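For anyone who hasn't had the pleasure, the vendor-specific loading dance looks something like this (a sketch: opengl32.dll only exports the 1.1 entry points, so anything newer, multitexture here, has to be fetched at runtime):
"""
#include <windows.h>
#include <GL/gl.h>

/* hand-rolled typedef; normally you'd take it from glext.h */
typedef void (APIENTRY *PFNGLACTIVETEXTUREARB)(GLenum texture);
static PFNGLACTIVETEXTUREARB glActiveTextureARB;

void load_extensions(void)
{
    /* only valid with a current GL context; returns NULL if the
       installed driver doesn't expose the extension */
    glActiveTextureARB = (PFNGLACTIVETEXTUREARB)
        wglGetProcAddress("glActiveTextureARB");
}
"""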
Re:Way too late. (Score:2, Insightful)
Re:Way too late. (Score:1)
Just because MS has this market share doesn't mean that it is *the* 3d API to use. Sony probably sells the most component receivers out there but that certainly doesn't mean that it is the best choice.
There is no reason why OpenGL couldn't be used on Windows. You might just have to provide a recent OpenGL DLL. The same goes for Java.
I will say that it might be easier to use the solutions that MS pushes (shoves) - but that doesn't make them better.
Re:Way too late. (Score:4, Informative)
They all come with OpenGL drivers. You don't even notice that MS doesn't ship them. Install the video card's drivers, get OpenGL.
MS is really in a position to lose market share here to Linux because of this: Linux on a PC with fast 3D (via NVidia, for example) is infinitely more like the workstation being replaced than NT on a PC is.
At the higher workstation end (higher than GeForce3), people aren't yet looking at Windows because the hardware isn't there anyway.
I think it'll be a while before OpenGL dies, especially as in all markets people are finally moving up the ladder - to scenegraph APIs like this one [openscenegraph.org].
If the SG supports both DX and OGL backends, then you don't even have to think about it.
my random 0.02,
Mike
Re:Way too late. (Score:1)
Re:Way too late. (Score:2)
Mainly, Windows vs. Irix, PC vs. Onyx, and Wildcat vs. RE3.
Value for money is a completely different thing though.
My 0.02
Re:Way too late. (Score:1)
Re:Way too late. (Score:3, Interesting)
Hrmmm. Apple is up-and-coming (again) with its nice PPCs and MacOS X (UNIX!). This, and the fact that even MacOS 9 doesn't use DirectX, means that software shipping for both Macs and PCs, and seeking to stick with the growing (again) Apple market, will have to stick with OpenGL. Mac doesn't do DirectX (thank GAWD!).
Many people still do lots of graphics work on Macs, and that means OpenGL. Games on Macs will have to be OpenGL.
Fortunately, since there is but a relatively minor difference between MacOS X and Linux/*BSD, support for Macs means an easier time getting support for Linux/*BSD in this area. DirectX is not killing and has not killed OpenGL. It cannot.
No need (Score:1)
Re:No need (Score:1)
In a nutshell: OpenGL2 good; "Pure" OpenGL bad (Score:4, Interesting)
The synchronisation stuff is pretty handy: certainly, NV_fence has been very useful over the past year or so, and again: vendor-specific paths BAD.
Some of the changes seem to be there as much as anything to persuade some ignorant developers to use OpenGL over D3D - the "black box" aspects of OpenGL are one of the more DESIRABLE things about it. Changing those because some D3D guy is saying "I do xyz in D3D and I want the exact same concept to work well in GL because I'm too thick to actually use the right approach for that renderer" seems simply wrong to me.
Uh-oh: UPS just kicked in. Yay mountain storms...
"Pure" OpenGL2 is a terrible mistake. Give vendors the option NOT to support something, and they won't. Then all your old apps+games are up shit creek.
Will finish later when I have stable power again...
Re:In a nutshell: OpenGL2 good; "Pure" OpenGL bad (Score:2, Informative)
after a whole load of (largely MS-related) shenanigans involving the ARB and SGI (MS basically did the same to SGI as they did to IBM with OS/2 and WNT, with a project called "Fahrenheit" - which wasted lots of SGI's time and money. SGI should have known better than to deal with MS)
Since Apple and BSD, commercial unix and Linux are now all firmly on the OpenGL path, it's unlikely OpenGL will go away...
Re:In a nutshell: OpenGL2 good; "Pure" OpenGL bad (Score:1)
Re:In a nutshell: OpenGL2 good; "Pure" OpenGL bad (Score:1)
I believe that the 'optional' part will be from the program's point of view - that means someone can't produce a pure-OpenGL-2.0-compliant library that doesn't provide the backwards compatibility (not without failing the GL compliance tests, that is).
If you want to use pure OpenGL 2.0, you can, but your legacy apps will still run just fine.
Re:In a nutshell: OpenGL2 good; "Pure" OpenGL bad (Score:2)
1. Implementors would be cutting their own throats by dropping legacy support. Not to mention they would fail conformance.
2. Implementors can now point to a small subset of the API and say "this must be done the best/fastest" because future applications will be migrating toward it. Currently, implementors have to look and see what parts of OpenGL various applications are using, and pick and choose which subsets of the API to optimize.
3. Legacy OpenGL has a lot of CRAP in it. Crap that is useful from an academic perspective, but not heavily used in many real apps. (Selection and TexGen come to mind).
4. Immediate-mode/display lists/vertex arrays. Do you need three different ways to specify your scene? Chop out everything but vertex arrays and call it OpenGL Pure. Fewer code paths mean there are fewer ways implementors can screw up their drivers.
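To make point 4 concrete, here's the same triangle down two of the three paths (a rough sketch):
"""
#include <GL/gl.h>

static const GLfloat verts[] = { 0,1,0,  -1,-1,0,  1,-1,0 };
static const GLuint  idx[]   = { 0, 1, 2 };

void draw_immediate(void)           /* the "legacy" path */
{
    glBegin(GL_TRIANGLES);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glEnd();
}

void draw_arrays(void)              /* the one path "Pure" would keep */
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, idx);
    glDisableClientState(GL_VERTEX_ARRAY);
}
"""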
Re:In a nutshell: OpenGL2 good; "Pure" OpenGL bad (Score:2, Interesting)
So, yeah: the Stanford PS stuff would be a very nice fit, from the looks of things. Certainly, something along those lines is needed, and it might as well be something that's actually been thrashed around a bit.
One thing that needs to be stressed wrt OpenML is that it's NOT necessarily what we want in OpenGL. While there's a clear overlap in some areas, the ARB needs to make sure that they don't end up kitchen-sinking it. The concern here is dependencies: in the same way that you try to keep classes as loosely coupled as possible, you also don't want your specs tied to each other so that changes in OpenML end up being mirrored slavishly in OpenGL irrespective of whether they actually add real value *from an OpenGL perspective* or not.
I'm absolutely NOT going to buy into the "xyz existing feature should be cut out" approach. Are the non-indexed primitives fundamentally worthless for "real" work? Yeah, pretty much. But they're also an easy way for newbies to mess around with OpenGL and get instant results. Those of us who suffered through the POS that was D3D's execute buffers would prefer that people not need 80 lines of code just to draw a single triangle. Every language needs its "Hello World", and that's basically the function that immediate mode serves.
Even display lists, which I haven't used in many years now, seem to have a place - I gather that on a fair number of non-"consumer class" systems they're still the way to go for a lot of things.
The "focus on this stuff which must be done the best/fastest" argument is a load of bollocks. Developers and IHVs already know what the important fast path is, and have for years: it's glDrawElements on fully-featured data in CVAs.
The "fewer code paths mean there are fewer ways implementors can screw up their drivers" claim is fundamentally flawed, because we would certainly hope that any vendor who DOES drop "legacy" support goes out of business, so those "old" code paths will still be in every driver. Or at least, every driver for the cards *I'll* own...
Basically, given a decent shading language, we're pretty much set in terms of features. That's BY FAR the most important part of this.
GLsync (i.e. NV_fence extended and renamed) is a nice thing, although it's as much a matter of doing the Right Thing for the future as anything else, since we can EASILY saturate GeForce2-class cards ATM. Saving a few microseconds in setup/transform is all well and good, but since the true bottleneck for common apps (i.e. games) right now is memory bandwidth, the actual gains are basically non-existent. Still, there are other apps that DO benefit greatly from the improved parallelism, and like I say it's the Right Thing anyway.
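For reference, the NV_fence pattern looks roughly like this today (a sketch; it assumes the extension is present and, for brevity, that the prototypes come from glext.h rather than runtime loading):
"""
#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <GL/glext.h>

static GLuint fence;   /* created once with glGenFencesNV(1, &fence) */

void submit_batch(void)
{
    /* ...issue a pile of geometry... */
    glSetFenceNV(fence, GL_ALL_COMPLETED_NV);   /* drop a marker in the stream */
}

void reuse_buffers_safely(void)
{
    /* instead of glFinish() (a full pipeline stall), poll just our
       marker and overlap CPU work until the GPU has consumed the batch */
    while (!glTestFenceNV(fence)) {
        /* AI, physics, next frame's setup... */
    }
}
"""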
The pack/unpack features are nice for certain effects, but they're clearly an example of "add this predominantly useless feature just to shut D3D people up". While a pack processor MAY lend itself to something useful (though given what's available through the pixel/vertex shaders that's a bit iffy), the unpack aspects seem almost completely pointless to me. Basically, the amount of "useful" work you can do in software is close to nil, and ANY glReadPixels-style call is going to bog because you're stalling the pipeline. That the other costs of such an action can be minimised is all very lovely, but next to the stall they're absolutely trivial.
The memory management is a *shrug* feature. Useful in some places, and I expect I'll whore certain aspects of it once it's available, but in reality it's nothing like as big a deal as some people seem to think - again, most of it is "D3D has this and we don't". Strangely enough though, this perceived lack doesn't stop my renderer or, say, Quake3's consistently outperforming any D3D-based one, does it?
Re:In a nutshell: OpenGL2 good; "Pure" OpenGL bad (Score:1)
Re:In a nutshell: OpenGL2 good; "Pure" OpenGL bad (Score:1)
This illustrates another problem. ATI cards support 6 simultaneous textures. NVidia cards support 4. Kyro supports 8, but doesn't support pixel shaders. Code that will work with all of these will need to use ordinary DX or OpenGL multitexture commands, and only 4 textures. This is the worst of all those options. To get the best performance and image quality on all these cards, you need to write 3 different versions of the same code. We need a universal shader language with a good fallback mechanism.
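Today the "fallback mechanism" is a hand-rolled capability probe, something like this sketch (the unit counts are the per-vendor numbers quoted above):
"""
#include <GL/gl.h>
#include <GL/glext.h>   /* GL_MAX_TEXTURE_UNITS_ARB */

int choose_multitexture_path(void)
{
    GLint units = 1;
    glGetIntegerv(GL_MAX_TEXTURE_UNITS_ARB, &units);
    if (units >= 8) return 8;   /* Kyro-class path   */
    if (units >= 6) return 6;   /* ATI-class path    */
    if (units >= 4) return 4;   /* NVidia-class path */
    return 1;                   /* lowest common denominator */
}
"""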
> the "black box" aspects of OpenGL are one of the more DESIRABLE things about it.
Yep. Basic OpenGL is easy to program for. Tell it to transform an object, apply textures, and light it, and it does it. No need to tell it whether it's using hardware for the T&L. Perhaps it would be nice to have more direct control over buffers, since a lot of programmers think in terms of chunks of data rather than state, but the basic framework works very well and doesn't take long to learn.
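That "just state" quality is easy to show (a sketch; the light position and rotation are arbitrary):
"""
#include <GL/gl.h>

void setup_transform_texture_light(void)
{
    static const GLfloat light_pos[] = { 1.0f, 1.0f, 1.0f, 0.0f };

    glEnable(GL_LIGHTING);               /* "light it" */
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_POSITION, light_pos);

    glEnable(GL_TEXTURE_2D);             /* "apply textures" (bind one elsewhere) */

    glMatrixMode(GL_MODELVIEW);          /* "transform an object" */
    glRotatef(30.0f, 0.0f, 1.0f, 0.0f);
    /* ...then just draw; whether T&L happens in hardware is the driver's business */
}
"""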
The future of OpenGL article (Score:4, Interesting)
cannot wait to see (Score:1, Funny)
Humm... (Score:3, Interesting)
Let me explain, and ignore the hardware issues to some extent:
Looking back at a summary of 3D APIs:
Glide: Wickedly fast, easy to write for, obsessively proprietary (IIRC), a la 3dfx.
OpenGL: Fast (Glide speed on compliant/correct hardware), moderately hard to write for initially but easier as time goes by. Open/closed? I honestly forget. Was it documented APIs and closed source?
DirectX: Humm. Used to be "dog-ass slow". Moderately fast now, maybe medium speed, very compatible, used to be (maybe still is) hard as hell to write for (this may have changed).
Eventually, if I recall correctly, it absorbed the "better aspects" of other 3D APIs, but also added another degree of difficulty/confusion to implementation.
Now, realize I base this on my gaming, my memory (oops), and the discussions of merits I've read/heard/been privy to.
I'll focus on gaming because that is where the rubber meets the road (or where the 'trons hit the tube).
OpenGL: Quake/GLquake (...or was it GLquack? heh)
Even the "software" mode was impressive and eye-opening, but GLquake put the "am" in {higher octave in voice} 'Daaaaaaaammmmmnn'.
Fast and pretty (for its day).
Glide: Quake2, software vs. Glide mode. No debate - at least not until you can pick your jaw up off the floor and keep your mouth closed for more than 5 seconds.
(and stop drooling on my keyboard, dammit).
Glide was the... damn, what is the word?... pinnacle, saviour, "schwing" that low- and mid-end hardware needed to be 'high-endish'.
(example: a friend of mine playing Tomb Raider on a G3 333... showed him TR on a P200 w/V2... 'fuck you, man' was the response {seg})
DirectX... Thief: Glide vs. DX {murmph-snort-bwahahahaha}... Hummmm: 30 fps in Glide,
Nowadays, the software has improved incrementally, but the hardware by leaps and bounds...making the s/w look good. Humm.
Sort of the reverse of Glide--software made the h/w shine--here it is the other way around.
I think OpenGL is trying to bring the "make the hardware shine" days back.
Good luck, because if I understand the way DX is now... it has "absorbed" the APIs of GL and Glide and whatever company made the software in the first place (damned if I can find the link... a British company, I believe).
Well, I've blathered on long enough. Not bashing, just offering my opinion and what I saw and heard in a nutshell.
The eyes don't lie... that is the lips' job.
Cheers.
Re:Humm... (Score:2)
APIs can't be faster or slower. They're just interfaces. The "speed" of your graphics depends on the driver implementation and bus speed (for tris/sec) and the graphics hardware underneath (for texels/sec).
As for how easy one is to program, well, at this point anyone who knows one API and has a good grasp on 3D fundamentals can probably pick up another API in under a week. Immediate-mode 3D APIs aren't really that different anymore.
Re:Humm... (Score:1)
True, but APIs can make a large difference in how well they lend themselves to hardware acceleration. APIs that require lots of data copying will leave you with slow-running applications.
One of the biggest problems with the current OpenGL API is that there are too many ways to try to accelerate geometry rendering, and they don't work equally well on different hardware.
In a way, it's kind of "nice" that DirectX totally rewrites itself every couple of versions. Now they have a single way of specifying primitives; it's relatively simple to understand, and it lends itself well to hardware acceleration.
Re:Humm... (Score:1)
Survey of Microsoft DirectX [depaul.edu]
I quote:
"What is now Direct3D was purchased by Microsoft in early 1995 from RenderMorphics, a British company founded in 1993 by Servan Keondjian"
A few (mglavin) corrections (Score:1)
As Stiletto already noted, Direct3D and OpenGL don't really differ in performance anymore (if they ever did in the first place).
Also, Quake 2 wasn't really Glide *per se*. It was OpenGL interfacing with a custom 3dfx 'MiniGL' driver. Nevertheless, it was coded to use the OpenGL API. (You can still run Quake 2 on an NVidia card in hardware mode, which obviously wouldn't be possible if the game were truly Glide-based.)
Re:A few (mglavin) corrections (Score:1)
Too much caffeine, perhaps.
Sorry about that - I said "API" when "drivers" is what I meant. Thanks, Stiletto.
What made Glide, and by extension 3dfx, "kick ass" vs. DirectX at the time was:
Glide was easy to program, closed, and most importantly meant for Voodoo cards only.
If I am not mistaken, NVidia hacked up or created a wrapper for Glide to work with their cards *and* use a "windowed" mode.
3dfx "screamed bloody murder" because they did not want Glide "co-opted", "diluted" or "tainted"...or something along those lines.
> Also, Quake 2 wasn't really Glide *per se*. It was OpenGL interfacing with a custom 3dfx 'MiniGL' driver. Nevertheless, it was coded to use the OpenGL API. (You can still run Quake 2 on an NVidia card in hardware mode, which obviously wouldn't be possible if the game were truly Glide-based.)
Ah, thank you, I was working toward that. Sadly, that was the mental jigsaw piece that did not fit into place... doh!
I think (based on my own observations) that most of the animosity toward DX is based on the same premise as Glide's was.
The only difference is that Glide is/was hardware-specific and DX is OS-specific.
OpenGL is the "saving grace", as it is hardware- and software-"agnostic", but the latest shiny-object/whizz-bang features are not implemented yet.
So the "perception" is that GL is behind the curve, even when some features simply have not been implemented in the drivers by the hardware manufacturers themselves.
(witness NVidia, ATI and the company formerly known as 3dfx... they all did it to greater/lesser degrees).
I think it boils down to the politics: companies wanted higher "fame-rates" and gamers want higher "frame-rates".
Making GL the "dark horse", so to speak.
Sound about right?
And we'll have working 2.0 drivers in five years (Score:2)
The problem is that DirectX is getting very complex, and OpenGL is getting very complex. It has been hard enough for video card makers to produce stable drivers of any kind in the past. This is only going to make it much worse. I can see cutting OpenGL support becoming a common decision.
Calling John Carmack! (Score:3, Funny)
What do you think?
DG
wasn't the MS/SGI pack just a ploy to borrow/steal (Score:2, Interesting)
Unfortunately for SGI, there wasn't much to learn from Microsoft, and MS is better at deception. As others have mentioned, DX has borrowed/stolen APIs and ideas from other competing APIs. It's great for gamers, but bad for the competition.
Is anyone really surprised Microsoft hasn't released new OpenGL drivers for Windows? Unless game development on other platforms gains momentum, DX will eventually win and OpenGL will fade away. It's a real shame, since OpenGL and Glide are better, but what does the average Joe care about? If it runs fast on their Windows box, no one really cares. At the current rate of performance improvement in GPUs, no one is really going to care about squeezing out the last ounce of power. The only exception I can think of is scientific applications that absolutely need every ounce of power. Doing realtime simulations/modeling requires insane power. But those are niche products.
Re:wasn't the MS/SGI pack just a ploy to borrow/st (Score:1)
That's very true. But I very much doubt there are many cases at all where optimising the drivers and shaving off cycles is going to help much.
The only mental requirements for processing power I can think of are nuclear modelling and simulated medical trials. That work is all precomputed and wouldn't really benefit from a per-pixel shading language. Also, considering Moore's Law, the point is moot. Would a faster graphics API get me more completed units with SETI? No.
I do agree with your comment about MS's deception with Fahrenheit. But didn't we all see this coming?
-J
Re:wasn't the MS/SGI pack just a ploy to borrow/st (Score:1)
OpenGL 2.0 forward thinking (Score:1)
Somewhere along the line, the hardware surpassed the API. That's when shit hit the fan with the crap-assed extension system.
So OpenGL needs to be forward-thinking. Make the hardware push its limits to catch up to the API again. And that is exactly the reasoning behind the Shading Language.
Good luck guys. Can't wait to see the unveiling next year at Siggraph!!
Fahrenheit (Score:3, Interesting)
Don't get me wrong, though. I love OpenGL. Not only do I consider it the _only_ 3D API, I consider it one of the most professional and well-designed APIs out there. It's how an API should be designed.
Per-pixel shading is a very welcome addition and should save some texture memory. Imagine all the Q3 shaders implemented in hardware... yum. And I'm sure you could implement some nice trees with it too...
All these nice additions to the API won't stop inventive programming, though. There will still be a need for billboard trees and highlights...
Even so, the additions to the API will create even more ingenious implementations. Lionhead's use of mipmapping for blurring distant objects was ingenious. Look at how far Mode X pushed the PC, or how Mode 7 pushed the SNES. With a more powerful API, the possibilities appear endless.
Unfortunately I don't see drivers appearing for a long time..
-J
I hate being the voice of reason... (Score:3, Insightful)
a) OpenGL is dead!
b) OpenGL is out of date
c) Let's ditch OpenGL and do DirectX
DirectX isn't the same thing as OpenGL; you can, however, compare D3D and OpenGL. DirectX covers sound, input, and rendering, not just rendering, kids.
OpenGL will outlive D3D, since it's what big iron and the 'professionals' use for high-end graphics. Also, hardware vendors produce GL extensions way before D3D work has even started. GL can use extensions made *after* its release to support more features quickly and easily. (If you're in one of these camps, you've never done 3D development, or you think all computers are consumer PCs.)
Also, if you use DirectX, you're limiting yourself needlessly. If you want the "latest and greatest", then you're not going to use an API that has no modular extension system to support hardware/ideas that came after the API's release. OpenGL can support hardware/algorithms that appeared *after* its release. OpenGL also runs on machines a lot more powerful than the Pentium 4 you bought at CompUSA.
Re:I hate being the voice of reason... (Score:1)
You are correct... Direct3D is at version 8.0 and OpenGL is at 1.3. That means Direct3D is ~6.15384 times better than OpenGL. If you have been told otherwise then you have been misinformed.
Re:I hate being the voice of reason... (Score:2)
2) Who cares what OpenGL runs on? If you're doing consumer-level 3D, x86 is all you have to worry about.
> I'm so surprised that a GPL obsessed place like
Re:I hate being the voice of reason... (Score:2)
700,000 OpenGL systems sold in past 4 days alone! (Score:1, Interesting)
Random thought (Score:1)
Now that would really be cool.
Re:Random thought (Score:2)
I dunno - I quite like OpenGL, but for realistic rendering, raytracing is pretty cool... Imagine having HW capable of raytrace-rendering 100s of RenderMan frames per second, and using that to animate in real time.
The odd bit is that we're probably at the point where people could start using realtime raytracing engines. I suspect the reality is that if you were to dredge up some old raytracing engine from years ago and rig it to run in realtime, it still wouldn't look as impressive as the next-generation engines running on a GeForce3. Raytracing initially had an edge over rendering the facets directly because it provided lighting effects, shadows, and reflections. The current engines are already capable of doing this stuff, so why would you use raytracing?
At one point I assumed that eventually computers would be so powerful that raytraced VR was inevitable. The achievements that have been accomplished with other rendering technologies make that future doubtful.
Re:Random thought (Score:2)
Given a simple enough scene (under a few hundred thousand polys), I believe this hardware could just about do 30fps raytracing.
Their PCI card claims a peak rate of 1.1 billion ray/triangle intersections per second, which (given ~5-pixel triangles) should correspond to around 220 million triangles per second.
Divide that by 30, and you would think this card could render a 7.3-million-polygon scene in realtime.
This is a peak rate, so halve that. It also doesn't account for texturing etc., so halve it again. Take into account reflections, refractions, transparency etc. - each ray likely intersects more than one triangle - so halve it again.
Even so, it would appear, based on the manufacturer's specs, that ART's PURE 3D accelerator card could handle raytracing a 900-thousand-polygon scene at 30fps, which is 'realtime' as far as I am concerned.
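(For what it's worth, the arithmetic does work out - a back-of-the-envelope sketch using the same assumptions:)
"""
#include <stdio.h>

int main(void)
{
    double rays_per_sec = 1.1e9;                  /* claimed peak intersections/sec */
    double tris_per_sec = rays_per_sec / 5.0;     /* ~5-pixel triangles: ~220M/sec  */
    double per_frame    = tris_per_sec / 30.0;    /* at 30fps: ~7.3M tris/frame     */
    double derated      = per_frame / 2 / 2 / 2;  /* peak, texturing, multi-hit rays */
    printf("~%.2g polygons per frame at 30fps\n", derated);   /* ~9.2e5 */
    return 0;
}
"""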
This doesn't address the PCI bus bottleneck, though - the card probably can't move scene description data in and image data out fast enough. So while it could theoretically render this many polys, it most likely can't handle the I/O requirements of realtime rendering.
It also doesn't address the performance bottleneck of getting your CPU to transform those 900,000 polygons.
Of course, realtime interactive raytracing is not what this product is designed for, as far as I can see, but the capability is there to put a chipset like the one used in this product behind some super-fast data path on an AGP bus and do realtime raytracing.
It would be expensive, but it would probably work.
CAD (Score:2)
What do the SGIs and the NT boxen do?
They run the CAD tools.
Where do you think 3DLabs gets its money from?
Yes, that CAD market. Now CAD tool vendors actually HAVE to provide versions for evil versions of Unix (HP-UX), so they say:
"we need to write a complex graphics product that works on **ix, ***UX and win**** - where are those OpenGL docs"
But now they say fsck that, they all run with OpenGL extension bob, so we shall use bob's features.
3DLabs' problem is that they, and many others, don't have bob, and NVidia has until now kept theirs proprietary (once ATI offered their extensions free and open to the ARB, NVidia changed their minds).
Interesting, but I wonder how long till silicon?
regards
john jones
Re: (Score:1)
All I want from Open GL... (Score:1)
The closest thing OpenGL has is the KTX_BUFFER_REGION extension, but unfortunately that extension's specification is broken; it fails to allow this feature properly, because it says that any copy operation can erase data from previous copy operations - totally freakin' stupid.