A Glimpse Into the 3D Future: DirectX Next Preview
Dave Baumann writes "Beyond3D has put up an article based on Microsoft's game developer presentations given at Meltdown, looking at the future directions of MS's next-generation DirectX - currently titled "DirectX Next" (DX10). With Pixel Shaders 2.0 and 3.0 already a part of DirectX 9, this article gives a feel for what to expect from PS/VS 4.0 and the other DirectX features hardware developers will be expected to deliver with the likes of the R500 and NV50."
Re:It would be nice (Score:4, Informative)
DX10 will work fine with your new card. DX has always done this: DX9 works fine with DX8 cards like the Radeon 9000/9100 and GeForce4 series.
However, those cards don't have support for the new features of DX10 (like PS/VS 3.0/4.0, etc.). The cards can work with the new software, and do, but the hardware just isn't there.
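For what it's worth, this is why games query the device caps at startup rather than the installed DirectX version. A minimal C++ sketch against the DirectX 9 API (object creation and error handling omitted; the helper name is my own):

#include <d3d9.h>

// Sketch: ask Direct3D 9 what pixel shader version the installed card
// actually supports -- this is a hardware property, independent of
// which DX runtime is installed.
bool SupportsPS20(IDirect3D9* d3d)
{
    D3DCAPS9 caps;
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return false;
    // PixelShaderVersion packs major/minor; D3DPS_VERSION builds the
    // value to compare against.
    return caps.PixelShaderVersion >= D3DPS_VERSION(2, 0);
}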
Re:Does this mean OpenGL is finished ? (Score:5, Informative)
Don't take this personally; I always post like this when someone compares these two.
DX9, 10 or whatever already is "compatible"! (Score:5, Informative)
The issue is my card doesn't have the vertex shaders and other registers that DX9 takes advantage of, so I won't be fully accelerating new DX9 features. I can run DX9 games just fine even though my card was designed with DX8 in mind.
It's not that DirectX isn't backwards compatible; it's that your hardware doesn't support technology of the future, since it didn't exist yet.
The only way around this would be if your GPU core were software driven and they could update it. Otherwise, to get new DX10 support, you need a DX10 card that was built with the new functionality in mind.
Backwards compatibility has nothing to do with it. It's just like in the days of MMX vs. non-MMX: if you had MMX it ran faster; if you didn't, the software would still work for you... it would just be slower.
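The MMX analogy holds up in code, too: software probes the CPU at runtime and only takes the fast path when the feature bit is set. A small sketch, assuming GCC/Clang on x86 (CPUID leaf 1 reports MMX in bit 23 of EDX):

#include <cpuid.h>
#include <cstdio>

int main()
{
    unsigned eax, ebx, ecx, edx;
    // CPUID leaf 1: feature flags land in EDX; bit 23 is MMX.
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (edx & (1u << 23)))
        std::printf("MMX present: take the fast path\n");
    else
        std::printf("No MMX: same code path still works, just slower\n");
    return 0;
}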
Re:Does this mean OpenGL is finished ? (Score:4, Informative)
Re:So what is a shader? (Score:4, Informative)
Shader programs let you do cool surface effects [e.g. realistic skin, adding roughness to things, etc...]
What I don't get is why they didn't just make the GPU a generic RISC processor with, say, 32/32 registers [ALU/FPU] and a set of the instructions that fast graphics would require [say, saturated X-bpp operations, fast division, etc...]
That way you'd have a processor you can just upload code to. Also, make it a standard, so instead of having "every Joe and their brother's graphics processor specs..." you'd have something truly conforming...
Tom
Re:Does this mean OpenGL is finished ? (Score:4, Informative)
Re:DX9, 10 or whatever already is "compatible"! (Score:4, Informative)
So it isn't likely DirectX is going to use an MMX implementation of a function when your processor's feature flags say MMX isn't there. Other than that, most people aren't doing inline MMX assembly in their games now that DirectX has taken to supporting streaming instructions itself.
Re:So what is a shader? (Score:3, Informative)
Documentation of the OpenGL side is in the OpenGL Extension Registry [sgi.com], look for "shader" and "program".
Re:DX9, 10 or whatever already is "compatible"! (Score:3, Informative)
Now, in general you are correct - however, the Deus Ex 2 demo refused to run on my girlfriend's PC because it lacked support for pixel shaders (v1.1, iirc). That machine has the latest DX installed, but only has a GeForce 4 MX. My machine, with a Ti 200, runs the demo fine.
Perhaps it doesn't have to be that way, and I realise that it's only a demo, but that's the way it is at the moment.
Also, specifically addressing your MMX comment - I seem to remember Unreal refusing to run on my PC at the time, which had a Cyrix PR166 (with no MMX support), precisely because of the lack of MMX support.
Re:Horse, THEN Cart (Score:4, Informative)
Wow, you are very uninformed for someone who was rated +5 Insightful.
OpenGL exposes new 3D functionality much faster than DirectX, through the OpenGL extension mechanism. It may not be as convenient as having a "standardized" API (and OpenGL 2.0 will address as much of that issue as it can), but it is still better to be able to use new functionality immediately, rather than waiting for the next DirectX release (or worse yet beta) from Microsoft. NVIDIA's drivers even support all of this under Linux.
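For the curious, the extension mechanism boils down to two steps: check the extension string, then fetch the entry points at runtime. A minimal Linux/GLX sketch in C++ (the helper names are my own, and a robust check should match whole extension names rather than substrings):

#include <GL/gl.h>
#include <GL/glx.h>
#include <cstring>

// Step 1: is the extension advertised? (Assumes a current GL context.)
bool HasExtension(const char* name)
{
    const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return exts && std::strstr(exts, name) != 0;
}

// Step 2: fetch an entry point by name.
typedef void (*GenProgramsFn)(GLsizei n, GLuint* programs);

GenProgramsFn LoadGenPrograms()
{
    if (!HasExtension("GL_ARB_vertex_program"))
        return 0;
    return reinterpret_cast<GenProgramsFn>(
        glXGetProcAddressARB(reinterpret_cast<const GLubyte*>("glGenProgramsARB")));
}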
As to your "rare titles" comment, see my other post for top games using OpenGL [slashdot.org]. Also reflect on the fact that every id game, plus all the games based on id engines (Heretic 1/2, RTCW/ET, and many more), uses OpenGL exclusively.
And guess what, when id releases Doom3, I'm pretty sure it'll raise the bar again. Perhaps by then quite a few people will have shader-capable video cards. ;-)
For more correct information about OpenGL, feel free to check out the official OpenGL website [opengl.org].
Re:Version mania (Score:3, Informative)
OpenGL generally has features available BEFORE DirectX does, accessible via extensions.
However, once a feature is available through a vendor extension (NVIDIA or ATI proprietary), it usually takes a while, and some reworking, to make your code work once an official extension (ARB, or sometimes EXT) is supported, as in the sketch below.
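To make the rework concrete: the entry point names (and sometimes the semantics) change when a vendor extension gets promoted, so loaders end up with fallbacks like this hedged GLX sketch:

#include <GL/glx.h>

typedef void (*GLFunc)(void);

// Sketch only: prefer the ARB-standardized entry point, falling back
// to the older NVIDIA-specific one. The lookup is the easy part; the
// semantic differences between the extensions are the real rework.
GLFunc LookupGenPrograms()
{
    GLFunc fn = glXGetProcAddressARB(reinterpret_cast<const GLubyte*>("glGenProgramsARB"));
    if (!fn)
        fn = glXGetProcAddressARB(reinterpret_cast<const GLubyte*>("glGenProgramsNV"));
    return fn;
}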
However, your other comments are pretty much right on. You don't change your DX8 game to DX9 just because DX9 came out; you probably WILL change it to DX9 because your manager, who knows nothing about technology, says OOOH BUZZWORDS! and wants them on your game's box too...
Re:So what is a shader? (Score:5, Informative)
This was tried in the past with TI's TIGA (Texas Instruments Graphics Architecture), which supported the TMS34010 and TMS34020/34082 graphics coprocessors. It was a really neat architecture which accelerated 2D and basic 3D operations. Unfortunately, the CPU manufacturers (Intel, etc...) would identify the bottlenecks and optimise their CPUs so that the next generation of CPUs would be faster than a current-generation CPU/GPU combination. "Local Bus" basically knocked TIGA out of the market. A real shame, since you could write your own extensions which had complete access to GPU memory (maybe this was a bad thing). They even got as far as having a trapezium rendering algorithm (halfway to rendering triangles).
Going back to the present day, look for extensions like ARB_vertex_program and ARB_fragment_program. According to Microsoft's plans, these will at least have identical instruction sets. I wonder how long it will be before we can completely define an entire graphics pipeline using a single program.
(This would probably require virtual "clip_vertex" and "render_triangle" function calls.)
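For a feel of what those ARB programs look like today, here's a minimal C++ sketch that loads a pass-through vertex program (my own illustration; it assumes the ARB_vertex_program entry points are available, e.g. via GL_GLEXT_PROTOTYPES on Linux - on Windows you'd fetch them with wglGetProcAddress):

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>
#include <cstring>

// A trivial ARB_vertex_program: transform the vertex by the
// modelview-projection matrix and pass the color through.
static const char* kProgram =
    "!!ARBvp1.0\n"
    "PARAM mvp[4] = { state.matrix.mvp };\n"
    "TEMP p;\n"
    "DP4 p.x, mvp[0], vertex.position;\n"
    "DP4 p.y, mvp[1], vertex.position;\n"
    "DP4 p.z, mvp[2], vertex.position;\n"
    "DP4 p.w, mvp[3], vertex.position;\n"
    "MOV result.position, p;\n"
    "MOV result.color, vertex.color;\n"
    "END\n";

void LoadVertexProgram()
{
    GLuint id = 0;
    glGenProgramsARB(1, &id);
    glBindProgramARB(GL_VERTEX_PROGRAM_ARB, id);
    glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)std::strlen(kProgram), kProgram);
    glEnable(GL_VERTEX_PROGRAM_ARB);
}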
Re:Does this mean OpenGL is finished ? (Score:5, Informative)
No, it's not. With the approval of the ARB_vertex_buffer_object extension and GLSlang, both APIs expose about the same level of functionality. Render-to-texture is a mess in OpenGL right now, but there are Super Buffers/pixel_buffer_object extensions in the works, and the Super Buffers extension looks like it will cover most of the functionality that is slated for DirectX Next.
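As a taste of the first one, uploading geometry through ARB_vertex_buffer_object looks roughly like this (C++ sketch; entry point setup omitted, helper name mine):

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

// Upload vertex data once into a driver-managed buffer so the driver
// can keep it in fast (video/AGP) memory instead of copying it from
// system memory every frame.
GLuint MakeStaticVBO(const float* verts, GLsizei bytes)
{
    GLuint buf = 0;
    glGenBuffersARB(1, &buf);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, buf);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, bytes, verts, GL_STATIC_DRAW_ARB);
    return buf;
}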
Relevant links:
http://oss.sgi.com/projects/ogl-sample/registry/
http://oss.sgi.com/projects/ogl-sample/registry/ARB/vertex_buffer_object.txt
http://oss.sgi.com/projects/ogl-sample/registry/ARB/shading_language_100.txt
http://www.opengl.org/about/arb/notes/meeting_note_2003-06-10.html
http://developer.nvidia.com/docs/IO/8230/GDC2003_OGL_ARBSuperbuffers.pdf
Note that OpenGL is usually updated once a year, at SIGGRAPH. The next version of DirectX is slated for after the release of Longhorn; that'll be 2005 or so.
Please do not perpetuate the myth that OpenGL is "falling behind" Direct3D. That is plain wrong, and a disservice to both the open source community and the graphics development community.
Re:They're failing to address a major bottleneck I (Score:3, Informative)
Re:Who cares? (Score:3, Informative)
Er, just what do you think was used for the MacOS and Linux ports?
There are three different scenarios:
Which makes the most sense to you?
Cliff's notes for people who don't want to read it (Score:4, Informative)
1. The big change is that all memory goes virtual. What this means is that you don't need to load an entire texture to render a subset of its pixels. This is a VERY good thing, considering that on most textures you're only using a low-level mipmap anyway. Thus, texture memory on the card becomes more like a gigantic L2 or L3 cache that can be used efficiently. You can also have massive texture spaces without things going all slow over AGP. 3Dlabs' Wildcat already does this, and it was originally mentioned by Carmack in his 3/27/2000 .plan update.
In addition, geometry is stored virtually as well, as are shaders, which can be loaded into the processor in pages instead of being limited to a small block of instructions that has to fit entirely into the GPU's registers. The registers now work more like an L1 cache, and shader programs can be of effectively unlimited size. This means lots of neat special effects will be possible.
2. Higher-order surfaces (curves) are getting mandated. No more N-Patches vs. TruForm; it's going to use standard curve systems like Bézier splines (see the Bézier sketch after this list).
3. Fur rendering and shadow volumes are going into hardware as part of a new "tessellation processor".
4. You can have multiple instances of meshes. This means you can take one model, run a few vertex programs on it, and store each result separately. This saves a lot of time later.
5. An integer instruction set. This is so you don't have to deal with floating-point data when you don't need to. There are times you want simpler data for use in a shader program, and having to pretend everything is a floating-point texture isn't convenient.
6. Framebuffer current-pixel-value reads. This has been a developer request for a long time. It's not mandatory in the spec, but it can be used for all sorts of stuff. Basically, the GPU can read the current value in the framebuffer into the pixel pipeline without needing to maintain a second copy. This will both save a lot of memory and let you do things such as light accumulation more efficiently (see the blending sketch after this list).
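A couple of illustrative C++ sketches for the list above (my own, not from the presentations). First, point 2: evaluating a cubic Bézier curve with de Casteljau's algorithm, which is the kind of per-vertex math a tessellation unit would run:

#include <array>

struct Vec3 { float x, y, z; };

static Vec3 Lerp(const Vec3& a, const Vec3& b, float t)
{
    return { a.x + t * (b.x - a.x),
             a.y + t * (b.y - a.y),
             a.z + t * (b.z - a.z) };
}

// De Casteljau: repeatedly lerp adjacent control points until one
// point remains; that point lies on the curve at parameter t.
Vec3 EvalCubicBezier(const std::array<Vec3, 4>& p, float t)
{
    Vec3 a = Lerp(p[0], p[1], t);
    Vec3 b = Lerp(p[1], p[2], t);
    Vec3 c = Lerp(p[2], p[3], t);
    return Lerp(Lerp(a, b, t), Lerp(b, c, t), t);
}

And point 6, as a pure software model (not a real API): once the pixel pipeline can read the destination value, blending stops being a fixed-function menu and becomes whatever you write, e.g. saturating light accumulation:

struct RGBA { float r, g, b, a; };

// With the current framebuffer value ('dst') readable in the pixel
// pipeline, each light pass can add its contribution directly, using
// an arbitrary (here, saturating) combine rule and no second copy of
// the framebuffer.
RGBA AccumulateLight(const RGBA& dst, const RGBA& light)
{
    const float one = 1.0f;
    RGBA out;
    out.r = dst.r + light.r > one ? one : dst.r + light.r;
    out.g = dst.g + light.g > one ? one : dst.g + light.g;
    out.b = dst.b + light.b > one ? one : dst.b + light.b;
    out.a = dst.a;
    return out;
}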