Graphics Software

A Glimpse Into the 3D Future: DirectX Next Preview

Dave Baumann writes "Beyond3D has put up an article based on Microsoft's game-developer presentations given at Meltdown, looking at the future direction of MS's next-generation DirectX - currently titled "DirectX Next" (DX10). With Pixel Shaders 2.0 and 3.0 already part of DirectX 9, this article gives a feel for what to expect from PS/VS 4.0 and the other DirectX features hardware developers will be expected to deliver with the likes of the R500 and NV50."
  • Re:It would be nice (Score:4, Informative)

    by halo1982 ( 679554 ) * on Sunday December 07, 2003 @11:21AM (#7653294) Homepage Journal
    If they could somehow program DX10 so it was backwards compatible with current cards (such as the Radeon 9800, etc.)... if I'd bought such a card, I'd be quite annoyed if there wasn't decent support for it in the future.

    DX10 will work fine with your new card. DX has always done this: DX9 works fine with DX8 cards like the Radeon 9000/9100 and GeForce4 series.
    However, the cards do not have support for the new features of DX10 (like PS/VS 3.0/4.0, etc.). The cards can work with the new software, and do, but the hardware just isn't there.

  • by lemody ( 588908 ) on Sunday December 07, 2003 @11:26AM (#7653309)
    OpenGL and DirectX are different kinds of systems: DirectX offers interfaces for input devices, sound control, etc., while OpenGL is just for graphics!
    Don't take this personally; I always post like this when someone compares the two :)
  • by cybrthng ( 22291 ) on Sunday December 07, 2003 @11:37AM (#7653366) Homepage Journal
    DX9 is backwards compatible with even my lowly NV25 and MX cards.

    The issue is that my card doesn't have the vertex shaders and other registers that DX9 takes advantage of, so I won't be fully accelerating new DX9 features. I can run DX9 games just fine even though my card was designed with DX8 in mind.

    It's not that DX9 isn't backwards compatible; it's that your hardware doesn't support technology that didn't exist when the card was built.

    The only way around this would be if your GPU core were software-driven and they could update it. Otherwise, to get new DX10 support, you need a DX10 card that was built with the new functionality in mind.

    Backwards compatibility has nothing to do with it. It's just like in the days of MMX vs. non-MMX: if you had MMX it ran faster; if you didn't, it still worked... just slower.
  • by Anonymous Coward on Sunday December 07, 2003 @11:47AM (#7653398)
    SDL does this for Linux (and several other OSes, including Windows). It uses OpenGL for the 3D portion. Unfortunately, DirectX is years ahead of SDL.
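    To make that concrete, a minimal sketch of opening an OpenGL-capable window through SDL (SDL 1.2-era API; the header path and window parameters are arbitrary choices of mine):

        #include <SDL/SDL.h>   /* SDL 1.2; header path varies by platform */
        #include <GL/gl.h>

        int main(int argc, char *argv[])
        {
            if (SDL_Init(SDL_INIT_VIDEO) < 0)
                return 1;

            /* Ask SDL for an OpenGL rendering context instead of a 2D surface. */
            if (SDL_SetVideoMode(640, 480, 32, SDL_OPENGL) == NULL) {
                SDL_Quit();
                return 1;
            }

            glClearColor(0.0f, 0.0f, 0.0f, 1.0f);   /* plain GL calls from here on */
            glClear(GL_COLOR_BUFFER_BIT);
            SDL_GL_SwapBuffers();                    /* present the frame */

            SDL_Delay(2000);
            SDL_Quit();
            return 0;
        }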
  • by tomstdenis ( 446163 ) <tomstdenis AT gmail DOT com> on Sunday December 07, 2003 @12:05PM (#7653458) Homepage
    I'm no 3D coder, but as I understand it, shaders are short programs you can upload to the GPU to control how a face is rendered [at a given vertex]. Before that, you used to say "render me with [Phong|Gouraud|flat] shading" and the whole thing looked uniform.

    Shader programs let you do cool things like surface effects [e.g. skin, roughness, etc.].

    What I don't get is: why didn't they just make the GPU a generic RISC with, say, 32/32 registers [ALU/FPU] and the set of instructions that fast graphics requires [say, saturated X-bpp operations, fast division, etc.]?

    That way you have a processor you can just upload code to. Also, make it a standard, so instead of having "every Joe and his brother's graphics processor specs..." you have something truly conforming...

    Tom
  • by Molt ( 116343 ) on Sunday December 07, 2003 @12:10PM (#7653479)
    I'd say that if you're talking about the graphics subsystem of DirectX, then OpenGL is pretty much at the same level if... and only if... you are willing to use the standardised extensions [sgi.com]. If you're not using these, expect slowness; if you're using the non-standardised vendor-specific extensions, expect more speed but more difficulty making things work across the board.
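    To illustrate, a minimal sketch of the usual runtime check for a standardised extension (the has_extension helper is my own name; assumes a current GL context):

        #include <GL/gl.h>
        #include <string.h>

        /* Return 1 if `name` occurs as a whole token in the space-separated
           extension string; a bare strstr() can false-positive on prefixes. */
        int has_extension(const char *name)
        {
            const char *all = (const char *)glGetString(GL_EXTENSIONS);
            const char *p = all;
            size_t len = strlen(name);

            while (p && (p = strstr(p, name)) != NULL) {
                int starts = (p == all) || (p[-1] == ' ');
                int ends = (p[len] == ' ') || (p[len] == '\0');
                if (starts && ends)
                    return 1;
                p += len;
            }
            return 0;
        }

        /* Usage, once a GL context is current:
           if (has_extension("GL_ARB_vertex_program")) { ... use it ... }
           else { ... fall back to the fixed-function path ... }          */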
  • by BasharTeg ( 71923 ) on Sunday December 07, 2003 @12:14PM (#7653507) Homepage
    Except, as I understand it, with DirectX there are multiple implementations of each function. If you're running a P54C, it loads the pointer to the classic implementation of that function; if you're running a Pentium MMX, it loads the pointer to the MMX implementation; etc. The same goes for choosing between x87, SSE, 3DNow!, and SSE2.

    So it isn't likely DirectX is going to use an MMX implementation of a function when your processor flags don't agree. Beyond that, most people aren't doing inline MMX assembly in their games now that DirectX has taken to supporting streaming instructions itself.
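    The pattern being described looks roughly like this (a hedged sketch, not actual DirectX internals; the blend functions and the feature probe are illustrative names of mine):

        #include <stddef.h>

        typedef void (*blend_fn)(unsigned char *dst, const unsigned char *src,
                                 size_t n);

        /* Plain-C fallback that runs anywhere. */
        static void blend_scalar(unsigned char *dst, const unsigned char *src,
                                 size_t n)
        {
            size_t i;
            for (i = 0; i < n; i++)
                dst[i] = (unsigned char)((dst[i] + src[i]) / 2);
        }

        /* MMX variant; the actual MMX assembly/intrinsics are elided here. */
        static void blend_mmx(unsigned char *dst, const unsigned char *src,
                              size_t n)
        {
            blend_scalar(dst, src, n);   /* placeholder body for the sketch */
        }

        /* Stand-in for a real CPUID-based feature probe. */
        static int cpu_has_mmx(void)
        {
            return 0;
        }

        static blend_fn blend;           /* resolved once, called many times */

        void init_dispatch(void)
        {
            blend = cpu_has_mmx() ? blend_mmx : blend_scalar;
        }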
  • by Pius II. ( 525191 ) <PiusII@nospAM.gmx.de> on Sunday December 07, 2003 @12:26PM (#7653580)
    Er, they did. There's even a C-like programming language, in case you don't want to write raw assembler for these processors. The whole process of uploading stuff onto the graphics card is halfway standardized, at least in OpenGL; I don't use DX, but according to the documentation you can use the same shaders with similar commands.

    Documentation of the OpenGL side is in the OpenGL Extension Registry [sgi.com], look for "shader" and "program".
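    For a flavour of the OpenGL side, a minimal sketch of mine uploading a trivial ARB_vertex_program (assumes the extension is present; on Windows the entry points come from wglGetProcAddress rather than the prototypes below):

        #define GL_GLEXT_PROTOTYPES       /* exposes the ARB prototypes on Linux */
        #include <GL/gl.h>
        #include <GL/glext.h>
        #include <string.h>

        /* A do-nothing vertex program: pass the vertex position through. */
        static const char *vp =
            "!!ARBvp1.0\n"
            "MOV result.position, vertex.position;\n"
            "END\n";

        void upload_vertex_program(void)
        {
            GLuint id;

            glGenProgramsARB(1, &id);
            glBindProgramARB(GL_VERTEX_PROGRAM_ARB, id);
            glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                               (GLsizei)strlen(vp), vp);
            glEnable(GL_VERTEX_PROGRAM_ARB);   /* subsequent draws use the program */
        }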
  • by Tim C ( 15259 ) on Sunday December 07, 2003 @12:37PM (#7653649)
    DX9 is backwards compatible with even my lowly NV25 and MX cards... It's just like in the days of MMX vs. non-MMX: if you had MMX it ran faster; if you didn't, it still worked... just slower.

    Now, in general you are correct - however, the Deus Ex 2 demo refused to run on my girlfriend's PC because it lacked support for pixel shaders (v1.1, iirc). That machine has the latest DX installed, but only has a GeForce 4 MX. My machine, with a Ti 200, runs the demo fine.

    Perhaps it doesn't have to be that way, and I realise that it's only a demo, but that's the way it is at the moment.

    Also, specifically addressing your MMX comment - I seem to remember Unreal refusing to run on my PC at the time, which had a Cyrix PR166 (with no MMX support), precisely because of the lack of MMX support.
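    What such a demo is presumably doing is a caps check along these lines (a hedged sketch using Direct3D 9's C interface, not the actual game's code; error handling trimmed):

        #include <d3d9.h>   /* compile as C and link with d3d9.lib */

        int supports_ps_1_1(void)
        {
            IDirect3D9 *d3d = Direct3DCreate9(D3D_SDK_VERSION);
            D3DCAPS9 caps;
            int ok = 0;

            if (d3d) {
                if (SUCCEEDED(IDirect3D9_GetDeviceCaps(d3d, D3DADAPTER_DEFAULT,
                                                       D3DDEVTYPE_HAL, &caps)))
                    ok = caps.PixelShaderVersion >= D3DPS_VERSION(1, 1);
                IDirect3D9_Release(d3d);
            }
            return ok;   /* 0 on a GeForce4 MX, which has no pixel shader units */
        }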
  • Re:Horse, THEN Cart (Score:4, Informative)

    by Glock27 ( 446276 ) on Sunday December 07, 2003 @01:41PM (#7654022)
    Compare this to OpenGL, which is lagging so far behind that only rare titles take it seriously (Doom3 is the one that springs to mind).

    Wow, you are very uninformed for someone who was rated +5 Insightful.

    OpenGL exposes new 3D functionality much faster than DirectX, through the OpenGL extension mechanism. It may not be as convenient as having a "standardized" API (and OpenGL 2.0 will address as much of that issue as it can), but it is still better to be able to use new functionality immediately, rather than waiting for the next DirectX release (or worse yet, a beta) from Microsoft. NVIDIA's drivers even support all of this under Linux.

    As to your "rare titles" comment, see my other post for top games using OpenGL [slashdot.org]. Also reflect on the fact that every id game, plus all the games based on id engines (Heretic 1/2, RTCW/ET and many more), uses OpenGL exclusively.

    And guess what, when id releases Doom3, I'm pretty sure it'll raise the bar again. Perhaps by then quite a few people will have shader-capable video cards. ;-)

    For more correct information about OpenGL, feel free to check out the official OpenGL website [opengl.org].

  • Re:Version mania (Score:3, Informative)

    by .pentai. ( 37595 ) on Sunday December 07, 2003 @03:09PM (#7654514) Homepage
    I'm sorry, but you're misinformed, in some ways.
    OpenGL generally has features available BEFORE DirectX does, accessible via extensions.

    However, once a feature is available through a vendor extension (NVIDIA- or ATI-proprietary), it usually takes a while, and some reworking, before your code works with the officially supported extension (ARB, or sometimes EXT).

    However, your other comments are pretty much right on. You don't change your DX8 game to DX9 just because DX9 came out; you probably WILL change it to DX9 because your manager, who knows nothing about technology, says OOOH, BUZZWORDS! and wants them on your game's box too...
  • by SmackCrackandPot ( 641205 ) on Sunday December 07, 2003 @03:39PM (#7654667)
    What I don't get is: why didn't they just make the GPU a generic RISC with, say, 32/32 registers [ALU/FPU] and the set of instructions that fast graphics requires [say, saturated X-bpp operations, fast division, etc.]?

    This was tried in the past with TI's TIGA (Texas Instruments Graphics Architecture), which supported the TMS34010 and TMS34020/34082 graphics coprocessors. It was a really neat architecture that accelerated 2D and basic 3D operations. Unfortunately, the CPU manufacturers (Intel, etc.) would identify the bottlenecks and optimise their CPUs so that the next-generation CPU would be faster than a current-generation CPU/GPU combination. "Local Bus" basically knocked TIGA out of the market. A real shame, since you could write your own extensions, which had complete access to GPU memory (maybe this was a bad thing). They even got as far as having a trapezium-rendering algorithm (halfway to rendering triangles).

    Going back to the present day, look for extensions like ARB_vertex_program and ARB_fragment_program. According to Microsoft's plans, these will at least have identical instruction sets. I wonder how long it will be before we can completely define an entire graphics pipeline with a single program.
    (This would probably require virtual "clip_vertex" and "render_triangle" function calls.)
  • by Screaming Lunatic ( 526975 ) on Sunday December 07, 2003 @03:42PM (#7654687) Homepage
    At this point DirectX is years ahead of OpenGL

    No, it's not. With the approval of the ARB_vertex_buffer_object extension and GLslang, both APIs expose about the same level of functionality. Render-to-texture is a mess in OpenGL right now, but there are Super Buffers/pixel_buffer_object extensions in the works, and the Super Buffers extension looks like it will cover most of the functionality that is slated for DirectX Next.
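    For a flavour of ARB_vertex_buffer_object, a minimal sketch of mine (assumes the extension's entry points are already resolved; the triangle data is arbitrary):

        #define GL_GLEXT_PROTOTYPES
        #include <GL/gl.h>
        #include <GL/glext.h>

        static const GLfloat verts[] = {
             0.0f,  1.0f, 0.0f,   /* one triangle */
            -1.0f, -1.0f, 0.0f,
             1.0f, -1.0f, 0.0f,
        };

        void draw_with_vbo(void)
        {
            GLuint vbo;

            glGenBuffersARB(1, &vbo);
            glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
            glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(verts), verts,
                            GL_STATIC_DRAW_ARB);   /* driver may keep it on-card */

            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0); /* offset, not ptr */
            glDrawArrays(GL_TRIANGLES, 0, 3);

            glDeleteBuffersARB(1, &vbo);
        }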

    Relevant links:

    http://oss.sgi.com/projects/ogl-sample/registry/

    http://oss.sgi.com/projects/ogl-sample/registry/ARB/vertex_buffer_object.txt

    http://oss.sgi.com/projects/ogl-sample/registry/ARB/shading_language_100.txt

    http://www.opengl.org/about/arb/notes/meeting_note_2003-06-10.html

    http://developer.nvidia.com/docs/IO/8230/GDC2003_OGL_ARBSuperbuffers.pdf

    Note that OpenGL is usually updated once a year at SIGGRAPH. The next version of DirectX is slated for after the release of Longhorn; that'll be 2005 or so.

    Please do not perpetuate the myth that OpenGL is "falling behind" Direct3D. That is plain wrong, and a disservice to both the open source community and the graphics development community.

  • by Lord_Dweomer ( 648696 ) on Sunday December 07, 2003 @04:48PM (#7655013) Homepage
    And just so everybody is clear: Samir Gupta is an infamous Slashdot troll who, in fact, does NOT work for Nintendo. Do not be confused by his seemingly intelligent posts.

  • Re:Who cares? (Score:3, Informative)

    by Glock27 ( 446276 ) on Sunday December 07, 2003 @05:19PM (#7655192)
    You are arguing against your own point. Since DirectX games have been ported successfully to both MacOS and Linux, there's really no reason to use OpenGL.

    Er, just what do you think was used for the MacOS and Linux ports?

    There are three different scenarios:

    1. Use OpenGL for graphics in either your own game engine or a third-party engine... then porting is almost trivial. (id Software uses this approach.)
    2. Use a third-party engine that supports both DirectX and OpenGL; then, as long as you use vendor-supplied functionality, switching is no problem. What is a problem is that if you extend the game engine with additional graphics effects, thus differentiating your game from the pack, you have to do it for both Direct3D and OpenGL.
    3. Write your own game engine that wraps both Direct3D and OpenGL, and handle all the requisite hassles yourself.

    Which makes the most sense to you?

  • by Gldm ( 600518 ) on Sunday December 07, 2003 @07:53PM (#7656100)
    The article's really long and somewhat technical. Here are the layman's highlights for anyone who just wants to know, "OK, what should I care about?"

    1. The big change is that all memory goes virtual. What this means is that you don't need to load an entire texture to render a subset of its pixels. This is a VERY good thing, considering that on most textures you're only using a low-level mipmap anyway. Texture memory on the card thus becomes more like a gigantic L2 or L3 cache that can be used efficiently, and you can have massive texture spaces without having things go all slow over AGP. 3Dlabs' Wildcat already does this. This was originally mentioned by Carmack in the 3/27/2000 .plan update, which you can find here: http://www.bluesnews.com/cgi-bin/finger.pl?id=1&time=20000308010919

    In addition, geometry is stored virtually as well, as are shaders, which can be loaded into the processor in pages instead of being limited to a small block of instructions that has to fit entirely into the GPU registers. The registers now work more like an L1 cache, and shader programs can be of effectively unlimited size. This means lots of neat special effects will be possible.

    2. Higher-order surfaces (curves) are getting mandated. No more N-patches vs. TruForm; it's going to use standard curve systems like Bezier splines (see the sketch after this list).

    3. Fur rendering and shadow volumes are going into hardware as part of a new "tessellation processor".

    4. You can have multiple instances of meshes. This means you can take one model, run a few vertex programs on it, and store each result separately. Saves a lot of time later.

    5. An integer instruction set. This is so you don't have to deal with floating-point data when you don't need to. There are times you want simpler data for use in a shader program, and having to pretend everything's a floating-point texture isn't convenient.

    6. Frame-buffer current-pixel-value reads. This has been a developer request for a long time. It's not mandatory in the spec, but it can be used for all sorts of stuff. Basically, the GPU can read the current value in the framebuffer into the pixel pipeline without needing to maintain a second copy. This will both save a lot of memory and allow you to do things such as light accumulation more efficiently.
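    On point 2, a small sketch of my own showing what evaluating such a curve involves: de Casteljau's algorithm for a cubic Bezier, the kind of fixed-function work a tessellation unit could take over:

        typedef struct { float x, y, z; } vec3;

        static vec3 lerp3(vec3 a, vec3 b, float t)
        {
            vec3 r;
            r.x = a.x + t * (b.x - a.x);
            r.y = a.y + t * (b.y - a.y);
            r.z = a.z + t * (b.z - a.z);
            return r;
        }

        /* p[0..3] are the control points; t runs from 0 to 1 along the curve. */
        vec3 bezier3(const vec3 p[4], float t)
        {
            vec3 a = lerp3(p[0], p[1], t);   /* first round of interpolation */
            vec3 b = lerp3(p[1], p[2], t);
            vec3 c = lerp3(p[2], p[3], t);
            vec3 d = lerp3(a, b, t);         /* second round */
            vec3 e = lerp3(b, c, t);
            return lerp3(d, e, t);           /* the point on the curve */
        }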
