Technology

NVIDIA's Pixel & Vertex Shading Language 263

Barkhausen Criterion writes "NVIDIA have announced a high-level Pixel and Vertex Shading language developed in conjunction with Microsoft. According to this initial look, the "Cg Compiler" compiles the high-level Pixel and Vertex Shader language into low-level DirectX and OpenGL code. While the press releases are running amok, CG Channel (Computer Graphics Channel) has the most comprehensive look at the technology. The article notes, "Putting on my speculative hat, the motivation is to drive hardware sales by increasing the prevalence of Pixel and Vertex Shader-enabled applications and gaming titles. This would be accomplished by creating a forward-compatible tool for developers to fully utilize the advanced features of current GPUs, and future GPUs/VPUs.""
This discussion has been archived. No new comments can be posted.

  • Re:Linux Support (Score:3, Informative)

    by friedmud ( 512466 ) on Thursday June 13, 2002 @02:37PM (#3695679)
    What are you talking about?? Nvidia makes great Linux drivers - and from looking through the pages it looks to me like Cg just outputs regular OpenGL (well - Nvidia-flavoured OpenGL anyway), so I would venture a guess that any of these will run just fine on the Nvidia Linux drivers.

    My only problem is that the toolkit itself is only for Windows :-(

    Anyone try it with Wine/Winex yet?? I might when I get home.

    Derek
  • by Steveftoth ( 78419 ) on Thursday June 13, 2002 @02:37PM (#3695680) Homepage
    According to the web site, they are working to implement this on top of both OpenGL and DirectX, on Linux and Mac as well.
    Basically this is a wrapper for the assembly that you would have to write if you were going to write a shader program. It compiles a C-like (as in look-alike) language into either the DirectX shader program or the OpenGL shader program. So you'll need a compiler for each and every API that you want to support, which means you'll need a different compiler for OpenGL/Nvidia and OpenGL/ATI until they standardize it.

    On a more technical note, the lack of branching in vertex/pixel shaders really needs to be fixed; it's really the only feature that needs to be added to them. That's why the Cg code looks so strange: it's C, but there are no loops.
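    A minimal sketch of what that looks like in practice (illustrative Cg-style code with made-up names, not from NVIDIA's examples): the "for each light" loop has to be written out by hand, because the underlying shader profiles have no loop or jump instructions.

        // Hypothetical Cg vertex program: two directional lights, loop unrolled by hand.
        struct VertOut {
            float4 position : POSITION;
            float4 color    : COLOR0;
        };

        VertOut main(float4 position : POSITION,
                     float3 normal   : NORMAL,
                     uniform float4x4 modelViewProj,
                     uniform float3   lightDir0, uniform float3 lightCol0,
                     uniform float3   lightDir1, uniform float3 lightCol1)
        {
            VertOut OUT;
            OUT.position = mul(modelViewProj, position);

            // No for-loop available, so each light is a separate, straight-line block:
            float3 c = lightCol0 * max(dot(normal, lightDir0), 0.0);
            c       += lightCol1 * max(dot(normal, lightDir1), 0.0);

            OUT.color = float4(c, 1.0);
            return OUT;
        }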
  • by alriddoch ( 197022 ) on Thursday June 13, 2002 @02:44PM (#3695739) Homepage
    It seems to me that this is probably an attempt to kill OpenGL 2.0, and secure DirectX as the dominant 3D API. OpenGL 2.0 has, as far as I can tell, been well thought out, and most of the feedback on it has been very positive. The frontend to its shader language is Free Software, and the work seems to have been done with the best of intentions. I am very cynical about an offering from NVIDIA, especially when you consider their behaviour towards the rest of the 3D card market, and the fact that Microsoft are involved.
  • In Fact...... (Score:3, Informative)

    by friedmud ( 512466 ) on Thursday June 13, 2002 @02:46PM (#3695765)
    From Nvidia's Homepage [nvidia.com] you can check out the press releases and find this:

    "NVIDIA's Cg Compiler is also cross platform, supporting programs written for Windows®, OS X, Linux, Mac and Xbox®."

    So maybe even though the tools aren't cross-platform, the compiler is. I think this is a great step forward towards OpenGL 2.0 - it shows that Windows doesn't have to be the only platform to write graphically intensive applications for.

    Derek
  • by Dark Nexus ( 172808 ) on Thursday June 13, 2002 @02:47PM (#3695774)
    Well, they're quoted in this [com.com] article on ZDNet (the quote is near the bottom) as saying that they're going to release the language base so other chip makers can write their own compilers for their products.

    That was the first thing that popped into my head when I read this article, but it sounds like they're going to give open access to the standards, just not to the interface with their chips.
  • Re:No loops? (Score:2, Informative)

    by GameMaster ( 148118 ) on Thursday June 13, 2002 @03:21PM (#3696066)
    The actual assembly language used by the present generation of shader-supporting video chips has no support for loops and only marginal support for conditional statements (meaning no explicit jmp op). Since this is the code that the Cg compiler compiles down to, they can't add those features to the language. It makes some sense because shaders are meant to be short and sweet. Even though they are hardware accelerated, they get run so many times that even a shader that uses only the max number of ops (which is now at 128 ops for NVIDIA chips) is considered pretty slow. If loops were added it would slow the system down even more. When they say those features are "eventually planned to be supported" they mean that they'll be supported by a future generation of hardware (most likely the DirectX 9 compatible chips).

    -GameMaster
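
    To make the "marginal support for conditionals" point above concrete, here is a rough Cg-style sketch (hypothetical names, not from the parent post): with no jump instruction available, an if/else is expressed as arithmetic selection rather than a branch.

        // Hypothetical fragment program: "if bright, darken it" without branching.
        float4 main(float2 uv : TEXCOORD0,
                    uniform sampler2D baseMap,
                    uniform float     threshold) : COLOR
        {
            float4 c    = tex2D(baseMap, uv);
            float  lum  = dot(c.rgb, float3(0.299, 0.587, 0.114));
            float  mask = step(threshold, lum);   // 0 or 1, computed per pixel
            return lerp(c, c * 0.5, mask);        // select between the two results
        }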
  • What Cg means (Score:5, Informative)

    by Effugas ( 2378 ) on Thursday June 13, 2002 @03:36PM (#3696194) Homepage
    Seems like a decent number of people have absolutely no clue what Cg is all about, so I'll see if I can clear up some of the confusion:

    Modern NVIDIA (and ATI) GPUs can execute decently complex instruction sets on the polygons they're set to render, as well as the actual pixels rendered either direct to screen or on the texture placed on a particular poly. The idea is to run your code as close to the actual rendering as possible -- you've got massive logic being deployed to quickly convert your datasets into some lit scene from a given perspective; might as well run a few custom instructions while we're in there.

    There's a shit-ton of flexibility lost -- you can't throw a P4 into the middle of a rendering pipeline -- but in return, you get to stream the massive amounts of data that the GPU has computed in hardware through your own custom-designed "software" filter, all within the video card.

    For practical applications, some of the best work I've seen with realtime hair uses vertex shaders to smoothly deform straight lines into flowing, flexible segments. From pixel shaders, we're starting to see volume rendering of actual MRI data that used to take quite some time to calculate instead happening *in realtime*.

    It's a bit creepy to see a person's head, hit C, and immediately a clip plane slices the top of the guy's scalp off and you're lookin' at a brain.

    Now, these shaders are powerful, but by nature of where they're deployed, they're quite limited. You've got maybe a couple dozen assembly instructions that implement "useful" features -- dot products, reciprocal square roots, adds, multiplies, all in the register domain. It's not a general purpose instruction set, and you can't use it all you like: There's a fixed limit as to how many instructions you may use within a given shader, and though it varies between the two types, you've only got space for a couple dozen.
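    As a rough illustration of that instruction set (a hedged sketch with hypothetical names, not from the parent post): the handful of Cg operations below map almost directly onto those register-level dot-product, reciprocal-square-root, and multiply instructions.

        // Hypothetical per-pixel diffuse term written close to the hardware ops.
        float4 main(float3 normal  : TEXCOORD0,
                    float3 toLight : TEXCOORD1,
                    uniform float3 lightColor) : COLOR
        {
            // Normalization spelled out as dot product + reciprocal square root + multiply:
            float3 N = normal  * rsqrt(dot(normal,  normal));
            float3 L = toLight * rsqrt(dot(toLight, toLight));
            float  d = max(dot(N, L), 0.0);        // one more dot product plus a clamp
            return float4(lightColor * d, 1.0);
        }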

    If you know anything about compilers, you know that they're not particularly well known for packing the most power per instruction. Though there's been some support for a while for dynamically adjusting shaders according to required features, they've been more assembly-packing toolkits than true compilers.

    Cg appears different. If you didn't notice, Cg bears more than a passing resemblance to Renderman, the industry standard language for expressing how a material should react to being hit with a light source. (I'm oversimplifying horrifically, but heh.) Renderman surfaces are historically done in software *very, very* slowly -- this is a language optimized for the transformation of useful mathematical algorithms into something you can texture your polys with...speed isn't the concern, quality above all else is.

    Last year, NVidia demonstrated rendering the Final Fantasy movie, in realtime, on their highest end card at the time. They hadn't just taken the scene data, reduced the density by an order of magnitude, and spit the polys on screen. They actually managed to compile a number of the Renderman shaders into the assembly language their cards could understand, and ran them for the realtime render.

    To be honest, it was a bit underwhelming -- they really overhyped it; it did not look like the movie by any stretch of the imagination. But clearly they learned a lot, and Cg is the fruit of that project. Whereas a hell of a lot more has been written in Renderman than in strange shader assembly languages (yes, I've been trying to learn these lately, for *really* strange reasons), Cg could have a pretty interesting impact on what we see out of games.

    A couple people have talked about Cg on non-nVidia cards. Don't worry. DirectX shaders are a standard; game authors don't need to worry about what card they're using, they simply declare the shader version they're operating against and the card can implement the rest using the open spec. So something compiled to shader language from Cg will work on all compliant cards.

    Hopefully this helps?

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
  • by 91degrees ( 207121 ) on Thursday June 13, 2002 @03:51PM (#3696310) Journal
    Okay - a basic OpenGL or D3D command will send a set of vertices to the card. Each vertex will contain position, colour information, a normal vector, and other bits and pieces. The card will transform these vertices, convert them to triangles, apply colour and several textures, and output to screen.

    A vertex shader will take the vertices before they are transformed, and apply a series of operations on the data inside these vertices. This allows certain clever lighting effects, and nice ripple patterns to be described algorithmically.
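    A hedged sketch of that idea (hypothetical Cg-style code, not from the parent post): displace each vertex along its normal with a sine wave before the usual transform, and the ripple is described purely algorithmically.

        // Hypothetical vertex program: sine-wave ripple applied before projection.
        struct VertOut {
            float4 position : POSITION;
            float4 color    : COLOR0;
        };

        VertOut main(float4 position : POSITION,
                     float3 normal   : NORMAL,
                     float4 color    : COLOR0,
                     uniform float4x4 modelViewProj,
                     uniform float    time,
                     uniform float    amplitude)
        {
            VertOut OUT;
            float wave = sin(position.x * 4.0 + time) * amplitude;  // frequency is illustrative
            float4 displaced = position + float4(normal * wave, 0.0);
            OUT.position = mul(modelViewProj, displaced);
            OUT.color    = color;
            return OUT;
        }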

    The vertices are then converted to triangles as before.

    Then the pixel shader is used. Modern applications use several layers of textures. Often, we'll see a texture for the colour, another one giving a bumpmap, another giving a reflection map. These can be combined in a number of different ways. A pixel shader determines how these textures are applied and combined. A good pixel shader will allow a texture to be defined algorithmically. This looks better than a normal texture map at very large zoom levels. Ken Perlin has done a lot of work on this. Look at his site [noisemachine.com] to see what results you can get. Pixel shaders are getting there, but haven't quite made it.

    In practice, all vertex shader operations can be done by the CPU, but this tends to be a bit slower. Pixel shader operations are still at an early stage on current graphics chips, but are getting better. The early Nvidia pixel shaders were no better than the normal texture combiners, but pixel shaders in general are getting more flexible.
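
    To make the texture-combination step concrete, here is a hedged Cg-style sketch (hypothetical names, not from the parent post): a fragment program that blends a colour map with a reflection (environment) map, the kind of layering the fixed-function combiners used to handle.

        // Hypothetical fragment program: colour texture blended with a cube-map reflection.
        float4 main(float2 uv      : TEXCOORD0,
                    float3 reflDir : TEXCOORD1,
                    uniform sampler2D   baseMap,
                    uniform samplerCUBE envMap,
                    uniform float       reflectivity) : COLOR
        {
            float4 base = tex2D(baseMap, uv);
            float4 refl = texCUBE(envMap, reflDir);
            return lerp(base, refl, reflectivity);   // weighted blend of the two layers
        }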
  • Comment removed (Score:4, Informative)

    by account_deleted ( 4530225 ) on Thursday June 13, 2002 @04:08PM (#3696448)
    Comment removed based on user account deletion
  • by Viking Coder ( 102287 ) on Thursday June 13, 2002 @04:58PM (#3696883)
    Yes, I have read the article.

    It does not have cross-platform support. It is not hardware independent. They say that other vendors will be able to support it; they don't say that it's a free or open standard.

    Think about the compiler part of it, for a second. So what if the compiler supports multiple targets? Each compiled program will only be able to run on that one platform! Does that sound like OpenGL to you? Even if they allow a mechanism where the code can be targeted to multiple platforms in one executable, they're still making that decision at compile time, as opposed to at runtime, like OpenGL 2.0. That means an executable today will be able to run on future hardware. Not true with Cg. Also, the compiler they talk about in Cg is an NVidia product. They're giving it away like free beer, not like free speech. In order for any given company to have Cg targeted to their platform, they'll need to go through NVidia to make it happen. Doesn't this scare you?

    Other video card manufacturers can not write their own compilers. The intended method is for other manufacturers to provide new "profiles" (eg fp20, vp20, dx8vs, dx8ps) which will be integrated into the one and only Cg compiler, which NVidia controls.

    That's how it locks people in to NVidia.

    I'm not talking about ATI. I'm talking about 3dlabs, the people who created the OpenGL 2.0 standard.

    I agree that it's to NVidia's advantage to release their hardware sooner rather than later, and that the OpenGL 2.0 standard won't be a standard for some time to come. But NVidia could put their weight behind it, or they could write their own thing. They chose to abandon OpenGL 2.0. Entirely. And they're hoping everyone else will, too.

    The Cg language is different from the OpenGL 2.0 shader and vertex language. They're different, but they do the same thing, essentially. To rephrase your question, perhaps someone will be able to provide a translator from Cg to OpenGL 2.0 and vice-versa. Just as people have created a layer that makes DirectX work on top of OpenGL.

    Is there really a question in your mind about whether OpenGL is a better standard that we can all live with than DirectX?

    The obvious objection is that DirectX has more features than OpenGL. Well, that's why OpenGL 2.0 is a good thing.

    Throwing Cg into the mix doesn't make OpenGL 2.0 any less of a good thing.

    I'm pissed at NVidia for deciding to go with a closed standard, rather than an open standard. What else is new?
  • Re:What Cg means (Score:4, Informative)

    by Effugas ( 2378 ) on Thursday June 13, 2002 @05:24PM (#3697043) Homepage
    Depends on your approach. Klaus Engel and company are doing all sorts of interesting isosurface and direct volume rendering, in realtime, of really high-detail models -- check out:

    http://wwwvis.informatik.uni-stuttgart.de/~engel/pre-integrated/

    That's being done in realtime. They're doing more than just slapping slices on top of each other and letting 'em alpha blend. :-)

    --Dan
