NVIDIA's Pixel & Vertex Shading Language
Barkhausen Criterion writes "NVIDIA has announced a high-level pixel and vertex shading language developed in conjunction with Microsoft. According to this initial look, the 'Cg Compiler' compiles the high-level pixel and vertex shader language into low-level DirectX and OpenGL code. While the press releases are running amok, CG Channel (Computer Graphics Channel) has the most comprehensive look at the technology. The article writes, 'Putting on my speculative hat, the motivation is to drive hardware sales by increasing the prevalence of pixel- and vertex-shader-enabled applications and gaming titles. This would be accomplished by creating a forward-compatible tool for developers to fully utilize the advanced features of current GPUs, and future GPUs/VPUs.'"
Re:Linux Support (Score:3, Informative)
My only problem is that the toolkit itself is only for Windows.
Has anyone tried it with Wine/WineX yet? I might when I get home.
Derek
This could be really good. (Score:3, Informative)
Basically this is a wrapper for the assembly that you would otherwise have to write for a shader program. It compiles a C-like (as in lookalike) language into either the DirectX shader program or the OpenGL shader program. So you'll need a compiler for each API that you want to support, which means a different compiler for OpenGL/NVidia and OpenGL/ATI until they standardize it.
On a more technical note, the lack of branching in vertex/pixel shaders really needs to be fixed; it's the one feature they still need. It's also why Cg code looks so strange: it's C, but there are no loops.
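To make the no-loops point concrete, here's a rough CPU-side sketch in Python (the names `unroll`, `emit`, and the register/instruction mnemonics are made up for illustration, not the actual Cg toolchain) of what a compiler for branch-free hardware has to do with a loop: fully unroll it at compile time into straight-line instructions.

```python
# Sketch: hardware with no branch instructions forces the compiler to
# unroll every loop at compile time. The mnemonics below are illustrative.

def unroll(num_lights):
    """Emit straight-line 'instructions' for a fixed number of lights."""
    program = []
    for i in range(num_lights):                        # runs at *compile* time
        program.append(f"dp3 r0, normal, light{i}")    # per-light dot product
        program.append(f"mad r1, r0, color{i}, r1")    # accumulate lighting
    return program

# A two-light 'loop' becomes four branch-free instructions:
print(unroll(2))
```

The loop exists only in the compiler; the emitted program is straight-line code, which is exactly why the source language can't offer data-dependent loops.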
Re:Pixel and Vertex Shading and OpenGL2.0? (Score:3, Informative)
In Fact...... (Score:3, Informative)
"NVIDIA's Cg Compiler is also cross platform, supporting programs written for Windows®, OS X, Linux, Mac and Xbox®."
So even though the tools aren't cross-platform, the compiler is. I think this is a great step forward towards OpenGL 2.0; it shows that Windows doesn't have to be the only platform to write graphically intensive applications for.
Derek
Re:3dfx/Glide part 2? (Score:4, Informative)
That was the first thing that popped into my head when I read this article, but it sounds like they're going to give open access to the standards, just not to the interface with their chips.
Re:No loops? (Score:2, Informative)
-GameMaster
What Cg means (Score:5, Informative)
Modern NVidia (and ATI) GPUs can execute decently complex instruction sequences on the polygons they're set to render, as well as on the actual pixels rendered, either direct to screen or on the texture placed on a particular poly. The idea is to run your code as close to the actual rendering as possible: you've got massive logic being deployed to quickly convert your datasets into some lit scene from a given perspective; might as well run a few custom instructions while we're in there.
There's a shit-ton of flexibility lost -- you can't throw a P4 into the middle of a rendering pipeline -- but in return, you get to stream the massive amounts of data that the GPU has computed in hardware through your own custom-designed "software" filter, all within the video card.
For practical applications, some of the best work I've seen with realtime hair uses vertex shaders to smoothly deform straight lines into flowing, flexible segments. From pixel shaders, we're starting to see volume rendering of actual MRI data that used to take quite some time to calculate instead happening *in realtime*.
It's a bit creepy to see a person's head, hit C, and immediately a clip plane slices the top of the guy's scalp off and you're lookin' at a brain.
Now, these shaders are powerful, but by nature of where they're deployed, they're quite limited. You've got maybe a couple dozen assembly instructions that implement "useful" features -- dot products, reciprocal square roots, adds, multiplies, all in the register domain. It's not a general purpose instruction set, and you can't use it all you like: There's a fixed limit as to how many instructions you may use within a given shader, and though it varies between the two types, you've only got space for a couple dozen.
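To give a feel for that register-domain instruction set, here's a CPU-side sketch in Python of per-vertex diffuse lighting built only from the operations listed above (dot product, reciprocal square root, multiply/add). The function names mirror assembly mnemonics, but the code is purely illustrative, not any card's actual instruction set.

```python
# Sketch of shader-style math: everything is built from a handful of
# register ops -- dot product (dp3), reciprocal square root (rsq),
# multiplies and adds. Purely illustrative.
import math

def dp3(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def rsq(x):
    return 1.0 / math.sqrt(x)            # the 'rsq' instruction

def diffuse(normal, light_dir):
    # Normalize light_dir using dp3 + rsq, the way a shader would.
    inv_len = rsq(dp3(light_dir, light_dir))
    l = [c * inv_len for c in light_dir]
    return max(0.0, dp3(normal, l))      # clamp negative values to zero

print(diffuse([0.0, 1.0, 0.0], [0.0, 2.0, 0.0]))  # light straight above -> 1.0
```

A real shader gets only a couple dozen such instructions per program, so even a tiny function like this eats a noticeable fraction of the budget.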
If you know anything about compilers, you know that they're not particularly well known for packing the most power per instruction. Though there's been some support for a while for dynamically adjusting shaders according to required features, those tools have been more assembly-packing toolkits than true compilers.
Cg appears different. If you didn't notice, Cg bears more than a passing resemblance to Renderman, the industry standard language for expressing how a material should react to being hit with a light source. (I'm oversimplifying horrifically, but heh.) Renderman surfaces are historically done in software *very, very* slowly -- this is a language optimized for the transformation of useful mathematical algorithms into something you can texture your polys with...speed isn't the concern, quality above all else is.
Last year, NVidia demonstrated rendering the Final Fantasy movie, in realtime, on their highest end card at the time. They hadn't just taken the scene data, reduced the density by an order of magnitude, and spit the polys on screen. They actually managed to compile a number of the Renderman shaders into the assembly language their cards could understand, and ran them for the realtime render.
To be honest, it was a bit underwhelming -- they really overhyped it; it did not look like the movie by any stretch of the imagination. But clearly they learned a lot, and Cg is the fruit of that project. Since a hell of a lot more has been written in Renderman than in strange shader assembly languages (yes, I've been trying to learn these lately, for *really* strange reasons), Cg could have a pretty interesting impact on what we see out of games.
A couple people have talked about Cg on non-nVidia cards. Don't worry. DirectX shaders are a standard; game authors don't need to worry about what card they're using, they simply declare the shader version they're operating against and the card can implement the rest using the open spec. So something compiled to shader language from Cg will work on all compliant cards.
Hopefully this helps?
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Re:Just what are shaders? (Score:2, Informative)
A vertex shader will take the vertices before they are transformed, and apply a series of operations on the data inside these vertices. This allows certain clever lighting effects, and nice ripple patterns to be described algorithmically.
The vertices are then converted to triangles as before.
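As a concrete illustration of describing a ripple algorithmically, here's a CPU-side Python sketch of a vertex-shader-style displacement; the wave parameters and names are invented for illustration, not from any real shader.

```python
# Sketch of a vertex-shader-style ripple: displace each vertex along y
# by a sine of its distance from the origin, before the usual transform
# to screen space. Amplitude/frequency values are made up.
import math

def ripple(vertex, time, amplitude=0.1, frequency=8.0):
    x, y, z = vertex
    r = math.sqrt(x*x + z*z)                       # distance from mesh center
    y += amplitude * math.sin(frequency * r - time)
    return (x, y, z)

# A flat 5x5 grid of vertices becomes a wavy surface:
flat_grid = [(x * 0.25, 0.0, z * 0.25) for x in range(5) for z in range(5)]
wavy_grid = [ripple(v, time=0.0) for v in flat_grid]
```

Because the displacement is a pure function of each vertex plus a time value, the GPU can evaluate it independently per vertex, which is exactly the execution model a vertex shader assumes.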
Then the pixel shader is used. Modern applications use several layers of textures. Often we'll see a texture for the colour, another giving a bumpmap, and another giving a reflection map. These can be combined in a number of different ways; a pixel shader determines how these textures are applied and combined. A good pixel shader will also allow a texture to be defined algorithmically, which looks better than a normal texture map at very large zoom levels. Ken Perlin has done a lot of work on this. Look at his site [noisemachine.com] to see what results you can get. Pixel shaders are getting there, but haven't quite made it.
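The multi-texture combine described above can be sketched on the CPU in Python; the blend rule and the `reflectivity` weight here are made up for illustration, not taken from any real shader.

```python
# Sketch of a per-pixel combine: modulate the colour texture sample by a
# lighting term derived from the bump map, then add a scaled reflection
# sample, clamping each channel to 1.0. Weights are illustrative.

def combine(color, bump_light, reflection, reflectivity=0.25):
    """color/reflection are (r, g, b) samples; bump_light is a 0..1 scalar."""
    return tuple(min(1.0, c * bump_light + r * reflectivity)
                 for c, r in zip(color, reflection))

# One pixel: a reddish surface, half-lit per the bump map, white reflection.
pixel = combine(color=(0.8, 0.2, 0.2), bump_light=0.5, reflection=(1.0, 1.0, 1.0))
print(pixel)
```

A pixel shader does essentially this arithmetic, but in hardware and for every pixel of every frame, with the combine rule chosen by the programmer instead of a fixed-function blend.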
In practice, all vertex shader operations can be done by the CPU, but this tends to be a bit slower. Pixel shader operations are still at an early stage on current graphics chips, but are getting better. The early Nvidia pixel shaders were no better than the normal texture combiners, but pixel shaders in general are getting more flexible.
Comment removed (Score:4, Informative)
Re:Pixel and Vertex Shading and OpenGL2.0? (Score:2, Informative)
It does not have cross-platform support. It is not hardware independent. They say that other vendors will be able to support it; they don't say that it's a free or open standard.
Think about the compiler part of it for a second. So what if the compiler supports multiple targets? Each compiled program will only be able to run on that one platform! Does that sound like OpenGL to you? Even if they allow a mechanism where the code can be targeted to multiple platforms in one executable, they're still making that decision at compile time, as opposed to at runtime like OpenGL 2.0. With OpenGL 2.0, an executable built today will be able to run on future hardware; that's not true with Cg. Also, the compiler they talk about is an NVidia product. They're giving it away like free beer, not like free speech. For any given company to have Cg targeted to their platform, they'll need to go through NVidia to make it happen. Doesn't this scare you?
Other video card manufacturers cannot write their own compilers. The intended method is for other manufacturers to provide new "profiles" (e.g. fp20, vp20, dx8vs, dx8ps), which will be integrated into the one and only Cg compiler, which NVidia controls.
That's how it locks people in to NVidia.
I'm not talking about ATI. I'm talking about 3dlabs, the people who created the OpenGL 2.0 standard.
I agree that it's to NVidia's advantage to release their hardware sooner rather than later, and that the OpenGL 2.0 standard won't be a standard for some time to come. But NVidia could put their weight behind it, or they could write their own thing. They chose to abandon OpenGL 2.0. Entirely. And they're hoping everyone else will, too.
The Cg language is different from the OpenGL 2.0 shading language. They're different, but they do essentially the same thing. To rephrase your question: perhaps someone will be able to provide a translator from Cg to OpenGL 2.0 and vice versa, just as people have created a layer that makes DirectX work on top of OpenGL.
Is there really a question in your mind about whether OpenGL is a better standard that we can all live with than DirectX?
The main possible objection is that DirectX has more features than OpenGL. Well, that's why OpenGL 2.0 is a good thing.
Throwing Cg into the mix doesn't make OpenGL 2.0 any less of a good thing.
I'm pissed at NVidia for deciding to go with a closed standard, rather than an open standard. What else is new?
Re:What Cg means (Score:4, Informative)
http://wwwvis.informatik.uni-stuttgart.de/~enge
That's being done in realtime. They're doing more than just slapping slices on top of each other and letting 'em alpha blend.
--Dan