Technology

NVIDIA's Pixel & Vertex Shading Language

Barkhausen Criterion writes "NVIDIA has announced a high-level Pixel and Vertex Shading language developed in conjunction with Microsoft. According to this initial look, the 'Cg Compiler' compiles the high-level Pixel and Vertex Shader language into low-level DirectX and OpenGL code. While the press releases are running amok, CG Channel (Computer Graphics Channel) has the most comprehensive look at the technology. The article says, 'Putting on my speculative hat, the motivation is to drive hardware sales by increasing the prevalence of Pixel and Vertex Shader-enabled applications and gaming titles. This would be accomplished by creating a forward-compatible tool for developers to fully utilize the advanced features of current GPUs, and future GPUs/VPUs.'"
This discussion has been archived. No new comments can be posted.

  • 3dfx/Glide part 2? (Score:2, Interesting)

    by PackMan97 ( 244419 ) on Thursday June 13, 2002 @02:29PM (#3695599)
    Hopefully NVidia will be able to avoid the proprietary pitfall that ultimately doomed 3dfx and Glide.

    From the story, it sounds like NVidia will allow other cards to support Cg, so maybe they can. However, I wonder whether ATI will be willing to support a standard that NVidia controls. It's like wrestling with a crocodile, if you ask me!
  • by UnknownSoldier ( 67820 ) on Thursday June 13, 2002 @02:54PM (#3695829)
    > You've got to wonder, is this yet another load of Nvidia corporate hype (a la "HW TnL will revolutionise gaming")

    Have you *even* done *any* 3D graphics programming?? HW TnL *offloads* work from the CPU to the GPU. The CPU is then free to spend its time on other things, such as AI. Work smarter, not harder.

    I'm not sure what kind of revolution you were expecting. Maybe you were expecting a much higher poly count with HW TnL, like 10x. The point is, we did get faster processing. Software TnL just isn't going to reach the high poly counts we're starting to see in today's games with HW TnL.

    > it takes a few months till anyone really knows if this is good or bad.

    If it means you don't have to waste your time writing *two* shaders (one for DX, another for OpenGL), then that is a GOOD THING. (For a rough idea of what a single Cg shader looks like, see the sketch below.)
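    For illustration, here is a minimal sketch of what a Cg vertex shader looks like. It follows NVIDIA's published C-like syntax, but the struct and parameter names (appdata, vfconn, ModelViewProj) are illustrative, not taken from the article:

        // Minimal Cg-style vertex shader: transform the vertex into clip
        // space and pass its colour through. The same source feeds both
        // the DirectX and OpenGL back ends of the Cg compiler.
        struct appdata {
            float4 position : POSITION;
            float4 color    : COLOR0;
        };

        struct vfconn {
            float4 HPos : POSITION;
            float4 Col0 : COLOR0;
        };

        vfconn main(appdata IN, uniform float4x4 ModelViewProj)
        {
            vfconn OUT;
            OUT.HPos = mul(ModelViewProj, IN.position);  // clip-space position
            OUT.Col0 = IN.color;                         // pass-through colour
            return OUT;
        }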
  • Inefficiencies (Score:5, Interesting)

    by Have Blue ( 616 ) on Thursday June 13, 2002 @02:57PM (#3695856) Homepage
    One of these days, nVidia will ship a GPU whose functionality is a proper superset of that of a traditional CPU, and then we can ditch the CPU entirely. Just like MMX, but backwards. This is a recognized law of engineering [tuxedo.org]. At that point, Cg will have to become a "real" compiler. Let's hope nVidia is up to the task...
  • by mysticbob ( 21980 ) on Thursday June 13, 2002 @03:04PM (#3695920)
    There's already lots of other shading work out there; NVIDIA is hardly first. At least two other hardware shading languages exist. These languages allow C-like coding and convert it into platform-specific code. Unfortunately, none of the things being marketed here, or now by NVIDIA, are really cross-platform. References: of course, the progenitor of all of these, conceptually, is RenderMan's shading language. [pixar.com]

    Hopefully, OpenGL 2.0's [3dlabs.com] shading will become standard and mitigate the cross-platform differences. It seems a much better option than this new thing from NVIDIA, but we'll have to wait and see what does well in the marketplace, and with developers.

  • Re:Inefficiencies (Score:3, Interesting)

    by doop ( 57132 ) on Thursday June 13, 2002 @03:28PM (#3696127)
    They're already heading that way; the Register [theregister.co.uk] had an article [theregister.co.uk] describing some work [stanford.edu] being done to do general raycasting in hardware. I guess it's heading towards turning graphics cards into boards full of many highly parallel mini-CPUs, since vertex and/or pixel shading are rather parallelizable in comparison to other things the main CPU might be doing. Of course, OpenGL is already a sufficiently versatile system that one can implement [geocities.com] Conway's Life using the stencil buffer [cc.fer.hr], so for a sufficiently large buffer, you could implement a Turing machine [rendell.uk.co]; I don't know how much (if any) acceleration you'd get out of the hardware, though.
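    As a rough illustration of running Life on the GPU, here is a fragment-shader take (rather than the stencil-buffer trick linked above); the parameter names are made up for the sketch, and it assumes a ps_2_0-class profile:

        // Hypothetical Cg fragment shader computing one Game of Life step.
        // The current generation lives in a texture, one texel per cell
        // (1.0 = alive, 0.0 = dead); render a full-screen quad into the
        // next-generation texture to advance the simulation.
        float4 main(float2 uv : TEXCOORD0,
                    uniform sampler2D state,  // current generation
                    uniform float2 texel      // size of one texel in UV space
                   ) : COLOR
        {
            float n = 0.0;  // live-neighbour count
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    if (dx != 0 || dy != 0)
                        n += tex2D(state, uv + float2(dx, dy) * texel).r;

            float alive = tex2D(state, uv).r;
            // Conway's rule: born with exactly 3 neighbours, survives with 2 or 3.
            float next = (n == 3.0 || (alive > 0.5 && n == 2.0)) ? 1.0 : 0.0;
            return float4(next, next, next, 1.0);
        }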
  • Re:Analogy (Score:3, Interesting)

    by ThrasherTT ( 87841 ) <thrasherNO@SPAMdeathmatch.net> on Thursday June 13, 2002 @04:01PM (#3696392) Homepage Journal
    You missed the main point of Cg:

    Vertex Shader ASM is hard(er than Cg)
    Pixel Shader ASM is hard(er than Cg)

    My understanding of Cg is that it'll be used as a shader replacement, NOT an OpenGL replacement. You'll still have to write tons and tons of OGL. Now you can just simplify the SHADER part. (For a taste of how much harder the raw ASM is, see the sketch below.)
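    To make the "hard(er)" concrete, here is a sketch of the DirectX 8 vertex-shader assembly a basic transform takes, versus one line of Cg. The register assignments are illustrative; c0-c3 are assumed to hold the rows of the model-view-projection matrix:

        ; One Cg line:
        ;     OUT.HPos = mul(ModelViewProj, IN.position);
        ; versus the equivalent hand-written vs.1.1 assembly:
        vs.1.1
        dp4 oPos.x, v0, c0   ; dot position with matrix row 0
        dp4 oPos.y, v0, c1   ; row 1
        dp4 oPos.z, v0, c2   ; row 2
        dp4 oPos.w, v0, c3   ; row 3
        mov oD0, v5          ; pass diffuse colour straight through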
  • by Steveftoth ( 78419 ) on Thursday June 13, 2002 @04:27PM (#3696587) Homepage
    If the CPU had the amount of bandwidth that the average graphics card has, then it would.
    Also, don't forget that a GPU has more transistors than the average CPU these days.
    The VGA -> CPU interface was SLOWWWWW. In fact, it's still slow; that's why AGP (8X) was invented, and even that is slow. Graphics cards have wider buses, and are designed to push data to the DAC.
    All you need is more bandwidth for the CPU and you're set.
  • Re:Like C? (Score:2, Interesting)

    by GameMaster ( 148118 ) on Thursday June 13, 2002 @05:04PM (#3696929)
    One of the things a lot of people seem to be missing is that this is a special language used only for shader design. Vertex shaders are a lot like inline assembly functions in C or C++. They are small pieces of code that perform a short string of operations over and over again in hardware.

    It makes a lot of sense to base it on C for a number of reasons. First, most game programmers are familiar with C or C++. Second, and more important, there are extreme limitations on the size of shaders. Vertex shaders have a limit of 128 ops on a GeForce card. That's just base ops, and the budget can go away real fast when you use macro commands (which are composites of multiple ops), as are most likely available in Cg (see the expansion sketch after this comment). Future cards will, no doubt, increase the number of ops allowed per shader, but it will be a while before we see shaders large enough to find any use for OOP features. If we do reach a point where some OOP features would be handy, then I'm sure they could add basic OOP functionality similar to C++.

    -GameMaster
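    As a sketch of how that 128-op budget disappears (an illustrative expansion; actual compiler output will differ), a single Cg library call can cost several ops:

        ; One Cg line:   float3 n = normalize(IN.normal);
        ; might expand to three vertex-shader ops:
        dp3 r0.w, v3, v3      ; w = N . N
        rsq r0.w, r0.w        ; w = 1 / sqrt(N . N)
        mul r0.xyz, v3, r0.w  ; n = N * (1 / |N|)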
