Technology

NVIDIA's Pixel & Vertex Shading Language

Barkhausen Criterion writes "NVIDIA have announced a high-level Pixel and Vertex Shading language developed in conjunction with Microsoft. According to this initial look, the "Cg Compiler" compiles the high-level Pixel and Vertex Shader language into low-level DirectX and OpenGL code. While the press releases are running amok, CG Channel (Computer Graphics Channel) has the most comprehensive look at the technology. The article writes, "Putting on my speculative hat, the motivation is to drive hardware sales by increasing the prevalence of Pixel and Vertex Shader-enabled applications and gaming titles. This would be accomplished by creating a forward-compatible tool for developers to fully utilize the advanced features of current GPUs, and future GPUs/VPUs.""
This discussion has been archived. No new comments can be posted.

  • 3dfx/Glide part 2? (Score:2, Interesting)

    by PackMan97 ( 244419 )
    Hopefully NVidia will be able to avoid the proprietary pitfall that ultimately doomed 3dfx and Glide.

    From the story it sounds like NVidia will allow other cards to support Cg, so maybe they can. However, I wonder if ATI will be willing to support a standard which NVidia controls. It's like wrestling with a crocodile if you ask me!
    • "Hopefully NVidia will be able to avoid the proprietary pitfall that ultimately doomed 3dfx and Glide."

      NVidia is avoiding at least one 3dfx pitfall ... their product is not simply a beefier version of the previous product (which in itself was a beefier version of the previous product (which in itself was a beefier version of the previous product (which in itself was a beefier version ... )))

    • Hopefully NVidia will be able to avoid the proprietary pitfall that ultimately doomed 3dfx and Glide.

      Given that it was nVidia that purchased 3dfx and brought along many of the employees, I should hope that there would be some internal discussion of this.

    • > like NVidia will allow other cards to support Cg so maybe they can

      Um, well, Cg gets compiled to DirectX or OpenGL, so it follows that any card that can do DirectX or OpenGL (read: all of them?) will benefit from Cg. I guess different cards will get different levels of support, but if they want this to fly, it'd be in their best interest to generate multi-card compatible code. Or at least allow you to specify what extensions your generated code will support, to tailor it to specific card feature sets? Correct me if I'm confused, if anyone is really in the know.

      I think the idea here is that you could use this language to write new shaders for cards on the market _now_ ... the gain is that it supports two targets (DX and OpenGL) and seems a significantly easier way of incorporating new shaders into games than current methods?
      • There's DirectX, and there's DirectX 8.1, oh, and DirectX 8.1a.
        Remember when the Radeon first came out? They had to release a special DirectX just to support its pixel shaders, as opposed to just Nvidia's.
        So as a game developer you'll probably have to compile your Cg code with the Nvidia compiler and the ATI one just to make it work (better).
        This tool will really help those XBox developers.
        Same thing with OpenGL: since the spec isn't nailed down yet, and with Nvidia 'leading the pack' of development, it wouldn't surprise me if they decided not to support any other cards with the OpenGL compiler (which they haven't even released yet).
        So hopefully this will NOT turn into a Glide-type issue, since this is actually a level above Glide. Glide was very low level; the Glide functions mostly mapped directly onto the 3dfx hardware, while this is a little bit more abstract.
    • by Dark Nexus ( 172808 ) on Thursday June 13, 2002 @02:47PM (#3695774)
      Well, they're quoted in this [com.com] article on ZDNet (the quote is near the bottom) as saying that they're going to release the language base so other chip makers can write their own compilers for their products.

      That was the first thing that popped into my head when I read this article, but it sounds like they're going to give open access to the standards, just not to the interface with their chips.
  • News.com [com.com] had this story for a while.

    My biggest question - from reading this, it sounds like it would actually work correctly on other competing video cards... so why did nVidia create it?
    • If they didn't, ATI would.
    • 'Cause 3Dlabs [3dlabs.com] already created it ... see their OpenGL 2.0 proposal [3dlabs.com]. My fear is this is _only_ a preemptive strike against the "full programmability" of the 3dlabs P10, which will sooner or later have a consumer version. High-level, interoperable programmability is very much needed to access the power of current and more complicated future cards.
      • OpenGL 2.0 (Score:3, Insightful)

        IMHO, OpenGL 2.0 is more portable, less NVidia-specific and backed by more manufacturers. Cg is a ripoff of OpenGL 2.0's design, in a cheap attempt to turn it into a NVidia/Microsoft controlled standard.

        Remember, NVidia may be good now, but they got where they are by being competitive and overturning the old-guard 3D guys (like 3DFX, who were themselves trying to lock the industry into APIs they controlled).

        Competition=good.
        Single-vendor-controlled APIs=bad.
        OpenGL2.0=good.

        Now, I like my NVidia hardware as much as the next guy, but I fear lock-in. Seems like most of us have already experienced the downsides of lock-in.

        Yes, NVidia is talking up the buzzwords "portable" and "vendor-neutral", but if that's what they were after, they wouldn't have created Cg at all; they would have gone with the already-available open standard, OpenGL 2.0. This is embrace, extend and extinguish.
  • Remember 3dfx's GLIDE libraries? This could end up like those... an "industry standard" supported only by one manufacturer's chipsets, used by all major games. At least 3dfx made good, cheap cards before they died, though.

    If it doesn't work with my RADEON, it must be evil!
  • If we wanted cutting edge Pixels, we'd go back and play Wolfenstein. Man, I remember those days, the people with sharp features and a whole four frames of animation. And we were glad to have it, too.
  • by moonbender ( 547943 ) <moonbenderNO@SPAMgmail.com> on Thursday June 13, 2002 @02:31PM (#3695619)
    You've got to wonder: is this yet another load of Nvidia corporate hype (a la "HW TnL will revolutionise gaming"), or is this useful technology? I wouldn't trust any of the current articles to answer that; judging by the previous Nvidia hypes, it takes a few months until anyone really knows whether this is good or bad.
    • Ummm.... T&L DID revolutionize the game industry. We are at a point where companies don't have to worry about pushing polygons. Now they are finally moving on to actually improving visual quality, as opposed to geometric complexity. Have you seen what id has been up to lately?
      • by Pulzar ( 81031 )
        Have you seen what ID has been up to lately?

        Have you read about how much effort JC has put into pushing polygons in Doom 3? We're hardly at a point where companies don't have to worry about speed issues..

        If anything, companies have to put in even more effort into producing some stunning results, because everybody has been spoiled by recent titles.
        • This piqued my curiosity.

          Assumptions:

          Capacity of 1 CD - 650,000,000 bytes
          # of CDs that will fit into 1 cu ft comfortably - 400
          Cargo space on a 747 - 6,190 cu ft

          plus - you'll have enough CD-ROM drives available on either end to write 2.5 million CDs in 1 hour, and read 2.5 million CDs in 1/2 hour (about 125,000 drives - not an impossible number)

          plus - the CDs on the shipping end are packed as they are burned, adding little or no time to the process of loading - same goes for receiving end. This assumes you have about 25,000 hard workers.

          The flight time from Paris to New York is 6 1/2 hours. Writing/packing is 1 hour. Reading/unloading is 1/2 hour. Total time: 8 hours. Total bits transferred: 12,875,200,000,000,000 bits, or 12,875 terabits (1,609 terabytes).

          Resulting bandwidth: 447 Gb/sec, or 56 GB/sec.

          New York to Miami would be 894 Gb/sec, or 112 GB/sec.

          Sounds impressive, but you might want to reconsider laying fiber considering the impossible costs and logistics...
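
          For anyone who wants to fiddle with the assumptions, here's the same arithmetic as a quick C sketch (the constants are just the ones assumed above, nothing authoritative):

              /* Sneakernet bandwidth of a 747 full of CDs, using the assumptions above. */
              #include <stdio.h>

              int main(void)
              {
                  double bytes_per_cd = 650e6;            /* capacity of one CD */
                  double cds_per_cuft = 400.0;            /* packed "comfortably" */
                  double cargo_cuft   = 6190.0;           /* assumed 747 cargo space */
                  double hours_total  = 6.5 + 1.0 + 0.5;  /* flight + write/pack + read/unload */

                  double cds  = cds_per_cuft * cargo_cuft;    /* ~2.48 million discs */
                  double bits = cds * bytes_per_cd * 8.0;     /* total payload in bits */
                  double gbps = bits / (hours_total * 3600.0) / 1e9;

                  printf("discs: %.0f  payload: %.0f terabits  bandwidth: %.0f Gb/s (%.0f GB/s)\n",
                         cds, bits / 1e12, gbps, gbps / 8.0);
                  return 0;
              }

          Drop hours_total to 4 (roughly the New York to Miami case) and you get the 894 Gb/s figure.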
          • Cargo space on a 747 - 6,190 cu ft

            It can only carry the equivalent of one 18' cube? (Oh, I see where you got your numbers. Why are you assuming a plane with passengers? Cargo versions are closer to 25,000)

            If you're packing CDs solid, then you'll hit the 113,000 Kg limit long before any volume limit anyway.

            One offhand reference [dualplover.com] suggests 9kg/500 cds, or around 6.3 million discs.

          • You need to add one more assumption to your assessment: maximum takeoff weight. When I did this thought experiment a long time ago, assuming a 747-400 freighter full of DAT tapes, it was apparent that you could only use about 60% of the available cargo space and still have a plane that could get off the ground.

    • > You've got to wonder, is this yet another load of Nvidia corporate hype (a la "HW TnL will revolutionise gaming")

      Have you *even* done *any* 3d graphics programming?? HW TnL *offloads* work from the CPU to the GPU. The CPU is then free to spend on other things, such as AI. Work smarter, not harder.

      I'm not sure what type of revolution you were expecting. Maybe you were expecting a much higher poly count with HW TnL, like 10x. The point is, we did get faster processing. Software TnL just isn't going to get the same high poly count that we're starting to see in today's games with HW TnL.

      > it takes a few months till anyone really knows if this is good or bad.

      If it means you don't have to waste your time writing *two* shaders (one for DX, and other for OpenGL) then that is a GOOD THING.
      • by friedmud ( 512466 ) on Thursday June 13, 2002 @03:06PM (#3695928)
        "If it means you don't have to waste your time writing *two* shaders (one for DX, and other for OpenGL) then that is a GOOD THING."

        Even better than that! It means you don't have to waste your time writing *4* shaders:

        Nvidia/DirectX
        Nvidia/OpenGL
        ATI/DirectX
        ATI/ OpenGL

        That is, of course, pending a compiler for ATI cards - but I don't think it will be long... Unless ATI holds out for OpenGL 2 - but between now and when OGL2 comes out there is a lot of time to lose market share to Nvidia, because people are writing all of their shaders in Cg - and ATI is getting left out in the rain....

        So I would expect ATI to jump on this bandwagon - and quick!

        Derek
  • While I think this is a great move by NVIDIA to increase the use of Pixel and Vertex Shaders in games, is this wholly proprietary? I mean, wouldn't it be better for ATI to have a hand in it as well, to work out a standard to make it easier for game developers? I just hope this doesn't turn out like 3dfx..
    • "While I this is a great move by NVIDIA to increase the use of Pixel and Vertex Shader in games, is this wholly proprietary?"

      Cg compiles down to OpenGL and DirectX statements, which are not proprietary. Some of the statements are recent extensions to support the kind of stuff they want to do. So, yes, other companies can support these as well. However, they might be following a target being moved around at will by Nvidia. "Oh, you don't support DirectX 9.001's new pixel puking extensions?"

      It remains to be seen how it's used. Obviously, Nvidia wants to use this to sell their cards. But MS doesn't have to listen to them when designing DirectX, either. It seems to me that at the very least, it'll be faster than writing separate old-school (last week) vertex and pixel shader code for each different brand.
  • The article writes, "Putting on my speculative hat, the motivation is to drive hardware sales by increasing the prevalence of Pixel and Vertex Shader-enabled applications and gaming titles. This would be accomplished by creating a forward-compatible tool for developers to fully utilize the advanced features of current GPUs, and future GPUs/VPUs."
    Putting on my speculative hat, the motivation is to, um, make better looking graphics?

    The real test will be how well the cross-compiler outputs OpenGL 2 & DX 9 shaders in practice, not theory.

    But let's be serious: cel shading is the only shading anyone really needs. ^^
  • I'm buying one right away, and praying that they become industry standard. The next "Amy men" game will be all the sweeter, along with Pac-man 3D!
    • The next "Amy men" game will be all the sweeter...


      Yes, I agree! We all like a good cross-dressing game for the early morning hours at lan parties!
  • by Steveftoth ( 78419 ) on Thursday June 13, 2002 @02:37PM (#3695680) Homepage
    According to the web site, they are working to implement this on top of both OpenGL and DirectX. On Linux and Mac as well.
    Basically this is a wrapper for the assembly that you would have to write if you were going to write a shader program. It compiles a C-like (as in look-alike) language into either the DirectX shader program or the OpenGL shader program. So you'll need a compiler for each and every API that you want to support, which means you'll need a different compiler for OpenGL/Nvidia and OpenGL/ATI until they standardize it.

    On a more technical note, the lack of branching in vertex/pixel shaders really needs to be fixed; it's really the only feature that still needs to be added. That's why the Cg code looks so strange: it's C, but there are no loops.
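
    To picture what "C, but with no loops" means in practice, here's a rough illustration in ordinary C (not actual Cg syntax, just the shape of the code a vertex shader has to be): a per-vertex transform written as straight-line vector math with everything unrolled.

        /* Illustrative only: plain C, not real Cg. The point is the shape of the
           code -- straight-line math on small vectors, with no loops or branches,
           which is roughly all that current vertex shader hardware can execute. */
        typedef struct { float x, y, z, w; } vec4;

        /* Transform a vertex by a 4x4 matrix given as four rows, fully unrolled. */
        vec4 transform_vertex(vec4 v, vec4 m0, vec4 m1, vec4 m2, vec4 m3)
        {
            vec4 out;
            out.x = m0.x * v.x + m0.y * v.y + m0.z * v.z + m0.w * v.w;
            out.y = m1.x * v.x + m1.y * v.y + m1.z * v.z + m1.w * v.w;
            out.z = m2.x * v.x + m2.y * v.y + m2.z * v.z + m2.w * v.w;
            out.w = m3.x * v.x + m3.y * v.y + m3.z * v.z + m3.w * v.w;
            return out;
        }

    A language like Cg can hide the unrolling behind vector and matrix types, but whatever it compiles to still has to fit in that fixed, loop-free instruction budget.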
  • by rob-fu ( 564277 ) on Thursday June 13, 2002 @02:38PM (#3695683)
    That's like asking which of the following would I rather do...

    a) have a 3-way with two hot chicks
    b) clean the floor behind my refrigerator

    I wonder.
  • I just bought my GeForce 4 TI4600. *sigh* Looks like I'll have to give my other ARM and LEG to pay for the upcoming GeForce 5.
    • This technology is compatible with your current card, is it not? My impression is that Cg simply makes it easier to generate the same OpenGL and DirectX code that games are feeding your GF4 with now. It's there to ease the work for the programmer and let folks concentrate more on the design of the shaders than on their in-code implementation.
    • Did you expect any different?

      And instead of shelling out $300 for your GF4 TI4600, you should have gotten a Gainward GF4 TI4200 for $150 (shipped, check pricewatch.com), especially considering they easily overclock to TI4600 speeds with no problem.

  • One has to wonder if this alliance grew out of the current relationship Nvidia and MS have through the Xbox.
    • Probably somewhat, to help developers make better looking graphics (or at least make them more easily) on the XBox.

      The article claims "cross platform compatibility" on Windows, Linux, Mac and the XBox.

      • What will happen is that Nvidia only develops the standard DirectX compilers and leaves it open for anyone else to do the others.

        This keeps Micro$oft happy as they will get the best drivers at the same time as not insulting the OpenGL community.

        They do NOT want every one else to turn their back on Nvidia.

        Although most of the customers use M$, most of the people that advise them don't. I've advised tons of people as to what they want in their PC, and I base the advice on how "nice" the company is.
  • This is good because now it will be easier to create cross-platform games, which means more games for Linux/Mac. That is, assuming I read it correctly.
  • I don't think it would be such a good idea if one vendor controlled the backends to the cards. I do dislike DirectX - not because it is a worse standard than OpenGL, but because it's not available on every platform. If Microsoft has a finger in the game it sure smells funny; that finger is up someone's butt if you ask me. Sure, it can generate OpenGL, but I would almost presume that the day it gets really widely used, it stops doing that, or does it in a less efficient way.

    Microsoft has never done anything without a hidden agenda (Microsoft Bob not included).

  • [BEGIN]
    SET_PIXELFORMAT(SHINY)
    ADD_BUMP_MAPS
    BLEND_REFLECTION
    SET_TRANSPARENCY(0.5)
    SET_TEXTURE("WALL")
    [END]

    [BEGIN]
    SET_PIXELFORMAT(WET)
    ADD_BUMP_MAPS
    BLEND_REFLECTION
    SET_TRANSPARENCY(0.3)
    SET_COLOR(BLUE)
    ADD_FISHIES(YELLOW)
    [END]
  • Inefficiencies (Score:5, Interesting)

    by Have Blue ( 616 ) on Thursday June 13, 2002 @02:57PM (#3695856) Homepage
    One of these days, nVidia will ship a GPU whose functionality is a proper superset of that of a traditional CPU, and then we can ditch the CPU entirely. Just like MMX, but backwards. This is a recognized law of engineering [tuxedo.org]. At that point, Cg will have to become a "real" compiler. Let's hope nVidia is up to the task...
    • Re:Inefficiencies (Score:3, Interesting)

      by doop ( 57132 )
      They're already heading that way; the Register [theregister.co.uk] had an article [theregister.co.uk] describing some work [stanford.edu] being done to do general raycasting in hardware. I guess it's heading towards turning graphics cards into boards full of many highly parallel mini-CPUs, since vertex and/or pixel shading are rather parallelizable in comparison to other things the main CPU might be doing. Of course, OpenGL is already a sufficiently versatile system that one can implement [geocities.com] Conway's Life using the stencil buffer [cc.fer.hr], so for a sufficiently large buffer, you could implement a Turing machine [rendell.uk.co]; I don't know how much (if any) acceleration you'd get out of the hardware, though.
    • Actually, they're more likely to focus on graphics, which gives game developers a sort of dual-processor system. If any other areas of game development were standardized and needed this kind of power (I can think of a few that could use it) they might get accelerators too...
  • Did everybody read the comparison between writing in CG and writing hand-optimized assembly code?

    Thank GOD they wrote CG, because now I won't have to write all of my programs in assembly anymore.

    What is this "compiler" technology that they keep talking about? This might revolutionize computer science!
    • I'm not sure if you're acting like a moron for the sake of a joke (I do that myself a lot :)) - but just in case you aren't:

      It's for programming vertex and pixel shaders. Up until now, all you had was this nearly-evil assembly code to program them with.

      It's not as if they've discovered something new, because they haven't. It's that they've applied it to an area that will give more people access to the new cards' powerful new features - which would otherwise go underused because so many people shy away from assembly of any kind.

      I was skeptical myself until I saw how they've put it together. I can't wait until I have enough dough to buy one of these cards now...
  • Cg is to OpenGL 2.0
    as DirectX is to OpenGL

    It's a closed ("partially open") standard, for a subset of hardware, which is not as forward looking as a proposed competing standard.

    Support OpenGL 2.0!
  • Yeah, but... (Score:2, Insightful)

    by NetRanger ( 5584 )
    There are some issues that I think nobody seems to be addressing, as in:

    * Realistic fog/smoke -- not that 2-D fog which looks like a giant translucent grey pancake. Microsoft comes closer with Flight Sim 2002, but it's not quite there yet.

    * Fire/flame -- again, nobody has created more realistic acceleration for this kind of effect. It's very important for many games.

    Furthermore, I would like to see fractal acceleration techniques for organic-looking trees, shrubs, and other scenery. Right now they look like something from a Lego box. In fact, fractals could probably help with fire/smoke effects as well, adding thicker and thinner areas that look semi-random rather than obviously patterned.

    Perhaps I'm just too picky...
    • Actually, the Serious Engine [croteam.com] does a pretty impressive job with both fog and flame effects. It's still a few steps away from realistic, but it does a substantially better job than the Unreal or Quake technology at the moment.

      The demo versions of both Serious Sam games have a "technology test" level you can walk through in single-player mode that shows off the engine's capabilities pretty well.
  • by Anonymous Coward
    Does this mean they'll finally be able to make a decent nose-picking routine for Counter-Strike hostage models?

  • by mysticbob ( 21980 ) on Thursday June 13, 2002 @03:04PM (#3695920)
    there's already lots of other shading stuff out there, nvidia's hardly first. at least two other hardware shading languages exist. these languages allow c-like coding, and convert that into platform-specific stuff. unfortunately, none of the things being marketed here, or now by nvidia, are really cross-platform. references: of course, the progenitor of all these, conceptually, is renderman's shading language. [pixar.com]

    hopefully, opengl2's [3dlabs.com] shading will become standard, and mitigate the cross-platform differences. it's seemingly a much better option than this new thing by nvidia, but we'll have to wait and see what does well in the marketplace, and with developers.

  • If this makes it easier to create high-end video games, maybe it could boost the Duke Nukem release schedule. I did say maybe.
  • Hmm... How long before we have Graphic Card Worms/Subliminal messages...

    Segue to someone playing a video game at a high frame rate...

    Gee, the more I play this game, the less bad I feel about buying proprietary technology and the angrier I get at those 9 states for disagreeing with the DoJ Settlement. Oh, and I'd like to buy all of Britney Spears' CDs and eat every meal at McD's... I'm sure I didn't feel this way yesterday... What's odd, too, is that every so many frames there seems to be a flicker of something I can't quite make out...

    Screen briefly flickers something else

    Hmm... I can't remember what I was just thinking about, but I do have the strangest desire to email all of my personal information and credit card numbers to mlm5767848@hotmail.com...

  • What Cg means (Score:5, Informative)

    by Effugas ( 2378 ) on Thursday June 13, 2002 @03:36PM (#3696194) Homepage
    Seems like a decent number of people have absolutely no clue what Cg is all about, so I'll see if I can clear up some of the confusion:

    Modern NVidia (and ATI) GPUs can execute decently complex instruction sets on the polygons they're set to render, as well as on the actual pixels rendered either direct to screen or onto the texture placed on a particular poly. The idea is to run your code as close to the actual rendering as possible -- you've got massive logic being deployed to quickly convert your datasets into some lit scene from a given perspective; might as well run a few custom instructions while we're in there.

    There's a shit-ton of flexibility lost -- you can't throw a P4 into the middle of a rendering pipeline -- but in return, you get to stream the massive amounts of data that the GPU has computed in hardware through your own custom-designed "software" filter, all within the video card.

    For practical applications, some of the best work I've seen with realtime hair uses vertex shaders to smoothly deform straight lines into flowing, flexible segments. From pixel shaders, we're starting to see volume rendering of actual MRI data that used to take quite some time to calculate instead happening *in realtime*.

    It's a bit creepy to see a person's head, hit C, and immediately a clip plane slices the top of the guy's scalp off and you're lookin' at a brain.

    Now, these shaders are powerful, but by nature of where they're deployed, they're quite limited. You've got maybe a couple dozen assembly instructions that implement "useful" features -- dot products, reciprocal square roots, adds, multiplies, all in the register domain (a rough sketch of that kind of math follows at the end of this post). It's not a general purpose instruction set, and you can't use it all you like: there's a fixed limit on how many instructions you may use within a given shader, and though it varies between the two types, you've only got space for a couple dozen.

    If you know anything about compilers, you know that they're not particularly well known for packing the most power per instruction. Though there's been some support for a while for dynamically adjusting shaders according to required features, they've been more assembly-packing toolkits than true compilers.

    Cg appears different. If you didn't notice, Cg bears more than a passing resemblance to Renderman, the industry standard language for expressing how a material should react to being hit with a light source. (I'm oversimplifying horrifically, but heh.) Renderman surfaces are historically done in software *very, very* slowly -- this is a language optimized for the transformation of useful mathematical algorithms into something you can texture your polys with...speed isn't the concern, quality above all else is.

    Last year, NVidia demonstrated rendering the Final Fantasy movie, in realtime, on their highest end card at the time. They hadn't just taken the scene data, reduced the density by an order of magnitude, and spit the polys on screen. They actually managed to compile a number of the Renderman shaders into the assembly language their cards could understand, and ran them for the realtime render.

    To be honest, it was a bit underwhelming -- they really overhyped it; it did not look like the movie by any stretch of the imagination. But clearly they learned a lot, and Cg is the fruit of that project. Whereas a hell of a lot more has been written in Renderman than in strange shader assembly languages (yes, I've been trying to learn these lately, for *really* strange reasons), Cg could have a pretty interesting impact on what we see out of games.

    A couple people have talked about Cg on non-nVidia cards. Don't worry. DirectX shaders are a standard; game authors don't need to worry about what card they're using, they simply declare the shader version they're operating against and the card can implement the rest using the open spec. So something compiled to shader language from Cg will work on all compliant cards.

    Hopefully this helps?

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
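
    To make the "dot products and reciprocal square roots" bit above concrete, here's a rough software analogue in plain C (illustrative only, not shader assembly, Cg, or any real API): simple per-pixel diffuse lighting, which a pixel shader does with a handful of register instructions.

        #include <math.h>

        /* Normalize two vectors and take their dot product -- the core of diffuse
           shading, and the kind of math a pixel shader runs for every pixel. */
        typedef struct { float x, y, z; } vec3;

        static float dot3(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        static vec3 normalize3(vec3 v)
        {
            float inv = 1.0f / sqrtf(dot3(v, v));   /* the "reciprocal square root" step */
            vec3 r = { v.x * inv, v.y * inv, v.z * inv };
            return r;
        }

        /* Diffuse intensity for one pixel, clamped at zero when the light is behind. */
        float diffuse(vec3 normal, vec3 to_light)
        {
            float d = dot3(normalize3(normal), normalize3(to_light));
            return d > 0.0f ? d : 0.0f;
        }

    Run that for every pixel of every frame and it's clear why you want it happening on the GPU instead of the CPU.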
  • If I remember correctly, one of the major features of OpenGL 2.0 was going to be a high-level language and compiler that would compile down to low-level pixel and vertex shader assembly. And, if I'm not mistaken, nVidia was one of the biggest contributors to this language. Has nVidia decided to just screw OpenGL altogether? Or is this a temporary equivalent for the time being, while we're still using OpenGL 1.2?
  • Their website is at www.cgshaders.com. There is a coding contest, some articles, and forums to ask questions.
  • Maybe I'm a bit paranoid (I probably fit right in on /.) but when I read news subheadlines like "Nvidia, the dominant PC graphics chip maker, has teamed up with Microsoft and developed a new cross-platform graphic language called Cg that it hopes becomes an industry standard" I don't really feel all warm & fuzzy inside. CG Channel states [cgchannel.com] "NVIDIA's compiler toolkit would be more optimized for their own hardware owing to greater understanding of their own technology. ATI would have the option of writing their own backend compiler to support their hardware more optimally, but the existing NVIDIA toolkit should generate working code on ATI's part. [...] NVIDIA are hoping that Cg will be the industry's defacto standard simply due to its time on the market [...]" If NVIDIA can't be reasonably criticized for supporting their own chipset with more optimized code (while leaving it open to others with competing chipsets), can co-developer Microsoft be criticized for favouring their own software in this? Couldn't MS solutions (DirectX, XBox-specific tools, etc.) be favoured under Cg merely by MS investing more in Cg development and (as one of the two developers controlling the standard) updating compilers and shader functions for their software sooner or more completely than for others? If that were the case, Cg could just end up being another "embrace, extend, etc." scenario, this time in the graphics market, to push MS & Nvidia technologies.

    Nvidia has been fair to good in their cross-platform support so far, but of course MS has not been. To the relief of many, CG Channel reports that "Interestingly, key components of NVIDIA's Cg compiler will be open-sourced and will work on Linux, Mac OS X and Xbox platforms. [...] Compiled code for Direct3D will be cross-platform (well, as cross platform as Microsoft might expect). OpenGL code should work much the same as long as the OpenGL extensions are supported on the target. NVIDIA says it will provide compiler binaries for all of the major platforms." The real proof will be in how Nvidia supports Cg on other platforms and OpenGL over the long term. Will these binaries be released at the same time and with the same feature sets? And will this continue to be the case, or will full cross-platform support only exist in the beginning, until Cg becomes a de facto standard?

    I'm skeptical at this point, since we all know there's a world of difference between being merely compatible and being optimized. There's some evidence so far of how Cg is being implemented. For instance, it looks like there isn't an OpenGL fragment program profile for the Cg toolkit, while there is one for Direct3D8. Nvidia says the reason Cg has no OpenGL ARB vertex_program profile, while there are both dx8ps and dx8vs profiles, is that OpenGL is dragging its heels with the standard; perhaps valid, but the result is nonetheless that Cg is better implemented on the DX8 side than the OpenGL side. While it's theoretically possible to program Cg textureShaders and regcombiners in OpenGL, it's not currently supported. Much of the feature set in Cg looks like what has been announced so far for OpenGL2 - could nVidia just be trying to duplicate OpenGL2 functions using their own identical but proprietary Cg extensions instead? Finally, Nvidia announced support for Windows, MacOSX and Linux; the first and last platforms should have native Cg compilers (Linux soon, apparently), but what about MacOSX?
  • Research goes on at the top graphics hardware companies to accelerate more of the graphics pipeline through more generic programmability. To this end, programmers will want to shift more of their existing graphics pipeline onto hardware. Games programmers in particular, working on multiple platforms, value cross-platform code. So where does a custom language fit in which is tailored to a particular vendor's hardware? I'm not sure it does.

    nVidia may offer ATI the ability to get on board this Cg language, but the reality will be different. What disturbs me is that nVidia's chief scientist went on record as saying that ATI's refusal to implement nVidia's shader technology (they did their own, which some consider superior) amounted to destabilising the industry. No, that would be competing, dear chap.

    Who exactly will need to use Cg, and what market will ultimately use it? I have no doubt that PC game developers (and Xbox) will take a look at it, but let's not pretend that this is a solution which embraces other vendors. Of course I'll be glad to eat my hat if ATI and Matrox come out in support of this.

    It's not an entirely bad idea but writing regular language compilers for exotic hardware is more than feasible. My company has done exactly this for the PS2's vector units with a C/C++ compiler. Those VLIW co-processors are quite similar to the sort of more generically programmable hardware that you'll see in graphics hardware down the line (combined with shaders of course).

    There are some good reasons for using a custom language: better control over the implicit parallelism of multiple shaders/vertices, etc. However, creating a new language for people to use destroys the notion of recyclable code and introduces yet more platform-specific issues. And let me tell you, there are quite enough IF/THEN statements in the graphics engines of PC games as it is. Unless your work is being used by multiple developers, in which case any decent authoring tool for specific hardware may be welcomed.

    Anyhow, I'm not entirely negative about nVidia's efforts - it's an interesting stab at a problem we had kind of thought everyone (but us) was ignoring. At the very least it's destined to become a more useful shader authoring tool for PC/Xbox game engine/middleware developers.

    I wonder what ATI and Matrox's approach will be. I wonder if they'd like a regular compiler for their shaders? :)