Codeplay Responds to NVidia's Cg

Da Masta writes "Codeplay Director Andrew Richards has some interesting things to say about NVidia's Cg graphics language. Just to refresh, Codeplay is the company that publishes Vector C, the badassed, high-performance C compiler. In brief, it seems as though Cg isn't the universal, standard graphics language some pass it off to be. Certain design decisions in the language, such as the lack of integers, break/continue/goto/case/switch structures, and pointers, suggest a general lack of universal usefulness. This leads to suspicion that NVidia plans to add to and tailor the language in the future according to its own hardware and its features. Of course, this is all old news to those of us who noticed NVidia co-developed the language with the Evil Empire."

  • by Anonymous Coward
    Evil Empire [microsoft.com]

    No bias here at all!
  • Who uses goto statements? I'd count that as a feature.
    • Who uses goto statements? I'd count that as a feature.

      Some CS people hear "go to hell" and are more offended by the "go to" than by the "hell", but in my C code, I use goto carefully to break out of nested loops and to bail on exceptions. Even though the Java language has goto bound to nothing, it has nearly equivalent structures: 1. try...catch; 2. labeled break.
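      For what it's worth, here's a minimal sketch of the nested-loop case (made-up names, just to show the pattern):

        enum { ROWS = 16, COLS = 16 };

        /* Search a 2-D table; goto jumps straight out of both loops at once. */
        int find_value(const int table[ROWS][COLS], int wanted, int *row, int *col)
        {
            for (int r = 0; r < ROWS; r++) {
                for (int c = 0; c < COLS; c++) {
                    if (table[r][c] == wanted) {
                        *row = r;
                        *col = c;
                        goto found;
                    }
                }
            }
            return 0;   /* not found */
        found:
            return 1;
        }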

      • FYI, Eiffel, which can be considered a truly rigorous language, doesn't have abort-block statements like break. I haven't learned about its exception model yet, so I can't comment on that.
      • 2. Labeled break is more equivalent to goto than 1. try...catch, because exceptions have more overhead.
      • I use goto carefully to break out of nested loops

        Funny, I use return to break out of nested loops. If your functions are so huge you feel the need to use goto, that's a sign they need to be split up. I've found it pays off to do this, because I often end up reusing the new, smaller subfunction. There's really no justification for using goto in human-written C code.
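        In other words, something like this (a hypothetical helper, purely illustrative):

          // The inner search pulled out into its own small function;
          // return does the job goto was doing.
          static bool find_in_table(const int *table, int rows, int cols,
                                    int wanted, int *row, int *col)
          {
              for (int r = 0; r < rows; r++) {
                  for (int c = 0; c < cols; c++) {
                      if (table[r * cols + c] == wanted) {
                          *row = r;
                          *col = c;
                          return true;   // bails out of both loops at once
                      }
                  }
              }
              return false;
          }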

        • Actually, many numerical methods people (who tend to use C extensively) seem to like 'goto' a lot. I'm not sure I quite see things their way, but try using 'return' to break out of a triple nested 'for' loop -- often you need to jump to different spots depending on various error conditions, and believe it or not labels do come in handy. Sure, it's relatively straightforward to avoid using 'goto' even in those kind of applications, but usually not without making your code less elegant and/or compact. And these people take writing compact code to ridiculous extremes, believe me. We all learn about the evils of 'goto' in CS101 but I think it does have its (admittedly very limited) uses. Presumably the end goal is to write nice, readable code, not avoid using 'goto' at all costs.
          • And that's just it: the anti-goto fanatics always fall back on the defense of "Well, if you did this, and this, and this, then you wouldn't need the goto! See, therefore I've proven that goto is bad!" But to those of us for whom goto is just one of many programming tools at our disposal, we use it because we feel that it simplifies and cleans up the code, and can significantly clarify a segment of code.

            The anti-goto campaign is understandable in the respect that it is a method of keeping the untalented from shooting themselves in the foot. By the same token, all languages but VB.NET are evil and are only for the crazy kooks!
            • Trouble is that even the most talented of us make errors in judgement pretty often (what new code doesn't have bugs?). And a lot of experienced programmers have found that even if it seems like a good idea at first, using goto is one of those things that always seems to turn out to be an error in the long run.

              I would make an analogy with multiple inheritance in C++. Now if you've reached the point where you're considering using MI, you've already got some modicum of experience and judgement. I used MI maybe 3 times in my code (all of which seemed eminently reasonable), considering the situation carefully before making the decision. And guess what: one of those 3 times it unexpectedly blew up in my face horribly, just like the anti-MI people always said. (I hit a bug in GCC that left me baffled for 2 days.) Somehow MI seems to be intrinsically evil, although it seems so tempting and useful sometimes.

              I've learned my lesson, and now I have a strong anti-MI bias in my coding. Using goto is the same kind of thing for me, although it isn't nearly as serious a problem.

              I'm not an "anti-goto fanatic" (I don't really care that much either way), I just like arguing about it :).

              • MI works well if the only things you inherit are virtual base classes (basically treating them like java interfaces). It's useful for writing event handlers.
                • Careful, I think you mean "abstract" base class, not "virtual". A "virtual base class" is a strange and obscure feature of C++ a lot of people don't know about: look it up. It's used to have only one copy of class A in class D when you have a situation where B inherits A, C inherits A, and D inherits B and C.

                  Anyway, if you meant abstract base classes, I agree with you :).

                    > A "virtual base class" is a strange and obscure feature of C++ a lot of people don't know about: look it up.

                    Let's not make rash assumptions about what Tony does and does not know, shall we? Especially when he's making perfect sense. Suppose you have two classes that publicly inherit from Automobile (which need not be abstract at all), Car and Truck say. Now let's say you want a new class, imaginatively called Caruck, to publicly inherit from both Car and Truck. You should probably make Car and Truck _virtual_ Automobiles, so that Caruck only has one copy of Automobile's variables.
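                    A quick sketch of that diamond, using the class names above (the wheels member is made up for illustration):

                        class Automobile { public: int wheels; };

                        // "virtual" makes Car and Truck share a single Automobile
                        // subobject in any class that inherits from both of them.
                        class Car   : public virtual Automobile { };
                        class Truck : public virtual Automobile { };

                        // Caruck ends up with exactly one copy of Automobile's members.
                        class Caruck : public Car, public Truck { };

                        int main() {
                            Caruck c;
                            c.wheels = 6;   // unambiguous thanks to virtual inheritance
                            return 0;
                        }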
          • Actually, many numerical methods people (who tend to use C extensively) seem to like 'goto' a lot.

            A lot of numerical methods people use Fortran and Fortran has "GO TO". If you did a quick conversion to C you'd probably just leave it. f2c probably uses goto, but I don't remember for sure.

            -Kevin

              > A lot of numerical methods people use Fortran and Fortran has "GO TO".

              This is certainly a factor. Lots of NA people started out on FORTRAN and some still cling to old habits. I don't think that's the main reason though; the individuals I have in mind have been using C/C++ exclusively for the past ten years or so -- f2c is definitely out. I even seem to remember reading an article in the Communications of the ACM (quite a while ago) extolling the virtues of 'goto' for numerical (specifically, matrix) applications. It seemed a bit peculiar at the time but perhaps the guy had a valid point to make after all.
        • Funny, I use return to break out of nested loops. If your functions are so huge you feel the need to use goto, that's a sign they need to be split up.

          I write video drivers for soft real-time applications. If I split up a function into an inner function and an outer function wherever I need to break out of something, performance drops 30% because now I have to pass a dozen arguments to a function call in a loop, and on some architectures, function calls are very expensive.

      • by Anonymous Coward
        Indeed. The goal of good programming is to make the code readable and maintainable. If any coding practise gets in the way, do either of two things:
        1) rewrite the sections
        2) break the rule

        If the code section is long, but any functional section fits on a page, then the code is fine. If goto helps keep the code clean (rather than nested if's, or shedloads of small function calls), then use it.

        Rules are there to make you think before you break them.
    • Re:what? (Score:2, Interesting)

      by whizzter ( 592586 )
      In the case of supporting the PS2 VUs, goto is a gift from (insert the god you worship).
      And I think the same would apply for other vertex/pixel shading units.
      Why?
      Because the VUs have a very small amount of memory, and to get the most out of it you should almost count cycles as well.
      I haven't tried VectorC, but considering the VU I think it's the only sane option apart from doing the stuff in asm (which I think might be preferable anyhow).
      As for Cg, I think I have to agree with the points Codeplay makes about Cg being too simple and aimed at the current NVidia implementations, but the statement itself is to be considered FUD.

      I have been working with 3d for some years now and spent the last two years coding professionally for the PS2 and Dreamcast.

      IMHO, most stuff defining appearance (bumpmaps, etc.) should, when it comes to supporting 3D hardware, be translated from a really high level (RenderMan?).
      Cg feels too low-level and specific for this, especially if you plan on supporting consoles.
      VectorC could possibly (though questionably in reality?) get away with it in this case, because most of the code will be C/C++ anyhow and thus the specific code could be taken out and recompiled with VectorC when aiming for performance.

      / Jonas Lund
    • Re:what? (Score:2, Informative)

      by XMunkki ( 533952 )
      I'd simply say that without "goto" (the functionality, not the language construct), many tasks of programming would be nearly impossible. The trick is, many high-level languages discourage or lack goto, and that's a completely different matter when compared to low-level, "assembly-like" programming like the PS2 VUs or (I assume) Cg.

      So even if you never use gotos in C/C++, they are still compiled in as unconditional jumps at the assembly level. The same should happen with pixel/vertex programming.
    • Who uses goto statements? I'd count that as a feature.

      Anyone who wants to efficiently implement a state machine.
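      For example, here's a toy recognizer for strings matching a+b+, written as goto-driven states (made-up example, nothing real):

        int accepts(const char *s)
        {
            if (*s != 'a')
                return 0;                 /* need at least one 'a' */
        state_a:
            if (*s == 'a') { s++; goto state_a; }
            if (*s == 'b') { s++; goto state_b; }
            return 0;                     /* anything else rejects */
        state_b:
            if (*s == 'b') { s++; goto state_b; }
            return *s == '\0';            /* accept only at end of input */
        }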

    • You do. The only difference is that anyone who has programmed assembly KNOWS that they do. GOTO, JMP, or any of their variants is extremely useful for error handling in languages that don't have the snobbish try/catch mechanism (or have a less than stellar one), and of course it's good for those humans who still produce better code than the compiler.
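      The classic C error-handling shape looks roughly like this (a sketch with made-up names; the point is the single unwind path):

        #include <stdio.h>
        #include <stdlib.h>

        int process(const char *path)
        {
            int ret = -1;
            char *buf = NULL;
            FILE *fp = fopen(path, "rb");
            if (!fp)
                goto out;
            buf = (char *)malloc(4096);
            if (!buf)
                goto out_close;
            if (fread(buf, 1, 4096, fp) == 0)
                goto out_free;            /* treat a failed/empty read as an error */
            ret = 0;                      /* success */
        out_free:
            free(buf);
        out_close:
            fclose(fp);
        out:
            return ret;
        }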

  • Mmmm... I'd love to see Gnome and KDE compiled with VectorC or intel's compiler...
  • an almost-monopoly and a true monopoly combine like voltron to create yet another monopoly on graphics languages.

    who gets to be the battleship?
    • Like many of you, I'm concerned about nVidia's intentions with this language. However, calling nVidia an 'almost-monopoly' because they have the most market share is ridiculous. nVidia has gained this market share despite the competition in the marketplace, and that is an inarguable point in my opinion. Ask any gamer who knows their history and they'll tell you: nVidia is at the top of this game due to technical merit.

      ATI has been seriously competing with nVidia since the TNT2, which intensified with their introduction of the Radeon. Now they are in a game of leapfrog and many other vendors have entered the race (SiS, Matrox, 3Dlabs, even Trident.)

      • Re:wow! (Score:2, Interesting)

        by rizawbone ( 577492 )
        However, calling nVidia an 'almost-monopoly' because they have the most market share is ridiculous.

        One entry found for monopoly.

        Main Entry: monopoly
        Pronunciation: mə-ˈnä-p(ə-)lē
        Function: noun
        Inflected Form(s): plural -lies
        Etymology: Latin monopolium, from Greek monopōlion, from mon- + pōlein to sell
        Date: 1534
        1 : exclusive ownership through legal privilege, command of supply, or concerted action

        they have almost all of the market share. they are almost a monopoly. i don't see what's so hard about this point. ati is making a valiant effort, but they aren't shipping the kind of units/chips that nvidia is. sis, matrox, 3dlabs and trident do not have a 3d architecture even comparable to the latest nvidia offerings.

        if tomorrow nvidia released detonator drivers with a special feature or program, it would become a standard. big companies make games for nvidia chips. people speak about games in nvidia terms (that game's a geforce2 featureset game).

        face it kiddo, nvidia is *the* video card maker right now when it comes to consumer desktop 3d.

        • Yes, but is Nvidia *the* videocard maker because they have exclusive ownership through legal privilege, or because they command supply, or because of concerted action? No, they are *the* videocard maker for none of these reasons. It's not about "almost". They don't fit the definition. There is no "almost" concerted action, or "almost" command of supply, or "almost" exclusive ownership. They aren't a monopoly, that's silly.
          • the point of any business is to

            1) make money
            2) once 1 is done, gain a monopoly

            i think it's safe to say that nvidia (yeah, screw their damn capital letter, the bums) is profitable. that leaves only 2 to be done, so any action taken by nvidia would be a concerted effort to gain a monopoly.
            • 1) make money
              2) once 1 is done, gain a monopoly
              >>>>>>
              I suppose you'll now tell us that you learned this at Harvard business school. Exactly where do you get off making up your own economic theories? Ford is profitable, are they a monopoly? Dell is profitable, are they a monopoly?
              • just because there is competition impeding the acquisition of a monopoly doesn't mean that the business isn't trying to gain one. and i'm 18, so it's apparent that i didn't learn this in law school. however, it makes sense, no?
                • Wow. Have a little more faith in corporate America. NVIDIA has proven to be a good company, which is why its users like it. I'm sure most companies want to get to and stay on top, but I doubt that many of them want to do so by becoming an illegal monopoly.
              • besides, why the hell can't i pretend to know everything like everyone else on /.?
              • Ford has a huge foot in the car industry, have you ever taken the time out to research what percentages of other car companies they own? In Edison, NJ (i'm not sure if that plant actually shut down or not) they make all of Mazda's small trucks. Sounds monopolistic to me!
                • Yes, they own lots of car companies. They fully own Jaguar, Volvo, and Aston Martin, among others. They bought these companies so they could compete effectively with the likes of BMW and DaimlerChrysler at the high end. I'm sure they want to be more competitive by buying these companies, but do they necessarily want to be an illegal monopoly? Sounds like quite a stretch to me.

                  PS> As for the Mazda thing, it's done all the time by car makers. Ford used to make some of Nissan's cars (an SUV and a minivan) from a line parallel to their Mercury brand. A lot of times particular manufacturers do not have a product in a popular market (in the case of Mazda, small trucks), so they make a deal with another company to get in on the action. Volvo, for example, did this with its S40 compact, which is actually made by Mitsubishi.
        • they have almost all of the market share. they are almost a monopoly. i don't see what's so hard about this point. ati is making a valiant effort, but they aren't shipping the kind of units/chips that nvidia is. sis, matrox, 3dlabs and trident do not have a 3d architecture even comparable to the latest nvidia offerings.

          They have 68% of the video card market. Not quite the "almost all", is it? Sure it's a high percentage, but this is not due to them being a monopoly. The fact that some of those abovementioned companies haven't come out with a comparable product (which is garbage in ATI's case, considering the 8500 was comparable to a Geforce 3 Ti500 at a lower price point up until the Geforce 4's release ... not to mention how the R300 is, by all accounts, up to 50% faster than a Ti4600 in some games) is irrelevant. The fact of the matter is, competition DOES exist; it's alive and well and it's going to be heating up even more in the next few months (I don't know if you know this or not, but 3dlabs is on the verge of releasing an architecture that will be found in workstation-class systems and consumer-level systems, and Trident will be releasing their own line of GPUs as well).

          if tomorrow nvidia released detonator drivers with a special feature or program, it would become a standard. big companies make games for nvidia chips. people speak about games in nvidia terms (that game's a geforce2 featureset game).

          Features that nVidia has implemented in their drivers and hardware were conceived by other standards bodies. Support for hardware Transform and Lighting was implemented in OpenGL long before the Geforce line of cards was released. The vertex/pixel shader standards were NOT conceived by nVidia; they were simply the first consumer-level video card manufacturer to implement them in hardware.

          And yes, I do not dispute that the Geforce line of cards is THE benchmark for today's games. It's the most popular video hardware, how could it not be?

          face it kiddo, nvidia is *the* video card maker right now when it comes to consumer desktop 3d.

          It is not *the* only viable 3D solution, as any user with an ATI Radeon 8500 can attest (and there are a lot of those ... browse the forums on an enthusiast site like www.hardocp.com and see some of the flame wars which result over this point)

          In summary, nVidia has made a superior product and that has paid off for them. I don't believe that they are a monopoly until there is no serious competition and I do not believe that to be the case.

  • "such as the lack of ... pointers"

    Well, Nvidia sure knows how to make a C guru cry!
  • by t0qer ( 230538 ) on Saturday July 27, 2002 @08:58PM (#3966154) Homepage Journal
    This leads to suspicion that NVidia plans to add and tailor the language in the future according to its own hardware and their respective features.

    Smells like glide!
    • In that case, I will be screwed again. That is, if Cg has the same effect on the market as Glide did, then I will not be able to play games, seeing as I cannot currently afford a graphics board that can be used with this type of language. So I would hope that people will really create the games in OpenGL first, and then if they want to crank up the detail levels for the games, they can use Cg. Kind of like what is in some games now, although they kind of don't have a choice with the way this language is looking to be designed, with no ints and such.
  • Just found this quote funny... especially because of the knock on nvidia for working with our favorite evil empire (emphasis added)...
    This approach will be extended to embrace emerging new hardware features without the need of proprietary 'standards'.
    I guess the evil empire has gone everywhere they wanted to go today.
  • by invispace ( 159213 ) on Saturday July 27, 2002 @09:05PM (#3966181)
    1. It's not C.

    2. Exluna was bought by nVidia.

    3. Exluna makes 2 renderman compliant renderers.

    4. Shaders are used by renderman.

    5. nVidia is touting CG as a compiled shader... just like one has to compile shaders for renderman.

    6. Fill in the next 2 years here of having look development tools for major graphics studios that look close enough to the final renders that it speeds up FX work to near realtime for shading and lighting.
    • A couple of notes about the above:

      - CG came around long before Exluna was bought by Nvidia. I overheard Tony Apodaca (the co-author of Advanced RenderMan with Larry Gritz, the founder of Exluna) at SIGGRAPH this week saying how CG was pioneered by a bunch of RenderMan users who wanted to bring that technology to real-time hardware. Since RenderMan is almost 15 years old now, the user base that CG could reach is vast.

      - Pixar's lawsuit with Exluna over trade secrets was settled, and the company was subsequently bought out by Nvidia. There is no guarantee that any of Exluna's work will be continued. It's not uncommon in these situations to never see anything come of it. It's a nice idea that maybe Nvidia will make RenderMan-supported hardware, but don't hold your breath. Nvidia wants to be the next Intel, or Microsoft, and you don't get there by supporting other people's software; you get there by eating competitors alive and flushing their work down the toilet.

      - Best-case scenario: Larry Gritz re-releases BMRT, his free (as in beer) renderman ray-tracer, and at least we don't have to pay thousands for a production-quality renderman implementation. Or (even better) someone writes and releases another free (as in freedom) renderer that actually works.

      Dan
    • Although Cg is a nice contribution, I disagree that we can really call it a shading language. It's a nice processor for the snippets of code that wind up in the vertex and fragment shaders, but almost completely neglects the assignment and semantics of streams and constant registers from main memory to the vertex unit: until our shading language compilers address this, and so make it easy and consistent to swap out art and shaders *as data* rather than by changing your code to change shaders, we won't have a shader compiler.

      Paul

  • This guy's got some good points, especially with regard to pipelines with more programmability such as the PlayStation 2 GPU. 3Dlabs is also developing highly programmable pipelines
    3DLabs press release [3dlabs.com]

    Having better control structures and pointers will be important down the line.
  • OMG! That compiler is $800 (the one with P4 support). These guys have got a case of the M$ pricing delirium. /Me thinks I save my money and buy a shiny new p4-2.8G instead.

    "Never cuss a man until you've walked a mile in his shoes. Then if he gets mad, you'll have a headstart and he'll be barefoot."

    • Well, you need to understand that those products really aren't meant for the average joe. They are meant for game development houses, who don't have any issues spending 800 dollars on a production tool.

      If I was a gaming corporation, and I knew that an 800 dollar tool could make my game faster for my end users, I'd do it in a heartbeat.

      Lastly, you failed to mention that there is an 80 dollar version of the same software. It may not have the same features as the 800 dollar version, but it comes close enough.

      Sunny

    • My thoughts are, if I were a company looking to pull off a coup against the M$/Nvidia alliance, I would want to get my product adopted by as many people as possible, whether they be a mega game developer or joe blow working out of his garage writing the next Doom. It seems to me that it would be a lot easier to sell 10 units at $80 than one at $800, which would bring broader exposure for my product. There's strength in numbers.

      Never settle with words that which can be settled with a flamethrower.

  • by kbonin ( 58917 ) on Saturday July 27, 2002 @09:21PM (#3966230)
    I love the line regarding Codeplay's own product: "This approach will be extended to embrace emerging new hardware features without the need of proprietary 'standards'." So, the point is: 'Don't waste your time on that competitor's product, wait for us to finish ours!' Where have I heard this marketing approach before? :)

    Codeplay's VectorC is optimized around Playstation2 centric vector graphics hardware and scene graph libraries, whereas nVidia's Cg is optimized for most current PC accelerators. Most playstation developers use licensed scene graph libraries, whereas most PC game developers use custom or licensed engines over low level libraries, so both approaches are appropriate for their current customer base...

    I think it's reasonable to assume Cg will evolve with its target hardware, and I'd rather nVidia do a good job with the current version than waste time on hypothetical future features. I'm using Cg now, and it's a great step in the right direction - a high level shader language not owned by M$ and/or tied to D3D.

    I think the biggest issue w/ Cg is how nVidia is going to address the divergence in silicon budgets between themselves and ATI - nVidia is pushing for more, faster vertex shaders, while ATI (w/ M$'s backing) is pushing for more powerful pixel shaders, i.e. D3D pixel shaders v1.4, exposed in D3D 8.1, are supported in the ATI Radeon 8*, but no publicly announced nVidia cards support the nicely expanded instruction set and associated capabilities. nVidia also needs to complete fragment [pixel] shader support for OpenGL (and release source or a multithreaded version of the GL runtimes...)

    • I'd say most Playstation 2 developers use custom graphics code for each game, whereas most PC developers use custom translators for high level graphics libraries (because low level libraries would require too much per card customization).

      But that's just my opinion. And Vector C fits pretty well in with the PS2 VUs but almost every developer has at least one person doing mainly VU asm anyway so they may not be so eager to switch.
    • All good points. My tuppence:

      FYI, the current version is 1.01. Maybe that's why whole bunches of features aren't available? Has Microsoft's influence made us so paranoid that we feel compelled to seek Evil Empires around every corner?

      • Dunno... the OpenGL support has been sufficient to date, I'm using Cg a few hours a day, and you can generate OpenGL fragment shaders by compiling to a D3D target and passing the output through their D3D to GL shader translator, which while a pain has allowed our shader prototyping to continue.

        I'm more hoping that nVidia can leverage Cg to keep the pressure up on the OpenGL ARB to move to v2 faster, despite the intra-vendor infighting, patent pressure from M$, and the dilemma re: multipass strategies.

        To do this, nVidia must make Cg usable for bleeding edge ATI products as well, and the divergence w/r/t pixel shader functionality is considerable. I'm hoping they won't do anything to preclude ATI supporting Cg, should ATI really choose to do so. At the same time, I don't expect ATI to do anything to damage their role reversal w/ nVidia w/r/t being M$'s favored vendor (and candidate for XBox2 chip supplier), so I'm not expecting ATI to step forward anytime soon onto the Cg front, or even GL 2...

        This means nVidia needs to take some initiative to demonstrate they will not preclude support for next-gen pixel shaders, especially features that cannot be trivially abstracted or unrolled from a high-level language, like 3 and 4 levels of dereferencing of textures instead of one. Unfortunately, I have not had any favorable impressions on this topic when I talk to developer relations at nVidia.

        Other than that, for a v1.01, they've done their usual damn fine job of providing stable code, and I'm enjoying the advantages of C over ASM for most of the grunt level shaders we need...
    • That's funny, because CodePlay has been around for a few years now pitching their PC compiler. The PS2 compiler is still in development. PS2 developers have access to the prerelease versions right now.

      So I think it's incorrect to say they're porting their PS2 technology to the PC.

  • This is the second article in a row with the word "badass" in it :)
  • NVidia has it right (Score:5, Informative)

    by Animats ( 122034 ) on Saturday July 27, 2002 @09:48PM (#3966304) Homepage
    I'll have to go with NVidia on this one. A shader language should be limited and highly parallelizable. Ideally, you'd like to be able to run the per-pixel shaders simultaneously for all the pixels. In practice, you're going to be running more and more shaders simultaneously as the hardware transistor count goes up. So you don't want these little programs interacting with outside data too much.

    If you put a general-purpose execution engine in the graphics engine, you need an OS, context switching, protection mechanisms, and all the usual system stuff. If pixel shaders aren't allowed to do much or run for long, managing them is much simpler. Bear in mind that all this stuff has to work in multiwindow environments - one app does not own the graphics engine, except on game consoles.

  • by andycat ( 139208 ) on Saturday July 27, 2002 @09:53PM (#3966314)

    I think what Richards is overlooking in his commentary is that Cg is not *supposed* to be a general-purpose graphics programming language. Its design goal was precisely what he said later in the article -- to expose the capabilities of current (and presumably future) NVIDIA hardware without requiring programmers to write assembly code. Likewise, conditionals like if, case, and switch aren't in there right now because the profiles the compiler is aimed at -- DirectX and OpenGL extensions -- don't yet support them. I expect this to change.

    Also, Cg programs run at the level of vertices and pixels. This is the wrong place to be thinking about a scene graph: that happens at a much higher level of abstraction. Dealing with scene graphs in a fragment shader is a little bit like making L1 cache aware of the memory-management policy of whatever OS happens to be running.

    After reading the article a few times, I think it's meant more as a "here's why our product is better than theirs" release than an honest criticism of the design of Cg. If he was interested in the latter, there are a few obvious issues. I won't go into them all, but here are two I ran into last week at a Cg workshop:

    • Cg limits shaders to single-pass rendering. This is a design limitation: there are lots of interesting multipass effects, and it's not all that difficult to get the compiler to virtualize the shader to do multipass on its own. The Stanford Real-Time Shading Project [stanford.edu] people wrote a compiler that does precisely that: it uses Cg as an intermediate language. The advantage of Cg's design decision, though, is that you-the-programmer have fuller control over what's happening on the hardware, which is the entire point of the exercise.
    • Cg requires that you write separate vertex and fragment shaders. You can't do things like texture lookups inside a vertex shader; you can't change the position of your vertices inside a fragment shader. Again, this gives you control over the details of the pipeline at the cost of some added complexity. This can be changed by changing the semantics of the language.

    One final note: Cg is not the be-all and end-all of real-time shading languages. Nor is DirectX 8.1, 9, or whatever. Nor is the SGI shading language [sgi.com]. Real-time shading on commodity hardware is still a new enough field that the languages and the hardware are evolving. DirectX 9 and OpenGL 2.0 [3dlabs.com] both incorporate shading languages that will by nature be less tightly coupled to one vendor's hardware. Watch those over the next year or so.

    • by Anonymous Coward
      Cg limits shaders to single-pass rendering.

      Cg doesn't limit you to single-pass rendering at all. It just doesn't do it for you automatically, which would require some sort of scene graph or driver-level support. You can still do multipass in the same manner it's always been done.

      The Stanford Real-Time Shading Project [stanford.edu] people wrote a compiler that does precisely that: it uses Cg as an intermediate language.

      The Stanford language doesn't use Cg.

      Cg requires that you write separate vertex and fragment shaders. You can't do things like texture lookups inside a vertex shader; you can't change the position of your vertices inside a fragment shader

      Current hardware/drivers don't expose any way of doing this, so the only way to do it would be some sort of slow readback to CPU memory, which would be too slow to be of much use. Cg is designed to work with current and future hardware in the least intrusive way possible, so it wouldn't make sense to implement some high level abstraction like what was done with the Stanford language.
      • The Stanford language doesn't use Cg.

        At a SIGGRAPH course on real-time shading last week, Eric Chan described a version of the Stanford compiler that broke a shader down into its component passes and used Cg as an intermediate language when aiming at NVIDIA cards. (Eric -- or anyone else who was at that course -- if you're reading this, I'd welcome corrections. I know it was the Stanford project and Cg as an intermediate language but I'm hazy on the details.)

        Current hardware/drivers don't expose any way of [texture lookups in vertex shaders / position change in fragment shaders] so the only way to do it would be some sort of slow readback to CPU memory, which would be too slow to be of much use.

        Exactly. This goes back to Richards' claim that the Right Solution is to program everything in C/C++ and make the compiler smart enough to figure out how to partition it all into vertex shaders, fragment shaders, multiple passes, scene graph management, vector processing... I don't want a fully general language down in the fragment shaders. I want to know what the hardware there can and can't do so I can make things run fast.

  • Quote from the article :

    "Overall, Cg is not a generic language for graphics programming; it is a language for NVIDIA's current generation of graphics card."

    Well, what does anyone expect! Cg is a tool to help developers take advantage of pixel and vertex shaders *today*. And which pixel and vertex shaders do they know best? NVidia stated in their original press announcement that it would work across DirectX and OpenGL, as well as with other pixel and vertex shaders that comply with the specifications and even mentioned that other chip makers would be able to optimise the compiler for their particular hardware.

    The games industry moves along at a rocket pace, it's all about performance and getting the most out of the hardware right now. I applaud NVidia for actually doing something for today, rather than just talking about how great things are going to be tomorrow, and fail to see why leaving features unimplemented is a cardinal sin when they're not available in their current generation of chips.

    Reading their press release, I don't know what the hell Codeplay want. Some attention maybe.
  • I would like to see Linux ported to the GeForce 6 (or whatever) shader language.

    Then we'll have "portable" software!

    Just kidding... :)

  • What does a vectorizing compiler for a C-like language for the x86/PS2 have to do with a C-like shader language for nVIDIA graphics processors? It seems to me they are different languages for different purposes, even running on different parts of the same system.
    • Re:I don't get it (Score:3, Interesting)

      by zenyu ( 248067 )
      What does a vectorizing compiler for a C-like language for the x86/PS2 have to do with a C-like shader language for nVIDIA graphics processors?

      The PS2 has two almost identical MIPS chips souped up with fast SIMD math instructions. One runs the OS/AI/etc. as a pretty standard processor. The other is dedicated to processing vertex arrays and the like. This is the "GPU", but really it is a full-fledged processor that is simply used in a streaming fashion.

      The talk this year at SIGGRAPH was all about how the GPU's are all becoming streaming processors that can handle almost anything thrown at them. Using them effectively is all about never asking for any data not already fetched, that is no random access to memory. A huge portion of the die on a modern CPU is dedicated to caching, GPU's get faster at a greater rate because they use most of their die for logic gates and what is left is used for the very specialized texture memory access, with little or no caching of random access.

      As for the integer argument, I think nVidia heard that loud and clear, and it's almost certainly going to be in the OpenGL 2 shader language (which is C-like and has branching and loops even in the fragment shader.) 3D Labs and ATI promise drivers weeks after ratification. Except for integers, nVidia should be able to do this in their driver with extra passes. For integers they can just suggest 'safe' texture sizes for data as a workaround.

      OpenGL2 also asks for effectively unlimited program size, which no one will have initially. This of course can be simulated with more passes, but you'll be responsible for any such splitting, since it tells you whether the shader compiled and 'out of program space' is one of the acceptable errors.

      ATI is known for not-so-great drivers and 3D Labs has an external company writing their Linux drivers, so I'm not sure they will be any more stable than the nVidia ones. Both booths told me the Linux drivers would be closed source, but I didn't ask, so I guess they are hearing people asking for easier-to-fix drivers.
  • The deal here is that we currently have GPU's that simply cannot implement anything like a full-up C compiler. There is no point in wishing for something that the hardware simply cannot support.

    Cg doesn't have integers because GeForce chips don't implement integer math operations. There are no pointers because the hardware doesn't implement them.

    So, the choice here is either to put up with a C subset that will grow with the hardware until it's not a subset anymore (and live with the consequent lack of compatibility between versions of the hardware and Cg compilers) -- OR you carry on writing GPU-based shaders in machine-code (which *also* changes with hardware versions).

    We are at the very beginning of a revolution here and as such, we have to put up with some initial inconveniences.

    A better debate for the /. crowd is whether we should embrace Cg now - or wait a year (or more) for the hardware to catch up with something like the OpenGL 2.0 shader language (which is very similar to Cg - but isn't implementable on most hardware...yet).

    At the OpenGL Birds-of-a-Feather session at SigGraph last week, nVidia clearly expressed an interest in working with ATI and 3Dlabs on the OpenGL 2.0 standard - but those of us who need to use realtime shader languages simply cannot wait another year. I think we should expect to use Cg until something better shows up - probably in the form of the OpenGL 2.0 shader language.

    One should note that the Direct3D DX-9 shader language (called HLSL by Microsoft) is basically the same thing as Cg.
  • No pointers. This is Cg's most serious omission. Pointers are necessary for storing scene graphs, so this will quickly become a serious omission for vector processors that can store and process the entire scene or even sections of it.

    You do not need (C-style) pointers for high performance graphics. You do not need pointers even for representing relational structures. People have been implementing graph algorithms in languages without pointers since before most of us were born. You can even do it in a language as tedious as Fortran 66. C pointers are a bizarre aberration in language design and play havoc with high performance computing and optimization. You have to jump through hoops in order to even optimize simple uses, and then add lots of special purpose declarations to make them work. Any use of C pointers can be replaced with the use of arrays and integers (but the logic of your program may change dramatically).

    Another reason pointers are generally not such a good idea in graphics or high performance computing is that they have to be large enough to address all of memory. An index into an array can be 1, 2, 3, or 4 bytes long depending on how much data it actually needs to address. That can lead to saving a lot of space.
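    For instance, here is a minimal sketch of a linked list built out of a static array and small integer indices, with no pointers anywhere (toy code, made-up names):

        // Pool-backed list: "next" is a 2-byte index into the pool, not a pointer.
        enum { POOL_SIZE = 1024, NIL = -1 };

        struct Node {
            float value;
            short next;                 // index of the next node, NIL terminates
        };

        static Node  pool[POOL_SIZE];
        static short free_head = 0;

        // One-time setup: chain every slot onto the free list.
        void init_pool() {
            for (int i = 0; i < POOL_SIZE; i++)
                pool[i].next = (i + 1 < POOL_SIZE) ? short(i + 1) : short(NIL);
        }

        // Prepend a value to the list whose head index is given; returns the new head.
        short push(short head, float value) {
            short n = free_head;
            if (n == NIL) return head;  // pool exhausted; caller keeps the old list
            free_head = pool[n].next;
            pool[n].value = value;
            pool[n].next  = head;
            return n;
        }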

    When dedicated C hackers make such statements, it is understandable. But a company in the business of writing high performance compilers ought to be familiar with the work, programming styles, and languages that people in high performance computing adopt, and those often don't include pointers. C programmers want pointers because they are used to them, and CodePlay is in the business of satisfying this desire, but that doesn't make it a good engineering choice.

    Incidentally, I program in C++, including numerical and graphics code. It is quite easy to eliminate most uses of pointers from C++ code, and the result is code that is a lot easier to debug and often runs faster, too.

    • If you use references in C++, you're using pointers. I'm well aware that there are semantic differences between the two, but a lot of the optimization problems that come from using pointers also apply to references.

      You're not particularly creative with pointers if you honestly believe they can be replaced by arrays. Lots of neat effects are very easy to do with pointers, but a pain to do with arrays. Function pointers come to mind. Along with the fact that pass-by-reference just doesn't exist in straight C. An array is nothing but a const pointer you didn't have to malloc the memory for, most of the time. Linked lists can't be duplicated in any way, shape, or form in an array. They can be horribly cut into little useless pieces and sorta have the semantics of a linked list, except for that unlimited size part. Pointers are there for dynamic memory, and that can't be replaced with arrays.

      Now in C++, because of the inlining and some of the other neat tricks like templates, you don't have to use pointers nearly as much. Some of the aliasing problems go away, but pointers are ungodly useful in C and C++. They get used all the time for overloading and polymorphism in C++, so they aren't useless there either.

      Oh yeah, pointers at Intel don't have to be big enough to address all memory; just write some huge-model DOS code... Oh, the horror of working on that. Done it in a former life. In fact, on all recent x86 there is no such thing as a pointer that can address all of memory. On the 386SX, I believe there was a 16MB limit due to address lines coming off the CPU. But even those CPUs could address something like 64TB (yes, that's a T as in Tera) of virtual address space. Even the latest greatest CPUs from Intel can't deal with that in a single pointer. Even protected 32-bit mode is segmented, just nobody in their right mind uses it for much.

      I think the major beef with Cg that a lot of people are missing is that it's being touted as highly portable to lots of hardware (hmm, that's my impression from what I've read, I don't know that I've seen the actual press release from NVidia), but it has very, very specific design limitations, so that there are a lot of hardware features on cards not made by NVidia that can't be taken advantage of. If they didn't try to say it'd be highly useful for competitors' hardware, and instead said that it is specifically for their hardware, there would be fewer gripes. I'm not particularly up to speed on this area, but I think it would be a huge boon to be able to have a specific language for a GPU. A lot of very cool stuff could be done using it. My guess is that nobody will want to deal with the lock-in problems of using a specific vendor's code on a non-embedded system. So it'd be less useful to somebody who was developing for stuff he couldn't control the GPU unit on (say, like game developers). So NVidia is trying to bill it as portable so there is less concern. I hope it all ends up that cool 3D rendering is easier, cheaper and faster; if that happens I'm happy.

      Kirby

      • If you use references in C++, you're using pointers. [...] Lots of neat affects are very easy to do with pointers, but a pain to do with arrays. Function pointers come to mind.

        C and C++ use the term "pointer" for a lot of things. It is the totality of those things, and the fact that they are all the same datatype, that presents problems.

        "Not using pointers" means not using the stuff that is specific to pointers in C/C++: pointers into the middle of arrays, pointer arithmetic, pointers to local variables. Things would be a lot less problematic if "pointer into the middle of an array", "pointer to local variable", "pointer to heap-allocated object", etc., all had different types.

        References are not "pointers": you can't do arithmetic on them, you can't store them in data structures, etc., many of the things that cause problems. But even unrestricted use of references is problematic and error prone in C++, and it is best to limit oneself mostly to references to variables passed as function arguments.

        You're not particularly creative with pointers, if you honestly believe they can be replaced by arrays.

        I am decidedly not creative with pointers. I used to be very creative with pointers, but I found that hand-crafted pointer code that looked "efficient" didn't run any faster on modern hardware and was a lot harder to debug.

        • "References are not "pointers": you can't do arithmetic on them, you can't store them in data structures, etc.,"

          Just to be clear: you can, of course, initialize a reference in an object. That is as problematic as pointers, and I avoid it.

        • and it is best to limit oneself mostly to references to variables passed as function arguments

          Unfortunately, doing that leads to most of the optimization problems you were trying to avoid. Aliasing is not your friend when doing optimization.

          Things would be a lot less problematic if "pointer into the middle of an array", "pointer to local variable", "pointer to heap-allocated object", etc., all had different types.

          Having all the various kinds of pointers be different would be a very bad idea. Very, very bad, especially if they added it to C++, where type safety is important. Both a pointer and a reference are nothing more than an address with an associated layout (granted, a pointer has more operations that are legal). You'd have to template anything that worked on a pointer, which isn't my idea of fast compilation, or small code.

          The problem with most pointer code that is creative is that it isn't the highly optimized case. For loops that work on pointers aren't as fast as the same for loop with an index, precisely because for loops with an index are more common, so more time is spent making them run optimally, not because there is anything about the code that isn't equivalent. It also involves aliasing, and several other things the compiler has to make conservative assumptions about, which makes it very difficult to optimize. References are nice, but they aren't much better than pointers for optimization. Inlining and templates are where C++ really shines at optimization over C.
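          For concreteness, the two forms being compared look like this (a trivial sketch; whether one actually ends up faster depends entirely on the compiler):

            // Two equivalent loops summing an array.
            float sum_indexed(const float *a, int n) {
                float s = 0.0f;
                for (int i = 0; i < n; i++)                 // the index form compilers see most often
                    s += a[i];
                return s;
            }

            float sum_pointer(const float *a, int n) {
                float s = 0.0f;
                for (const float *p = a; p != a + n; ++p)   // same work, walked with a pointer
                    s += *p;
                return s;
            }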

          I found that hand-crafted pointer code that looked "efficient" didn't run any faster on modern hardware

          That has nothing to do with modern hardware, and a lot to do with modern compilers. Modern compilers are orders of magnitude better at common-case code. Ancient C compilers translated the code in obvious ways to assembler, so you used pointers if you knew pointers were faster when generated by your compiler. Compilers are the big trick; the hardware is merely a cost/reliability issue for most things.

          Kirby

    • The last time I worked on a commercial game (it's been a few years), we avoided pointers like the plague. We also avoided dynamically allocating and deallocating memory. Everything was kept in big honkin' arrays because it was FAST. Accessing into an array with a constant index is a very simple operation because the details are taken care of at compile time. Using pointers, however, carried a huge amount of overhead. In real time games, performance is everything.

      Another benefit of this system was reduced bugs. Eliminate pointers and you automatically eliminate a good portion of your potential bugs.
  • Codeplay is probably just upset because Nvidia is setting a precedent for graphics companies providing stuff like this free of charge to whoever wants it. It's got to be hurting Codeplay's business. It's obviously in Codeplay's favor if companies like Nvidia stay away from this stuff and leave it up to Codeplay, so Codeplay can sell their proprietary commercial products to fill the gap. Other than dissent from Codeplay itself, Cg seems to be fairly well accepted by developers.

    Codeplay was probably planning on making a DX9 backend for their commercial product, so Nvidia is just raining on their picnic. We'll see what happens to Codeplay over the next year or so.
  • i, for one will laugh when M$/Nvidia lose the race to ATI/Linux

    *crosses fingers*
    • Isn't ATI going more with Microsoft when developing their hardware than nVidia is? And last I checked, NVidia actually had better Linux support for their current line of products than all other companies, although ATI users, I guess, have used FireGL drivers to get 3D for their newer Radeons. I hope that the best technology wins the battle, whether it be RenderMonkey (ATI), 3Dlabs' implementation, or Cg. There are two proposals for one such language to be included as part of the OpenGL 2.0 standard, I believe, or at least according to the latest opengl.org poll. Also, sometime next month, as was posted earlier here, NVidia is actually open-sourcing their Cg toolkit, so hopefully it will be enough for the Linux community to port it. That, and look at the responses to questions about Cg here [cgshaders.org];
      it shows some interesting answers to questions that would come up about Cg. It seems to also be available for Linux. So hopefully, if NV can remain able to enhance both APIs (being DirectX and OpenGL), everyone will benefit.
  • lmb is SO horribly, wonderfully good looking!!
  • Anyone who's played games in the last two years knows that an NVidia chip is the way to go. They update their drivers on an almost weekly basis, and make all their betas available for download as well. The online support is excellent, and just about every driver update seems to add a boost in performance. Add to this that their drivers are backwards compatible to at least the TNT2 chips, and I say I don't really care if they want to tout their own language and make it proprietary to their chips. This is a company that actually survived the dotcom burst, and is continuing to thrive because they make a damn good product.
    • Anyone who's played games in the last two years knows that an NVidia chip is the way to go.

      Use Nvidia for games if you like. For some of us, open source - which Nvidia drivers are not - matters more than video gaming frame rate.

      Nvidia => no BSD support, no support if your Linux kernel strays too far from the snapshots they use.

      • The situation you describe is still consistent with frame rates mattering most. It's just that your frame rates are currently null. If NVidia provided drivers of equal quality for arbitrary kernels and versions, I don't think you would be so snooty about it.
      • I certainly can't argue with the relationship between NVidia and Open Source, but I also can't fault the company for consistently delivering quality hardware and software to *their* market, which is people running MS who want high quality graphics. As long as they continue to do this, I'll continue to support their products.
      • Nvidia => no BSD support, no support if your Linux kernel strays too far from the snapshots they use.
        >>>>
        That's crap. Based on some patches for 2.5.17 floating around, I managed to get my set working on 2.5.23. If you take a look at the actual abstraction layer, it is complete. You could port the kernel driver to OS X if you wanted to (oh wait, they already have!)
  • C/C++ is not a language of choice for vector applications; HPF (High Performance Fortran) is. Although a crockish hack on F90, it's quite usable, but we really need some more vectorizable languages. I have made an early attempt at a vector unit (GPL now, disregard copyright), Ganymede [bensin.org], but that is far from completion and will never see the light of day. Still, we need vector processors. For a good free HPF compiler check out this: Adaptor [www.gmd.de]
  • by KewlPC ( 245768 )
    I thought that Cg didn't support if...else (etc.) because the pixel and vertex shader hardware itself doesn't support that sort of thing. If the hardware doesn't support it, why should Cg?

    I distinctly remember John Carmack saying in his .plan file how the lack of support for conditional jumps and the like in the shader hardware really annoyed him.

    I doubt there is some evil conspiracy going on here. nVidia may add if...else to Cg in the future, not due to some underhanded plot, but because once the shader hardware supports conditional jumping it only makes sense that Cg would as well.
  • Cg isn't a universal, all-purpose graphics language. It is specifically tailored for writing custom pixel and vertex shaders for newer 3D cards like the GeForce3 & 4 and newer ATI cards.

    The hardware in those cards has certain limitations (dunno 'bout the integers, but I've heard (from John Carmack's .plan file) that the hardware itself lacks support for conditional jumps i.e. if...else) when it comes to custom pixel and vertex shaders.

    It seems like there's rampant misunderstanding when it comes to Cg, so I'll try to clear things up:
    1)It is *ONLY* for writing custom pixel and vertex shaders for 3D cards that support custom pixel and vertex shaders.
    2)The alternative to Cg is to write your pixel/vertex shader(s) in an assembly-like language. This is assembly language for the 3D hardware, not the CPU or anything else. Again, this isn't x86 assembly.
    3)The shaders produced are only used by the 3D hardware, and only for the purpose of allowing developers more control over how objects look (i.e. the developer can write a shader that mathematically simulates the way light bounces off human skin, then tell the 3D hardware to use that shader for all polygons that are supposed to be human skin), and have absolutely nothing to do with speeding up graphics operations or other speed optimizations.
  • ...film at 11. If NVidia Cg can be used without diseasing your code, it makes VectorC's product irrelevant along with Intel's SIMD hack. It takes all of that and moves it onto dedicated hardware. The key word here is *IF* you can use Cg without mucking up your code. Just looking at VectorC's source for their benchmarks, I saw some things that were nonconforming for ANSI/ISO C. Pot. Kettle. Black. Also, maybe I missed it, but I didn't see what switches they used on the other compilers (grin).

    I suspect that disciplined programmers can use either tool without making their code proprietary. Use MACROS for compiler-dependent stuff! Wrap proprietary functions!!! Of course, when you are shoving games out the door, how many stop to think about coding discipline? So, then it becomes a question of who you would rather risk getting locked into...

  • Is it just me or are break, continue and goto all considered to be evil programming practices (with the exception of break in a switch structure)? And switch isn't at all necessary if you have if and else (you can make a switch structure out of those). Lack of pointers and integers may or may not be a problem, I don't really know much about the language.
  • Um, I see a lot of comments along the lines of "NVIDIA aren't including loops in the language because GPUs just can't do loops". It seems NVIDIA [gamers.com] aren't aware of that: the next-generation "NV3x" hardware supports loops up to 64 levels of nesting... They've also grown the maximum pixel shader program size by a nice little factor of 512 (65,536 rather than 128 instructions per program). Also, it says "dynamic flow control" in the chart, which sounds like maybe arbitrary branching (there's your GOTO right there) could be supported.

    That said, it does seem a bit weird not to make Cg strong enough to include features that are obviously needed for their own next generation of hardware... But all the conspiracy theories have already been used up, so I'll just settle for introducing some facts into the discussion. ;^)
  • I haven't read too deeply into it, but I got the impression that it was supposed to be more of a markup language than a procedural language, much like VRML was a markup language.

    You're supposed to be able to specify a scene in a procedure-neutral way. Then the hardware will decide how best to optimize it in terms of its capabilities.
  • It's a shader language, not an all-purpose programming language. It's more like DSP-style programming, for those few people who are familiar with that sort of thing.

    You don't need integers, for example, because NVidia's hardware works entirely in floating point. It's not like you could use Cg to parse text files, nor would you want to.
  • Cg is useful right now. Some people will use it. If graphics hardware evolves into more general purpose CPU's, then people will use "something else". "Vector C" might be that "something else", although it is hard to predict the future.

    I'd rather put my eggs in the "works right now" basket than gamble on how the future will be. It is too early to standardize on something that doesn't even exist (or only exists for the PS2), so "Vector C" will not replace Cg anytime soon. Give it 5-10 years, and we will see what happens. In the meantime, if Cg is useful to you, go ahead and make use of it.

    And please don't worry about standardization just yet. Before we can standardize, we need to find out which features are useful, and that will take several years of experimentation and competition in the marketplace. In the meantime, Cg could come in handy.

  • One of the most significant points made by the author of this article is the lack of pointers in Cg. While his reason for including pointers seems to be focussed on higher-level constructs such as scene description mechanisms (scene graphs, etc.) which are handled in the rendering pipeline entirely outside the likes of Cg (and very efficiently, I might add), this point is fundamentally untrue to begin with:

    Dependent texture operations ARE pointer operations, and they have been in Cg from the start.
