Codeplay Responds to NVidia's Cg
Da Masta writes "Codeplay Director Andrew Richards has some interesting things to say about NVidia's Cg graphics language. Just to refresh, Codeplay is the company that publishes Vector C, the badassed, high performance C compiler. In brief, it seems as though Cg isn't the universal, standard graphics language some pass it off to be. Certain design considerations in the language, such as the lack of integers, break/continue/goto/case/switch structures, and pointers, suggest a general lack of universal usefulness. This leads to suspicion that NVidia plans to extend and tailor the language in the future to suit its own hardware and its features. Of course, this is all old news to those of us who noticed NVidia co-developed the language with the Evil Empire."
Bias?! What bias?! (Score:1, Funny)
No bias here at all!
Re:Bias?! What bias?! (Score:2)
Yes?
Re:Bias?! What bias?! (Score:1, Funny)
Re:Yes, Microsoft is Evil (Score:1)
"As a result of Microsoft's acts of destruction, PC technology is ten years behind where it should be."
I feel compelled to agree with this, though. Had standards bickering been pushed aside, we might well have hover cars, cold fusion and AI spacecraft searching the galaxy on our behalf by now.
Re:Yes, Microsoft is Evil (Score:2)
But that's just my opinion, I've been proven wrong in the past...
what? (Score:1)
Goto is not a cuss word when used wisely (Score:2)
Who uses goto statements? I'd count that as a feature.
Some CS people hear "go to hell" and are more offended by the "go to" than by the "hell", but in my C code, I use goto carefully to break out of nested loops and to bail on exceptions. Even though the Java language has goto bound to nothing, it has nearly equivalent structures: 1. try...catch; 2. labeled break.
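Something like this, for instance (a toy sketch with made-up names, just to show the shape of it):

#include <cstdio>

// Search a matrix for a target value; goto bails out of BOTH loops at once.
int find_in_matrix(const int *m, int rows, int cols, int target) {
    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            if (m[r * cols + c] == target) {
                std::printf("found at (%d, %d)\n", r, c);
                goto done;              // the "labeled break" C never got
            }
        }
    }
    std::printf("not found\n");
done:
    return 0;
}

int main() {
    const int m[6] = { 1, 2, 3, 4, 5, 6 };
    return find_in_matrix(m, 2, 3, 5);
}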
Re:Goto is not a cuss word when used wisely (Score:1)
Re:Goto is not a cuss word when used wisely (Score:2)
Re:Goto is not a cuss word when used wisely (Score:2)
Funny, I use return to break out of nested loops. If your functions are so huge you feel the need to use goto, that's a sign they need to be split up. I've found it pays off to do this, because I often end up reusing the new, smaller subfunction. There's really no justification for using goto in human-written C code.
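For instance (a rough sketch, hypothetical names):

#include <cstdio>

// The nested search lives in its own small function, so plain "return"
// replaces goto -- and the helper tends to get reused elsewhere.
static bool find_in_matrix(const int *m, int rows, int cols, int target,
                           int *out_row, int *out_col) {
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            if (m[r * cols + c] == target) {
                *out_row = r;
                *out_col = c;
                return true;            // bails out of both loops at once
            }
    return false;
}

int main() {
    const int m[6] = { 1, 2, 3, 4, 5, 6 };
    int r, c;
    if (find_in_matrix(m, 2, 3, 5, &r, &c))
        std::printf("found at (%d, %d)\n", r, c);
    return 0;
}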
Re:Goto is not a cuss word when used wisely (Score:1)
Re:Goto is not a cuss word when used wisely (Score:1)
The anti-goto campaign is understandable in the respect that it is a method of keeping the untalented from shooting themselves in the foot. By the same token, all languages but VB.NET are evil and are only for the crazy kooks!
Re:Goto is not a cuss word when used wisely (Score:2)
I would make an analogy with multiple inheritance in C++. Now if you've reached the point where you're considering using MI, you've already got some modicum of experience and judgement. I used MI maybe 3 times in my code (all of which seemed eminently reasonable), considering the situation carefully before making the decision. And guess what: one of those 3 times it unexpectedly blew up in my face horribly, just like the anti-MI people always said. (I hit a bug in GCC that left me baffled for 2 days.) Somehow MI seems to be intrinsically evil, although it seems so tempting and useful sometimes.
I've learned my lesson, and now I have a strong anti-MI bias in my coding. Using goto is the same kind of thing for me, although it isn't nearly as serious a problem.
I'm not an "anti-goto fanatic" (I don't really care that much either way), I just like arguing about it :).
Re:Goto is not a cuss word when used wisely (Score:2)
Re:Goto is not a cuss word when used wisely (Score:1)
Anyway, if you meant abstract base classes, I agree with you :).
Re:Goto is not a cuss word when used wisely (Score:1)
Let's not make rash assumptions about what Tony does and does not know, shall we? Especially when he's making perfect sense. Suppose you have two classes that publicly inherit from Automobile (which need NOT be abstract at all), Car and Truck say. Now let's say you want a new class, imaginatively called Caruck, to publicly inherit from both Car and Truck. You should probably make Car and Truck _virtual_ Automobiles, so that Caruck only has one copy of Automobile's variables.
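In code, that's just (minimal sketch):

struct Automobile { int wheels; };

// "virtual" here means Car and Truck share a single Automobile subobject.
struct Car   : virtual public Automobile { };
struct Truck : virtual public Automobile { };

// Without the virtual keyword above, Caruck would contain TWO copies of
// Automobile's members and "c.wheels" below would be ambiguous.
struct Caruck : public Car, public Truck { };

int main() {
    Caruck c;
    c.wheels = 4;   // unambiguous thanks to virtual inheritance
    return 0;
}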
Re:Goto is not a cuss word when used wisely (Score:1)
Re:Goto is not a cuss word when used wisely (Score:1)
A lot of numerical methods people use Fortran and Fortran has "GO TO". If you did a quick conversion to C you'd probably just leave it. f2c probably uses goto, but I don't remember for sure.
-Kevin
Re:Goto is not a cuss word when used wisely (Score:1)
This is certainly a factor. Lots of NA people started out on FORTRAN and some still cling to old habits. I don't think that's the main reason though; the individuals I have in mind have been using C/C++ exclusively for the past ten years or so -- f2c is definitely out. I even seem to remember reading an article in the Communications of the ACM (quite a while ago) extolling the virtues of 'goto' for numerical (specifically, matrix) applications. It seemed a bit peculiar at the time but perhaps the guy had a valid point to make after all.
Please cut the profanity (Score:1)
one should never NEED to break ... out of a ... for loop
Then what do you do when you get a "Virtual memory exhausted" error on a widely deployed machine with extremely limited but non-upgradable RAM? Surely you don't switch to C++ exceptions, because that will take up even more RAM for the exception handling code, right?
USE A ... WHILE OR DO WHILE LOOP
I've considered this, but I'm not sure GCC's optimizer will know what to do with putting complicated expressions in a while loop's condition. I don't want my profiler to report a 30% performance hit from inefficient control structures.
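The rewrite I have in mind looks roughly like this (a hypothetical sketch, not my actual driver code):

#include <cstdlib>

// Flag-controlled while loops instead of goto.  Every iteration re-tests
// "ok", which is exactly the extra work I'm not sure the optimizer will
// fold away.
static void fill_buffers(void **bufs, int rows, int cols, std::size_t sz) {
    bool ok = true;
    int r = 0;
    while (ok && r < rows) {
        int c = 0;
        while (ok && c < cols) {
            bufs[r * cols + c] = std::malloc(sz);
            if (bufs[r * cols + c] == NULL)
                ok = false;             // bail out of both loops without goto
            else
                ++c;
        }
        ++r;
    }
}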
Function calls have overhead (Score:1)
Funny, I use return to break out of nested loops. If your functions are so huge you feel the need to use goto, that's a sign they need to be split up.
I write video drivers for soft real-time applications. If I split up a function into an inner function and an outer function wherever I need to break out of something, performance drops 30% because now I have to pass a dozen arguments to a function call in a loop, and on some architectures, function calls are very expensive.
Re:Goto is not a cuss word when used wisely (Score:1, Insightful)
1) rewrite the sections
2) break the rule
If the code section is long, but any functional section fits on a page, then the code is fine. If goto helps keep the code clean (rather than nested if's, or shedloads of small function calls), then use it.
Rules are there to make you think before you break them.
Re:what? (Score:2, Interesting)
And I think the same would apply to other vertex/pixel shading units.
Why?
Because the VUs have a very small amount of memory, and to get the most out of them you should almost count cycles as well.
I haven't tried VectorC, but considering the VU I think it's the only sane option apart from doing the stuff in asm (which I think might be preferable anyhow).
As for Cg, I think I have to agree with the points Codeplay makes about Cg being too simple and aimed at the current NVidia implementations, but the statement itself is to be considered FUD.
I have been working with 3d for some years now and spent the last two years coding professionally for the PS2 and Dreamcast.
IMHO, most stuff defining appearance (bumpmaps, etc.) should, when it comes to supporting 3D hardware, be translated from a really high level (RenderMan?).
Cg feels too low-level and specific for this, especially if you plan on supporting consoles.
VectorC could possibly (though questionably in reality?) get away with it in this case, because most of the code will be C/C++ anyhow and thus the specific code could be taken out and recompiled with VectorC when aiming for performance.
/ Jonas Lund
Re:what? (Score:2, Informative)
So even if you never use gotos in C/C++, they still get compiled in as unconditional jumps at the assembly level. The same should happen with pixel/vertex programming.
Re:what? (Score:2)
Anyone who wants to efficiently implement a state machine.
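E.g. a toy word-counter where each state jumps straight to the next state (made-up example):

#include <cstdio>

// Tiny lexer-style state machine: each state transfers control with goto
// instead of looping back through a switch.
int count_words(const char *s) {
    int words = 0;
    between_words:
        if (*s == '\0') return words;
        if (*s == ' ')  { ++s; goto between_words; }
        ++words;
        goto in_word;
    in_word:
        if (*s == '\0') return words;
        if (*s == ' ')  { ++s; goto between_words; }
        ++s;
        goto in_word;
}

int main() {
    std::printf("%d\n", count_words("an efficient state machine"));   // prints 4
    return 0;
}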
Re:what? (Score:2)
Re:what? (Score:1)
LOL. That's now permanently inscribed in my fortune file.
Re:what? (Score:1)
VectorC (Score:1)
Mmmm... I'd love to see Gnome and KDE compiled with VectorC or intel's compiler...
wow! (Score:1)
who gets to be the battleship?
Re:wow! (Score:1)
ATI has been seriously competing with nVidia since the TNT2, which intensified with their introduction of the Radeon. Now they are in a game of leapfrog and many other vendors have entered the race (SiS, Matrox, 3Dlabs, even Trident.)
Re:wow! (Score:2, Interesting)
One entry found for monopoly.
Main Entry: monopoly
Pronunciation: m&-'nä-p(&-)lE
Function: noun
Inflected Form(s): plural -lies
Etymology: Latin monopolium, from Greek monopOlion, from mon- + pOlein to sell
Date: 1534
1 : exclusive ownership through legal privilege, command of supply, or concerted action
they have almost all of the market share. they are almost a monopoly. i don't see what's so hard about this point. ati is making a valiant effort, but they aren't shipping the kind of units/chips that nvidia is. sis, matrox, 3dlabs and trident do not have a 3d architecture even comparable to the latest nvidia offerings.
if tomorrow nvidia released detonator drivers with a special feature or program, it would become a standard. big companies make games for nvidia chips. people speak about games in nvidia terms (that game's a geforce2 featureset game).
face it kiddo, nvidia is *the* video card maker right now when it comes to consumer desktop 3d.
Re:wow! (Score:1)
Re:wow! (Score:1)
1) make money
2) once 1 is done, gain a monopoly
i think it's safe to say that nvidia (yeah, screw their damn capital letter, the bums) is profitable. that leaves only 2 to be done, so any action taken by nvidia would be a concerted effort to gain a monopoly.
Re:wow! (Score:2)
2) once 1 is done, gain a monopoly
>>>>>>
I suppose you'll now tell us that you learned this at Harvard business school. Exactly where do you get off making up your own economic theories? Ford is profitable, are they a monopoly? Dell is profitable, are they a monopoly?
Re:wow! (Score:1)
Re:wow! (Score:2)
Re:wow! (Score:1)
Re:wow! (Score:1)
Re:wow! (Score:2)
PS> As for the Mazda thing, it's done all the time by car makers. Ford used to make some of Nissan's cars (an SUV and a minivan) from a line parallel to their Mercury brand. A lot of times a particular manufacturer does not have a product in a popular market (in the case of Mazda, small trucks), so they make a deal with another company to get in on the action. Volvo, for example, did this with its S40 compact, which is actually made by Mitsubishi.
Re:wow! (Score:1)
They have 68% of the video card market. Not quite "almost all", is it? Sure it's a high percentage, but this is not due to them being a monopoly. The fact that some of those abovementioned companies haven't come out with a comparable product (which is garbage in ATI's case, considering the 8500 was comparable to a Geforce 3 Ti500 at a lower price point up until the Geforce 4's release ... not to mention how the R300 is, by all accounts, up to 50% faster than a Ti4600 in some games) is irrelevant. The fact of the matter is, competition DOES exist; it's alive and well and it's going to be heating up even more in the next few months (I don't know if you know this or not, but 3dlabs is on the verge of releasing an architecture that will be found in workstation-class systems and consumer-level systems, and Trident will be releasing their own line of GPUs as well).
if tomorrow nvidia released detonator drivers with a special feature or program, it would become a standard. big companies make games for nvidia chips. people speak about games in nvidia terms (that game's a geforce2 featureset game).
Features that nVidia has implemented in their drivers and hardware were conceived by other standards bodies. Support for hardware transform and lighting was implemented in OpenGL long before the Geforce line of cards was released. The vertex/pixel shader standards were NOT conceived by nVidia; they were simply the first consumer-level video card manufacturer to implement them in hardware.
And yes, I do not dispute that the Geforce line of cards is THE benchmark for today's games. It's the most popular video hardware, how could it not be?
face it kiddo, nvidia is *the* video card maker right now when it comes to consumer desktop 3d.
It is not *the* only viable 3D solution, as any user with an ATI Radeon 8500 can attest (and there are a lot of those ... browse the forums on an enthusiast site like www.hardocp.com and see some of the flame wars which result over this point)
In summary, nVidia has made a superior product and that has paid off for them. I don't believe that they are a monopoly until there is no serious competition and I do not believe that to be the case.
Wahhh! (Score:1)
Well, Nvidia sure knows how to make a C guru cry!
This leads to suspicion that NVidia plans to add a (Score:3, Troll)
Smells like glide!
Re:This leads to suspicion that NVidia plans to ad (Score:1)
Re:This leads to suspicion that NVidia plans to ad (Score:1, Informative)
And we are calling NVidia bad? (Score:4, Funny)
It's a shader language... (Score:4, Interesting)
2. Exluna was bought by nVidia.
3. Exluna makes 2 renderman compliant renderers.
4. Shaders are used by renderman.
5. nVidia is touting CG as a compiled shader... just like one has to compile shaders for renderman.
6. Fill in the next 2 years here of having look development tools for major graphics studios that look close enough to the final renders that it speeds up FX work to near realtime for shading and lighting.
Re:It's a shader language... (Score:2, Informative)
- Cg came around long before exluna was bought by Nvidia. I overheard Tony Apodaca (the co-author of Advanced Renderman with Larry Gritz, the founder of exluna) at SIGGRAPH this week saying how Cg was pioneered by a bunch of renderman users who wanted to bring that technology to real-time hardware. Since renderman is almost 15 years old now, the user base that Cg could reach is vast.
- Pixar's lawsuit with exluna over trade secrets was settled, and the company was subsequently bought out by Nvidia. There is no guarantee that any of exluna's work will be continued. It's not uncommon in these situations to never see anything come of it. It's a nice idea that maybe nvidia will make renderman-supported hardware, but don't hold your breath. Nvidia wants to be the next intel, or microsoft, and you don't get there by supporting other people's software; you get there by eating competitors alive and flushing their work down the toilet.
- Best-case scenario: Larry Gritz re-releases BMRT, his free (as in beer) renderman ray-tracer, and at least we don't have to pay thousands for a production-quality renderman implementation. Or (even better) someone writes and releases another free (as in freedom) renderer that actually works.
Dan
Re:It's a shader language... (Score:1)
Paul
what about 3dlabs (Score:1)
3DLabs press release [3dlabs.com]
Having better control structures and pointers will be important down the line.
Yikes! Mucho deneros! (Score:1)
"Never cuss a man until you've walked a mile in his shoes. Then if he gets mad, you'll have a headstart and he'll be barefoot."
Actually, $800 isn't that much for such a tool ... (Score:1)
If I were a gaming corporation, and I knew that an 800 dollar tool could make my game faster for my end users, I'd do it in a heartbeat.
Lastly, you failed to mention that there is an 80 dollar version of the same software. It may not have the same features as the 800 dollar version, but it comes close enough.
Sunny
Re:Yikes! Mucho deneros! (Score:1)
Never settle with words that which can be settled with a flamethrower.
Re:Yikes! Mucho deneros! (Score:1)
Whining about competition... (Score:5, Insightful)
Codeplay's VectorC is optimized around PlayStation 2-centric vector graphics hardware and scene graph libraries, whereas nVidia's Cg is optimized for most current PC accelerators. Most PlayStation developers use licensed scene graph libraries, whereas most PC game developers use custom or licensed engines over low-level libraries, so both approaches are appropriate for their current customer base...
I think it's reasonable to assume Cg will evolve with its target hardware, and I'd rather nVidia do a good job with the current version than waste time on hypothetical future features. I'm using Cg now, and it's a great step in the right direction - a high-level shader language not owned by M$ and/or tied to D3D.
I think the biggest issue w/ Cg is how nVidia is going to address the divergence in silicon budgets between themselves and ATI - nVidia is pushing for more, faster vertex shaders, while ATI (w/ M$'s backing) is pushing for more powerful pixel shaders, i.e. D3D pixel shaders v1.4, exposed in D3D 8.1, are supported in ATI Radeon 8*, but no publicly announced nVidia cards support the nicely expanded instruction set and associated capabilities. nVidia also needs to complete fragment [pixel] shader support for OpenGL (and release source or a multithreaded version of the GL runtimes...)
Re:Whining about competition... (Score:2, Informative)
But that's just my opinion. And Vector C fits pretty well in with the PS2 VUs but almost every developer has at least one person doing mainly VU asm anyway so they may not be so eager to switch.
Re:Whining about competition... (Score:2)
All good points. My tuppence:
FYI, the current version is 1.01. Maybe that's why whole bunches of features aren't available? Has Microsoft's influence made us so paranoid that we feel compelled to seek Evil Empires around every corner?
Re:Whining about competition... (Score:2)
I'm more hoping that nVidia can leverage Cg to keep the pressure up on the OpenGL ARB to move to v2 faster, despite the intra-vendor infighting, patent pressure from M$, and the dilemma re: multipass strategies.
To do this, nVidia must make Cg usable for bleeding edge ATI products as well, and the divergence w/r/t pixel shader functionality is considerable. I'm hoping they won't do anything to preclude ATI supporting Cg, should ATI really choose to do so. At the same time, I don't expect ATI to do anything to damage their role reversal w/ nVidia w/r/t being M$'s favored vendor (and candidate for XBox2 chip supplier), so I'm not expecting ATI to step forward anytime soon onto the Cg front, or even GL 2...
This means nVidia needs to take some initiative to demonstrate they will not preclude support for next-gen pixel shaders, especially features that cannot be trivially abstracted or unrolled from a high-level language, like 3 and 4 levels of dereferencing of textures instead of one. Unfortunately, I have not had any favorable impressions on this topic when I talk to developer relations at nVidia.
Other than that, for a v1.01, they've done their usual damn fine job of providing stable code, and I'm enjoying the advantages of C over ASM for most of the grunt level shaders we need...
Re:Whining about competition... (Score:1)
So I think it's incorrect to say they're porting their PS2 technology to the PC.
whats the go? (Score:1)
NVidia has it right (Score:5, Informative)
If you put a general-purpose execution engine in the graphics engine, you need an OS, context switching, protection mechanisms, and all the usual system stuff. If pixel shaders aren't allowed to do much or run for long, managing them is much simpler. Bear in mind that all this stuff has to work in multiwindow environments - one app does not own the graphics engine, except on game consoles.
Cg isn't supposed to be a general-purpose language (Score:5, Informative)
I think what Richards is overlooking in his commentary is that Cg is not *supposed* to be a general-purpose graphics programming language. Its design goal was precisely what he said later in the article -- to expose the capabilities of current (and presumably future) NVIDIA hardware without requiring programmers to write assembly code. Likewise, conditionals like if, case, and switch aren't in there right now because the profiles the compiler is aimed at -- DirectX and OpenGL extensions -- don't yet support them. I expect this to change.
Also, Cg programs run at the level of vertices and pixels. This is the wrong place to be thinking about a scene graph: that happens at a much higher level of abstraction. Dealing with scene graphs in a fragment shader is a little bit like making L1 cache aware of the memory-management policy of whatever OS happens to be running.
After reading the article a few times, I think it's meant more as a "here's why our product is better than theirs" release than an honest criticism of the design of Cg. If he was interested in the latter, there are a few obvious issues. I won't go into them all, but here are two I ran into last week at a Cg workshop:
One final note: Cg is not the be-all and end-all of real-time shading languages. Nor is DirectX 8.1, 9, or whatever. Nor is the SGI shading language [sgi.com]. Real-time shading on commodity hardware is still a new enough field that the languages and the hardware are evolving. DirectX 9 and OpenGL 2.0 [3dlabs.com] both incorporate shading languages that will by nature be less tightly coupled to one vendor's hardware. Watch those over the next year or so.
Re:Cg isn't supposed to be a general-purpose langu (Score:1, Interesting)
Cg doesn't limit you to single-pass rendering at all. It just doesn't do it for you automatically, which would require some sort of scene graph or driver-level support. You can still do multipass in the same manner it's always been done.
The Stanford Real-Time Shading Project [stanford.edu] people wrote a compiler that does precisely that: it uses Cg as an intermediate language.
The Stanford language doesn't use Cg.
Cg requires that you write separate vertex and fragment shaders. You can't do things like texture lookups inside a vertex shader; you can't change the position of your vertices inside a fragment shader
Current hardware/drivers don't expose any way of doing this, so the only way to do it would be some sort of slow readback to CPU memory, which would be too slow to be of much use. Cg is designed to work with current and future hardware in the least intrusive way possible, so it wouldn't make sense to implement some high-level abstraction like what was done with the Stanford language.
Re:Cg isn't supposed to be a general-purpose langu (Score:1)
At a SIGGRAPH course on real-time shading last week, Eric Chan described a version of the Stanford compiler that broke a shader down into its component passes and used Cg as an intermediate language when aiming at NVIDIA cards. (Eric -- or anyone else who was at that course -- if you're reading this, I'd welcome corrections. I know it was the Stanford project and Cg as an intermediate language but I'm hazy on the details.)
Current hardware/drivers don't expose any way of [texture lookups in vertex shaders / position change in fragment shaders] so the only way to do it would be some sort of slow readback to CPU memory, which would be too slow to be of much use.
Exactly. This goes back to Richards' claim that the Right Solution is to program everything in C/C++ and make the compiler smart enough to figure out how to partition it all into vertex shaders, fragment shaders, multiple passes, scene graph management, vector processing... I don't want a fully general language down in the fragment shaders. I want to know what the hardware there can and can't do so I can make things run fast.
Sour grapes? (Score:1)
"Overall, Cg is not a generic language for graphics programming; it is a language for NVIDIA's current generation of graphics card."
Well, what does anyone expect! Cg is a tool to help developers take advantage of pixel and vertex shaders *today*. And which pixel and vertex shaders do they know best? NVidia stated in their original press announcement that it would work across DirectX and OpenGL, as well as with other pixel and vertex shaders that comply with the specifications and even mentioned that other chip makers would be able to optimise the compiler for their particular hardware.
The games industry moves along at a rocket pace, it's all about performance and getting the most out of the hardware right now. I applaud NVidia for actually doing something for today, rather than just talking about how great things are going to be tomorrow, and fail to see why leaving features unimplemented is a cardinal sin when they're not available in their current generation of chips.
Reading their press release, I don't know what the hell Codeplay want. Some attention maybe.
Linux on GeForge 6... (Score:2)
Then we'll have "portable" software!
Just kidding... :)
Re:Linux on GeForge 6... (Score:2)
I don't get it (Score:2)
Re:I don't get it (Score:3, Interesting)
The PS2 has two almost identical MIPS chips souped up with fast SIMD math instructions. One runs the OS/AI/etc. as a pretty standard processor. The other is dedicated to processing vertex arrays and the like. This is the "GPU", but really it is a full-fledged processor that is simply used in a streaming fashion.
The talk this year at SIGGRAPH was all about how GPUs are becoming streaming processors that can handle almost anything thrown at them. Using them effectively is all about never asking for any data not already fetched, that is, no random access to memory. A huge portion of the die on a modern CPU is dedicated to caching; GPUs get faster at a greater rate because they use most of their die for logic gates, and what is left is used for the very specialized texture memory access, with little or no caching of random access.
As for the integer argument, I think nVidia heard that loud and clear, and it's almost certainly going to be in the OpenGL 2 shader language (which is C-like and has branching and loops even in the fragment shader). 3D Labs and ATI promise drivers weeks after ratification. Except for integers, nVidia should be able to do this in their driver with extra passes. For integers they can just suggest 'safe' texture sizes for data as a workaround.
OpenGL 2 also asks for effectively unlimited program size, which no one will have initially. This of course can be simulated with more passes, but you'll be responsible for any such splitting, since it tells you whether the shader compiled and 'out of program space' is one of the acceptable errors.
ATI is known for not-so-great drivers, and 3D Labs has an external company writing their Linux drivers, so I'm not sure they will be any more stable than the nVidia ones. Both booths told me the Linux drivers would be closed source, though I didn't ask, so I guess they are hearing people asking for easier-to-fix drivers.
Re:I don't get it (Score:1)
Both VPUs consist of: VU, data memory, and a data decompression engine. Both VPUs can handle integers. (microinstructions IADD*, IAND, IOR, ISUB*)
However, VU1 is indeed different from VU0. It only operates in micro mode (ie, not as a coprocessor to the CPU). It also has more memory (both data and program) and an EFU (exponential and trig functions). Realize though, VU0 can operate in micromode as well, and thus has both program and data memory itself.
But the thing about it is... VPU0 is coupled with the CPU, while VPU1 is coupled with the GS through the GIF. (graphic synthesizer and its interface) The transfer between VU1 and the GIF is the highest priority transfer of the PS2.
It is a glorious architecture. I realize you can't see that from the little i have described here. Ars Technica had a couple articles a while ago; PS2 vs. PC (system level comparison) [arstechnica.com] and an overview of the EE [arstechnica.com]. They were written... well, a while ago. Perhaps even before the PS2 was in production - but they do catch the beauty of the EE.
It really is a better way of computing, for "dynamic applications." (better than the PC architecture) PCs still rule for word processors though...
A storm approaches, and i have no UPS. Please forgive any lack of polish - i fear losing this post to preview alone.
I think this is not such a bad solution. (Score:1)
Cg doesn't have integers because GeForce chips don't implement integer math operations. There are no pointers because the hardware doesn't implement them.
So, the choice here is either to put up with a C subset that will grow with the hardware until it's not a subset anymore (and live with the consequent lack of compatibility between versions of the hardware and Cg compilers) -- OR you carry on writing GPU-based shaders in machine-code (which *also* changes with hardware versions).
We are at the very beginning of a revolution here and as such, we have to put up with some initial inconveniences.
A better debate for the
At the OpenGL Birds-of-a-Feather session at SigGraph last week, nVidia clearly expressed an interest in working with ATI and 3Dlabs on the OpenGL 2.0 standard - but those of us who need to use realtime shader languages simply cannot wait another year. I think we should expect to use Cg until something better shows up - probably in the form of the OpenGL 2.0 shader language.
One should note that the Direct3D DX-9 shader language (called HLSL by Microsoft) is basically the same thing as Cg.
C-style pointers (Score:2)
You do not need (C-style) pointers for high performance graphics. You do not need pointers even for representing relational structures. People have been implementing graph algorithms in languages without pointers since before most of us were born. You can even do it in a language as tedious as Fortran 66. C pointers are a bizarre aberration in language design and play havoc with high performance computing and optimization. You have to jump through hoops in order to even optimize simple uses, and then add lots of special-purpose declarations to make them work. Any use of C pointers can be replaced with the use of arrays and integers (but the logic of your program may change dramatically).
Another reason pointers are generally not such a good idea in graphics or high performance computing is that they have to be large enough to address all of memory. An index into an array can be 1, 2, 3, or 4 bytes long depending on how much data it actually needs to address. That can lead to saving a lot of space.
When dedicated C hackers make such statements, it is understandable. But a company in the business of writing high performance compilers ought to be familiar with the work, programming styles, and languages that people in high performance computing adopt, and those often don't include pointers. C programmers want pointers because they are used to them, and CodePlay is in the business of satisfying this desire, but that doesn't make it a good engineering choice.
Incidentally, I program in C++, including numerical and graphics code. It is quite easy to eliminate most uses of pointers from C++ code, and the result is code that is a lot easier to debug and often runs faster, too.
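To make it concrete, here's the sort of thing I mean (a toy sketch, not from any real codebase): a linked list kept in arrays, where a small integer index stands in for every pointer.

#include <cstdint>
#include <cstdio>
#include <vector>

// A singly linked list stored in arrays: "next" holds indices, not addresses.
// The indices can be 16 or 32 bits regardless of the machine's pointer size.
struct IndexList {
    static constexpr std::uint32_t NIL = 0xFFFFFFFFu;
    std::vector<float>         value;
    std::vector<std::uint32_t> next;
    std::uint32_t              head = NIL;

    void push_front(float v) {
        value.push_back(v);
        next.push_back(head);
        head = static_cast<std::uint32_t>(value.size() - 1);
    }
};

int main() {
    IndexList list;
    for (float v = 1.0f; v <= 3.0f; v += 1.0f) list.push_front(v);
    for (std::uint32_t i = list.head; i != IndexList::NIL; i = list.next[i])
        std::printf("%g ", list.value[i]);   // prints: 3 2 1
    return 0;
}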
Re:C-style pointers (Score:1)
You're not particularly creative with pointers if you honestly believe they can be replaced by arrays. Lots of neat effects are very easy to do with pointers, but a pain to do with arrays. Function pointers come to mind. Along with the fact that pass-by-reference just doesn't exist in straight C. An array is nothing but a const pointer you didn't have to malloc the memory for, most of the time. Linked lists can't be duplicated in any way, shape or form in an array. They can be horribly cut into little useless pieces and sorta have the semantics of a linked list, except for that unlimited-size part. Pointers are there for dynamic memory, and that can't be replaced with arrays.
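For example, a dispatch table (toy sketch, made-up names):

#include <cstdio>

// The behaviour chosen at runtime lives behind a function pointer, which has
// no obvious plain-data equivalent.
static double add(double a, double b) { return a + b; }
static double mul(double a, double b) { return a * b; }

int main() {
    double (*ops[2])(double, double) = { add, mul };
    int choice = 1;                               // imagine this comes from user input
    std::printf("%g\n", ops[choice](3.0, 4.0));   // prints 12
    return 0;
}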
Now in C++, because of the inlining and some of the other neat tricks like templates, you don't have to use pointers nearly as much. Some of the aliasing problems go away, but pointers are ungodly useful in C and C++. They get used all the time for overloading and polymorphism in C++, so they aren't useless there either.
Oh yeah, pointers at Intel don't have to be big enough to address all memory; just write some huge-model DOS code... Oh, the horror of working on that. Done it in a former life. In fact, on all recent x86 there is no such thing as a pointer that can address all of memory. On the 386SX, I believe there was a 16MB limit due to address lines coming off the CPU. But even those CPUs could address something like 64TB (yes, that's a T as in Tera) of virtual address space. Even the latest greatest CPUs from Intel can't deal with that in a single pointer. Even protected 32-bit mode is segmented; just nobody in their right mind uses it for much.
I think the major beef with Cg that a lot of people are missing is that it's being touted as highly portable to lots of hardware (hmm, that's my impression from what I've read; I don't know that I've seen the actual press release from NVidia), but it has very, very specific design limitations, so there are a lot of hardware features on cards not made by NVidia that can't be taken advantage of. If they didn't try and say it'd be highly useful for competitors' hardware, and instead said that it is specifically for their hardware, there would be fewer gripes. I'm not particularly up to speed on this area, but I think it would be a huge boon to be able to have a specific language for a GPU. A lot of very cool stuff could be done using it. My guess is that nobody will want to deal with the lock-in problems of using a specific vendor's code on a non-embedded system. So it'd be less useful to somebody who was developing for stuff where he couldn't control the GPU (say, like game developers). So NVidia is trying to bill it as portable so there is less concern. I hope it all ends up that cool 3D rendering is both easier, cheaper and faster; if that happens I'm happy.
Kirby
Re:C-style pointers (Score:2)
C and C++ use the term "pointer" for a lot of things. It is the totality of those things, and the fact that they are all the same datatype, that presents problems.
"Not using pointers" means not using the stuff that is specific to pointers in C/C++: pointers into the middle of arrays, pointer arithmetic, pointers to local variables. Things would be a lot less problematic if "pointer into the middle of an array", "pointer to local variable", "pointer to heap-allocated object", etc., all had different types.
References are not "pointers": you can't do arithmetic on them, you can't store them in data structures, etc., many of the things that cause problems. But even unrestricted use of references is problematic and error prone in C++, and it is best to limit oneself mostly to references to variables passed as function arguments.
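E.g. something like this is about the only use of references I'm really comfortable with (a minimal sketch):

#include <vector>

// Pass-by-reference where I consider it safe: the reference never outlives
// the call, and there is no arithmetic or reseating possible.
static void normalize(std::vector<double> &v) {
    double sum = 0.0;
    for (double x : v) sum += x;
    if (sum == 0.0) return;
    for (double &x : v) x /= sum;   // this reference is also short-lived
}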
You're not particularly creative with pointers, if you honestly believe they can be replaced by arrays.
I am decidedly not creative with pointers. I used to be very creative with pointers, but I found that hand-crafted pointer code that looked "efficient" didn't run any faster on modern hardware and was a lot harder to debug.
Re:C-style pointers (Score:2)
Just to be clear: you can, of course, initialize a reference in an object. That is as problematic as pointers, and I avoid it.
Re:C-style pointers (Score:1)
Unfortunately, doing that leads to most of the optimization problems you were trying to avoid. Aliasing is not your friend when doing optimization.
Things would be a lot less problematic if "pointer into the middle of an array", "pointer to local variable", "pointer to heap-allocated object", etc., all had different types.
Having all the various kinds of pointers be different would be a very bad idea. Very, very bad, especially if they added it to C++, where type safety is important. Both a pointer and a reference are nothing more than an address with an associated layout (granted, a pointer has more operations that are legal). You'd have to template anything that worked on a pointer, which isn't my idea of fast compilation, or small code.
The problem with most pointer code that is creative is that it isn't the highly optimized case. For loops that work on pointers aren't as fast as the same for loop with an index, precisely because for loops with an index are more common, so more time is spent making them run optimally, not because there is anything about the code that isn't equivalent. It also involves aliasing, and several other things the compiler has to make conservative assumptions about, which makes it very difficult to optimize. References are nice, but they aren't much better than pointers for optimization. Inlining and templates are where C++ really shines at optimization over C.
I found that hand-crafted pointer code that looked "efficient" didn't run any faster on modern hardware
That has nothing to do with modern hardware, and a lot to do with modern compilers. Modern compilers are orders of magnitude better at common-case code. Ancient C compilers translated the code in obvious ways to assembler, so you used pointers if you knew pointers were faster when generated by your compiler. Compilers are where the big tricks are; the hardware is merely a cost/reliability issue for most things.
Kirby
Re:C-style pointers (Score:2)
Another benefit of this system was reduced bugs. Eliminate pointers and you automatically eliminate a good portion of your potential bugs.
Sounds like Codeplay doesn't like the competition. (Score:1)
Codeplay was probably planning on making a DX9 backend for their commercial product, so Nvidia is just raining on their picnic. We'll see what happens to Codeplay over the next year or so.
could be interesting (Score:1)
*crosses fingers*
Re:could be interesting (Score:1)
It shows some interesting answers to questions that would come up about Cg. It seems to also be available for Linux. So hopefully, if NV can remain able to enhance both APIs (being DirectX and OpenGL), everyone will benefit.
Prettiest Girl I know (Score:1)
They can program in Logo for all I care (Score:2, Insightful)
drivers still closed source (Score:1)
Use Nvidia for games if you like. For some of us, open source - which Nvidia drivers are not - matters more than video gaming frame rate.
Nvidia => no BSD support, no support if your Linux kernel strays too far from the snapshots they use.
Re:drivers still closed source (Score:1)
Re:drivers still closed source (Score:1)
Re:drivers still closed source (Score:2)
>>>>
That's crap. Based on some patches for 2.5.17 floating around, I managed to get my set working on 2.5.23. If you take a look at the actual abstraction layer, it is complete. You could port the kernel driver to OS X if you wanted to (oh wait, they already have!)
Vectorizing stuff is not a new thing (Score:1)
Huh? (Score:1)
I distinctly remember John Carmack saying in his
I doubt there is some evil conspiracy going on here. nVidia may add if...else to Cg in the future, not due to some underhanded plot, but because once the shader hardware supports conditional jumping it only makes sense that Cg would as well.
Not a universal graphics language. (Score:2, Informative)
The hardware in those cards has certain limitations (dunno 'bout the integers, but I've heard (from John Carmack's
It seems like there's rampant misunderstanding when it comes to Cg, so I'll try to clear things up:
1) It is *ONLY* for writing custom pixel and vertex shaders for 3D cards that support custom pixel and vertex shaders.
2) The alternative to Cg is to write your pixel/vertex shader(s) in an assembly-like language. This is assembly language for the 3D hardware, not the CPU or anything else. Again, this isn't x86 assembly.
3) The shaders produced are only used by the 3D hardware, and only for the purpose of allowing developers more control over how objects look (i.e. the developer can write a shader that mathematically simulates the way light bounces off human skin, then tell the 3D hardware to use that shader for all polygons that are supposed to be human skin), and have absolutely nothing to do with speeding up graphics operations or other speed optimizations.
Competitor X Hates Competitor Y's Product... (Score:2)
I suspect that disciplined programmers can use either tool without making their code proprietary. Use MACROS for compiler-dependent stuff! Wrap proprietary functions!!! Of course, when you are shoving games out the door, how many stop to think about coding discipline? So, then it becomes a question of who you would rather risk getting locked into...
Hrmz..... (Score:1)
What about NV30? (Score:2)
That said, it does seem a bit weird not to make Cg strong enough to include features that are obviously needed for their own next generation of hardware... But since all the conspiracy theories have already been used up, I'll just settle for introducing some facts into the discussion.
markup language vs. procedural language (Score:1)
You're supposed to be able to specify a scene in a procedure neutral way. Then the hardware will decide how best to optimize it in-terms of its capabilities.
Who ever said Cg was general purpose? (Score:2)
You don't need integers, for example, because NVidia's hardware works entirely in floating point. It's not like you could use Cg to parse text files, nor would you want to.
yes it is probably going to happen (Score:2)
I'd rather put my eggs in the "works right now" basket than gamble on how the future will be. It is too early to standardize on something that doesn't even exist (or only exists for the PS2), so "Vector C" will not replace Cg anytime soon. Give it 5-10 years, and we will see what happens. In the meantime, if Cg is useful to you, go ahead and make use of it.
And please don't worry about standardization just yet. Before we can standardize, we need to find out which features are useful, and that will take several years of experimentation and competition in the marketplace. In the meantime, Cg could come in handy.
Dependent Texture Reads == Pointer Dereferencing (Score:1)
Dependent texture operations ARE pointer operations, and they have been in Cg from the start.
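In CPU terms the analogy is roughly this (a sketch of the idea only, not real shader code):

#include <cstdio>

// Analogy only: the first "texture" stores indices (think of an environment-map
// UV), and the second lookup is driven by the fetched value -- the result of one
// read is dereferenced to get the next, just like p[q[i]].
int main() {
    const int indirection[4] = { 2, 0, 3, 1 };      // "texture" of indices
    const float color[4]     = { 0.1f, 0.4f, 0.7f, 1.0f };

    int i = 2;                                      // a fragment coordinate, say
    float c = color[indirection[i]];                // dependent read: a two-level,
                                                    // pointer-style dereference
    std::printf("%g\n", c);                         // prints 1
    return 0;
}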
Re:My bone with Cg (Score:1)
Re:My bone with Cg (Score:1)
1. There are no pointers in Cg, not sure what you are talking about here.
2. Possibly
3. No idea what you are trying to say with this one. Cg will compile to DirectX 8 vertex/pixel shaders and ARB_vertex_program under OpenGL, meaning it will run on any card.
4. Again, no pointers.
5. No classes at all; this isn't C++, it's C for graphics.
6. It's a shading language...
7. No idea what you are trying to say here.
---
Re:My bone with Cg (Score:3, Funny)
Re:My bone with Cg (Score:2)
Or possibly a bitter ex-BitBoys employee?