NVIDIA's Pixel & Vertex Shading Language
Barkhausen Criterion writes "NVIDIA has announced a high-level Pixel and Vertex Shading language developed in conjunction with Microsoft. According to this initial look, the 'Cg Compiler' compiles the high-level Pixel and Vertex Shader language into low-level DirectX and OpenGL code. While the press releases are running amok, CG Channel (Computer Graphics Channel) has the most comprehensive look at the technology. The article writes, 'Putting on my speculative hat, the motivation is to drive hardware sales by increasing the prevalence of Pixel and Vertex Shader-enabled applications and gaming titles. This would be accomplished by creating a forward-compatible tool for developers to fully utilize the advanced features of current GPUs, and future GPUs/VPUs.' "
3dfx/Glide part 2? (Score:2, Interesting)
From the story it sounds like NVidia will allow other cards to support Cg, so maybe they can. However, I wonder if ATI will be willing to support a standard which NVidia controls. It's like wrestling with a crocodile, if you ask me!
Re:3dfx/Glide part 2? (Score:1)
NVidia is avoiding at least one 3dfx pitfall ... their product is not simply a beefier version of the previous product (which in itself was a beefier version of the previous product (which in itself was a beefier version of the previous product (which in itself was a beefier version ... )))
Re:3dfx/Glide part 2? (Score:2)
Given that it was nVidia that purchased 3dfx and brought along many of the employees, I should hope that there would be some internal discussion of this.
Re:3dfx/Glide part 2? (Score:2)
Um, well, Cg gets compiled to DirectX or OpenGL, so it follows that any card that can do DirectX or OpenGL (read: all of them?) will benefit from Cg. I guess different cards get different levels of support, but if they want this to fly, it'd be in their best interest to generate multi-card compatible code. Or at least allow you to specify which extensions your generated code will support, to tailor it to specific card feature sets? Correct me if I'm confused, if anyone is really in the know.
I think the idea here is that you could use this language to write new shaders for cards on the market _now_
sorta... (Score:2)
Remember when the Radeon first came out? Well, they had to release a special DirectX just to support its pixel shaders, as opposed to just NVIDIA's.
So as a game developer you'll probably have to compile your Cg code with both the NVIDIA compiler and the ATI one just to make it work (better).
This tool will really help those XBox developers.
Same thing with OpenGL: since the spec isn't nailed down yet and Nvidia is 'leading the pack' of development, it wouldn't surprise me if they decided not to support any other cards with the OpenGL compiler (which they haven't even released yet).
So hopefully this will NOT turn into a Glide-type issue, since this is actually a level above Glide. Glide was very low level - its functions mostly mapped directly onto the 3dfx hardware - while this is a little bit more abstract.
Re:3dfx/Glide part 2? (Score:4, Informative)
That was the first thing that popped into my head when I read this article, but it sounds like they're going to give open access to the standards, just not to the interface with their chips.
News.com beat ya. (Score:2)
My biggest question: from reading this, it sounds like it would actually work correctly on other competing video cards... so why did nVidia create it?
Re:News.com beat ya. (Score:2)
Re:News.com beat ya. (Score:1)
OpenGL 2.0 (Score:3, Insightful)
Remember, NVidia may be good now, but they got where they are by being competitive and overturning the old-guard 3D guys (like 3dfx, who were themselves trying to lock the industry into APIs they controlled).
Competition=good.
Single-vendor-controlled APIs=bad.
OpenGL2.0=good.
Now, I like my NVidia hardware as much as the next guy, but I fear lock-in. Seems like most of us have already experienced the downsides of lock-in.
Yes, NVidia is talking up the buzzwords "portable" and "vendor-neutral," but if that's what they were after, they wouldn't have created Cg at all; they would have gone with the already-available open standard, OpenGL 2.0. This is embrace, extend and extinguish.
NVidia != compatibility? (Score:1)
If it doesn't work with my RADEON, it must be evil!
Re:NVidia != compatibility? (Score:1)
Re:NVidia != compatibility? (Score:2)
Pixels & Hardware sales (Score:1)
Hype or innovation? (Score:3, Insightful)
Re:Hype or innovation? (Score:2, Insightful)
Re:Hype or innovation? (Score:3, Insightful)
Have you read about how much effort JC has put into pushing polygons in Doom 3? We're hardly at a point where companies don't have to worry about speed issues.
If anything, companies have to put in even more effort into producing some stunning results, because everybody has been spoiled by recent titles.
OT - Bandwidth of a 747 filled with CD-ROMS (Score:2)
Assumptions:
Capacity of 1 CD - 650,000,000 bytes
# of CDs that will fit into 1 cu ft comfortably - 400
Cargo space on a 747 - 6,190 cu ft
plus - you'll have enough CD-ROM drives available on either end to write 2.5 million CDs in 1 hour, and read 2.5 million CDs in 1/2 hour (about 125,000 drives - not an impossible number)
plus - the CDs on the shipping end are packed as they are burned, adding little or no time to the process of loading - same goes for receiving end. This assumes you have about 25,000 hard workers.
The flight time from Paris to New York is 6 1/2 hours. Writing/packing is 1 hour. Reading/unloading is 1/2 hour. Total time: 8 hours. Total bits transferred: 12,875,200,000,000,000 bits, or 12,875 terabits (1,609 terabytes).
Resulting bandwidth: 447 Gb/sec, or 56 GB/sec.
New York to Miami would be 894 Gb/sec, or 112 GB/sec.
Sounds impressive, but you might want to reconsider laying fiber considering the impossible costs and logistics...
Re:OT - Bandwidth of a 747 filled with CD-ROMS (Score:2)
It can only carry the equivalent of one 18' cube? (Oh, I see where you got your numbers. Why are you assuming a plane with passengers? Cargo versions are closer to 25,000 cu ft.)
If you're packing CDs solid, then you'll hit the 113,000 Kg limit long before any volume limit anyway.
One offhand reference [dualplover.com] suggests 9kg/500 cds, or around 6.3 million discs.
Re:OT - Bandwidth of a 747 filled with CD-ROMS (Score:2)
You need to add one more assumption to your assessment: maximum takeoff weight. When I did this thought experiment a long time ago, assuming a 747-400 freighter full of DAT tapes, it was apparent that you could only use about 60% of the available cargo space and still have a plane that could get off the ground.
Re:OT - Bandwidth of a 747 filled with CD-ROMS (Score:2)
Re:OT - Bandwidth of a 747 filled with CD-ROMS (Score:2)
My only concern with the hard drive solution is that if you hit a little bit of turbulence, you just knocked 50,000 drive arms out of place, ruining 25% of your data. DVD-RWs would be less fragile, and are probably the happy medium to all this.
We should save this page, because who knows who might use this kind of transfer someday... after all, the main transoceanic cable laying company (Tyco) is having some problems right now, so who knows what kind of wacky desperate solutions people might come up with in a future bandwidth crunch?
Re:Hype or innovation? (Score:3, Interesting)
Have you *even* done *any* 3d graphics programming?? HW TnL *offloads* work from the CPU to the GPU. The CPU is then free to spend on other things, such as AI. Work smarter, not harder.
I'm not sure what type of revolution you were expecting. Maybe you were expecting a much higher poly count with HW TnL, like 10x. The point is, we did get faster processing. Software TnL just isn't going to get the same high poly count that we're starting to see in today's games with HW TnL.
> it takes a few months till anyone really knows if this is good or bad.
If it means you don't have to waste your time writing *two* shaders (one for DX, and another for OpenGL), then that is a GOOD THING.
Re:Hype or innovation? (Score:4, Insightful)
Even better than that! It means you don't have to waste your time writing *4* shaders:
Nvidia/DirectX
Nvidia/OpenGL
ATI/DirectX
ATI/OpenGL
That is, of course, pending a compiler for ATI cards - but I don't think it will be long... Unless ATI holds out for OpenGL 2 - but between now and when OGL2 comes out, there is a lot of time to lose market share to Nvidia because people are writing all of their shaders in Cg - and ATI is getting left out in the rain...
So I would expect ATI to jump on this bandwagon - and quick!
Derek
Re:Hype or innovation? (Score:2)
Proprietary standards? (Score:1)
Re:Proprietary standards? (Score:2)
Cg compiles down to OpenGL and DirectX statements, which are not proprietary. Some of the statements are recent extensions to support the kind of stuff they want to do. So, yes, other companies can support these as well. However, they might be following a target being moved around at will by Nvidia. "Oh, you don't support DirectX 9.001's new pixel puking extensions?"
It remains to be seen how it's used. Obviously, Nvidia wants to use this to sell their cards. But MS doesn't have to listen to them when designing DirectX, either. It seems to me that at the very least, it'll be faster than writing separate old-school (last week) vertex and pixel shader code for each different brand.
zerg (Score:2)
The real test will be how well the cross-compiler outputs OpenGL 2 & DX 9 shaders in practice, not theory.
But let's be serious: cel shading is the only shading anyone really needs. ^^
Good times (Score:1)
Re:Good times (Score:2, Funny)
Yes, I agree! We all like a good cross-dressing game for the early morning hours at lan parties!
This could be really good. (Score:3, Informative)
Basically this is a wrapper for the assembly that you would have to write if you were going to write a shader program. It compiles a C-like (as in look-alike) language into either the DirectX shader program or the OpenGL shader program. So you'll need a compiler for each and every API that you want to support, which means you'll need a different compiler for OpenGL/Nvidia and OpenGL/ATI until they standardize it.
On a more technical note, the lack of branching in vertex/pixel shaders really needs to be fixed; it's really the only feature that they need to add. That's why Cg code looks so strange: it's C, but there are no loops.
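To give a feel for it, a trivial vertex program in Cg looks something like this (a rough sketch - the structs, parameter names and the lighting math are mine, just to show the flavor, not anything out of NVIDIA's docs):

struct appin {
    float4 position : POSITION;
    float3 normal   : NORMAL;
};

struct vertout {
    float4 position : POSITION;
    float4 color    : COLOR0;
};

// Per-vertex diffuse lighting: straight-line math, no loops or branches,
// which is exactly why it maps onto today's vertex shader hardware.
vertout main(appin IN,
             uniform float4x4 modelViewProj,
             uniform float3   lightDir)
{
    vertout OUT;
    OUT.position = mul(modelViewProj, IN.position);      // object -> clip space
    float diffuse = max(dot(IN.normal, lightDir), 0.0);  // N dot L, clamped
    OUT.color = float4(diffuse, diffuse, diffuse, 1.0);
    return OUT;
}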
Right, but it's not in the HW yet. (Score:2)
Basically, after they do that, programming a PC game will be similar to programming for the PS2. You'll have to write multiple programs that are all executing concurrently to use all the power you've got at your disposal.
Assembly vs. High-level? hmmm.... (Score:3, Funny)
a) have a 3-way with two hot chicks
b) clean the floor behind my refrigerator
I wonder.
Re:Assembly vs. High-level? hmmm.... (Score:2)
Damnit (Score:1)
Re:Damnit (Score:2)
Re:Damnit (Score:1)
And instead of shelling out $300 for your GF4 Ti 4600, you should have gotten a Gainward GF4 Ti 4200 for $150 (shipped; check pricewatch.com), especially considering they easily overclock to Ti 4600 speeds.
Re:Damnit (Score:1)
Is this happening because of Xbox on Nvidia? (Score:2)
One has to wonder if this alliance stems from the current relationship Nvidia and MS have with the Xbox.
Re:Is this happening because of Xbox on Nvidia? (Score:1)
The article claims "cross platform compatibility" on Windows, Linux, Mac and the Xbox.
Re:Is this happening because of Xbox on Nvidia? (Score:2)
This keeps Micro$oft happy, as they will get the best drivers, while not insulting the OpenGL community.
They do NOT want everyone else to turn their back on Nvidia.
Although most of the customers use M$, most of the people who advise them don't. I've advised tons of people on what they want in their PC, and I base the advice on how "nice" the company is.
good for mac games (Score:1)
One must wonder (Score:1)
Microsoft has never done anything without a hidden agenda (Microsoft Bob not included).
Sample Code (Score:1)
[BEGIN]
SET_PIXELFORMAT(SHINY)
ADD_BUMP_MAPS
B
SET_TRANSPARENCY(0.5)
SET_TEXTUR
[END]
[BEGIN]
SET_PIXELFORMAT(WET)
ADD_BUMP_MAPS
B
SET_TRANSPARENCY(0.3)
SET_COLOR(
ADD_FISHIES(YELLOW)
[END]
Inefficiencies (Score:5, Interesting)
Re:Inefficiencies (Score:3, Interesting)
Re:Inefficiencies (Score:2)
Re:Inefficiencies (Score:3, Insightful)
Re:Inefficiencies (Score:2)
A GPU is specialized, and (should) have faster access to the system bus. Just because it's as powerful as a CPU, doesn't mean it should be one.
If a dual CPU system had the kind (Score:3, Interesting)
Also don't forget that a GPU has more transistors than the average CPU these days.
The VGA -> CPU interface was SLOWWWWW. In fact it's still slow; that's why AGP (8x) was invented, and even that's slow. The graphics cards have larger buses, and are designed to push data to the DAC.
All you need is more bandwidth for the CPU and you're set.
Re:Inefficiencies (Score:2)
You're closer to the truth than you think. It pretty much would run just as fast; it's just that the rendering quality wouldn't be competitive. Yet.
The GPU's specialization has more to do with vectorized math and optimized memory access than anything else. Those are both engineering topics that Intel and AMD take very seriously.
Re:Inefficiencies (Score:2)
Actually, they're not. An Athlon, for example, has to support hundreds of instructions, branch prediction, integer and floating point operations, SIMD, multiple levels of caches, etc. A GPU, for the most part, supports one basic pipeline, but with lots of parallelism (e.g. 16 pixel units). Now that's an oversimplification, but the principle is there. A GPU does one thing well, while a CPU has to do quite a bit more.
Interesting Comparison (Score:2, Funny)
Thank GOD they wrote CG, because now I won't have to write all of my programs in assembly anymore.
What is this "compiler" technology that they keep talking about? This might revolutionize computer science!
Re:Interesting Comparison (Score:2)
It's for programming vertex and pixel shaders. Up until now, all you had was this nearly-evil assembly code to program them with.
It's not as if they've discovered something new, because they haven't. It's that they've applied it to an area that will give more people access to the new cards' powerful new features - which would otherwise go underused because so many people shy away from assembly of any kind.
I was skeptical myself until I saw how they've put it together. I can't wait until I have enough dough to buy one of these cards now...
Analogy (Score:1)
as DirectX is to OpenGL
It's a closed ("partially open") standard, for a subset of hardware, which is not as forward looking as a proposed competing standard.
Support OpenGL 2.0!
Re:Analogy (Score:3, Interesting)
Vertex Shader ASM is hard(er than Cg)
Pixel Shader ASM is hard(er than Cg)
My understanding of Cg is that it'll be used as a shader replacement, NOT an OpenGL replacement. You'll still have to write tons and tons of OGL. Now you can just simplify the SHADER part.
Re:Analogy (Score:2)
Cg is attempting to be a replacement for OpenGL 2.0. In that effort, I hope it fails.
Re:Analogy (Score:2)
Re:Analogy (Score:2)
C and Pascal are most definitely competing with each other and replacing each other. I've been programming in Pascal for 21 years and C (and C++) for 14 or so. I never get to code in both languages at the same time. It's possible to mix code from each, but it doesn't happen that often (other than in the form of pre-compiled libraries). It's simply not efficient to ask all of the developers on one project to understand the intricacies of both programming languages and be effective at programming in both at the same time.
The same argument holds for Cg and the shading language component of OpenGL 2.0. I doubt that people will be successful at mixing Cg in with OpenGL 2.0 code.
Re:Analogy (Score:2)
Re:Analogy (Score:2)
I don't think people are free to write a back-end compiler for Cg. I don't think nVidia has released the Cg language into the public domain, and I don't think they intend to license the language for reasonable terms. I think they intend to control the language, with the option of other vendors supplying profiles to the one and only Cg compiler (e.g. fp20, vp20, dx8vs, dx8ps).
You are absolutely correct that competition has ultimately helped the graphics industry. I just wish that the competition of shading languages was between competing open standards. Not one open standard, and one closed standard.
Re:Analogy (Score:2)
Re:21 YEARS?!?!?! (Score:2)
I find that the greatest advantage of any new programming language is typically that the developers all tend to agree (or at least agree more often), and that as many of the libraries as possible are made standard. It's fun to be part of a community that all agrees with each other.
It's much harder to rework an old environment (C++) than it is to design a new one (Python, Perl, Java, etc.) You have to try to get an already existing community to agree to some standard, and they'll all have opinions. But it's important work to do - it helps everyone to try to improve the single environment with the most users, across platforms. That's one of the reasons why I think STL is fantastic. (And scary - I don't love every bit of it, but by and large, I think it's a wonderful tool.) It's good to see the C++ community work together to improve their environment by setting new standards that we can all live with.
Much like the OpenGL community! Which goes back to why I don't like Cg!!! =) (You just knew I had to work that in there, didn't ya?)
Re:Analogy (Score:2)
What disturbs me though, is NVidia's behavior in the matter.
They could either push to ratify OpenGL 2.0 early, and do everything they can to help the process... Which helps the industry a lot, and helps them, too!
OR, they could create their own closed standard, not help OpenGL 2.0, do everything they can to hurt the process... Which helps them a lot, and makes their competitors in the industry license their technology...
*sigh*
I'd be a lot more comfortable if NVidia were saying things like, "and it provides for the direct and easy translation of Cg code into the proposed OpenGL 2.0 standard, so that code written today can be easily migrated to the OpenGL 2.0 standard, once it's ratified."
Yeah, but... (Score:2, Insightful)
* Realistic fog/smoke -- not that 2-D fog which looks like a giant translucent grey pancake. Microsoft comes closer with Flight Sim 2002, but it's not quite there yet.
* Fire/flame -- again, nobody has created more realistic acceleration for this kind of effect. It's very important for many games.
Furthermore I would like to see fractal acceleration techniques for organic-looking trees, shrubs, and other scenery. Right now they look like something from a Lego box. In fact, fractals could probably help with fire/smoke effects as well, to add thicker & thinner areas which take on a "semi-random", but not an obvious pattern, effect.
Perhaps I'm just too picky...
Re:Yeah, but... (Score:2)
The demo versions of both Serious Sam games have a "technology test" level you can walk through in single-player mode that shows off the engine's capabilities pretty well.
highly detailed facial features in Cg (Score:1, Funny)
other hardware shading languages (Score:4, Interesting)
Hopefully, OpenGL 2's [3dlabs.com] shading will become standard and mitigate the cross-platform differences. It's seemingly a much better option than this new thing by nVidia, but we'll have to wait and see what does well in the marketplace, and with developers.
Duke Nukem (Score:2)
Microsoft's Participation... (Score:2, Funny)
Segue to someone playing a video game at a high frame rate...
Gee, the more I play this game, the less bad I feel about buying proprietary technology and the angrier I get at those 9 states for disagreeing with the DoJ Settlement. Oh, and I'd like to buy all of Britney Spears' CDs and eat every meal at McD's... I'm sure I didn't feel this way yesterday... What's odd, too, is that every so many frames, something flickers that I can't quite make out...
Screen briefly flickers something else
Hmm... I can't remember what I was just thinking about, but I do have the strangest desire to email all of my personal information and credit card numbers to mlm5767848@hotmail.com...
What Cg means (Score:5, Informative)
Modern NVidia (and ATI) GPUs can execute decently complex instruction sequences on the polygons they're set to render, as well as on the actual pixels rendered either directly to the screen or onto the texture placed on a particular poly. The idea is to run your code as close to the actual rendering as possible -- you've got massive logic being deployed to quickly convert your datasets into some lit scene from a given perspective; might as well run a few custom instructions while we're in there.
There's a shit-ton of flexibility lost -- you can't throw a P4 into the middle of a rendering pipeline -- but in return, you get to stream the massive amounts of data that the GPU has computed in hardware through your own custom-designed "software" filter, all within the video card.
For practical applications, some of the best work I've seen with realtime hair uses vertex shaders to smoothly deform straight lines into flowing, flexible segments. From pixel shaders, we're starting to see volume rendering of actual MRI data that used to take quite some time to calculate instead happening *in realtime*.
It's a bit creepy to see a person's head, hit C, and immediately a clip plane slices the top of the guy's scalp off and you're lookin' at a brain.
Now, these shaders are powerful, but by nature of where they're deployed, they're quite limited. You've got maybe a couple dozen assembly instructions that implement "useful" features -- dot products, reciprocal square roots, adds, multiplies, all in the register domain. It's not a general purpose instruction set, and you can't use it all you like: There's a fixed limit as to how many instructions you may use within a given shader, and though it varies between the two types, you've only got space for a couple dozen.
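To make that concrete (my own sketch, with made-up names -- not anything out of NVIDIA's docs): even one lighting term written at the Cg level chews through a handful of exactly those instructions:

// One diffuse lighting term in Cg. normalize() compiles down to roughly a dot
// product, a reciprocal square root and a multiply; the rest is another dot,
// a max and a multiply. A couple-dozen-instruction budget fills up fast.
float3 diffuse_term(float3 N, float3 L, float3 albedo)
{
    float3 n = normalize(N);
    float  d = max(dot(n, L), 0.0);
    return d * albedo;
}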
If you know anything about compilers, you know that they're not particularly well known for packing the most power per instruction. Though there's been some support for a while for dynamically adjusting shaders according to required features, they've been more assembly-packing toolkits than true compilers.
Cg appears different. If you didn't notice, Cg bears more than a passing resemblance to Renderman, the industry standard language for expressing how a material should react to being hit with a light source. (I'm oversimplifying horrifically, but heh.) Renderman surfaces are historically done in software *very, very* slowly -- this is a language optimized for the transformation of useful mathematical algorithms into something you can texture your polys with...speed isn't the concern, quality above all else is.
Last year, NVidia demonstrated rendering the Final Fantasy movie, in realtime, on their highest end card at the time. They hadn't just taken the scene data, reduced the density by an order of magnitude, and spit the polys on screen. They actually managed to compile a number of the Renderman shaders into the assembly language their cards could understand, and ran them for the realtime render.
To be honest, it was a bit underwhelming -- they really overhyped it; it did not look like the movie by any stretch of the imagination. But clearly they learned a lot, and Cg is the fruit of that project. Whereas a hell of a lot more has been written in Renderman than in strange shader assembly languages (yes, I've been trying to learn these lately, for *really* strange reasons), Cg could have a pretty interesting impact on what we see out of games.
A couple people have talked about Cg on non-nVidia cards. Don't worry. DirectX shaders are a standard; game authors don't need to worry about what card they're using -- they simply declare the shader version they're operating against and the card can implement the rest using the open spec. So something compiled to shader language from Cg will work on all compliant cards.
Hopefully this helps?
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Comment removed (Score:4, Informative)
Re:What Cg means (Score:4, Informative)
http://wwwvis.informatik.uni-stuttgart.de/~enge
That's being done in realtime. They're doing more than just slapping slices on top of each other and letting 'em alpha blend.
--Dan
Re:What Cg means (Score:2)
What about the OpenGL 2.0 shader language? (Score:2)
Re:What about the OpenGL 2.0 shader language? (Score:2)
Their website (Score:2, Flamebait)
Re:Their website (Score:2)
Stripes will be Revealed with Time (Score:2, Insightful)
Nvidia has been fair to good in their cross-platform support so far, but of course MS has not been. To the relief of many, CG Channel reports that "Interestingly, key components of NVIDIA's Cg compiler will be open-sourced and will work on Linux, Mac OS X and Xbox platforms. [...] Compiled code for Direct3D will be cross-platform (well, as cross platform as Microsoft might expect). OpenGL code should work much the same as long as the OpenGL extensions are supported on the target. NVIDIA says it will provide compiler binaries for all of the major platforms." The real proof will be in how Nvidia supports Cg on other platforms and OpenGL over the long term. Will these binaries be released at the same time and with the same feature sets? And will this continue to be the case, or will full cross-platform support only exist in the beginning, until Cg becomes a de facto standard?
I'm skeptical at this point, since we all know there's a world of difference between being merely compatible and being optimized. There's already some evidence of how Cg is being implemented. For instance, it looks like there isn't an OpenGL fragment program profile for the Cg toolkit, while there is one for Direct3D 8. Nvidia says the reason Cg has no OpenGL ARB vertex_program profile, while there are both dx8ps and dx8vs profiles, is that OpenGL is dragging its heels on the standard. Perhaps valid, but nonetheless the result is that Cg is better supported under DX8 than on the OpenGL side. While it's theoretically possible to program Cg textureShaders and register combiners in OpenGL, it's not currently supported. Much of the feature set in Cg looks like what has been announced so far for OpenGL 2 - could nVidia just be trying to duplicate OpenGL 2 functionality using their own identical but proprietary Cg extensions instead? Finally, Nvidia announced support for Windows, Mac OS X and Linux; the first and last platforms should have native Cg compilers (Linux soon, apparently), but what about Mac OS X?
Do we need a custom language? (Score:2)
nVidia may offer ATI the ability to get on board with this Cg language, but the reality will be different. What disturbs me is that nVidia's chief scientist went on record as saying that ATI's refusal to implement nVidia's shader technology (they did their own, which some consider superior) amounted to destabilising the industry. No, that would be competing, dear chap.
Who exactly will need to use Cg, and what market will ultimately use it? I have no doubt that PC game developers (and Xbox) will take a look at it, but let's not pretend that this is a solution which embraces other vendors. Of course I'll be glad to eat my hat if ATI and Matrox come out in support of this.
It's not an entirely bad idea but writing regular language compilers for exotic hardware is more than feasible. My company has done exactly this for the PS2's vector units with a C/C++ compiler. Those VLIW co-processors are quite similar to the sort of more generically programmable hardware that you'll see in graphics hardware down the line (combined with shaders of course).
There are some good reasons for using a custom language: better control over the implicit parallelism of multiple shaders/vertices, etc. However, creating a new language for people to use destroys the notion of recyclable code and introduces yet more platform-specific issues. And let me tell you, there are quite enough IF/THEN statements in the graphics engines of PC games as it is. Unless your work is being used by multiple developers, in which case any decent authoring tools for specific hardware may be welcomed.
Anyhow, I'm not entirely negative about nVidia's efforts - it's an interesting stab at a problem we had kind of thought everyone (but us) was ignoring. At the very least it's destined to become a more useful shader authoring tool for PC/Xbox game engine and middleware developers.
I wonder what ATI and Matrox's approach will be. I wonder if they'd like a regular compiler for their shaders? :)
Re:Linux Support (Score:3, Informative)
My only problem is that the toolkit itself is only for Windows.
Anyone try it with Wine/Winex yet?? I might when I get home.
Derek
In Fact...... (Score:3, Informative)
"NVIDIA's Cg Compiler is also cross platform, supporting programs written for Windows®, OS X, Linux, Mac and Xbox®."
So maybe even though the tools aren't cross-platform, the compiler is. I think this is a great step forward toward OpenGL 2.0 - this shows that Windows doesn't have to be the only platform to write graphically intensive applications for.
Derek
Re:Linux Support (Score:1)
Re:Linux Support (Score:1)
Re:Pixel and Vertex Shading and OpenGL2.0? (Score:3, Informative)
Re:Pixel and Vertex Shading and OpenGL2.0? (Score:2)
Re:Pixel and Vertex Shading and OpenGL2.0? (Score:2)
It would be easier to read the damned article.
OpenGL 1.4 is a completely different beast than OpenGL 2.0. Cg is a direct competitor to (and an attempt to kill) OpenGL 2.0, and an attempt to secure NVidia as the dominant provider of 3D APIs.
Tell me, Anonymous Coward, why you think NVidia made Cg instead of supporting OpenGL 2.0 on their hardware. Try not to use words like "monopoly", "closed standard", and "platform specific."
Re:Pixel and Vertex Shading and OpenGL2.0? (Score:2)
How do you know Doom 3 isn't going to use OpenGL 2.0? Carmack has repeatedly said that he's under NDAs, and can only talk about what hardware is currently available.
You'll notice that Carmack's name is not listed as an endorser of Cg.
Re:Pixel and Vertex Shading and OpenGL2.0? (Score:2)
It doesn't help the rest of the industry, though. I wish OpenGL 2.0 were already ratified. That's the problem with standards like that, though - they don't like to ratify them until the hardware exists to test out the theories on. Well, no hardware that's shipping today can support OpenGL 2.0. Chicken and the egg.
I just wish NVidia were being more supportive of OpenGL 2.0, because it's the better of the two standards in respect to its effects on the industry as a whole.
Same way that I wish Microsoft had never developed DirectX. Sure, it has more features today, but in the long run OpenGL is a better alternative.
Re:Pixel and Vertex Shading and OpenGL2.0? (Score:2, Informative)
It does not have cross-platform support. It is not hardware independent. They say that other vendors will be able to support it; they don't say that it's a free or open standard.
Think about the compiler part of it for a second. So what if the compiler supports multiple targets? Each compiled program will only be able to run on that one platform! Does that sound like OpenGL to you? Even if they allow a mechanism where the code can be targeted to multiple platforms in one executable, they're still making that decision at compile time, as opposed to at runtime, like OpenGL 2.0. That means an executable built against OpenGL 2.0 today will be able to run on future hardware. Not true with Cg. Also, the compiler they talk about in Cg is an NVidia product. They're giving it away like free beer, not like free speech. In order for any given company to have Cg targeted to their platform, they'll need to go through NVidia to make it happen. Doesn't this scare you?
Other video card manufacturers cannot write their own compilers. The intended method is for other manufacturers to provide new "profiles" (e.g. fp20, vp20, dx8vs, dx8ps) which will be integrated into the one and only Cg compiler, which NVidia controls.
That's how it locks people in to NVidia.
I'm not talking about ATI. I'm talking about 3dlabs, the people who created the OpenGL 2.0 standard.
I agree that it's to NVidia's advantage to release their hardware sooner rather than later, and that the OpenGL 2.0 standard won't be a standard for some time to come. But NVidia could put their weight behind it, or they could write their own thing. They chose to abandon OpenGL 2.0. Entirely. And they're hoping everyone else will, too.
The Cg language is different from the OpenGL 2.0 shader and vertex language. They're different, but they do the same thing, essentially. To rephrase your question, perhaps someone will be able to provide a translator from Cg to OpenGL 2.0 and vice-versa. Just as people have created a layer that makes DirectX work on top of OpenGL.
Is there really a question in your mind about whether OpenGL is a better standard that we can all live with than DirectX?
The one possible objection is the fact that DirectX has more features than OpenGL. Well, that's why OpenGL 2.0 is a good thing.
Throwing Cg into the mix doesn't make OpenGL 2.0 any less of a good thing.
I'm pissed at NVidia for deciding to go with a closed standard, rather than an open standard. What else is new?
Re:Pixel and Vertex Shading and OpenGL2.0? (Score:2)
I agree it might be possible to write code for the NVidia platform which can be redirected to the proposed standard of OpenGL 2.0. But in a year from now, I hope everyone's using OpenGL 2.0 instead of Cg.
Yes, you can make hardware support a proposed standard. How do you think hardware gets designed in the first place?
By the way - that's a silly argument - "don't make hardware until the standard exists," and "don't make the standard until hardware exists." I'm hearing both of those arguments at the same time in here, which is pretty amazing.
Don't be rude, dude.
Re:Pixel and Vertex Shading and OpenGL2.0? (Score:2)
I'm not arguing that people shouldn't use shading languages. They're fantastic. They're the best thing ever to hit the PC graphics marketplace. It's unfortunate that a closed-source, non-free implementation is available before the open-source, free implementation. Companies can move faster than committees. (Especially when the companies are on the committees, too.) It's almost as though a bylaw of the OpenGL review board should be that members cannot publish competing standards. (It's like a conflict of interest.) That would encourage them to play nice with each other. I don't know, I'm just venting steam.
I have to throw words like "closed standard" around because this is a closed standard. As to throwing around "monopoly", the reason I do it is because nVidia could either play with the proposed open standard, or invent their own closed standard. I'm not saying that they're abusing their monopoly. I'm just saying that they're trying to establish one. There's nothing illegal about having a monopoly - it's illegal to abuse it. Everyone in their right mind wants their company to have a monopoly.
I disagree strongly with your assessment that they are taking "an extremely active role in the development of OpenGL 2.0." Based on that disagreement, you can imagine why I'm frustrated at their behavior. If you are correct, then I agree, they can potentially dramatically improve the OpenGL 2.0 standard. (And I hope they do!) When 2.0 is ratified, we'll see how quickly they come out with an implementation. I hope for everyone's sakes that they do it quickly, and that it's good.
You can ship a proposed standard. People do it all the time. C++ compilers come to mind.
If they're out to "kill Direct3D 9", I'm happy. Granted, they'd only be trying to "kill" the shader language part - but that's fine by me. I just hope, hope, hope, hope, hope that they don't kill OpenGL 2.0 before it's born.
I don't think I'm saying there are conspiracies. And I don't think my standpoint is "absurd." Maybe you disagree with my conclusions, and some of my assumptions - but do you honestly think I'm acting in an "absurd" manner? You think that I'm "manifesting the view that there is no order or value in human life or in the universe"? [dictionary.com] =)
Yes, I think Microsoft is happy that Cg supports OpenGL. Because I think you'll find that Cg works much better on DirectX than it does on OpenGL 1.4 (which, by the way, has not yet been ratified - which proves my side of the argument, not yours). That's just my guess, but it's what I think will happen. And I think that developers will tend to choose the platform that supports it better. (Other than Carmack, who always seems to choose the standard he thinks is better in the long run.) It's embrace and extend all over again.
Honestly, I wish there was only one shading language : RenderMan Shading Language. Not that it couldn't use some improving, but wouldn't it be cool if you could literally use the exact same code on every platform? Offline renderers, included?
Re:Pixel and Vertex Shading and OpenGL2.0? (Score:2)
What makes you think there isn't a prototype implementation of OpenGL 2.0?
nVidia had the opportunity, ALL ALONG, to drive the development of the next generation of OpenGL so that it could be capable of supporting the features that nVidia wants. I'm pissed off that everyone forgets this fact. They chose not to. They went off in isolation, and developed their own proprietary API, which is incompatible with the OpenGL 2.0 specification (the shading language part). It's not as though OpenGL 2.0 happened all of a sudden, without nVidia's involvement. They had every chance to steer the proposed standard to their liking. Instead, they abandoned the proposals, and they're releasing a closed solution. And you like them for this behavior? This is Microsoft and DirectX all over again.
Would you agree that SGI did a pretty good job in producing OpenGL? I think they did an amazing job. I've also read the OpenGL 2.0 specification. It's just as stunning as the original OpenGL specification. And nVidia had every opportunity to make it work the way that they wanted to, or to make it even better. Don't try to defend their actions as though this is 3dlabs forcing their closed solution on everyone. 3dlabs is playing nice with others, nVidia is not. I suppose it's pointless for us to continue discussing the matter, if you disagree on that simple point.
I do cross-platform development all of the time, thank you. And I happen to do OpenGL things all of the time, as commercial products, and I've never once licensed the name from the ARB. Oh, you meant preparing an OpenGL implementation. So, you're saying that licensing the name from the steering committee is a bad thing? I suppose you think that Microsoft's Java implementation was good, too.
I do not believe that nVidia has a strong investment in OpenGL 2.0. Why would they spend their money twice, and implement Cg? It doesn't make any sense. Why not implement the OpenGL 2.0 shading language, as defined in the proposal, and start shipping it as a provisional implementation? *shrug* I don't blame them for the behavior, and I don't think it's horrible - but I don't like it, and I don't think it's a good thing.
"A de facto standard is not a true standard." So, you don't use Ethernet? Or GIF? PostScript?
Same way as you have to license the OpenGL name, you have to license the RenderMan name. It's the same thing. And I don't disagree with either tactic. I do disagree when one company hopes to control the only implementation.
I hope you're right with your Glide prediction. Then again, DirectX is still shockingly popular. Help me kill DirectX. =)
Re:Pixel and Vertex Shading and OpenGL2.0? (Score:2)
A lot of key nVidia personnel came from SGI. They know this. They also know that OpenGL's openness, while useful versus other workstation vendors, didn't help them (SGI) combat Microsoft very much, given Microsoft's OS + programming tools monopoly. There's no reason that shoving their key vertex shader technology interfaces into OpenGL would substantially help them sell more units or compete more effectively versus ATI or forestall Microsoft market power. In contrast, getting their interfaces into DirectX potentially helps them sell more units (by lowering the barriers for the largest set of developers using shaders, a key upgrade-driving feature), compete more effectively versus ATI (whose vertex shaders no doubt work a bit differently from nVidia), and forestall Microsoft's market power (since, by offering such technological gems, they can get various concessions on other issues or IP licensing fees from Microsoft).
3Dlabs deserves major kudos for delivering the OpenGL 2.0 spec. Given that 80+% of their CAD/workstation base predominantly uses OpenGL, it makes sense that they'd push programmer interfaces to their IP through OpenGL. And for a similar reason, it makes sense for nVidia to ensure interfaces to their hardware are in DirectX. IMHO.
--LP
Re:No loops? (Score:2, Informative)
-GameMaster
Re:No loops? (Score:2)
temp0=READ texture1[x, y];
temp1=READ texture2[x, y];
output=BLEND temp0, temp1;
The programs are extremely simple, with a few inputs, one output, and a few dozen instructions. More of a function than a program, really. These programs in no way replace any game logic. They just transform the vertex and pixel values passed to the graphics card.
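In Cg, that same sort of program is just a tiny function over the interpolated texture coordinate. A sketch (tex2D and lerp are standard Cg intrinsics; the sampler names are mine):

// Roughly the pseudocode above as a Cg fragment program: sample two textures
// at the same coordinate and blend them 50/50.
float4 main(float2 uv : TEXCOORD0,
            uniform sampler2D texture1,
            uniform sampler2D texture2) : COLOR
{
    float4 a = tex2D(texture1, uv);   // READ texture1[x, y]
    float4 b = tex2D(texture2, uv);   // READ texture2[x, y]
    return lerp(a, b, 0.5);           // BLEND temp0, temp1
}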
Re:READ THE ARTICLE! (Score:2, Funny)
HAHAHAHAHAHAHA...
oh wait I'm OK now....
BLHAAHAHAHAHAHAH
When pigs fly out of my ass and mutate into fine wine!
Re:Just what are shaders? (Score:2, Informative)
A vertex shader will take the vertices before they are transformed, and apply a series of operations to the data inside these vertices. This allows certain clever lighting effects, and nice ripple patterns, to be described algorithmically (there's a small sketch of that at the end of this comment).
The vertices are then converted to triangles as before.
Then the pixel shader is used. Modern applications use several layers of textures. Often we'll see a texture for the colour, another one giving a bumpmap, and another giving a reflection map. These can be combined in a number of different ways. A pixel shader determines how these textures are applied and combined. A good pixel shader will allow a texture to be defined algorithmically. This looks better than a normal texture map at very large zoom levels. Ken Perlin has done a lot of work on this. Look at his site [noisemachine.com] to see what results you can get. Pixel shaders are getting there, but haven't quite made it.
In practice, all vertex shader operations can be done by the CPU, but this tends to be a bit slower. Pixel shader operations are still at an early stage on current graphics chips, but are getting better. The early Nvidia pixel shaders were no better than the normal texture combiners, but pixel shaders in general are getting more flexible.
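As for the "ripple described algorithmically" bit above, it might look roughly like this as a Cg vertex program (a sketch with made-up names; it assumes the target profile can evaluate or expand sin()):

// Displace each vertex along its normal with a sine wave driven by a time
// uniform, then transform to clip space -- an algorithmic ripple.
void main(float4 position : POSITION,
          float3 normal   : NORMAL,
          uniform float    time,
          uniform float4x4 modelViewProj,
          out float4 oPosition : POSITION)
{
    float  wave      = 0.1 * sin(4.0 * position.x + time);
    float4 displaced = position + float4(wave * normal, 0.0);
    oPosition = mul(modelViewProj, displaced);
}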
Re:Like C? (Score:2, Interesting)
It makes a lot of sense to base it on C for a number of reasons. First, most game programmers are familiar with C or C++. Second, and more important, there are extreme limitations on the size of shaders. Vertex shaders have a limit of 128 ops on a GeForce card. That's just base ops, and the budget can go away real fast when you use macro commands (which are composites of multiple ops), as are most likely available in Cg. Future cards will, no doubt, increase the number of ops allowed per shader, but it will be a while before we see shaders large enough to find any use for OOP features. If the time comes when some OOP features would be handy, then I'm sure they could add basic OOP functionality similar to C++.
-GameMaster
Re:Like C? (Score:2)