Graphics Software

The Future of Real-Time Graphics 254

Bender writes "This article lays out nicely the current state of real-time computer graphics. It explains how movie-class CG will soon be possible on relatively inexpensive consumer graphics cards, considers the new 'datatype rich' graphics chips (R300 and NV30), and provides a context for the debate shaping up over high-level shading languages in OpenGL 2.0 and DirectX 9. Worth reading if you want to know where real-time graphics is heading."
  • on my heads-up display sunglasses that show the IP addresses of the other people walking around me!

    So we all have to have portable gargoyle-type computer rigs on...
    FP?

    Puff
  • Yesterday (Score:4, Insightful)

    by coryboehne ( 244614 ) on Monday August 12, 2002 @12:01PM (#4055007)
    Ahhh, it seems like only yesterday I was sooo impressed by Mario: blocky graphics and plain colors, no frills really. Then I was impressed by Doom: 3D, awesome action, multiplayer. Toy Story, an animated movie? WOW! Final Fantasy, a good animated movie? Double WOW! Now what's next? The way things advance today, it really is getting hard to tell real-world film scenes from CG. I just watched Lord of the Rings, and it really is hard to tell real from CG. Now, with the hardware getting cheaper and the software becoming more advanced, what was only a fantastic dream in the '80s, and a big-movie-corp-with-render-farm dream in the '90s, is becoming possible for the home user. Don't know about you, but I'm still impressed, and I can't wait to see what is just over the horizon.
    • by Quixadhal ( 45024 ) on Monday August 12, 2002 @12:11PM (#4055102) Homepage Journal
      Step 1: Fully computer generated GOLF tournament. This may have already happened, and the public would be none the wiser.

      Step 2: Fully computer generated and scripted soap-opera. This may have already happened, and the public would be none the wiser.

      Step 3: Fully computer generated, scripted, and supported news broadcast with physical evidence fabricated by "unnamed agencies" as needed. This may have already happened, and the public would be none the wiser.

      Step 4: Greetings citizen. Please place your tongue on the screen for clearance level verification.
  • Just a few more years and the transition to featureless games with cool graphics but no game play will be complete.

    It's already at the point where the graphics department in a game house can be larger than the development team.
    • by zwoelfk ( 586211 ) on Monday August 12, 2002 @12:32PM (#4055294) Journal
      hell, I wish that were true! where do -you- work?

      I can only speak for consoles, but there have been some interesting developments over the last five years or so...

      1. The knowledge prerequisite for engine development has gone up, not down, as was previously thought (hoped?). Some people had thought that with the latest generation of h/w (XBox, PS2, Gamecube) more programmers would be able to work on the graphics end (XBox because of DirectX -- PS2, well, because they didn't know any better). But just like on the previous generation of h/w, although we don't have to do some of the lower-level tasks anymore (s/w render, perspective correction, blah blah), more complex tasks are required for the latest games. I think everyone's hoping (again) that this will change in the -next- generation (e.g. send a 3DMax/Maya file to hardware! yeah, right.). maybe. we'll see...

      2. The ever-increasing (and always lamented) trend of h/w-shy programmers has (maybe?) kept the graphics engine teams small. It is still very common to have one- or two-man teams building the engine. For example, we have two engine programmers (working on different engines on different platforms) and about 25 on titles. Based on other companies I've been at or seen, this isn't really unusual. If you meant artists (by "graphics department"), then yes, there is clearly a trend toward having more artists than any other role.

      3. Game teams are not oblivious to the severe lack of quality gameplay. Publishers aren't either (really!) ...but as always it's an issue of cost/market/etc. Game development is big business now, it's not a make-something-fun-and-sell-it-in-a-ziplock-bag industry anymore.

      4. Unfortunately, the idea of "game designer" as a profession (outside of a few notable individuals) has been historically ridiculed. It's been ranked with "tester" and "your mom" as far as development teams were/are concerned. However, even though only a small percentage of development houses (still!) recognize "game designer" as a legitimate role, one of the most promising trends has been (perhaps out of necessity?) the steady increase in them. -That- is good news. Basically, more places have someone in charge of "fun."

      It'll get better. Probably.
    • There is an upside to this. Eventually we will reach the point where it's impossible for graphics to get better (i.e. indistinguishable from a photo of the real thing, or maybe VR or something). At that point, when there are no more innovations to be made in graphics, the game companies will have to, in order to sell games, concentrate on, yep, you guessed it, gameplay.
  • I am hoping that the PlayStation 3 has an OpenGL 2.0 interface

    it does not have to have the legacy stuff that a full OpenGL 2.0 implementation has to have, just all the cleaned-up interfaces

    now that would be cool

    rendering stuff on my PlayStation!

    regards

    John Jones

  • The comparison to doing Pixar-style rendering in real time might be inaccurate... especially when considering the 1.2 million computer hours. How does the equation change when you consider the resolution used for each animated frame (of Toy Story), and resolutions that are common in the gaming arena?
    • > How does the equation change when you consider the resolution used for each animated frame (of Toy Story), and resolutions that are common in the gaming arena?

      Not a lot.

      Toy Story was rendered at approx 1536 x 922, only 8% more pixels than 1280 x 1024.
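      For what it's worth, the arithmetic checks out; a trivial sketch using the two resolutions quoted above:

      ```cpp
      #include <cstdio>

      int main() {
          // Figures from the parent post: Toy Story's approximate render size vs. 1280x1024.
          const long toyStory = 1536L * 922;   // ~1.42 million pixels
          const long desktop  = 1280L * 1024;  // ~1.31 million pixels
          std::printf("Toy Story frame: %ld pixels\n", toyStory);
          std::printf("1280x1024 frame: %ld pixels\n", desktop);
          std::printf("difference: %.1f%%\n", 100.0 * (toyStory - desktop) / desktop);
          return 0;
      }
      ```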
      • Do you know about other CG effects? What was Monsters Inc. rendered at? T2? For a long time I figured movies ought to be rendering FX shots at 2000x3500 minimum, and all-CG movies at similar resolutions. Then recently I've been disappointed as I learned about all the limitations of film, the optical soundtrack taking up space, and general shortcuts. Gladiator, for example, had terribly low-resolution shots that were quite blurry, as did Minority Report. I read that Pleasantville was shot entirely in color, then digitally scanned at 4000x4000 to change it into black and white. Of course 4000x4000 doesn't work as an aspect ratio, and that seems high to be DPI. Any thoughts on this?

      • Wow...I had no idea that the resolution was so low.

        I'd like to thank everyone for the awesome responses. :)
  • PS2 (Score:3, Interesting)

    by delta407 ( 518868 ) <(slashdot) (at) (lerfjhax.com)> on Monday August 12, 2002 @12:06PM (#4055060) Homepage
    Wait, didn't Sony claim the Playstation 2 could do movie-quality graphics in realtime? Ah, here's a copy of the press release [arstechnica.com], back three years ago. The second paragraph reads:

    The current PlayStation introduced the concept of the Graphics Synthesizer via the real-time calculation and rendering of a 3D object. This new GS rendering processor is the ultimate incarnation of this concept - delivering unrivalled graphics performance and capability. The rendering function was enhanced to generate image data that supports NTSC/PAL Television, High Definition Digital TV and VESA output standards. The quality of the resulting screen image is comparable to movie-quality 3D graphics in real time.

    Silly people.
    • Sony likes to hype its ass off, but the PS2 couldn't do shit for shading compared to RenderMan. As this article is saying, the shading capabilities in today's AND tomorrow's RenderMan farms are going to be coming to an AGP board near you.
    • The quality of the resulting screen image is comparable to movie-quality 3D graphics in real time.

      Sony's graphics are definitely comparable to Hollywood's:

      Compared to movies, PS2 graphics are crap.

      Erik
  • by Anonymous Coward on Monday August 12, 2002 @12:09PM (#4055084)
    First, the polygon-based rendering used by the cards is based on fairly special-purpose trickery to get good effects (like shadows - they're not really implemented by the API or the card directly). Second, the other part of really high-quality rendering is high-complexity models - the PCs themselves start to balk at swimming through the massive data sets required in real time. There's always been the speculation that raytracing would catch up to polygon rendering (as CPUs get faster) because the former has sublinear complexity in the number of objects, where the latter is more like linear complexity. _That_ would give you some pretty images!
    • Take a look at Crystal Space [sourceforge.net], or Black & White for that matter.
      The number of vertices in the objects is relative to 'your' distance from the object, in a kind of half-way house between object-based and polygon-based rendering, a bit like mip-mapping with textures.
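      A minimal sketch of that distance-driven level-of-detail idea (the structure, names and thresholds are invented for illustration, not how Crystal Space or Black & White actually do it):

      ```cpp
      #include <cmath>

      // A mesh stored at several pre-built detail levels (most detailed first).
      struct LodMesh {
          enum { kLevels = 4 };
          int vertexCount[kLevels];  // e.g. {20000, 5000, 1200, 300}
      };

      // Pick a detail level from the camera distance, much like choosing a mip level
      // for a texture: each doubling of distance past the full-detail range drops one level.
      int selectLod(float distance, float fullDetailDistance) {
          if (distance <= fullDetailDistance) return 0;
          int level = 1 + static_cast<int>(std::log2(distance / fullDetailDistance));
          return level < LodMesh::kLevels ? level : LodMesh::kLevels - 1;
      }
      ```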
  • by Dark Paladin ( 116525 ) <jhummel&johnhummel,net> on Monday August 12, 2002 @12:10PM (#4055095) Homepage
    My own feelings on the little debate are rather simple. Unix machines (Linux, SGI, OS X) are becoming the standard for Hollywood-level movies - they're powerful, you can cluster them with relative ease, and they don't crash that often.

    DirectX 9 is really about games - render fewer polygons on the fly, and it only works with one operating system (any guesses on which one?).

    As games and movies start to approach each other in graphical abilities, I wonder if OpenGL will become more important as the Unix graphics programmers start getting pulled from their Toy Story 3 seats to help with the guys making Toy Story 3: The Game.

    Right now, the #1 reason why OpenGL is still in a good number of Windows machines is John Carmack. Will things change? Maybe, maybe not. But I still wonder about the future.

    • Don't you think we need OpenGL interfaces by now?
    • by silversurf ( 34707 ) on Monday August 12, 2002 @03:44PM (#4056675)
      >>Unix machines (Linux, SGI, OS X) are becoming the standard for Hollywood level style movies - they're powerful, you can cluster them with relative ease, and they don't crash that often.

      Actually, SGI is the standard and has been for a while. Linux is becoming one, and OS X probably never will be. And yes, these machines do crash a lot. Speaking from experience as someone who was one of the sysadmins of a render farm at an effects house, our 8- and 16-proc. Origin 2000s went down a lot. Not because the OS was messed up or we were bad admins, but because we pushed them really, really hard. I used to replace disks in arrays and on systems at least once a week. The workstations used to puke all the time. Why? NFS, SAN, memory, that crazy Maya script someone wrote, etc. etc. Any number of reasons.

      Unix is great and it gets the job done better overall, but keep in mind how hard these guys/gals are pushing this equipment. Linux on a pumped x86 is the next workstation of the movie industry as they look to cut costs; there are a lot of reasons that film studios and FX houses are looking at Linux, cost being probably the most important. Keeping down the per-shot/frame cost of matte painting and animation is really the key for the accountants and producers at these places.

      Remember, most CG work is stuff you don't notice in the movies, like color correction, background things like snow or rain, lighting effects, etc. Stuff like LoTR, tho more common these days, isn't the bread and butter of FX houses. It's the utility work that gets these guys through the times between the big projects.

      Side note: I don't dislike OS X; however, to say it's going to be the "film" standard is probably not correct. OS X with Maya will find its way into ad agencies and boutique design houses that are very Mac-centric, but the hardware cost is high overall and the software selection is poor. Many might argue with me, but the big FX houses are very invested in SGI hardware right now and are starting to put in Linux-based hardware to take advantage of the costs.

      -s
  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Monday August 12, 2002 @12:11PM (#4055105) Homepage Journal
    Before you all get excited about the Pixar-class films you're going to be able to make on your PC, I'd like to forecast when you will be able to do that.
    For most of you, that time is, unfortunately,
    NEVER.
    This is not at all due to lack of technical facilities or computer talent. You can, right now, make Pixar-quality films on your PC. Some talented people make some pretty good short films, and they will go on to more. The most important talent they rely on is not skill in computer imagery, but skill in telling a compelling story using all of the tools of the visual idiom. This is what most people don't have, and it is an essential element to producing good film. I don't have enough of it either, and I did go to film school. If more of us had that talent, videotape recorders and affordable home movie cameras would have had a much greater impact than they have had.

    But this is all going to be great fun for gaming, VR, simulation, and so on.

    Bruce

    • I sometimes wonder about the number of people who have the talent who go on to do something else.

      I usually think about a related area, music.

      There's a big tie between music and mathematics.

      But yet the free music scene still seems empty.

      Hrm...

      Perhaps there just needs to be some reason to want to create.

      This said by a guy who left violin after playing 18 years, to go do Computer Science. You could argue that I saw I didn't have what it took to make it, but I wonder how many others there are like me.
    • The most important talent they rely on is not skill in computer imagery, but skill in telling a compelling story using all of the tools of the visual idiom.

      You took the words out of my mouth. No amount of increase in rendering speed (and it's only a matter of time before a home PC can do high quality rendering in a sensible timeframe) will make you a movie. To do that you need both sufficient modelling skills to make your generated images and characters look good, and the ability to write a decent script. Neither of those are much affected by Moore's law (although tools to assist with modelling may make some progress here).

    • by CausticPuppy ( 82139 ) on Monday August 12, 2002 @12:37PM (#4055345)
      But imagine downloading Toy Story 3 or something to your PC... not as a pre-rendered movie, but as a real-time scripted 3D engine with a soundtrack. Run it in whatever resolution you are able to. Use your own camera angles.

      Or play a realtime version of Final Fantasy: The Spirits Within, but walk around the "set" in realtime with the characters or just keep the camera focused on Aki's bizznoobies.

      • Or play a realtime version of Final Fantasy: The Spirits Within, but walk around the "set" in realtime with the characters or just keep the camera focused on Aki's bizznoobies.

        I think that's never going to happen. Even when realtime graphics cards are capable of producing trillions of polygons per frame and are able to accurately render Aki and all the other digital characters in realtime, it takes a lot more than one render to create every frame in a movie like Final Fantasy. It takes painstaking compositing and color correcting of hundreds of layers to create the final image that you see in the theatre, and I think it will be way too complicated and time-consuming to make an image look good not only in one frame or sequence but in a full 3D environment that the viewer can rotate and "walk" around in. The only way I see that we will be able to walk around in photorealistic digital sets with unrestrained motion is when the graphics processor can handle real-time raytracing as well as global illumination and subsurface light scattering, and on top of that act exactly like a real camera lens, or a real eye - which is what compositors try to "fake" and imitate on every digital frame. I don't see that for a really long time, and I doubt it will even be worthwhile, since I don't pay to see the movie that I can dream up, but to see the director's vision of the story. It's the difference between a Hemingway novel and a choose-your-own-adventure.
    • IOW, we usually don't listen to amateur music and don't watch amateur movies because they suck from an entertainment standpoint, and not necessarily because the creators didn't have enough equipment.

      (Or, they spent so much on equipment that they have no promotional budget left.)
    • by FreeUser ( 11483 ) on Monday August 12, 2002 @01:34PM (#4055821)
      ... isn't the "rampant piracy" red herring they've been feeding the press and their tame politicians in Washington, D.C.; it is the possibility that anyone who does have a story to tell will be able to make a quality movie with nothing more than their home PC and a little time.

      Suddenly we don't need studios, we don't need actors, and we don't need tens or hundreds of millions to produce a blockbuster movie. And with the internet to distribute the material on, we don't need their distribution network of cinemas either.

      The most important talent they rely on is not skill in computer imagery, but skill in telling a compelling story using all of the tools of the visual idiom. This is what most people don't have, and it is an essential element to producing good film.

      Like musicians using home studios to record music, without talent this will go largely unused, or, more likely, there will be a lot of less-than-good material out there ... a state of affairs that mimics the current, cartel-controlled situation rather well, actually. Even if only 1 in 100,000 has the storytelling talent to put together a good film, that would amount to 2,800 potent competitors to the media cartels.

      Musicians really don't need million dollar studios anymore to produce an album, and while this means a lot of junk is pressed onto CD, it also means a lot of musicians are able to produce and market their music outside of the RIAA's cartel, through mp3.com and elsewhere. Hollywood doesn't fear the napstersization of their medium nearly as much as they fear the mp3.com-ization of it, and competition with a few thousand talented people not on their payroll.

      This, I think, is why we are experiencing such an onslaught of attempts, through legislation and back-door regulation via the FCC [fcc.gov] and a little-known "standards" body called the BPDG, to take both the internet and general computers out of the hands of private citizens.

      It isn't about 'piracy,' it is about competition, and they don't fear competition from 'everybody' so much as they fear general access to the tools, which means those talented persons not a part of the cartels would be able to compete for viewership and marketshare on a level playing field with the big studios.

      And that is something they simply cannot abide.
      • damn it, I hit submit when I meant to hit preview.

        Even if only 1 in 100,000 has the story telling talent to put together a good film, that would amount to 2,800 potent competitors to the media cartels.

        should have read:

        Even if only 1 in 100,000 has the story telling talent to put together a good film, that would amount to 2,800 potent competitors to the media cartels in the U.S. alone.
      • "Musicians really don't need million dollar studios anymore to produce an album, and while this means a lot of junk is pressed onto CD, it also means a lot of musicians are able to produce and market their music outside of the RIAA's cartel, through mp3.com and elsewhere."

        Unfortunately, MP3.com hasn't produced any great victories over the RIAA. Like it or not, people are generally too cheap to pay for anything that they can get for free. If you don't believe me, look back on all of the "hell no I won't pay..." posts that were posted right here on /. when Mandrake requested help.

        The sad fact is that most people are too self-centered and short-sighted to pay money unless forced to do so.
          Unfortunately, MP3.com hasn't produced any great victories over the RIAA. Like it or not, people are generally too cheap to pay for anything that they can get for free. If you don't believe me, look back on all of the "hell no I won't pay..." posts that were posted right here on /. when Mandrake requested help.

          And yet, I own several CDs of artists I discovered, and purchased, through mp3.com.

          Even more interesting, the Free Blender Campaign just passed the 80,000 mark, or 80% of the amount needed to purchase and GPL the source code from the Blender holding company.

          Clearly people are willing to pay, when they see a benefit, indeed, even when they can get things for free. With Mandrake, many didn't see a benefit (though even there, many others did).

          The sad fact is that most people are too self-centered and short-sighted to pay money unless forced to do so.

          There are other solutions. The fundamental design of the internet, for example, shares the cost of propagating information between the sender and the receiver. In other words, the basic design of TCP/IP is P2P in nature. Unfortunately, the HTTP protocol was designed in a client-server manner, placing the bulk of the cost on the provider and making the cost not scale gracefully as demand rises. Ditto for FTP.

          Contrast that with SMTP and even NNTP, as well as FreeNet. The "Free Speech" aspect of the internet depends in no small part on the "Free Beer" aspect of the internet, or, more correctly, on the balance of cost shared between sender and receiver (i.e. if "free speech" is expensive, only the wealthy will have freedom of speech, and the value of that freedom will become negligible to the common person).

          Once FreeNet, or another P2P application level infrastructure is in place, with solid search capabilities and HTML-like facilities, we may be able to return to a state of affairs where costs are shared naturally, and popular sites like slashdot are no longer incredibly expensive to run because bandwidth costs are shared and distributed across the entire network, among all those who read the content equally.

          This is why P2P is so important, and must be preserved from the depredations of the Copyright Cartels. Not for inane, juvenile file trading, but to fix the bottlenecks of the internet and to keep the medium free and accessible for all to use, regardless of wealth.
          • "And yet, I own several CDs of artists I discovered, and purchased, through mp3.com."

            And you are the exception. I and all of my friends own none.

            "Even more interesting, the Free Blender Campaign just passed the 80,000 mark, or 80% of the amount needed to purchase and GPL the source code from the Blender holding company."

            Interesting that you mention Free Blender. Blender is the 3D rendering program that was owned by a company that went bankrupt due to the very attitudes that I talked about. I hope that Blender does go GPL so that we don't lose this fine product, but it has not been a great success proving that voluntary contributions work. Conversely, it is a good example showing that what I have said is true.

            "Clearly people are willing to pay, when they see a benefit, indeed, even when they can get things for free. With Mandrake, many didn't see a benefit (though even there, many others did)."

            No, it's not "clear" at all. What is clear is that most are unwilling to pay. If you read the posts that I was talking about, you'll see that "seeing the benefit" had nothing to do with not paying. People just didn't care to pay. More than one poster said something to the effect of "I use Mandrake but I won't pay for it as long as I can get it for free."

            Obviously they found Mandrake useful - the fact that they use it proves that - but they had the mentality of blood-sucking leeches.

            "There are other solutions. The fundamental design of the internet, for example, shares the cost of propogating information between the sender and the receiver. In other words, the basic design of TCP/IP is P2P in nature. Unfortunately, the HTTP protocol was designed in a client-server manner, placing the bulk of the cost on the providor and making the cost not scale gracefully as demand rises."

            I work in a small shop where I am a programming/network specialist so I understand all about TCP/IP, P2P and HTTP. When I was in college they told us that there were two ways to get through school. 1. Dazzle them with data or 2. Buffalo them with bullshit.

            No offense, but your statement didn't dazzle me. It's rather a non-statement. Yes, HTTP is used by web servers (Apache, IIS, etc.) to communicate with web clients (Mozilla, Netscape, etc.), but it is the program itself that determines whether it is a P2P application, not the protocol that it uses. You could, for example, create a P2P program that used HTTP to communicate. Not very efficient, but it would still be P2P.

            As far as cost sharing goes, I don't think that your statement makes any sense. Sure, it costs companies money to do business on the internet. The cost of the physical servers, the software (if they are using Microsoft, anyway), the cost of system administrators, and employees to provide content. But what is the alternative, and why would it be better? The fact that companies spend a significant amount of money to do business on the internet does not affect me unless they pass the expense on to me. However, I have found that the internet makes shopping around for the best buy very easy, thus driving down prices. Would you suggest that we go exclusively P2P? That may be fine for MP3s (if you ignore the obvious problems of having the artists paid for their efforts), but what about products that simply cannot be transmitted over the internet? Amazon.com, for example, sells books, etc. Web servers/clients meet my needs quite nicely in these cases, thank you very much.

            What do you mean by your statement that costs don't "scale gracefully as demand rises?" The statement does not make sense. Yes, if a company needs greater bandwidth it will cost them more. So what? Again, P2P is not going to get me a new VCR from the electronics store of my choice. Web server/client technology is well suited for the task.

            "Contrast that with SMTP and even NNTP, as well as FreeNet. The "Free Speech" aspect of the internet depends in no small part on the "Free Beer" aspect of the internet, or, more correctly, on the balance of cost shared between sender and receiver (i.e. if "free speech" is expensive, only the wealthy will have freedom of speech, and the value of that freedom will become negligable to the common person)."

            You're going way off base with these statements. What does a mail protocol have to do with P2P? It is squarely in the realm of client-server technology, as are news servers. Free speech is not expensive, and I don't think it would be any cheaper if we adopted your suggestions. Although I'm unclear on exactly what it is you are suggesting.

            "Once FreeNet, or another P2P application level infrastructure is in place, with solid search capabilities and HTML-like facilities, we may be able to return to a state of affairs where costs are shared naturally, and popular sites like slashdot are no longer incredibly expensive to run because bandwidth costs are shared and distributed across the entire network, among all those who read the content equally."

            Popular sites like slashdot wouldn't exist in a totally P2P world and the cost to the individual would rise.

            Remember, the costs don't disappear, they get redistributed. If companies like Slashdot and Amazon.com aren't paying the share that they now pay, the lost revenues would have to be made up by you and me.

            "This is why P2P is so important, and must be preserved from the depradations of the Copyright Cartels. Not for inane, juvinile file trading, but to fix the bottlenecks of the internet and to keep the medium free and accessible for all to use, regardless of wealth."

            P2P has its place, as do the client/server technologies, and no doubt the copyright holders don't like having their IP stolen. But you haven't convinced me of, or even communicated to me, a better system than we have now.

    • Bruce Perens wisely said:

      Before you all get excited about the Pixar-class films you're going to be able to make on your PC, I'd like to forecast when you will be able to do that.

      For most of you, that time is, unfortunately, NEVER.

      Yes, talent is required. But consider this: there are lots of talented people out there in the performing arts -- writers, actors, dancers, directors, etc. Dropping the cost of means of production will allow people with such talents to create productions without going through the current production model. Making it easier to do things with computers will have a similar effect. That's where the real impact will be. Not with geeks with swelled heads.

    • In fact, bringing movie-quality, real-time CG to PCs will result in a lot more good movies being made. Right now, people can direct films only if they have the ability to work within the commercial film industry and have some knack for telling commercially successful stories. There are lots of excellent storytellers that would love to tell stories visually but don't want to put up with producers, financing, actors, sets, or any of that other junk and/or who have stories to tell that no company is interested in telling.

      That doesn't mean everybody will have the talent to make excellent movies, but it may expand the pool of people able to do it by several orders of magnitude.

      Besides, many commercial movies are apparently made by people who don't have a clue how to tell a good story.

    • Well, it hasn't seemed to stop Lucas when he made Sound of Music sequences in Ep 2. Or Spielberg from butchering "AI" and "Minority Report". It doesn't seem like talent is much of a requirement when it comes to making movies these days.

      And I realize that what I and other people can do is at most make "Phantom Edits" of current movies. The making of a complete movie is quite another issue. But the state of the art when it comes to storytelling these days is not really what I'd classify as impressive.

      I would think that the hardest part would be modelling and animation. There are quite a lot of good stories made by amateurs. (That is, short stories on paper.)

  • by zorander ( 85178 ) on Monday August 12, 2002 @12:13PM (#4055133) Homepage Journal
    A year back or so I did the Blender work for a Star Wars fan flick... Now this was only a fifteen-minute film... but the 5 minutes or so of 3D easily took a day to render. As this stuff gets faster, amateur movies will become better and more sophisticated. Low-budget films and TV shows will gain access, and the grasp of the MPAA will weaken. Anything that makes low-budget films easier is a good thing.

    With the internet and a DVD Burner, a low budget film could be distributed DIRECTLY on DVD. Now the films just need to get good enough that people will want them. This would be a good direction for both music and movies.

    Cool eye picture. How the heck do you make a model for that?

    Brian
  • by Milinar ( 176767 ) on Monday August 12, 2002 @12:16PM (#4055163) Homepage
    As computers get more powerful, our demands on them get greater. The time it takes to render a single frame of an animated feature has stayed pretty much constant over the last few years.

    I mean, come on people, it's apples and oranges here. Two similar tools for two VERY different purposes. Rendering 80 FPS at 1600x1200 makes good games, but I doubt there will ever be a day when film frames are rendered in real-time. There's just no reason to!

    That's not to say that yesterday's movies couldn't be rendered on today's technology. Absolutely! But tomorrow's movies are a different story.
    • I mean, come on people, it's apples and oranges here. Two similar tools for two VERY different purposes. Rendering 80 FPS at 1600x1200 makes good games, but I doubt there will ever be a day when film frames are rendered in real-time. There's just no reason to!

      Reason 1: To play games with film-quality graphics (or near enough as makes no difference).

      Reason 2: Great looking real-time architectural walkthroughs, CAD renderings and so on.

      Reason 3: To render 100 or more times as many frames per unit time in the film render farm.

      All of those reasons seem pretty compelling to me!

      I suspect you'll be changing your tune when you see what the R300 and NV30 can do... :-)

      • State of the art Hollywood films are never going to be mastered in real-time. If they can be rendered in real-time, they are not state of the art. Period.

        The animators may render in reduced resolution in real-time for the reasons you mentioned, but if it renders in real-time, they'll add more effects/higher resolution/sub-pixel tessellation/whatever, just because they can, for that tiny little bump in quality.

        Right now the biggest barrier is faces: CG faces (plus the hair) are a heck of a lot better than they used to be, but they're years away from photographic quality when animated.

        Bryan
  • There's got to be another goal that will step up after Quake (whatever number) looks as good as a live-action movie. I think it will probably be a new interface. Possibly VR-type things.

    I see Pixar's point that by the time PC graphics catch up to now, cinematic CG will be further down the road. But the gap is getting smaller. I would be really impressed if a desktop could make use of something like MASSIVE [sgi.com] in real time during a game. Good looks will look better when there is more intelligent behaviour of bots in games. Someday it will be possible, but by that time movies will be even prettier.

    Or maybe I'll finally be able to game at full speed without worrying about ruining the CDs I'm burning.
  • Multi chip and multi card solutions are also coming, meaning that you will be able to fit more frame rendering power in a single tower case than Pixar's entire rendering farm. Next year.

    The problem with this is that a company will never believe that such a miniaturization rate is possible.
    They prefer to think of that as "silly"
    ("silly" == "you are trying to rip them off")
    The proper way to do this is what most companies that make RAID servers and other computers with large hardware arrays are doing:
    1: Build a large motherboard with low circuit density.
    2: Build a case roughly the size of a small wardrobe.
    3: Attach the motherboard inside the case, and fit parts to the motherboard.
    4: Put lots of flashy lights on the case; contemplate adding a machine which goes 'bing'.
    5: Use lots of external connections, so that there is lots of assembly required and there are lots of bells, whistles, cords, and dongles hanging from the case after assembly.
    6: Next year, make the case about 2" smaller in each direction and raise the price about $250 per unit.

    You think I'm kidding - our school's grade server has less than half the volume of the one that was replaced, and they're not that different in performance.

    Remember - size may not matter if you know how to use it, but it may stop you from getting the chance to use it in the first place.
  • At long last, my computer will be capable of producing cinematic quality graphics, much like those in that famous Disney movie....

    TRON.

    I can't wait.
  • I can do nothing but agree that the visuals, CG and effects are all getting better, but what is the cost? I don't know about the rest of you, but it seems to me that the quality of the movies themselves has been going steadily down. The plots are getting thinner, the acting poorer, and it seems that everyone is just using visual quality as a crutch. I'd take a good old silent, black-and-white comedy by the likes of Chaplin, or an old Akira Kurosawa samurai film, over most comedies and action movies that come out these days. Or, a little more recently, compare the older Star Wars or James Bond movies to the newer ones; sure, the fight scenes have gotten cooler, but the acting and stories are nowhere near as good. I love visuals, but can we use them to make good movies instead of just more pulp?
  • It may be the case that in, say, 2005, you can create CGI graphics on your home computer that are equal to the quality of 2002 movie graphics. But the movie graphics standard won't be sitting still - it will always be ahead. Both will always get better. CGI studios will always be willing to invest more in their equipment than a home user, and they will always be able to get better results by spending more money. You can't convince me that someday the $200 video card in my home PC will be equal to the cluster of high-end hardware sitting in LucasArts' server room at that same moment.
    • This assumes that a steady quality increase is possible. But maybe there's a point at which you just don't see any difference at all? Remember, our eyes just have a given (very high, but still finite) quality, too. That is, there are differences our eyes just won't see any more. So it may well be possible that we reach some point where adding more computer power simply doesn't make sense, because you simply won't see the difference.

      Now, I don't say you'll get to that point in the next years, but I think eventually this point might be reached.
  • by Wesley Everest ( 446824 ) on Monday August 12, 2002 @12:36PM (#4055328)
    Already, if you have enough polygons, raytracing is faster than rasterization. The only problem is that the crossover point for interactive frame rates hasn't been reached yet. Check out some of the current research [uni-sb.de] in interactive raytracing.

    Basically, the explanation is that rasterization takes time proportional to the number of polygons to render a frame, while raytracing takes time proportional to the log of the number of polygons. That might make you think raytracing should always be faster, which it clearly isn't -- the reason it isn't is that the constant factor in each is very different. So you have a*N vs b*log(N), where b is much bigger than a. As N gets bigger (apparently in the neighborhood of 10 million polygons), the difference between a and b becomes less important than the difference between N and log(N).

    The main benefits of raytracing over rasterizing are that it is very easy to get things like reflections, shadows, refraction, and other important effects with raytracing, but with rasterizing, you need to do a lot of complicated and CPU-intensive hacks to get the same effect. Another benefit is that raytracing is parallelizable while rasterization generally isn't. That means that if you have a raytracing accelerator card in your PC, you can nearly double the frame rate or resolution by adding a second raytracing card.

    Of course, it's all a chicken-and-egg sort of thing: nobody's going to buy a raytracing card until it's a cheap way to do the rendering they want, and it won't be available unless there is a market. Fortunately, there is research [uiuc.edu] into using the next generation of rasterizing graphics cards to greatly speed up raytracing. This will help bridge the gap by making raytraced games possible using soon-to-be-existing hardware.
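    A quick numeric sketch of that a*N vs. b*log(N) argument; the constants are invented stand-ins for per-polygon rasterization cost and per-frame raytracing overhead, with b/a picked so the crossover lands near the ten-million-polygon neighbourhood mentioned above:

    ```cpp
    #include <cmath>
    #include <cstdio>

    int main() {
        const double a = 1.0;       // hypothetical per-polygon rasterization cost
        const double b = 500000.0;  // hypothetical raytracing constant (much larger than a)

        for (double n = 1e3; n <= 1e9; n *= 10.0) {
            double raster   = a * n;              // O(N)
            double raytrace = b * std::log2(n);   // O(log N)
            std::printf("N = %.0e  raster = %.3g  raytrace = %.3g  -> %s\n",
                        n, raster, raytrace,
                        raytrace < raster ? "raytracing cheaper" : "rasterization cheaper");
        }
        return 0;
    }
    ```

    With these made-up constants the switch happens between 10^7 and 10^8 polygons; the real crossover depends entirely on the actual values of a and b.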

    • Another neat thing about raytracing is that it's very easy to render just a subset of pixels. I remember Strata, an old 3D program for the Mac, had an option to select a part of the screen and render just that bit. It really helped when you were trying to get a particular detail on the screen (like a reflection or shading or image map) to come out right.

      I don't know if that has any application to realtime raytracing for games, but it's neat anyhow.
    • by sacrilicious ( 316896 ) <qbgfynfu.opt@recursor.net> on Monday August 12, 2002 @01:10PM (#4055596) Homepage
      Another benefit is that raytracing is parallelizable while rasterization generally isn't.

      Not true... in fact, the opposite is the truth. Rasterization - which I assume is used here to refer to z-buffering - is very highly parallelizable, while raytracing is only moderately so. The fundamental reason for this is that z-buffering needs only serial access to the database of polygons, while raytracing needs unpredictable access to the polygons depending on where rays are reflected and where lights cast shadows. This means that raytracing is subject to a high degree of unpredictable memory needs and disk accesses, whereas z-buffering can be nicely predicted, pipelined, and parallelized. This is why hardware implementations of z-buffering are a dime a dozen, while the only hardware implementations I've seen of raytracing parallelize only one very tiny portion of the rendering pipeline and have all kinds of problems that as yet are not addressed sufficiently for practical needs.

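      The streaming property being described shows up even in a toy depth-buffer loop: every triangle is consumed once, in order, with no dependence on the rest of the scene (a minimal sketch, not real pipeline code; the scan-converter is stubbed out):

      ```cpp
      #include <vector>
      #include <limits>

      struct Fragment { int x, y; float depth; float r, g, b; };

      // Stub: a real rasterizer would scan-convert the projected triangle here.
      std::vector<Fragment> rasterizeTriangle(int /*triangleIndex*/) { return {}; }

      void renderPass(int width, int height, int triangleCount) {
          std::vector<float> zbuffer(width * height, std::numeric_limits<float>::max());
          std::vector<float> color(width * height * 3, 0.0f);

          // One serial pass over the polygon list: each triangle touches only the pixels
          // it covers, so the stream can be split across parallel units without random
          // access to the rest of the scene -- unlike a ray that may bounce anywhere.
          for (int t = 0; t < triangleCount; ++t) {
              for (const Fragment& f : rasterizeTriangle(t)) {
                  int i = f.y * width + f.x;
                  if (f.depth < zbuffer[i]) {  // keep the nearest surface
                      zbuffer[i] = f.depth;
                      color[3 * i + 0] = f.r;
                      color[3 * i + 1] = f.g;
                      color[3 * i + 2] = f.b;
                  }
              }
          }
      }
      ```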

      • Rasterization - which I assume is used here to refer to zbuffering - is very highly parallelizable

        Except for that little problem with alpha-blending...

      • by Anonymous Coward
        Not quite; generally one does something along the lines of BSP processing on the scene before raytracing, excluding any elements that move outside of said nodes in the BSP tree. This can vastly reduce one's search space for polygonal access. Furthermore, if one wishes to use reflections or shadows with a rasterizer, one is still required to use non-serial access to the polygon list.

        Raytracing is quite parallelizable in a macroscopic sense (shared source data). Each pixel carries no dependencies on any other. But one problem is the need for a large cache of intermediary data during processing for a single pixel, which penalizes multiple rendering paths; they would each require as much memory. I imagine a raytracing chip will not evaluate the pixels in the image but will speed up common operations (color mixing, vector operations, ray intersections, texture calc., etc.).

        Finally, raytracing opens up new avenues for modelling and realizing objects as polygon counts skyrocket: it allows one to directly render non-triangular objects whose intersections and normals are easily formulated, for example quadrics, planes, spheres and simple NURBS.
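        A rough illustration of how that spatial preprocessing cuts the search space: rays walk a bounding-volume tree and only test the leaves whose boxes they actually pierce (a simplified sketch; real systems use BSP trees, kd-trees or BVHs with many refinements, and the intersection tests are stubbed out here):

        ```cpp
        #include <memory>
        #include <vector>

        struct Ray  { float origin[3], dir[3]; };
        struct Aabb { float min[3], max[3]; };

        // Stubs: a real tracer would do a slab test and a triangle intersection test here.
        bool rayHitsBox(const Ray&, const Aabb&) { return true; }
        bool rayHitsTriangle(const Ray&, int /*triangle*/, float* tOut) { *tOut = 0.0f; return false; }

        struct BvhNode {
            Aabb bounds;
            std::unique_ptr<BvhNode> left, right;  // both null for a leaf
            std::vector<int> triangles;            // populated only in leaves
        };

        // Traversal: subtrees whose bounds the ray misses are skipped wholesale,
        // which is where the roughly logarithmic cost in scene size comes from.
        void intersect(const BvhNode& node, const Ray& ray, float* nearestT, int* nearestTri) {
            if (!rayHitsBox(ray, node.bounds)) return;
            if (!node.left && !node.right) {
                for (int tri : node.triangles) {
                    float t;
                    if (rayHitsTriangle(ray, tri, &t) && t < *nearestT) { *nearestT = t; *nearestTri = tri; }
                }
                return;
            }
            if (node.left)  intersect(*node.left,  ray, nearestT, nearestTri);
            if (node.right) intersect(*node.right, ray, nearestT, nearestTri);
        }
        ```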
    • There was an interesting panel on this subject at SIGGRAPH a few weeks ago. An attendee from Pixar said that their main issue now is that their scenes are so large that it's becoming impractical to keep them in memory throughout the entire render, even with a smart acceleration data structure (Pixar's RenderMan, although fundamentally a rasterizer, needs to keep at least bounding boxes for the entire scene in RAM due to its "bucketing" process). Also, he noted that many of Pixar's scenes are I/O bound - it takes longer to read the geometry and textures over the network than it takes to actually render them!

      The gentleman went on to describe Pixar's work on a new renderer which (if I heard correctly) goes to the extreme of keeping no scene data permanently in RAM, and just streams primitives in and splats them to the screen.

      So, while most rasterizers are indeed O(n) in the number of geometric primitives, a raytracer or other retained-mode renderer could be "O(n + log(n))", if you count the I/O required to read the scene and build the in-memory representation!

      Of course, I've conveniently ignored the fact that RenderMan and its siblings cannot handle any sort of non-local lighting computations (true reflections, shadows, ambient illumination, etc.). But all of the high-end 3D studios seem to prefer "faking" these effects (with shadow/reflection maps or separate "ambient occlusion" passes) rather than taking the extreme speed penalty of a retained-mode renderer.

      (incidentally the same sort of issue came up at Digital Domain's presentation on their voxel renderer - although they started out using volume ray-marching, the render times got so long they switched to voxel splatting, and used various tricks to make up for the lower image quality)
      • So, while most rasterizers are indeed O(n) in the number of geometric primitives, a raytracer or other retained-mode renderer could be "O(n + log(n))", if you count the I/O required to read the scene and build the in-memory representation!
        It's not that bad, but there are definitely some unsolved issues with raytracing. If the scene is entirely static and you're just doing a fly-through, then all the processing of the scene can be done beforehand and it truly is O(log N) regardless of camera movement. The problem is that as you get more and more dynamic objects, you need to update their position in the bounding hierarchy. If your scene is a random collection of tiny triangles all flying around randomly at high speed, then it all breaks down and you might as well give up and do something that is O(N) rather than some sort of N log N reconstruction of the hierarchy. But if the number of moving objects is small and/or they are moving slowly compared to their size, and each moving object tends to be a collection of polygons that move as a unit (say, an animated character), then updating the hierarchy isn't so expensive, and much of the work can be done as needed. Then you certainly don't need O(N) I/O time.
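        To make the 'update the hierarchy' point concrete, here is a toy refit pass: when an object that moves as a unit changes position, only its leaf and the boxes on the path up to the root get recomputed, which stays cheap as long as most of the scene holds still (hypothetical data structure, a minimal sketch):

        ```cpp
        #include <algorithm>
        #include <cstddef>
        #include <vector>

        struct Aabb {
            float min[3], max[3];
            void expand(const Aabb& o) {
                for (int i = 0; i < 3; ++i) {
                    min[i] = std::min(min[i], o.min[i]);
                    max[i] = std::max(max[i], o.max[i]);
                }
            }
        };

        struct Node {
            Aabb bounds;
            Node* parent;                 // null at the root
            std::vector<Node*> children;  // empty for leaves holding one object
        };

        // Called when the object in 'leaf' has moved: recompute bounds along the path to
        // the root. O(tree depth) per moving object, instead of rebuilding the whole tree.
        void refit(Node* leaf, const Aabb& newObjectBounds) {
            leaf->bounds = newObjectBounds;
            for (Node* n = leaf->parent; n != nullptr; n = n->parent) {
                if (n->children.empty()) continue;
                Aabb merged = n->children.front()->bounds;
                for (std::size_t i = 1; i < n->children.size(); ++i)
                    merged.expand(n->children[i]->bounds);
                n->bounds = merged;
            }
        }
        ```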
  • Several years ago I started planning for a house to build. I used POV-Ray to see what it would look like. Not photo-realism, but what could I see from the kitchen, where does light come in at different times of the year - practical stuff like that.

    What is most annoying is having to specify an exact position and camera angle, wait a few hours, and see that it wasn't quite what I wanted, edit, repeat. A complete generation of all collected scenes takes a couple of months on a dual Athlon.

    What I would like is to "move" in real time thru the house plans. Obviously games can do something like that. Freshmeat shows a ton of projects, too many to tell me which are useful and good.

    What most seem to lack is a language like POV-Ray has, where I can move a wall and the stairs code recalculates how many steps go before and after the landing for a stairwell. Point and click, drag and drop, neither interests me, because moving a wall this way would be too tedious and prone to error, when the steps adjustment requires moving a door, appliances on the other side, etc.

    Any recommendations? Conversion from, or directly using, POV-Ray sources would be best. It has to have some kind of specification language, even if I have to write my own POV-Ray converter or front end. Fine detail is not necessary. As long as I can move the sun around, and it shows walls and windows and doors, that's good enough. Even one frame a minute would be acceptable, but several frames a second would be wonderful.
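    The stairs example is exactly the kind of constraint a scene description language lets you embed: change the floor-to-floor rise or the landing and the step count falls out. A made-up sketch of the calculation (not POV-Ray syntax):

    ```cpp
    #include <cmath>
    #include <cstdio>

    struct Stairs { int steps; double riser; };

    // Given a floor-to-floor rise and a maximum comfortable riser height, derive the
    // number of steps and the actual riser, the way a parametric scene file would
    // whenever a wall or landing moves.
    Stairs planStairs(double totalRiseMeters, double maxRiserMeters = 0.19) {
        int steps = static_cast<int>(std::ceil(totalRiseMeters / maxRiserMeters));
        return { steps, totalRiseMeters / steps };
    }

    int main() {
        Stairs s = planStairs(2.7);  // e.g. 2.7 m floor-to-floor
        std::printf("%d steps of %.3f m each\n", s.steps, s.riser);
        return 0;
    }
    ```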
  • Don't expect to see nVidia- or ATI-rendered imagery on the big screen anytime soon. Yes, graphics cards are advancing in ability and speed. However, this is largely irrelevant to the world of CGI. No matter how good it looks on a monitor, it's not good enough for film.

    Yes, I'm sure a Toy Story 2 quality film could be rendered on one of these cards. However, by the time these cards come out, TS2 will be nearly 3 years old.

    The vision of the artists themselves is what drives CGI, and no matter how good the real-time solution is, the brute-force computational solution will be better - simply because it doesn't have to be real-time.

    Fast graphics cards definitely make animators' lives easier. We (a medium-sized visual effects studio) recently switched from SGI Octanes to IntelliStations with ATI FireGL 8800s running Linux. Being able to tumble fully-shaded medium-res characters in real-time is sweet. But it's not good enough, even for TV. And by the time a card exists that IS good enough, the bar will be much higher.

    This is what Pixar's Photorealistic RenderMan does RIGHT NOW. link [pixar.com]

    Now show me that in real-time and I'll marry your ugly Aunt Hilda.

  • ugh (Score:5, Insightful)

    by tux-sucks ( 550871 ) on Monday August 12, 2002 @12:55PM (#4055468)
    I get sick of hearing this crap. "When will my graphics card be able to do rendering?!", "When will my graphics card be able to display Pixar-quality rendering?!", "When will my graphics card be able to put out graphic realism?" Etc., etc., etc.

    This is a bunch of crap. By the time your PlayStation 6 or GeForce 7 or whatever the hell it's going to be gets to a point where it can run enough cycles to achieve Toy Story-quality pictures in real time (which is still years off), the bar will be raised again for CGI.

    Just as Moore's law doubles technology, the technology of rendered CGI doubles. Think back to when Cray supercomputers rendered frames and took about an hour a frame for untextured geometry with little of the properties that are available today. Today, the images still take an hour a frame, even though the technology is billions of times faster. Why is that? Because CGI artists will continually pump in as much as they can per frame. If it took 20 minutes last year, it's going to take 20 minutes this year, because studio X is going to add some new thing that improves quality but still retains their time margin.

    Do you honestly think that GPUs are going to be able to achieve real-time radiosity in the next couple of years? Real-time raytracing like renderers have now? Hundreds of thousands of blades of grass with no tricks? Individual hairs? Do you think that will happen anytime soon? Perhaps sometime - but when it does, pre-rendered images will feature something new that real-time can't match. Face it - real-time graphics will never replace the quality of pre-rendered.
    • Short answer: yes (Score:5, Informative)

      by SeanAhern ( 25764 ) on Monday August 12, 2002 @02:43PM (#4056293) Journal
      Do you honestly think that gpu's are going to be able to achieve real-time radiosity in next couple years? Real time raytracing like renderes have now? Hundreds of thousands of blades of grass with no tricks? Individual hairs? Do you think that will happen anytime soon?

      I used to think as you do. That was before I got a large amount of education while attending Siggraph this year.

      At Siggraph, I saw a live demonstration of a real-time raytracer that was also computing a diffuse interreflection solution (radiosity-like, for those who don't understand) on the fly. I also saw a real-time recursive raytracer written by Stanford researchers that was implemented in a GPU's pixel shader. There is currently research on displacement map "textures" that could conceivably let you render thousands of blades of grass or individual hairs without having to send all of that geometry down the AGP bus.

      All of these things blow the doors off what people think a graphics chip can do. Your post would have been accurate last year. Not now.

      I will agree with one point: software-based rendering will always be able to compute certain effects that will be difficult or cumbersome to do in a GPU. But I'll also claim that the gap is dramatically shrinking.

      I'll also say that the two techniques are not really in conflict. You can use them both in conjunction with each other. You can use a hardware-accelerated Z-buffer to help a raytracer determine first-level occlusion. You can use a raytracer to generate textures and maps for a GPU. In the future, we will see both techniques used to complement each other.
    • We won't see real-time in state of the art for years, but we are starting to see convergence:

      I.e., it used to be one hour per frame. In the future it might be several seconds per frame. But it won't be 25 per second; not until we can do physical modelling on the human body at just a bit higher than the cell level in real time will Hollywood agree that real-time is good enough.

      Bryan
  • As a CG person... (Score:5, Insightful)

    by Frobnicator ( 565869 ) on Monday August 12, 2002 @12:57PM (#4055482) Journal
    ... I can tell you that for the foreseeable future, realtime movie-quality 3D is out of the question. PERIOD

    While it isn't addressed in the article, there are a LOT of things that need to be handled in the hardware that just aren't there.

    • Orders of magnitude more polygons. Artists want more polys. I personally would want at least four polys per pixel at any view, just to make sure that the rendering would be correct.
    • Raytracing and Radiosity. Both of these have been proposed in realtime; there is a realtime raytracer (RTRT, google on it) and realtime radiosity is a SIGGRAPH paper (look it up, too). BUT they only work with limited numbers of objects. They need to work with the number of polygons given above, and they can't.
    • Lights and light maps. Current video cards are limited to just a few lights. The latest generation can only do 8 lights. That works for games; Quake uses just a few, and artists hate being limited to them. The video cards would need to handle thousands of light sources, and be able to process light maps (for shadows) in realtime.
    • Textures. Currently we use textures to hide the fact that we don't have detail. But as long as that detail is missing, things will look bad up close. The 'ideal image' is not a wireframe with a texture draped over it. The 'ideal image' is based completely on the vertex lists. To build a model at the right detail, each pixel in the texture would need to be replaced with a vertex, including color and other material properties, normals, edges, etc. So each value goes from a 32-bit RGBA color entry to a fairly big (about 1000-bit) structure (see the sketch after this comment).
    • RAM and BUS speed, and model size. Once we have these massive scenes, we have the bottlenecks of RAM and the system bus. We have always fought these in graphics, which is why the push from ISA to PCI to AGP. Trying to make the graphics better will just compound the problem.
    The facts are that these won't go away. We will continue making texture mapped wireframe models for the near future. By the time the graphics card industry can do realtime what movie studios do in months today, the movie studios will be playing with the ideas above.

    A perfect rendering system is nearly infinitely recursive, requires infinitely detailed models, and takes a long time to render. We can't do the infinite perfect system, but we can tell our artists to let it run for about an hour per frame. That means 'no realtime top-of-the-line movies', no matter what.

    frob.
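    To put a rough number on the "fairly big structure" point in the Textures bullet above, here is a hypothetical per-vertex record of the kind being described; the exact fields are invented, but even this modest set dwarfs a 32-bit texel:

    ```cpp
    #include <cstdio>

    // A texel is 4 bytes of RGBA. A "geometry instead of texture" vertex needs far more.
    struct DetailedVertex {
        float position[3];       // 12 bytes
        float normal[3];         // 12 bytes
        float color[4];          // 16 bytes
        float specular[4];       // 16 bytes
        float roughness;         //  4 bytes
        int   adjacentEdges[6];  // 24 bytes of connectivity
    };

    int main() {
        std::printf("texel: 4 bytes; vertex: %zu bytes (%zu bits)\n",
                    sizeof(DetailedVertex), sizeof(DetailedVertex) * 8);
        return 0;
    }
    ```

    That already works out to hundreds of bits per sample before any extra material parameters are added, which is the order of magnitude the comment is pointing at.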

    • by Animats ( 122034 ) on Monday August 12, 2002 @01:28PM (#4055759) Homepage
      • Orders of magnitude more polys
        You only need all that detail for nearby objects, which is what subdivision surfaces and level of detail processing are for. With procedural shaders and bump mapping, you don't need that much for most surfaces. The detail may be there in the model, but only a small fraction of it needs to go through the graphics pipeline for any given viewpoint. Given that pixel size is finite and human vision has finite resolution, at some point you max out.
      • Radiosity
        For fixed lighting, you can do radiosity in realtime. (Check out Lightscape, now called 3D Viz.) The radiosity map only has to be recomputed when the lights move. Mostly this is used for architectural fly-throughs. Of course, Myst and Riven are basically architectural fly-throughs. (They're rendered with multiple hand-placed lights in Softimage, though; when they were made, the radiosity renderers couldn't handle a scene that big.) A rough sketch of the precompute-and-sample idea follows this comment.
      • Textures vs. geometry
        I tend to agree, but at some level of detail, you can render geometry into a texture (maybe with a bump map) and use that until you get really close. Microsoft prototyped a system for doing this automatically a few years back, called Talisman. Talisman was a flop as a graphics card architecture, but as a component of a level of detail system, it has potential.
      • RAM and bus speed
        Moving around in a big virtual world is going to require massive data movement. But we're getting there. This may be the driver that makes 64-bit machines mainstream. Games may need more than 4GB of RAM.
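
      A minimal C++ sketch of the level-of-detail selection mentioned in the first point; the pixel thresholds here are made up, and a real engine would tune them per asset.

        #include <cmath>

        // Pick a mesh based on roughly how many pixels the object's bounding
        // sphere covers on screen: full detail only for nearby, large objects.
        int chooseLOD(float distance, float radius,
                      float screenHeightPx, float verticalFovRadians) {
            float projectedPx = (radius / distance) * (screenHeightPx * 0.5f)
                                / std::tan(verticalFovRadians * 0.5f);
            if (projectedPx > 400.0f) return 0;  // full-detail mesh
            if (projectedPx > 100.0f) return 1;  // reduced mesh
            if (projectedPx > 20.0f)  return 2;  // coarse mesh
            return 3;                            // imposter / textured quad
        }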

      Rendering isn't the problem, anyway. Physics, motion generation, and AI are the next frontiers in realism.

    • Well why don't you just demand to render reality while you're at it? Insist on calculating the molecular makeup of each splinter of wood. Really, explain to me why 1 poly per pixel is so much worse than 4 poly/pixel. I doubt even Monsters Inc. had 4poly/pixel.

      What scene requires 1000 lights? Rendering Spider-Man swinging from a Manhattan building with every light through the windows accurately rendered? Please. In the recent /. interview with the FX guy, he made it clear Hollywood uses reality when it suits them and fakes it when it doesn't; time and money are often the reasons for fakery. 1000 individual lights are completely unneeded; even the 1000 fluorescent tubes in the computer lab I'm typing from could be faked easily. As for textures, RAM and bus speed, if you read the article you'd see DX9 helps fake high-poly models by combining a lower-poly model with displacement maps.

      Movies aren't perfect either; see the foam columns in the Matrix lobby. Rendering movie quality in real time isn't about rendering a scene perfectly, it's about making our eyes think it's perfect. So in ten years, assuming Texas Instruments doesn't have a gargantuan breakthrough allowing 10,000 by 18,500 pixel projection, realtime top-of-the-line could be possible.
      • Well why don't you just demand to render reality while you're at it?

        Nobody in the entertainment industry wants to render reality. That's because there is nothing at all real about movies.

        No scene has 1000 lights, by the way, unless you're generating lights automatically from a HDR light probe image. We're talking on the scale of a hundred or so here, compared with the ten or so which modern video cards can handle.

        BTW, a hundred or so lights is not uncommon on a big traditional movie set, either.
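
        As a sketch of how a scene gets past the hardware limit today: draw it once per batch of eight lights and add the passes together. This is the standard multipass trick, not any particular engine's code, and drawScene() is just an assumed helper.

          #include <GL/gl.h>
          #include <cstddef>
          #include <vector>

          struct Light { GLfloat position[4]; GLfloat color[4]; };

          void drawScene();   // assumed: draws the geometry with current GL state

          // Light the scene eight lights at a time; later batches are added
          // into the framebuffer with additive blending.
          void renderWithManyLights(const std::vector<Light>& lights) {
              for (std::size_t first = 0; first < lights.size(); first += 8) {
                  if (first == 0) {
                      glDisable(GL_BLEND);          // first batch fills the framebuffer
                      glDepthFunc(GL_LESS);
                  } else {
                      glEnable(GL_BLEND);
                      glBlendFunc(GL_ONE, GL_ONE);  // later batches add their light
                      glDepthFunc(GL_EQUAL);        // only re-shade pixels already drawn
                  }
                  for (int i = 0; i < 8; ++i) {
                      if (first + i < lights.size()) {
                          glEnable(GL_LIGHT0 + i);
                          glLightfv(GL_LIGHT0 + i, GL_POSITION, lights[first + i].position);
                          glLightfv(GL_LIGHT0 + i, GL_DIFFUSE, lights[first + i].color);
                      } else {
                          glDisable(GL_LIGHT0 + i);
                      }
                  }
                  drawScene();
              }
          }
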

        I doubt even Monsters Inc. had 4poly/pixel.

        Actually, Monsters Inc ran at somewhat over one polygon per pixel. PRMan dices curved primitives down to one polygon per pixel as part of the rendering process. The figure is configurable on a per-object basis, and it's automatically lower for highly blurred objects (e.g. objects which are out of focus or moving), but this is made up for by rendering more layers around silhouette edges.

        Of course, more geometry than this actually goes into the pipeline. About 2Gb gzipped is a typical size, according to Tom Duff; more if there's a lot of LOD (e.g. the room full of doors).

    • I have to note that the delta between perfect and good enough is ever decreasing, and the law of diminishing returns is already starting to kick in. Eventually, that extra hour of processing per frame will improve image quality by an imperceptible amount, because the eye and brain have bandwidth and processing limits of their own, and once you've reached them, who cares how much better it gets.

      Oh, and just to have a dig, a good real-time CG artist can generally achieve the same effect that a normal CG artist can, with an order of magnitude fewer resources (polys, texels, lights), precisely because they have to. It's good to have to work within constraints. Stops you getting lazy... ;)

  • ...Bankrupt Hollywood, since it'd put actors, grip boys and other essential film personnel out of work; CGI actors don't need pay! ...Or did I just read something like this a couple of stories back? Naaah.
  • by DG ( 989 ) on Monday August 12, 2002 @01:30PM (#4055779) Homepage Journal
    Man, you can tell at a glance who read the article and who didn't.

    I'm going to simplify a great deal here to try and boil this down to the essence. John Carmack please feel free to correct any mistakes I make.

    Up to this point, the imagery coming out of the gaming graphics cards has been limited by the hardware design of the cards. The feature set implemented by the cards limits how complicated you can get with the details in the final image.

    Note that we're not just talking about simple things like pure polygon counts. Film industry CGI isn't of higher quality just because they throw more polygons at the problem; they have all kinds of highly complex shaders that can generate special textures without changing the number of polygons in the model. If you've seen the "special features" on the Shrek DVD, you can see this at work in Donkey's fur.

    Rendering all these extra shaders is CPU expensive, which is why the big animation houses have big render farms.

    But two things have happened that stand to change that.

    The first (and most ingenious) is the discovery that you can compile any shader into a series of OpenGL commands. The tradeoff is that implementing a given shader may require a large number of passes through the pipeline, but even so, any shader currently in use by a Hollywood Mouse House can, in theory, be compiled into OpenGL and executed on any OpenGL card.

    And here's the really cool part - rendering in OpenGL is many times faster than doing it in software on a general-purpose CPU. Many, many times faster.

    Secondly, the biggest problem with trying to crank Shrek through your GF2MX400 (assuming you've compiled all the shaders into OpenGL) is that each shader may require 200 passes, while the data structures inside the card lack precision: the floats don't have enough bits, or they aren't floats at all, but integers.

    That means the data is being savaged by rounding errors and lack of precision during each render pass. It's like photocopying photocopies.

    BUT, the latest generation of graphics chips has the necessary precision to do 200-pass rendering without falling victim to rounding errors.
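
    A toy C++ illustration of the photocopying point (not real GPU code): push the same value through 200 small operations, once rounded to 8 bits per channel after every pass and once kept in floating point.

        #include <cmath>
        #include <cstdio>

        int main() {
            const int    passes = 200;
            const double gain   = 1.003;   // some small per-pass operation

            double asFloat = 0.5;          // high-precision pipeline
            double as8bit  = 0.5;          // snapped to 8 bits after every pass

            for (int i = 0; i < passes; ++i) {
                asFloat = asFloat * gain;
                as8bit  = std::round(as8bit * gain * 255.0) / 255.0;
            }
            // The float pipeline ends up around 0.91; the 8-bit one never escapes
            // ~0.502, because each pass's tiny change is smaller than one 8-bit step.
            std::printf("float: %.4f   8-bit: %.4f\n", asFloat, as8bit);
        }
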

    Combine these two things, and you can quite literally take a frame from Shrek, with all its crazy shaders, compile it to OpenGL, and render the frame on your GF6-whatever **faster than the native render platform**.

    A very good deal faster than the native render platform.

    Is this "Shrek in real-time"? No, not by a long shot. But it may well be "Pixar's renderfarm in a box".

    Now, as Bruce pointed out, having Pixar's renderfarm in a box doesn't make you Pixar. There is still a requirement for artistic talent. But all that cheap extra horsepower may well mean that the quality of CGI is going to explode for those talented enough to make use of it.

    How will this affect games? It makes a bunch of shader techniques that were previously available only to the movie industry possible within the framework of a game. And it divorces, somewhat, the game visuals from the card's hardware, because these shaders are executed as general-purpose OpenGL instructions, not as dedicated hardware on the card. If you, as a game designer, can write a "fur shader" that runs in few enough passes to meet real-time output timings, then you get fur on your model, even if the card doesn't have a built-in "fur shader" or "fur engine".
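
    For a flavour of what such a multipass fur effect might look like on the host side, here is a rough C++/OpenGL sketch of "shell" fur; drawMeshOffset() and the fur texture are assumptions for the sake of the example, not any card's built-in feature.

        #include <GL/gl.h>

        // Assumed helper: draws the model with every vertex pushed out along its
        // normal by 'offset' units (on the CPU or in a vertex program).
        void drawMeshOffset(float offset);

        // Shell fur: draw the skin, then a stack of translucent shells above it,
        // each textured with sparse "hair cross-section" dots that get blended.
        void renderFur(GLuint furTexture, int shellCount, float shellSpacing) {
            glBindTexture(GL_TEXTURE_2D, furTexture);
            drawMeshOffset(0.0f);                 // base pass: the skin itself
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glDepthMask(GL_FALSE);                // shells should not occlude each other
            for (int i = 1; i <= shellCount; ++i)
                drawMeshOffset(i * shellSpacing); // one pass per slice of the coat
            glDepthMask(GL_TRUE);
            glDisable(GL_BLEND);
        }
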

    THIS is why this is all such a big deal. The amount of quality per mSec of render time is about to explode.

    Cool stuff!

    DG
    • A very good deal faster than the native render platform
      However, as Tom Duff asked in a rebuttal, is it really 800,000 times faster? And can the PC it's in feed it the 500MB of data per frame it would need to achieve that performance?
  • NVidia has released a compiler for their GPUs, Cg (C for graphics). I had great fun playing with it and seeing different effects (charcoal, dynamic fur, ...) in real time (pixel & vertex shading). It's even open source :P

    See http://developer.nvidia.com/view.asp?PAGE=cg_main [nvidia.com] and www.cgshaders.org [cgshaders.org].

    Sack the sigs.
  • by norwoodites ( 226775 ) <pinskia@BOHRgmail.com minus physicist> on Monday August 12, 2002 @01:51PM (#4055925) Journal
    There is already a high-level, LGPLed graphics API: Quesa (http://www.quesa.org/), an implementation of Apple's QuickDraw 3D API. It can sit above any other API, such as OpenGL or Direct3D. It is scene-oriented, it's a pretty cool API, and it's a lot easier to use than Direct3D or OpenGL. The file format for saving scenes is 3DMF, binary or text (XML-like); in fact, the binary format was proposed as the binary format for VRML.
  • Can we simulate all the lawyers and bean counters and keep the actors please??? ;->

    PPP
  • I don't think polygon graphics in games have more than 10-15 years left. At that point we'll move on to advanced voxel-based technology. Voxels have a couple of important advantages: objects don't get angular edges at all, and, most importantly, game physics can become a lot more authentic. Still, voxels require lots of CPU and RAM power, so they won't arrive just yet.

    ---
  • The irony is that if graphics cards take the place of 'Beowulf clusters' for RenderMan rendering farms, this whole scenario is a net minus for Linux.

    So be careful what you root for. ;-)

    --LP, who imagines the truth is somewhere in between
  • by lnxpilot ( 453564 )
    Well, it's getting real close.
    This 3D package has it:
    http://www.equinox3d.com

    The first two scenes on the following page run at around 4-30 frames per second, fully ray-traced (multiple reflections, refractions, etc.) at low resolutions (~160x120) on an Athlon 2000XP.

    http://www.equinox3d.com/News.html

    The renderer will be released in a couple of weeks.
    The program runs on Linux and it's free (shareware).

    The author.
