Graphics Software

Final Fantasy At 2.5FPS

Posted by timothy
from the that's-a-lot-of-pixels dept.
Rikardon writes: "Adding a little fuel to the ATi-vs-NVIDIA fire started earlier today on Slashdot, NVIDIA and Square are showing a demo at SIGGRAPH of Final Fantasy: The Spirits Within being rendered in 'real time' (four-tenths of a second per frame) on a Quadro-based workstation. Now that I think of it, this should also inject new life into this debate." Defender2000 points to the Yahoo article. Update: 08/14 09:30 PM by T: Original headline was wrong; it said ".4FPS", but as cxreg pointed out, .4 frames per second isn't .4 seconds per frame. Sorry.
  • Rendering a movie at 320 x 240 or 640 x 480 is much easier than rendering it at the resolution and size of a movie theater's screen. If the Quadro was rendering the movie at 100 x 75 pixels, all this doesn't mean much.
  • I was mildly surprised when I saw this, for a couple of reasons.

    1. They actually were able to do it, and this quickly.
    2. It was said that NVIDIA had no chance of ever being able to do something like this, as theirs weren't serious graphics cards.

    This comes from an article several months ago on Maximum PC [maximumpc.com] (sorry, I checked for the link but it's not to be found; if a staff member from there reads this and can find it, please post it) about a flame war between someone from NVIDIA and one of the guys from SGI (I might be wrong about SGI, and I forget the names; it's not that I want to leave anyone out, it's that I don't remember). The guy from SGI said NVIDIA was off its rocker for claiming its cards could ever come near the level of performance needed to do something like Toy Story, and that it would be many years away, even though the article is less than a year old.

    Well, I guess the guy from SGI is eating crow now: the NVIDIA cards are doing, in less than a year, what he said would take until Bill Gates became a Linux lover, and right now I don't think Bill has really embraced the penguin quite yet.
    It was just an interesting side note to this story.
    • by donglekey (124433) on Tuesday August 14, 2001 @03:53PM (#2140807) Homepage
      No, he was absolutely correct. Toy Story is not close to being rendered in real time yet, and this isn't the same thing. There are many details to the REYES architecture used in PRMan, which likewise rendered Toy Story. One is the subdivision of NURBS patches and subdivision surfaces down to the pixel level. Another is the surface, light, and volume shaders used. There are many, many things that people are missing when they say 'Movie X rendered in real time'. What they really mean is 'Movie X rendered in near real time, at a MUCH lower resolution, with a bajillion hacks to make it look as close as possible to the original.'
  • Typically, I believe a movie has a framerate of 24 FPS. Therefore, 1 sec / 24 frames = 0.04166 SPF. Right?
    • Re:0.04166 SPF (Score:2, Interesting)

      by xphase (56482)
      That's pure film. Anything using an optical printer(used for special effects, titles, CG stuff) or other method for combining film and computer effects needs to be at least twice as fast.
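(The fps vs. seconds-per-frame conversion that tripped up the original headline is just a reciprocal. A quick sanity check in Python, using only the numbers quoted in this thread:)

```python
# Frames-per-second and seconds-per-frame are reciprocals of each other.
def spf_to_fps(seconds_per_frame):
    return 1.0 / seconds_per_frame

def fps_to_spf(frames_per_second):
    return 1.0 / frames_per_second

print(spf_to_fps(0.4))           # 2.5 fps -- the demo's actual rate
print(round(fps_to_spf(24), 5))  # 0.04167 s/frame -- pure 24fps film
```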
  • by Hobart (32767)
    0.4 sec/frame rendering is more powerful than Excel has EVER had. This is the first time I've seen a convincing argument in favor of Quattro-based systems.

  • So what? (Score:3, Offtopic)

    by ChristianBaekkelund (99069) <<draco> <at> <mit.edu>> on Tuesday August 14, 2001 @03:27PM (#2114887) Homepage
    So what if it can render fast?... That still doesn't mean things like that can be MADE fast!... Consider the ungodly massive number of man-hours that went into:
    • Modelling
    • Matte painting
    • Painting textures
    • Lighting
    • Shading
    • Animating
    • Writing!
    • Making the sound effects
    • Making the music
    • Doing the voice work & lip-sync'ing
    • Writing custom graphics applications for the skin, hair, etc.
    • Using said applications in the aforementioned modelling/animating/texturing, etc.

    So, yippee, it can render fast... too bad that has NO BEARING on the actual quality of the production (with the possible exception that the team gets to iterate on the work a little more).

    • Re:So what? (Score:3, Informative)

      by donglekey (124433)
      You have obviously never worked in a production environment. Rendering isn't everything. Nothing is everything. No one will ever be saying that rendering is the only thing, because anyone with half a brain cell knows that it isn't, and anyone who has ever looked at CG knows that you are stating the obvious.

      Rendering fast is a big deal though. Actually, it's a fucking big deal. The faster something can be rendered, the faster people can work, because the interactivity is there. Many 3D programs are instituting semi-real-time fully rendered previews over limited spaces, like Softimage, 3DS, etc. Everyone realizes the extensive work that goes into a movie. Toy Story took around a month and a half to render; I don't think anyone thinks that a movie can be made in a month and a half, and it probably never will be. (A good movie, that is.) Fast rendering is what drives the animation industry by allowing more interactivity, more complexity, and an ever more powerful toolset.

      I can't make a movie sitting here on my computer. I don't have the computing power for it. All of those other things keep me from the mecca of the one-man movie as well, but I could do them in theory. What I cannot overcome is the power it takes to render, and that takes computers, which likewise take money. So 'yippee' is right, it is a big deal to render faster.

      Now does this particular demo mean anything? Yes and no. GeForce 3s and Radeon 8500s won't mean anything to final rendering time for a while; that would take a lot of programming that hasn't been done yet. But interactivity is a huge deal, and it makes all the difference in the world to an artist who doesn't want to be constrained.
      • I don't think anyone thinks that a movie can be made in a month and a half and it probable never will. (A good movie that is).

        Rocky (1976) was shot in 28 days. Granted, that doesn't include editing and other post production but it's possible (probable?) the final product was finished in around 1.5 months. And yes, I think it's a good movie. =P

    • The key point they were attempting to make here is that if you can do it real time you can make Quake look as good as Final Fantasy. Everyone knows you can't create a movie in the same time it takes to show it.
    • Re:So what? (Score:5, Funny)

      by typedef (139123) on Tuesday August 14, 2001 @03:33PM (#2143291)
      After seeing the movie, I don't believe that more than 30 minutes was spent on the writing process.
    • No, rendering isn't the whole of the movie-making process -- but you're wrong to say that rendering doesn't take a lot of time from the production. The prospect of turning an hour-long render into minutes means that final video can be produced several times faster -- which also means it can be proofed and edited faster, and that (eventually) directors can test several angles, shots, or compositions without worrying about the amount of rendering time wasted.

    • what does realtime rendering give us?

      1. "What now, master?"
      2. "Now turn around, bend down and touch your toes!"

  • by PanBanger (465405) on Tuesday August 14, 2001 @03:25PM (#2116823)
    will this improve the plot?
  • by soboroff (91667) on Tuesday August 14, 2001 @03:25PM (#2116825)
    ``It has long been an artist's dream to render CG animation in real-time,'' stated Kazuyuki Hashimoto, CTO at Square USA.
    We've been able to render CG animation in real time since Ivan Sutherland was a grad student. What makes it hard is a classic Parkinson's law: your needs expand to fill existing processor power. When the movie companies and animation houses have more horsepower, they will go to the next level and push the state of the art in CG back from what's capable of being done in real-time.

    The FF render times sound about the same as numbers I heard from Pixar about Toy Story. What was that post a couple weeks ago, about the machine you want always costing $5000? Well, the frame you want to render will always take 90 minutes.

    • Blinn's Law (Score:1, Informative)

      by Anonymous Coward
      It's called Blinn's Law (after Jim Blinn): the artist will increase the complexity of the scene to negate any speed improvement from upgraded hardware [wait time is constant].
    • Real-time rendering could be made more accessible by using lossy type algorithms.

      For audio, MP3 uses the quirks in our hearing systems to filter out useless or less important data; similarly, how often do we stare at what is in our peripheral vision?

      Of course, this would be good for the general population, and not CG artists and fans of their work.

      ~poloco

      • Yeah... TV shows are all dark now anyway because it's cheaper to shoot at night.

        I can just see the wave of "real-time rendering" promos.

        Sure... we can render this at 30fps! It's a polar bear in a snowstorm. Or there's our other demo... a story of one man's view of the world around him... oh, did we mention that man is blind, so the screen stays black the whole time?

        Or is that lossy-type algorithms applied to human intelligence :)

        I suppose one could do things like "this part of the scene will be blurred in post -- render it in low-res" kinds of optimizations, if they aren't done already.
    • I'd be curious to see if directors/artists will ever be "satisfied" with the quality of CG. If CG becomes indiscernible from reality to the untrained eye, then as processor speed increases and render complexity remains constant, render times would drop. Given the teasers I've seen of FF, that level can't be too many years off. Do you think fully "realistic" CG is attainable?
      • I'd submit that the ultimate limit of real-time rendering will be when the onscreen characters are able to pass a sort of Turing test - are they human or computer generated actors? When the audience can't decide (ie, when the vote is split), the point of diminishing returns will have been reached. Further effort beyond that time will be devoted to better physics and more realistic modeling of human behavior - doesn't matter if you have that perfect rendering of a human face if the eyes never smile when the mouth does.
  • Although people seem to be quoting massive resolutions for digital image-to-film transfer, in most of the theaters I've visited (even the good ones) you would not be able to take advantage of resolutions like that. I'm guessing that due to imperfections in the screen shape and curvature, the picture is focused for optimum coverage, which can be quite out of focus. Anyway, the digital cinema standard is only 1280x1024. Check out the Barco website [http] for more info.
  • ...downloading the Final Fantasy movie, or just the models used to create it?
    • It'd still be smaller to download the actual movie, as the original textures, audio samples, etc. would take much, much, much more space. Especially if the movie is DivX4-compressed compared to MPEG2.
  • by donglekey (124433) on Tuesday August 14, 2001 @04:19PM (#2120697) Homepage
    Can be found at http://www.nvidia.com/view.asp?IO=final_fantasy [nvidia.com]

    The article (on Yahoo) is pretty exaggerated and sensationalistic, but the images are still very impressive, even if they are about what you would expect at 2.5 FPS with such a powerful card. I think it is a pretty good indication of what the next generation of console games (after GameCube and Xbox) will look like.
  • I still read newsgroups :) and while there's not much about Siggraph 2001 on c.g.r.renderman at the moment, there's some stuff about the GS cube - if the RenderMan users bother about this FF demo at all, I suspect they won't be impressed. Vermifax's /. post (#44) Score 5 Interesting [slashdot.org] makes the point well, plus it has the Tom Duff quote.
  • by ThisIsNotATest (515208) on Tuesday August 14, 2001 @03:40PM (#2124677)
    While it is impressive to see the movie rendered in real time (with adjustable lighting sources and shadows and reflections), it really doesn't look as good as the movie did. I'm at SIGGRAPH now (just saw the demo five minutes ago) and the interactive polygon rendering techniques just can't match the radiosity/raytracing used for professional movies -- it's getting close though!
  • Lets see... (Score:4, Informative)

    by swordboy (472941) on Tuesday August 14, 2001 @03:23PM (#2125189) Journal
    60 frames per second divided by .4 (frames per second) = 150. If we oversimplify and apply Moore's law to the speed of 3D processors, we will halve this every 18 months.

    As I see it, we are about 7 - 8 years away from this kind of rendering in real time.

    Thoughts? Comments? Complaints?
    • Erm, this might be informative if it weren't for the fact that, like many many people, swordboy misunderstands and misinterprets Moore's Law.

      Moore's Law says absolutely NOTHING about performance (let alone "double speed in 18 months").

      All Moore's Law states is that chip COMPLEXITY (*NOT* performance) doubles every 18-24 months.
    • Re:Lets see... (Score:3, Informative)

      by jedwards (135260)
      You only need 24fps for the cinema. Knocks a year or so off your estimate.
    • Re:Lets see... (Score:3, Informative)

      by skroz (7870)
      Actually, at 2.5 frames a second, you'll only need about 5 years, give or take a few months.
    • Re:Lets see... (Score:2, Insightful)

      by benb (100570)
      > 60 frames per second divided by .4 (frames per
      > second) = 150

      Not .4 frames per second, but "four-tenths of a second per frame", i.e. the other way around, which is the 2.5fps you can see in the title.

      If you double that 3.5 times, you have 30 fps; at one doubling every ~9 months, that's ~3.5 × 9 months = ~2.5 years until we have it.
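(The competing estimates in this subthread differ only in starting speed, target frame rate, and assumed doubling period. A sketch of that arithmetic in Python; the doubling periods are the posters' assumptions, not anything Moore's Law actually guarantees:)

```python
import math

def years_until_realtime(current_fps, target_fps, doubling_period_months):
    """Naive estimate: doublings needed to reach target_fps, at one doubling per period."""
    doublings = math.log2(target_fps / current_fps)
    return doublings * doubling_period_months / 12.0

# 2.5 fps -> 24 fps cinema rate, doubling every 18 months: skroz's "about 5 years"
print(round(years_until_realtime(2.5, 24, 18), 1))  # 4.9

# 2.5 fps -> 30 fps, doubling every 9 months: benb's ~2.5-year figure
print(round(years_until_realtime(2.5, 30, 9), 1))   # 2.7
```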
  • by misaka (13816) <`moc.xobop' `ta' `akasim'> on Tuesday August 14, 2001 @03:45PM (#2127260) Homepage
    It sure sounds nice when they write that they can render something that took 90 mins per frame at .4 seconds per frame, but is this really a fair comparison? I don't doubt that NVIDIA is bringing some wicked technologies to the table, but let's also consider:
    1. Size of rendered frames. What resolution was NVIDIA rendering at, maybe 640x480? 1024x768? FF was probably rendered out at 1880x1024 (about 2-3 times the number of pixels of 1024x768), if not more.
    2. How did they have to massage the data before passing it to the rendering pipeline? I hear FF was rendered with RenderMan... are they claiming they can render RIB files through the Quadro chipset? If not, how much time does it take to convert/cook the data? If so, then... wow.
    3. How good did it look in the end? Were all the elements rendered properly, and does it really look anywhere near as good as the movie we saw in the theatre?
    Don't get me wrong, I'm excited to see this kind of technology coming; I can totally see it replacing, or at least complementing, our Linux render farm at some point in the future. But it sure would be nice if we had some useful technical details to qualify this 90 minutes versus .4 seconds render time comparison.

    --M
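(The pixel-count ratios in point 1 are easy to check; note that all of these resolutions are the poster's guesses, not figures confirmed by NVIDIA or Square:)

```python
def pixels(width, height):
    return width * height

# Ratio of the guessed film resolution to common demo resolutions
print(round(pixels(1880, 1024) / pixels(1024, 768), 2))  # 2.45 -- the "2-3 times" above
print(round(pixels(1880, 1024) / pixels(640, 480), 2))   # 6.27
```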

    • Ahhhh, the first rational comment I have seen yet on this topic. Also, where are their pictures? I find it odd that there are no shots of the render yet.
    • 1. The size of the rendered frames probably doesn't matter at all. A scene from the FF-movie is most likely bound by polygon-throughput and texture memory which has to be swapped in and out (a real performance killer that one). You'd probably get roughly the same performance in 640x480 as in 1600x1200 antialias or no antialias.

      2. Renderman shader code implemented using pixel shaders? Hah, surely not in current hardware, and I doubt we'll see it for a few years at least, and by then Renderman will have moved on.

      3. Of course, the lighting model is a lot more primitive in the real-time version, and the card can't do all the nifty post-processing done in the movie.

      All the macho marketing crap from Square and NVidia aside, this shows that graphics cards are able to give a pretty darn good preview of the finished frame in a very short time, which will be very valuable to animators when compositing and lighting scenes etc.
  • I'm the original submitter. Just to vindicate my otherwise-good name:

    My suggested headline was "FF:TSW rendered in real time on NVIDIA Quadro."

    I said nothing in the headline about .4fps. In fact, if you actually READ THE BODY of the submission, you'll notice that the rendering speed was SPELLED OUT (four-tenths of a second per frame) so people wouldn't get it wrong. The original Slashdot headline (which read .4fps) was the editor's modification and bad math, not mine.

    Just to clear that up. :)
  • Man, in spite of what the post says: 'real time' (four-tenths of a second per frame)
    . . . I kind of think .4 frames per second is a bit slow. I mean, my measly GeForce 2 does a nice 30 frames per second in Half Life. *grin*

    Just kidding, of course. :-)

  • Soon, we'll have the ability to render DVD quality video in real time. This opens up tons of possibilities - imagine a version of final fantasy with DVD-style seamless branching based on user interactivity!

    The user could interact with the movie and affect the animation in real time. Or, to put that in perspective, imagine fragging your office mates in a photo-realistic Quake VIII. :)

  • Do the math... (Score:2, Informative)

    by tweakt (325224)
    .4FPS is NOT the same as "four-tenths of a second per frame". Which is it?
  • by Animats (122034) on Wednesday August 15, 2001 @12:38AM (#2133534) Homepage
    If the "Quadro" can do it, so can a GeForce 3.

    As I pointed out previously [slashdot.org], NVidia's "Quadro" and "GeForce" lines are actually the same hardware. GeForce 2 boards can be "converted" to Quadro 2 boards with a jumper. [geocities.com]

    The GeForce 3 and "Quadro DCC" boards both use the NVidia NV20 chip, have the same driver, and appear to be very similar if not identical. It's hard to find differences in the feature set. Only ELSA (which is basically a unit of NVidia) sells the Quadro DCC, and apparently only through 3DS Max dealers, along with a special 3DS MAX driver. It's more of a private label than a real product line at this point.

  • From the: Who-needs-a-woman-when-you-have-a-video-card dept.

    Ok, quick Geek Test: If, upon reading this news post (despite the ditzy title), you did not instantly gasp, shiver, or become aroused, you are NOT a geek. Period.

    Which sort of answers my question to my friend after we watched FF for the first time. Is this the top of our abilities in CG? Or was it a matter of the producers saying, "Um...no. We can do a LOT better, but we'd have to wait 100 years for it to build/animate/render instead of 2, so we cut it down to size."

    If that is the case, then it's just a matter of BBF (Bigger, Better, Faster (tm)) in terms of hardware before we see something twice as good as FF. Otherwise, if this is the height of skill we have, then we're talking development of new technologies and methods of doing this sort of detail before we see something else come out.

    I'm no graphics expert, so maybe someone can answer that question for me. At any rate, the movie still made me shiver. Now I can watch it on my desktop...at 2.5fps, not .4 (dipstick)
  • Were they just rendering it on a 21-inch screen, or rendering it at what must be the fantastic resolution needed to get it to look right on a giant movie screen?
  • Hmm... (Score:2, Interesting)

    by SilentChris (452960)
    Interesting.

    - Square has tie-ins to Sony (exclusivity clause of Final Fantasies on the PS1, rights to publish the movie).
    - Microsoft has tie-ins to nVidia (nVidia makes some of the chips for the XBox).
    - Square now has tie-ins to nVidia with this demonstration.

    Does this mean that more Square games will get ported to nVidia chipsets, most notably Final Fantasy for the Xbox? If I had a choice between the respective hardware (other than my PC, which would come first), I'd love to see what Square could do with an nVidia chipset.

  • What res? Film is usually somewhere well above anything a GeForce can touch. 640x480 != 2048x1152 (or higher for Super 70mm).

    Also, 2.5 FPS isn't "real time". 24 fps film is "real time". 30 fps on video is "real time".

    HOWEVER, this would be incredibly useful for generating dailies, spot render checks, web-based trailers and streaming video, television-quality animation, etc.

    Now you can PROVE to a director that a plot sucks, even in final form, and no, all the whiz-bang graphics don't help!

  • Apples to Oranges? (Score:5, Insightful)

    by All Dat (180680) on Tuesday August 14, 2001 @03:37PM (#2143859) Homepage
    Notice how the official "press release" doesn't state the resolution it was rendered at? What's the movie resolution? Several thousand by several thousand, I imagine. Does doing it at 640x480 or LOWER mean the same thing? I have a hard time believing that a Quadro setup can render in .4 of a second something that their SGI setup takes 90 minutes to do. If NVIDIA WAS INDEED 100,000 times faster at this using a Quadro setup, wouldn't we have heard of this before? Something's missing from this, methinks.
    • by donglekey (124433)
      Yep, very true. Also, I think a lot of people aren't considering that even though frames might have taken 90 min. on an SGI, the entire frame is not rendered all at once. I don't know for sure if the 90 min. refers to the entire frame, but I doubt it. There are layers upon layers for backgrounds, main characters, the ghost alien phantom things, shadow passes, reflection passes, caustic passes (in rare cases), and on and on.
    • by zhensel (228891)
      It doesn't actually take 90 minutes per frame to render; that would be as if the rendering were done on a single CPU. Square used a massive render farm, so each frame took a variable amount of time based on the complexity of the rendered image and the fraction of the farm dedicated to that particular render operation. That's why you see things like "Final Fantasy took 1 million years to render" or whatever, when you know it isn't exactly true. Look at Ars Technica, where they did an interview with some people from Square about the rendering process. I think there was even a Slashdot article about it.

      And yes, it's a little ridiculous for NVIDIA to suggest that their card is 100k times faster than Square's rendering hardware for FF:TSW. But what's more ridiculous is that Yahoo took that statement and printed it in its article with no explanation of exactly what NVIDIA means when they say that.
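(The speedup claim is easy to sanity-check against the frame times quoted in the thread:)

```python
movie_frame_time = 90 * 60  # 90 minutes per frame, in seconds, as quoted for the film

print(round(movie_frame_time / 0.4))  # 13500 -- speedup at the story's 0.4 s/frame
print(round(movie_frame_time / 0.1))  # 54000 -- speedup at the press release's "<0.1 s"
```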
    • Lucasfilm's Sony camera, with which they filmed Episode II, and which was considered to completely supersede analog film, picks up 1920x1080 resolution. You don't really need that much resolution to look fantastically better than what passes for film these days.
      • Generally considered to supersede analog film? Please. 2K resolution is really just marginally passable. High-quality work is generally done at 4K, with some even at 8K, which is what it really takes to match 35mm at its best.
      • by lordpixel (22352)
        Hmm, I seem to remember reading a very convincing analysis which said about 3000x2000 was what was required to match standard 35mm film *well projected* in an average-size theatre.

        I don't have a link, but it was all to do with the physical optics of the eye and the point at which the eye can't tell the difference between one dot and two dots when projected onto the opposite wall. Sooner or later you just don't have enough retinal cells to be able to see any more detail.

        My fear is that by pushing this through a couple of years too early, at this slightly lower resolution, we'll see a net loss of quality. If the switch to digital were to happen in 5 years' time, then theatres' projectors and studios' cameras would be more likely to be 3000x2000 equipment.

        If the public accepts the lower resolution, why spend the money on upgrading?

        That said I saw Akira digitally projected this year on a huge screen (of course, it was originally film, not digital tape) and it was beautiful.

        Of course, given that most movies most of us see are projected using dirty equipment by an untrained 16-year-old at a multiplex, it probably doesn't make any difference. The current resolution is probably good enough. A bit like DVD and HDTV.
        • The 1920x1080 figure quoted is the upper end of the HDTV standards. There is a very good reason the new digital cameras used by Lucas et al. are capturing at that size: if you capture at 1920x1080 digital, you cut an entire step out of the process of producing for the much more lucrative home markets.

          I know that no one is broadcasting or releasing at 1080i resolution yet, but it's only a matter of time. DVD has allowances for this, as do some of the new tubes coming out of Sony. Even my 19-inch monitor sitting here on my desk does 1920x1080.

          Scott

          PS. DivX encoded 1920x1080 Lightwave rendered animations look sweeeeeeeet...
    • by MarkoNo5 (139955) <MarkovanDooren@gm[ ].com ['ail' in gap]> on Tuesday August 14, 2001 @04:07PM (#2144710)
      From http://movieweb.com/movie/toystory/toystory.txt [movieweb.com]

      "RESOLUTION:The resolution of a digital image refers to the number of pixels stored. For "Toy Story," the resolution is typically 1536 x 922 pixels."

      Marko No. 5
    • I have a hard time believing that a Quadro Setup can render something in .4 of a sec that their SGI setup takes 90mins to do.

      You have a point, but it's not as crazy as you think. Although SGIs have 3D acceleration (as our gaming machines do), this is used for the actual modelling. The render farms don't use the video card to render (they can't, for multiple reasons). Consider this: try playing Quake 3 on a 1GHz Athlon in software mode (if you can). It looks like crap and runs sloooow. Put a GeForce 3 in it, and you can get a 100x speed boost (an ambiguous number to make a point). This is because the rendering gets done in hardware.

      The only problem with this is that when you move to hardware acceleration, you can't use the super-complex rendering engine that they used to render the movie. Therefore, the visual quality can't possibly be as good.

      One final example: go buy a mega PRO 3D card that accelerates the modeling in 3ds max. Ask your vendor if the card will also speed up the rendering. They will tell you, as I have been told: the rendering can't use hardware acceleration! Therefore, the machine with the GF3 has a HUGE advantage, albeit at a visual quality loss.
    • by Dynedain (141758)
      I was at SIGGRAPH; they are using a 34" (widescreen-proportioned) plasma (HDTV?) display, with absolutely no visible pixelation. As for the speed difference, remember that the SGI cluster Square used was handling composite rendering, and as such the various "layers" (specular, shading, etc.) can easily be split up to significantly increase the speed. The NVidia solution doesn't break it up the same way, and simplifies a lot of it (the hair, for instance!). Lighting and radiosity are significantly downgraded from the original, her hair is definitely not at the same level as in the movie, and the bump-mapped cloth textures seem much more pronounced... but... this is far beyond any previous display of on-the-fly 3D rendering compared to final movie-quality product.
  • and believe me, there's no way nvidia's chip came anywhere remotely close to that of the movie.

    Until their chip can produce a single frame that matches the image quality, they're still just making toys for quake fiends. Diffraction, interference, antialiasing...just a few of the photorealistic rendering staples, and nvidia has only recently been able to do antialiasing. They've got a long, long way to go before we're going to see actual movies rendered using their hardware.
  • Jesus! We do math good at slashdot.

    4/10 of a second per frame means you can do just over 2 frames per second.

    God damn. People go to college and come out knowing this much about math?
  • So it renders to a computer screen at 2.5 FPS. That's nice and all, but a long way from film-making.

    Consider the resolution. Images rendered for film are typically done at about 3000 x 2000 (give or take depending on aspect ratio, etc). Now, even assuming we could gang up 16 or 25 or whatever of these nvidia boards, we're left with another problem: you can't record VGA signals on film. All the hardware shortcuts and special-purpose circuitry in the latest video card are useless when it comes to final render for film, because they're not built into the gadget (and there are several different sorts) that's actually bombarding the emulsion with photons. (Typically some sort of three-pass (R,G,B) laser scanner).

    Yes, it'll make for wonderful computer games (if you like that sort of thing) and maybe even some interesting experiments in real-time porno animation, but it doesn't do much for the film industry, nor would it at 10 times the speed (24 FPS is the typical movie framerate). It'd have to be about 250 times faster for full-framerate, full-resolution images. About 12 years at Moore's Law rates. (Although I suspect at that resolution the flaws in the rendering and physics would become very distracting.)

    (Actually, it helps the film production process, where animators can preview their work that much quicker. Faster graphics is always good, just let's not get carried away with the hype.)
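(The parent's "about 250 times faster" folds together frame rate and resolution; here is that arithmetic sketched under an assumed 640x480 demo resolution, which NVIDIA never actually stated:)

```python
# Required speedup = (target fps / demo fps) * (film pixels / demo pixels)
fps_factor = 24 / 2.5                        # 9.6x to reach cinema frame rate
pixel_factor = (3000 * 2000) / (640 * 480)   # ~19.5x more pixels at film resolution

print(round(fps_factor * pixel_factor, 1))   # 187.5 -- the same ballpark as "about 250"
```

At one doubling every 18 months, the roughly 8 doublings needed for a ~250x speedup works out to about 12 years, which matches the parent's estimate.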
    • That's nice and all, but a long way from film-making.

      Not as far as you might think. Actually, it's about 100 feet down the hall from where I'm sitting right now.

      Consider the resolution. Images rendered for film are typically done at about 3000 x 2000

      The final product is about half that. Resolutions up to 4k x 3k are used for intermediate special-effects editing. Digital cinema will be operating at about HDTV resolutions.

      we're left with another problem: you can't record VGA signals on film.

      With digital cinema, it's (roughly speaking) VGA in - so you could sit in the theater watching what's being generated at that moment. For film, it's not that big a deal to replace the VGA out with a connector to a real-time film recorder (remember the hall I mentioned above? go to the other side of the hall).

    • Consider the resolution. Images rendered for film are typically done at about 3000 x 2000 (give or take depending on aspect ratio, etc). Now, even assuming we could gang up 16 or 25 or whatever of these nvidia boards, we're left with another problem: you can't record VGA signals on film. All the hardware shortcuts and special-purpose circuitry in the latest video card are useless when it comes to final render for film, because they're not built into the gadget (and there are several different sorts) that's actually bombarding the emulsion with photons. (Typically some sort of three-pass (R,G,B) laser scanner).

      Eh? You could easily read the contents of the video framebuffer out after it is finished rendering. Then you could save it to disk, spit it out to a special purpose film framebuffer or whatever. Yes, this is a relatively slow operation compared to writing to the videocard framebuffer, but if you're only rendering 2.5 frames per second anyway it would be a negligable hit.

      There are tons of issues they are glossing over (the resolution issue you mentioned, the fact that current video cards don't have enough color precision for complex multipass effects, and many others), but this 'VGA' issue isn't one of them.

      At any rate, nobody who knows what they are talking about is saying that this process will replace traditional raytracing for film... but it's a fairly good indicator of how quickly video card performance and quality are progressing.

      Also, it has uses other than a final render. If you can get a 'pretty good idea' of what a particular scene will look like at near-realtime rates, it would speed up some processes (like light placement for a scene) tremendously.

  • by GC (19160)
    by... who... where is your math....

    Has the whole world gone mad?

    a new class of people emerged... the innumerates...
  • .4 FPS? (Score:2, Redundant)

    by JesseL (107722)
    .4 FPS != 4/10 second per frame.
    • I read this as 2.5 frames per second, or about 1/10 the frame rate you see when you go to the theatre. I wonder if this is an average for the whole film; surely it must vary somewhat from frame to frame. I'm curious... so they render it in real time, but what about the physics and such? Precalculated, or did they do all that on the fly as well? No matter how you slice it, this is some amazing stuff.
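      For anyone still arguing over the headline math, the conversion is just a reciprocal; a trivial check:

```python
# Seconds-per-frame and frames-per-second are reciprocals of each other,
# which is the arithmetic the original headline got backwards.
seconds_per_frame = 0.4
frames_per_second = 1 / seconds_per_frame    # 2.5 fps, not 0.4 fps

film_fps = 24                                # theatrical projection rate
slowdown = film_fps / frames_per_second      # roughly 10x slower than the theatre
print(frames_per_second, slowdown)
```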
    • Because the Nvidia press release [nvidia.com] says differently:
      The average time it took to render a single frame in the Final Fantasy Technology Demo was less than one-tenth of a second, compared to the 90 minutes it took in the movie,
      Final Fantasy The Spirits Within!
      Spin doctoring?

  • I may be very, very wrong, however it was my impression that the "rendered" graphics of modern video cards are shortcut 3D images that are very, very unlike raytraced images: i.e. Quake3 looks nice, but it looks absolutely nothing like the stunning beauty of a Truespace or 3dsmax image (i.e. one is averaging surface point lighting, whereas the other is actually tracing the rays of light, throwing shadows, umbras, etc). I thought the Quadro cards were only really relevant for modeling (i.e. moving stuff around and such), but they still used an FPU for the real rendering.

  • Was the image exactly the same as the movie, including:
    • Motion blur? Motion blur is generally done on 3D cards by rendering the scene several times over per "frame"; if they could pull this off I'd be very impressed but right now it just makes me wonder more.
    • Volumetric effects. These are hard to do with just polygons, even with programmable shaders. And as others have said, the Quadro is nothing compared to RenderMan's software-based shading system used for the movie.
    • Animation identical to the movie. I assume some nontrivial processing of the motion was used to model the hair, cloth, whatever in the movie. That would be a processing load, if nothing else.
    • Full resolution textures. I believe the movie used something around 500MB of data per image (this figure may be from Toy Story 2, I don't remember exactly, but if so FF is probably higher). Moving that much data over the AGP bus would take a large chunk of that .4 of a second by itself.
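    Rough arithmetic on that last point (the 500MB figure is the parent's recollection, and the AGP 4x rate used here is a theoretical peak, so treat this as a sketch):

```python
# If every frame really needed its full texture set pushed across the bus,
# the transfer alone would eat the whole frame budget. Figures are
# assumptions from the post above, not measurements.
texture_mb = 500                 # texture data per image (as recalled above)
agp_4x_mb_per_s = 1066           # AGP 4x theoretical peak, ~1 GB/s
transfer_seconds = texture_mb / agp_4x_mb_per_s

frame_budget = 0.4               # seconds per frame in the demo
print(f"~{transfer_seconds:.2f} s just to move textures, "
      f"vs a {frame_budget} s frame budget")
```

    So if the demo really hit 2.5 fps, the textures were almost certainly already resident in video memory, heavily compressed, or cut down from the movie's data set.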
  • by Anonymous Coward
    I just saw the demo. It looks NOWHERE near as good as the real movie; geometry and lighting/textures are all greatly simplified. The action is definitely watchable, which implies > 1 fps. (Looks like around 5-10 fps to me). Aki's hair is made up of WAY fewer strands (but thick, so she's not balding) and the skin textures aren't as detailed. Still, very impressive. If what I saw was a video game, it'd be GREAT!!!
  • If you have a Geforce3, go find the Zoltar demo. It's on the web if you look hard enough. Something like 220 megabytes worth of crap, and all it does is model and animate a human head. But HOLY SHIT does it look incredible! Also, find the Chameleon demo. Again, Google is your friend.

Optimization hinders evolution.
