
Ray Tracing for Gaming Explored

Vigile brings us a follow-up to a discussion we had recently about efforts to make ray tracing a reality for video games. Daniel Pohl, a research scientist at Intel, takes us through the nuts and bolts of how ray tracing works, and he talks about how games such as Portal can benefit from this technology. Pohl also touches on the difficulty in mixing ray tracing with current methods of rendering. Quoting: "How will ray tracing for games hit the market? Many people expect it to be a smooth transition - raster only to raster plus ray tracing combined, transitioning to completely ray traced eventually. They think that in the early stages, most of the image would be still rasterized and ray tracing would be used sparingly, only in some small areas such as on a reflecting sphere. It is a nice thought and reflects what has happened so far in the development of graphics cards. The only problem is: Technically it makes no sense."
  • by MessyBlob ( 1191033 ) on Friday January 18, 2008 @08:08AM (#22091858)
    Adaptive rendering would seem to be the way forward. Ray tracing has the advantage that you can bail out when it gets complicated, or render areas to the desired resolution. This means a developer can prioritise certain regions of the scene and ignore others: useful during scenes of fast motion, or to bring detail to stillness. The result is similar to a decoded video stream, with detail in the areas that are usefully perceived as detailed. Combining this with eye position sensing (for a single user) would improve the experience.
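    To make the idea concrete, here is a minimal sketch (in Python; every name and the weighting scheme are invented for illustration, not from the comment or TFA) of how a fixed per-frame ray budget could be split across screen regions by priority:

        # Sketch: priority-driven adaptive ray budgets per screen region.
        def allocate_ray_budget(regions, total_rays):
            """Split a fixed ray budget across regions by priority weight."""
            total_weight = sum(r["priority"] for r in regions)
            for r in regions:
                r["budget"] = int(total_rays * r["priority"] / total_weight)
            return regions

        regions = [
            {"name": "player focus", "priority": 8.0},  # e.g. near the crosshair
            {"name": "fast motion",  "priority": 1.0},  # motion hides detail
            {"name": "periphery",    "priority": 2.0},
        ]
        for r in allocate_ray_budget(regions, total_rays=1_000_000):
            print(f'{r["name"]}: {r["budget"]} rays')

    Eye-position sensing would simply feed the priority weights.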
  • by bobdotorg ( 598873 ) on Friday January 18, 2008 @08:08AM (#22091860)
    That completely depends on your point of view.
    • Now hear this (Score:5, Insightful)

      by suso ( 153703 ) * on Friday January 18, 2008 @08:32AM (#22092030) Journal
      I get tired of hearing this talk about real-time ray tracing. They might be able to get basic ray tracing at 15 frames per second or more, but it won't matter: the quality won't be as good as the high-quality images you see that take hours, sometimes days, to render.

      See, the two are incompatible because the purpose is different. With games, the idea is "How realistic can we make something look at a generated rate of 30 frames per second". But with photorealistic rendering the idea is "How realistic can we make something look, regardless of the time it takes to render one frame."

      And as time goes on and processors become faster and faster, the bar for what people want keeps rising. Things like radiosity, fluid simulations and more become more expected and less possible to do in real time. So don't ever count on games looking like those still images that take hours to make. Maybe they could make it look like the pictures from 15-20 years ago. But who cares about that? Real-time game texturing already looks better than that.
        • Re: (Score:3, Interesting)

          by mdwh2 ( 535323 )
          It says that a quad core processor gets 16.9 frames at 256x256 resolution.

          Wow.

          (Like most ray tracing advocates, he points out that ray tracing is "perfect for parallelization", but this ignores that so is standard 3D rendering - graphics cards have been taking advantage of this parallelisation for years.)
          • Re: (Score:3, Informative)

            by Facetious ( 710885 )

            It says that a quad core processor gets 16.9 frames at 256x256 resolution.
            Keep reading there, genius. If you had read beyond page one you would see that they are getting 90 fps at 768x768 on a quad core OR 90 fps at 1280x720 on 8 cores.
          • Re: (Score:3, Informative)

            by Enahs ( 1606 )
            As someone pointed out in another comment, they're getting much higher framerates than that. Plus, their ultimate goal seems to be an OpenRT standard, roughly analogous to OpenGL, with the goal of interfacing with raytracing gfx cards.

            I for one welcome our new raytracing overlords...
        • Re: (Score:3, Informative)

          Well, if you look more closely, you'll notice that the articles have the same author!
      • Re:Now hear this (Score:5, Insightful)

        by IceCreamGuy ( 904648 ) on Friday January 18, 2008 @09:46AM (#22092888) Homepage
        from TFA:

        At HD resolution we were able to achieve a frame rate of about 90 frames per second on a Dual-X5365 machine, utilizing all 8 cores of that system for rendering.
        The quote is referring to Quake 4. So they already can raytrace a semi-modern game at 90 FPS, and they have a graph that very clearly shows raytracing at a performance advantage as complexity increases. Just look at the damn graph (page three): the point where raster performance and raytracing performance intersect can't be more than a couple of years off, and it's apparent that we may even have crossed that point already. Continue becoming tired of hearing about raytracing; the rest of us will sit patiently as the technology comes of age. Personally, I'm tired of hearing about this HD stuff. I mean, it's not like HD TVs will ever be mainstream, with their huge pricetags and short lifespans. Oh wait...
        • Re:Now hear this (Score:4, Insightful)

          by suso ( 153703 ) * on Friday January 18, 2008 @10:05AM (#22093166) Journal
          The quote is referring to Quake 4. So they already can raytrace a semi-modern game at 90 FPS, and they have a graph that very clearly shows raytracing at a performance advantage as complexity increases. Just look at the damn graph (page three),

          I don't have to look at the damn graph to tell you that what people are going to want is this [blenderartists.org]

          And what they are going to get is this [pcper.com]

          And they should just be happy with this [computergames.ro] (which is pretty awesome)

          My point is that real time photorealistic rendering will never catch up with what people expect from their games. It will always be behind. If all you want is mirrors, then find a faster way to implement them at the expense of a bit of quality.

          • Where are my mod points when I need them?

            If you're going to do it, you might as well do it properly.
            It can't be done, so don't worry about it.
          • Re:Now hear this (Score:5, Insightful)

            by The_Wilschon ( 782534 ) on Friday January 18, 2008 @10:59AM (#22094064) Homepage
            I think you still should look at (and understand) the damn graph. The point of the article was that if you want a given complexity of scene (which translates into quality of image), you only have to get a little bit more complex than current game scenes before current ray tracing techniques become faster than current raster techniques. Thus, ray tracing at 30 fps will look better than raster at 30 fps in the near future, perhaps already. Ray tracing is the quickest known route to better graphics quality at the same frame rate in games.
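            As a back-of-the-envelope illustration of that crossover (a hedged sketch: the constants below are invented, and only the shapes of the curves come from the article):

                import math

                # Hypothetical per-frame cost models: raster grows ~linearly in
                # triangle count, ray tracing ~logarithmically (given a good
                # acceleration structure). The constants a and b are made up.
                def raster_cost(n, a=1.0):
                    return a * n

                def raytrace_cost(n, b=2_000_000.0):
                    return b * math.log2(n)

                n = 2
                while raster_cost(n) < raytrace_cost(n):
                    n *= 2
                print(f"With these made-up constants, ray tracing wins past ~{n:,} triangles")

            Whatever the real constants are, a line and a logarithm must cross somewhere; the argument is only about where.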

            Yes, what can be produced will still be behind what people want or expect. But ray tracing will be less far behind than rasterisation in the near future.

            All of this is according to TFA; I don't know much about this from a technical standpoint.
            • Re:Now hear this (Score:4, Insightful)

              by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Friday January 18, 2008 @05:53PM (#22102108) Homepage
              The problem I have with ray-tracing is that I find it hard to see its advantages. Sure, it can put out more polygons than raster, and it can do reflective spheres and sharp shadows. But those things have very little to do with visual quality. Reflections are pretty much a non-issue; nobody really cares as long as there is something that looks somewhat close to a reflection, and environment maps can get that done. Sharp shadows are actually extremely ugly, and people are moving on to soft shadows now. Higher polygon counts are nice to have, but again not really all that important: slap on a few normal maps and you can have your 5'000'000-polygon model reduced to a 5'000-polygon one without noticeable detail loss.

              The majority of quality improvement these days seems to come from post-processing effects, clever textures and programmable shader use. If you want to get fur on an animal via polygons you will have to spend a load of rendering time, but if you fake it with textures you can get pretty good results on today's hardware. Same with shadows and a lot of other stuff. Doing it 'right' takes a load of computing power; faking it works in realtime.

              I simply haven't seen much raytracing that could actually compete with current-day 3D hardware; the renders that do look better than today's 3D hardware are done in offline renderers and take hours for a single frame.
              • realism (Score:3, Interesting)

                by j1m+5n0w ( 749199 )

                Higher polygon counts, sure nice to have, but again not really all that important...

                This is a rather good point; at some point, adding more polygons doesn't do anything to make an unrealistic scene look more realistic. This is true for raytracing and polygon rendering alike. Ray tracing has some advantages here, for what it's worth; it scales better with huge scenes, and it can represent non-triangular primitives natively (though all the fast ray-tracers I've seen only deal with triangles). I wouldn't

          • Re:Now hear this (Score:5, Informative)

            by tolan-b ( 230077 ) on Friday January 18, 2008 @11:02AM (#22094132)
            I think you're missing the point. The reason Quake 4 looks crap raytraced is that it wasn't written to be raytraced: no shaders are being applied (because they were all written for a raster engine), so of course it looks bad. This stuff is just research.

            One of the biggest hurdles in game graphics is geometry detail. Normal mapping is just a hack to make things appear more detailed, and it breaks down in some situations. Raytracing will allow *much* higher geometry detail than rasterisation. Better reflections, refractions, caustics and so on are just gravy.
          • Re:Now hear this (Score:5, Interesting)

            by Cornelius the Great ( 555189 ) on Friday January 18, 2008 @11:21AM (#22094556)
            You completely missed the parent's point. Traditional rasterization chugs when a scene gets complex enough (I think the complexity is O(n)). Ray tracing scales very nicely (O(log n)), and you can throw in stuff like TRUE reflection/refraction with minimal decreases in performance, with millions more polygons. Yes, rasterization is faster in current games, but throw hundreds of millions of polygons into a scene and see what happens.

            Furthermore, rasterization requires tricks (many would call them "hacks") to make the scene approach realism. In games today, shadows are textures (or stencil volumes) created by rendering more passes. While they look "good enough", they still have artifacts and limitations falling short of realistic. Shadows in raytracing come naturally. So do reflections, and refractions. Add some global illumination and the scene looks "real".

            Rasterization requires hacks like occlusion culling, depth culling, sorting, portals, levels of detail, etc. to make 3D engines run in realtime; some of those algorithms are insanely hard to implement well, and even then you're doing unnecessary work and wasting RAM rendering things you never see. Raytracing only renders what's on the screen.

            That being said, I don't think raytracing will completely replace rasterization, at least not right away. Eventually, some games may incorporate a hybrid approach like most commercial renderers do today (scanline rendering for geometry, add raytracing for reflections and shadows). Eventually, 3D hardware will better support raytracing, and maybe in another decade we'll begin to see fast 3D engines that use ray tracing exclusively.
          • Re: (Score:3, Insightful)

            Again, your short-sightedness really amazes me. Have you not noticed how fast cores are being added? Look up Intel's Larrabee, for example. It's really a little silly to use the word "never" in this context, unless you're being philosophical. Take, for example, your first link. I'd bet you $50 that within 7 years we will be able to render that in realtime, at 1920x1200 resolution at 60 frames per second. 7 years is a lot sooner than "never".
        • Re:Now hear this (Score:4, Interesting)

          by MrNemesis ( 587188 ) on Friday January 18, 2008 @10:45AM (#22093818) Homepage Journal
          Even more interestingly, they managed to do Quake 4 using CPUs only. Since modern graphics cards are no longer just a bunch of vector processors but rather a colossal stack of scalar processing units, they should be adaptable to different types of processing much more flexibly - at the moment their internal software is generally specialised for polygon pushing, but I don't see any reason why nVidia or whoever couldn't start developing an OpenRT stack to sit alongside their OpenGL and DirectX stacks, other than there not being much interest in consumer-level raytracing just yet (is there raytracing work being done for GPGPU projects?).

          Are there any reasons why current GPU designs can't be adapted for hardware assisted raytracing?
          • Re: (Score:2, Informative)

            by Anonymous Coward
            (is there raytracing work being done for GPGPU projects?)

            In a word, yes. Check Beyond3D.com forum's GPGPU subforum (lots of raytracer stuff discussed and introduced) -- and also the CellPerformance subforum about raytracers on PS3's CPU (those SPEs rock for that) running on Linux of course.
            • Re: (Score:3, Interesting)

              by MrNemesis ( 587188 )
              Thanks for that, good to finally see something that seems ideally suited to the Cell.

              As an aside, isn't the work to combine your current bog-standard processors with inbuilt "graphics processors" (a la AMD Fusion and Intel Larrabee) just going to turn every consumer CPU into a Cell-ish architecture within five years or so - a number-crunching core or two plus an array of "dumb" scalar processors?
          • Re: (Score:3, Interesting)

            by IdeaMan ( 216340 )
            Actually, if the Open Source world gets a clean, easy-to-use OpenRT stack and standard going before MS, it would have one shot at making the next Killer App. Once they get that one truly awesome game out using RT, plus an easy way to switch to Linux, the rest of the gaming world could fall right into their laps.
            • Re: (Score:3, Insightful)

              by MrNemesis ( 587188 )
              Sod the game - if there was an open source, cross-platform, hardware- and architecture-agnostic accelerated RT API available (I'm assuming that vendor-specific OpenRT acceleration a la nVidia's OGL would still be closed source) that made RT viable for the mainstream (either for gaming or CAD/animation work), there'd be practically no difference between MS and everything else either for gaming (including consoles) or 3D workstation work, and D3D would no longer be the de facto standard for accelerated 3D - that'
        • Re:Now hear this (Score:5, Informative)

          by roystgnr ( 4015 ) * <roy AT stogners DOT org> on Friday January 18, 2008 @11:41AM (#22094926) Homepage
          they have a graph that very clearly shows raytracing at a performance advantage as complexity increases.

          No, they have a graph that very clearly shows that raytracing while using a binary tree to cull non-visible surfaces has a performance advantage over rasterizing while using nothing to cull non-visible surfaces. Perhaps someday a raster engine will regain that advantage by using these "BSP Trees" [gamedev.net] as well.
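          For readers unfamiliar with such structures, here is a minimal sketch (invented for illustration, not from the article or the linked page) of the kind of tree-based culling involved: a binary AABB tree in which a single box test can discard an entire subtree of geometry.

              # Ray/AABB slab test plus recursive tree traversal (Python sketch).
              def ray_hits_box(origin, inv_dir, box_min, box_max):
                  tmin, tmax = 0.0, float("inf")
                  for o, i, lo, hi in zip(origin, inv_dir, box_min, box_max):
                      t1, t2 = (lo - o) * i, (hi - o) * i
                      tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
                  return tmin <= tmax

              def traverse(node, origin, inv_dir, hits):
                  if not ray_hits_box(origin, inv_dir, *node["bounds"]):
                      return  # whole subtree culled with one box test
                  if "leaf" in node:
                      hits.append(node["leaf"])
                  else:
                      for child in node["children"]:
                          traverse(child, origin, inv_dir, hits)

              scene = {"bounds": ((0, 0, 0), (10, 10, 10)), "children": [
                  {"bounds": ((0, 0, 0), (5, 10, 10)), "leaf": "left half"},
                  {"bounds": ((5, 0, 0), (10, 10, 10)), "leaf": "right half"},
              ]}
              hits = []
              traverse(scene, (2, 5, -1), (1e9, 1e9, 1.0), hits)  # +z ray at x=2
              print(hits)  # ['left half'] - the right half was never touched

          The parent's point is that nothing stops a raster engine from using the same kind of tree to skip hidden geometry.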
      • Re: (Score:3, Interesting)

        And people used to say the same thing about real-time 3D gaming in the late 80s. Along with: water and fire will never look realistic, you will never be able to render more than 10 frames per second, and nobody will ever buy an expensive video card with tons of memory and an FPU built in.
  • by Lurks ( 526137 ) on Friday January 18, 2008 @08:13AM (#22091888) Homepage

    I guess one has to state the obvious: by moving to a process which is not implemented in silicon, as it is with current graphics cards, the work must necessarily be done in software. That means it runs on CPUs, and that's something Intel is involved in, whereas when you look at the computational share of bringing a game to your senses right now, NVIDIA and ATI/AMD are far more likely to be providing the horsepower than Intel.

    But really, even if this wasn't a vested-interest case (and it may not be; no harm exploring it, after all) - the fact remains that we don't actually need this for games. Graphics hardware has gone down an entirely different route whereby you write little shader programs which create surface visual effects on top of the bread-and-butter polygons and textures. This is a well-established system by now and has a naturally compressive effect. It's like making all your visual effects procedural in nature rather than giving objects simple real-world textures and then doing a load of crazy maths to simulate reality. It works very well. Remember, a lot of the time you want things to look fantastical and not ultra-realistic, so lighting is only part of the challenge.

    Games aren't having a problem looking great. They're having a problem looking great and doing it fast enough, and game developers are having a problem creating the content to fill these luscious realistic-looking worlds. That's actually what's more useful, really: ways to help game developers create content in parallel, rather than throwing out the rendering strategy currently adopted worldwide by the games industry.

    • Re: (Score:3, Interesting)

      by BlueMonk ( 101716 )
      I think the problem with the current system, however, is that you have to be a professional 3D game developer with years of study and experience to understand how it all works, whereas if you could define scenes in the same terms that ray tracers accept scene definitions, the complexity might be taken down a notch, making quality 3D game development a little more accessible and easier to deal with, even if it doesn't provide technical advantages.
      • Re: (Score:3, Interesting)

        I think the problem with the current system is that it scales horribly. Right now we're barely pushing 2 Megapixel displays with all those shader effects turned on. If the Japanese have their way we'll have 33MP displays in only 7 years - because this is the 'broadcast standard' they're shooting for. Can they double the performance of the current tech every 2 years to eventually meet that? I have my doubts.
        • by Lurks ( 526137 )

          "I think the problem with the current system is that it scales horribly."

          On the contrary, it scales very well actually. You can simply use more pixel pipelines to do things in parallel in fewer passes (which is what you see in most graphics hardware, including the current consoles), or you can render alternate lines, or chunks of the display, etc. across multiple entire chunks of hardware such as SLI/Crossfire on the PC.

          The problem you describe is essentially that any complicated visual processing in very

          The Japanese standard is, to put it bluntly, complete BS.

          We MIGHT have the technical capability to encode 4K video in 4:4:4 in realtime by 2015 with upper-pro-level gear - and that's a stretch. We won't see cameras like the Red One become standard in movie studios until well after this decade, much less television studios. 33 megapixels for a broadcast standard is ludicrous - and will be impossible even for the highest-end cinema to implement in 7 years.

          I'd settle for a solid, end-to-end 1080p60 in VC-1 as a broadc
      And why do you think that is any different with raytracing? Model geometry is very similar, and I doubt that you will understand the math behind the BRDF (Bidirectional Reflectance Distribution Function, a way to describe surface characteristics) or scattering in participating media without some time to study it. In fact they are so similar that OpenRT [openrt.de] is a raytracer with an interface quite similar to OpenGL. The shader system is completely different, though, as it wouldn't make sense to limit it to GPU-shaders without hardwa

    • by darthflo ( 1095225 ) on Friday January 18, 2008 @08:46AM (#22092144)
      Keep in mind recent parallelization advances. According to TFA, raytracing performance scales almost linearly with the number of processors (a factor of 15.2 for four quadcore machines connected via GigE, over a single core); neither Crossfire nor SLI scales remotely that well.
      If the parallelization trend continues as it's progressing now, many-core CPUs will probably arrive before 2010. Also, both AMD and Intel appear to be taking steps in the direction of enthusiast-grade multi-socket systems, increasing the average number of cores once again. Assuming raytracing can be parallelized as well as TFA makes it sound, rendering could just return to the CPUs. I'm no expert, but it does sound kinda nice.
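      For intuition on why the scaling is so close to linear, here is a toy sketch (hypothetical; trace_tile is a stand-in, not TFA's renderer): every tile of the image can be traced independently, with no cross-tile communication.

          from multiprocessing import Pool

          def trace_tile(tile):
              x0, y0, x1, y1 = tile
              # ...trace one ray per pixel of this rectangle (stubbed out)...
              return [(x, y, (0, 0, 0)) for y in range(y0, y1) for x in range(x0, x1)]

          def render(width, height, tile=64, workers=8):
              tiles = [(x, y, min(x + tile, width), min(y + tile, height))
                       for y in range(0, height, tile)
                       for x in range(0, width, tile)]
              with Pool(workers) as pool:
                  return pool.map(trace_tile, tiles)  # tiles never talk to each other

          if __name__ == "__main__":
              frame = render(768, 768)

      Raster pipelines share framebuffer and state between units; independent rays share almost nothing, which is why TFA's GigE cluster barely loses any efficiency.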
      • Re: (Score:2, Informative)

        by mdwh2 ( 535323 )
        Keep in mind recent parallelization advances. According to TFA, raytracing performance scales almost linearly with the number of processors

        Yes, and standard rasterisation methods are embarrassingly parallel, too. As the other reply points out, we already have parallel processors in the form of GPUs. So I don't see that either multicore CPUs, or the fact that raytracing is easily parallelised, is going to make raytracing suddenly catch up.

        What we might see, perhaps, is that one day processors are fast enough that peo
        The little problem is that power usage scales almost linearly with the number of cores, and so, to a degree, does the price.

        2010 sounds realistic for top-shelf equipment for the chosen few. 2020 looks more like consumer-grade electronics.
    It is hard to predict what creative artists' heads are capable of, given the possibilities of ray tracing. In the beginning it will not bring much improvement over current state-of-the-art graphics, just like late Amiga 2D games looked much better than the first textured 3D games (few polygons, low-res textures). With ray tracing technology, the complexity of a scene (number of polygons) can be increased by several orders of magnitude. This can especially improve games like flight simulators, where you can see
    As I understand it from the article, as the number of polygons goes up ray tracing becomes more and more attractive, and that's the major draw. Apparently there'll come a point where scenes are complex enough that it is more efficient to use ray tracing.

      He also hinted that ray tracing could make collision detection much better, so that you don't get hands/guns sticking through thin walls/doors, which would also be good.

      But hey, I'm not rooting for one or the other; game devs will use whatever is best and
    • Graphics hardware has gone down an entirely different route whereby you write little shader programs which create surface visual effects on top of the bread and butter polygons and textures. This is a well established system by now and has a naturally compressive effect. It's like making all your visual effects procedural in nature rather than giving objects simple real-world textures and then doing a load of crazy maths to simulate reality.

      ... a way which was pioneered by the Reyes renderer (if I am not i

    • the fact remains that we don't actually need this for games.

      Your post is heavily dependent on the availability of suitable hardware. Software can be ported and recompiled for new platforms, but hardware-dependent software has a short lifespan precisely because the hardware it needs doesn't stay available for long. There are a lot of otherwise enjoyable games which are unplayable now because they depended on the early Voodoo cards or other unrepeated graphics hardware. Now with CPU power ramping back up (rel
    Raytracing scales up far better than rasterization. Adding triangles to a raytraced scene has far less effect on it than on a rasterized scene, because you don't have to render anything that's not actually part of the scene... and you don't have to run what is effectively a separate rendering pass to eliminate hidden surfaces, and the processing for each collision is much simpler, so you can fit thousands of dedicated raytracing processors in a hardware raytracer where the sa
      • by daVinci1980 ( 73174 ) on Friday January 18, 2008 @12:55PM (#22096546) Homepage
        Disclaimer: I work for NVIDIA. I speak not for them.

        People keep saying this, that raytracing scales up better than rasterization. It's simply not true. Both of them have aspects that scale linearly and logarithmically. They do scale differently, but in a related sort of way.

        Raytracing is O(resolution) and O(ln(triangles)), assuming you already have your acceleration structures built. But guess what? It takes significant time to build your acceleration structures in the first place. And they change from frame to frame.

        Rasterization is O(ln(resolution)) and O(triangles). Basically, in a rasterizer, we only draw places where we have triangles. Places that don't have triangles have no work done. But the thing is, we've highly pipelined our ability to handle triangles. When people talk about impacting the framerate, I want to be clear about what we're talking about here: adding hundreds, thousands, or even a million triangles is not going to tank the processing power of a modern GPU. The 8800 Ultra can process in the neighborhood of 300M triangles per second. At 100 FPS, that'd be (not surprisingly) 3M triangles per frame.

        Modern scenes typically run in the 100-500K triangles per frame, so we've still got some headroom in this regard.
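        His arithmetic, spelled out (the rate and scene sizes are the parent's own numbers; the script is just an illustration):

            setup_rate = 300e6         # triangles/second (8800 Ultra, per parent)
            fps = 100
            budget = setup_rate / fps  # 3,000,000 triangles per frame
            scene = 500_000            # upper end of a typical scene
            print(f"Per-frame budget: {budget:,.0f} triangles")
            print(f"Headroom over a {scene:,}-triangle scene: {budget / scene:.0f}x")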

        Cheers.
    • by MobyDisk ( 75490 )
      But it is much harder to do certain effects this way. Ray tracing gives free per-pixel lighting, shadowing, reflections, and radiosity with minimal or no added work on the part of the developer. To do the same on today's cards the programmer must master a whole series of mathematical and psychological tricks and shortcuts to fake the appearance of those same things, then implement them in some funky assembly language for the video card.
      One of the problems with the current techniques is that programmers have to write shaders and artists have to make normal maps, high- and low-poly versions of models, and so on. It may be that raytracing reduces some of the need for this work, thus making games cheaper and faster to create.
      According to TFA, raster processes are a rough approximation of ray tracing, and an ultimately inefficient and non-scalable one at that. Once you get to a certain number of polys, ray tracing becomes better than raster processes in every regard (faster, more efficient, more detail), and can even be merged with the collision detection process for further efficiency and pinpoint detection. The point of TFA is that the advent of multicore CPUs will make it more efficient and less expensive to raytrace than to rast
  • There is a project 'The OpenRT Real-Time Ray-Tracing Project' [openrt.de] (not so much open despite name, but noncommercial code available) out there, and presumably Blender [blender.org] should be there soon [slashdot.org].

    CC.
      Its name is OpenRT just because it's modeled after OpenGL. There is nothing open in it, only the name.
  • Wow. (Score:3, Insightful)

    by SCHecklerX ( 229973 ) <greg@gksnetworks.com> on Friday January 18, 2008 @08:23AM (#22091958) Homepage
    I remember scenes I created with PoV sometimes taking several hours for a single frame to complete. Now we're looking at doing it in real time. Amazing.
  • Further Reading (Score:5, Interesting)

    by moongha ( 179616 ) on Friday January 18, 2008 @08:24AM (#22091970)
    ... on the subject, from someone that doesn't have a vested interest in seeing real time ray tracing in games becoming a reality.

    http://realtimecollisiondetection.net/blog/?p=38 [realtimeco...ection.net]
    • Re:Further Reading (Score:4, Insightful)

      by DeadChobi ( 740395 ) <(moc.liamg) (ta) (ibohCdaeD)> on Friday January 18, 2008 @12:27PM (#22095930)
      I think the article that your blog entry points to is a much better read on the subject. The article linked in the summary gushes on about how it's finally possible to ray trace in HD in real time, but only if you're willing to build a small cluster computer. In addition, the summary's article goes on about how the ray traced model scales logarithmically while the raster model scales linearly, but it doesn't provide a very rigorous explanation of where the writer is getting these values from.

      In short, I don't buy the summary article's viewpoint because at times he can be confusing or ambiguous with respect to his "proof." I like the parent's linked article, because the author of that article at least provides something computationally meaningful to think about.
  • by dada21 ( 163177 ) <adam.dada@gmail.com> on Friday January 18, 2008 @08:31AM (#22092026) Homepage Journal
    I was a founder of one of the Midwest's first rendering farms back in 1993, a company that has now moved on to product design. Back then we had Pentium 60s (IIRC) with 64MB of RAM. A single frame of non-ray traced 3D Studio animation took an hour or more. We had probably 40 PCs that handled the rendering, and they'd chug along 20 hours a day spitting out literally seconds of video. I remember our first ray trace sample (can't recall the platform for the PC, though) and it took DAYS to render a single frame.

    I do remember that someone found some shortcuts for raytracing, and I wonder if that shortcut is applicable to realtime rendering today. From what I recall, the shortcut was to do the raytracing backwards, from the surface to the light sources. The shortcut didn't take into account ALL reflections, but I remember that it worked wonders for transparent surfaces and simple light sources. I know we investigated this for our business, but at the time we were also considering leaving the industry, since the competition was starting to heat up. We did leave a few months early, but it was a smarter move on our part than continuing to invest in ever-faster hardware.
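    A rough sketch of that shortcut as it is usually described today - shadow rays for direct lighting (the function names and toy occluder are invented for illustration, not dada21's original method):

        def direct_light(point, lights, occluded):
            """Sum every light with an unblocked straight path to the point."""
            return sum(L["intensity"] for L in lights
                       if not occluded(point, L["pos"]))

        # Toy 2D scene: a wall at x=5 blocks anything on the far side of it.
        lights = [{"pos": (0, 10), "intensity": 0.8},
                  {"pos": (9, 10), "intensity": 0.6}]
        blocked = lambda p, lp: (p[0] < 5) != (lp[0] < 5)  # crude occluder test
        print(direct_light((2, 0), lights, blocked))       # 0.8: one light shadowed

    Tracing from the surface toward the lights skips all the emitted light that would never reach the camera, which is exactly why it was such a win.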

    Now, 15 years later, it's finally becoming a reality of sorts, or at least being seriously considered.

    Many will say that raytracing is NOT important for real time gaming, but I disagree completely. I wrote up a theory on it back in the day on how real time raytracing WOULD add a new layer of intrigue, drama and playability to the gaming world.

    First of all, real time raytracing means amazingly complex shadows and reflections. Imagine a gay where you could watch for enemies stealthily by monitoring shadows or reflections -- even shadows and reflections through glass, off of water, or other reflective/transparent materials. It definitely adds some playability and excitement, especially if you find locations that provide a target for those reflections and shadows.

    In my opinion, raytracing is not just about visual quality but about adding something that is definitely missing. My biggest problem with gaming has been the lack of peripheral vision (even with wide aspect ratios and funky fisheye effects). If you hunt, you know how important peripheral vision is, combined with truly 3D sound and even atmospheric conditions. Raytracing can definitely aid in rendering atmospheric conditions better (imagine which player would be aided by the sun in the soft fog and who would be harmed by it). It can't overcome the peripheral loss, but by producing truer shadows and reflections, you can overcome some of the gaming negatives by watching for the details.

    Of course, I also wrote that we'd likely never see true and complete raytracing in our lives. Maybe I'll be wrong, but "true and complete" raytracing is VERY VERY complicated. Even current non-real time raytracing engines don't account for every reflection, every shadow, every atmospheric condition and every change in movement. Sure, a truly infinite raytracer IS impossible, but I know that with more hardware assistance, it will get better.

    My experience over the years was ALWAYS with static images that were raytraced. They looked great, but it wasn't until I experienced raytraced animations (high res, many reflective and transparent layers with multiple light sources and a sun-source) that I really saw the benefit and how it would aid in gaming.

    The next step: a truly 3D immersive peripheral video system, maybe a curved paper-thin monitor?
    • by Twisted64 ( 837490 ) on Friday January 18, 2008 @08:48AM (#22092154) Homepage

      Imagine a gay where you could watch for enemies stealthily...
      How's that voice recognition software working out for you?
    • Re: (Score:3, Interesting)

      You can do global lighting (the next step from radiosity) with no side effects using interval rendering/raytracing. And since interval math lends itself to parallelization, your only limit would be hardware cost, which should eventually be low enough to have a globally-lit, raytraced real-time game. At first maybe just 3D Pacman, but how awesome would that be!

    • Re: (Score:3, Funny)

      by 4D6963 ( 933028 )

      Imagine a gay who could watch for enemies stealthily by monitoring shadows or reflections

      There, fixed it for you, it makes a bit more sense now, I guess..

    • My experience over the years was ALWAYS with static images that were raytraced. They looked great, but it wasn't until I experienced raytraced animations (high res, many reflective and transparent layers with multiple light sources and a sun-source) that I really saw the benefit and how it would aid in gaming.

      All this is fine, but I think we will have to wait another 20+ years for computers to be fast and cheap enough before this becomes a reality.

      The next step: a truly 3D immersive peripheral video system,
      • All this is fine, but I think we will have to wait another 20+ years for computers to be fast and cheap enough before this becomes a reality.
        This is the part that I don't understand... We have massively parallel video cards; couldn't silicon be etched to speed up common ray-trace functions?
      • by dada21 ( 163177 )
        All this is fine, but I think we will have to wait another 20+ years for computers to be fast and cheap enough before this becomes a reality.

        I'm not sure I agree, only because we're currently considering what horsepower we would need tomorrow to do it the way we do it today. I've looked at the technology many times over 15 years, including writing a few theoretical thoughts that I sold to private developers back in the day. One thing I looked at was a pre-rendered set of values for each object and face th
  • by Dr. Eggman ( 932300 ) on Friday January 18, 2008 @08:35AM (#22092046)
    Although I have a hard time arguing in the realm of 3D lighting, I will direct attention to the Beyond3D article, Real-Time Ray Tracing: Holy Grail or Fool's Errand? [beyond3d.com]. Far be it from me to claim that this article applies to all situations of 3D lighting - it may be that ray tracing is the best choice for games - but I for one am glad to see an article that at least looks into the possibility that ray tracing is not the best solution; I hate to just assume such things. Indeed, the article concludes that ray tracing has its own limitations and that a hybrid with rasterisation techniques would be superior to either one alone.
  • From TFA

    If you use a 16-core machine instead of a single-core machine then the frame rate increases by a factor of 15.2!


    No kidding?? Well if you drive a car with a 16 cylinder, 1500 HP engine, it's a LOT faster than a 4 cylinder compact. More on this story as it develops.
      In a lot of cases in computing, doubling the number of pipelines (read: adding a second core, for example) does not, in fact, double performance unless the problem being worked on is highly parallelizable. For example, this is why one cannot accurately describe a machine with two 2.5GHz processors as a "5GHz machine". Most computation that personal computers do on a day-to-day basis does not scale well, and the average user will reach a point of diminishing returns very quickly if they add many cores to in
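      The parent is describing Amdahl's law. A quick worked example (the parallel fractions below are assumptions chosen to bracket TFA's number):

          # Speedup on n cores when a fraction p of the work parallelises.
          def speedup(p, n):
              return 1.0 / ((1 - p) + p / n)

          print(speedup(0.996, 16))  # ~15.1x: near TFA's reported 15.2x
          print(speedup(0.50, 16))   # ~1.9x: typical desktop workloads

      A 15.2x speedup on 16 cores implies the tracer is over 99% parallel, which is the real content of that quote.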
      It's more like trucking sand with 16 cars instead of one. Now try to truck one big rock with 16 cars. Some things aren't easily parallelizable (tough word). Also, some processes just don't translate well into car metaphors.
    • Re: (Score:2, Insightful)

      by prefect42 ( 141309 )
      I'm sure you thought you were being clever, but you weren't.

      If you knew anything about parallel algorithms you'd know that it's fairly common to have things that scale with more like 75% efficiency, and you're still happy. It's all down to how much communication is required, and how often it's required. With raytracing normally (as in, a decade ago when I knew anything about this) you'd parallelise over multiple frames. With real-time rendering you'd need to parallelise within a frame. Depending on your
      • I'm sure you thought you were being clever, but you weren't.

        Wow, aren't you a ray of sunshine?? I didn't say I knew anything about parallel algorithms, I admit I know nothing about them. I can grasp what you (and some others) have replied, but none of that information was stated in conjunction with the line I quoted from TFA. My point? It just struck me funny to have this odd statement that (at least on its surface) seems to state something OBVIOUS which is, 16 processors are faster than 1. So for crying out loud, lighten up, it was a joke.

  • by Speare ( 84249 ) on Friday January 18, 2008 @08:50AM (#22092170) Homepage Journal

    Daniel Pohl, a marketer at Intel

    There, fixed that for you.

    Raytracing the shiny first-intersection makes a lot of sense, even if it doesn't sell more CPUs. Sure, some day we will all have stunning holistic scene graphs that fit entirely within the pipeline cache of the processor, but it's not yet time for that.

    Every change in developing a game engine requires changes in the entire toolset: how to produce assets, how to fit within render-time budgets, and how to model the scene graph and the logic graphs so that both are easily traversed and managed.

    In the meantime, we have a pretty nice raster system right now, with a development strategy that provides for all those needs. You might think that full-scale raytracing wouldn't upset this curve, but I'm not convinced. What do you do when a frame suddenly takes more than 1/30 sec to render because the player is near a crystalline object and the ray depth is too high? How do you degrade the scene gracefully if your whole engine is built on raytracing? We've all played games where things like this were not handled well.

    I contend that game AI is sometimes more advanced than academic AI because game developers are results-oriented and cut corners ruthlessly to achieve something that works well enough for a niche application. The same goes for game graphics: 33 milliseconds isn't enough to render complex scene graphs in an academically perfect and general way; it will take the same results-oriented corner-cutting to nudge the graphics beyond what anyone thought possible in 33 ms. If that means using raytracing for a few key elements and ray-casting/z-buffering/fragment-shading the rest of the frame, game developers will do it.

    • by JMZero ( 449047 ) on Friday January 18, 2008 @12:04PM (#22095452) Homepage
      I contend that game AI is sometimes more advanced than academic AI

      I contend that game AI is almost always laughably bad (or pretty much non-existent). I realize Mass Effect doesn't exactly win a lot of points for its AI, but the problems very nearly ruined a AAA-developer/large-budget game. I remember one point where, out of battle, I was telling one of my squad to go somewhere. There was pretty much one feature in the room - a wall intersecting the direct path to the target point. Getting around this wall would require one small deviation from the direct path. Instead of walking around the wall, the character just stood there saying "I'm going to need a transporter to get there" or something.

      I can't imagine how the "AI" could have been implemented in order for that kind of failure to be possible (and common - I had repeated problems with this through the game). I assume they must have just cheated vigorously on the "follow" logic, as - if they'd used the same system - you'd be losing your squad around every corner.

      Really, though, none of the maps were that complicated. The "navigable area" map for the problem location couldn't have had more than 200 or so vertices (it was a very simple map, one of the explorable bunker type areas). That's few enough that you could just Floyd-Warshall the whole graph. But, more generally, a stupidly naive, guessing DFS (that capped at 3 turns or something) would have worked just fine too. I can't think of a solution or algorithm that would fail the way their system did constantly. Mind-boggling.

      Stepping back a bit, this shouldn't even be a consideration. There are simple, fast, robust algorithms that could handle this kind of trivial pathing problem without putting any strain on CPU or occupying more than a couple pages of code. That they don't have a better solution says that they (and most of the games industry, in my experience as a player) value AI at very close to 0.
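      For scale, here is the kind of search that would have sufficed (a hypothetical nav graph; nothing here is from Mass Effect's actual code):

          from collections import deque

          def bfs_path(graph, start, goal):
              """Plain breadth-first search; instant on a 200-vertex nav graph."""
              prev, queue = {start: None}, deque([start])
              while queue:
                  v = queue.popleft()
                  if v == goal:
                      path = []
                      while v is not None:
                          path.append(v)
                          v = prev[v]
                      return path[::-1]
                  for w in graph[v]:
                      if w not in prev:
                          prev[w] = v
                          queue.append(w)
              return None

          nav = {"room": ["left_of_wall", "right_of_wall"],
                 "left_of_wall": ["room", "target"],
                 "right_of_wall": ["room"],
                 "target": ["left_of_wall"]}
          print(bfs_path(nav, "room", "target"))  # ['room', 'left_of_wall', 'target']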
  • by master_p ( 608214 ) on Friday January 18, 2008 @08:58AM (#22092258)
    What does the Amiga have to do with raytracing? Well, let me explain:

    When the Amiga was released, it was a quantum leap in graphics, sound, user interface and operating system design. It could run full-screen dual-playfield displays at 60 frames per second with a multitude of sprites; it had 4 hardware channels of sound (and some of the best filters ever put on a sound board); its user interface was intuitive and even allowed different video modes; and its operating system supported preemptive multithreading, registries per executable (.info files) that made installation of programs a non-issue, and a scripting language that all programs could use to talk to each other.

    20 years later, PCs have adopted quite a few trends from the Amiga (the average multimedia PC is now filled with custom chips) and added lots more in terms of hardware (hardware rendering, hardware transformation and lighting). It seems that the problems we had 20 years ago (how to render 2D and 3D graphics quickly) are solved.

    But today's computing has some more challenges for us: concurrency (how to increase the performance of a program through parallelism) and, when it comes to 3D graphics, raytracing! Incidentally, raytracing is a computational problem that is naturally parallelizable.

    So, what type of computer shall we have that solves the new challenges?

    It's simple: a computer with many many cores!

    That's right... the era of custom chips has to end here. Amiga started it for personal computers, and a new "Amiga" (be it a new console or a new type of computer) should end it.

    A machine with many cores (say, a few thousand) will open the door for many things not possible today, including raytracing, better A.I., natural language processing and many other hard problems.

    I just wish there were some new RJ Micals out there thinking about how to bring concurrency to the masses...
    • So, what type of computer shall we have that solves the new challenges?

      A custom chip that has hundreds or thousands of dedicated raytracing processors that run in parallel. Raytracing is embarrassingly parallelizable, so it's far better suited to specialized processors than vectorizing.

      Saarland University, the people who designed OpenRT in the first place, were getting 60+ frames a second on a hardware raytracing engine in 2005... and their raytracing engine only had 6 million gates and ran at 75 MHz. Today
    • "That's right...the era of custom chips has to be ended here"

      I wish people would stop saying this; the real-estate budget for transistors, memory bandwidth and a lot of other things get in the way of "have the CPU do it all". There's a reason custom chips have cornered the market ever since 3Dfx and the other first-generation 3D cards from 10 or so years ago: no matter how fast a CPU is, you can't compete with the degree of specialization (not to mention experience) of a company designing custom chips
    OK, when models get further away in games, they get less detailed, both in terms of textures and polygons. Now if you have a light casting a simplified character's shadow onto a wall quite a long way away, that simplification might become very much more obvious.
      That is an optimization that is used today. It is NOT a law. Think of the real world: just because that huge billboard is miles away doesn't mean some guy runs up to it, tears down the paper and puts a low-res version up on it. The entire world, in RL and in 3D, has the same detail no matter where it is.

      As you shoot the ray, it finds the entire world the same size and detail. This is actually one of the problems, for proper raytracing you can't use a lower res model for faraway objects because then the scene mi

    Professor Philipp Slusallek of Saarland University in Saarbruecken demonstrated a dedicated raytracer in 2005, using a 66 MHz Xilinx FPGA with about 6 million gates. The latest ATI and nVidia GPUs have 100 times as many transistors and run at 6-8 times the clock, with hundreds of times the memory bandwidth. Raytracing is completely parallelizable and scales up almost linearly with processors, so it's not at all unlikely that if those kinds of resources were applied to raytracing instead of vectorizing, you'd be able to add a raytracer capable of rendering 60+ FPS at the level of detail of the very latest games into the transistor budget of the chips they're designing now without even noticing.

    Here's a debate between Professor Slusallek and chief scientist David Kirk of nVidia: http://scarydevil.com/~peter/io/raytracing-vs-rasterization.html [scarydevil.com] .

    Here's the SIGGRAPH 2005 paper, on a prototype running at 66 MHz: http://www.cs.utah.edu/classes/cs7940-010-rajeev/sum06/papers/siggraph05.pdf [utah.edu]

    Here's their hardware page: http://graphics.cs.uni-sb.de/SaarCOR/ [uni-sb.de]
    I'm not disagreeing with the author (I did RTFA), but I want to say there is some ray tracing (in a sense) already in some modern games. Specifically, some types of parallax mapping can be considered bastard red-headed stepchildren of raytracing.

    What you have is ray tracing in texture space, but that texture is brought to the screen via conventional scanline rasterization methods. Sort of. My GLSL parallax shader code sucks though (looks all gelatinous close up), so I'm no expert....
  • Awesome. (Score:2, Funny)

    by randomaxe ( 673239 )
    I can't wait to play a game where I'm a shiny silver ball floating above a checkered marble floor.
  • The upper screenshot looks quite boring, because it has no lights and no shadows in it. In the lower example each pixel is being lit by an average of 3-5 light sources simultaneously.

    Screenshot [pcper.com]

    If we use ray tracing to render all the shadows accurately, then each pixel would ordinarily require one visibility ray and 4 shadow rays. If we were to use rasterization to achieve the same quality image, it would only save us the 1 visibility ray per pixel. Rasterization techniques simply are not robust enough to ac

  • by Animats ( 122034 ) on Friday January 18, 2008 @11:52AM (#22095168) Homepage

    It's amusing to read this. This guy apparently works for Intel's "find ways to use more CPU time" department. Back when I was working on physics engines, I encountered that group.

    Actually, the Holy Grail isn't real-time ray tracing. It's real-time radiosity. Ray-tracing works backwards from the viewpoint; radiosity works outward from the light sources. All the high-end 3D packages have radiosity renderers now. Here's a typical radiosity image [icreate3d.com] of a kitchen. Radiosity images are great for interiors, and architects now routinely use them for rendering buildings. Lighting effects work like they do in the real world. In a radiosity renderer, you don't have to add phony light sources to make up for the lack of diffuse lighting.

    There's a subtle effect that appears in radiosity images but not ray-traced images. Look at the kitchen image and find an inside corner. Notice the dark band at the inside corner. [mrcad.com] Look at an inside corner in the real world and you'll see that, too. Neither ray-tracing nor traditional rendering produces that effect, and it's a cue the human vision system uses to resolve corners. The dark band appears as the light bounces back and forth between the two walls, with more light absorbed on each bounce. Radiosity rendering is iterative: you render the image with the starting light sources, then re-render with each illuminated surface as a light source. Each rendering cycle improves the image until, somewhere around 5 to 50 cycles, the bounced light has mostly been absorbed.
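    The bounce iteration he describes can be shown in miniature (a toy two-patch version; the reflectance and form factor are invented, and real radiosity solves this over every patch pair in the scene):

        # Jacobi-style radiosity iteration for two facing patches in a corner.
        rho, F = 0.7, 0.5        # reflectance and mutual form factor (made up)
        emitted = [1.0, 0.0]     # patch 0 is directly lit, patch 1 is not
        radiosity = emitted[:]
        for bounce in range(50):
            incoming = [radiosity[1] * F, radiosity[0] * F]
            radiosity = [e + rho * i for e, i in zip(emitted, incoming)]
        print(radiosity)         # converges after a handful of bounces

    Because rho * F < 1, each cycle adds a smaller correction, which is why convergence arrives within a handful of bounces.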

    There are ways to precompute light maps from radiosity, then render in real time with an ordinary renderer, and those yield better-looking images of diffuse surfaces than ray-tracing would. Some games already do this. There's a demo of true real-time radiosity [dee.cz], but it doesn't show the "dark band in corners" effect, so it's not doing very many light bounces. Geomerics [geomerics.com] has a commercial real-time game rendering system.

    Ray-tracing can get you "ooh, shiny thing", but radiosity can get to "is that real?"

  • by ardor ( 673957 ) on Friday January 18, 2008 @12:37PM (#22096138)
    Hybrids do make a lot of sense. The author's argument against them is the need for a spatial partitioning structure if one mixes ray tracing with rasterization. This is a no-brainer; you'd have such a structure anyway.

    In fact, his points actually show why a hybrid is perfect: most surfaces are not shiny, refractive, a portal, etc. Most are opaque - and a rasterizer is much better for this (since no ray intersection tests are necessary). He shows pathological scenes where most surfaces are reflective; however, most shots do show a lot of opaque surfaces (since Quake 4 does not feature levels where one explores a glass labyrinth or something).

    Yes, if a reflective surface fills the entire screen, it's all pure ray tracing - and guess what, that is exactly what happens in a hybrid. Hybrid does not exclude pure ray tracing for special cases.

    Ideally, we'd have a rasterizer with a cast_ray() function in the shaders. The spatial partitioning structure could well reside within the graphics hardware's memory (as an added bonus, it could be used for predicate rendering). This way haze, fog, translucency, refractions, reflections, shadows could be done via ray tracing, and the basic opaque surface + its lighting via rasterization.
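    In code, the hybrid he is proposing might look like this (a hedged sketch: cast_ray, the g-buffer layout and the Surface fields are all hypothetical):

        from dataclasses import dataclass

        @dataclass
        class Surface:               # what the raster pass leaves in the g-buffer
            albedo: float
            direct_light: float
            reflective: bool = False
            reflectivity: float = 0.0
            pos: tuple = (0, 0, 0)
            reflect_dir: tuple = (0, 0, 1)

        def shade(surf, cast_ray):
            color = surf.albedo * surf.direct_light  # plain raster shading
            if surf.reflective:                      # only these pixels pay for rays
                color += surf.reflectivity * cast_ray(surf.pos, surf.reflect_dir)
            return color

        # Dummy ray caster standing in for the real spatial-structure lookup:
        print(shade(Surface(0.8, 1.0, True, 0.5), lambda o, d: 0.25))  # 0.925

    Opaque pixels never call cast_ray, which matches the best-of-both-worlds split he argues for.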

    Now, I keep hearing the argument that ray tracing is better because it scales better with geometric complexity. This is true, but largely irrelevant for games. Games do NOT feature 350 million triangles per frame - it just isn't necessary. Unless it's a huge scene, most of these triangles would be used for fine details, and we already have normal/parallax mapping for those. (Note, though, that relief mapping usually doesn't pay off; either the details are too tiny for relief mapping to make a difference, or they are large, in which case traditional geometry displacement mapping is usually better.) So coarse features are preserved in the geometry, and fine ridges and bumps reside in the normal map. This way the triangle count rarely exceeds 2 million triangles per frame (special cases where this does not apply include complex grass rendering and very wide, fine terrains). The difference is not visible, and in addition the mipmap chain takes care of any flickering, which would appear if all these details were geometry (and AA is more expensive than mipmaps, especially with ray tracing).

    This leaves us with no pros for pure raytracing. Take the best of both worlds, and go hybrid, just like the major CGI studios did.

"I am, therefore I am." -- Akira

Working...