Ray Tracing for Gaming Explored
Vigile brings us a follow-up to a discussion we had recently about efforts to make ray tracing a reality for video games. Daniel Pohl, a research scientist at Intel, takes us through the nuts and bolts of how ray tracing works, and he talks about how games such as Portal can benefit from this technology. Pohl also touches on the difficulty in mixing ray tracing with current methods of rendering. Quoting:
"How will ray tracing for games hit the market? Many people expect it to be a smooth transition - raster only to raster plus ray tracing combined, transitioning to completely ray traced eventually. They think that in the early stages, most of the image would be still rasterized and ray tracing would be used sparingly, only in some small areas such as on a reflecting sphere. It is a nice thought and reflects what has happened so far in the development of graphics cards. The only problem is: Technically it makes no sense."
Holy Grail? Maybe not. (Score:4, Informative)
He's talking about scaling (Score:2, Informative)
So all he's demonstrating here with his "16-core" experiment is that ray-tracing is a highly parallel process, and that throwing lots of cores at it will work effectively to increase performance without reaching that point of diminishing returns (at least, not reaching it very quickly.) Yes, we expect 16 cores to be faster than 4 cores or 1 core, but he's saying that when we're ray-tracing we can expect 16 cores to be almost four times faster than four cores and almost sixteen times faster than one.
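The near-linear scaling described above follows from each primary ray being independent of every other. A minimal sketch (all scene and shading functions here are hypothetical stand-ins, not Intel's code): the frame is split into interleaved row bands and each worker shades its band with no shared mutable state, which is why adding cores keeps paying off.

```python
# Sketch: ray tracing is "embarrassingly parallel" because each pixel's
# primary ray is independent, so the image splits cleanly across workers.
# trace_pixel() is a stand-in for a real ray-scene intersection test.
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 64, 48

def trace_pixel(x, y):
    # Any pure function of (x, y) works for the sketch; a real tracer
    # would intersect the ray through this pixel with the scene.
    return (x * 31 + y * 17) % 256

def render_rows(rows):
    # One worker's job: shade a band of rows, touching no shared state.
    return [(y, [trace_pixel(x, y) for x in range(WIDTH)]) for y in rows]

def render_parallel(workers=4):
    # Interleaved bands give each worker a similar amount of work.
    bands = [range(i, HEIGHT, workers) for i in range(workers)]
    image = [None] * HEIGHT
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for band in pool.map(render_rows, bands):
            for y, row in band:
                image[y] = row
    return image
```

(Threads are used here only to keep the sketch self-contained; a real renderer would use processes, SIMD, or many hardware cores, which is exactly where the 4x-to-16x scaling in the experiment comes from.)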
Re:This isn't what we need in games (Score:2, Informative)
Yes, and standard rasterisation methods are embarrassingly parallel, too. As the other reply points out, we already have parallel processors in the form of GPUs. So I don't see that either multicore CPUs, or the fact that raytracing is easily parallelised, is going to make raytracing suddenly catch up.
What we might see, perhaps, is that one day processors are fast enough that people are willing to take the performance hit to get better effects from ray tracing (though even then, I hear that non-real-time realistic 3D rendering often doesn't use ray tracing these days?)
Re:already done with Quake (Score:3, Informative)
Re:Now hear this (Score:5, Informative)
One of the biggest hurdles in game graphics is geometry detail. Normal mapping is just a hack to make things appear more detailed, and it breaks down in some situations. Raytracing will allow *much* higher geometry detail than rasterisation. Better reflections, refraction, caustics and so on are just gravy.
Re:Now hear this (Score:2, Informative)
In a word, yes. Check Beyond3D.com forum's GPGPU subforum (lots of raytracer stuff discussed and introduced) -- and also the CellPerformance subforum about raytracers on PS3's CPU (those SPEs rock for that) running on Linux of course.
Re:Now hear this (Score:5, Informative)
No, they have a graph that very clearly shows that raytracing while using a binary tree to cull non-visible surfaces has a performance advantage over rasterizing while using nothing to cull non-visible surfaces. Perhaps someday a raster engine will regain that advantage by using these "BSP Trees" [gamedev.net] as well.
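The culling the comment alludes to works the same way for either renderer: a spatial subdivision tree lets a query skip whole subtrees it cannot touch. A toy 1-D sketch (hypothetical data; real BSP/kd-trees partition 3-D space the same way):

```python
# Toy spatial subdivision over object x-intervals: a query interval only
# descends into subtrees it overlaps, so entire groups of objects are
# culled without per-object tests -- the "BSP Tree" advantage in miniature.
class Node:
    def __init__(self, objects, depth=0):
        # objects: list of (lo, hi, name) x-intervals.
        if len(objects) <= 2 or depth > 8:
            self.leaf, self.objects = True, objects
            return
        self.leaf = False
        self.split = sorted(lo for lo, _, _ in objects)[len(objects) // 2]
        # Objects straddling the split plane go into both children.
        self.left = Node([o for o in objects if o[0] < self.split], depth + 1)
        self.right = Node([o for o in objects if o[1] >= self.split], depth + 1)

    def query(self, lo, hi):
        # Names of objects overlapping [lo, hi]; subtrees outside the
        # query are never visited, which is the whole culling win.
        if self.leaf:
            return {name for a, b, name in self.objects if a <= hi and b >= lo}
        hits = set()
        if lo < self.split:
            hits |= self.left.query(lo, hi)
        if hi >= self.split:
            hits |= self.right.query(lo, hi)
        return hits
```

The structure is renderer-agnostic: a ray tracer walks it per ray, while a raster engine can walk it once per frame to reject invisible geometry, which is the fair comparison the graph omitted.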
Nice introduction, but wrong conclusion (Score:4, Informative)
In fact, his points actually show why a hybrid is perfect: most surfaces are not shiny, refractive, a portal, etc. Most are opaque - and a rasterizer is much better for this (since no ray intersection tests are necessary). He shows pathological scenes where most surfaces are reflective; however, most shots do show a lot of opaque surfaces (since Quake 4 does not feature levels where one explores a glass labyrinth or something).
Yes, if a reflective surface fills the entire screen, it's all pure ray tracing - and guess what, that is exactly what happens in a hybrid. Hybrid does not exclude pure ray tracing for special cases.
Ideally, we'd have a rasterizer with a cast_ray() function in the shaders. The spatial partitioning structure could well reside within the graphics hardware's memory (as an added bonus, it could be used for predicate rendering). This way haze, fog, translucency, refractions, reflections, shadows could be done via ray tracing, and the basic opaque surface + its lighting via rasterization.
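A minimal sketch of that hybrid loop (every function here is a hypothetical stand-in, including the cast_ray() the comment wishes shaders had): rasterize every pixel first, then spawn secondary rays only where the material demands them, so ray-tracing cost tracks reflective coverage rather than screen size.

```python
# Hybrid sketch: opaque pixels are shaded by the rasterizer alone; only
# pixels whose material needs a secondary ray pay for ray tracing.
def rasterize(x, y):
    # Stand-in G-buffer lookup: base color plus a material tag.
    material = "mirror" if (x + y) % 4 == 0 else "opaque"
    return (x % 256, material)

def cast_ray(x, y):
    # Stand-in for a shader-side cast_ray(): trace a secondary ray into
    # the spatial partitioning structure and return its shade.
    return 255 - (x % 256)

def shade(width, height):
    image, rays_cast = [], 0
    for y in range(height):
        row = []
        for x in range(width):
            color, material = rasterize(x, y)
            if material == "mirror":   # only these pixels cost a ray
                color = (color + cast_ray(x, y)) // 2
                rays_cast += 1
            row.append(color)
        image.append(row)
    return image, rays_cast
```

If a mirror fills the frame, every pixel takes the cast_ray() branch and the hybrid degenerates to pure ray tracing, exactly as the comment below notes for special cases.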
Now, I keep hearing the argument that ray tracing is better because it scales better with geometric complexity. This is true, but largely irrelevant for games. Games do NOT feature 350 million triangles per frame - it just isn't necessary. Unless it's a huge scene, most of these triangles would be used for fine details, and we already have normal/parallax mapping for these. (Note though that relief mapping usually doesn't pay off; either the details are too tiny for relief mapping to make a difference, or they are large, and in this case traditional geometry displacement mapping is usually better.) So, coarse features are preserved in the geometry, and fine ridges and bumps reside in the normal map. This way, triangle count rarely exceeds 2 million triangles per frame (special cases where this does not apply include complex grass rendering and very wide and fine terrains). The difference is not visible, and in addition the mipmap chain takes care of any flickering, which would appear if all these details were geometry (and AA is more expensive than mipmaps, especially with ray tracing).
This leaves us with no pros for pure raytracing. Take the best of both worlds, and go hybrid, just like the major CGI studios did.
Re:Adaptive techniques: make or break (Score:3, Informative)
No cookie for whoever gave the bozo above me "insightful".
Casting from the light source forward (photon tracing) is too expensive and is rarely done, though it does give you more accurate effects - caustics, color bleeding from reflected illumination, etc.
Some of that can be approximated more cheaply with radiosity algorithms.
Re:already done with Quake (Score:3, Informative)
Re:already done with Quake (Score:3, Informative)
I for one welcome our new raytracing overlords...