Ray Tracing for Gaming Explored 266
Vigile brings us a follow-up to a discussion we had recently about efforts to make ray tracing a reality for video games. Daniel Pohl, a research scientist at Intel, takes us through the nuts and bolts of how ray tracing works, and he talks about how games such as Portal can benefit from this technology. Pohl also touches on the difficulty in mixing ray tracing with current methods of rendering. Quoting:
"How will ray tracing for games hit the market? Many people expect it to be a smooth transition - raster only to raster plus ray tracing combined, transitioning to completely ray traced eventually. They think that in the early stages, most of the image would be still rasterized and ray tracing would be used sparingly, only in some small areas such as on a reflecting sphere. It is a nice thought and reflects what has happened so far in the development of graphics cards. The only problem is: Technically it makes no sense."
Adaptive techniques: make or break (Score:4, Interesting)
Further Reading (Score:5, Interesting)
http://realtimecollisiondetection.net/blog/?p=38 [realtimeco...ection.net]
Re:This isn't what we need in games (Score:3, Interesting)
How far we've come in just 15 years (Score:5, Interesting)
I do remember that someone found a shortcut for raytracing, and I wonder if that shortcut is applicable to realtime rendering today. From what I recall, the shortcut was to do the raytracing backwards, from the surface to the light sources. It didn't take into account ALL reflections, but I remember that it worked wonders for transparent surfaces and simple light sources. I know we investigated this for our business, but at the time we were also considering leaving the industry since the competition was starting to heat up. We did leave a few months early, but it was a smart move on our part rather than continuing to invest in ever-faster hardware.
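For what it's worth, that "backwards" shortcut (trace from the eye into the scene, then cast a secondary ray from the hit surface toward the light to decide lit vs. shadowed) fits in a few lines of Python. This is a toy of my own, not from any real engine; the sphere scene and function names are made up:

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t, or None if no hit.
    `direction` is assumed normalized, so the quadratic's a == 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def lit(point, light, blockers):
    """Shadow ray: is the straight path from point to light unobstructed?"""
    to_light = [l - p for l, p in zip(light, point)]
    dist = math.sqrt(sum(x * x for x in to_light))
    d = [x / dist for x in to_light]
    for center, radius in blockers:
        t = hit_sphere(point, d, center, radius)
        if t is not None and t < dist:   # something sits between us and the light
            return False
    return True
```

A sphere sitting between a point and the light shadows it; move the point aside and the shadow ray reaches the light unobstructed.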
Now, 15 years later, it's finally becoming a reality of sorts, or at least considered.
Many will say that raytracing is NOT important for real time gaming, but I disagree completely. I wrote up a theory back in the day on how real time raytracing WOULD add a new layer of intrigue, drama and playability to the gaming world.
First of all, real time raytracing means amazingly complex shadows and reflections. Imagine a game where you could watch for enemies stealthily by monitoring shadows or reflections -- even shadows and reflections through glass, off of water, or other reflective/transparent materials. It definitely adds some playability and excitement, especially if you find locations that provide a target for those reflections and shadows.
In my opinion, raytracing is not just about visual quality but about adding something that is definitely missing. My biggest problem with gaming has been the lack of peripheral vision (even with wide aspect ratios and funky fisheye effects). If you hunt, you know how important peripheral vision is, combined with truly 3D sound and even atmospheric conditions. Raytracing can definitely aid in rendering atmospheric conditions better (imagine which player would be aided by the sun in the soft fog and who would be harmed by it). It can't overcome the peripheral loss, but by producing truer shadows and reflections, you can overcome some of the gaming negatives by watching for the details.
Of course, I also wrote that we'd likely never see true and complete raytracing in our lives. Maybe I'll be wrong, but "true and complete" raytracing is VERY VERY complicated. Even current non-real time raytracing engines don't account for every reflection, every shadow, every atmospheric condition and every change in movement. Sure, a truly infinite raytracer IS impossible, but I know that with more hardware assistance, it will get better.
My experience over the years was ALWAYS with static images that were raytraced. They looked great, but it wasn't until I experienced raytraced animations (high res, many reflective and transparent layers with multiple light sources and a sun-source) that I really saw the benefit and how it would aid in gaming.
The next step: a truly 3D immersive peripheral video system, maybe a curved paper-thin monitor?
shaders vs ray tracing .... (Score:1, Interesting)
And to the (+5 Insightful) naysayer who says that the future of games will be in shaders, not in RT:
You can implement a ray tracer on the GPU, i.e. through the use of shaders.
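To make that concrete, here's a toy illustration (my own, with made-up scene constants) of why the two map onto each other: a ray tracer is just one pure function evaluated independently per pixel, which is exactly a fragment shader's execution model. Python stands in for the shader language:

```python
def trace_pixel(x, y, width, height):
    """Per-pixel 'shader': ray from the eye through the pixel, test one sphere."""
    # Map the pixel to [-1, 1] screen coordinates.
    u = 2.0 * x / (width - 1) - 1.0
    v = 2.0 * y / (height - 1) - 1.0
    # Ray from the origin through the image plane at z = 1,
    # toward a unit sphere centered at (0, 0, 3).
    dx, dy, dz = u, v, 1.0
    cx, cy, cz = 0.0, 0.0, 3.0
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (dx * -cx + dy * -cy + dz * -cz)
    c = cx * cx + cy * cy + cz * cz - 1.0
    return '#' if b * b - 4.0 * a * c >= 0 else '.'

def render(width=24, height=12):
    """Run the same per-pixel function over the whole framebuffer."""
    return '\n'.join(
        ''.join(trace_pixel(x, y, width, height) for x in range(width))
        for y in range(height))
```

On a GPU each `trace_pixel` call would run in parallel on a different shader unit; the loop in `render` only exists because this is CPU Python.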
already done with Quake (Score:3, Interesting)
Re:This isn't what we need in games (Score:3, Interesting)
Re:Ray tracing is so wrong... (Score:2, Interesting)
Re:How far we've come in just 15 years (Score:3, Interesting)
AND...it doesn't produce realistic images! (Score:3, Interesting)
Re:already done with Quake (Score:3, Interesting)
Wow.
(Like most ray tracing advocates, he points out that ray tracing is "perfect for parallelization", but this ignores that so is standard 3D rendering - graphics cards have been taking advantage of this parallelization for years.)
General purpose CPUs: a REALLY bad way to do this. (Score:5, Interesting)
Here's a debate between Professor Slusallek and chief scientist David Kirk of nVidia: http://scarydevil.com/~peter/io/raytracing-vs-rasterization.html [scarydevil.com] .
Here's the SIGGRAPH 2005 paper, on a prototype running at 66 MHz: http://www.cs.utah.edu/classes/cs7940-010-rajeev/sum06/papers/siggraph05.pdf [utah.edu]
Here's their hardware page: http://graphics.cs.uni-sb.de/SaarCOR/ [uni-sb.de]
Re:I guess you could use it for shadows... (Score:3, Interesting)
Re:Now hear this (Score:4, Interesting)
Are there any reasons why current GPU designs can't be adapted for hardware assisted raytracing?
Re:Now hear this (Score:3, Interesting)
Re:Now hear this (Score:3, Interesting)
As an aside, isn't the work to combine your current bog-standard processors with inbuilt "graphics processors" (a la AMD Fusion and Intel Larrabee) just going to turn every consumer CPU into a Cell-ish architecture within five years or so - a number-crunching core or two plus an array of "dumb" scalar processors?
Re:Now hear this (Score:5, Interesting)
Furthermore, rasterization requires tricks (many would call them "hacks") to make the scene approach realism. In games today, shadows are textures (or stencil volumes) created by rendering more passes. While they look "good enough", they still have artifacts and limitations falling short of realistic. Shadows in raytracing come naturally. So do reflections, and refractions. Add some global illumination and the scene looks "real".
Rasterization requires hacks like occlusion culling, depth culling, sorting, portals, levels of detail, etc. to make 3D engines run in realtime, and some of these algorithms are insanely hard to implement well; even then you're doing unnecessary work and wasting RAM rendering things you never see. Raytracing only renders what's on the screen.
That being said, I don't think raytracing will completely replace rasterization, at least not right away. Eventually, some games may incorporate a hybrid approach like most commercial renderers do today (scanline rendering for geometry, add raytracing for reflections and shadows). Eventually, 3D hardware will better support raytracing, and maybe in another decade we'll begin to see fast 3D engines that use ray tracing exclusively.
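As a concrete illustration of reflections "coming naturally": on a hit you simply spawn a mirrored ray and recurse, with a depth cap to bound the work. A minimal sketch (the scene callable and the reflectivity blend are my own simplifications; the toy scene ignores ray origins, which a real tracer would compute from the hit point):

```python
def reflect(d, n):
    """Mirror direction d about surface normal n (3-tuples, n unit length)."""
    k = 2.0 * sum(di * ni for di, ni in zip(d, n))
    return tuple(di - k * ni for di, ni in zip(d, n))

def trace(ray, scene, depth=0, max_depth=3):
    """`scene` is a callable taking a ray and returning
    (color, normal, reflectivity) for a hit, or None for a miss."""
    hit = scene(ray)
    if hit is None:
        return (0.0, 0.0, 0.0)          # background color
    color, normal, reflectivity = hit
    if reflectivity > 0.0 and depth < max_depth:
        origin, direction = ray
        # Spawn the mirrored ray and blend in whatever it sees.
        bounce = (origin, reflect(direction, normal))
        bounced = trace(bounce, scene, depth + 1, max_depth)
        color = tuple((1 - reflectivity) * c + reflectivity * b
                      for c, b in zip(color, bounced))
    return color
```

Compare that to a rasterizer, where a single planar mirror already needs an extra render-to-texture pass and arbitrary curved reflectors need environment-map approximations.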
Re:General purpose CPUs: a REALLY bad way to do th (Score:1, Interesting)
Not ray tracing, radiosity (Score:5, Interesting)
It's amusing to read this. This guy apparently works for Intel's "find ways to use more CPU time" department. Back when I was working on physics engines, I encountered that group.
Actually, the Holy Grail isn't real time ray tracing. It's real time radiosity. Ray-tracing works backwards from the viewpoint; radiosity works outward from the light sources. All the high-end 3D packages have radiosity renderers now. Here's a typical radiosity image [icreate3d.com] of a kitchen. Radiosity images are great for interiors, and architects now routinely use them for rendering buildings. Lighting effects work like they do in the real world. In a radiosity renderer, you don't have to add phony light sources to make up for the lack of diffuse lighting.
There's a subtle effect that appears in radiosity images but not ray-traced images. Look at the kitchen image and look for an inside corner. Notice the dark band at the inside corner. [mrcad.com] Look at an inside corner in the real world and you'll see that, too. Neither ray-tracing nor traditional rendering produces that effect, and it's a cue the human vision system uses to resolve corners. The dark band appears as the light bounces back and forth between the two surfaces that form the corner, with more light absorbed on each bounce. Radiosity rendering is iterative: you render the image with the starting light sources, then re-render with each illuminated surface as a light source. Each rendering cycle improves the image until, somewhere around 5 to 50 cycles, the bounced light has mostly been absorbed.
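That iteration fits in a few lines. A sketch with a hypothetical two-patch scene (emission, reflectance, and form factors are all made up for illustration; the recurrence is B = E + rho * F * B, repeated until the bounced light is absorbed):

```python
def radiosity(emission, reflectance, form_factors, cycles=50):
    """Gather bounced light for `cycles` iterations (Jacobi style).
    emission[i]      -- light the patch emits on its own
    reflectance[i]   -- fraction of incoming light the patch re-radiates
    form_factors[i]  -- row of F: how much each other patch's light reaches i
    """
    b = list(emission)
    for _ in range(cycles):
        # Each patch re-gathers: its own emission plus reflected incoming light.
        b = [e + r * sum(f * bj for f, bj in zip(row, b))
             for e, r, row in zip(emission, reflectance, form_factors)]
    return b
```

With two facing patches, one a light source, the emitter ends up slightly brighter than its own emission because its own light bounces back at it; that feedback between nearby surfaces is exactly what makes the dark corner band show up in a full scene.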
There are ways to precompute light maps from radiosity, then render in real time with an ordinary renderer, and those yield better-looking images of diffuse surfaces than ray-tracing would. Some games already do this. There's a demo of true real-time radiosity [dee.cz], but it doesn't have the "dark band in corners" effect, so it's not doing very many light bounces. Geomerics [geomerics.com] has a commercial real-time game rendering system.
Ray-tracing can get you "ooh, shiny thing", but radiosity can get to "is that real?"
Re:General purpose CPUs: a REALLY bad way to do th (Score:1, Interesting)
The fact is that an EXISTING CPU running at 2GHz can outperform an EXISTING FPGA raytracer running at 66MHz.
My money's on stuff that actually works.
Re:in the player's best interests, natch (Score:4, Interesting)
I contend that game AI is almost always laughably bad (or pretty much non-existent). I realize Mass Effect doesn't exactly win a lot of points for its AI, but the problems very nearly ruined a AAA-developer/large-budget game. I remember one point where, out of battle, I was telling one of my squad to go somewhere. There was pretty much one feature in the room - a wall intersecting the direct path to the target point. Getting around this wall would require one small deviation from the direct path. Instead of walking around the wall, the character just stood there and said "I'm going to need a transporter to get there" or something.
I can't imagine how the "AI" could have been implemented in order for that kind of failure to be possible (and common - I had repeated problems with this through the game). I assume they must have just cheated vigorously on the "follow" logic, as - if they'd used the same system - you'd be losing your squad around every corner.
Really, though, none of the maps were that complicated. The "navigable area" map for the problem location couldn't have had more than 200 or so vertices (it was a very simple map, one of the explorable bunker type areas). That's few enough that you could just Floyd-Warshall the whole graph. But, more generally, a stupidly naive, guessing DFS (that capped at 3 turns or something) would have worked just fine too. I can't think of a solution or algorithm that would fail the way their system did constantly. Mind-boggling.
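To underline how cheap this is on a ~200-vertex nav graph: the textbook Floyd-Warshall all-pairs shortest-path algorithm is a triple loop (the graph below is hypothetical toy data, not from any game):

```python
INF = float('inf')

def floyd_warshall(n, edges):
    """All-pairs shortest paths. edges: dict {(u, v): weight} for a
    directed graph on vertices 0..n-1. Returns an n x n distance matrix."""
    dist = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for (u, v), w in edges.items():
        dist[u][v] = min(dist[u][v], w)
    # Relax every pair through each possible intermediate vertex k.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

At n = 200 that's 8 million trivial inner-loop steps, computed once per map load; after that every "walk around the wall" query is a table lookup.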
Stepping back a bit, this shouldn't even be a consideration. There are simple, fast, robust algorithms that could handle this kind of trivial pathing problem without putting any strain on CPU or occupying more than a couple pages of code. That they don't have a better solution says that they (and most of the games industry, in my experience as a player) value AI at very close to 0.
Re:Adaptive techniques: make or break (Score:1, Interesting)
If you have solid Unix knowledge and experience, and you value your work, you don't work with Windows, and you set that as a rule with your employer from day 0.
I have worked at various telcos, ISPs, hosting companies, and software factories, and now I've started my own company (very small, but it gives me and my family all we need), and the last Windows version I ever touched was 3.11.
If you are a good professional, and you respect yourself, you don't have to do work you don't want to do. Just as a Windows guy doesn't have to know Unix, as a Unix Coder/Admin you just state very clearly that you DON'T KNOW Windows, and are not interested in learning.
advantages for *artists* (Score:2, Interesting)
The obvious problem with layering hacks on top of hacks to make your games look better is that it takes more time and money to pay programmers to develop them.
However, one of the clear trends in gaming is spending more on game art and artists than on programming and programmers, and maybe a less obvious problem is that working with hacks is a lot harder for the artists, as well. As an example: setting up shadows in rasterization is not too difficult, codewise (it's fairly expensive, timewise, but clearly workable if you don't have too many lights).
However, from an artist's point of view, rasterized shadows are wonky, counterintuitive, and fiddly to use. You have to specify which lights cast shadows of which objects on which other objects. The specific angles of lights, cameras, and the scene in question can cause ugly artifacts, which may or may not be possible to eradicate. This is particularly a problem in interactive play, where it may not be practical to preview all possible combinations of the above.
So, an overlooked advantage of ray tracing may be to provide a better artistic *medium* for game developers -- if you can afford it....
Re:Now hear this (Score:3, Interesting)
realism (Score:3, Interesting)
This is a rather good point; at some point, adding more polygons doesn't do anything to make an unrealistic scene look more realistic. This is true for raytracing and polygon rendering alike. Ray tracing has some advantages here, for what it's worth; it scales better with huge scenes, and it can represent non-triangular primitives natively (though all the fast ray-tracers I've seen only deal with triangles). I wouldn't call reflection a non-issue; currently, no one cares because current implementations aren't very good, and there aren't any better alternatives to compare with. Same with refraction. Ray-tracing can do soft shadows, but they're computationally more expensive (at least, all the approaches I know are).
Ray tracing is just the next step towards realism. Once we start doing ray tracing, we can move on to global illumination. Photon mapping is a GI algorithm that works like ray tracing in reverse; it casts rays out from the light source, and then they can bounce off of or be absorbed by the objects they hit (depending on surface properties), and their final location is stored as a point in a data structure called a photon map. Then, when you ray trace, you use the local density of the photon map to approximate the amount of light. Photon mapping can be used to simulate ambient light, caustics (patches of light reflected off of or refracted by shiny things), and subsurface scattering in a way that is physically correct and unbiased. See "The Light of Mies van der Rohe" animation on this page [ucsd.edu] for an example of ambient light, or here [ucsd.edu] for some images of caustics.
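Schematically (a much-simplified 1D toy of my own, nothing like a production photon mapper), the two phases look like this: shoot photons from the light with Russian-roulette bounces, store where they land, then estimate brightness from local photon density:

```python
import random

def shoot_photons(n, bounce_prob=0.5, seed=1):
    """Phase 1: scatter photons onto a 'floor' segment [0, 1].
    Each Russian-roulette bounce jitters the landing position; when the
    roulette says 'absorbed', the final position goes into the photon map."""
    rng = random.Random(seed)
    photon_map = []
    for _ in range(n):
        x = rng.random()                        # first landing spot
        while rng.random() < bounce_prob:       # bounce or absorb?
            x = min(1.0, max(0.0, x + rng.uniform(-0.1, 0.1)))
        photon_map.append(x)
    return photon_map

def irradiance(photon_map, x, radius=0.05):
    """Phase 2: density estimate at x -- count photons within `radius`,
    normalized by the length searched and the total photon count."""
    count = sum(1 for p in photon_map if abs(p - x) <= radius)
    return count / (2.0 * radius * len(photon_map))
```

A real implementation stores 3D positions (plus power and direction) in a kd-tree and does a k-nearest-neighbor lookup instead of this linear scan, but the shoot/store/estimate structure is the same.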
Ray tracers can do all the things that polygon renderers can do, plus a bit more. Once the hardware gets fast enough (and it looks like that will happen within a few years), there's no real reason not to use ray tracing. Photon mapping is more expensive (there's an nlogn sort involved in constructing the photon map), so it will probably be quite a while before we start to see real-time global illumination updates, but that's the next big step, and we can't go from here to there with the polygon rendering algorithms we're using now.