Ray Tracing for Gaming Explored

Vigile brings us a follow-up to a discussion we had recently about efforts to make ray tracing a reality for video games. Daniel Pohl, a research scientist at Intel, takes us through the nuts and bolts of how ray tracing works, and he talks about how games such as Portal can benefit from this technology. Pohl also touches on the difficulty in mixing ray tracing with current methods of rendering. Quoting: "How will ray tracing for games hit the market? Many people expect it to be a smooth transition - raster only to raster plus ray tracing combined, transitioning to completely ray traced eventually. They think that in the early stages, most of the image would be still rasterized and ray tracing would be used sparingly, only in some small areas such as on a reflecting sphere. It is a nice thought and reflects what has happened so far in the development of graphics cards. The only problem is: Technically it makes no sense."
  • by MessyBlob ( 1191033 ) on Friday January 18, 2008 @09:08AM (#22091858)
    Adaptive rendering would seem to be the way forward. Ray tracing has the advantage that you can bail out when it gets complicated, or render areas to the desired resolution. This means a developer can prioritise certain regions of the scene and ignore others: useful during scenes of fast motion, or to bring detail to stillness. The result is similar to a decoded video stream, with detail in the areas that are usefully perceived as detailed. Combining this with eye position sensing (for a single user) would improve the experience.
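    To illustrate (a toy sketch of my own; the tile grid, the priority function, and all the numbers are made up, not from any real engine):

        // Hypothetical sketch of priority-driven adaptive ray budgets: each
        // screen tile receives a share of the frame's rays proportional to
        // how much the viewer is assumed to care about it (e.g. distance
        // from a tracked gaze point). Purely illustrative, not a real API.
        #include <cmath>
        #include <cstdio>
        #include <vector>

        struct Tile { int index; float priority; };   // priority in [0,1]

        int main() {
            const int tiles = 16;
            const int raysPerFrame = 100000;
            std::vector<Tile> grid;
            for (int i = 0; i < tiles; ++i) {
                // Toy priority: tiles near the center (or a gaze point)
                // matter more than the periphery.
                float d = std::fabs(i - tiles / 2.0f) / (tiles / 2.0f);
                grid.push_back({i, 1.0f - d});
            }
            float total = 0;
            for (const Tile& t : grid) total += t.priority;
            for (const Tile& t : grid) {
                int budget = static_cast<int>(raysPerFrame * t.priority / total);
                std::printf("tile %2d -> %6d rays\n", t.index, budget);
            }
        }

    Low-priority tiles could then be traced at coarser resolution and upsampled, which is exactly the decoded-video-stream effect described above.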
  • Further Reading (Score:5, Interesting)

    by moongha ( 179616 ) on Friday January 18, 2008 @09:24AM (#22091970)
    ... on the subject, from someone who doesn't have a vested interest in seeing real time ray tracing in games become a reality.

    http://realtimecollisiondetection.net/blog/?p=38 [realtimeco...ection.net]
  • by BlueMonk ( 101716 ) <BlueMonkMN@gmail.com> on Friday January 18, 2008 @09:28AM (#22091992) Homepage
    I think the problem with the current system, however, is that you have to be a professional 3D game developer with years of study and experience to understand how it all works. If you could instead define scenes in the same terms that ray tracers accept scene definitions, the complexity might come down a notch, making quality 3D game development a little more accessible and easier to deal with, even if it doesn't provide technical advantages.
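    For instance, a scene in ray-tracer terms could be nothing more than plain data. This is a hypothetical sketch (the structs and values are mine, not any real engine's):

        // A scene declared as data: objects, materials, lights, camera.
        // No render passes, shadow maps, or shader pipelines in sight.
        #include <vector>

        struct Vec3 { float x, y, z; };
        struct Sphere { Vec3 center; float radius; Vec3 color; float reflectivity; };
        struct Light  { Vec3 position; float intensity; };

        struct Scene {
            std::vector<Sphere> objects;
            std::vector<Light> lights;
            Vec3 cameraPos, cameraLookAt;
        };

        int main() {
            Scene scene {
                { { {0, 1, 5}, 1.0f, {0.9f, 0.1f, 0.1f}, 0.3f },         // red ball
                  { {0, -1000, 0}, 999.0f, {0.5f, 0.5f, 0.5f}, 0.0f } }, // "floor"
                { { {10, 10, -5}, 1.0f } },                              // one light
                {0, 2, -5}, {0, 1, 5}
            };
            (void)scene;   // hand this to any ray tracer; no pipeline setup needed
        }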
  • by dada21 ( 163177 ) <adam.dada@gmail.com> on Friday January 18, 2008 @09:31AM (#22092026) Homepage Journal
    I was a founder of one of the Midwest's first rendering farms back in 1993, a company that has now moved on to product design. Back then we had Pentium 60s (IIRC) with 64MB of RAM. A single frame of non-ray traced 3D Studio animation took an hour or more. We had probably 40 PCs that handled the rendering, and they'd chug along 20 hours a day spitting out literally seconds of video. I remember our first ray trace sample (can't recall the platform for the PC, though) and it took DAYS to render a single frame.

    I do remember that someone found some shortcuts for raytracing, and I wonder if that shortcut is applicable to realtime rendering today. From what I recall, the shortcut was to do the raytracing backwards, from the surface to the light sources. The shortcut didn't take into account ALL reflections, but I remember that it worked wonders for transparent surfaces and simple light sources. I know we investigated this for our business, but at the time we were also considering leaving the industry since the competition was starting to ignite. We did leave a few months early, but that was a smarter move on our part than continuing to invest in ever-faster hardware.
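    That "backwards" idea is essentially eye rays plus shadow rays. A toy sketch of the principle (one made-up sphere and light; names and numbers are mine):

        // Trace from the eye to a surface, then fire ONE shadow ray from the
        // hit point toward the light, instead of following every bounce.
        #include <cmath>
        #include <cstdio>

        struct Vec { double x, y, z; };
        Vec sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        double dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        // Ray-sphere intersection: distance along the ray, or -1 on a miss.
        double hitSphere(Vec orig, Vec dir, Vec center, double r) {
            Vec oc = sub(orig, center);
            double b = dot(oc, dir), c = dot(oc, oc) - r*r;
            double disc = b*b - c;
            if (disc < 0) return -1;
            double t = -b - std::sqrt(disc);
            return t > 0 ? t : -1;
        }

        int main() {
            Vec eye{0, 0, 0}, dir{0, 0, 1};           // one primary ray
            Vec sphere{0, 0, 5}; double radius = 1;
            Vec light{5, 5, 0};
            double t = hitSphere(eye, dir, sphere, radius);
            if (t > 0) {
                Vec hit{eye.x + t*dir.x, eye.y + t*dir.y, eye.z + t*dir.z};
                // Shadow ray: surface -> light source.
                Vec toLight = sub(light, hit);
                double len = std::sqrt(dot(toLight, toLight));
                Vec sdir{toLight.x/len, toLight.y/len, toLight.z/len};
                bool blocked = hitSphere(hit, sdir, sphere, radius) > 1e-6;
                std::printf("hit at t=%.2f, %s\n", t, blocked ? "in shadow" : "lit");
            }
        }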

    Now, 15 years later, it's finally becoming a reality of sorts, or at least being considered.

    Many will say that raytracing is NOT important for real time gaming, but I disagree completely. I wrote up a theory back in the day on how real time raytracing WOULD add a new layer of intrigue, drama and playability to the gaming world.

    First of all, real time raytracing means amazingly complex shadows and reflections. Imagine a game where you could watch for enemies stealthily by monitoring shadows or reflections -- even shadows and reflections through glass, off of water, or other reflective/transparent materials. It definitely adds some playability and excitement, especially if you find locations that provide a target for those reflections and shadows.

    In my opinion, raytracing is not just about visual quality but about adding something that is definitely missing. My biggest problem with gaming has been the lack of peripheral vision (even with wide aspect ratios and funky fisheye effects). If you hunt, you know how important peripheral vision is, combined with truly 3D sound and even atmospheric conditions. Raytracing can definitely aid in rendering atmospheric conditions better (imagine which player would be aided by the sun in the soft fog and who would be harmed by it). It can't overcome the peripheral loss, but by producing truer shadows and reflections, you can overcome some of the gaming negatives by watching for the details.

    Of course, I also wrote that we'd likely never see true and complete raytracing in our lives. Maybe I'll be wrong, but "true and complete" raytracing is VERY VERY complicated. Even current non-real time raytracing engines don't account for every reflection, every shadow, every atmospheric condition and every change in movement. Sure, a truly infinite raytracer IS impossible, but I know that with more hardware assistance, it will get better.

    My experience over the years was ALWAYS with static images that were raytraced. They looked great, but it wasn't until I experienced raytraced animations (high res, many reflective and transparent layers with multiple light sources and a sun-source) that I really saw the benefit and how it would aid in gaming.

    The next step: a truly 3D immersive peripheral video system, maybe a curved paper-thin monitor?
  • by Anonymous Coward on Friday January 18, 2008 @09:38AM (#22092070)
    As already mentioned, OpenRT is not open source. A good open source RTRT I've looked at is Manta. http://code.sci.utah.edu/Manta/index.php/Main_Page [utah.edu]

    And to the (+5 Insightful) naysayer who says that the future of games will be in shaders not in RT ... what are you comparing there? You're almost comparing a technology with an implementation.

    You can implement a ray tracer on the GPU, i.e. through the use of shaders.
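    To make that concrete: the per-pixel work of a ray tracer is an independent little function, which is exactly the shape of a fragment shader. The sketch below is plain C++ (my own toy scene, made-up values), but the body of tracePixel() could be transliterated into GLSL or HLSL almost line for line:

        #include <cmath>
        #include <cstdio>

        // One pixel's work: intersect one ray with one sphere, shade by depth.
        float tracePixel(float u, float v) {              // u, v in [-1, 1]
            float len = std::sqrt(u*u + v*v + 1);
            float dx = u/len, dy = v/len, dz = 1/len;     // ray from the origin
            float cz = 4, r = 1;                          // sphere at (0,0,4)
            float b = dz * -cz;                           // dot(dir, origin - center)
            float c = cz*cz - r*r;
            float disc = b*b - c;
            if (disc < 0) return 0.0f;                    // miss: black
            float t = -b - std::sqrt(disc);
            return t > 0 ? 1.0f / t : 0.0f;               // closer = brighter
        }

        int main() {
            for (int y = 0; y < 8; ++y) {                 // tiny 8x8 "framebuffer"
                for (int x = 0; x < 8; ++x)
                    std::putchar(tracePixel(x/4.0f - 1, y/4.0f - 1) > 0 ? '#' : '.');
                std::putchar('\n');
            }
        }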
  • by CannedTurkey ( 920516 ) on Friday January 18, 2008 @09:41AM (#22092096)
    I think the problem with the current system is that it scales horribly. Right now we're barely pushing 2 megapixel displays with all those shader effects turned on. If the Japanese have their way we'll have 33MP displays in only 7 years - because this is the 'broadcast standard' they're shooting for. Can they double the performance of the current tech every 2 years to eventually meet that? Doubling every two years for seven years buys only about an 11x gain (2^3.5), against a 16x jump in pixel count, so I have my doubts.
  • by Maian ( 887886 ) on Friday January 18, 2008 @09:46AM (#22092142)
    According to the article, incorrect. Read the 3rd page: http://www.pcper.com/article.php?aid=506&type=expert&pid=3 [pcper.com]
    You can do global lighting (the next step from radiosity) with no side effects using interval rendering/raytracing. And since interval math lends itself to parallelization, your only limit would be hardware cost, which should eventually be low enough to have a globally-lit and raytraced real-time game. At first maybe just 3D Pac-Man, but how awesome would that be!

  • by Joce640k ( 829181 ) on Friday January 18, 2008 @10:02AM (#22092304) Homepage
    Ray tracing looks hyper real on scenes full of plastic and mirrors but it's useless for rendering real-world scenes where radiosity effects dominate.
  • by mdwh2 ( 535323 ) on Friday January 18, 2008 @10:30AM (#22092642) Journal
    It says that a quad core processor gets 16.9 frames at 256x256 resolution.

    Wow.

    (Like most ray tracing advocates, he points out that ray tracing is "perfect for parallelization", but this ignores that so is standard 3D rendering - graphics cards have been taking advantage of this parallelisation for years.)
    Professor Philipp Slusallek of Saarland University in Saarbruecken demonstrated a dedicated raytracer in 2005, using a 66 MHz Xilinx FPGA with about 6 million gates. The latest ATI and nVidia GPUs have 100 times as many transistors, run at 6-8 times the clock, and have hundreds of times the memory bandwidth. Raytracing is completely parallelizable and scales up almost linearly with processors, so it's not at all unlikely that, if those kinds of resources were applied to raytracing instead of rasterizing, you could add a raytracer capable of rendering 60+ FPS at the level of detail of the very latest games into the transistor budget of the chips they're designing now without even noticing.

    Here's a debate between Professor Slusallek and chief scientist David Kirk of nVidia: http://scarydevil.com/~peter/io/raytracing-vs-rasterization.html [scarydevil.com] .

    Here's the SIGGRAPH 2005 paper, on a prototype running at 66 MHz: http://www.cs.utah.edu/classes/cs7940-010-rajeev/sum06/papers/siggraph05.pdf [utah.edu]

    Here's their hardware page: http://graphics.cs.uni-sb.de/SaarCOR/ [uni-sb.de]
  • Read TFA. Ray tracing does NOT happen on the graphics card; it happens on your CPU. And they've got Quake 4 at 1280x running at 90 FPS raytraced already. Since raytracing scales almost linearly, as you add more cores to your CPU (which is likely the future direction of CPU technology improvement), you improve raytracing performance by about the same factor.
  • Re:Now hear this (Score:4, Interesting)

    by MrNemesis ( 587188 ) on Friday January 18, 2008 @11:45AM (#22093818) Homepage Journal
    Even more interestingly, they managed to do Quake 4 using CPUs only. Since modern graphics cards are no longer just a bunch of vector processors but rather a colossal stack of scalar processing units, they should be adaptable to different types of processing much more flexibly. At the moment their internal software is generally specialised for polygon pushing, but I don't see any reason why nVidia or whoever couldn't start developing an OpenRT stack to sit alongside their OpenGL and DirectX stacks, other than there not being much interest in consumer-level raytracing just yet (is there raytracing work being done for GPGPU projects?).

    Are there any reasons why current GPU designs can't be adapted for hardware assisted raytracing?
  • Re:Now hear this (Score:3, Interesting)

    by Mister Whirly ( 964219 ) on Friday January 18, 2008 @11:47AM (#22093868) Homepage
    And people used to say the same thing about real-time 3D gaming in the late 80s. Along with - water and fire will never look realistic, you will never be able to render more than 10 frames per second, and nobody will ever buy an expensive video card with tons of memory and a FPU built in.
  • Re:Now hear this (Score:3, Interesting)

    by MrNemesis ( 587188 ) on Friday January 18, 2008 @12:11PM (#22094356) Homepage Journal
    Thanks for that, good to finally see something that seems ideally suited to the Cell.

    As an aside, isn't the work to combine your current bog-standard processors with inbuilt "graphics processors" (a la AMD Fusion and Intel Larrabee) just going to turn every consumer CPU into a Cell-ish architecture within five years or so - a number-crunching core or two plus an array of "dumb" scalar processors?
  • Re:Now hear this (Score:5, Interesting)

    by Cornelius the Great ( 555189 ) on Friday January 18, 2008 @12:21PM (#22094556)
    You completely missed the parent's point. Traditional rasterization chugs when a scene gets complex enough (I think the complexity is O(n)). Ray tracing scales very nicely (O(log n)) and you can throw in stuff like TRUE reflection/refraction with minimal decreases in performance, with millions more polygons. Yes, rasterization is faster in current games, but throw hundreds of millions of polygons into a scene and see what happens.
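    The log-like behavior comes from the acceleration structure. Here's a deliberately tiny sketch of my own (a 1-D stand-in for a real BVH; real ones use 3-D boxes) showing rays skipping whole subtrees:

        // Rays walk down a bounding-volume hierarchy and ignore subtrees
        // whose boxes they miss, so doubling the primitive count adds
        // roughly one more tree level of work per ray.
        #include <cstdio>
        #include <memory>

        struct Box { float lo, hi; };                 // 1-D "bounding box"

        struct Node {
            Box box;
            std::unique_ptr<Node> left, right;        // null for leaves
            int primitive = -1;                       // leaf payload
        };

        bool hits(const Box& b, float x) { return x >= b.lo && x <= b.hi; }

        // Count nodes actually visited; missed subtrees cost nothing.
        int traverse(const Node* n, float x, int& visited) {
            if (!n || !hits(n->box, x)) return -1;
            ++visited;
            if (n->primitive >= 0) return n->primitive;
            int hit = traverse(n->left.get(), x, visited);
            return hit >= 0 ? hit : traverse(n->right.get(), x, visited);
        }

        // Balanced tree over [lo,hi) with one leaf per unit interval.
        std::unique_ptr<Node> build(int lo, int hi) {
            auto n = std::make_unique<Node>();
            n->box = {float(lo), float(hi)};
            if (hi - lo == 1) { n->primitive = lo; return n; }
            int mid = (lo + hi) / 2;
            n->left = build(lo, mid);
            n->right = build(mid, hi);
            return n;
        }

        int main() {
            for (int n = 1024; n <= 1048576; n *= 32) {
                auto root = build(0, n);
                int visited = 0;
                traverse(root.get(), n / 2 + 0.5f, visited);
                std::printf("%8d primitives -> %2d nodes visited\n", n, visited);
            }
        }

    Going from a thousand primitives to a million only roughly doubles the per-ray work, which is the scaling the parent is talking about.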

    Furthermore, rasterization requires tricks (many would call them "hacks") to make the scene approach realism. In games today, shadows are textures (or stencil volumes) created by rendering more passes. While they look "good enough", they still have artifacts and limitations falling short of realistic. Shadows in raytracing come naturally. So do reflections, and refractions. Add some global illumination and the scene looks "real".

    Rasterization requires hacks like occlusion culling, depth culling, sorting, portals, levels of detail, etc. to make 3D engines run realtime, and some of these algorithms are insanely hard to implement well; even then you're doing unnecessary work and wasting RAM rendering things you never see. Raytracing only renders what's on the screen.

    That being said, I don't think raytracing will completely replace rasterization, at least not right away. Eventually, some games may incorporate a hybrid approach like most commercial renderers do today (scanline rendering for geometry, add raytracing for reflections and shadows). Eventually, 3D hardware will better support raytracing, and maybe in another decade we'll begin to see fast 3D engines that use ray tracing exclusively.
  • by Anonymous Coward on Friday January 18, 2008 @12:30PM (#22094732)
    David Kirk's explanation is totally correct. The need for "accuracy" in gaming is not high enough to justify the downside of ray tracing. Multipass rendering and shaders are obviously sufficient for 99% of games given the state of GPUs and the quality of today's modern game engines. I consulted with nVidia back in the 90s on multipass techniques based on my PhD work at UPenn (available at my current webpage: http://www.pages.drexel.edu/~pjd37/diefenbach96thesis.pdf [drexel.edu] ), and these so-called "hacks" can approximate physics-based calculations very closely for non-critical applications.
  • by Animats ( 122034 ) on Friday January 18, 2008 @12:52PM (#22095168) Homepage

    It's amusing to read this. This guy apparently works for Intel's "find ways to use more CPU time" department. Back when I was working on physics engines, I encountered that group.

    Actually, the Holy Grail isn't real time ray tracing. It's real time radiosity. Ray-tracing works backwards from the viewpoint; radiosity works outward from the light sources. All the high-end 3D packages have radiosity renderers now. Here's a typical radiosity image of a kitchen [icreate3d.com]. Radiosity images are great for interiors, and architects now routinely use them for rendering buildings. Lighting effects work like they do in the real world. In a radiosity renderer, you don't have to add phony light sources to make up for the lack of diffuse lighting.

    There's a subtle effect that appears in radiosity images but not ray-traced images. Look at the kitchen image and look for an inside corner. Notice the dark band at the inside corner. [mrcad.com] Look at an inside corner in the real world and you'll see that, too. Neither ray-tracing nor traditional rendering produces that effect, and it's a cue the human vision system uses to resolve corners. The dark band appears as the light bounces back and forth between the two corners, with more light absorbed on each bounce. Radiosity rendering is iterative; you render the image with the starting light sources, then re-render with each illuminated surface as a light source. Each rendering cycle improves the image, until, somewhere around 5 to 50 cycles, the bounced light has mostly been absorbed.
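    In sketch form, that bounce loop is just this (a hypothetical three-patch scene; the form factors and reflectances are made-up numbers, where real renderers compute them from geometry):

        // Iterative radiosity: every patch re-emits the light it has
        // gathered, scaled by its reflectance, until the added energy
        // dies out -- which is where the dark corner bands come from.
        #include <cstdio>
        #include <vector>

        int main() {
            const int patches = 3;
            // formFactor[i][j]: fraction of light leaving patch j that
            // reaches patch i (toy values, not computed from geometry).
            double formFactor[patches][patches] = {
                {0.0, 0.4, 0.4},
                {0.4, 0.0, 0.4},
                {0.4, 0.4, 0.0},
            };
            double reflect[patches] = {0.7, 0.7, 0.5};
            std::vector<double> radiosity = {1.0, 0.0, 0.0};  // patch 0 = the light
            std::vector<double> unshot = radiosity;           // not yet bounced

            for (int bounce = 1; bounce <= 8; ++bounce) {
                std::vector<double> next(patches, 0.0);
                for (int i = 0; i < patches; ++i)
                    for (int j = 0; j < patches; ++j)
                        next[i] += reflect[i] * formFactor[i][j] * unshot[j];
                double added = 0;
                for (int i = 0; i < patches; ++i) {
                    radiosity[i] += next[i];
                    added += next[i];
                }
                unshot = next;
                std::printf("bounce %d: added %.4f energy\n", bounce, added);
                if (added < 1e-3) break;   // bounced light mostly absorbed
            }
            for (int i = 0; i < patches; ++i)
                std::printf("patch %d radiosity %.3f\n", i, radiosity[i]);
        }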

    There are ways to precompute light maps from radiosity, then render in real time with an ordinary renderer, and those yield better-looking images of diffuse surfaces than ray-tracing would. Some games already do this. There's a demo of true real-time radiosity [dee.cz], but it doesn't have the "dark band in corners" effect, so it's not doing very many light bounces. Geomerics [geomerics.com] has a commercial real-time game rendering system.

    Ray-tracing can get you "ooh, shiny thing", but radiosity can get to "is that real?"

  • by Anonymous Coward on Friday January 18, 2008 @01:03PM (#22095426)
    Sorry, but that design is worthless. Any academic, or indeed student, with a few thousand dollars can buy a Xilinx system and design a 66MHz chip. What they tend to do next is assume that all they have to do is buy a more expensive Xilinx system and the chip will suddenly run at 2GHz. Nothing could be further from the truth. The design changes completely.

    The fact is that an EXISTING CPU running at 2GHz can outperform an EXISTING FPGA raytracer running at 66MHz.

    My money's on stuff that actually works.
  • by JMZero ( 449047 ) on Friday January 18, 2008 @01:04PM (#22095452) Homepage
    I contend that game AI is sometimes more advanced than academic AI

    I contend that game AI is almost always laughably bad (or pretty much non-existent). I realize Mass Effect doesn't exactly win a lot of points for its AI, but the problems very nearly ruined a AAA-developer/large-budget game. I remember one point where, out of battle, I was telling one of my squad to go somewhere. There was pretty much one feature in the room - a wall intersecting the direct path to the target point. Getting around this wall would require one small deviation from the direct path. Instead of walking around the wall, the character just stood there saying "I'm going to need a transporter to get there" or something.

    I can't imagine how the "AI" could have been implemented in order for that kind of failure to be possible (and common - I had repeated problems with this through the game). I assume they must have just cheated vigorously on the "follow" logic, as - if they'd used the same system - you'd be losing your squad around every corner.

    Really, though, none of the maps were that complicated. The "navigable area" map for the problem location couldn't have had more than 200 or so vertices (it was a very simple map, one of the explorable bunker type areas). That's few enough that you could just Floyd-Warshall the whole graph. But, more generally, a stupidly naive, guessing DFS (that capped at 3 turns or something) would have worked just fine too. I can't think of a solution or algorithm that would fail the way their system did constantly. Mind-boggling.

    Stepping back a bit, this shouldn't even be a consideration. There are simple, fast, robust algorithms that could handle this kind of trivial pathing problem without putting any strain on CPU or occupying more than a couple pages of code. That they don't have a better solution says that they (and most of the games industry, in my experience as a player) value AI at very close to 0.
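    For scale, the whole thing fits in a page. A toy sketch (my own made-up nav graph, obviously not BioWare's code): Floyd-Warshall gives all-pairs shortest paths, and even 200 vertices is only 200^3 = 8 million relaxations.

        #include <cstdio>

        const int INF = 1 << 28;   // "no edge"; big, but safe to add twice

        int main() {
            const int n = 4;       // rooms 0 and 3 separated by a wall
            int d[n][n], via[n][n];
            for (int i = 0; i < n; ++i)
                for (int j = 0; j < n; ++j) {
                    d[i][j] = (i == j) ? 0 : INF;
                    via[i][j] = j;                  // assume a direct hop
                }
            auto edge = [&](int a, int b, int w) { d[a][b] = d[b][a] = w; };
            edge(0, 1, 2); edge(1, 2, 2); edge(2, 3, 2);   // around the wall

            for (int k = 0; k < n; ++k)             // Floyd-Warshall core
                for (int i = 0; i < n; ++i)
                    for (int j = 0; j < n; ++j)
                        if (d[i][k] + d[k][j] < d[i][j]) {
                            d[i][j] = d[i][k] + d[k][j];
                            via[i][j] = via[i][k];  // first hop toward j
                        }

            // Walk the recovered path from 0 to 3 instead of standing there.
            std::printf("path 0 -> 3:");
            for (int at = 0; at != 3; at = via[at][3]) std::printf(" %d", at);
            std::printf(" 3 (cost %d)\n", d[0][3]);
        }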
  • Yes, a low-budget junior, or a crappy developer.

    If you have solid Unix knowledge and experience, and you value your work, you don't work with Windows, and you set that as a rule with your employer from day 0.

    I have worked at various telcos, ISPs, hosting companies, and software factories, and now I have started my own company (very small, but it gives me and my family all we need), and the last Windows version I ever touched was 3.11.

    If you are a good professional, and you respect yourself, you don't have to do work you don't want to do. Just as a Windows guy doesn't have to know Unix, as a Unix coder/admin you just state very clearly that you DON'T KNOW Windows and are not interested in learning.
  • by comingstorm ( 807999 ) on Friday January 18, 2008 @02:22PM (#22097172) Homepage

    The obvious problem with layering hacks on top of hacks to make your games look better is that it takes more time and money to pay programmers to develop them.

    However, one of the clear trends in gaming is spending more on game art and artists than on programming and programmers, and a less obvious problem is that working with hacks is a lot harder for the artists as well. As an example: setting up shadows in rasterization is not too difficult, code-wise (it's fairly expensive, time-wise, but clearly workable if you don't have too many lights).

    However, from an artist's point of view, rasterized shadows are wonky, counterintuitive, and fiddly to use. You have to specify which lights cast shadows of which objects on which other objects. The specific angles of lights, cameras, and the scene in question can cause ugly artifacts, which may or may not be possible to eradicate. This is particularly a problem in interactive play, where it may not be practical to preview all possible combinations of the above.

    So, an overlooked advantage of ray tracing may be to provide a better artistic *medium* for game developers -- if you can afford it....

  • Re:Now hear this (Score:3, Interesting)

    by IdeaMan ( 216340 ) on Friday January 18, 2008 @02:35PM (#22097436) Homepage Journal
    Actually, if the Open Source world gets a clean, easy-to-use OpenRT stack and standard going before MS, it would have one shot at making the next killer app. Once it gets one truly awesome game out using RT and an easy way to switch to Linux, the rest of the gaming world could fall right into its lap.
  • realism (Score:3, Interesting)

    by j1m+5n0w ( 749199 ) on Friday January 18, 2008 @08:20PM (#22103144) Homepage Journal

    Higher polygon counts, sure nice to have, but again not really all that important...

    This is a rather good point; at some point, adding more polygons doesn't do anything to make an unrealistic scene look more realistic. This is true for raytracing and polygon rendering alike. Ray tracing has some advantages here, for what it's worth; it scales better with huge scenes, and it can represent non-triangular primitives natively (though all the fast ray-tracers I've seen only deal with triangles). I wouldn't call reflection a non-issue; currently no one cares because current implementations aren't very good and there aren't any better alternatives to compare with. Same with refraction. Ray-tracing can do soft shadows, but they're computationally more expensive (at least, all the approaches I know of are).

    Ray tracing is just the next step towards realism. Once we start doing ray tracing, we can move on to global illumination. Photon mapping is a GI algorithm that works like ray tracing in reverse; it casts rays out from the light source, which can then bounce off of or be absorbed by the objects they hit (depending on surface properties), and each photon's final location is stored as a point in a data structure called a photon map. Then, when you ray trace, you use the local density of the photon map to approximate the amount of light. Photon mapping can be used to simulate ambient light, caustics (patches of light reflected off of or refracted by shiny things), and subsurface scattering in a physically based way. See "The Light of Mies van der Rohe" animation on this page [ucsd.edu] for an example of ambient light, or here [ucsd.edu] for some images of caustics.
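    Stripped of the kd-tree and all surface detail, the two phases look something like this hypothetical sketch (a flat 2-D "floor" and a disc-shaped light footprint; real photon maps use nearest-neighbor queries, not the linear scan below):

        #include <cmath>
        #include <cstdio>
        #include <random>
        #include <vector>

        struct Photon { double x, y; double power; };

        int main() {
            std::mt19937 rng(42);
            std::uniform_real_distribution<double> jitter(-1.0, 1.0);

            // Phase 1: shoot photons from the light; store where they land.
            std::vector<Photon> map;
            const int emitted = 20000;
            for (int i = 0; i < emitted; ++i) {
                double x = jitter(rng), y = jitter(rng);
                if (x * x + y * y > 1.0) continue;   // absorbed / missed
                map.push_back({x * 3.0, y * 3.0, 1.0 / emitted});
            }

            // Phase 2: radiance estimate at a shading point = power gathered
            // within radius r, divided by the disc area pi*r^2.
            const double PI = 3.141592653589793;
            auto estimate = [&](double px, double py) {
                const double r = 0.5;
                double sum = 0;
                for (const Photon& p : map) {
                    double dx = p.x - px, dy = p.y - py;
                    if (dx * dx + dy * dy < r * r) sum += p.power;
                }
                return sum / (PI * r * r);
            };

            std::printf("under the light: %.4f\n", estimate(0.0, 0.0));
            std::printf("far corner:      %.4f\n", estimate(2.8, 2.8));
        }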

    Ray tracers can do all the things that polygon renderers can do, plus a bit more. Once the hardware gets fast enough (and it looks like that will happen within a few years), there's no real reason not to use ray tracing. Photon mapping is more expensive (there's an O(n log n) sort involved in constructing the photon map), so it will probably be quite a while before we start to see real-time global illumination updates, but that's the next big step, and we can't get from here to there with the polygon rendering algorithms we're using now.
