
Ray Tracing for Gaming Explored

Vigile brings us a follow-up to a discussion we had recently about efforts to make ray tracing a reality for video games. Daniel Pohl, a research scientist at Intel, takes us through the nuts and bolts of how ray tracing works, and he talks about how games such as Portal can benefit from this technology. Pohl also touches on the difficulty in mixing ray tracing with current methods of rendering. Quoting: "How will ray tracing for games hit the market? Many people expect it to be a smooth transition - raster only to raster plus ray tracing combined, transitioning to completely ray traced eventually. They think that in the early stages, most of the image would be still rasterized and ray tracing would be used sparingly, only in some small areas such as on a reflecting sphere. It is a nice thought and reflects what has happened so far in the development of graphics cards. The only problem is: Technically it makes no sense."
  • by Dr. Eggman ( 932300 ) on Friday January 18, 2008 @09:35AM (#22092046)
    Although I have a hard time arguing in the realm of 3D lighting, I will direct attention to the Beyond3D article, Real-Time Ray Tracing: Holy Grail or Fool's Errand? [beyond3d.com]. Far be it from me to claim that this article applies to all situations of 3D lighting; it may be that ray tracing is the best choice for games. But I for one am glad to see an article that at least looks into the possibility that ray tracing is not the best solution; I hate to just assume such things. Indeed, the article concludes that ray tracing has its own limitations and that a hybrid with rasterisation techniques would be superior to either one alone.
  • by Xocet_00 ( 635069 ) on Friday January 18, 2008 @10:00AM (#22092278)
    In a lot of cases in computing, doubling the number of pipelines (read: adding a second core, for example) does not, in fact, double performance unless the problem being worked on is highly parallelizable. This is why one cannot accurately describe a machine with two 2.5GHz processors as a "5GHz machine". Most computation that personal computers do on a day-to-day basis does not scale well, and the average user will reach a point of diminishing returns very quickly if they add many cores to increase performance for these tasks.

    So all he's demonstrating here with his "16-core" experiment is that ray-tracing is a highly parallel process, and that throwing lots of cores at it will effectively increase performance without reaching that point of diminishing returns (at least, not very quickly). Yes, we expect 16 cores to be faster than 4 cores or 1 core, but he's saying that when we're ray-tracing we can expect 16 cores to be almost four times faster than four cores and almost sixteen times faster than one.
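    A minimal sketch of why that works (my own toy C++, not from TFA; trace_pixel is a made-up stand-in for the real per-pixel work): every pixel's primary ray is independent of every other, so the image can simply be split across threads with no locking anywhere.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Stand-in for the real work; an actual tracer would intersect
    // the pixel's primary ray against the scene here.
    static float trace_pixel(int x, int y) {
        return std::sin(x * 0.01f) * std::cos(y * 0.01f);
    }

    int main() {
        const int width = 768, height = 768;
        std::vector<float> framebuffer(width * height);
        const int cores = std::max(1u, std::thread::hardware_concurrency());

        std::vector<std::thread> workers;
        for (int t = 0; t < cores; ++t) {
            workers.emplace_back([&, t] {
                // Interleave rows across threads: every write goes to a
                // distinct pixel, so no synchronization is needed.
                for (int y = t; y < height; y += cores)
                    for (int x = 0; x < width; ++x)
                        framebuffer[y * width + x] = trace_pixel(x, y);
            });
        }
        for (auto& w : workers) w.join();
        std::printf("rendered %dx%d on %d threads\n", width, height, cores);
    }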
  • by mdwh2 ( 535323 ) on Friday January 18, 2008 @10:28AM (#22092606) Journal
    Keep in mind recent parallelization advances. According to TFA, raytracing performance scales almost linearly with the number of processors.

    Yes, and standard rasterisation methods are embarrassingly parallel, too. As the other reply points out, we already have parallel processors in the form of GPUs. So I don't see that multicore CPUs, or the fact that raytracing is easily parallelised, is going to make raytracing suddenly catch up.

    What we might see, perhaps, is that one day processors are fast enough that people are willing to take the performance hit to get better effects from ray tracing (though even then, I hear that even non-real-time realistic 3D rendering often doesn't use ray tracing these days?)
  • by quantumRage ( 1122013 ) on Friday January 18, 2008 @10:49AM (#22092928)
    Well, if you look more closely, you'll notice that the articles have the same author!
  • Re:Now hear this (Score:5, Informative)

    by tolan-b ( 230077 ) on Friday January 18, 2008 @12:02PM (#22094132)
    I think you're missing the point. The reason Quake 4 looks like crap raytraced is that it wasn't written to be raytraced; no shaders are being applied (they were all written for a raster engine), so of course it looks bad. This stuff is just research.

    One of the biggest hurdles in game graphics is geometry detail. Normal mapping is just a hack to make things appear more detailed, and it breaks down in some situations. Raytracing will allow *much* higher geometry detail than rasterisation. Better reflection, refraction, caustics and so on are just gravy.
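    A toy illustration of that point (my own code, nothing from TFA; sample_normal_map just fakes some ridges): the shading normal is replaced per texel, so a perfectly flat quad lights as if it were bumpy, but the geometry itself - and hence the silhouette - stays flat. That's the hack.

    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Hypothetical stand-in for sampling a tangent-space normal map.
    static Vec3 sample_normal_map(float u, float /*v*/) {
        float tilt = 0.5f * std::sin(u * 40.0f);
        float len = std::sqrt(tilt * tilt + 1.0f);
        return { tilt / len, 0.0f, 1.0f / len };  // unit length
    }

    int main() {
        Vec3 light = { 0.0f, 0.0f, 1.0f };   // head-on light
        Vec3 flat  = { 0.0f, 0.0f, 1.0f };   // the true surface normal
        for (float u = 0.0f; u < 1.0f; u += 0.25f) {
            Vec3 n = sample_normal_map(u, 0.0f);
            // The Lambert term with the mapped normal varies across the
            // quad even though the real geometry is constant-flat.
            std::printf("u=%.2f  flat=%.2f  mapped=%.2f\n",
                        u, dot(flat, light), dot(n, light));
        }
    }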
  • Re:Now hear this (Score:2, Informative)

    by Anonymous Coward on Friday January 18, 2008 @12:07PM (#22094278)
    (is there raytracing work being done for GPGPU projects?)

    In a word, yes. Check the GPGPU subforum on Beyond3D.com (lots of raytracer work is discussed and introduced there), and also the CellPerformance subforum about raytracers running on the PS3's CPU under Linux (those SPEs rock for that).
  • Re:Now hear this (Score:5, Informative)

    by roystgnr ( 4015 ) * <roy&stogners,org> on Friday January 18, 2008 @12:41PM (#22094926) Homepage
    they have a graph that very clearly shows raytracing at a performance advantage as complexity increases.

    No, they have a graph that very clearly shows that raytracing while using a binary tree to cull non-visible surfaces has a performance advantage over rasterizing while using nothing to cull non-visible surfaces. Perhaps someday a raster engine will regain that advantage by using these "BSP Trees" [gamedev.net] as well.
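    For the curious, here's a toy version of the kind of tree being described (my own sketch, not the article's code). Each node stores a bounding box; a ray that misses the box skips the entire subtree, which is where the roughly logarithmic scaling with scene size comes from. A rasterizer with no such structure touches every triangle instead.

    #include <algorithm>
    #include <cstdio>
    #include <memory>
    #include <vector>

    struct AABB { float lo[3], hi[3]; };

    // Standard slab test: does the ray o + t*d hit the box?
    static bool hit(const AABB& b, const float o[3], const float d[3]) {
        float tmin = 0.0f, tmax = 1e30f;
        for (int a = 0; a < 3; ++a) {
            float inv = 1.0f / d[a];
            float t0 = (b.lo[a] - o[a]) * inv, t1 = (b.hi[a] - o[a]) * inv;
            if (inv < 0) std::swap(t0, t1);
            tmin = std::max(tmin, t0);
            tmax = std::min(tmax, t1);
            if (tmax < tmin) return false;
        }
        return true;
    }

    struct Node {
        AABB box;
        std::vector<int> tris;             // leaf: triangle indices
        std::unique_ptr<Node> left, right;
    };

    // Collect candidate triangles; whole subtrees are culled on a miss.
    static void traverse(const Node* n, const float o[3], const float d[3],
                         std::vector<int>& out) {
        if (!n || !hit(n->box, o, d)) return;      // prune this subtree
        if (!n->left && !n->right) {
            out.insert(out.end(), n->tris.begin(), n->tris.end());
            return;
        }
        traverse(n->left.get(), o, d, out);
        traverse(n->right.get(), o, d, out);
    }

    int main() {
        Node root;                         // tiny two-leaf tree for demo
        root.box = {{-1,-1,-1},{1,1,1}};
        root.left  = std::make_unique<Node>();
        root.left->box  = {{-1,-1,-1},{0,1,1}};
        root.left->tris = {0, 1};
        root.right = std::make_unique<Node>();
        root.right->box  = {{0,-1,-1},{1,1,1}};
        root.right->tris = {2, 3};

        float origin[3] = {-2, 0, 0}, dir[3] = {1, 0, 0};
        std::vector<int> candidates;
        traverse(&root, origin, dir, candidates);
        std::printf("%zu candidate triangles\n", candidates.size());
    }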
  • by ardor ( 673957 ) on Friday January 18, 2008 @01:37PM (#22096138)
    Hybrids do make a lot of sense. The author's argument against them is that mixing ray tracing with rasterization requires a spatial partitioning structure. This is a no-brainer; you'd have such a structure anyway.

    In fact, his points actually show why a hybrid is perfect: most surfaces are not shiny, refractive, a portal, etc. Most are opaque, and a rasterizer is much better for those (since no ray intersection tests are necessary). He shows pathological scenes where most surfaces are reflective; however, most shots show a lot of opaque surfaces (since Quake 4 does not feature levels where one explores a glass labyrinth or something).

    Yes, if a reflective surface fills the entire screen, it's all pure ray tracing - and guess what, that is exactly what happens in a hybrid. Hybrid does not exclude pure ray tracing for special cases.

    Ideally, we'd have a rasterizer with a cast_ray() function in the shaders (a sketch of the idea follows this comment). The spatial partitioning structure could well reside within the graphics hardware's memory (as an added bonus, it could be used for predicated rendering). This way haze, fog, translucency, refractions, reflections and shadows could be done via ray tracing, and the basic opaque surfaces plus their lighting via rasterization.

    Now, I keep hearing the argument that ray tracing is better because it scales better with geometric complexity. This is true, but largely irrelevant for games. Games do NOT feature 350 million triangles per frame - it just isn't necessary. Unless it's a huge scene, most of these triangles would be used for fine details, and we already have normal/parallax mapping for those. (Note though that relief mapping usually doesn't pay off; either the details are too tiny for relief mapping to make a difference, or they are large, in which case traditional geometry displacement mapping is usually better.) So coarse features are preserved in the geometry, and fine ridges and bumps reside in the normal map. This way, the triangle count rarely exceeds 2 million per frame (special cases where this does not apply include complex grass rendering and very wide, finely detailed terrain). The difference is not visible, and in addition the mipmap chain takes care of any flickering that would appear if all these details were geometry (and AA is more expensive than mipmaps, especially with ray tracing).

    This leaves us with no pros for pure raytracing. Take the best of both worlds and go hybrid, just like the major CGI studios did.
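    Here's roughly what that could look like, sketched as CPU-side pseudostructure (my own sketch; the real thing would live in shaders, and shade_opaque / cast_ray are stand-ins, not any engine's actual API): rasterize everything into a G-buffer first, then spend rays only on the pixels whose material demands them.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct GBufferTexel {               // filled in by the raster pass
        float pos[3], normal[3];
        std::uint8_t material;          // 0 = plain opaque, 1 = mirror
    };

    // Stubs standing in for the engine's real shading, and a cast_ray()
    // that would walk the same spatial structure the tracer uses.
    static float shade_opaque(const GBufferTexel& g) { return g.normal[2]; }
    static float cast_ray(const float*, const float*) { return 0.5f; }

    static void reflect(const float n[3], const float v[3], float out[3]) {
        float d = 2 * (n[0]*v[0] + n[1]*v[1] + n[2]*v[2]);
        for (int i = 0; i < 3; ++i) out[i] = v[i] - d * n[i];
    }

    // Resolve pass: cheap raster shading everywhere, rays only where
    // the material actually requires them.
    static void resolve(const std::vector<GBufferTexel>& gbuf,
                        const float eye[3], std::vector<float>& image) {
        for (std::size_t i = 0; i < gbuf.size(); ++i) {
            const GBufferTexel& g = gbuf[i];
            if (g.material == 0) {
                image[i] = shade_opaque(g);
            } else {
                float view[3] = { g.pos[0]-eye[0], g.pos[1]-eye[1],
                                  g.pos[2]-eye[2] };
                float dir[3];
                reflect(g.normal, view, dir);
                image[i] = cast_ray(g.pos, dir);
            }
        }
    }

    int main() {
        std::vector<GBufferTexel> gbuf(2);
        gbuf[0] = {{0,0,-5},{0,0,1},0};   // opaque texel: raster path
        gbuf[1] = {{1,0,-5},{0,0,1},1};   // mirror texel: ray path
        std::vector<float> image(gbuf.size());
        float eye[3] = {0,0,0};
        resolve(gbuf, eye, image);
        std::printf("opaque=%.2f  mirror=%.2f\n", image[0], image[1]);
    }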
  • by Cafe Alpha ( 891670 ) on Friday January 18, 2008 @04:05PM (#22099168) Journal
    Uhm, tracing from the camera IS how ray tracing works.

    No cookie for whoever gave the bozo above me "insightful".

    Casting from the light source forward (photon tracing) is too expensive and is rarely done, though it does give you more accurate effects - caustics, color bleeding from reflected illumination, etc.

    Some of that can be approximated more cheaply with radiosity algorithms.
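    For anyone wondering what "tracing from the camera" actually looks like, here's a minimal sketch (my own, not from any of the linked articles): one primary ray per pixel, shot from the eye through that pixel's spot on the image plane. Photon tracing would instead start its rays at the light sources.

    #include <cmath>
    #include <cstdio>

    int main() {
        const int width = 4, height = 4;    // tiny, so the output fits
        const float fov = 60.0f * 3.14159265f / 180.0f;
        const float aspect = float(width) / float(height);
        const float scale = std::tan(fov / 2);

        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                // Map pixel centers onto the image plane at z = -1,
                // camera sitting at the origin looking down -z.
                float px = (2 * (x + 0.5f) / width - 1) * aspect * scale;
                float py = (1 - 2 * (y + 0.5f) / height) * scale;
                // A real tracer would normalize this direction and then
                // intersect it against the scene.
                std::printf("pixel (%d,%d): dir = (%.2f, %.2f, -1)\n",
                            x, y, px, py);
            }
        }
    }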
  • by Facetious ( 710885 ) on Friday January 18, 2008 @05:50PM (#22101134) Journal

    It says that a quad core processor gets 16.9 frames at 256x256 resolution.
    Keep reading there, genius. If you had read beyond page one, you would see that they are getting 90 fps at 768x768 on a quad core, or 90 fps at 1280x720 on 8 cores.
  • by Enahs ( 1606 ) on Friday January 18, 2008 @09:36PM (#22103892) Journal
    As someone pointed out in another comment, they're getting much higher framerates than that. Plus, their ultimate goal seems to be OpenRT, an API roughly analogous to OpenGL, with the goal of interfacing with raytracing graphics cards.

    I for one welcome our new raytracing overlords...

Say "twenty-three-skiddoo" to logout.

Working...