NVIDIA Doubts Ray Tracing Is the Future of Games

SizeWise writes "After Intel's prominent work on ray tracing in both the desktop and mobile spaces, many gamers might be thinking that the move to ray-tracing engines is inevitable. NVIDIA's Chief Scientist, Dr. David Kirk, thinks otherwise, as revealed in this interview on rasterization and ray tracing. Kirk counters many of Intel's claims of ray tracing's superiority, such as the inherent benefit to polygon complexity, while pointing out areas where ray-tracing engines would falter, such as basic antialiasing. The interview concludes with discussions on mixing the two rendering technologies and whether NVIDIA hardware can efficiently handle ray-tracing calculations as well."

  • by $RANDOMLUSER ( 804576 ) on Friday March 07, 2008 @01:37PM (#22677400)
    Intel says to do it with the CPU, and nVidia says to do it with the GPU. What a surprise.
  • by 9mm Censor ( 705379 ) on Friday March 07, 2008 @01:46PM (#22677526) Homepage
    What about game and engine devs? Where do they see the future going?
  • by Squapper ( 787068 ) on Friday March 07, 2008 @01:57PM (#22677700)
    I am a game-developing 3D artist, and our biggest problem right now is the inability to render complex per-pixel shaders (particularly on the PS3). But this is only a short-term problem, and what would we do with the ability to create more complex shaders? Fake the appearance of the benefits of ray tracing (more complex scenes and more realistic surfaces), of course!

    On the other hand, the film industry could be a good place to look for the future technology of computer games. And as it is right now, it's preferable to avoid the slowness of raytracing and fake the effects instead, even when making big blockbuster movies.
  • Hardly anything new (Score:4, Interesting)

    by K. S. Kyosuke ( 729550 ) on Friday March 07, 2008 @01:58PM (#22677714)

    For years, the movie studios have been using Pixar's PRMan, which is in many ways a high-quality software equivalent of a modern graphics card. It takes a huge and complex scene, divides it into screen-space buckets of equal size, sorts the geometric primitives in some way, and then (pay attention now!) tessellates and dices the primitives into micropolygons about one pixel each in size (OpenGL fragments, anyone?), shades and deforms them using simple (or not-so-simple) algorithms written in the RenderMan shading language (hey, vertex and pixel shaders!), and then rasterizes them using stochastic algorithms that allow for high-quality motion blur, depth of field and antialiasing. (A rough sketch of this bucketed flow follows this comment.)

    Now the last part is slightly easier for a raytracer, which can cast a ray from any place in the scene - of course, it needs this functionality anyway. But a raytracer also needs random access to the scene, which means that you need to keep the whole scene in memory at all times, along with spatial indexing structures. The REYES algorithm of PRMan needs no such thing (it easily handles 2 GB of scene data on a 512 MB machine, along with gigs of textures), and it allows for coherent memory access patterns and coherent computation in general (there is of course some research into coherency in raytracing, but IIRC the results were never that good). This is a big win for graphics cards, as the bandwidth of graphics card RAM has never been spectacular - it's basically the same case as with the general-purpose CPU of your computer and its main memory. But with the 128 or so execution units of a modern graphics card, the problem is even more apparent.

    Unless the Intel engineers stumbled upon some spectacular breakthrough, I fail to see how raytracing is supposed to have any advantage on a modern graphics card. And if I had to choose between vivid forest imagery with millions of leaves flapping in the wind and reflective balls on a checkered floor, I know what I would choose.
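
A rough, much-simplified sketch of the bucket/dice/shade flow described in the comment above. The scene types, the constant-colour "shader", and the lack of jittered multi-sampling are illustrative assumptions, not PRMan's actual behaviour or API:

```cpp
#include <cstdio>
#include <vector>

struct Color { float r, g, b; };
struct Micropolygon { float x, y; Color c; };            // ~1-pixel quad, collapsed to a point here
struct Primitive { float x0, y0, x1, y1; Color base; };  // axis-aligned patch stand-in

// "Dicing": chop a primitive into micropolygons about one pixel in size.
static std::vector<Micropolygon> dice(const Primitive& p) {
    std::vector<Micropolygon> mps;
    for (float y = p.y0; y < p.y1; y += 1.0f)
        for (float x = p.x0; x < p.x1; x += 1.0f)
            mps.push_back({x, y, p.base});
    return mps;
}

// "Shading": run a (here trivial) per-micropolygon shader -- the analogue of a
// RenderMan surface shader or a GPU fragment shader.
static void shade(Micropolygon& mp) { mp.c.r *= 0.8f; mp.c.g *= 0.8f; mp.c.b *= 0.8f; }

int main() {
    const int W = 64, H = 64, BUCKET = 16;
    std::vector<Color> framebuffer(W * H, Color{0, 0, 0});
    std::vector<Primitive> scene = { {10, 10, 40, 30, {1, 0, 0}},
                                     {20, 25, 55, 60, {0, 0, 1}} };

    // Process one screen-space bucket at a time: only primitives overlapping the
    // current bucket are diced and shaded, so the whole scene never needs to be
    // resident at once and memory access stays coherent.
    for (int by = 0; by < H; by += BUCKET)
        for (int bx = 0; bx < W; bx += BUCKET)
            for (const Primitive& p : scene) {
                if (p.x1 <= bx || p.x0 >= bx + BUCKET ||
                    p.y1 <= by || p.y0 >= by + BUCKET)
                    continue;                              // primitive misses this bucket
                for (Micropolygon mp : dice(p)) {
                    if (mp.x < bx || mp.x >= bx + BUCKET ||
                        mp.y < by || mp.y >= by + BUCKET)
                        continue;                          // keep only this bucket's samples
                    shade(mp);
                    // Real REYES would jitter several samples per pixel here for
                    // motion blur, depth of field and antialiasing.
                    framebuffer[int(mp.y) * W + int(mp.x)] = mp.c;
                }
            }
    std::printf("shaded frame, pixel (32,32) = (%.2f, %.2f, %.2f)\n",
                framebuffer[32 * W + 32].r, framebuffer[32 * W + 32].g,
                framebuffer[32 * W + 32].b);
}
```

The detail to notice is that only primitives touching the current bucket are ever diced and shaded, which is what lets the scene stay out of core and keeps memory traffic coherent.
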

  • by ZenDragon ( 1205104 ) on Friday March 07, 2008 @02:09PM (#22677876)

    You mention "predetermined destruction", which I agree is a rather annoying limitation in almost all modern games. Personally, at this point in time I would rather see a more interactive environment than incredible graphics. What good is a beautifully rendered environment if you can't blow holes in it? I want to see realistic bullet holes with light shining through the wall, or arms falling off when I mutilate some guy with a chainsaw. I want to see water splash when I walk through it, or grass and leaves swaying naturally in the wind. And why can't I shoot the vase off the table for target practice? The damn thing seems to be bulletproof!

    I think they need to be working more on the physics of the environment than on making it all look pretty. Hardware like the PhysX card is a step in the right direction, and I would like to see that trend continue.

  • by Znork ( 31774 ) on Friday March 07, 2008 @02:16PM (#22678010)
    So instead of buying a separate chip for graphics, you get the same performance boost from just getting a second CPU or one with more cores.

    Not only that; you get a performance boost from your server. And from your kids computers. You get a performance boost from your HTPC. And potentially any other computer on your network.

    The highly parallelizable nature is the most interesting aspect of raytracing, IMO; with distributed engines one could do some very cool things. (A sketch of splitting a frame across workers follows this comment.)
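
The per-pixel independence behind that claim is easy to show. Below is a minimal sketch, assuming a made-up shade_pixel() as a stand-in for an actual ray trace, that splits scanlines across local threads; a distributed engine would hand the same ranges to other machines instead:

```cpp
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

// Placeholder for "trace a ray through the scene for pixel (x, y)". A real
// engine would intersect geometry and shade, but the key property is the same:
// no pixel depends on any other pixel.
static float shade_pixel(int x, int y) {
    return float((x * 31 + y * 17) % 256) / 255.0f;
}

int main() {
    const int W = 320, H = 240;
    std::vector<float> image(W * H, 0.0f);

    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w)
        pool.emplace_back([&, w] {
            // Worker w owns scanlines w, w + workers, w + 2*workers, ...
            // The same partitioning could hand ranges to other machines instead.
            for (int y = int(w); y < H; y += int(workers))
                for (int x = 0; x < W; ++x)
                    image[y * W + x] = shade_pixel(x, y);   // no shared writes
        });
    for (std::thread& t : pool) t.join();

    std::printf("rendered %dx%d across %u workers\n", W, H, workers);
}
```
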
  • by DeKO ( 671377 ) <danielosmariNO@SPAMgmail.com> on Friday March 07, 2008 @02:42PM (#22678432)

    Ray tracing means firing a ray from each screen pixel, testing for collisions against the geometry, and figuring out the proper color. RTM means firing a ray from each polygon's "pixel", testing for collisions against the "texture geometry", and figuring out the proper color. It's just ray tracing in a subset of the screen pixels, against a geometry (a heightmap) represented by a texture (or multiple textures) on a polygonal face. Why do you think this is not related to ray tracing? (A minimal heightfield ray-march sketch follows this comment.)
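
To make that parallel concrete, here is a hedged sketch of the heightfield march at the core of relief-style mapping: step a ray through a height "texture" until it drops below the stored surface, then report the hit, which is the same intersect-then-shade idea as a full ray tracer, confined to one surface. All names, the bump function, and the fixed-step search are illustrative assumptions:

```cpp
#include <cmath>
#include <cstdio>

const int TEX = 64;
float heightmap[TEX][TEX];                        // toy height "texture", values in [0, 1]

// Nearest-neighbour texture lookup.
static float sample_height(float u, float v) {
    int x = int(u * (TEX - 1) + 0.5f);
    int y = int(v * (TEX - 1) + 0.5f);
    return heightmap[y][x];
}

// March a tangent-space ray from (u, v, h = 1) in direction (du, dv, dh), with
// dh < 0, through the heightfield; the hit is the first sample where the ray
// has dropped below the stored surface height.
static bool raymarch_heightfield(float u, float v, float du, float dv, float dh,
                                 float* hit_u, float* hit_v) {
    float h = 1.0f;
    for (int step = 0; step < 256; ++step) {      // fixed-step linear search
        u += du; v += dv; h += dh;
        if (u < 0.0f || u > 1.0f || v < 0.0f || v > 1.0f)
            return false;                         // ray left the surface patch
        if (h <= sample_height(u, v)) {
            *hit_u = u; *hit_v = v;               // a real shader would fetch the
            return true;                          // colour/normal at this (u, v)
        }
    }
    return false;
}

int main() {
    for (int y = 0; y < TEX; ++y)                 // fill the "texture" with a bumpy surface
        for (int x = 0; x < TEX; ++x)
            heightmap[y][x] = 0.5f + 0.5f * std::sin(x * 0.3f) * std::cos(y * 0.3f);

    float hu = 0.0f, hv = 0.0f;
    if (raymarch_heightfield(0.1f, 0.1f, 0.01f, 0.005f, -0.02f, &hu, &hv))
        std::printf("view ray hits the heightfield at uv = (%.2f, %.2f)\n", hu, hv);
}
```
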

  • Re:Counterpoint (Score:5, Interesting)

    by Applekid ( 993327 ) on Friday March 07, 2008 @02:46PM (#22678504)
    We're not talking about the current technology, we're talking about the future. As in whether Ray Tracing is the Future of Games.

    Graphics hardware has evolved into huge parallel general-purpose stream processors capable of obscene numbers of floating-point operations per second... yet we're still clinging tenaciously to the old safety blanket of a mesh of tessellated triangles and projecting textures onto them.

    And it makes sense: the industry is really, really good at pushing them around. Sort of like how internal combustion engines are pretty much the only game in town until the alternatives save themselves from the vapor.

    Nvidia, either by being wise or shortsighted, is discounting ray-tracing. ATI is trailing right now so they'd probably do well to hedge their bets on the polar opposite of where Nvidia is going.

    3D modelling starts out in abstractions anyway with deformations and curves and all sorts of things that are relatively easy to describe with pure mathematics. Converting it all to mesh approximations of what was sculpted was, and still is, pretty much just a hack to get things to run at acceptable real-time speeds.
  • Re:Counterpoint (Score:4, Interesting)

    by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Friday March 07, 2008 @04:56PM (#22680638) Journal
    From what I read, I think part of this is nVidia assuming GPUs will stay relevant. If we ever do get to a point where raytracing, done on a CPU, beats out rasterization done on a GPU, then nVidia's business model falls apart, whereas Intel suddenly becomes much more relevant (as their GPUs tend to suck).

    Personally, I would much prefer Intel winning this. It's hit or miss, but Intel often does provide open specs and/or open drivers. nVidia provides drivers that mostly work, except when they don't, and then you're on your own...
  • by Mac Degger ( 576336 ) on Saturday March 08, 2008 @12:23AM (#22684648) Journal
    You should check out the quake3 engine modded to use raytracing. Quite nice, and the computer scientist behind it also has some interesting points about the efficiencies of raytracing (i.e., those problems Portal had with its portal rendering, which you didn't see because they had to hack around them, are not a problem with raytracing).

    So, quake3 runs easily using raytracing, and that was just something to show how fast it can be... it's feasible in the latest games too. So why is raytracing so slow again?

    Oh, you're thinking POVray or some such offline rendering engine.
  • Following up on my previous post about the debate between David Kirk and Philipp Slusallek in 2006 (link [slashdot.org] ... apologies to Dr. Slusallek, Slashdot truncated his name).

    According to Dr. Slusallek's LinkedIn profile [linkedin.com] he's currently a "Visiting Professor at NVIDIA".

    The performance of Dr. Slusallek's real-time raytracing engine at only 90 MHz was quite impressive: "In contrast we have recently implemented a prototype of a custom ray tracing based graphics card using a single Xilinx FPGA chip. The first results show that this really simple hardware running at 90 MHz and containing only a small fraction of the floating point units of a rasterization chip already performs like a 8-12 GHz Pentium 4. In addition, it uses only a tiny fraction of the external memory bandwidth of a rasterization chip (often as low as 100-200 MB/s) and therefore can be scaled simply by using many parallel ray tracing pipelines both on-chip and/or via multiple chips.".

    It seems there may be room for more than one opinion about the future of raytracing and gaming at nVidia.
