Graphics Software

Nvidia's Chief Scientist on the Future of the GPU

teh bigz writes "There's been a lot of talk about integrating the GPU into the CPU, but David Kirk believes that the two will continue to co-exist. Bit-tech got to sit down with Nvidia's Chief Scientist for an interview that discusses the changing roles of CPUs and GPUs, GPU computing (CUDA), Larrabee, and what he thinks about Intel's and AMD's futures. From the article: 'But what would happen if multi-core processors increase core counts further? Does David believe that this would give consumers enough power to deliver what most of them need and, as a result, erode Nvidia's consumer installed base? "No, that's ridiculous — it would be at least a thousand times too slow [for graphics]," he said. "Adding four more cores, for example, is not going anywhere near close to what is required."'"
  • Re:VIA (Score:3, Informative)

    by Retric ( 704075 ) on Wednesday April 30, 2008 @02:38PM (#23253782)
    The real limitation on a CPU/GPU hybrid is memory bandwidth. A GPU is happy with 0.5 to 1 GB of FAST RAM, but a CPU running Vista works best with 4-8 GB of CHEAP RAM and a large L2 cache. Think of it this way: a GPU needs to access every bit of its RAM 60+ times per second, but a CPU tends to work with a small section of a much larger pool of RAM, which is why L2 cache size and speed are so important (there's a rough bandwidth sketch at the end of this comment).

    Now, at the low end there is little need for a GPU, but as soon as you want to start 3D gaming and working with Photoshop on the same system, you are going to want both video RAM and normal RAM.

    PS: This is also why people don't use DDR3 memory for system RAM; it's just not worth the cost for a 1-2% increase over cheap DDR2 RAM.
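    A rough back-of-the-envelope version of the bandwidth argument above. The numbers are illustrative assumptions, not measurements from the comment; it is plain host-side arithmetic that compiles as C++ or as CUDA host code:

        #include <cstdio>

        int main() {
            // Assumed, illustrative numbers -- not from the comment above.
            const double gpu_working_set_gb = 1.0;   // VRAM a GPU re-reads roughly every frame
            const double frames_per_second  = 60.0;  // the 60+ Hz figure from the comment
            const double gpu_bandwidth_gbs  = gpu_working_set_gb * frames_per_second;

            const double cpu_hot_set_mb = 4.0;       // working set that fits in a large L2/L3 cache
            printf("GPU: roughly %.0f GB/s sustained from local video RAM\n", gpu_bandwidth_gbs);
            printf("CPU: a ~%.0f MB hot set mostly hits cache, so cheaper, slower DRAM is fine\n",
                   cpu_hot_set_mb);
            return 0;
        }

    The point is only the order of magnitude: a renderer that re-touches its whole working set every frame needs tens of GB/s from local memory, while a cache-friendly CPU workload mostly doesn't.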
  • Re:SIMD vs. MIMD (Score:3, Informative)

    by hackstraw ( 262471 ) on Wednesday April 30, 2008 @03:00PM (#23254048)
    Nvidia makes SIMD (single instruction, multiple data) multicore processors...

    That is untrue. The Nvidia CUDA environment can do MIMD. I don't know the granularity, or much about it, but you don't have to run in complete SIMD mode; the divergence sketch just below this comment shows what per-thread branching looks like.
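    A minimal sketch of the parent's point, assuming nothing beyond the public CUDA programming model: threads may take different control-flow paths based on their index (Nvidia calls this SIMT). Warps still execute in lockstep underneath and divergent branches within a warp get serialized, so it is not full MIMD hardware, but it is not "complete SIMD mode" at the source level either. The kernel is my own illustration, not code from the post:

        #include <cstdio>

        // Each thread picks its own branch based on its index.
        __global__ void divergent(int *out, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            if (i % 2 == 0) {
                out[i] = i * i;   // even-indexed threads do one computation...
            } else {
                out[i] = -i;      // ...odd-indexed threads do something else entirely
            }
        }

        int main() {
            const int n = 8;
            int *d_out, h_out[n];
            cudaMalloc((void **)&d_out, n * sizeof(int));
            divergent<<<1, n>>>(d_out, n);
            cudaMemcpy(h_out, d_out, n * sizeof(int), cudaMemcpyDeviceToHost);
            for (int i = 0; i < n; ++i) printf("%d ", h_out[i]);  // 0 -1 4 -3 16 -5 36 -7
            printf("\n");
            cudaFree(d_out);
            return 0;
        }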

  • by j1m+5n0w ( 749199 ) on Wednesday April 30, 2008 @03:16PM (#23254362) Homepage Journal

    During the Analyst's Day, Jen-Hsun showed a rendering of an Audi R8 that used a hybrid rasterisation and ray tracing renderer. Jen-Hsun said that it ran at 15 frames per second, which isn't all that far away from being real-time. So I asked David when we're likely to see ray tracing appearing in 3D graphics engines where it can actually be real-time?

    "15 frames per second was with our professional cards I think. That would have been with 16 GPUs and at least that many multi-core CPUs â" that's what that is. Just vaguely extrapolating that into our progress, it'll be some number of years before you'll see that in real-time," explained Kirk. "If you take a 2x generational increase in performance, you're looking at least four or five years for the GPU part to have enough power to render that scene in real-time.

    Modern real-time ray tracers can get respectable performance without doing any sort of GPU-hybrid trickery, or requiring any hardware other than a fast CPU. For instance, try out the Arauna [igad.nhtv.nl] demo. (Dedicated ray-tracing hardware would be nice, but I'm not aware of any hardware implementation that has significantly outperformed a well-optimized CPU ray tracer. With the resources of a major chip manufacturer I don't doubt it could be done, though.) Arauna and OpenRT and the like might still be a little too slow to run a modern game at high resolution, but they're getting there fast.

    "People use ray tracing for real effects as well though. Things like shiny chains and for ambient occlusion (global illumination), which is an offline rendering process that is many thousands of times too slow for real-time," said Kirk. "Using ray tracing to calculate the light going from every surface to every other surface is a process that takes hundreds of hours."

    This is just plain ignorant. Naive, O(n^2) radiosity may take that long, or path tracing with a lot of samples per pixel, but a decent photon mapping algorithm shouldn't be anywhere near that slow to produce rendering quality acceptable for games. "Hundreds of seconds" might be a more plausible number (or less, if you're willing to accept a less accurate approximation). Metropolis Light Transport is another algorithm, but I don't have a good sense of how fast it is. The sketch below shows where the quadratic cost in the naive approach comes from.
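    To put a shape on the O(n^2) point: the "every surface to every other surface" transfer Kirk describes is, in its naive form, an all-pairs gather. The toy CUDA kernel below (my own illustration, with made-up uniform form factors and a tiny patch count) computes a single bounce of it; n threads each loop over n source patches, so the work is quadratic in the number of patches. Photon mapping and hierarchical radiosity exist precisely to avoid paying that full quadratic cost.

        #include <cstdio>

        // One bounce of naive radiosity: every patch gathers light from every other patch.
        __global__ void gather_bounce(const float *emitted, const float *form_factor,
                                      float *received, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per receiving patch
            if (i >= n) return;
            float sum = 0.0f;
            for (int j = 0; j < n; ++j)                      // loop over all n source patches
                sum += form_factor[i * n + j] * emitted[j];  // n threads x n terms = O(n^2) work
            received[i] = sum;
        }

        int main() {
            const int n = 1024;                              // toy scene; real scenes are far larger
            float *F, *e, *r;
            cudaMallocManaged(&F, (size_t)n * n * sizeof(float));
            cudaMallocManaged(&e, n * sizeof(float));
            cudaMallocManaged(&r, n * sizeof(float));
            for (int i = 0; i < n * n; ++i) F[i] = 1.0f / n; // dummy uniform form factors
            for (int i = 0; i < n; ++i) e[i] = 1.0f;         // every patch emits one unit
            gather_bounce<<<(n + 255) / 256, 256>>>(e, F, r, n);
            cudaDeviceSynchronize();
            printf("patch 0 receives %f\n", r[0]);           // ~1.0 with these dummy factors
            return 0;
        }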
