Nvidia's Chief Scientist on the Future of the GPU
teh bigz writes "There's been a lot of talk about integrating the GPU into the CPU, but David Kirk believes that the two will continue to co-exist. Bit-tech got to sit down with Nvidia's Chief Scientist for an interview that discusses the changing roles of CPUs and GPUs, GPU computing (CUDA), Larrabee, and what he thinks about Intel's and AMD's futures. From the article: 'But what would happen if multi-core processors increase their core counts further? Does David believe this would give consumers enough power to deliver what most of them need and, as a result, erode Nvidia's consumer installed base? "No, that's ridiculous — it would be at least a thousand times too slow [for graphics]," he said. "Adding four more cores, for example, is not going anywhere near close to what is required."'"
Re:VIA (Score:3, Informative)
Now at the low end there is little need for a GPU, but as soon as you want to start 3D gaming and working with Photoshop on the same system, you are going to want both video RAM and normal system RAM.
PS: This is also why people don't use DDR3 memory for system RAM; it's just not worth the cost for a 1-2% performance increase over cheap DDR2 RAM.
Re:SIMD vs. MIMD (Score:3, Informative)
That is untrue. The Nvidia CUDA environment can do MIMD. I don't know the granularity, or much about it, but you don't have to run in complete SIMD mode.
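To illustrate: threads in a CUDA kernel can branch independently on their own IDs. Within a 32-thread warp the divergent paths are serialized with inactive threads masked off (Nvidia calls the model SIMT rather than SIMD), but separate warps and blocks run as independent instruction streams. A minimal sketch, with the kernel name and values invented for illustration:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread branches on its own ID, so different threads execute
    // different instructions. Within a warp the two paths are serialized
    // (SIMT); separate warps and blocks proceed independently, so the
    // program as a whole is not lockstep SIMD.
    __global__ void divergent_paths(float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n)
            return;
        if (i % 2 == 0)
            out[i] = i * 2.0f;     // even-numbered threads take this path
        else
            out[i] = 100.0f - i;   // odd-numbered threads take this one
    }

    int main(void)
    {
        const int n = 8;
        float host_out[n];
        float *dev_out;
        cudaMalloc((void **)&dev_out, n * sizeof(float));
        divergent_paths<<<1, n>>>(dev_out, n);
        cudaMemcpy(host_out, dev_out, n * sizeof(float),
                   cudaMemcpyDeviceToHost);
        for (int i = 0; i < n; ++i)
            printf("thread %d -> %g\n", i, host_out[i]);
        cudaFree(dev_out);
        return 0;
    }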
overestimating the cost of ray tracing (Score:3, Informative)
Modern real-time ray tracers can get respectable performance without doing any sort of GPU-hybrid trickery, or requiring any hardware other than a fast CPU. For instance, try out the Arauna [igad.nhtv.nl] demo. (Dedicated ray-tracing hardware would be nice, but I'm not aware of any hardware implementation that has significantly outperformed a well-optimized CPU ray tracer. With the resources of a major chip manufacturer I don't doubt it could be done, though.) Arauna and OpenRT and the like might still be a little too slow to run a modern game at high resolution, but they're getting there fast.
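For a sense of what such a CPU ray tracer spends its time on: the inner loop is dominated by ray/primitive intersection tests. Here is a minimal ray/sphere test in plain C, host code only with no GPU involved (all names are invented for illustration, not taken from Arauna or OpenRT):

    #include <math.h>
    #include <stdio.h>

    typedef struct { double x, y, z; } vec3;

    static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static vec3 sub(vec3 a, vec3 b) { vec3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }

    /* Solve |origin + t*dir - center|^2 = radius^2, a quadratic in t.
     * Returns the distance t to the nearest hit in front of the origin,
     * or -1.0 on a miss. A real tracer runs millions of these per frame,
     * pruned by an acceleration structure such as a BVH or kd-tree. */
    double intersect_sphere(vec3 origin, vec3 dir, vec3 center, double radius)
    {
        vec3 oc = sub(origin, center);
        double a = dot(dir, dir);
        double b = 2.0 * dot(oc, dir);
        double c = dot(oc, oc) - radius * radius;
        double disc = b * b - 4.0 * a * c;
        if (disc < 0.0)
            return -1.0;                    /* ray misses the sphere */
        double t = (-b - sqrt(disc)) / (2.0 * a);
        return (t > 0.0) ? t : -1.0;        /* nearest hit, if in front */
    }

    int main(void)
    {
        vec3 origin = {0, 0, 0}, dir = {0, 0, 1};  /* ray down the z axis */
        vec3 center = {0, 0, 5};
        printf("hit at t = %g\n", intersect_sphere(origin, dir, center, 1.0));
        return 0;
    }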
This is just plain ignorant. Naive, O(n^2) radiosity may take that long, or path tracing with a lot of samples per pixel, but a decent photon mapping algorithm shouldn't be anywhere near that slow to produce rendering quality acceptable for games. Maybe "hundreds of seconds" would be a more plausible number. (Or less, if you're willing to accept a less accurate approximation.) Metropolis Light Transport is another candidate algorithm, but I don't have a good notion of how fast it is.
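For concreteness, the step that makes photon mapping cheap at render time is the radiance estimate: gather the photons near a shading point and divide their summed flux by the area of the gather disc. A hedged sketch of that estimate in plain C (diffuse surface with the BRDF folded into the flux; a real renderer would find the nearest photons through a kd-tree in O(log n) per query rather than the linear scan below, and every name here is invented for illustration):

    #include <stdio.h>

    typedef struct { double x, y, z, flux; } photon;

    /* Density estimate: total flux of photons within radius r of the
     * shading point, divided by the gather-disc area pi * r^2. */
    double radiance_estimate(const photon *map, int n,
                             double px, double py, double pz, double r)
    {
        const double PI = 3.14159265358979323846;
        double total = 0.0;
        for (int i = 0; i < n; ++i) {
            double dx = map[i].x - px, dy = map[i].y - py, dz = map[i].z - pz;
            if (dx*dx + dy*dy + dz*dz <= r*r)   /* photon in gather radius */
                total += map[i].flux;
        }
        return total / (PI * r * r);
    }

    int main(void)
    {
        photon map[3] = { {0, 0, 0, 0.5}, {0.1, 0, 0, 0.25}, {5, 5, 5, 1.0} };
        printf("L = %g\n", radiance_estimate(map, 3, 0, 0, 0, 0.2));
        return 0;
    }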