Graphics Software

Nvidia's Chief Scientist on the Future of the GPU

teh bigz writes "There's been a lot of talk about integrating the GPU into the CPU, but David Kirk believes that the two will continue to co-exist. Bit-tech got to sit down with Nvidia's Chief Scientist for an interview that discusses the changing roles of CPUs and GPUs, GPU computing (CUDA), Larrabee, and what he thinks about Intel's and AMD's futures. From the article: 'What would happen if multi-core processors increase core counts further, though? Does David believe that this will give consumers enough power to deliver what most of them need and, as a result of that, would it erode away at Nvidia's consumer installed base? "No, that's ridiculous — it would be at least a thousand times too slow [for graphics]," he said. "Adding four more cores, for example, is not going anywhere near close to what is required."'"


  • NV on the war path? (Score:4, Interesting)

    by Vigile ( 99919 ) * on Wednesday April 30, 2008 @01:39PM (#23253102)
    Pretty good read; interesting that this guy is talking to press a lot more:

    http://www.pcper.com/article.php?aid=530 [pcper.com]

    Must be part of the "attack Intel" strategy?
  • FOR NOW (Score:3, Interesting)

    by Relic of the Future ( 118669 ) <dales@digi[ ]freaks.org ['tal' in gap]> on Wednesday April 30, 2008 @01:42PM (#23253146)
    There wasn't a horizon given on his predictions. What he said about the important numbers being "1" and "12,000" means consumer CPUs have about, what, 9 to 12 years to go before we get there? At which point it'd be foolish /not/ to have the GPU be part of the CPU. Personally, I think it'll be a bit sooner than that. Not next year, or the year after; but soon.
  • by Anonymous Coward on Wednesday April 30, 2008 @01:51PM (#23253260)
    Right, come back in 5 years when we have multi-core processors with integrated SPE-style cores, a GPU and multiple memory controllers.

    NVidia are putting a brave face on it but they're not fooling anybody.
  • by Cedric Tsui ( 890887 ) on Wednesday April 30, 2008 @01:55PM (#23253298)
    ... core processor? I don't understand the author's logic. Now, suppose it's 2012 or so, multi-core processors have gotten past their initial growing pains, and computers are finally able to use any number of cores, each to their maximum potential, at the same time.

    A logical improvement at this point would be to start specializing cores for specific types of jobs. As the processor hands out work, it would preferentially assign each task to the core best suited to that type of processing.
  • VIA (Score:3, Interesting)

    by StreetStealth ( 980200 ) on Wednesday April 30, 2008 @01:56PM (#23253326) Journal
    The more Nvidia gets sassy with Intel, the closer they seem to inch toward VIA.

    This has been in the back of my mind for a while... Could NV be looking at the integrated roadmap of ATI/AMD and thinking, long term, that perhaps they should consider more than a simple business relationship with VIA?
  • There's the sun reflecting off the cars, there's the cars reflecting off each other, there's me reflecting off the cars. There's the whole parking lot reflecting off the building. Inside, there's this long covered walkway, and the reflections of the cars on one side and the trees on the other and the multiple internal reflections between the two banks of windows are part of what makes reality look real. AND it also tells me that there's someone running down the hall just around the corner inside the building, so I can move out of the way before I see them directly.

    You can't do that without raytracing, you just can't, and if you don't do it it looks fake. You get "shiny effect" windows with scenery painted on them, and that tells you "that's a window" but it doesn't make it look like one. It's like putting stick figures in and saying that's how you model humans.

    And if Professor Slusallek could do that in realtime with a hardwired raytracer... in 2005, I don't see how nVidia's going to do it with even 100,000 GPU cores in a cost-effective fashion. Raytracing is something that hardware does very well, and that's highly parallelizable, but both Intel and nVidia are attacking it in far too brute-force a fashion using the wrong kinds of tools.
  • by nherc ( 530930 ) on Wednesday April 30, 2008 @02:08PM (#23253460) Journal
    Despite what some major 3D game engine creators have to say [slashdot.org], if real-time ray tracing comes sooner rather than later, at about the time an eight-core CPU is common, I think we might be able to do away with the graphics card, especially considering the improved floating-point units going into next-gen cores. Consider Intel's QuakeIV Raytraced running at 99fps at 720P on a dual quad-core Intel rig at IDF 2007 [pcper.com]. That set-up did not use any graphics card processing power, and it scales up and down. So if you think 1280x720 is a decent resolution AND 50fps is fine, you can play this now with a single quad-core processor. Now imagine it with dual octo-cores, which should be available when? Next year? I'd hazard 120fps at 1080P on your (granted) above-average rig doing real-time ray tracing some time next year IF developers went that route, AND still playable resolutions and decent fps with "old" (by then) quad-cores.
  • Re:Very surprising (Score:3, Interesting)

    by AKAImBatman ( 238306 ) <akaimbatman@gmaYEATSil.com minus poet> on Wednesday April 30, 2008 @02:12PM (#23253492) Homepage Journal

    >I would never have expected nVidia's chief scientist to say that nVidia's products would not soon be obsolete.

    Moving to a combined CPU/GPU wouldn't obsolete NVidia's product-line. Quite the opposite, in fact. NVidia would get to become something called a Fabless semiconductor company [wikipedia.org]. Basically, companies like Intel could license the GPU designs from NVidia and integrate them into their own CPU dies. This means that Intel would handle the manufacturing and NVidia would see larger profit margins. NVidia (IIRC) already does this with their 3D chips targeted at ARM chips and cell phones.

    The problem is that the GPU chipset looks nothing like the CPU chipset. The GPU is designed for massive parallelism, while CPUs have traditionally been designed for single-threaded operation. While CPUs are definitely moving in the multithreaded direction and GPUs are moving in the general-purpose direction, it's still too early to combine them. Attempting to do so would get you the worst of both worlds rather than the best. (i.e. Shared Memory Architecture [wikipedia.org] )

    So I don't think NVidia's chief scientist is off on this. (If he were, we'd probably already see GPU integration in the current generation of game consoles, all of which use custom chips.) The time will come, but it's not here yet. :-)
  • by Anonymous Coward on Wednesday April 30, 2008 @02:18PM (#23253562)
    I don't think you understand the difference between GPUs and CPUs. The number of parallel processes that a modern GPU can run is massively more than what a modern multi-core CPU can handle. What you're talking about sounds like just mashing a CPU core and GPU core together on the same die. Which would be horrible for all kinds of reasons (heat, bus bottlenecks and yields!).

    Intel has already figured out that the vast majority of home users have finally caught on that they don't NEED more processing power. Intel knows they have to find some other way to keep people buying more in the future. How many home users need more than a C2D E4500? Will MS Word, a web browser and an email client change so much in the next 3-5 years that they'll demand more horsepower than is available today?

    Then again, you might need 32 CPU cores on a single die if you want to run that AT&T browser ;)

  • Re:VIA (Score:2, Interesting)

    by JoshHeitzman ( 1122379 ) on Wednesday April 30, 2008 @03:50PM (#23255004) Homepage
    I don't see why a hybrid couldn't have two memory controllers included right on the chip; mobos could then have slot(s) for the fast RAM nearest the CPU socket and slots for the slower RAM further away.
  • Re:FOR NOW (Score:3, Interesting)

    by renoX ( 11677 ) on Wednesday April 30, 2008 @05:22PM (#23256554)
    >Why would one even want to have a GPU on the same die as the CPU?

    Think about low-end computers: IMHO, putting the GPU on the same die as the CPU will provide better performance/cost than a GPU embedded in the motherboard.

    And a huge number of computers have integrated video, so this is an important market too.

"No matter where you go, there you are..." -- Buckaroo Banzai

Working...