Nvidia's Chief Scientist on the Future of the GPU
teh bigz writes "There's been a lot of talk about integrating the GPU into the CPU, but David Kirk believes that the two will continue to co-exist. Bit-tech got to sit down with Nvidia's Chief Scientist for an interview that discusses the changing roles of CPUs and GPUs, GPU computing (CUDA), Larrabee, and what he thinks about Intel's and AMD's futures. From the article: 'What would happen if multi-core processors increase core counts further, though? Does David believe that this will give consumers enough power to deliver what most of them need and, as a result, erode Nvidia's consumer installed base? "No, that's ridiculous — it would be at least a thousand times too slow [for graphics]," he said. "Adding four more cores, for example, is not going anywhere near close to what is required."'"
NV on the war path? (Score:4, Interesting)
http://www.pcper.com/article.php?aid=530 [pcper.com]
Must be part of the "attack Intel" strategy?
VIA (Score:3, Interesting)
This has been in the back of my mind for a while... Could NV be looking at the integrated roadmap of ATI/AMD and thinking, long term, that perhaps they should consider more than a simple business relationship with VIA?
Re: (Score:3, Informative)
Now, at the low end, there is little need for a GPU, but as soon as you want to start 3D gaming and working with Photoshop on th
Re: (Score:2, Interesting)
Re: (Score:2)
Re: (Score:2)
And I don't see how they could have bribed themselves to dominance either; it's more likely that they just made the best product, again and again. Tough luck for the companies that didn't have as competent a crew of engineers.
Re: (Score:2)
So going back to your comment about memory mismatch. Some of your cores in a hybrid would have large L2 caches like
Re: (Score:2)
But back then you WANTED "fast mem", as in CPU-specific RAM, because it made the CPU work faster instead =P
But at current memory prices, and if production were moved to faster RAM, I guess it may be possible to just have a bunch of very fast memory and let the GPU have priority over it once again.
Or, as someone else said, have both kinds even though the GPU and CPU are within the same chip (why you would w
Re: (Score:2)
Not to mention, if DDR3 is much faster you do get much faster memory for those $140 more for your CPU (though it's often occupied), and you don't need to transport data between system RAM and graphics memory.
Personally, I don't think it's a good idea to combine them either; I was just reasoning =P
Re: (Score:2)
At some point in the future... (Score:4, Funny)
And it will probably... (Score:2)
Re: (Score:2)
(quad core)
Integrated Central Unit Processor (Score:2)
My inner child needed release. Sorry.
CUPCHICKS (Score:2)
Re: (Score:2)
Very surprising (Score:1)
Re: (Score:3, Interesting)
Moving to a combined CPU/GPU wouldn't obsolete NVidia's product line. Quite the opposite, in fact. NVidia would get to become something called a fabless semiconductor company [wikipedia.org]. Basically, companies like Intel could license GPU designs from NVidia and integrate them into their own CPU dies. This means that Intel would handle the manufacturing and NVidia would see larger profit margins. NVidia (IIR
And as we all knew (Score:3, Insightful)
It doesn't seem likely that one generic item would be better at something than many specific ones. Sure, a CPU+GPU would put it all in one chip, but why would that be better than many chips? Maybe if it had RAM inside as well and that enabled a faster FSB.
Re: (Score:2)
Combined items rarely are. However, they do provide a great deal of convenience as well as cost savings. If the difference between dedicated items and combined items is negligible, then the combined item is a better deal. The problem is, you can't shortcut the economic process by which two items become similar enough to combine.
e.g.
Combining VCR and Cassette Tape Player: Not very effective
Combining DVD Player an
Re: (Score:2)
They may also lose market share because new APIs besides Direct3D may surface to control these new hybrid processors (think CUDA). If Nvidia is not there on the ground floor, then their own API (CUD
Re: (Score:2)
CPU based GPU will not work as good as long as the (Score:2)
Re:CPU based GPU will not work as good as long as (Score:1, Interesting)
NVidia are putting a brave face on it but they're not fooling anybody.
Re:CPU based GPU will not work as good as long as (Score:2, Insightful)
Re: (Score:2)
These kinds of comments scare me; is everyone new, or just not paying attention?
PCI Express x16 has amazing headroom even in the most hardcore gaming today, especially when utilizing SLI/Crossfire configurations.
As for this ONLY BEING for the LOW END? Did you ever read the PCI/AGP/PCI Express specifications?
Just because RAM sharing was ONLY used in low-end onboard GPUs doesn'
Re: (Score:2)
It's still going to be slower than the real thing. Show me how fast Vista runs Crysis on a fast 256MB/512MB card compared to a fast 1GB card at high res with AA on.
And that virtual video RAM seems to mean that if you have 2GB of real RAM, Vista takes 1GB for the O/S and 512MB-1GB for the vidcard, and that leaves you with not much left over for the game.
As long as the O/S is still 32-bit you'll also have the problem of only 4GB of easily addressa
Re: (Score:2)
Of course more VRAM gives games more room and Vista more room; who said it didn't? AA isn't always the best example though, as most implementations use selective AA instead of full-image supersampling, which requires large chunks of RAM.
(PS Crysis isn't a full DX10 game. When a game is DX10-only, then you will see the performance benefits of DX10, fo
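To put a rough number on the RAM point, a back-of-the-envelope sketch (host-side code, assumed 1920x1200 resolution and RGBA8 samples) of what a fully supersampled frame costs before any textures are counted:

```cuda
// Sketch with assumed numbers: a 1920x1200 frame at 4x supersampling needs roughly
// 35 MB just for the colour buffer, plus a similarly sized depth buffer, while
// selective/multisample AA stores far fewer extra samples.
#include <cstdio>

int main()
{
    const long long width = 1920, height = 1200;  // assumed display resolution
    const long long samples = 4;                  // 4x supersampling
    const long long bytes_per_sample = 4;         // RGBA8

    long long color_bytes = width * height * samples * bytes_per_sample;
    printf("4x supersampled colour buffer: %.1f MB\n",
           color_bytes / (1024.0 * 1024.0));      // ~35.2 MB
    return 0;
}
```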
Re: (Score:2)
You only need so much power for 95% of users. And thanks to the introduction of PCIe, most desktop systems come with an x16 expansion port, even if the chipset has integrated graphics. Further, there's a push from ATI and Nvidia to support switching to the IGP and turning off the discrete chip when you're not playing games, which cuts down on power used when you're at the de
Re: (Score:1)
(current high end boards will push an awful lot of pixels. Intel is a generation or two away from single chip solutions that will push an awful lot of pixels. Shiny only needs to progress to the point where it is better than our eyes, and it isn't a factor of 100 away, it is closer to a factor of 20 away, less on smaller screens)
Re:CPU based GPU will not work as good as long as (Score:2)
Re: (Score:2)
I agree that the GPU/CPU will need to be integrated at a lower level than current technologies allow, but not in the near future, as PCI Express 2.0 doesn't even show a benefit yet.
However, don't discount system RAM and VRAM becoming a unified concept. This has already hap
Ugh. (Score:2, Insightful)
David Kirk takes 2 minutes to get ready for work every morning because he can shit, shower and shave at the same time.
Re: (Score:1, Offtopic)
Re: (Score:2)
( Oh, wait, this is
FOR NOW (Score:3, Interesting)
Re: (Score:2)
Re: (Score:2)
Nope. Duke Nukem Forever will be delayed so the engine can maximize the potential of the new combined GPU/CPU tech.
Re: (Score:3, Insightful)
On the other hand, I certainly do see possible disadvantages with it. For one thing, they would presumably be sharing one bus interface in that case, which could lead to less parallelism in the system.
I absolutely love your sig, though. :)
Re: (Score:3, Interesting)
Think about low-end computers: IMHO, putting the GPU on the same die as the CPU will provide better performance/cost than embedding it in the motherboard.
And a huge number of computers have integrated video, so this is an important market too.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Sharing one bus would hamper bandwidth per core (or parallelism, as you've phrased it) - but look at the memory interface designs in minicomputers/mainframes over the past ten years for some guesses on how that will end up. Probably splitting the single bus into many point-to-point links, or at least that is where AMD's money was.
Consider the source (Score:2)
Re: (Score:2)
On the high-end... (Score:2)
Re: (Score:2)
Re: (Score:2)
And later.... (Score:2)
He then quipped, "Go away kid, ya bother me!" [dontquoteme.com]
Summary (Score:2)
Correction (Score:2)
More interested in open drivers (Score:2)
Re: (Score:2)
The only people who run Linux without access to a Windows/OSX box tend to be the ones who are only willing to run/support Open Source/Free software. This is also the group least likely to buy commercial games, even if they were released for Linux.
No games -> no market share for high-end graphics cards with big margins -> the graphics card companies don't care
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
This is also the group least likely to buy commercial games, even if they were released for Linux.
...
No games =>
Ever played Nexuiz? Tremulous? Sauerbraten? Warsow? OpenArena? There are high-quality* free software (non-commercial) games...
(*) Quality is defined as entertaining me. I think contemporary commercial non-free games entertain me about as well, and are slightly prettier while doing it; I haven't heard of any revolutions in game design. However, my play experience of contemporary commercial non-free games is limited to Wii Sports, Twilight Princess and Super Mario Galaxy.
Why wouldn't you have a gpu core in a multiple ... (Score:4, Interesting)
A logical improvement at this point would be to start specializing cores for specific types of jobs. As the processor assigns jobs to particular cores, it would preferentially hand each task to the core best suited for that type of processing.
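For illustration, a minimal sketch (hypothetical, CUDA-flavoured, assuming the data already sits in unified/managed memory) of that kind of dispatch: small or branchy work stays on a latency-optimized CPU core, while large data-parallel work is handed to throughput-oriented GPU-style cores.

```cuda
#include <cuda_runtime.h>

// GPU-style path: one lightweight thread per element.
__global__ void scaleKernel(float *data, int n, float s)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= s;
}

// Dispatcher: route the task to whichever core type suits it best.
// kGpuThreshold is an assumed crossover point, not a measured one.
void scale(float *data, int n, float s)
{
    const int kGpuThreshold = 1 << 16;
    if (n < kGpuThreshold) {
        for (int i = 0; i < n; ++i)          // serial CPU path
            data[i] *= s;
    } else {
        scaleKernel<<<(n + 255) / 256, 256>>>(data, n, s);  // GPU-style path
        cudaDeviceSynchronize();
    }
}
```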
Re:Why wouldn't you have a gpu core in a multiple (Score:1)
Re:Why wouldn't you have a gpu core in a multiple (Score:1, Interesting)
Intel has already figured out that the vast majority of home users have finally caught on that they don't NEED more processing power
Re: (Score:2)
You're right. Perhaps the CPU and the GPU are too different to play nicely on the same die.
A little simpler, then: if CPU processing power does continue to increase exponentially (regardless of need), then one clever way to speed up a processor may be to introduce specialized processing cores. The differences might be small at first. Maybe some cores could be optimized for 64-bit applications while others are still backwards compatible with 32-bit. (No, I have no idea what sort of logis
Re: (Score:2)
I think the real big issue is that there are no killer apps yet (apps so convenient to one's life that they require more processing power).
I think there are a lot of killer apps out there simply waiting for processing power to make its move; the next big move, IMHO, is in AUTOMATING the OS, automating programming, and the creation of AIs that do what people can't.
I've been
Re: (Score:2)
It'll also be the case that development will start to adjust back towards the CPU. Keep in mind, I don't think even one game exists now that is actu
Re: (Score:2)
I think, considering the diminishing returns from adding cores, that adding specialised units on-die would make sense. Look at how good the GPU version of Folding@home is, and think how that kind of specialised processing could be farmed off to a specialised core. Not
Re:Why wouldn't you have a gpu core in a multiple (Score:2)
Re: (Score:2)
I think it's fairly clear that GPUs will stick around until we either have so much processing power and bandwidth we can't figure out what to do with it all, at which point it makes more sense to use the CPU(s) for everything, or until we have three-dimensional reconfigurable logic (optical?) that we can make into big grids of whatever we want. A computer that was just one big giant FPGA with some voltage converters and switches on it would make whatever kind of cores (and buses!) it needed on demand. Since
Re:Why wouldn't you have a gpu core in a multiple (Score:2)
Re: (Score:1)
Pretty much the only way to continue Moore's Law that I can see is via additional cores. If you had 128 cores, you would no longer care about polygons. Polygons = approximations for ray tracing. Nvidia = polygons.
Re: (Score:2, Insightful)
Re:Why wouldn't you have a gpu core in a multiple (Score:2)
I think that by 2012 or 2020 or so, it is far more likely that all code will be compiled to an abstract representation like LLVM, with a JIT engine that continuously analyses your code, refactors it into the longest execution pipeline it can manage, examines each step of that pipeline, and assigns each step to the single-threaded CPU-style or stream-processing GPU-style core that seems most appropriate.
I don't think this will be done at a raw hardware level. I imagine the optimisation process will be far
Re: (Score:2)
The main problem is that the software can't use an arbitrarily high number of cores, not the 'computers'. We could put out 64-core PCs (say, 16 quad-cores), but software just isn't written to take advantage of that level of parallelism.
Qualified... (Score:1)
Yeah... so all you have to do is turn every problem into one that GPUs are good at... lots of parallelism and l
Every time I walk out to my car I see raytracing. (Score:3, Interesting)
You can't do that without raytracing, you just can't, and if you don't do it it looks fake. You get "shiny effect" windows with scenery painted on them, and that tells you "that's a window" but it doesn't make it look like one. It's like putting stick figures in and saying that's how you model humans.
And if Professor Slusallek could do that in realtime with a hardwired raytracer... in 2005, I don't see how nVidia's going to do it with even 100,000 GPU cores in a cost-effective fashion. Raytracing is something that hardware does very well, and that's highly parallelizable, but both Intel and nVidia are attacking it in far too brute-force a fashion using the wrong kinds of tools.
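The parallelism point is easy to see in code: every pixel traces its own primary ray with no dependence on its neighbours. A minimal sketch (hypothetical CUDA kernel, one thread per pixel, a single hard-coded sphere, camera at the origin):

```cuda
// Each thread tests one pixel's primary ray against a unit sphere at (0,0,-3)
// and writes white on a hit, black on a miss. No thread depends on any other,
// which is why ray casting maps so naturally onto massively parallel hardware.
struct Vec3 { float x, y, z; };

__device__ float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

__global__ void primaryRays(unsigned char *image, int width, int height)
{
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= width || py >= height) return;

    Vec3 dir    = { (px - width * 0.5f) / width,
                    (py - height * 0.5f) / height,
                    -1.0f };
    Vec3 center = { 0.0f, 0.0f, -3.0f };

    // Ray-sphere intersection: discriminant of |t*dir - center|^2 = 1.
    float a = dot(dir, dir);
    float b = -2.0f * dot(dir, center);
    float c = dot(center, center) - 1.0f;
    float disc = b * b - 4.0f * a * c;

    image[py * width + px] = (disc >= 0.0f) ? 255 : 0;
}
```

Secondary rays (reflections, shadows) are what the "shiny window" argument is really about; they spawn more rays per pixel, but each of those is still an independent piece of work.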
overestimating the cost of ray tracing (Score:3, Informative)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Kirk vs Slusallek [scarydevil.com]
Re: (Score:2)
What, you're one of these heretics who doesn't realize that we're in an elaborate computer simulation?
Future is set (Score:4, Insightful)
The argument back then was eerily similar to the one proposed by the NV chief, namely that the average user wouldn't "need" a math co-processor. Then along came the spreadsheet, and suddenly that point was moot.
Fast forward to today: if we had a dedicated GPU integrated with the CPU, it would eventually simplify things so that the next "killer app" could make use of a commonly available GPU.
Sorry, NV, but AMD and Intel will be integrating the GPU into the chip, bypassing bus issues and streamlining the timing. I suspect that VIDEO processing will be the next "killer app". YouTube is just a precursor to what will come shortly.
Re: (Score:2)
NVidia already makes good GPUs and tolerable chipsets. They should expand to make CPUs and build their own integrated platform. AMD has already proven there is room in the market for entirely non-Intel platforms.
It's that or wait till the competition puts out cheap, low power integrated equivalents that annihilate NVidia's market share. I think they have the credibilit
Whole System Design (Score:2)
Imagine Microsoft buying Intel, AMD buying Red Hat, NVidia using Ubuntu (or whatever), IBM launching OS/3 on POWER chips, and Apple.
If the document formats are set (ISO), then why not?
There will be those few that continue to mod their cars, but for the most part, things will be mostly sealed and only a qualified me
Re: (Score:2)
Re: (Score:2)
Math co-processors did not have the massive bandwidth requirements that modern GPUs need in order to pump out frames. Everyone in this discussion who sees the merging of CPU and GPU coming hasn't been around long enough; I remember many times back in the '80s and '90s the same people predicting the 'end of the graphics card' it
Re: (Score:2)
The bus between the CPU cores, memory, GPU, and whatnot could ultimately be tuned in ways you might not be able to match with a standardized bus (PCIe, AGP, etc.).
And in fact, the old CPU
Re: (Score:2)
And integrated graphics still suck; the fact is, no one wants to subsidize integrating high-end GPU chips onto motherboards to standardize the platform because of the costs assoc
Realtime Ray Tracing and Multicore CPU's (Score:5, Interesting)
Re: (Score:2)
I think the best thing about heading in this direction is that "accelerated" graphics are no longer limited by your OS--assuming your OS supports the full instruction set of the CPU. No more whining that Mac Minis have crappy graphics cards, no more whining that Linux has crappy GPU driver support....
The downside i
Re: (Score:2)
Most people would spend ~$100+ to upgrade a CPU for small increases when their mobo is stuck with PCI or AGP? Just spend ~$150-200 for a new CPU/RAM/mobo and upgrade the video card later. (I've been upgrading people to the AMD 690V chipset mobo, and it has given them a large enough increase that they didn't need the new card
Re: (Score:2)
Re: (Score:2)
NVIDIA already sells cards with >1GB of memory. Try rendering a scene at 60FPS when you have 1GB of textures and geometry data.
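The arithmetic behind that is worth spelling out. A rough sketch (host-side, assumed figures) of why that working set has to live in local VRAM rather than be streamed over the bus each frame:

```cuda
// Back-of-the-envelope with assumed numbers: touching 1 GB of texture/geometry
// data per frame at 60 FPS would require ~60 GB/s of transfer, far beyond the
// ~8 GB/s peak of a PCIe 2.0 x16 link, so the data has to sit in local VRAM.
#include <cstdio>

int main()
{
    const double frame_data_gb = 1.0;    // per-frame working set (assumption)
    const double fps           = 60.0;
    const double pcie2_x16_gbs = 8.0;    // approximate peak, PCIe 2.0 x16

    printf("Needed: %.0f GB/s, bus provides ~%.0f GB/s\n",
           frame_data_gb * fps, pcie2_x16_gbs);
    return 0;
}
```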
How bout this (Score:2)
Problem solved.
Of course Nvidia will need to come up with a CPU.
Cheers
Re: (Score:2)
Just means they'd have to forgo the Windows market...
Algorithms for graphics don't need Pentium cores (Score:1)
And this is what happens: current GPUs can run 512 threads in parallel. Suppose you have 8 cores with Hyper-Threading; you could run, squeezing everything, 16 threads tops. And there isn't any 8-core chip for sale yet, is there?
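For a sense of scale, a minimal CUDA sketch (hypothetical SAXPY kernel) where a single launch spawns thousands of lightweight GPU threads, versus the roughly 16 hardware threads an 8-core Hyper-Threaded CPU can keep in flight:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread per array element: y[i] = a * x[i] + y[i].
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                       // 1M elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // 4096 blocks of 256 threads; the hardware keeps hundreds of threads
    // resident per multiprocessor to hide memory latency.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);                 // expect 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```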
SIMD vs. MIMD (Score:2)
Re: (Score:3, Informative)
That is untrue. The Nvidia CUDA environment can do MIMD. I don't know the granularity, or much about it, but you don't have to run in complete SIMD mode.
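To make that concrete: threads in the same CUDA launch can take different branches, and divergence is handled per 32-thread warp (diverging paths within a warp get serialized), so it isn't pure SIMD, though heavy divergence does cost performance. A minimal sketch (hypothetical kernel):

```cuda
// Threads in the same launch follow different code paths depending on their index.
__global__ void divergent(int *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (i % 2 == 0)
        out[i] = i * i;   // even-indexed threads take this path
    else
        out[i] = -i;      // odd-indexed threads take this path
}
```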
Re: (Score:1)
The whole "OMG let's integrate everything!" routine is old. It is quickly followed by the realization (due to programmers getting frustrated with stupid quirks/implementation requirements, and hitting the always annoying performance wall) that things work better when they're designed for a specific purpose, and then we work to separate them out again, creating new buses and slots and form factors and power connectors.
A while l
Re: (Score:3, Insightful)
Instead of 4 CPU cores on a quad-core chip, why not put 2xCPU cores and 2xGPU cores?
Because now they have to make [number of CPU options] x [number of GPU options] variants rather than [number of CPU options] + [number of GPU options].
Even taking a small subset of the market:
8600GT, 8800GT, 8800GTS, 6600, 6700, 6800
Six products sit on shelves. Users buy what they want. As a competitor to, say, the 8600GT comes out, Best Buy has to discount one product line.
To give users the same choices as an integrated solution, that'd be 9 variants:
8600GT/6600 - Budget
8600GT/6700 - Typical desktop user
860
Re: (Score:2)
Re: (Score:2)
Yes and no. (Score:2)
And in fact, Linux is a much better environment for developing CUDA (the ability to set up a headless server, and numerous ways to interact with said server).
That's what I'm doing at work currently.
*BUT*
No, there are still no decent open source drivers for nVidia yet. Thanks to the lack of collaboration from nVidia, Project Nouveau has to go through the difficulties of reverse engineerin
Re: (Score:2)
Didn't you just re-invent the Itanium? You know, more or less.