Graphics Software

Nvidia's Chief Scientist on the Future of the GPU 143

teh bigz writes "There's been a lot of talk about integrating the GPU into the CPU, but David Kirk believes that the two will continue to co-exist. Bit-tech got to sit down with Nvidia's Chief Scientist for an interview that discusses the changing roles of CPUs and GPUs, GPU computing (CUDA), Larrabee, and what he thinks about Intel's and AMD's futures. From the article: 'What would happen if multi-core processors increase core counts further, though? Does David believe that this would give consumers enough power to deliver what most of them need and, as a result, erode Nvidia's consumer installed base? "No, that's ridiculous — it would be at least a thousand times too slow [for graphics]," he said. "Adding four more cores, for example, is not going anywhere near close to what is required."'"
This discussion has been archived. No new comments can be posted.

  • NV on the war path? (Score:4, Interesting)

    by Vigile ( 99919 ) * on Wednesday April 30, 2008 @12:39PM (#23253102)
    Pretty good read; interesting that this guy is talking to press a lot more:

    http://www.pcper.com/article.php?aid=530 [pcper.com]

    Must be part of the "attack Intel" strategy?
    • VIA (Score:3, Interesting)

      The more Nvidia gets sassy with Intel, the closer they seem to inch toward VIA.

      This has been in the back of my mind for awhile... Could NV be looking at the integrated roadmap of ATI/AMD and thinking, long term, that perhaps they should consider more than a simple business relationship with VIA?
      • Re: (Score:3, Informative)

        by Retric ( 704075 )
        The real limitation on a CPU/GPU hybrid is memory bandwidth. A GPU is happy with 0.5 to 1 GB of FAST RAM, but a CPU running Vista works best with 4-8 GB of CHEAP RAM and a large L2 cache. Think of it this way: a GPU needs to access every bit of its RAM 60+ times per second, but a CPU tends to work with a small section of a much larger pool of RAM, which is why L2 cache size/speed is so important.

        Now, at the low end there is little need for a GPU, but as soon as you want to start 3D gaming and working with Photoshop on th
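
        A minimal CUDA-flavoured sketch of the access-pattern split described above (buffer sizes and the kernel itself are illustrative assumptions, not anything from the article): the GPU pass streams over the whole buffer once per frame, so raw bandwidth is what matters, while the CPU loop keeps revisiting a working set small enough to live in L2, so cache size and latency are what matter.

        #include <cstdio>
        #include <cuda_runtime.h>

        // GPU side: touch every element of a large buffer once per frame.
        // Throughput here is limited almost entirely by memory bandwidth.
        __global__ void shade(float* framebuffer, size_t n) {
            size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
            if (i < n)
                framebuffer[i] = framebuffer[i] * 0.9f + 0.1f;  // trivial per-texel work
        }

        // CPU side: repeatedly revisit a small working set that fits in L2 cache.
        // Throughput here is dominated by cache size and latency, not raw bandwidth.
        void simulate(float* state, size_t workingSet, int steps) {
            for (int s = 0; s < steps; ++s)
                for (size_t i = 0; i < workingSet; ++i)
                    state[i] += state[(i * 31) % workingSet] * 0.5f;
        }

        int main() {
            const size_t n   = 8 * 1024 * 1024;  // a slice of a VRAM-sized stream (32 MB of floats)
            const size_t hot = 64 * 1024;        // a CPU working set (~256 KB, fits in L2)

            float* d_fb;
            cudaMalloc(&d_fb, n * sizeof(float));
            cudaMemset(d_fb, 0, n * sizeof(float));
            shade<<<(unsigned)((n + 255) / 256), 256>>>(d_fb, n);  // one "frame"
            cudaDeviceSynchronize();
            cudaFree(d_fb);

            float* state = new float[hot]();
            simulate(state, hot, 1000);
            printf("done: %f\n", state[0]);
            delete[] state;
            return 0;
        }
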
        • Re: (Score:2, Interesting)

          I don't see why a hybrid couldn't have two memory controllers included right on the chip; mobos could then have slot(s) for the fast RAM nearest the CPU socket and slots for the slower RAM further away.
          • by Khyber ( 864651 )
            It's called R&D costs. They know they can do it, but right now they're too busy milking the current cash cow to spend money on any decent R&D advances. I'm willing to bet they (nVidia) could have had SLI out the year after buying 3Dfx, but they were too busy working off the money and paying employees and bribes to be able to absolutely dominate the video industry and put ATi and any other company that made video cards (Matrox, Trident, etc.) out of business.
            • by aliquis ( 678370 )
              Except ATI isn't out of business...

              And I don't see how they could bribe themselves to dominance either; it's more likely that they just made the best product, again and again. Tough luck for the companies that didn't have as competent a crew of engineers.
        • The simple answer is that you use a memory hierarchy, same as people do now. The L2 cache on a CPU is large enough to contain the working set for most problems. The working set for GPU-type problems tends to be accessed differently: you need some sort of caching for data, but a lot of the memory you access will be a really large, pretty sequential stream. The memory locking in CUDA reflects this.

          So, going back to your comment about the memory mismatch: some of your cores in a hybrid would have large L2 caches like
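
          A sketch of what that hierarchy looks like from the programmer's side, assuming the "memory locking" mentioned above refers to CUDA's explicitly managed shared memory (an assumption on my part): each block stages a tile of a large, mostly sequential stream into on-chip shared memory, synchronises, and works out of that tile rather than going back to DRAM.

          #include <cuda_runtime.h>
          #include <cstdio>

          // Each block stages a tile of a long, mostly sequential stream into on-chip
          // shared memory (a small, explicitly managed cache), then filters it locally.
          __global__ void blur1d(const float* in, float* out, int n) {
              __shared__ float tile[256 + 2];                 // 256 elements plus a halo on each side
              int g = blockIdx.x * blockDim.x + threadIdx.x;  // global index into the stream
              int l = threadIdx.x + 1;                        // local index inside the tile

              if (g < n) tile[l] = in[g];
              if (threadIdx.x == 0)              tile[0]     = (g > 0)     ? in[g - 1] : 0.0f;
              if (threadIdx.x == blockDim.x - 1) tile[l + 1] = (g + 1 < n) ? in[g + 1] : 0.0f;
              __syncthreads();                                // wait until the whole tile is loaded

              if (g < n)
                  out[g] = 0.25f * tile[l - 1] + 0.5f * tile[l] + 0.25f * tile[l + 1];
          }

          int main() {
              const int n = 1 << 20;
              float *d_in, *d_out;
              cudaMalloc(&d_in,  n * sizeof(float));
              cudaMalloc(&d_out, n * sizeof(float));
              cudaMemset(d_in, 0, n * sizeof(float));
              blur1d<<<n / 256, 256>>>(d_in, d_out, n);
              cudaDeviceSynchronize();
              printf("launch status: %s\n", cudaGetErrorString(cudaGetLastError()));
              cudaFree(d_in);
              cudaFree(d_out);
              return 0;
          }
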
        • by aliquis ( 678370 )
          On the Amiga this was somewhat solved by giving the graphics chips prioritized access to the chip RAM.

          But back then you WANTED "fast mem", as in CPU-specific RAM, because it made the CPU work faster instead =P

          But at current memory prices, and if production moved to faster RAM, I guess it may be possible to just have a bunch of very fast memory and let the GPU have priority over it once again.

          Or, as someone else said, have both kinds even though both GPUs and CPUs are within the same chip (why you would w
    • by pato101 ( 851725 )

      interesting that this guy is talking to press a lot more:
      A couple of weeks ago I attended a talk he gave in Barcelona (Spain). He is a nice speaker. He seems to be on a round of conferences at universities around the world showing off the CUDA technology. By the way, CUDA seems to be an interesting thing.
  • by Anonymous Coward on Wednesday April 30, 2008 @12:40PM (#23253112)
    Everything will be integrated into one chip, and we will call it the PU.
  • I would never have expected nVidia's chief scientist to say that nVidia's products would not soon be obsolete.
    • Re: (Score:3, Interesting)

      by AKAImBatman ( 238306 )

      I would never have expected nVidia's chief scientist to say that nVidia's products would not soon be obsolete.

      Moving to a combined CPU/GPU wouldn't obsolete NVidia's product-line. Quite the opposite, in fact. NVidia would get to become something called a Fabless semiconductor company [wikipedia.org]. Basically, companies like Intel could license the GPU designs from NVidia and integrate them into their own CPU dies. This means that Intel would handle the manufacturing and NVidia would see larger profit margins. NVidia (IIR

      • And as we all knew (Score:3, Insightful)

        by aliquis ( 678370 )
        Only Amiga made it possible! (Thanks to custom chips, not in spite of them.)

        It doesn't seem likely that one generic item would be better at something than many specific ones. Sure, CPU+GPU would just be all in one chip, but why would that be better than many chips? Maybe if it had RAM inside as well and that enabled a faster FSB.
        • It doesn't seem likely that one generic item would be better at something than many specific ones.

          Combined items rarely are. However, they do provide a great deal of convenience as well as cost savings. If the difference between dedicated items and combined items is negligible, then the combined item is a better deal. The problem is, you can't shortcut the economic process by which two items become similar enough to combine.

          e.g.
          Combining VCR and Cassette Tape Player: Not very effective
          Combining DVD Player an

    • by Hatta ( 162192 )
      I would. After all, why buy nVidia's next product if their current product isn't obsolete?
  • A CPU-based GPU will not work as well as long as it has to use the main system RAM, and heat will also limit its power. NVIDIA should start working on an HTX video card, so you can have the video card on the CPU bus but still on a card, so you can put RAM and big heat sinks on it.
    • by Anonymous Coward
      Right, come back in 5 years when we have multi-core processors with integrated SPE-style cores, a GPU and multiple memory controllers.

      NVidia are putting a brave face on it but they're not fooling anybody.
    • Truthfully, the only real application for the GPU/CPU hybrid would be in laptop use, where they can get away with using lower-end GPU chips.
      • Truthfully, the only real application for the GPU/CPU hybrid would be in laptop use, where they can get away with using lower-end GPU chips.

        These kinds of comments scare me. Is everyone new, or just not paying attention?

        PCI Express x16 has plenty of headroom even for the most hardcore gaming today, especially when utilizing SLI/Crossfire configurations.

        As for this ONLY BEING for LOW END? Did you ever read the PCI/AGP/PCI Express specifications?

        Just because RAM sharing was ONLY used in low-end on-board GPUs doesn'
        • by TheLink ( 130905 )
          "Vista also intelligently manages and virtualizes VRAM to system RAM"

          It's still going to be slower than the real thing. Show me how fast Vista runs Crysis on a fast 256MB/512MB card compared to a fast 1GB card at high res with AA on.

          And that virtual video RAM seems to mean that if you have 2GB of real RAM, Vista takes 1GB for the O/S, and 512-1GB for the vidcard and that leaves you with nothing much left over for the game.

          As long as the O/S is still 32bit you'll also have the problem of only 4GB of easily addressa
          • It's still going to be slower than the real thing. Show me how fast Vista runs Crysis on a fast 256MB/512MB card compared to a fast 1GB card at high res with AA on.

            Of course more VRAM gives games more room and Vista more room, who said it didn't? AA isn't always the best example though, as most implementations use selective AA, instead of full image sub rendering that requires large chunks of RAM.

            (PS Crysis isn't a full DX10 game. When the game says DX10 only, then you will see the performance benefits of DX10, fo
      • Truthfully, the only real application for the GPU/CPU hybrid would be in laptop use, where they can get away with using lower-end GPU chips.

        You only need so much power for 95% of users. And thanks to the introduction of PCIe, most desktop systems come with an x16 expansion port, even if the chipset has integrated graphics. Further, there's a push from ATI and Nvidia to support switching to the IGP and turning off the discrete chip when you're not playing games, which cuts down on power used when you're at the de
    • by maxume ( 22995 )
      How many more pixels do you think you need? I'm glad they are looking ahead to the point when graphics is sitting on chip.

      (current high end boards will push an awful lot of pixels. Intel is a generation or two away from single chip solutions that will push an awful lot of pixels. Shiny only needs to progress to the point where it is better than our eyes, and it isn't a factor of 100 away, it is closer to a factor of 20 away, less on smaller screens)
    • If you have eight or X cores, couldn't one or two (or X-1) be dedicated to running Mesa (or a newer, better software GL implementation)? IIRC, SGI's Linux/NT workstation 350s had their graphics tied into system RAM (which you could dedicate huge amounts of RAM to), and they worked fine.
    • A CPU-based GPU will not work as well as long as it has to use the main system RAM, and heat will also limit its power. NVIDIA should start working on an HTX video card, so you can have the video card on the CPU bus but still on a card, so you can put RAM and big heat sinks on it.

      I agree that the GPU/CPU will need to be integrated at a lower level than current technologies, but not in the near future, as PCI Express 2.0 doesn't even show a benefit yet.

      However, don't discount system RAM and VRAM becoming a unified concept. This has already hap
  • Ugh. (Score:2, Insightful)

    by Anonymous Coward
    From TFA> The ability to do one thing really quickly doesn't help you that much when you have a lot of things, but the ability to do a lot of things doesn't help that much when you have just one thing to do. However, if you modify the CPU so that it's doing multiple things, then when you're only doing one thing it's not going to be any faster.

    David Kirk takes 2 minutes to get ready for work every morning because he can shit, shower and shave at the same time.
  • FOR NOW (Score:3, Interesting)

    by Relic of the Future ( 118669 ) <{gro.skaerflatigid} {ta} {selad}> on Wednesday April 30, 2008 @12:42PM (#23253146)
    There wasn't a horizon given on his predictions. What he said about the important numbers being "1" and "12,000" means consumer CPUs have about, what, 9 to 12 years to go before we get there? At which point it'd be foolish /not/ to have the GPU be part of the CPU. Personally, I think it'll be a bit sooner than that. Not next year, or the year after; but soon.
    • Personally, I think it'll be a bit sooner than that. Not next year, or the year after; but soon.
      You mean it'll coincide with the release of Duke Nukem Forever?
      • You mean it'll coincide with the release of Duke Nukem Forever?

        Nope. Duke Nukem Forever will be delayed so the engine can maximize the potential of the new combined GPU/CPU tech.

    • Re: (Score:3, Insightful)

      by Dolda2000 ( 759023 )
      Why would one even want to have a GPU on the same die as the CPU? Maybe I'm just being dense here, but I don't see the advantage.

      On the other hand, I certainly do see possible disadvantages with it. For one thing, they would reasonably be sharing one bus interface in that case, which would lead to possibly less parallelism in the system.

      I absolutely love your sig, though. :)

      • Re: (Score:3, Interesting)

        by renoX ( 11677 )
        >Why would one even want to have a GPU on the same die as the CPU?

        Think about low-end computers: IMHO, putting the GPU on the same die as the CPU will provide better performance/cost than embedding it in the motherboard.

        And a huge number of computers have integrated video so this is an important market too.
        • Think about low-end computers: IMHO, putting the GPU on the same die as the CPU will provide better performance/cost than embedding it in the motherboard.
          Oh? I thought I always heard about this CPU/GPU combo chip in the context of high-performance graphics, but I may just have mistaken the intent, then. If it's about economics, I can understand it. Thanks for the explanation!
        • The GPU doesn't care about CPU cache, the CPU doesn't care about VRAM. You'll create a heat problem and need an extended memory bus to access video memory. Graphics without dedicated VRAM causes a huge CPU performance hit due to rapid and repeated north bridge/memory bus access.
      • It can do vast amounts of linear algebra really quickly. That makes it useful for a lot of applications if you decrease the latency between the processor and the vector pipelines.

        Sharing one bus would hamper bandwidth per core (or parallelism, as you've phrased it) - but look at the memory interface designs in mini-computers/mainframes over the past ten years for some guesses on how that will end up. Probably splitting the single bus into many point-to-point links, or at least that is where AMD's money was.
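
        For a concrete sense of the kind of work being described, here is a minimal CUDA SAXPY (y = a*x + y), the textbook example of the data-parallel linear algebra a GPU is built for; the explicit host-to-device copies are exactly the latency/bandwidth step that shrinks if the vector pipelines move closer to the CPU. Sizes and values are just placeholders.

        #include <cuda_runtime.h>
        #include <vector>
        #include <cstdio>

        // SAXPY: about as simple as linear algebra gets, and exactly the shape of
        // work that maps onto a GPU's wide vector pipelines.
        __global__ void saxpy(int n, float a, const float* x, float* y) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) y[i] = a * x[i] + y[i];
        }

        int main() {
            const int n = 1 << 20;
            std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

            float *dx, *dy;
            cudaMalloc(&dx, n * sizeof(float));
            cudaMalloc(&dy, n * sizeof(float));
            // These copies over the expansion bus are the latency/bandwidth cost the
            // parent comment is talking about.
            cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
            cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

            saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);

            cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
            printf("y[0] = %f (expect 5.0)\n", hy[0]);   // 3*1 + 2
            cudaFree(dx);
            cudaFree(dy);
            return 0;
        }
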
  • Graphics card man says that the CPU's not a threat to his business. I'm shocked!
  • ..there are discrete chips, but on the low end there are already integrated chipsets, and I think the future is heading towards systems on a chip. A basic desktop with hardware HD decoding and 3D enough to run Aero (but not games) can be made in one package by Intel.
    • Aero takes more graphics support than some games. Even some new games if you look at some smaller niche titles.
  • "No, that's ridiculous -- it would be at least a thousand times too slow [for graphics]," he said. "Adding four more cores, for example, is not going anywhere near close to what is required."

    He then quipped, "Go away kid, ya bother me!" [dontquoteme.com]

  • "If the market wants to move away from GPU integration, let it, but we're not going to help it along..."
  • So I am going with whichever manufacturer has the best drivers for my platform of choice, Linux. So if the future doesn't hold this for Nvidia, it doesn't really interest me.
    • And if your platform of choice doesn't hold much future/value for Nvidia, you will continue to not really interest them.

      The only people who run Linux without access to a Windows/OSX box tend to be the ones who are only willing to run/support Open Source/Free software. This is also the group least likely to buy commercial games, even if they were released for Linux.

      No games -> No market share for high end graphics cards with big margin -> The graphics cards companies don't care
      • by pembo13 ( 770295 )
        Does Nvidia make commercial games? I thought they made hardware. I can't (yet) download hardware for free.
        • He never implied they did. He was saying, though, that NVidia doesn't care about everyday productivity users; they care about gamers, since gamers are the ones spending $500 on top video cards. Since games are typically Windows-exclusive (aside from less-than-perfect emulation), gamers tend to be Windows users. Thus, Linux is not their market and they don't care.
      • This is also the group least likely to buy commercial games, even if they were released for Linux.

        No games => ...

        Ever played Nexuiz? Tremulous? Sauerbraten? Warsow? OpenArena? There are high-quality* free software (non-commercial) games...

        (*) Quality is defined as entertaining me. I think contemporary commercial non-free games entertain me about as well, and are slightly prettier while doing it; I haven't heard of any revolutions in game design. However, my play experience of contemporary commercial non-free games is limited to Wii Sports, Twilight Princess and Super Mario Galaxy.

  • by Cedric Tsui ( 890887 ) on Wednesday April 30, 2008 @12:55PM (#23253298)
    ... core processor? I don't understand the author's logic. Now, suppose it's 2012 or so and multiple core processors have gotten past their initial growing pains and computers are finally able to use any number of cores each to their maximum potential at the same time.

    A logical improvement at this point would be to start specializing cores to specific types of jobs. As the processor assigns jobs to particular cores, it would preferentially assign tasks to the cores best suited for that type of processing.
    • Because if processing power goes up way past what you generally need for even heavy apps, Nvidia still wants you to believe that you need a separate graphics card. If that model were to change at some point, it would be death for graphics card manufacturers. Of course, they could very well be right. What the hell do I know :P
    • by Anonymous Coward
      I don't think you understand the difference between GPUs and CPUs. The number of parallel processes that a modern GPU can run is massively more than what a modern multi-core CPU can handle. What you're talking about sounds like just mashing a CPU core and GPU core together on the same die. Which would be horrible for all kinds of reasons (heat, bus bottlenecks and yields!).

      Intel has already figured out that the vast majority of home users have finally caught on that they don't NEED more processing power
      • Hmmm. That's interesting.
        You're right. Perhaps the CPU and the GPU are too different to play nicely on the same die.

        A little simpler then. If CPU processing power does continue to increase exponentially (regardless of need) then one clever way to speed up a processor may be to introduce specialized processing cores. The differences might be small at first. Maybe some cores could be optimized for 64bit applications while others are still backwards compatible with 32bit. (No. I have no idea what sort of logis
      • "Intel has already figured out that for the vast majority of home users have finally caught on that they don't NEED more processing power."

        I think the real big issue is that there are no killer apps yet (apps so convenient to one's life that they require more processing power).

        I think there are a lot of killer apps out there simply waiting for processing power to make its move, the next big move IMHO is in AUTOMATING the OS, automating programming, and the creation of AI's that do what people can't.

        I've been
    • Well, yeah, for sure. But I see that as only the first step. It's like the math-coprocessor step. My 32-core CPU has six graphics cores, four math cores, two HD video cores, an audio core, 3 physics, ten AI, and 6 general cores. But even that only lasts long enough to reach the point where mass-production benefits exceed the specialized-production benefits.

      It'll also be the case that development will start to adjust back towards the CPU. Keep in mind, I don't think even one game exists now that is actu
      • Supreme Commander is a game that requires 2 cores (well, OK, you can drop the frame rate, polygon levels and other fidelity settings, of course; nobody would ever release a game that couldn't be played on a single-core machine - not yet, at least).

        I think, considering the diminishing returns from adding cores, that adding specialised units on die would make sense. Look at how good a GPU version of Folding@home is, and think how that kind of specialised processing could be farmed off to a specialised core. Not
    • I think the interviewer wasn't asking the right questions. His answer was for why you can't replace a GPU with an N-core CPU, not why you wouldn't put a GPU on the same die with your CPUs. I think his answers in general imply that it's more likely that people will want GPU cores that aren't attached to graphics output at all in the future, in addition to the usual hardware that connects to a monitor. I wouldn't be surprised if it became common to have a processor chip with 4 CPU cores and 2 GPU cores, and a
      • I think it's fairly clear that GPUs will stick around until we either have so much processing power and bandwidth we can't figure out what to do with it all, at which point it makes more sense to use the CPU(s) for everything, or until we have three-dimensional reconfigurable logic (optical?) that we can make into big grids of whatever we want. A computer that was just one big giant FPGA with some voltage converters and switches on it would make whatever kind of cores (and buses!) it needed on demand. Since

    • I was under the impression that optimal bus design used to be different, but that was sort of going away with the move to multi-core designs.
    • by svnt ( 697929 )
      The five year window might not be in the cards, but I've got two words for you: ray tracing.

      Pretty much the only way to continue Moore's Law that I can see is via additional cores. If you had 128 cores, you would no longer care about polygons. Polygons = approximations for ray tracing. Nvidia = polygons.
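
      To make the "no more polygons" claim concrete, here is a toy ray tracer in CUDA: one thread per pixel, each ray tested against a single hard-coded sphere. Nothing here is from the article; it is only meant to show why the workload scales almost linearly with however many cores you throw at it.

      #include <cuda_runtime.h>
      #include <cstdio>
      #include <cmath>

      // One ray per pixel, one thread per ray: the reason ray tracing is so often
      // cited as the workload that scales with core count.
      __global__ void trace(unsigned char* img, int w, int h) {
          int x = blockIdx.x * blockDim.x + threadIdx.x;
          int y = blockIdx.y * blockDim.y + threadIdx.y;
          if (x >= w || y >= h) return;

          // Camera at the origin looking down -z; pixel mapped onto a [-1,1] film plane.
          float dx = 2.0f * x / w - 1.0f, dy = 2.0f * y / h - 1.0f, dz = -1.0f;
          float len = sqrtf(dx*dx + dy*dy + dz*dz);
          dx /= len; dy /= len; dz /= len;

          // Sphere at (0,0,-3), radius 1: intersect by solving |o + t*d - c|^2 = r^2.
          float cx = 0.0f, cy = 0.0f, cz = -3.0f, r = 1.0f;
          float ox = -cx, oy = -cy, oz = -cz;        // ray origin minus sphere center
          float b = ox*dx + oy*dy + oz*dz;
          float c = ox*ox + oy*oy + oz*oz - r*r;
          float disc = b*b - c;

          img[y * w + x] = (disc > 0.0f) ? 255 : 32; // hit: bright, miss: dark
      }

      int main() {
          const int w = 640, h = 480;
          unsigned char* d_img;
          cudaMalloc(&d_img, w * h);
          dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
          trace<<<grid, block>>>(d_img, w, h);
          cudaDeviceSynchronize();
          printf("traced %dx%d rays: %s\n", w, h, cudaGetErrorString(cudaGetLastError()));
          cudaFree(d_img);
          return 0;
      }
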
    • Re: (Score:2, Insightful)

      by Anonymous Coward
      I think a better question is "Why wouldn't we have a separate multi-core GPU along with the multi-core CPU?" While I agree that nVidia is obviously going to protect its own best interests, I don't see the GPU/CPU separation going away completely. Obviously there will be combination-core boards in the future for lower-end graphics, but the demand on GPU cycles is only going to increase as desktops/games/apps get better. However, one of the huge reasons that video cards are a productive industry is that ther
    • I think by 2012 or 2020 or so, it is far more likely that all code will be compiled to an abstract representation like LLVM, with a JIT engine that continuously analyses your code, refactors it into the longest execution pipeline it can manage, examines each step of that pipeline and assigns each step to the single-threaded CPU-style or stream-processing GPU-style core that seems most appropriate.

      I don't think this will be done at a raw hardware level. I imagine the optimisation process will be far

    • "computers are finally able to use any number of cores each to their maximum potential at the same time."

      The main problem is that the software can't use an arbitrarily high number of cores, not the 'computers'. We could put out 64-core PCs (say 16 quad-cores), but software just isn't written to take advantage of that level of parallelism.
  • From TFA:

    "Sure," acknowledged David. "I think that if you look at any kind of computational problem that has a lot of parallelism and a lot of data, the GPU is an architecture that is better suited than that. It's possible that you could make the CPUs more GPU-like, but then you run the risk of them being less good at what they're good at now - that's one of the challenges ahead [for the CPU guys].

    Yeah... so all you have to do is turn every problem into one that GPUs are good at... lots of parallelism and l

  • There's the sun reflecting off the cars, there's the cars reflecting off each other, there's me reflecting off the cars. There's the whole parking lot reflecting off the building. Inside, there's this long covered walkway, and the reflections of the cars on one side and the trees on the other and the multiple internal reflections between the two banks of windows is part of what makes reality look real. AND it also tells me that there's someone running down the hall just around the corner inside the building, so I can move out of the way before I see them directly.

    You can't do that without raytracing, you just can't, and if you don't do it it looks fake. You get "shiny effect" windows with scenery painted on them, and that tells you "that's a window" but it doesn't make it look like one. It's like putting stick figures in and saying that's how you model humans.

    And if Professor Slusallek could do that in realtime with a hardwired raytracer... in 2005, I don't see how nVidia's going to do it with even 100,000 GPU cores in a cost-effective fashion. Raytracing is something that hardware does very well, and that's highly parallelizable, but both Intel and nVidia are attacking it in far too brute-force a fashion using the wrong kinds of tools.
    • During the Analyst's Day, Jen-Hsun showed a rendering of an Audi R8 that used a hybrid rasterisation and ray tracing renderer. Jen-Hsun said that it ran at 15 frames per second, which isn't all that far away from being real-time. So I asked David when we're likely to see ray tracing appearing in 3D graphics engines where it can actually be real-time?

      "15 frames per second was with our professional cards I think. That would have been with 16 GPUs and at least that many multi-core CPUs â" that's what t

      • by argent ( 18001 )
        Not to mention that Philipp Slusallek was getting 15 FPS in 2005, with an FPGA that had maybe 1% the gates of a modern GPU, and ran at 1/8th the clock rate. It might not have been beating the best conventional raytracers in 2005, but it was doing them with a chip that had the clock rate and gate count of a processor from 1995.
        • Dedicated ray tracing hardware would be nice. Unfortunately, I don't think any big hardware company is going to invest in the technology until they start feeling the competitive pressure from software renderers.
  • Future is set (Score:4, Insightful)

    by Archangel Michael ( 180766 ) on Wednesday April 30, 2008 @01:02PM (#23253390) Journal
    The pattern set by the whole CPU / Math Co-Processor integration showed the way. For those old enough to remember, once upon a time the CPU and Math Co-Processor were separate socketed chips. Specifically, you had to add the chip to the mobo to get math functions integrated.

    The argument back then was eerily similar to the one proposed by the NV chief, namely that the average user wouldn't "need" a Math Co-Processor. Then along came the spreadsheet, and suddenly that point was moot.

    Fast forward to today: if we had a dedicated GPU integrated with the CPU, it would eventually simplify things so that the next "killer app" could make use of a commonly available GPU.

    Sorry, NV, but AMD and INTEL will be integrating the GPU into the chip, bypassing bus issues and streamlining the timing. I suspect that VIDEO processing will be the next "Killer App". YouTube is just a precursor of what will come shortly.

    • by TopSpin ( 753 ) *
      CPUs, GPUs... in the end they're all ICs [wikipedia.org]. Bets against integration inevitably lose. The history of computation is marked by integration.

      NVidia already makes good GPUs and tolerable chipsets. They should expand to make CPUs and build their own integrated platform. AMD has already proven there is room in the market for entirely non-Intel platforms.

      It's that or wait till the competition puts out cheap, low power integrated equivalents that annihilate NVidia's market share. I think they have the credibilit
      • I've actually been suggesting to my friends for a while, that you'll end up with about four or five different major vendors of computers, each similar to what Apple is today, selling whole systems.

        Imagine Microsoft buying Intel, AMD buying RedHat, NVidia using Ubuntu (or whatever) and IBM launching OS/3 on Power chips, and Apple.

        If the Document formats are set (ISO) then why not?

        There will be those few that continue to mod their cars, but for the most part, things will be mostly sealed and only a qualified me
      • nVidia would be foolish to think that the desktop graphics market won't follow the same trends as the workstation graphics market, since their founders were at SGI when that trend started and were the ones that noticed it. I suspect this is why nVidia have licensed the ARM11 MPCore from ARM. They are using it in the APX 2500, which has one to four ARM CPU cores at up to 750MHz, and an nVidia-developed GPU, which supports OpenGL 2.0 ES and 720p encoding and decoding of H.264, in a small enough (and low en
    • "The pattern set by the whole CPU / Math Co-Processor integration showed the way. For those old enough to remember, once upon a time the CPU and Math Co-Processor were separate socketed chips"

      Math co-processors did not have the massive bandwidth requirements that modern GPUs need in order to pump out frames. Everyone in this discussion seeing the merging of CPU and GPU hasn't been around long enough; I remember many times back in the '80s and '90s the same people predicting the 'end of the graphics card' it
      • Your error is that I'm not suggesting the "end" of a graphics chip (or better, graphics core). Bringing the GPU to the CPU Core will INCREASE bandwidth, not decrease it, because it will not be limited by whatever bus you're running.

        The bus between the CPU cores, Memory, GPU and whatnot could be ultimately tuned in ways you might not be able to do with a standardized bus (PCIe, AGP etc).

        And in fact, the old CPU/Math Co-Processor was limited by bus speed/bandwidth, which was ONE of the reasons they brought it to the
  • by nherc ( 530930 ) on Wednesday April 30, 2008 @01:08PM (#23253460) Journal
    Despite what some major 3D game engine creators have to say [slashdot.org], if real-time ray tracing comes sooner rather than later, at about the time an eight-core CPU is common, I think we might be able to do away with the graphics card, especially considering the improved floating-point units going into next-gen cores. Consider Intel's QuakeIV Raytraced running at 99fps at 720P on a dual quad-core Intel rig at IDF 2007 [pcper.com]. This set-up did not use any graphics card processing power, and it scales up and down. So, if you think 1280x720 is a decent resolution AND 50fps is fine, you can play this now with a single quad-core processor. Now imagine it with dual octo-cores, which should be available when? Next year? I hazard 120fps at 1080P on your (granted) above-average rig doing real-time ray tracing some time next year IF developers went that route, AND still playable resolutions and decent fps with "old" (by that time) quad-cores.
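
    A back-of-envelope check on that extrapolation, assuming (optimistically) that frame rate scales linearly with core count and inversely with pixel count; the input figures simply restate the numbers quoted above:

    #include <cstdio>

    int main() {
        // Quoted starting point: ~99 fps at 1280x720 on two quad-core CPUs (8 cores).
        const double fps_720p_8core = 99.0;
        const double px_720p  = 1280.0 * 720.0;
        const double px_1080p = 1920.0 * 1080.0;

        // Assume fps scales ~linearly with cores and ~inversely with pixel count.
        double fps_720p_4core   = fps_720p_8core * (4.0 / 8.0);                       // ~50 fps
        double fps_1080p_16core = fps_720p_8core * (16.0 / 8.0) * (px_720p / px_1080p); // ~88 fps

        printf(" 4 cores @  720p: ~%.0f fps\n", fps_720p_4core);
        printf("16 cores @ 1080p: ~%.0f fps\n", fps_1080p_16core);
        return 0;
    }

    Under those assumptions dual octo-cores land closer to ~90 fps at 1080P, so hitting 120 fps would also need per-core improvements on top of the extra cores.
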
    • by Sancho ( 17056 ) *
      It seems like you could still have specialized ray-tracing hardware. Whether that's integrated into the main CPU as a specialized core, or as an expansion card really isn't relevant, though.

      I think the best thing about heading in this direction is that "accelerated" graphics no longer becomes limited by your OS--assuming your OS supports the full instruction set of the CPU. No more whining that Mac Minis have crappy graphics cards, no more whining that Linux has crappy GPU driver support....

      The downside i
      • by G00F ( 241765 )
        You can rarely upgrade a CPU nowadays unless you bought what is considered a low-end CPU. And even then you don't see the kind of performance jump you get from, say, a GeForce 440MX to a GeForce 8500 (easily under $100).

        Most people would spend ~$100+ to upgrade a CPU for small increases, and then their mobo is locked to PCI or AGP? Just spend ~$150-200 for a new CPU/RAM/mobo and upgrade the video card later. (I've been upgrading people to the AMD 690V chipset mobo, and it has given them a large enough increase that they didn't need the new card
        • by Sancho ( 17056 ) *
          Well, I was talking about a world where we've moved on from off-CPU GPUs. Right now, yes, it's rare for people to upgrade the CPU without also upgrading many other components--but it's not always as dark an outlook as you suggest. The Core Duo, for example, is pin-compatible with the Core2Duo, and the performance difference is noticeable (at least on Macs.)
    • And how big were the textures? Raw computational performance is one thing, but the often overlooked issue is memory bandwidth. GPUs don't access memory the same way as CPUs. It's harder to work with and not as efficient for general-purpose computing as a CPU's multiple levels of caches, but it has much higher bandwidth.

      NVIDIA already sells cards with >1GB of memory. Try rendering a scene at 60FPS when you have 1GB of textures and geometry data.
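
      A rough sense of scale for that point (the memory figures below are approximate, era-typical numbers I'm assuming, not anything from the thread): touching 1 GB of texture and geometry data every frame at 60 FPS already implies more bandwidth than an entire dual-channel DDR2 system bus provides.

      #include <cstdio>

      int main() {
          // If a frame really touches ~1 GB of textures and geometry...
          const double bytes_per_frame = 1024.0 * 1024.0 * 1024.0;
          const double fps = 60.0;
          const double needed_gb_s = bytes_per_frame * fps / 1e9;

          // Approximate, era-typical peak bandwidths (assumed, for scale only).
          const double ddr2_dual_channel_gb_s = 12.8;  // DDR2-800, two channels
          const double highend_vram_gb_s      = 80.0;  // ballpark for a high-end card

          printf("needed for 1 GB/frame @ 60 fps: ~%.0f GB/s\n", needed_gb_s);
          printf("dual-channel DDR2: ~%.1f GB/s, high-end VRAM: ~%.0f GB/s\n",
                 ddr2_dual_channel_gb_s, highend_vram_gb_s);
          return 0;
      }
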
  • Why not make one of the multiple cores a GPU? Then the speed at which it communicates with the CPU will be at clock speed.

    Problem solved.

    Of course Nvidia will need to come up with a CPU.

    Cheers
     
    • Of course Nvidia will need to come up with a CPU.
      No they wouldn't. They could easily use a licensed PowerPC core, or one of the open Sparc cores.

      Just means they'd have to forgo the Windows market...
  • It is very ridiculous, because if you can put 8 cores on a single die, then you can put in a lot more multiprocessors than a current GPU already has. And these GPUs are very scalable, and the software that runs on them is very simple, so you need simpler threads.
    And this is what happens: current GPUs can run 512 threads in parallel. Suppose you have 8 cores with Hyper-Threading; you could run, squeezing everything, 16 threads tops. And there isn't any 8-core for sale yet, is there?
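
    For comparison, this is what an utterly ordinary CUDA launch looks like; the kernel and sizes are made up, but the thread count is routine for a GPU and is exactly the point being made above.

    #include <cuda_runtime.h>
    #include <cstdio>

    // Each GPU thread just records its own global index.
    __global__ void busy(int* out) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        out[i] = i;
    }

    int main() {
        // 512 blocks x 256 threads = 131,072 threads scheduled across the GPU's
        // multiprocessors, versus the 16 hardware threads of the hypothetical
        // 8-core Hyper-Threaded CPU mentioned above.
        const int blocks = 512, threadsPerBlock = 256;
        const int n = blocks * threadsPerBlock;

        int* d_out;
        cudaMalloc(&d_out, n * sizeof(int));
        busy<<<blocks, threadsPerBlock>>>(d_out);
        cudaDeviceSynchronize();
        printf("launched %d GPU threads: %s\n", n, cudaGetErrorString(cudaGetLastError()));
        cudaFree(d_out);
        return 0;
    }
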
  • Nvidia makes SIMD (single instruction, multiple data) multicore processors while Intel, AMD and the other players make MIMD (multiple instructions, multiple data) multicore processors. These two architectures are incompatible, requiring different programming models. The former uses a fine grain approach to parallelism while the latter is coarse-grained. This makes for an extremely complex programming environment, something that is sure to negatively affect productivity. The idea that the industry must some
    • Re: (Score:3, Informative)

      by hackstraw ( 262471 )
      Nvidia makes SIMD (single instruction, multiple data) multicore processors...

          That is untrue. The Nvidia CUDA environment can do MIMD. I don't know the granularity, or much about it, but you don't have to run in complete SIMD mode.
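
          A hedged illustration of the granularity point: CUDA's model is SIMT, so threads within a warp share an instruction stream, but separate blocks (and separate warps) are free to follow completely different code paths, which is coarser than strict SIMD. A minimal sketch:

          #include <cuda_runtime.h>
          #include <cstdio>

          // Threads within a warp execute in lockstep (SIMT), but different blocks
          // can take entirely different branches and run concurrently.
          __global__ void mixed(float* out, int n) {
              int i = blockIdx.x * blockDim.x + threadIdx.x;
              if (i >= n) return;

              if (blockIdx.x % 2 == 0) {
                  // Even blocks do one kind of work...
                  out[i] = sinf(i * 0.001f);
              } else {
                  // ...odd blocks do something entirely different, at the same time.
                  float acc = 0.0f;
                  for (int k = 1; k <= 16; ++k) acc += 1.0f / (i + k);
                  out[i] = acc;
              }
          }

          int main() {
              const int n = 1 << 16;
              float* d_out;
              cudaMalloc(&d_out, n * sizeof(float));
              mixed<<<(n + 255) / 256, 256>>>(d_out, n);
              cudaDeviceSynchronize();
              printf("status: %s\n", cudaGetErrorString(cudaGetLastError()));
              cudaFree(d_out);
              return 0;
          }
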

"An idealist is one who, on noticing that a rose smells better than a cabbage, concludes that it will also make better soup." - H.L. Mencken

Working...