
NVIDIA To Enable PhysX For Full Line of GPUs

MojoKid brings news from HotHardware that NVIDIA will be enabling PhysX for some of its newest graphics cards in an upcoming driver release. Support for the full GeForce 8/9 line will be added gradually. NVIDIA acquired PhysX creator AGEIA earlier this year.


  • Hentai (Score:5, Funny)

    by jaguth ( 1067484 ) on Friday June 20, 2008 @07:21PM (#23881051)
    Maybe we'll finally see some realistic physics with fantasy tentacle rape hentai games. Is it just me, or do the current tentacle rape game physics seem way off?
  • by neokushan ( 932374 ) on Friday June 20, 2008 @07:37PM (#23881177)

    I read TFA, but it didn't really give many details as to how this works, just some benchmarks that don't really reveal much.
    Will this work on single cards or will it require an SLi system where one card does the PhysX and the other does the rendering?

    Plus, how does handling PhysX affect framerates? Will a PhysX enabled game's performance actually drop because the GPU is spending so much time calculating it and not enough time rendering it, or are they essentially independent because they're separate steps in the game's pipeline?

    • Re: (Score:1, Flamebait)

      by Ant P. ( 974313 )

      The effect on framerate doesn't matter - the target audience for this will have at least one spare graphics card to run physics on.

      • Are you sure that's the target audience, though?
        See I've only got 1 card and I'd love hardware accelerated physics, but I sure as hell wouldn't buy a separate card for it.

        • Re: (Score:1, Insightful)

          by Anonymous Coward

          Previously you had to buy a $200+ physics card from Ageia. I'm not sure how well a graphics card can do physics, but it'd be neat if I could take an older graphics card and repurpose it to do physics instead of throwing it away.

          • Ageia isn't a hardware company, so you couldn't buy one from them, but they did license the hardware to others who made cards for $200. The PS3 uses a PhysX chip, if I remember correctly.
            • If not the chip, at least some PS3 games use the Ageia physics engine.... The game "Pain" involves flinging a man or woman into objects in a downtown area and watching them splat and twist into windows, cars, monkeys, etc. Somewhat disturbing....
      • by lantastik ( 877247 ) on Friday June 20, 2008 @08:14PM (#23881437)

        That's not true at all. It works in a single card configuration as well. Modern GPUs have more than enough spare parallel processing power to chug away at some physics operations. Guys are already modifying the beta drivers to test it out on their Geforce 8 cards. The OP in this thread is using a single card configuration:
        http://forums.overclockers.com.au/showthread.php?t=689718 [overclockers.com.au]

        • While the card is off rendering, the CPU sits waiting. While the CPU is busy, the card sits waiting.
          • No, don't be stupid. Any half-decent games engine nowadays does everything with parallel threads.
            They (cpu and gpu) still have to wait on each other if they finish early (to synchronise the frames), but they will spend at least 50% of their time both running at once. Ideally it would be 95%+, but games are often unbalanced in favour of graphics complexity these days.
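
An illustrative sketch of the pattern this comment describes: a simulation thread and a render thread that overlap within a frame and only synchronise at the frame boundary. This is a minimal, hedged example with invented names (FrameState, simThread, renderThread), not how any particular engine is written.

```cpp
// Minimal sketch: the sim thread prepares frame N+1 while the render thread
// consumes frame N; the two threads meet only at the frame boundary.
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <thread>

struct FrameState { int frameNumber = 0; };   // stand-in for positions, transforms, ...

constexpr int kFrames = 5;
FrameState buffers[2];   // double buffer: sim writes one half, render reads the other
int simIndex = 0;        // which buffer the sim thread is currently filling
bool frameReady = false;
std::mutex m;
std::condition_variable cv;

void simThread() {
    for (int f = 0; f < kFrames; ++f) {
        buffers[simIndex].frameNumber = f;    // "physics/AI/game logic" runs while the GPU draws the previous frame
        std::this_thread::sleep_for(std::chrono::milliseconds(5));
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !frameReady; });  // wait until render consumed the last handoff
        simIndex ^= 1;                              // hand over the finished buffer
        frameReady = true;
        cv.notify_one();
    }
}

void renderThread() {
    for (int f = 0; f < kFrames; ++f) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return frameReady; });
        const FrameState& frame = buffers[simIndex ^ 1];  // the buffer sim just finished
        lock.unlock();
        std::printf("rendering frame %d\n", frame.frameNumber);  // submit draw calls; GPU crunches asynchronously
        std::this_thread::sleep_for(std::chrono::milliseconds(5));
        lock.lock();
        frameReady = false;                               // let the sim thread swap buffers again
        cv.notify_one();
    }
}

int main() {
    std::thread sim(simThread), render(renderThread);
    sim.join();
    render.join();
}
```

A real engine hands frames off in more elaborate ways (triple buffering, job systems, GPU fences), but the point is the same: the waiting only happens at the boundary, so both processors spend most of each frame busy.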

    • by Kazymyr ( 190114 ) on Friday June 20, 2008 @09:20PM (#23881769) Journal

      Yes, it works on one card. I enabled it on my 8800GT earlier today. The CUDA/PhysX layer gets time-sliced access to the card. Yes, it will drop framerates by about 10%.

      OTOH if you have 2 cards, you can dedicate one to CUDA and one to rendering so there won't be a hit. The cards need to NOT be in SLI (if they're in SLI, the driver sees only one GPU, and it will time-slice it like it does with a single card). This is actually the preferred configuration.
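
For what it's worth, the "dedicate one card to CUDA" arrangement looks roughly like this from the application side. A minimal sketch using the CUDA runtime API; the assumption that the last enumerated device is the one not driving the display is purely illustrative.

```cpp
// Minimal sketch: enumerate CUDA devices and pick one for compute (physics)
// work separate from the card doing the rendering. With two non-SLI cards the
// runtime reports two devices; in SLI the driver exposes them as one.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("no CUDA-capable device found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("device %d: %s, %d multiprocessors\n",
                    i, prop.name, prop.multiProcessorCount);
    }
    // Assumption for the sketch: the last device is the one not driving the
    // display, so dedicate it to physics/CUDA work.
    cudaSetDevice(count - 1);
    // ... allocate buffers and launch physics kernels on this device ...
    return 0;
}
```

On a single-card system the same code simply ends up sharing the one device, which is the time-sliced case described above.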

    • Re: (Score:3, Interesting)

      by nobodyman ( 90587 )

      According to the Maximum PC Podcast [maximumpc.com] they saw significant framerate hits with single card setups, but that it was much better under SLi. They did stress that they had beta drivers, so things may drastically improve once nvidia gets final drivers out the door.

  • This was reported in February [techreport.com], shortly after Nvidia purchased PhysX. Of course, the GF9 series had not been released yet, so it was not mentioned in the news posting -- but future support sort of goes without saying. I'm fairly certain that it was reported on /. with a nearly identical headline in February as well.
  • ...how much gamers used to shit all over PhysX cards? Now, they can't wait to get their hands all over it.

    • Re: (Score:3, Insightful)

      by urbanriot ( 924981 )
      Really? I don't know any gamers that are excited about this. Name more than one game (without googling) that supports Physx?
      • Re: (Score:3, Informative)

        by lantastik ( 877247 )

        I don't need to Google. Anything built on the Unreal 3 engine has PhysX support built in.

        • So... Unreal 3? That's one...
          • by lantastik ( 877247 ) on Friday June 20, 2008 @08:29PM (#23881521)

            Reading comprehension...anything built on the Unreal 3 engine.

            Like one of these many licensees:
            http://www.unrealtechnology.com/news.php [unrealtechnology.com]

            Native PhysX Support:
            http://www.theinquirer.net/en/inquirer/news/2007/05/30/unreal-3-thinks-threading [theinquirer.net]

      • Re: (Score:2, Funny)

        by Anonymous Coward

        Duke Nukem Forever.

      • City of Heroes/Villains
    • Re: (Score:2, Interesting)

      Except modern physics engines (see: Quake 1 for MS DOS) use threads for each individual moving physics object, and the render thread that manages the graphics card is a single thread (hard to split that up...), so with new quad-core and 8- and 16-core systems you've got a much better physics processing engine running on your CPU.

      • Re: (Score:3, Informative)

        Except modern physics engines (see: Quake 1 for MS DOS) use threads for each individual moving physics object
        Name one engine that is that stupid.

        When we're talking about game worlds in which there could easily be 50 or 100 objects on the screen at once, it makes much more sense to have maybe one physics thread (separate from the render thread, and the AI thread) -- or maybe one per core. I very much doubt one real OS thread per object would work well at all.
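
A minimal sketch of the "one physics thread per core" arrangement described here, assuming for illustration that the objects can be stepped independently (real solvers need contact resolution and are not this simple); all names are made up.

```cpp
// Minimal sketch: a handful of worker threads (one per core), each stepping a
// contiguous slice of the object array -- rather than one OS thread per object.
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

struct Body { float x = 0.0f, vx = 1.0f; };

void stepSlice(std::vector<Body>& bodies, size_t begin, size_t end, float dt) {
    for (size_t i = begin; i < end; ++i)
        bodies[i].x += bodies[i].vx * dt;   // trivial stand-in for real integration
}

int main() {
    std::vector<Body> bodies(100);          // "50 or 100 objects on the screen at once"
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    const float dt = 1.0f / 60.0f;

    std::vector<std::thread> pool;
    const size_t chunk = (bodies.size() + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        const size_t begin = w * chunk;
        const size_t end = std::min(bodies.size(), begin + chunk);
        if (begin >= end) break;
        pool.emplace_back(stepSlice, std::ref(bodies), begin, end, dt);
    }
    for (auto& t : pool) t.join();
    std::printf("stepped %zu bodies on %zu worker threads\n", bodies.size(), pool.size());
}
```

In practice you would keep the worker pool alive across frames rather than spawning threads every tick, but the partitioning idea is the same.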

        • by bluefoxlucid ( 723572 ) on Friday June 20, 2008 @11:47PM (#23882379) Homepage Journal

          Um, except if you have exactly 1 physics thread you have to juggle complex scheduling considerations about who needs how much CPU, handle the prioritization against the render and AI threads, handle intermixing them, etc. You have to implement a task scheduler. ... which is exactly what Quake 1 did. Carmack wrote a userspace thread library and spawned multiple threads. Since DOS didn't have threads, this worked rather well.

          The OS scheduler gives any thread a base priority, and then raises that priority every time it passes the thread over in the queue when it wants CPU time. It lowers the priority back to the base when the thread runs. If a task sleeps, it gets passed over and left at the lowest priority; if it wakes up and wants CPU, it climbs the priority tree. In this way, tasks which need a lot of CPU wind up getting run regularly -- as often as possible, actually -- and when multiple ones want CPU they're split up evenly.

          If you make the render thread one thread, you have to implement this logic yourself. Further, the OS will see your thread as exactly one thread, and act accordingly. If you have 10000 physics objects and 15 AIs, keeping both threads CPU-hungry, then the OS will give 1/3 CPU to the physics engine; 1/3 CPU to the AI; and 1/3 CPU to the render thread. This means your physics engine starves, and your physics start getting slow and choppy well before you reach the physical limits of the hardware. The game breaks down.

          You obviously don't understand either game programming or operating systems.
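
A hedged sketch of what "implement a task scheduler" can look like inside a single physics thread: a queue of small per-object jobs with a per-frame time budget. Illustrative only; it makes no claim about what Quake 1 or any real engine actually does.

```cpp
// Minimal sketch: cooperative scheduling of physics jobs inside one OS thread,
// with a fixed time budget per frame. Jobs that don't fit are deferred.
#include <chrono>
#include <cstdio>
#include <deque>
#include <functional>

using Clock = std::chrono::steady_clock;

int main() {
    std::deque<std::function<void()>> jobs;
    for (int i = 0; i < 10000; ++i)                                       // "10000 physics objects"
        jobs.push_back([i] { volatile float x = i * 0.001f; (void)x; });  // stand-in for one object's update

    const auto budget = std::chrono::milliseconds(4);     // the physics slice of a ~16 ms frame
    const auto start = Clock::now();
    size_t done = 0;
    while (!jobs.empty() && Clock::now() - start < budget) {
        jobs.front()();     // run one small job, then fall back to the scheduler loop
        jobs.pop_front();
        ++done;
    }
    std::printf("ran %zu jobs this frame, %zu deferred to the next\n", done, jobs.size());
}
```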

            Um, except if you have exactly 1 physics thread you have to juggle complex scheduling considerations about who needs how much CPU, handle the prioritization against the render and AI threads, handle intermixing them, etc.

            Which people do.

            Or simpler: Give the render thread priority, and set it to vsync. Anything above 60 fps is a waste.

            If you have 10000 physics objects and 15 AIs, keeping both threads CPU-hungry, then the OS will give 1/3 CPU to the physics engine; 1/3 CPU to the AI; and 1/3 CPU to the render thread.

            Assuming the render thread needs that 1/3rd.

            Keep in mind that ideally -- that is, if you're not lagging -- none of these are pegging the CPU, and you're just making whatever calculations you make every tick.

            You obviously don't understand either game programming or operating systems.

            Well, let's see -- most games I know of won't take advantage of more than one CPU. In fact, when Quake3 was ported to dual-core, it took a 30% performance hit -- and keep in mind, that's Carmack doing it.
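
A minimal sketch of the "anything above 60 fps is a waste / you're just making whatever calculations you make every tick" point above: a fixed-rate tick loop that sleeps out the rest of each frame instead of pegging the CPU. Illustrative only; real engines decouple simulation and render rates in more careful ways.

```cpp
// Minimal sketch: a fixed 60 Hz tick that does its work and then sleeps until
// the next tick, so nothing pegs the CPU when the game isn't lagging.
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using namespace std::chrono;
    const auto tick = duration_cast<steady_clock::duration>(duration<double>(1.0 / 60.0)); // ~16.7 ms
    auto next = steady_clock::now();
    for (int frame = 0; frame < 10; ++frame) {
        // ... physics, AI, and render submission for this tick go here ...
        next += tick;
        std::this_thread::sleep_until(next);  // idle until the next tick boundary
    }
    std::printf("ran 10 ticks at a fixed rate\n");
}
```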

            • Well, let's see -- most games I know of won't take advantage of more than one CPU. In fact, when Quake3 was ported to dual-core, it took a 30% performance hit -- and keep in mind, that's Carmack doing it.

              Really? Quake 3 was already threaded when released; I looked through the code myself. On Windows, however, the Windows scheduler pegs all threads in one program to the CPU the program's on unless you manually manage that part of the scheduler (you have to give threads CPU affinity, or they all have affinity to CPU 0). I ran it on Linux (which, if threads have no CPU affinity, distributes them to the next available CPU when scheduling), and it pushes both cores just fine.
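
A minimal sketch of what setting thread affinity by hand looks like, assuming Linux/glibc and pthread_setaffinity_np; default scheduler behaviour varies by OS and version, so treat this as illustration only.

```cpp
// Minimal sketch: spawn one worker per core and pin each to a specific CPU.
// Linux/glibc-specific (pthread_setaffinity_np); purely illustrative.
#include <algorithm>
#include <cstdio>
#include <pthread.h>
#include <sched.h>
#include <thread>
#include <vector>

void pinToCore(std::thread& t, int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(t.native_handle(), sizeof(cpu_set_t), &set);
}

int main() {
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        workers.emplace_back([c] {
            std::printf("worker %u running\n", c);       // stand-in for physics/AI work
        });
        pinToCore(workers.back(), static_cast<int>(c));  // one worker per core
    }
    for (auto& t : workers) t.join();
}
```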

              And this is hardly the first place this argument has been made -- green threads are inherently more efficient than OS threads.

              Ah, the green threads argument. Green

    • Hardly flamebait. Everyone was dumping on them pretty bad.

      "WHAT I have to buy a second card, it's not free/can't run off the bios chip in the MB, WTF!!??"

      The funny thing is that now that the PhysX cards are Rago ("It's in there"), you're still going to have to buy a second video card to keep your frame rate up and increase the number of PhysX objects. Of course, with this arrangement your GPU is less specialized than the PhysX hardware and can be used for all the CUDA applications.

      I'm going to end up

  • I called it (Score:4, Insightful)

    by glyph42 ( 315631 ) on Friday June 20, 2008 @08:01PM (#23881359) Homepage Journal
    I called this when the PhysX cards first came out. I told my excited coworkers, "these cards are going to be irrelevant pretty soon, because it will all move to the GPU". They looked at me funny.
    • by ruiner13 ( 527499 ) on Friday June 20, 2008 @08:33PM (#23881539) Homepage
      Awesome! Would you like a medal or a monument? What stocks should I buy next week? Who will become the next president, oh wise prophet?
    • Re: (Score:1, Troll)

      by slaker ( 53818 )

      That is kind of a non-sequitur when you work in a whorehouse.

    • by Barny ( 103770 )

      Of course, note that NV were packaging a GPU-accelerated Havok engine with TWIMTBP for developers (look at Company of Heroes for that kind of thing); their plans with Havok fell through when Intel bought the engine tech, so NV secured Ageia so that this time its tech can't be yanked out from under it.

      A fun thing to do: load up all your favorite games and actually watch the intros. How many have TWIMTBP? How many of the new games from these makers will require an NV card for their physics to run well?

  • If they do, it'll cost $15 for the driver to enable this.
    • I'm assuming you're talking about the 802.11n controversy? That was actually down to anti-monopoly laws; Apple were legally obliged to charge people a certain amount for the upgrade.
      • The iPod Touch updates cost money, whereas the iPhone updates don't. Reasoning is that the Touch revenue is recognized at time of sale, and iPhone revenue is spread over many months. One would expect nVidia to take the revenue when they sell the card, so if Apple are correct with their argument about adding functionality and the accounting, nVidia would need to charge. The post was mostly in jest.
  • NVIDIA To Enable PhysX For Full Line of GPUs
    No, actually they are adding it to new editions of their cards. Not current cards already in machines. It is not a driver update.
