Technology

GeforceFX (vs. Radeon 9700 Pro) Benchmarks

Obiwan Kenobi writes "Looks like the guys at Maximum PC got lucky -- they scored the first ever GeforceFX benchmarks via an Alienware prototype machine. Two 'marks to notice: The Geforce FX scored 209 FPS in Quake 3 (1600x1200x32) and 41fps in 3dMark Game4 demo, while the Radeon 9700 Pro attained only 147fps in Quake 3 yet came back with 45fps in the 3dMark test. It seems that the GeforceFX is the clear leader in pure processing power, but in memory bandwidth the 9700 Pro is still king."
  • by Anand_S ( 638598 ) on Monday January 06, 2003 @09:34AM (#5025025)
    ATI's 147 fps has always been a problem for me in Quake. I like to blink a lot.
    • by Anonymous Coward
      Framerates are not static. They will dip heavily with complex scenes, especially map geometry. If you play any Q3A mods like UrT, True Combat, or just Q3-based games like SoF2 and RtCW, the framerates aren't going to stay a magical 148fps.

      More important is how this will translate into capacity for future games. Doom 3 will take considerably more muscle than Q3 does.
      • by SimplyCosmic ( 15296 ) on Monday January 06, 2003 @10:00AM (#5025197) Homepage
        Conversely, simply having a higher average fps doesn't guarantee that the highs and lows will be better than those of the card with the lower average fps.

        There's nothing in that single number to say that the higher average doesn't come with wildly varying peaks and valleys in performance, while the lower number could be much steadier, with relatively little variance across its peaks and valleys.

        It only makes you wish that these benchmarks, especially the "real world" Quake 3 tests, had a graph of fps throughout the test to see how performance was at any particular point.
        • Interesting!

          Maybe it would be a good idea for benchmarks to start reporting even just the standard deviation along with the average fps. I'd certainly find it useful/informative.
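          For what it's worth, the raw data is already in the per-frame times; here is a rough Python sketch (made-up numbers, not from any real benchmark) of the kind of summary a review could print alongside the average:

            import statistics

            def fps_report(frame_times_s):
                """Summarize a benchmark run from per-frame render times (seconds)."""
                fps = [1.0 / t for t in frame_times_s]
                return {
                    "avg_fps": round(statistics.mean(fps), 1),
                    "min_fps": round(min(fps), 1),
                    "stdev_fps": round(statistics.stdev(fps), 1),
                }

            # Two hypothetical runs: one steady, one faster on average but with nasty dips.
            steady = [1 / 60] * 300
            spiky = [1 / 90] * 270 + [1 / 12] * 30
            print(fps_report(steady))   # tight and consistent, stdev near zero
            print(fps_report(spiky))    # higher average, ugly minimum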
          • I bet that 95% of the people that would buy this card right now couldn't care less about the standard deviation or any other statistic about the card besides fps. They just want bragging rights to be able to say that they got .0002% better fps than card XYZ.
        • If (big if, sadly) I get one of these cards, I'll probably cap the framerate in games that support it and play with everything as cranked as it'll handle. It seems better to me to have 60 FPS with incredible detail than just 200 FPS.

          My monitor refreshes at 72Hz. What am I getting out of 100+ FPS? Sure, the raw power is impressive, but we don't use it.
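          Conceptually, a cap just sleeps off whatever is left of each frame's time budget. A toy Python sketch of the idea (real games do this internally or expose a cvar/vsync setting, of course):

            import time

            def run_capped(render_frame, cap_fps=72, seconds=2):
                """Render frames, then sleep out the rest of each frame's budget."""
                budget = 1.0 / cap_fps
                end = time.perf_counter() + seconds
                frames = 0
                while time.perf_counter() < end:
                    start = time.perf_counter()
                    render_frame()
                    frames += 1
                    leftover = budget - (time.perf_counter() - start)
                    if leftover > 0:
                        time.sleep(leftover)
                return frames

            # Stand-in "renderer" that takes ~2 ms per frame; capped, we land near 72 fps.
            print(run_capped(lambda: time.sleep(0.002)) / 2, "fps")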
          • What I was once told by people who are likely to know better than I do is that the Quake series of games had, for a long time, physics quirks in it. These very well may still be in Quake 3, though I don't know for certain. In these games, certain jumps that let you reach various places (usually the homes of high-powered items) much faster than normal became possible only if your framerate was above a certain number. I can't remember if it was 100 or 120 FPS, but it was a triple-digit number.

            There used to be, and very well may still be, archives of game demos of people demonstrating these jumps and trying to one-up each other. Almost all of the people I know who contributed to these archives believed this was true.

            That said, I have never been that hardcore of a 1v1 or Team DM player to care about this and as such haven't researched its truth myself. It very well may just be some bullshit that spread because enough people blindly passed it on like I am. If someone wants to correct me, please do.
      • by Masem ( 1171 ) on Monday January 06, 2003 @10:03AM (#5025220)
        I certainly have no problem with id and others trying to push the limits of real-time 3D rendering power, or the hardware makers making boards that can do that.

        However, I do question the point where much of the work for games is put into the engine, and little of it into the gameplay itself. I realize that we're almost at the point where one company makes the engine and another company licenses that engine to make the game, so the responsibility for good gameplay is on the shoulders of the latter company. However, it seems that a lot of the games published of late focus more on realism and 3D modeling than on the playability of the game, and the continuous push to up the abilities of rendering does not seem to allow the developers of new games to step back and think about gameplay.

        The other problem is that right now, with the specs we're getting on Doom3 and other games, it sounds like another forced hardware upgrade cycle if you want to play these games reasonably. Sure, you can drop the screen resolution, and there are probably hundreds of tweaks you can apply to the engine to cut back details, but older yet still viable cards will have problems. I know people don't want to develop for outdated systems, but there is a point where you have to include a reasonable amount of backwards compatibility to allow non-power gamers to play new games as well. One of the reasons that Half-Life and CS sold so well was that the game was optimized for play on a previous generation of processors/vidcards compared to the average system being sold. (HL/CS also, IMO, excels at its gameplay, as mentioned above.)

        I know a lot of PC game writers are of the opinion that the gaming market will only move forward when vidcard makers put new features into cards and PC game makers then follow up by using those features in prominent titles, but the PC gaming market is just not healthy right now. Making games that require the latest-and-greatest hardware will limit sales further and may push this part of the market into a slump, while console gamers will continue to see more improved titles.

        Again, I'm not against improvements in 3d rendering tech and pushing polys as fast as possible; it's the game makers themselves that need to realize what the average hardware of their target audience is going to be and not just to focus on how pretty the game looks.

        • by afidel ( 530433 ) on Monday January 06, 2003 @10:23AM (#5025355)
          The reason Carmack designs his engines for the top-of-the-line card at the time of launch is that id only makes okay money on its own games. Where they rake in the cash is selling the engine to other game companies, who then go and make another game around it. With game development lifecycles being fairly long relative to hardware lifecycles, this makes a fair amount of sense. The fanboys will go out the day Doom3 ships and buy a new top-of-the-line rig; for the rest of us, that technology will get into our computers over the next year or two, which is probably in line with the amount of time it will take the companies that license the id engine to make their games. So Carmack puts everything into his engine because he knows that by the time most people use it, their hardware will be up to snuff. Remember, when he started working on the Doom3 engine, the idea of programmable pixel shaders was just that: an idea. Now most people who play games have a card with programmable pixel shaders. If Carmack based all of his engine design on the hardware available when he started the design, it would be outdated before it ever got used outside id.
        • Basically it boils down to: yes, if you want to play the latest 3D games, you'd better get a new machine.

          I don't see why this is bad. I personally dislike 3D games since they all look alike. If you want, you can still play great games on older hardware -- the whole simulation and build-up scene, for instance. And most likely your system could even handle games such as Dark Age of Camelot [darkageofcamelot.com] or EverQuest [everquest.com], which are games with a focus on gameplay and not graphics. I agree, there are a lot of crappy games out there with really stunning graphic effects, but I don't care about them (anymore). I let my friends play them, and when a diamond is found among them I consider whether it's worth the hardware upgrade. The last game I did this for was Return to Castle Wolfenstein: I had to upgrade for $20 and get a TNT2 to play it at decent fps. Now I wonder what Doom3 brings. Is it worth the upgrade to a Radeon/GeForce FX? I don't know. Maybe I will keep waiting for Warhammer Online [warhammeronline.com] until I upgrade. But someone will betatest for me, and then I can still stick with my Xentor TNT2/32MB [thetechzone.com] and keep playing Anno 1503 [anno1503.de] or Dark Ages [darkages.com].
      • This is why I absolutely HATE average frame rate as an indicator of a card's game performance. A much better indicator would be minimum framerate. I don't care if a card can create frames faster than my monitor can display them 95% of the time if it bogs down to sub-30 fps for the other 5% of the time. It really sucks in shooters and is annoying in RPGs when you go from glorious full-motion action to flipbook graphics just because some effect or combination of effects took place in your field of view. As an example, my GF3 Ti does very well in NWN at 1024*768 full detail most of the time (average 40 fps), but if you get too many lights on screen it would bog down towards single digits, so the only thing I could do was turn off dynamic shadows, which looked way cool but were causing such slowdowns that I was ripped out of the gaming experience.
    • by OldStash ( 630985 ) on Monday January 06, 2003 @09:57AM (#5025177)
      True gamers blink between frames.
      • Say it is 100 fps, so each frame takes 0.01 seconds. In that time your eyelid has to travel, say, 2 cm (1 cm down, 1 cm up again). Speed = distance/time = 0.02 m / 0.01 s = 2 m/s, which works out to 2 * 3600 / 1000 = 7.2 km/h.

    • If you use shutter glasses to display Quake in 3D, then 147 fps total = 73.5 fps for each eye, which is clearly below the magical number of 85 Hz refresh that we all love and need.

      (now try to find a monitor with a 170 Hz refresh rate)

    • I'm not a gamer, so bear with me, but where does one get a monitor that has a refresh rate over 200 Hz at 1600x1200?

      I am correct that your game's fps cannot be faster than the gun in your CRT, right?
  • by OutRigged ( 573843 ) <rage AT outrigged DOT com> on Monday January 06, 2003 @09:35AM (#5025028) Homepage
    I'm sure the GeForce FX drivers they were using were early beta versions, and as such not optimized to the standard of release drivers.

    I'll wait for the final hardware and drivers before I decide which to buy.
    • Optimized for what, Quake benchmarks?

      I seem to recall a long standing argument about GFX card drivers being 'optimized' to perform well in the standard performance tests e.g. Quake 3.
      Couldn't find a link on google though.
      • Do a Google search for "quack 3" and whatnot. I think it was actually the ATI drivers that were optimized for Quake 3. What they did was take a hex editor with a "find and replace" function, find every instance of quake3 in the executable, and change it to quack. Speed decreased a good 14 fps, and this was back in the day of
        Too lazy to find a link, though.
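        If memory serves, the whole trick amounted to something like this (hypothetical filenames, and a same-length replacement so no file offsets move; an illustration only, not the actual utility reviewers used):

          # Copy the game binary, swapping the name the driver sniffs for.
          data = open("quake3.exe", "rb").read()
          patched = data.replace(b"quake3", b"quack3")   # same length on purpose
          assert len(patched) == len(data)
          open("quack3.exe", "wb").write(patched)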
      • Yes, ATI was accused of cheating [tech-report.com] in one of their driver sets, and they did indeed do some "quackery". But since this happened, more and more people are looking for it, so I doubt any GFX card company would dare try it again.

        But also keep in mind that optimizing for a specific game isn't necessarily a Bad Thing, so long as it doesn't hurt the visuals or quality. For example, if you know a certain game doesn't need/use certain features of the card, and by disabling them you improve performance, then why not. (ATI, however, vastly cut down on the texture quality in the game itself to get their increases - tsk tsk).

    • by prefect42 ( 141309 ) on Monday January 06, 2003 @10:06AM (#5025241)
      Which is why in the article they say that the drivers are beta and as such the results should be viewed as beta too.

      Come on, read the article.

      The reluctance of NVidia to allow them to test the higher levels of AA is more telling if you ask me.
  • competition (Score:5, Interesting)

    by vistic ( 556838 ) on Monday January 06, 2003 @09:37AM (#5025043)
    Competition like this kicks ass. The big players taking turns taking the lead. I only wish Matrox were making a larger effort than the Parhelia.

    What I am surprised about though is that prices are so high for graphics cards still even with relatively good competition in the marketplace. I mean even the Parhelia debuted at like $400 didn't it?

    It always seemed to be that the benefit of having AMD competing with Intel, was that I could get a really good CPU pretty cheap. (Though now it seems AMD is taking it easy for awhile, so that benefit may have been short-lived.) Yet I don't see the competition driving video card prices down.

    There's some evil conspiracy afoot here, I know it!
    • Re:competition (Score:5, Insightful)

      by netwiz ( 33291 ) on Monday January 06, 2003 @09:47AM (#5025109) Homepage
      What I am surprised about though is that prices are so high for graphics cards still even with relatively good competition in the marketplace. I mean even the Parhelia debuted at like $400 didn't it?

      It's mostly due to high-end video chipsets costing so much, plus the added expense of the uber-fast memory that these cards require, but mostly, it's driven by ultimate demand for these products. Everyone needs a P4/Athlon XP, but only a few people need the absolute fastest display adapter out there. As a result, fewer units get produced, as fewer units will actually sell. Combine that with the already higher cost of producing core logic that's 1.5-2x the transistor count of high-end CPUs and RAM that's 2-3x faster than desktop stuff, and you've got a recipe for pricey hardware.

      Also, don't forget that most products these days are priced at what the market will bear. People will pay $400 for the fastest thing on the block, so that's what they sell for. My general rule of thumb is to wait a month or two after the new, fast, whiz-bang product, then buy whatever card has the fewest problems and costs $300.
    • by W2k ( 540424 ) on Monday January 06, 2003 @10:26AM (#5025381) Journal
      I see this a lot nowadays - people saying that AMD have "lost their edge", or "been taking it easy for a while" ... that is simply not true. An AMD Athlon XP 2800+ _will_ beat an Intel Pentium IV at 2.8 GHz in most benchmarks (and the 3.06 GHz P4 in quite a few - see the latest ones at THG [tomshardware.com] or AT [anandtech.com] if you don't trust me), just as it is supposed to. And you can still practically get two Athlons (not 2800+'s, mind you) for the same price as one high-end Pentium IV. Surely no-one here thinks that a single P4, HT or no HT, stands a chance against a true SMP system (given apps that take advantage of both CPUs)?

      Furthermore, there's no app or game available on this earth, and there probably won't be for at least two years to come, where the speed difference between an AXP/2800+ and a P4/3GHz is big enough to really mean anything to anyone other than the fanatical overclocking crowd, who will spend any amount of money just to have the fastest stuff on the market, only to use it for stuff like playing Counter-Strike, which uses perhaps 20% of the total CPU and graphics card capacity. Well, if you're into that sort of stuff, sure. Get a P4 and enjoy having the fastest CPU there is .. until the next model P4/AXP is out, that is.

      For the rest of us, who base our computer purchases on common sense, for speed, stability and price, the obvious choice is still the Athlon XP.

      Besides, the Pentium IV still has a pretty fucked up design. See this page [emulators.com] if you don't know what I'm talking about. I always laugh at people who whine that Windows is poorly designed, only to praise Intel CPUs in the next breath.

      Anyone care to disagree? Remember, modding me down is so much easier than posting an intelligent reply.
  • nVidia still haven't released the integrated graphics version of the nForce2 that they announced over 6 months ago (although you can buy the non-IGP version). They told me that it would be out in September of 2002, and now they just ignore me. I've made the decision not to buy any more products from them since they actively engage in announcing products that take forever to materialize. ATI, OTOH, announces a product only as they are readying to ship it. I have much more respect for this.

    I wouldn't be surprised if ATI has something oodles better than the FX if/when it ever ships.
  • by Ninja Master Gara ( 602359 ) on Monday January 06, 2003 @09:39AM (#5025051) Homepage
    The early beta is probably the reason nVidia wouldn't release this before. They don't want to see numbers like this out in the public before they're ready.

    I'll still bet money the GF FX will be the dominant card come final release.

    • The early beta is probably the reason nVidia wouldn't release this before. They don't want to see numbers like this out in the public before they're ready.

      Ready for what - making some optimizations to the drivers to make them look good in some random benchmark?

      Of course the drivers are beta, and the final release will probably be faster, but these figures look more realistic than the figures Nvidia quoted when the card was announced!

      And to be honest, unless things change drastically (they seldom do), this card will probably not be much more than 20-25% faster than a Radeon 9700 - but the Radeon will probably be a bit more than 20-25% cheaper by that time!

      Besides - who cares about those $400+ gfx cards? No sane person would buy them anyway, but instead go for a Ti4200 or Radeon 9500 Pro - value for money you know... :)
      • by Ninja Master Gara ( 602359 ) on Monday January 06, 2003 @10:02AM (#5025215) Homepage
        The driver optimizations have similar effects on game performance. It's why nVidia's drivers are so highly praised. The benchmark optimization is retarded, but everyone does it. But nVidia gets it with the games too.

        these figures look more realistic than the figures Nvidia quoted when the card was announced!

        This is true. Marketing is evil. Evil evil evil.

        this card will probably not be much more than 20-25% faster than a Radeon 9700

        When DirectX 9 is out the door, it will not only be faster, but look better. Much better. I suppose I'll catch flak for buying into the hype, but I've been blown away every time.

        Besides - who cares about those $400+ gfx cards?

        You're right about that last part. For me, it's just about keeping track of who's leading the industry and what technology my next card will have, when I buy it in a year for $120 :)

        • DX 9 (Score:4, Interesting)

          by Anonymous Brave Guy ( 457657 ) on Monday January 06, 2003 @03:31PM (#5027597)

          Hate to break it to you, but having just spent some days researching this, I concluded that there is nothing that the GeforceFX will support that the Radeon 9000 series won't. nVidia's web site may say differently, but that doesn't make it so.

          The FX may do it faster (though this remains to be seen, of course) but it probably won't do it with better image quality. If anything, I'd say ATI cards have historically produced nicer output where there's any difference at all.

          Hell, even the drivers for the Radeon 9700 are getting good reviews. I thought the season of miracles was a couple of weeks ago. ;-)

    • I'll still bet money the GF FX will be the dominant card come final release.

      Well maybe, but I doubt ATI will sit still. The 9700 is a damn fine card, and they've still got like 3 months to cook up a 9900 or something to combat the GF FX. They're normally just leapfrogging each other, but I think nVidia dropped the ball when the GF FX didn't ship in November like it was supposed to. ATI's last 2 generations of products (8500 & 9700) have been pretty damn good, and they've gotten into the 6-month dev groove that nVidia used to have (before the GF FX). Well, competition is good, and I'm on an iBook anyway, so I can just drool. :(
  • by Aggrazel ( 13616 ) <aggrazel@gmail.com> on Monday January 06, 2003 @09:42AM (#5025068) Journal
    3dMark 2001 is a guesstimate of how fast things will work; it's meant to torture your card, and Game 4 (Nature) is just that, the most punishing thing they could come up with.

    But it is actual game performance that is important to most people, so while you may get better 3dMark scores, most people aren't running that a whole bunch to see those nifty graphics; they'd rather be running games.

    Also, don't forget to mention that all these tests were run with 2xFSAA on.
  • by uncleFester ( 29998 ) on Monday January 06, 2003 @09:42AM (#5025074) Homepage Journal
    For those who don't rtfa, the quake3 framerates for both cards had 2x antialiasing turned on. When thrown in the mix, this becomes a bit more impressive than the simple 1xx fps rate shown, as a number of current cards can achieve mid- or high-100s speeds but with no AA.

    It's not simply the frame rate, but what's actually being generated in that frame.

    =r
    • What I'd love to see is these same tests with 4x or higher AA turned on. Apparently, nVidia disallowed MaximumPC from doing any tests but those, which is probably why the FX comes out on top. I'd bet that the 9700 wins at 4x and up.
      • I'm okay with being patient on that stat; the increase from 2xAA to 4xAA would be an exponential increase in processing, right? This is where driver optimization becomes more critical.

        I figure Nvidia did enough internal testing to know they're safe on 2xAA but want to do further improvement before going higher.
  • From the Article.... (Score:4, Interesting)

    by RebelTycoon ( 584591 ) on Monday January 06, 2003 @09:44AM (#5025089) Homepage
    And if you really twisted our arms, we'd bet money that it will be running on a 0.13-micron core and using 256-bit DDR II memory.

    And if we grab your nuts and twist, you'll confirm this? And if we threaten to cut them off... I think you'll scream just about anything...

    So let me guess, they know what's coming from ATI... But like they said, it's not about bandwidth, it's about GPU processing power, so how is an even bigger pipe that ATi isn't filling going to help....

  • Who cares (Score:5, Funny)

    by nuggz ( 69912 ) on Monday January 06, 2003 @09:49AM (#5025130) Homepage
    Geforce FX scored 209 FPS in Quake 3 (1600x1200x32)
    while the Radeon 9700 Pro attained only 147fps


    So what they are saying is that even at a ridiculous resolution, either card is capable of a higher framerate than your monitor, and your eyes.

    • Re:Who cares (Score:4, Interesting)

      by Lethyos ( 408045 ) on Monday January 06, 2003 @10:19AM (#5025330) Journal
      So what they are saying is that even at a ridiculous resolution, either card is capable of a higher framerate than your monitor, and your eyes.

      With frame-rate to spare, you can achieve really silky-smooth images by syncing with the monitor's refresh. This prevents those ugly redraw lines that can occur from the next frame being drawn right as the display refreshes. So, having a rate higher than your refresh can be useful.
      • Having the _capability_ for higher fps than your monitor is refreshing at is good.

        If you actually have the card draw faster (by disabling wait-for-refresh), it will look torn.
  • by netwiz ( 33291 ) on Monday January 06, 2003 @09:50AM (#5025132) Homepage
    I'm willing to bet that there's another 20-30% in the FX due to driver tuning. nVidia typically releases a new product, then, after about two to three months, releases a driver that actually makes the card fast.

    Plus, if this is the first of the GigaPixel cores, then there should definitely be more in it, and the fact that it's down on memory B/W shouldn't make much of a difference.
    • I would have agreed with you before the advent of the 9700 series by ATI and the broken product cycle because of the lateness of the FX. Nvidia's off their usual product cycle by about 6 months and no longer has the luxuries of (1) being the current performance leader and (2) having a vastly superior solution in silicon and drivers. ATI's drivers are much, much better than I expected (I'm a Rage Fury owner who has been enraged and infuriated at ATI's lack of drivers more times than I care to count). Nvidia must be in shock over the 9700 series. I don't believe that Nvidia has enough overhead in the FX to play their usual driver games.
  • I like Maximumpc magazine, I even subscribed to it for a couple of years, but what is up with their website? It doesn't look terrible, but they could have a lot more content on there. Maximumpc seems to have always viewed their website as a threat to magazine sales. The magazine would have been much better off having a content filled, updated daily, community based site that would attract people to the magazine. I even remember one time a year or two ago when their website was not updated for a few months due to "renovations." Who shuts down for such things besides personal websites consisting of cat pictures and one of those "under construction" animations that came out with Netscape 2.0?
  • Hm... (Score:3, Funny)

    by foxtrot ( 14140 ) on Monday January 06, 2003 @09:52AM (#5025144)
    The Radeon 9700 pushes 147 frames per second.

    The GeForce pushes over 200 frames per second.

    My monitor refreshes 75 times a second.

    Tell me again why I want a top-of-the-line graphics card?

    -JDF
    • To help the US economy. If you don't buy one, you are helping the 'EvilDoers(tm)' and are no better than the terrorists.

    • Have fun spending a few grand to get something readable. 75Hz... Man, your eyes must be bloodshot. Too much flicker.
    • Re:Hm... (Score:5, Interesting)

      by Zathrus ( 232140 ) on Monday January 06, 2003 @10:28AM (#5025400) Homepage
      Hey, why don't we test it with Quake1? Bet we can come close to 1000 fps.

      And yet, even with Q1 running at 1000 fps, UT2k3 only runs at 140 fps. Wonder what something with even more complexity than that would run at... oh, look, there's a benchmark that only did 41 fps.

      Or go look at CodeCult.com's Codecreatures, which does a lovely 6 fps on a 2.5 GHz P4 w/ Radeon 9700 at 1600x1200 with anti-aliasing and anisotropic filtering. And it still doesn't look real.

      Until we have holographic imaging that's indistinguishable from reality the cards aren't there yet. If you don't need/want it, then fine, don't buy it. But whining that it's clearly beyond what's needed is, well, stupid.
    • Because a faster 3D card allows you to:
      - Spend more time doing other things.
      - Make more detailed worlds.

      Running the old games at such high speed doesn't make much sense, of course. But it does make it possible to create even more detailed games.

      Greetings,
    • Re:Hm... (Score:5, Informative)

      by Vireo ( 190514 ) on Monday January 06, 2003 @10:37AM (#5025470)
      In all seriousness... In Quake 3, the physics model is tied to your framerate (i.e. a new snapshot of the "world" is computed at each frame). It is well known among avid quakers that the physics is different for different framerates, and that there is an optimum at 125 FPS. This has nothing to do with the visuals. You can go faster and jump higher when getting 125 FPS. In single-player mode, it is possible to separate world snapshots and visual frames, but not in multi-player mode. So most gamers will try to achieve above 125 FPS and then cap it to 125 (using the com_maxfps in-game variable). It is then important that the card stay above 125 FPS in all maps, on all occasions (especially in heavy battles involving many models and thus many polygons).

      I can't speak for other games, but since the Q3 engine is widely used, that may be the case for some of them too. That partly explains the "need" for high FPS.
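      As a toy illustration of the sampling effect, here is a plain Euler integrator with Quake-ish constants (this is NOT id's actual movement code, which also rounds frame times to whole milliseconds -- part of why 125 FPS, i.e. 8 ms frames, ends up being the sweet spot):

        # The integrator samples the jump arc at different points depending on the
        # timestep, so the highest position the player ever occupies varies with fps.
        JUMP_VELOCITY = 270.0   # units per second, roughly Quake-like
        GRAVITY = 800.0         # units per second squared

        def sampled_apex(fps):
            dt, y, v, peak = 1.0 / fps, 0.0, JUMP_VELOCITY, 0.0
            while y >= 0.0:
                v -= GRAVITY * dt       # gravity applied once per frame
                y += v * dt
                peak = max(peak, y)
            return peak

        for fps in (43, 60, 76, 125, 333):
            print(f"{fps:3d} fps -> highest point sampled: {sampled_apex(fps):.2f} units")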
      • Re:Hm... (Score:3, Insightful)

        by roystgnr ( 4015 )
        It is well known among avid quakers that the physics is different for different framerates,

        And does the Quake 3 client actually transmit its own private physics calculations to the server in a multiplayer game? If so, why would the server believe a client's physics over its own calculations, and why have no cheats sprung up to take advantage of this ridiculous security hole? If not, why does the "different physics" matter?
        • Re:Hm... (Score:3, Interesting)

          by Vireo ( 190514 )
          First of all, I'm not an expert on this matter, so I can't answer you definitively. Obviously there are many security holes in the game, as demonstrated by the many aimbots available. PunkBuster [evenbalance.com] is a technology enabling the detection of aimbot use, and the banning and kicking out of aimbot users, but it does not prevent aimbot use per se. However, to my knowledge there is no cheat allowing physics hacks (e.g. high jumping), so the physics must be computed or checked server-side (?).

          However, the rate at which this is done is certainly less than 125 times per second. Given a ballistic trajectory (e.g. a player during a jump), the trajectory could be checked by the server, but the actual position occupied by the player along that trajectory is updated at the frame rate. At 125 FPS, given the standard height each player can jump in Quake, the player actually spends a frame right at the apogee of the trajectory, which is not the case at other framerates. Thus, certain items in certain maps, for example, are only reachable if your framerate is exactly 125 FPS.

          Thus, the physics doesn't really change with the framerate. It's the way the "world" is sampled (trajectories, etc.) that is the problem here. And this is done client-side. You can decouple the two in single-player mode (i.e. position updated at 125 Hz, but frames rendered at 50 Hz), but in multiplayer, by default, servers do not allow this.

          Sorry I can't be more precise... Do a search for "Quake 3 trickjumping" to know more about this, since many "trickjumps" in Quake require the 125 FPS framerate.
    • Doom 3


      It crawls at a measly 20 fps on my GeForce Ti4600. When this GeForce FX comes out, I'll get one right away, even at $500. I can afford it and it's tax-deductible. I want one because I want the best.
  • by archeopterix ( 594938 ) on Monday January 06, 2003 @09:53AM (#5025153) Journal
    They never test the number of text lines per second in text mode. Or Nethack [nethack.org] FPS. My card does 7.5 FPS in Nethack, if I click the keys really fast.
    • by Zathrus ( 232140 ) on Monday January 06, 2003 @10:09AM (#5025260) Homepage
      While humorous, once upon a time it did matter how fast you could scroll text, and cards would be benchmarked on how fast they could do it in a window (doing it in a full-screen text session was a non-issue).

      I won a Number9 Imagine128 card at Comdex back in the early 90s... I distinctly remember being amazed since for the first time ever it was faster to scroll text in a window than it was full-screen.

      Nowadays it's a total non-issue of course.

      Oh, and I get far better FPS in Nethack. You're just a slow typist ;)
  • Not very fair (Score:5, Insightful)

    by GauteL ( 29207 ) on Monday January 06, 2003 @10:02AM (#5025212)
    Comparing future products against real shipping products is not very fair without at least keeping this in mind. This article barely mentions it.

    ATI might very well ship an improved version around the time GeForce FX ships.
  • 4xAA (Score:5, Insightful)

    by cca93014 ( 466820 ) on Monday January 06, 2003 @10:16AM (#5025305) Homepage
    They need to run the demos with 4xAA. The 9700 and NV30 are so fast as to make FPS irrelevant and eye-candy relevant.

    If the game is running at 100 fps people are going to up the eye-candy, right?

    Assuming this is the case, I seem to remember the 9700 getting very similar scores whether the card was set to no AA, 2xAA or 4xAA, i.e. the AA processing was almost (but not quite) 'free'.

    I know the benchmarks are very very early and it really needs to get the full treatment from a hardware site, but the important figures IMHO are ones where the card is set to run everything maxed out...I have a feeling the NV30 is not going to be in such a prominent position in that instance...
  • by eviltypeguy ( 521224 ) on Monday January 06, 2003 @10:39AM (#5025479)
    The GeForce FX has more hardware capability and increased hardware precision than the *CURRENT* revision of the R300, whose pipeline carries only 96-bit effective floating-point precision. Hopefully, this oversight will be corrected by ATi with the R350 or R400. As Carmack said:


    NVIDIA is the first of the consumer graphics chip companies to firmly understand what is going to be happening with the convergence of consumer real-time and professional offline rendering. The architectural decision in NVIDIA's next-generation GPU to allow full floating point precision all the way to the frame buffer and texture fetch, instead of just in internal paths, is a good example of far-sighted planning. It has been obvious to me for some time how things are going to come together, but NVIDIA has made moves on both the technical and company strategic fronts that are going to accelerate the timetable over my original estimations.

    My current work on Doom is designed around what was made possible on the original GeForce, and reaches an optimal implementation on NVIDIA's next-generation GPU. My next generation of work will be designed around what is made possible on NVIDIA's next-generation GPU.

    - John Carmack, id Software


    So even if the GeForce FX is a bit slower for some things, games that use full DX9/OpenGL features will get better-looking graphics thanks to the increased hardware precision. People using 3D programs like Maya with the Cg plugin will notice the biggest difference, IMO. And at this point, NVidia's shaders are far better geared to the professional 3D graphics industry than ATi's *current* offering. This might encourage many developers to take advantage of extra GeForce FX features instead of ATi features.

    (Source URL for quote: http://www.nvidia.com/content/areyouready/twimtbp.html)
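    To make the precision point concrete: the R300 carries 24-bit floats per channel (a 16-bit mantissa) through its pixel pipeline, where the FX can carry 32-bit (23-bit mantissa). A crude Python simulation of what fewer mantissa bits cost over a long chain of dependent shader ops (this only mimics the rounding, not any real GPU format):

      import math

      def quantize(x, mant_bits):
          """Round x to mant_bits of mantissa (exponent range ignored)."""
          if x == 0.0:
              return 0.0
          m, e = math.frexp(x)               # m is in [0.5, 1)
          scale = 1 << mant_bits
          return math.ldexp(round(m * scale) / scale, e)

      def long_shader(mant_bits, steps=200):
          """Feed a value through many dependent multiply-adds, rounding each time."""
          x = quantize(1.0 / 3.0, mant_bits)
          for _ in range(steps):
              x = quantize(x * 1.0003 + 0.0001, mant_bits)
          return x

      print("16-bit mantissa (FP24-ish):", long_shader(16))
      print("23-bit mantissa (FP32-ish):", long_shader(23))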
  • word choice (Score:3, Funny)

    by Transcendent ( 204992 ) on Monday January 06, 2003 @11:12AM (#5025706)
    ...while the Radeon 9700 Pro attained only 147fps in Quake 3...

    Only?!?
  • by stratjakt ( 596332 ) on Monday January 06, 2003 @11:18AM (#5025742) Journal
    Without the goodies on, even the Ti4600 can "outperform" the R9700.

    Hard to imagine a 'serious review' site would neglect to test these features. I don't give a crap about 400 average FPS in quake, but I do care if it drops to 14 with all the enhancements turned on. But then they were trying to make the GeForceFX look like it's leaps and bounds better.

    I'd imagine it's still the case - the 9700 is still the bandwidth king. Personally, I don't care about faster (when its already faster than my monitor can display and brain can process). My next upgrade will be motivated because it will look better.

    The GeforceFX isn't something that's going to leave the 9700 in the dust - it's something that should have come out 6 months ago to compete head-to-head with ATI.

    At any rate, after putting together a couple of cheap flex-atx pcs with onboard S4s (shared memory - Shuttle FV25 in case anyone cares), I'm surprised at how little GPU horsepower is needed to actually play most games.

    Even UT2k3 is playable on these little guys (albeit not 1600x1200 with all the goodies turned on, but playable). I'm pretty sure my "outdated" radeon 64vivo will play Doom 3 when it goes gold.

    Anyhow, my point is that cards have been displaying 'fast enough' for a while - I mean, we don't measure a card's performance in polygons anymore. They need to "look better", as in more natural, smoother, more TV-like.
  • by mycal ( 135781 ) on Monday January 06, 2003 @11:32AM (#5025838) Journal

    Has anyone ever seen Nvidia driver code? It is littered with benchmark/game-specific code. // if Quake is running, don't do these transforms...

    So basically what Nvidia has done is do as little processing as possible when certain apps are running, or optimize for those specific apps.

    So their benchmarks are good if you are running those apps, but bad if not.

    www.mycal.net
    • That's a pretty serious allegation. Can you back it up? How did *you* get access to the code? Can you provide evidence? Moreover, how does the code detect that the game is running? It can't simply be the executable name, given that the Quack3 fiasco took place when ATI tried this stunt.

      No disrespect intended, but a claim like that does not stand on its own.

  • by MrEd ( 60684 ) <[tonedog] [at] [hailmail.net]> on Monday January 06, 2003 @11:59AM (#5026021)


    ...Did anyone test the cards running quack3?

  • dx8 vs dx9 (Score:3, Insightful)

    by gedanken ( 24390 ) on Monday January 06, 2003 @12:03PM (#5026050)
    "GeforceFX ... 41fps in 3dMark Game4 demo, while the Radeon 9700 ... came back with 45fps in the 3dMark test."

    Who cares? The 3dMark test is designed around DirectX 8, while both of those cards are designed to take advantage of DirectX 9. Wait until the next 3dMark release; then you'll have a valid test.
  • by Aqua OS X ( 458522 ) on Monday January 06, 2003 @01:08PM (#5026465)
    Considering that ATI has a new card coming out soon, I doubt nVidia will get to be king of the hill for very long.
  • by waltc ( 546961 ) on Monday January 06, 2003 @02:28PM (#5027100)
    These benches are more of the same passed-around mush that nVidia's been handing out since October. Wake up and smell the coffee, people--these are the same programs nVidia handed out in the October handouts for benchmarking. Did the reviewer have a gun to the back of his head, so that he couldn't manage to run *anything* else? How convenient.

    By the author's own words, this was no review. There are no 6x, 8x FSAA tests at all, although these are supposed capabilities of the GF FX--there are no screenshots for comparison--in other words, there is absolutely nothing to prove this ever took place. There are no anisotropic filtering tests, we don't know what CPU/system the Radeon 9700 was benchmarked on--nothing--absolutely nothing of interest that you would normally see in a real review is present. Even if you believe the author--he says unapologetically he was under direct duress from nVidia as far as what he was permitted to show AND SAY.

    Already people on the Rage3D forums are talking about how much slower the 9700P speeds are in this promotional propaganda piece than what they themselves can get with their systems at home.

    Also....what, pray tell, would Alienware be doing with a NVIDIA beta prototype? As a small OEM I would expect that if anything Alienware would have an OEM beta version of the card--possibly. Certainly not a nVidia version of a prototype card! If nVidia needs Alienware to beta test its upcoming card this must mean nVidia hasn't even finished the prototype reference design yet and nVidia's OEMs haven't even begun production!

    Here's what I think it is: a paid-for promotional piece which is designed to deter people from going ahead and buying an ATI 9700 Pro. What it most certainly is not is an actual review of the product--by the words of the author himself. What I still can't get over is that these are the very same benchmarked programs nVidia was handing out in October!

    When nVidia starts sending out cards to reviewers with driver sets and saying, "Have at it--review it any way you like!" that's when I'll start listening.

    • I'm the author of the story you're talking about, and I generally don't respond to criticism like this, but I think it's important to respond here.

      We looked at this board during the second week in December. It was a very early board, and simply didn't run a large number of applications. The situation is a pretty common one for print pubs. Since we have a lead time that ranges from 2-6 weeks between the time we write stuff and the time that magazines get to readers, we occasionally take a look at preview hardware with special terms negotiated with the vendors in advance. Unlike some other mags, we ALWAYS make it abundantly clear that this is a preview, not a full review. Furthermore, we always make it clear when a vendor specified we run specific benchmarks in these previews. Naturally, in our full reviews, we run whatever benchmarks we please at whatever resolution we like.

      Anyway, Alienware wanted nVidia to get this sample in time for our preview story, but the drivers were very raw. In order to make our deadline and get an early look at the board, we agreed to only run a small subset of benchmarks, with a big huge disclaimer that said "Hey, nVidia would only let us run these benchmarks", which we did.

      An nVidia rep hand-delivered the board the same day that the Alienware system arrived, watched me install it, installed the drivers, watched me set up and run the benchmarks, then pulled the card, obliterated the drivers and went on their way. After that, we restored the pre-nVidia hard drive image and benchmarked the Radeon 9700 in the exact same machine.

      We don't run other people's benchmarks in Maximum PC. If you see a number on the website or in the magazine, it was run by a staff member in our lab. Period.

      I can't understand why you'd think this is a positive thing for nVidia. The overall tone of the story was that the performance is a little weak for something that will cost an arm and a leg, and take up two PCI slots. Heck, it doesn't even beat the 6 month old Radeon 9700 in the programmable shader test, which is the only one that really matters in my eyes. I don't care about 30 more fps in Quake 3. I want a card that will be fast in programmable shader games later this year, and the GeForce FX doesn't appear to be that.

      Will Smith
  • by GoSpeedRacerGo ( 587091 ) on Monday January 06, 2003 @02:50PM (#5027245)
    (ooops)...

    One quick point to address all the 150 fps in Quake jokes:

    Frame rate consistency is what is most important (by far). A game that runs at a solid 30 fps will feel better than a game that runs at 60 fps some of the time but then bops back and forth between 30 fps and 60 fps.

    The VisSim industry has done a better job of saying "we only need 60 Hz (fps), but we'd better never ever see you dip below that or you are out!" This forces hardware and software to be optimized for locking at 60 fps. I can tell you that a 60 fps Air Force flight simulator will always feel higher-performance than a souped-up PC running Quake at 100 fps but dipping down to 50 fps or worse when things get hairy.

    The biggest evidence that this issue is treated as unimportant in PC gaming is the number of people or games that run with vertical blank (vblank) synchronization turned off. This is wrong wrong wrong in my opinion, but most gamers are willing to live with enormous visual artifacts from partially completed frames to get that max fps and lowest input latency when things get tough on the system.

    So, to all those who mock high-fps benchmarks, I challenge you to post information on a recent 3D game, gfx card, system, and config that allows you to play with all the gfx features on (or those that are important to you), with vsync on, using a 60 Hz display and only double buffering, which locks at a solid 60 fps without ever dipping below that.

    That is when things have become fast enough for _that particular_ game.

    Products like the NV30 and R300 help push the bar but are still not overkill. Take the above challenge and now turn on 16x multi-sampled FSAA (same as an SGI Onyx/IR), 8x anisotropic filtering (often more important than FSAA), 1600x1024 (the native resolution of my DFP), 128-bit pixel depth (which NV30 can do before scan-out), and include very complex vertex and fragment (pixel) programs. With all of that, turn vsync on (as it should be) and have this entire combination run at 60 frames per second regardless of what is going on in the game at any and every given moment.

    When we can do all of that, we are finished. :-)

    The problem then becomes the content creators, who continue to push the envelope. GeForce FX is launching with a demo that has Jurassic Park/Toy Story-quality rendering tied to real-time dynamics and feature-film-quality animation. However, it is not quite to the level of Gollum in LOTR: The Two Towers. Imagine Doom4 with 50 characters on the screen that all look like Yoda or Gollum. My point is that there is always room and applications for higher performance.

    100 of these running around locked at 60 fps is the new goal: http://notendur.centrum.is/~czar/misc/gollum.jpg
