Futuremark Replies to Nvidia's Claims
Nathan writes "Tero Sarkkinen, Executive Vice President of Sales and Marketing at Futuremark, has commented on the claims by Nvidia that 3DMark2003 intentionally puts the GeForce FX in a bad light, after Nvidia declined to become a member of Futuremark's beta program. This issue looks like it will get worse before it gets better." ATI also seems to be guilty of tweaking their drivers to recognize 3DMark.
I don't care how fast it is... (Score:4, Insightful)
open source (Score:5, Insightful)
Evaluating the evaluation. (Score:5, Insightful)
And, since one of the main reasons people will buy this is to play flashy and pretty games, ignoring the performance in those games is ridiculous.
Once Again A Call For Open Source Benchmarks (Score:4, Insightful)
The alternative is what we have now: hand-waving voodoo. Not only do we have to take the vendor's word that they aren't monkeying around with the driver to match execution of the benchmark, but now we have to question where the allegiance of the benchmark makers lies.
Confused (Score:3, Insightful)
I'm confused about what this means. Is the 1.9% difference in ATI performance between Game Test 4 with correct and modified names, or between the current driver and an older version?
Most people here seem to think it's the latter, and I'd agree that they did nothing wrong if that's the case. But it's not obvious to me that they're not accused of the same thing as NVIDIA.
Respect (Score:1, Insightful)
Re:If it weren't for standards ...... (Score:5, Insightful)
Quake III at 300 FPS (Score:5, Insightful)
The issue with low FPS is a game problem 9 out of 10 times. The faster the video card, the less the game development houses work to streamline and improve their framerate.
Re:Once Again A Call For Open Source Benchmarks (Score:5, Insightful)
Better to have open source drivers so we can inspect the driver for cheating/optimisations.
In fact, open source them all if the hard numbers are that important!
Re:ATi wasn't so bad (Score:2, Insightful)
Cheating??? (Score:3, Insightful)
Nvidia is cheating and acting like a child, er, large corporation...but that isn't at all what Ati is doing.
Re:Same old story.... (Score:5, Insightful)
I don't even bother with 3DMark scores when I read reviews; I skip straight to the tested games and get a look at the FPS at various levels of detail.
Then it's easy to realize that card A gives 201 FPS, card B gives 199 FPS, and the answer is: buy whichever is cheaper.
This gives me much more useful information that relates to what I want the card for - playing games.
but this doesnt seem fishy? (Score:3, Insightful)
I've read 5-6 reviews of the FX 5900 and everyone seems to think it's great, and rightly gives Nvidia the 3D crown (especially concerning Doom ]|[).
If you read the interview, it's even brought up that the 5900 seems to do just fine in all other benchmarks; only Futuremark seems to give it a hard time, and I'm not buying that crap about Doom 3 benchmarks not being readily available.
If I remember, Tom's had a good review of that....
Re:Cheating??? (Score:3, Insightful)
Coincidence? (Score:4, Insightful)
What upsets me is not that you lied to me, but that from now on I can no longer believe you. -- Nietzsche
Classic.
This is a good thing (Score:3, Insightful)
Think of it this way: when's the last time you saw PC World roast a product that truly deserved it? How many review sites gave WinMe a thumbs up when it's widely viewed in the industry as MS's worst OS to date? We (the public) simply aren't being served if the test companies are cooperating with the companies they're testing. Look, if a testing company, review site or whatever other lab doesn't occasionally come out and just say "this sucks", then you know they aren't credible. There's too much out there that sucks, and too few reviewers willing to let the public know before they waste their money.
It's the same reasoning that dictates why Consumer Reports will buy their cars anonymously from dealers using third parties instead of getting "special" delivery directly from the manufacturer. What we should really take from the behaviour we're observing so far is an impetus to develop an open source benchmark application. By doing this we would assure that the results can't be bought, the way they can be in so many other industries.
Re:but this doesnt seem fishy? (Score:2, Insightful)
Re:nVidia vs. ATI (Score:5, Insightful)
ATI recognized the 3dmark executable and special cased for it. Which is misleading and wrong. The performance is enhanced for 3DMark and 3DMark alone.
So what? Who cares? (Score:2, Insightful)
All this proves is that a benchmark is a highly isolated incident of observable performance.
For example, most charts I see rate the P4 as "faster than the Athlon" at the same clock rate. Yet when I benchmarked my bignum math code I found that the Athlon completely kicked the P4's ass.
[K7]
http://iahu.ca:8080/ltm_log/k7/index.html
[P4]
http://iahu.ca:8080/ltm_log/p4/index.html
Does this single test prove the Athlon is faster than the P4? Hell no. It proves that using portable ISO C source code to do bignum math is faster on an Athlon. If I used SSE2 the P4 would probably smoke the Athlon, etc...
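A trimmed-down sketch of that kind of comparison (mul_accumulate merely stands in for the real bignum routines; only the "time the same portable loop on each CPU" pattern matters here):

#include <cstdio>
#include <ctime>

// Stand-in for the real bignum code: a portable integer multiply-add loop
// with no CPU-specific intrinsics, so both chips run identical source.
static unsigned long mul_accumulate(unsigned long n)
{
    unsigned long acc = 1;
    for (unsigned long i = 1; i <= n; i++)
        acc = acc * 2654435761UL + i;
    return acc;
}

int main()
{
    std::clock_t start = std::clock();
    unsigned long sink = mul_accumulate(100000000UL);      // same workload on each CPU
    double secs = double(std::clock() - start) / CLOCKS_PER_SEC;
    std::printf("checksum %lu in %.3f s\n", sink, secs);   // compare the times across machines
    return 0;
}

It says nothing about which chip is "faster" in general, only which one runs this particular portable loop faster, which is exactly the point being made.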
Can we stop putting stock into this BS?
For the record I have a Ti200. It's decently fast [50fps at 1280x1024 in UT2] and there are no visible artifacts or other "cheats". It works nicely. So if nVIDIA cheated to make their 3dmark score better, all the power to them. Screwing around with meaningless benchmarks is a good way to discredit them.
Tom
Re:driver tweaking (Score:5, Insightful)
Video cards have a simple job when it comes to resolution and refresh rate: When using resolution X, use the best refresh rate Y, and if I have to tell it what that is, so be it. They can't do this.
A lot of this is due to games and their poor detection of capabilities, and lack of effort to try for the best refresh rate. However, it's hard to pin it all on them. Games generally don't have trouble detecting if you have EAX capability, or detecting how many axis are on your joystick, or whether you have a third button on your mouse. Sure, I've seen problems in these areas too, but the video card situation feels like someone just invented VGA yesterday and the video card manufacturers are struggling to make it all work.
I'm using the Cat 3.4 drivers. I can set the refresh separately too, but a few times both the ATI driver tabs and the XP display properties reported that I was in one refresh rate, but my monitor OSD said differently. Inexcusable. It was due to that "maximum capability" setting, and as a result it didn't mind lying to me as long as it avoided going over the maximum. Glad it was able to "protect" me.
But that's not the half of it. I set the refresh rate, and when entering a game, it changes, usually back to 60hz. When entering games, the resolution changes a lot, and it seems completely random what it ends up on.
Other problems I'm having include that mode switching in general takes three times as long as the NV card. Switching back to Windows from games results in a very long black screen, and until Cat 3.4 came along, I couldn't switch out of CS/HL at all without crashing the entire OS.
Let me give you an example of my typical day. I set the display capability to 1280x1024 @ 85hz because I want 85hz in CS. In CS, I accidentally had the mode set for 1600x1200. With ATI3.3, it would crash. With ATI3.4, it would actually draw a 1600x1200 screen in a 1280x1024 window. Yep, it was cut off, with part of the screen literally extending off the monitor into the void.
I change the capability to 1600x1200 @ 75hz and play a while. I quit and fire up BF1942, which due to CPU constraints, runs better at 1024x768. But, I'm at 75hz, because I have no way to tell the card that while it only supports 1600x1200 @ 75hz, it does 85hz in every other mode. I have to change the capability to 1280x1024 @ 85hz. BF1942 runs.
I run another game at 1600x1200. Unlike CS, where it drew off the screen, this one would simply blackscreen as a result of trying to go into 1600x1200 @ 85hz (since 85hz is my "maximum" refresh rate). I reboot, and the first few times it happened, I looked for game patches before realizing that this stupid ATI driver was the cause.
The constant mode switches between games take several seconds, and perform an odd "screen wiping" effect that reeks of cheesy hardware. The NV switches modes smooth as butter.
I'm scared as hell to hook this thing into my TV. It might try to pump 2048x1024 @ 100hz at it and cause an explosion.
Re:nVidia vs. ATI (Score:2, Insightful)
The clipping hack however is an obvious no-no and NVidia should simply admit it and shut the f*ck up.
Re:If it weren't for standards ...... (Score:4, Insightful)
No, but they use shaders, which are generally only supported on DX9 cards and a few older ATI cards. Just because you have a PS2.0 card doesn't mean you have to use PS2.0 if PS1.4 can do the same: why deliberately make more work for yourself by not supporting older cards?
"Three of four 3DMark03 demos use Pixel Shader 1.4 which was introduced with DirectX8.1 and isn't natively supported by NVIDIA cards"
Support for PS1.4 is a requirement of DX9, so if the GF FX is a DX9 card then it supports PS1.4, and your claim is therefore bogus. If it doesn't support PS1.4, then it's not a real DX9 card.
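For reference, whether a driver exposes PS1.4 at all is something any application (or benchmark) can simply ask the DirectX caps bits. A rough, untested sketch against the DX8.1 headers, with identifiers recalled from memory:

#include <d3d8.h>
#include <cstdio>

int main()
{
    // Create the D3D8 object and query the HAL device's capabilities.
    IDirect3D8 *d3d = Direct3DCreate8(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DCAPS8 caps;
    if (SUCCEEDED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps))) {
        // A card reporting PS2.0 also satisfies this check, since the
        // version numbers are ordered.
        bool ps14 = caps.PixelShaderVersion >= D3DPS_VERSION(1, 4);
        std::printf("Pixel Shader 1.4 %s exposed by the driver\n", ps14 ? "is" : "is not");
    }
    d3d->Release();
    return 0;
}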
Re:nVidia vs. ATI (Score:4, Insightful)
Who cares about benchmarks anyway? (Score:3, Insightful)
I don't believe claims anyway; ATI says their card is faster than NVidia's. NVidia says theirs is faster than ATI's. Bleh....
It's quite simple really (Score:4, Insightful)
Point is, they can come out of this wearing the white hat, because they were the first to be such good guys about the issue.
The fact is, even with all Nvidia optimizations in place, their high-end card will just barely edge out a 9800 Pro without optimizations. Add to this the fact that ATI, 3dmark and the community will hound them and discount Nvidia's optimizations until they are removed, and you've got an all-out win for ATI.
Remember folks: everyone cheats. Fools take things too far and get caught. ATI has played the fool before, Nvidia plays it now; that is the game.
Re:What a mess! (Score:1, Insightful)
Tired of hearing this. (Score:3, Insightful)
And how have people figured this out time and time again? Oh, they renamed the executable...
Why does the benchmarking software not rename the executable to some-254-character-long-file-name-random-string.exe?
I'm sure that there is some way that Nvidia and ATI could get around even this, but what are they gonna do, make a 75MB driver in retaliation to what the benchmark companies do?
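A crude launcher along those lines is easy to sketch (purely illustrative; the source filename is assumed, error handling is minimal, and a driver could still fingerprint the app by its rendering calls rather than its name):

#include <cstdio>
#include <cstdlib>
#include <ctime>

// Copy src to dst byte-for-byte; bare-bones error handling for brevity.
static bool copy_file(const char *src, const char *dst)
{
    std::FILE *in = std::fopen(src, "rb");
    std::FILE *out = std::fopen(dst, "wb");
    if (!in || !out) { if (in) std::fclose(in); if (out) std::fclose(out); return false; }
    char buf[8192];
    std::size_t n;
    while ((n = std::fread(buf, 1, sizeof buf, in)) > 0)
        std::fwrite(buf, 1, n, out);
    std::fclose(in);
    std::fclose(out);
    return true;
}

int main()
{
    char name[64];
    std::srand((unsigned)std::time(0));
    std::sprintf(name, "bench_%08x.exe", (unsigned)std::rand()); // a name the driver can't key on
    if (!copy_file("3DMark03.exe", name))                        // source filename assumed here
        return 1;
    return std::system(name);                                     // run the renamed copy
}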
Re:The real reason this is important. (Score:2, Insightful)
Re:If it weren't for standards ...... (Score:4, Insightful)
So this is how it should look, properly:
- Game 1: no shaders at all, only static T&L (DX7-class effects, given comparatively little weighting in overall score)
- Game 2: vertex shader 1.1 and pixel shader 1.4 (natively supported by GFFX, ATI Radeon 8500 and above)
- Game 3: vertex shader 1.1 and pixel shader 1.4 (natively supported by GFFX, ATI Radeon 8500 and above)
- Game 4: vertex shader 2.0 and pixel shader 1.4+2.0 (DX9 cards only, Radeon 9x00 and GFFX)
Nvidia's lack of support for PS1.4 is their own design choice, and now they have to live with it. The GF4 was released after DX8.1 came out, which contained the PS1.4 spec, but they chose not to support it. The ATI Radeon 8500 and above have no problem with this because they supported DX8.1 from the get-go, but Nvidia never moved past their 8.0 support. As was previously mentioned in the article, Nvidia was participating in the developer's beta until Dec 2002, well into the development period for 3DM03 and a month after they paper-launched the GFFX, so they knew what was going on with the benchmark long beforehand and didn't change their stance for quite a while. Presumably, as a beta member up until Dec 2002, if they didn't like the extensive use of PS1.4 they could have said something earlier.
The key to 3DM03 is its goal of being a forward-looking benchmark. Both DX8 and DX9 games are currently in development, and many DX7 games are still around (remember, HL2 doesn't require anything above a DX6 card), so in this respect 3DM03 is still fair in its test design.
Driver strategies (Score:5, Insightful)
Rewriting a shader so that it does exactly the same thing, but in a more efficient way, is generally acceptable compiler optimization, but there is a range of defensibility from completely generic instruction scheduling that helps almost everyone, to exact shader comparisons that only help one specific application. Full shader comparisons are morally grungy, but not deeply evil.
The significant issue that clouds current ATI / Nvidia comparisons is fragment shader precision. Nvidia can work at 12 bit integer, 16 bit float, and 32 bit float. ATI works only at 24 bit float. There isn't actually a mode where they can be exactly compared. DX9 and ARB_fragment_program assume 32 bit float operation, and ATI just converts everything to 24 bit. For just about any given set of operations, the Nvidia card operating at 16 bit float will be faster than the ATI, while the Nvidia operating at 32 bit float will be slower. When DOOM runs the NV30 specific fragment shader, it is faster than the ATI, while if they both run the ARB2 shader, the ATI is faster.
When the output goes to a normal 32 bit framebuffer, as all current tests do, it is possible for Nvidia to analyze data flow from textures, constants, and attributes, and change many 32 bit operations to 16 or even 12 bit operations with absolutely no loss of quality or functionality. This is completely acceptable, and will benefit all applications, but will almost certainly induce hard to find bugs in the shader compiler. You can really go overboard with this -- if you wanted every last possible precision savings, you would need to examine texture dimensions and track vertex buffer data ranges for each shader binding. That would be a really poor architectural decision, but benchmark pressure pushes vendors to such lengths if they avoid outright cheating. If really aggressive compiler optimizations are implemented, I hope they include a hint or pragma for "debug mode" that skips all the optimizations.
John Carmack
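A toy numerical check of the fp16-into-an-8-bit-framebuffer point above (a sketch, not anything from an actual driver; round_to_half is a made-up helper that simulates rounding a value to an 11-bit significand, as half-precision floats have):

#include <cmath>
#include <cstdio>

// Simulate demoting an fp32 shader result to fp16 by rounding the
// significand to 11 bits (exponent range doesn't matter for values in [0,1]).
static float round_to_half(float x)
{
    if (x == 0.0f) return 0.0f;
    int e;
    float m = std::frexp(x, &e);                 // x = m * 2^e, m in [0.5, 1)
    m = std::round(m * 2048.0f) / 2048.0f;       // keep 11 significand bits
    return std::ldexp(m, e);
}

int main()
{
    int mismatches = 0;
    for (int k = 0; k <= 255; ++k) {
        float v32 = k / 255.0f;                       // fp32 shader result
        float v16 = round_to_half(v32);               // same value pushed through fp16
        int out32 = (int)(v32 * 255.0f + 0.5f);       // quantize to 8-bit framebuffer
        int out16 = (int)(v16 * 255.0f + 0.5f);
        if (out32 != out16) ++mismatches;
    }
    std::printf("mismatching framebuffer values: %d of 256\n", mismatches);
    return 0;
}

Zero mismatches out of 256 is the expected output for a single operation; error can still accumulate across a long shader, which is why the data-flow analysis Carmack describes is needed before demoting operations.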
Re:Does anyone think it's coincidence (Score:1, Insightful)
Re:Driver strategies (Score:1, Insightful)
It sort of irks me when I see people replying to your post with silly or snide remarks. Don't let them keep you from posting. For every one of them, there are a hundred more Slashdot readers who appreciate reading your comments.
Sincerely,
AC
Assumed is not the same as Minimum (Score:2, Insightful)
ATIs "optimized code" (Score:3, Insightful)
This isn't about Nvidia vs. ATI or about defending Nvidia; what NV did with the clipping planes was even worse. It's just that there is no justification for cheating on benchmarks, even if the graphical results are the same. The benchmarks should be an indicator of how the card will perform in real-world scenarios (i.e. games), and any driver tweaks that are benchmark-specific but don't help performance otherwise are just cheating and make-believe.
Re:Driver strategies (Score:3, Insightful)
Not to nitpick, but the ARB specs do specify a minimum precision for ARB_fragment_program. From the latest GL documentation:
RESOLVED: We've decided not to include precision queries.
Implementations are expected to meet or exceed the precision guidelines set forth in the core GL spec, section 2.1.1, p. 6, as amended by this extension.
To summarize section 2.1.1, the maximum representable magnitude of colors must be at least 2^10, while the maximum representable magnitude of other floating-point values must be at least 2^32. The individual results of floating-point operations must be accurate to about 1 part in 10^5.
I'll leave the tallying of what fp precision that comes out to for those more anal than myself. Or maybe just those looking for a chance to correct JC on a rather obscure technical detail.
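For what it's worth, the tally is short: 1 part in 10^5 needs about log2(10^5) ≈ 16.6, i.e. 17 significand bits, which fp16 (10+1 bits) misses, ATI's fp24 (assuming the usual 16+1-bit significand) just meets, and fp32 (23+1 bits) clears easily. A few lines make the same point:

#include <cmath>
#include <cstdio>

int main()
{
    // Bits needed so the worst-case relative rounding error is <= 1e-5.
    std::printf("bits needed: %.2f\n", std::log2(1.0e5));   // ~16.61 -> 17 bits

    struct { const char *name; int bits; } fmts[] = {
        { "fp16 (NV half, 10+1 bits)",            11 },
        { "fp24 (ATI R300, 16+1 bits assumed)",   17 },
        { "fp32 (IEEE single, 23+1 bits)",        24 },
    };
    for (auto &f : fmts)
        std::printf("%-36s rel. error ~%.2g  %s\n", f.name, std::exp2(-f.bits),
                    std::exp2(-f.bits) <= 1.0e-5 ? "meets 1e-5" : "misses 1e-5");
    return 0;
}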