NVidia Accused of Inflating Benchmarks
Junky191 writes "With the NVidia GeForce FX 5900 recently released, this new high-end card seems to beat out ATI's 9800 Pro, yet things are not as they appear. NVidia appears to be cheating in their drivers, inflating benchmark scores by cutting corners and causing scenes to be rendered improperly. Check out the ExtremeTech test results (especially their screenshots of garbled frames)."
Giving themselves a bad name (Score:3, Interesting)
Re: seems misleading.. (Score:3, Interesting)
As someone who has always been impressed by nVidia's driver updates and the benefits they can give each time, I am going to wait to see if it really is somethin
Re:Giving themselves a bad name (Score:5, Insightful)
Oh, c'mon. Benchmark fudging has been an ongoing tradition in the computer field. When I was doing computer testing for InfoWorld, I found some people in a vendor's organization would try to overclock computers so they would do better in the automated benchmarks. ZD Labs found some people who "played" the BAPco graphics benchmarks to earn better scores by detecting a benchmark was running and cutting corners.
<Obligatory-Microsoft-bash>
One of the early players was Microsoft, with its C compiler. I have it from a source in Microsoft that when the Byte C-compiler benchmark figures were published in the early 1980s, Microsoft didn't like being at the back of the pack. "It would take six months to fix the optimizer right." It would take two weeks, though, to put in recognizers for the common benchmarks of the time and insert hand-optimized "canned code" to better their score.</Obligatory-Microsoft-bash>
Microsoft wasn't the only one. How about a certain three-letter company who fudged their software? You have multiple right answers to this one. :)
When the SPECmark people first formed their benchmark committee, they knew of these practices and so they made the decision that SPECmarks were to be based on real programs, with known input and output, and the output was checked for correct answers before the execution times would be used.
And now you know why reputable testing organizations who use artificial workloads check their work with real applications: to catch the cheaters.
Let me reiterate an earlier comment by Alan Partridge: it's idiots who think that a less-than-one-percent difference in performance is significant. (Whether the shoe fits you is something you have to decide for yourself.) What benchmark articles don't tell you is the spread of results they obtain through multiple testing cycles. When I was doing benchmark testing at InfoWorld, it was common for me to see trial-to-trial spreads of three percent in CPU benchmarks, and broader spreads than that with hard-disk benchmarks. Editors were unwilling to admit to readers that results were collected that formed a "cloud" -- they wanted a SINGLE number to put in print. ("Don't confuse the reader with facts, I want to make the point and move on.") I see that in the years since I was doing this full-time that editors are still insisting on "keep it simple" even when it's wrong.
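As a sketch of what reporting that "cloud" could look like (the numbers below are made up, not from any real test), report the mean and sample standard deviation over the trials instead of quoting a single figure:

    #include <cmath>
    #include <iostream>
    #include <vector>

    int main() {
        // Hypothetical times (ms) from repeated runs of the same benchmark on one card.
        std::vector<double> trials = {41.2, 40.8, 42.5, 41.9, 40.6, 42.1};

        double sum = 0.0;
        for (double t : trials) sum += t;
        const double mean = sum / trials.size();

        double sq = 0.0;
        for (double t : trials) sq += (t - mean) * (t - mean);
        const double stddev = std::sqrt(sq / (trials.size() - 1));  // sample standard deviation

        std::cout << "mean " << mean << " ms, stddev " << stddev << " ms ("
                  << 100.0 * stddev / mean << "% trial-to-trial spread)\n";
    }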
Another observation: when I would trace back hardware and software that was played with, the response from upper management was universally astonishment. They would fall over backwards to ensure we got a production piece of equipment. To some extent, I believed their protestations, especially when bearded during their visits to our Labs. One computer company (name withheld to protect the long-dead guilty) was amazed when we took them into the lab and opened up their box. We pointed out that someone had poured White-Out over the crystal can, and that when we carefully removed the layer of gunk the crystal was 20% faster than usual. Talk about over-clocking!
So when someone says "Nvidia is guilty of lying" I say "prove it," and by that I mean you have to show with positive proof that the benchmark fudging was authorized by top management. I can't tell from the article, but I suspect someone pulled a fast one, and soon will be joining the very long high-technology bread line.
Pray the benchmarkers will always check their work.
And remember, the best benchmark is YOUR application.
Re:Giving themselves a bad name (Score:4, Insightful)
The course focuses on making decisions based on statistics. In the second week of class, we learned what a standard deviation was, and we never stopped using it throughout the semester.
But perhaps ignorance would explain business tactics of the 90's.
Voodoo economics (Score:3, Interesting)
Mod parent up (Score:3, Interesting)
Benchmarks, even so-called 'real-world' benchmarks, are a poor indicator of system performance. Sites like Tom's Hardware and Anandtech exist as a kind of group therapy for hardcore gamers and 'performance enthusiasts'. You know if you read their "technical" articles that they understand as much about the inner workings of a computer as the rice rocket driver with the huge spoiler and chrome wheel covers understands about his car's engine.
These sites always have an incestuous r
What's the big news? (Score:5, Insightful)
Re:What's the big news? (Score:3, Interesting)
Re:What's the big news? (Score:3, Interesting)
Re:What's the big news? (Score:5, Insightful)
This is always the case with any chosen performance measurement. Look at managers asked to bring quarterly profits. They tend to be extremely shortsighted...
Moral of the story: be very wary of how you measure, and always add a qualitative side to your review (e.g. in this case, "driver readiness/completeness").
Problem is the benchmarks themselves (Score:5, Interesting)
More importantly why is any benchmark rendering the exact same scene each time? Nobody would test an FPU based on how many times per second it could take the square root of seven. You need to generate thousands, millions of different scenes and render them all. Optionally, the benchmark could generate the scenes at random, saving the random seed so the results are reproducible and results can be compared.
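A minimal sketch of that seed idea (render_scene() here is a made-up placeholder, not any real API): publish the seed alongside the results, and anyone can regenerate exactly the same scene sequence.

    #include <cstdint>
    #include <iostream>
    #include <random>
    #include <string>

    // Made-up placeholder for whatever would actually build and render a scene.
    void render_scene(std::uint32_t object_count, float camera_angle) { /* ... */ }

    int main(int argc, char** argv) {
        // Reuse a published seed to reproduce a run, or draw a fresh one.
        const std::uint32_t seed =
            (argc > 1) ? static_cast<std::uint32_t>(std::stoul(argv[1])) : std::random_device{}();
        std::cout << "benchmark seed: " << seed << "\n";  // report it so others can reproduce

        std::mt19937 rng(seed);
        std::uniform_int_distribution<std::uint32_t> objects(100, 10000);
        std::uniform_real_distribution<float> angle(0.0f, 360.0f);

        for (int frame = 0; frame < 1000; ++frame)
            render_scene(objects(rng), angle(rng));
    }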
Re:Problem is the benchmarks themselves (Score:5, Insightful)
Really? Do you write benchmarks?
I used to write benchmarks. It was very common to include worst-case patterns in benchmark tests to try to find corner cases -- the same sort of things that QA people do to try to find errors. For example, given your example of a floating-point unit: I would include basic operations that would have 1-bits sprinkled throughout the computation. If Intel's QA people had done this with the Pentium, they would have discovered the unprogrammed quadrant of the divide lookup table long before the chip was committed to production.
Why do we benchmark people do this? Because we are amazed (and amused) at what we catch. Hard disk benchmarks that catch disk drives that can't handle certain data patterns well at all, even to the point of completely being unable to read back what we just wrote. My personal favorite: how about modems from big-name companies that drop data when stressed to their fullest?
The SPECmark group recognizes that the wrong answer is always bad, so they insist that in their benchmarks the unit under test get the right answer before they even talk of timing. This is from canned data, of course, not "generating random scenes." The problem with using random data is that you don't know if the results are right with random data -- or at least that you get the results you've gotten on other testbeds.
Besides, how is the software supposed to know how the scene was rendered? Read back the graphics planes and try to interpret the image for "correctness"? First, is this possible with today's graphics cards, and, second, is it feasible to try? Picture analysis is an art unto itself, and I suspect that being able to check rendering adds a whole 'nuther dimension to the problem. I won't say it can't be done, but I will say that it would be expensive.
For FPUs, it's easy: have a test vector with lots of test cases. Make sure you include as many corner cases as you can conceive. When you make a test run, mix up the test cases so that you don't execute them in the same order every pass. (This will catch problems in vector FPU implementations.) Check those results!
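A toy sketch of that shuffle-and-check idea (the operand list is illustrative, not a real conformance vector):

    #include <algorithm>
    #include <iostream>
    #include <random>
    #include <vector>

    // A real vector would hold thousands of operand pairs chosen to sprinkle 1-bits
    // through the mantissa and hit denormals, infinities, and so on.
    struct Case { double a, b, expected; };

    int main() {
        std::vector<Case> cases = {
            {4195835.0, 3145727.0, 0.0},  // the operand pair that exposed the Pentium FDIV bug
            {1.0, 3.0, 0.0},
            {1e308, 3.0000000000000004, 0.0},
        };
        // First pass establishes the expected answers; a real harness would instead
        // ship answers precomputed on known-good hardware.
        for (Case& c : cases) c.expected = c.a / c.b;

        std::mt19937 rng(12345);
        int failures = 0;
        for (int pass = 0; pass < 1000; ++pass) {
            std::shuffle(cases.begin(), cases.end(), rng);  // vary execution order every pass
            for (const Case& c : cases)
                if (c.a / c.b != c.expected)  // same op, same operands: must match bit for bit
                    ++failures;
        }
        std::cout << (failures ? "FAIL: " : "PASS: ") << failures << " mismatches\n";
    }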
Now, if you will tell me how to extend that philosophy to graphic cards, we will have something.
Re:Problem is the benchmarks themselves (Score:3, Insightful)
Benchmarks are meant to predict performance. While it is essential to check the validity of the answer (wrong answers can be computed infinitely fast), the role of a benchmark isn't to check never-seen-in-practice cases or so-rarely-seen-in-practice-that-running-100x-slower-won't-matter cases.
That reminds me of the "graphic benchmark" used by some Mac websites that compares Quickdraw/Quartz performance when creating 10k windows. Guess what, Quart
Just a note (Score:3, Interesting)
It is perfectly possible to read the graphics data from the card and write it to a file, like a TIFF. In fact, I've seen some benchmarking programs that do. Then what you can do, for DirectX at any rate, is compare against a reference renderer. The development version of DX has a full software renderer built in that can do everything. It is slow as hell, being a pure software implementation, but also 100% 'correct' being that it is how DirectX intends for stuff t
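A rough sketch of just the comparison step, assuming both frames have already been read back into plain RGBA8 buffers (the capture itself would go through whatever readback path the API provides); a small tolerance allows for legitimate rounding differences between the hardware and a reference rasterizer:

    #include <algorithm>
    #include <cstdint>
    #include <cstdlib>
    #include <iostream>
    #include <vector>

    // Count pixels where the hardware frame strays too far from the reference frame.
    std::size_t mismatched_pixels(const std::vector<std::uint8_t>& hw,
                                  const std::vector<std::uint8_t>& ref,
                                  int tolerance) {
        std::size_t bad = 0;
        const std::size_t n = std::min(hw.size(), ref.size());
        for (std::size_t i = 0; i + 3 < n; i += 4) {          // RGBA, 4 bytes per pixel
            int worst = 0;
            for (int c = 0; c < 4; ++c)
                worst = std::max(worst, std::abs(int(hw[i + c]) - int(ref[i + c])));
            if (worst > tolerance) ++bad;
        }
        return bad;
    }

    int main() {
        const int w = 640, h = 480;
        std::vector<std::uint8_t> hw(w * h * 4, 200), ref(w * h * 4, 200);
        ref[0] = 10;  // pretend one pixel came out differently
        std::cout << mismatched_pixels(hw, ref, 8) << " pixels out of tolerance\n";
    }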
Lies (Score:2, Funny)
Re:What's the big news? (Score:4, Insightful)
Re:What's the big news? (Score:3, Funny)
Two things, both related to the key demographic:
1) When you're spending $200USD or more on any piece of hardware, you want to know that your purchasing decision was the best one you could make. Given that the majority of the people making these big-buck video card purchasing decisions are males in high school/college, who in general don't have that m
NVIDIA == Thieves and Liars if et is correct (Score:2)
The fuss is about the honesty of nvidia's business practices. I don't know about you, but I do not excuse dishonesty from business people -- they should be held to a very high standard.
If what ExtremeTech is saying is true (that nvidia purposefully wrote their driver to identify a specific benchmark suite, and did so ONLY to inflate the results), it would be incredibly significant. If so, I would *NEVER* buy another nvidia product again -- and I would make clear to the
Re:NVIDIA == Thieves and Liars if et is correct (Score:5, Insightful)
Re:NVIDIA == Thieves and Liars if et is correct (Score:5, Funny)
actually it is 80 GB (Score:3, Interesting)
This is standard even in most other parts of computing (anything engineering-oriented especially). For example, that 128kbps mp3 you downloaded is 128000 bits/second, not 128*1024 bits/second.
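To put numbers on it: 128 kbps is 128,000 bits/s, i.e. 16,000 bytes/s, whereas 128 x 1024 would be 131,072 bits/s. Same story for the drive: "80 GB" means 80 x 10^9 = 80,000,000,000 bytes, which an OS reporting in binary gigabytes (2^30 bytes each) shows as only about 74.5 GB.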
Re:Does this even improve your experience? (Score:5, Funny)
Re:Does this even improve your experience? (Score:2)
Re:Does this even improve your experience? (Score:3, Insightful)
The trouble with free speech is that everyone has it.
Re:Does this even improve your experience? (Score:4, Insightful)
On the other hand, ATI sold over 1 million Radeon 9700s in the first few months it was out, so there are definitely a lot of people out there who do need and want the best card money can buy.
So, that gets us to your question of whether nvidia cheating really makes a difference. Obviously, it doesn't make a difference to you, because you don't want to buy any of the high-end cards in the first place. It should be obvious in the same way, though, that it does make a big difference to somebody who will buy a high-end card.
If the 9800 and the FX 5900 have the same price, and speed is what you're after (and it should be, since you're buying these cards), then you want to buy the faster one. The only way to figure out which one is faster is to check the benchmark results (unless you buy both and try them yourself). If one of the companies cheated in a benchmark, they have tricked you into thinking that you're buying a faster card, while you're really buying a slower one.
Imagine you're picking between two equally expensive cars, and you want to buy the faster of the two. One claims to do 0-60 in 5s, and the other claims to do it in 3s. You'll go ahead and buy the latter one, only to learn later that they were testing the car going downhill while the other was accelerating on level ground! I think enraged would only begin to describe your reaction to that.
Re:What's the big news? (Score:5, Interesting)
I've seen a video card driver where about half the performance-related source code was put in specifically for benchmarks (WinBench, Quake3, and some CAD-related benchmarks), and the code was ONLY used when the user is running said benchmark. This is one of the MAJOR consumer cards, people.
So many programming hours put into marketing's request to optimize the drivers for a particular benchmark. It makes me sick to think that we could have been improving the driver's OVERALL performance and adding more features! One of the reasons I left......
Re:What's the big news? (Score:2, Interesting)
Re:What's the big news? (Score:3, Funny)
Re:What's the big news? (Score:5, Funny)
Uhhh, can I have the sucky card?
Please?
Hmmmm (Score:2, Interesting)
I don't know why anyone ever cheats on benchmarks... how could you ever get away with it? Do you really think no one is going to do their own benchmark? Come on. This is probably one of the most retarded things I have ever seen a company do.
Oh well, Nvidia is getting to the point where they are going to have to beat out ATI if they want to survive.
Re:Hmmmm (Score:5, Informative)
Here [tech-report.com] is a link about it in case you forgot or didn't know.
It just goes to show that both companies play that game, and neither to good effect.
Re:Hmmmm (Score:4, Informative)
Re:Hmmmm (Score:2)
What stands out in my mind is that they cheated, and yet they still lose compared to ATI! It's the worst kind of cheating... Mediocre.
The benchmark will say the same thing when you run it, as it did when they ran it... You will have to notice the fact that the images are lower quality to realize there is something awry.
Database Vendors (Score:3, Interesting)
20 zillion transactions per second, provided you have a massively parallel Alpha with 1024 processors and 256 TB of physical memory, for just $23.99 per transaction, assuming that you found your massively parallel Alpha on a heap of scrap metal.
I don't know (Score:5, Insightful)
Targeting benchmarks is just part of the business. When I was on the compiler team at HP, we were always looking to boost our SPECint/fp numbers.
In a performance driven business, you would be silly not to do it.
Re:I don't know (Score:2)
More information here [hardocp.com].
Re:I don't know (Score:3, Interesting)
Re:I don't know (Score:2)
You're right, you don't know (Score:4, Informative)
This is why all software and hardware should be open-source.
Re:You're right, you don't know (Score:2, Insightful)
Right, and why all your bank records should be public (just in case you are stealing and have illegal income). And all your phone records should be public, as well as details of your whereabouts (just in case you're cheating on your wife/skipping class). And of course, why the govt should have access to all your electronic transmissions (internet, cell, etc.), just in case you're doing something that they don't like.
Re:You're right, you don't know (Score:2)
Hey, here's an off-the-wall idea: why don't you let the people writing the code choose what they want to do with it. If you want to make your code open-source, fine; and if you don't want to use code that isn't open-source, also fine. Otherwise, save the zealotry for Sunday mornings.
Re:I don't know (Score:2, Interesting)
Read the article. The cheating does not directly affect quality. Then how is it cheating, I hear you ask? Because it only increases performance in the _specific_ scene and path rendered in the benchmark.
This is similar to claiming to have the world's fastest _calculator_ of decimals of Phi, only to have it revealed that you're simply doing std::cout << phi_string << std::endl;
ATI, Trident [spodesabode.com] and now nVidia. I really hoped nVidia would stand above this kind of lying.
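To make the allegation concrete, here is a purely hypothetical sketch of that kind of shortcut -- recognize a known benchmark and take a canned fast path; nothing in it reflects any real driver's code:

    #include <iostream>
    #include <string>

    // Hypothetical driver-side check; real drivers obviously look nothing like this.
    bool looks_like_known_benchmark(const std::string& exe_name) {
        return exe_name == "3dmark03.exe";
    }

    void draw_frame(const std::string& exe_name) {
        if (looks_like_known_benchmark(exe_name)) {
            // Canned fast path: skip work the benchmark's fixed camera path will never show.
            std::cout << "fast path: pre-clipped scene, hand-tuned shaders\n";
        } else {
            // General path that every real game gets.
            std::cout << "general path: full culling, full shader work\n";
        }
    }

    int main() {
        draw_frame("3dmark03.exe");  // the score reviewers publish
        draw_frame("ut2003.exe");    // the performance buyers actually get
    }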
I think there is a difference (Score:2)
nVidia realised that the markers of this paper were basing their score only on the accuracy of every other question for this particular exam, so they specifically designed their driver for this specific exam paper to bother answering only every other question.
You
Re:I don't know (Score:2)
Sure, but the Spec guidelines for optimization say: "Optimizations must generate correct code for a class of programs, where the class of programs must be larger than a single SPEC benchmark or SPEC benchmark suite."
We all know that vendors target benchmarks. The important question is: does the optimization have a general benefit, other than inflating the bench
whatever (Score:2, Insightful)
Since when did rendering errors caused by driver problems become "proof" of a vendor inflating benchmarks?
And this story was composed by someone with the qualifications of "Website content creator, who likes video games a lot" -- not a driver writer, not anyone technically inclined beyond the typical geek who plays a lot of video games and writes for a website called "EXTREME tech" because, you know, their name makes
Re:whatever (Score:5, Informative)
The short of it is that nVidia added hard-coded clipping of the scenes for everything that the benchmark doesn't show in its normal run, and which gets exposed as soon as you move the camera away from its regular path.
It's a step in the direction of recording an mpeg of what the benchmark is supposed to show and then playing it back at 200 fps.
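A toy illustration of why moving the camera exposes it (everything here is invented for the example): legitimate culling decides visibility from whatever camera is in use this frame, while the alleged shortcut culls against a volume fixed in advance from the benchmark's rail.

    #include <iostream>
    #include <vector>

    struct Object { float x, y, z; };

    // Legitimate culling: visibility follows the camera actually in use this frame.
    bool visible_from_camera(const Object& o, float camera_z) {
        return o.z > camera_z;                 // toy "frustum": anything in front of the camera
    }

    // Alleged shortcut: visibility baked in from the benchmark's fixed camera path.
    bool visible_on_benchmark_rail(const Object& o) {
        return o.z > 0.0f && o.z < 100.0f;     // hard-coded volume the rail camera ever sees
    }

    int main() {
        const std::vector<Object> scene = {{0, 0, 50}, {0, 0, 150}, {0, 0, -20}};
        const float free_camera_z = 120.0f;    // the user drags the camera off the rail

        for (const Object& o : scene)
            if (visible_from_camera(o, free_camera_z) && !visible_on_benchmark_rail(o))
                std::cout << "object at z=" << o.z << " is wrongly culled off the rail\n";
    }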
Re:whatever (Score:3, Informative)
Re:whatever (Score:5, Interesting)
Yeah well... (Score:3, Interesting)
3Dmark03 may be inflated but what counts is real world game benching. And FX 5900 wins over ATI in all but Comanche 4.
Interesting ehh?
Re:Yeah well... (Score:3, Insightful)
Re:Yeah well... (Score:2)
Well, in all honesty, this cheat could be used in ALL popular benchmarks. I mean, how do those real-life game benchmarks work? They run a pre-recorded demo and calculate the FPS. Just like 3DMark does. The only difference is that in 3DMark, you can stop the demo and move the camera around, which exposes this type of cheating. You can't do that in the real-life game demos.
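For reference, the FPS number those demos report boils down to something like this (play_demo_frame() is a stand-in for the real playback):

    #include <chrono>
    #include <iostream>

    // Stand-in for rendering one frame of a pre-recorded demo.
    void play_demo_frame(int frame) {
        volatile double work = 0.0;
        for (int i = 0; i < 100000; ++i) work += frame * 0.001 + i;  // pretend rendering work
    }

    int main() {
        const int frames = 2000;  // length of the recorded demo
        const auto start = std::chrono::steady_clock::now();
        for (int f = 0; f < frames; ++f) play_demo_frame(f);
        const auto stop = std::chrono::steady_clock::now();

        const double seconds = std::chrono::duration<double>(stop - start).count();
        std::cout << "average: " << frames / seconds << " fps\n";
    }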
The reason (Score:5, Funny)
Good, now they're even... (Score:2)
Re:Good, now they're even... (Score:2, Insightful)
As the mighty start to fall... (Score:5, Interesting)
So now they are falling into the power trap of "we need to be better and faster than the others", which is only going to have them end up like 3DFX in the end. Cutting corners is NOT the way to gain consumer support.
As I look at it, it doesn't matter if you're the fastest or not... it's the wide variety of platform support that has made them the best. ATi does make better hardware, but their software (drivers) is terrible and not very well supported. If ATi offered the support that nVidia has been giving for the last few years, I would start using ATi hands down... It's the platform support that I require, not speed.
Re:As the mighty start to fall... (Score:2)
Re:As the mighty start to fall... (Score:4, Insightful)
That is an old accusation that had a kernel of truth 24 months ago, but I've used ATI cards for years, and they have been rock solid ever since forums like this started accepting that schlock as 100% truth.
Bottom line: don't believe the hype. This is just *not* true.
Article talks about DEVELOPER version of 3DMark03 (Score:2, Insightful)
Wow, some prerelease software is having issues with brand-new drivers? Who would have thought... Why not wait for the official release of the software and the drivers before drawing hasty conclusions?
In addition, who really cares about 3DMark? Why not spend the time wasted on the 3DMark benchmark on benchmarking real games? After all
Re:Article talks about DEVELOPER version of 3DMark (Score:4, Informative)
The developer version is not a pre-release, it's the same version with some extra features that let you debug things, change scenes, etc.
As soon as you move the camera away from its usual benchmark path, you can see that nVidia hard-coded clipping of the benchmark scenes to make it do less work than it would need to in a real game, where you don't know where the camera will be in advance.
As I mentioned in another post, it's a step in the direction of recording an mpeg of the benchmark and playing it at a high fps rate.
Very old practice. (Score:5, Interesting)
One test involved continuously writing a text string in a particular font to the screen. This text string was encoded directly in the driver for speed. Similarly, one of the polygon-drawing routines was optimised for the particular polygons used in this benchmark.
Re:Very old practice. (Score:2)
Sigh... (Score:3, Insightful)
Voodoo was beaten squarely by other, better video cards in short order. The fanboys kept buying Voodoo cards, and we all know what happened to them
GeForce cards appeared. They were the best. They have their fanboys. Radeon cards are slowly becoming the "other, better" cards now.
Interesting....
(I'm not sure what point I was trying to make. I'm not saying that nVidia will suck, or that Radeon cards are the best-o. The moral of this story is: fanboys suck, no matter their orientation.)
Re:Sigh... (Score:2)
So, what you are saying is that now that nVidia is slowly dying, they will soon be acquired by ATi in the next couple of years?
I like that theory...hopefully it doesn't happen to nVidia, but it's a solid theory
Re:Sigh... (Score:3, Interesting)
No one has ever held onto the #1 spot in the graphics card industry. No one.
Perhaps it is because they are competing against a monolith that the up-and-comers can convince their engineers to give up hobbies and work 12-hour days. Perhaps it is because the leader of a #1 must be conservative in its movements to please the shareholders. Perhaps it is because, with 10 other companies gunning for your head, one of them will be gambling on the right combination of t
"nvidia engineers are investigating these issues" (Score:2)
It's clearly a deliberate attempt. But it looks like NV's going to deny responsibility on this one.
Shame on them...
Another reason to open-source drivers (Score:5, Insightful)
Of course, if Nvidia's drivers were released under the GPL, none of the mud from this would stick as they could just point to the source code and say "look, no tricks". As it is, we just get a nasty combination of the murky world of benchmarks and the murky world of modern 3D graphics.
Re:Another reason to open-source drivers (Score:2)
Forgive me, but that sounds like one very stupid idea.
Why would you want to expose your hard earned work to the world? NVidia pays very well for programmers to think of wild and imaginative (out o' the box) programming techniques to get the most from their hardware.
With rogue drivers out there thanks to open-sourcing the code, someone cou
Re:Another reason to open-source drivers (Score:4, Interesting)
GPLing the drivers would give NVidia:
1) Thousands of developers willing to submit detailed bug reports, port drivers, improve performance on 'alternative' operating systems etc.
2) Protection from these kind of cheating accusations
3) Better relationship with game developers - optimising for an NVidia card when you've got details of exactly how the drivers work is going to be much easier than for a competitor card.
4) A huge popularity boost amongst the geek community, who spend a lot on hardware every year.
NVidia is, first and foremost, a hardware company. In the same way that Sun, IBM etc. contribute to open-source projects in order to make their hardware or other services more appealing, NVidia stand to gain a lot too.
And as for rogue drivers? I suppose you're worried about rogue versions of the Linux kernel destroying your processor?
Re:Another reason to open-source drivers (Score:5, Interesting)
Realize, in a society in which people sue others over dogs barking too loud, NVidia would definitely hear from a very small but very vocal group about it.
6) Nvidia's Programmers Don't Want This. Why? Let's say they GPL'd just the Linux reference driver. And in less than two weeks, a new optimized version came out that was TWICE as fast as the one before. This makes the programmers look foolish. I know this is pure ego, but it is a concern, I'm sure, for a programmer w/ a wife and kids.
I know this all sounds goofy, and trivial. But politics and Common Sense do not mesh. Again, I think your intentions are great and in a perfect world there would be thousands working on making the best, most optimized driver out there.
But if such a community were to exist (and you know it would), why bother paying a league of great programmers and not just send out a few test boards to those most active in that new community, more than willing to do work for Free (as in beer?)
Just something to think about.
Reason for open-source - period. (Score:4, Interesting)
Take for instance the relationship between Microsoft and IBM during the OS/2 era. The two companies working on the same code base produced OS/2 and, eventually, the NT kernel.
Or, more recently - the brilliant strategy of Netscape Communications Corporation - the birth of the Mozilla project. To the open source community - take our browser, modify it like hell, make it a better project. You have, of course, Mozilla as the browser - but Netscape (Navigator) still exists (as a repackaged, "enhanced" Mozilla).
nVidia's source code release would have two major impacts as far as their performance goes.
1) ATI (et al.) would find the actual software-based enhancements they could also incorporate into their own driver to improve their product.
2) nVidia could capture the many brilliant software developers that happen to be a part of the whole nVidia "cult" - this could lead to significant advancements to their driver quality (and overall product quality).
My guess is that the lid is kept so tightly shut on nVidia's drivers because they can keep their chips relatively simple through their complex software driver. ATI, perhaps, has the technical edge in the hardware arena, but does not have nVidia's finesse for enhancing the product through driver software.
So true. (Score:2)
They don't want to hear "Card A is good at foo, but it overheats, and card B is good at bar, but slow at foo..." They want to hear "Card A is 125 foobars better than Card B."
It might not be premeditated (Score:2)
Re:It might not be premeditated (Score:2, Insightful)
They have all done it (Score:2)
The whole Quake / Quack fiasco for ATI was enlightening, but does anyone know if ATI does this currently?
Frame rates are overrated anyway, since people buying these cards are buying new ones before their current ones go down to noticeable frame rates. Features, picture quality and noise are what matter.
ATI seems to still have the upper hand, and at least for ATI cards ther
Re:They have all done it (Score:4, Interesting)
Finally, after months of waiting, I traded my Radeon to my roommate and got a GeForce 2 Pro with 64MB of DDR. Runs beautifully on Linux; I even play UT2K3 with it on an Athlon 850. Finally, after having the GeForce2 for about four months, I happened across a site that tells me how to make 3D acceleration work for the Radeon. Too late now, I'm happy with my GeForce, and UT2K3 seems to only really want to work with nVidia anyway.
I don't think drivers are the best way to defend ATI, considering they tend to shrug off other OSes while nVidia has committed themselves to supporting alternate OSes.
Statistics (Score:2)
Benchmarks are nothing but statistics: in order to get a (more or less) meaningful benchmark, you repeat the same process over and over, possibly in different environments. Then you analyze the results, which gives you a statistic of whatever you've benchmarked.
Therefore, the old Disraeli saying applies: "There are lies, damn lies, and statistics."
Or, to essentially say the same thing without expletives: Never trust a statistic you haven't faked yourself.
Not a big deal. (Score:5, Informative)
One has to take all benchmarks with a grain of salt if they come from a party with financial interests in the product. Win 2K Server outperforms Linux, a Mac is 2x the speed of the fastest Wintel box, my daddy can beat up your daddy...
It's not surprising, but it is somewhat disappointing.
Much worse than ATI's cheating (Score:2)
I'm sure nVidia does the same thing: new Detonator driver releases have been known to get amazing improvements for specific games.
ATI screwed up by affecting the visual quality. Well, screwing up visual quality would be acceptable if there was a documented setting to turn that particular optimization off, but there wa
Re:Much worse than ATI's cheating (Score:2)
Optimizing a benchmark hurts me: I might be tricked into making a decision based on incorrect information.
All compilers used for SpecINT/FP must be released within 6 months of the benchmark results being contributed. However, they are allowed to put in special options just for the tests, as long as it's available to everybody. They also have code that recognizes special SPEC sequences, but these may also theoretically help random code th
This is NOT standard practice. (Score:3, Informative)
These drivers were written with specific limits built in that make the drivers COMPLETELY irrelevant to ordinary gaming, as ET demonstrates by moving the camera just a bit from the designated path.
This would be like chopping the top off of a car to make it lighter, to reduce the distance it takes to decelerate in a brake test. Or compensating for a crappy time off the starting line by removing the back half of the car and bolting a couple of RATO rockets where the back seats used to be. Or loading the car up with nitro, or something. You think Car and Driver Magazine wouldn't say something?
These drivers make the card completely unsuitable for ordinary gaming. They aren't 'more powerful' -- they are a completely altered version of the drivers that are ONLY good at improving one particular set of benchmarks.
Re:This is NOT standard practice. (Score:2)
Editorial on this issue (Score:2, Informative)
Enough of that... (Score:3, Interesting)
Re:Enough of that... (Score:2)
Quack 3? Is that the one with Howard the Duck in it? :)
The circle is complete (Score:3, Interesting)
Possible solutions. (Score:2, Interesting)
The article talks about possible solutions to the problem of "repeatability" while still avoiding the kind of cheating alleged here. I don't remember it mentioning this possible solution, though: how about if the camera were controlled by a mathematical function of a hand-supplied seed, the way you'd seed a PRNG?
This way you could repeat the benchmarks by giving the same seed. Generate a 'default one' at each new install (this to ensure clueless reviewers get a new seed). Make it easy to enter a
Is this a problem? (Score:2)
As a programmer, I develop a test plan before I even start writing code. This is similar to someone giving me a requirement, and then changing the requirement after I've built a test plan and developed code toward that test. . . it's not really fair to the driver developers.
I'm going to side with nVidia, that this is a bug in the driver. Benchmarks make good testing software, but the best way to ensure good drivers is to make the benchmarks as comprehensive as possible. ExtremeTech is attributing to ma
Solution to problem (Score:2)
...the fix consists of another vacuum cleaner to be attached to the card.
The Kettle or the Pot? (Score:2, Interesting)
It also uses ATI-only pixel shaders 1.4, and reverts to dual-pass on other cards.
Why all this?
NVIDIA isn't on the 3dmark03 beta program (read: didn't pay FutureMark a hefty lump of greenbacks).
Favorite quote from the article! (Score:4, Funny)
nVidia believes that the GeForceFX 5900 Ultra is trying to do intelligent culling and clipping to reduce its rendering workload
It's alive !
NVidia not cheating (Score:4, Informative)
But basically, ExtremeTech is just a little bit mad because they were excluded from the DOOM3 benchmarks. Since nvidia refused to pay the tens of thousands of dollars to be a member of the 3dmark03 board, they have absolutely no access to the software used to uncover this bug.
Here is the full excerpt from hardocp.com:
3DMark Invalid?
Two days after Extremetech was not given the opportunity to benchmark DOOM3, they come out swinging heavy charges of NVIDIA intentionally inflating benchmark scores in 3DMark03. What is interesting here is that Extremetech uses tools not at NVIDIA's disposal to uncover the reason behind the score inflations. These tools are not "given" to NVIDIA anymore, as they will not pay the tens of thousands of dollars required to be on the "beta program" for 3DMark "membership".
nVidia believes that the GeForceFX 5900 Ultra is trying to do intelligent culling and clipping to reduce its rendering workload, but that the code may be performing some incorrect operations. Because nVidia is not currently a member of FutureMark's beta program, it does not have access to the developer version of 3DMark2003 that we used to uncover these issues.
I am pretty sure you will see many uninformed sites jumping on the news reporting bandwagon today with "NVIDIA Cheating" headlines. Give me a moment to hit this from a different angle.
First off it is heavily rumored that Extremetech is very upset with NVIDIA at the moment as they were excluded from the DOOM3 benchmarks on Monday and that a bit of angst might have precipitated the article at ET, as I was told about their research a while ago. They have made this statement:
We believe nVidia may be unfairly reducing the benchmark workload to increase its score on 3DMark2003. nVidia, as we've stated above, is attributing what we found to a bug in their driver.
Finding a driver bug is one thing, but concluding motive is another.
Conversely, our own Brent Justice found a NVIDIA driver bug last week using our UT2K3 benchmark that slanted the scores heavily towards ATI. Are we to conclude that NVIDIA was unfairly increasing the workload to decrease its UT2K3 score? I have a feeling that Et has some motives of their own that might make a good story.
Please don't misunderstand me. Et has done some good work here. I am not in a position to conclude motive in their actions, but one thing is for sure.
3DMark03 scores generated by the game demos are far from valid in our opinion. Our reviewers have now been instructed to not use any of the 3DMark03 game demos in card evaluations, as those are the section of the test that would be focused on for optimizations. I think this just goes a bit further showing how worthless the 3DMark bulk score really is.
The first thing that came to mind when I heard about this was to wonder if NVIDIA was not doing it on purpose to invalidate the 3DMark03 scores by showing how easily they could be manipulated.
Thanks for reading our thoughts; I wanted to share with you a bit different angle than all those guys that will be sharing with you their in-depth "NVIDIA CHEATING" posts. While our thoughts on this will surely upset some of you, especially the fanATIics, I hope that it will at least let you look at a clouded issue from a different perspective.
Further on the topics of benchmarks, we addressed them earlier this year, which you might find to be an interesting read.
We have also shared the following documentation with ATI and NVIDIA while working with both of them to hopefully start getting better and more in-game benchmarking tools. Please feel free to take the documentation below and use it as you see fit. If you need a Word document, please drop me a mail and let me know what you are trying to do.
Benchmarking Benefiting Gamers
Objective: To gain reliable benchmarking and image quality tools
Everyone seems to mess with benchmarks. (Score:5, Interesting)
At the same time, I'm debating what my next video card should be. Even though ATI's hardware might be slightly better this round, the differences will probably be negligible to all but the most extreme gamers. At the same time, NVidia has proven to me that they have a history of writing good drivers, and they still provide significantly better support to the Linux community than ATI does.
For this reason I'm still siding with the GeForce family of video cards.
Short Description. (Score:3, Informative)
Using the rail test, Nvidia excluded almost all non-visible data. This shows nvidia tweaked its drivers to only render data seen on the rail test, which would only happen if you tweak your drivers for the benchmarks. (aka the cheat)
I like it better when benchmarks use the average FPS in a game, and you go PLAY the game and watch for yourself.
Try 1024x768/1280x1024/1600x1200 with all AA/AF modes. Also stop using 3GHz P4s for the benchmarks; use a mix of 1GHz/2GHz/3GHz AMD/Intel boxes so we can know if the hardware is worth the upgrade.
They did it before (Score:3, Interesting)
They garbled texture maps to achieve a higher transfer rate and frame rate. Then they went legit for the TNT line.
I guess the belief "if you can't win, cheat" is still there at nvidia.
I wonder if ATi makes a good Linux driver...
Benchmarks for catching cheating vendors (Score:3, Interesting)
Some compilers miscompiled the modified benchmark, because they recognized the code as the standard benchmark even though it wasn't exactly the same.
(Anybody have a reference for this? I heard the author give a talk at Stanford years ago.)
Re:32 fps ... (Score:2)
They are soooo busted. (Score:2)
Re:Random Rail (Score:2, Informative)
By sheer luck, card A could get a 'rail' that drags it along a plain brick wall with nothing fancy to render, and card B could go through the heart of some mega explosion with fragments and fire and smoke and all that. Card A would get 4000000 fps, card B gets 20.
It would be fine to take them off the rails to "keep em honest", but you need to run both cards in the exact same situation for your test
ATI's release of the drivers aren't up to par... (Score:4, Insightful)
nVidia could really follow along this same philosophy, instead of hearing the massive complaints from their oft-buggy video driver.
Re:STFU - who cares? (Score:5, Insightful)
I gather you read it differently?