NVidia Accused of Inflating Benchmarks
Junky191 writes "With the NVidia GeForce FX 5900 recently released, this new high-end card seems to beat out ATI's 9800 pro, yet things are not as they appear. NVidia seems to be cheating on their drivers, inflating benchmark scores by cutting corners and causing scenes to be rendered improperly. Check out the ExtremeTech test results (especially their screenshots of garbled frames)."
What's the big news? (Score:5, Insightful)
I don't know (Score:5, Insightful)
Targeting benchmarks is just part of the business. When I was on the compiler team at HP, we were always looking to boost our SPECint/fp numbers.
In a performance driven business, you would be silly not to do it.
whatever (Score:2, Insightful)
Since when did rendering errors caused by driver problems become "proof" of a vendor inflating benchmarks?
And this story was composed by someone whose qualifications amount to "website content creator who likes video games a lot": not a driver writer, not anyone technically inclined beyond the typical geek who plays a lot of video games and writes for a website called "EXTREME tech," because, you know, their name makes them extreme!
Note: I'm not an Nvidia fanboy; I just bought an ATI Radeon 9500. I am simply skeptical of wild, idiotic leaps beyond the facts, when all he has are some screenshots of a driver screwing up the render of a scene.
Article talks about DEVELOPER version of 3DMark03 (Score:2, Insightful)
Wow, some prerelease software is having issues with brand-new drivers? Who would have thought... Why not wait for the official release of both the software and the drivers before drawing hasty conclusions?
Besides, who really cares about 3DMark? Why not spend the time wasted on the 3DMark benchmark on benchmarking real games? After all, 60 fps tells you a lot more about performance than 5784 3DMarks.
Re:Yeah well... (Score:3, Insightful)
Re:What's the big news? (Score:4, Insightful)
Sigh... (Score:3, Insightful)
Voodoo was beaten squarely by other, better video cards in short order. The fanboys kept buying Voodoo cards, and we all know what happened to them.
GeForce cards appeared. They were the best. They have their fanboys. Radeon cards are slowly becoming the "other, better" cards now.
Interesting....
(I'm not sure what point I was trying to make. I'm not saying that nVidia will suck, or that Radeon cards are the best-o. The moral of this story is: fanboys suck, no matter their orientation.)
Re:What's the big news? (Score:1, Insightful)
Another reason to open-source drivers (Score:5, Insightful)
Of course, if Nvidia's drivers were released under the GPL, none of the mud from this would stick as they could just point to the source code and say "look, no tricks". As it is, we just get a nasty combination of the murky world of benchmarks and the murky world of modern 3D graphics.
Re:Good, now they're even... (Score:2, Insightful)
This on the other hand, if true, could be construed as NOTHING BUT cheating. Especially when coming from a company that said they didn't support 3Dmark 2003 because it was possible for companies to optimize their drivers specifically FOR such benchmarks...well, they proved their point.
Re:Does this even improve your experience? (Score:1, Insightful)
This is a big deal to people who care -- it insults the reviewers who spent hours benchmarking their card, and it insults the users who bought/will buy their card. There are people who care, and people who do want the fastest card for a reason, and they are interested to hear from other people who care, and not the people who don't!
Re:What's the big news? (Score:5, Insightful)
This is always the case with any chosen performance measurement. Look at managers asked to deliver quarterly profits: they tend to be extremely shortsighted...
Moral of the story: be very wary of how you measure, and always add a qualitative side to your review (e.g., in this case, "driver readiness/completeness").
Re:You're right, you don't know (Score:2, Insightful)
Right, and that's why all your bank records should be public (just in case you're stealing or have illegal income). And all your phone records should be public, as well as details of your whereabouts (just in case you're cheating on your wife or skipping class). And of course the government should have access to all your electronic transmissions (internet, cell, etc.), just in case you're doing something they don't like.
Random Rail (Score:1, Insightful)
Re:As the mighty start to fall... (Score:4, Insightful)
That is an old accusation, one that had a kernel of truth 24 months ago. But I've used ATI cards for years, and they had gone rock solid by the time forums like this started accepting that schlock as 100% truth.
Bottom line: don't believe the hype. This is just *not* true.
Re:It might not be premeditated (Score:2, Insightful)
Re:NVIDIA == Thieves and Liars if et is correct (Score:5, Insightful)
Re:Giveing them self a bad name (Score:5, Insightful)
Oh, c'mon. Benchmark fudging has been an ongoing tradition in the computer field. When I was doing computer testing for InfoWorld, I found that some people in a vendor's organization would try to overclock computers so they would do better in the automated benchmarks. ZD Labs found people who "played" the BAPCo graphics benchmarks, earning better scores by detecting that a benchmark was running and cutting corners.
<Obligatory-Microsoft-bash>
One of the early players was Microsoft, with its C compiler. I have it from a source inside Microsoft that when Byte's C-compiler benchmark figures were published in the early 1980s, Microsoft didn't like being at the back of the pack. "It would take six months to fix the optimizer right." It would take only two weeks, though, to put in recognizers for the common benchmarks of the time and insert hand-optimized "canned code" to better their score.</Obligatory-Microsoft-bash>
Microsoft wasn't the only one. How about a certain three-letter company who fudged their software? You have multiple right answers to this one. :)
When the SPECmark people first formed their benchmark committee, they knew of these practices and so they made the decision that SPECmarks were to be based on real programs, with known input and output, and the output was checked for correct answers before the execution times would be used.
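That check-answers-before-timing discipline is simple to express in code. A minimal sketch in Python, with the workload and expected value made up purely for illustration:

```python
import time

def checked_benchmark(workload, arg, expected):
    """Time a workload, but accept the timing only if the output
    matches a known-correct reference answer first."""
    start = time.perf_counter()
    output = workload(arg)
    elapsed = time.perf_counter() - start
    if output != expected:
        # Wrong answers can be computed infinitely fast, so a
        # wrong answer invalidates the measurement entirely.
        raise ValueError("wrong answer -- timing discarded")
    return elapsed

# A real computation with known input and known output,
# in the spirit of SPEC's "real programs, checked results".
def sum_of_squares(n):
    return sum(i * i for i in range(n))

elapsed = checked_benchmark(sum_of_squares, 1000, 332833500)
print(f"accepted run: {elapsed:.6f}s")
```

The point is the order of operations: validate first, and only then let the number count.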
And now you know why reputable testing organizations who use artificial workloads check their work with real applications: to catch the cheaters.
Let me reiterate an earlier comment by Alan Partridge: it's idiots who think that a less-than-one-percent difference in performance is significant. (Whether the shoe fits you is something you have to decide for yourself.) What benchmark articles don't tell you is the spread of results they obtain through multiple testing cycles. When I was doing benchmark testing at InfoWorld, it was common for me to see trial-to-trial spreads of three percent in CPU benchmarks, and broader spreads than that with hard-disk benchmarks. Editors were unwilling to admit to readers that the results formed a "cloud" -- they wanted a SINGLE number to put in print. ("Don't confuse the reader with facts; I want to make the point and move on.") I see that in the years since I was doing this full-time, editors are still insisting on "keep it simple" even when it's wrong.
Another observation: when I would trace back hardware and software that was played with, the response from upper management was universally astonishment. They would fall over backwards to ensure we got a production piece of equipment. To some extent, I believed their protestations, especially when bearded during their visits to our Labs. One computer company (name withheld to protect the long-dead guilty) was amazed when we took them into the lab and opened up their box. We pointed out that someone had poured White-Out over the crystal can, and that when we carefully removed the layer of gunk the crystal was 20% faster than usual. Talk about over-clocking!
So when someone says "Nvidia is guilty of lying" I say "prove it", further saying that you have to show with positive proof that the benchmark fudging was authorized by top management. I can't tell from the article, but I suspect someone pulled a fast one, and soon will be joining the very long high-technology bread line.
Pray the benchmarkers will always check their work.
And remember, the best benchmark is YOUR application.
Re:Does this even improve your experience? (Score:3, Insightful)
The trouble with free speech is that everyone has it.
Re:Problem is the benchmarks themselves (Score:5, Insightful)
Really? Do you write benchmarks?
I used to write benchmarks. It was very common to include worst-case patterns in benchmark tests to try to find corner cases -- the same sort of things that QA people do to try to find errors. For example, given your example of a floating-point unit: I would include basic operations that would have 1-bits sprinkled throughout the computation. If Intel's QA people would have done this with the Pentium, they would have discovered the un-programmed quadrant of the divide look-up table long before the chip was committed to production.
Why do we benchmark people do this? Because we are amazed (and amused) at what we catch. Hard disk benchmarks that catch disk drives that can't handle certain data patterns well at all, even to the point of completely being unable to read back what we just wrote. My personal favorite: how about modems from big-name companies that drop data when stressed to their fullest?
The SPECmark group recognizes that the wrong answer is always bad, so they insist that in their benchmarks the unit under test get the right answer before they even talk of timing. This is from canned data, of course, not "generating random scenes." The problem with using random data is that you don't know if the results are right with random data -- or at least that you get the results you've gotten on other testbeds.
Besides, how is the software supposed to know how the scene was rendered? Read back the graphics planes and try to interpret the image for "correctness"? First, is this possible with today's graphics cards, and, second, is it feasible to try? Picture analysis is an art unto itself, and I suspect that being able to check rendering adds a whole 'nuther dimension to the problem. I won't say it can't be done, but I will say that it would be expensive.
For FPUs, it's easy: have a test vector with lots of test cases. Make sure you include as many corner cases as you can conceive. When you make a test run, mix up the test cases so that you don't execute them in the same order every pass. (This will catch problems in vector FPU implementations.) Check those results!
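The test-vector-plus-shuffle recipe is concrete enough to sketch. In rough Python (the case list and tolerance are illustrative, though 4195835/3145727 really is the famous input pair that exposed the Pentium FDIV bug):

```python
import random

# Corner-case vector for a floating-point divide: values with 1-bits
# sprinkled through the mantissa, plus extreme-exponent edges.
CASES = [
    (1.0, 3.0),
    (4195835.0, 3145727.0),  # the inputs that exposed the Pentium FDIV bug
    (1e-300, 1e300),
    (0.1, 0.3),
]

def check_divides(cases, reference, unit_under_test, tol=1e-12):
    """Run every case through the unit under test and compare against
    a trusted reference, shuffling so no pass runs in the same order."""
    shuffled = cases[:]
    random.shuffle(shuffled)
    for a, b in shuffled:
        expect = reference(a, b)
        got = unit_under_test(a, b)
        assert abs(got - expect) <= tol * abs(expect), (a, b, got, expect)

# With a correct implementation, every pass comes back clean.
check_divides(CASES, lambda a, b: a / b, lambda a, b: a / b)
```

Checking the results is the part benchmark vendors skip; it's also the part that catches both hardware bugs and driver "shortcuts."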
Now, if you will tell me how to extend that philosophy to graphic cards, we will have something.
But these are SYNTHETIC BENCHMARKS! (Score:2, Insightful)
I dunno, but synthetic benchmarks seem a bit irrelevant as does what Nvidia does in them. Show me how many FPS it gets in Q3A, that I care about.
Re:Problem is the benchmarks themselves (Score:3, Insightful)
Benchmarks are meant to predict performance. While it is essential to check the validity of the answer (wrong answers can be computed infinitely fast), the role of a benchmark isn't to check never-seen-in-practice cases, or cases so rarely seen in practice that running them 100x slower doesn't matter.
That reminds me of the "graphics benchmark" used by some Mac websites that compares Quickdraw/Quartz performance when creating 10k windows. Guess what: Quartz is slower, because Quartz windows are a lot more powerful/heavyweight than Quickdraw ones. But who gives a fuck? How often do you need to create 10k windows in a hurry? No one, apart from those OS 9 zealots looking for ways to bash OS X. A realistic benchmark might check at most tens of windows, but the conclusion would probably be that the difference in speed isn't observable by humans.
A good benchmark can only be judged by comparing its execution profile against what users will run. If it's not reflecting the reality, it's not an appropriate prediction of the performance for the user. And it's not a binary property. While Spec is by definition perfect for anyone that only runs Spec, it is known and accepted to be imperfect at anything else, and a completely useless predictor in some cases (as in very low statistical correlation between Spec scores and speed at running Foo). It's just a "best effort" suite of tests for workstation applications. I'm talking SpecINT / SpecFP here, other Spec benchmarks exist because (gasp!) SpecINT/FP don't cover the whole computing spectrum.
You also don't seem to have much of a clue about how processors are really tested. Guess what: the processor people do all that you describe and more, much more. All day long, on many, many samples, for months on end, in good/bad conditions (thermal, electrical). It's just that no test suite can catch all the problems, so defects will always slip by. _Always_, even if the logic is formally proven correct, since processors aren't mathematical entities but subject to electrical/manufacturing variations. Even if no problem exists today on a given CPU, take a hundred of them from various batches, power-cycle them a few million times, run them for a few years in marginal conditions, and check again.
STFU - who cares? (Score:2, Insightful)
The things they're being accused of reduce work for the graphics engine and don't affect image quality. It's called OPTIMIZATION: the fastest frame rate with the best image quality.
Man, someone must have spent hours in front of their computer coming up with a way to get a sensational story like this. ATI has done it, and so does everyone else, but what sucks is that this "news" is being flogged everywhere like it's the most incredible piece of news ever.
In this case it's not ANYWHERE NEAR as bad as changing the card's performance based on the name of the program that's being run - I think most people remember that one.
In this case it's a non-story. And yes, we all pay too much attention to benchmarks. I am now one to two generations behind the leading edge and plan to stay there. It's far less expensive than driving a new car off the lot every four months.
Re:Random Rail (Score:1, Insightful)
Run the random rails not truly randomly (they never are anyway) but from a "random seed" number -- so you can generate your "random" path, type in the seed number, and the benchmark runs THAT fly path.
Want someone else to replicate your results? Give them the seed number and let them test on the same path.
Want to run the test w/ multiple cards? Just remember to write down your seed number.
This is already done with a lot of the old "random world map/dungeon map" generators for D&D players. I can give 2-3 numbers to a friend with the same program and "poof," he generates the same dungeon.
The only thing we'd have to be sure on is that certain seed numbers didn't get to be "common" such that the graphics card makers specifically optimized for those seeds -- but that's a bit of a stretch given that you can test with the "common" seed as well as 2-3 other random seeds just to keep them honest.
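The seeded-rail idea is a one-liner in most languages; here is a sketch in Python (the coordinate ranges and step count are made-up placeholders for a camera fly path):

```python
import random

def random_rail(seed, steps=5):
    """Generate a deterministic 'random' camera path from a seed.
    Publish the seed, and anyone can re-fly the exact same path."""
    rng = random.Random(seed)  # private RNG, so global state can't interfere
    return [(rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(0, 10))
            for _ in range(steps)]

# Same seed, same path -- reproducible across test rigs.
assert random_rail(1234) == random_rail(1234)
# A different seed gives a different path, keeping vendors honest.
assert random_rail(1234) != random_rail(5678)
```

This is the same trick the D&D dungeon generators use: the "random" world is a pure function of the seed, so reproducibility and unpredictability stop being in tension.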
ATI's release of the drivers aren't up to par... (Score:4, Insightful)
nVidia could really stand to follow this same philosophy, instead of fielding the massive complaints about their oft-buggy video drivers.
Re:Giveing them self a bad name (Score:4, Insightful)
The course focuses on making decisions based on statistics. In the second week of class, we learned what a standard deviation was, and we never stopped using it throughout the semester.
But perhaps ignorance would explain business tactics of the 90's.
Re:Does this even improve your experience? (Score:2, Insightful)
The point I was making is simply this: whether or not they cheated on the benchmarks, does it really make a difference? For some, sure. But for me, and probably a good chunk of people out there, the slight extra edge that NVIDIA may or may not have given themselves in this benchmark isn't going to be enough to make me run out and purchase the new GeForce over the Radeon, unless I wanted to participate in the "I have the fastest graphics card available as of 3:00 this afternoon" pissing contest. The few extra FPS nvidia can boast by rigging this benchmark will not help me become a better gamer, nor will it help most people become better gamers. So what's the point of becoming enraged over something like this? Even if you are one of the lucky few who can tell the difference between a great card and a slightly less great card, has this really altered your choice of video cards that much?
Re:Another reason to open-source drivers (Score:2, Insightful)
Re:STFU - who cares? (Score:5, Insightful)
I gather you read it differently?
Re:Another reason to open-source drivers (Score:2, Insightful)
How would a driver downloaded from a different web site cause a liability for nVidia? Since the source is open, it would be easy to determine that it was not NVidia's code that caused the problem. Seems like (5) is an ADVANTAGE for nVidia, not a disadvantage.
6) Speaking algorithmically, it is probably impossible to get that much improvement from a driver. In case you've never worked directly with 3D hardware before, this type of optimization is TOUGH. Open source is great for some things, but it would be difficult for them to so seriously outpace nVidia's development in this specialized area.
Re:Does this even improve your experience? (Score:4, Insightful)
On the other hand, ATI sold over 1 million Radeon 9700s in the first few months it was out, so there are definitely a lot of people out there who do need and want the best card money can buy.
So, that gets us to your question of whether nvidia cheating really makes a difference. Obviously, it doesn't make a difference to you, because you don't want to buy any of the high-end cards in the first place. It should be obvious in the same way, though, that it does make a big difference to somebody who will buy a high-end card.
If the 9800 and FX5900 have the same price, and speed is what you're after (and it should be, since you're buying these cards), then you want to buy the faster one. The only way to figure out which one is faster is to check the benchmark results (unless you buy both and try them yourself). If one of the companies cheated in a benchmark, they have tricked you into thinking that you're buying a faster card, while you're really buying a slower one.
Imagine you're picking between two equally expensive cars, and you want to buy the faster of the two. One claims to do 0-60 in 5s, and the other claims to do it in 3s. You'll go ahead and buy the latter one, only to learn later that they were testing the car going downhill while the other was accelerating on level ground! I think enraged would only begin to describe your reaction to that.