Intel Caught Cheating In 3DMark Benchmark
EconolineCrush writes "3DMark Vantage developer Futuremark has clear guidelines for what sort of driver optimizations are permitted with its graphics benchmark. Intel's current Windows 7 drivers appear to be in direct violation, offloading the graphics workload onto the CPU to artificially inflate scores for the company's integrated graphics chipsets. The Tech Report lays out the evidence, along with Intel's response, and illustrates that 3DMark scores don't necessarily track with game performance, anyway."
Eh? (Score:2, Interesting)
I thought offloading graphics computations to the CPU was the whole *point* of integrated video.
Doesn't 3DMark cheat too? (Score:4, Interesting)
Is 3DMark the benchmark that will give a higher score to a VIA graphics card if the Vendor ID is changed to Nvidia?
That's what they do for LOTS of games... (Score:4, Interesting)
Intel fully admits that the integrated chipset graphics aren't that great. They freely admit that they offload rendering to the CPU in some cases. This isn't a secret.
White Goodman Would be Proud (Score:3, Interesting)
In true White Goodman fashion, cheating is something losers come up with to make themselves feel better about losing.
Is Intel the 500 lb. gorilla in chipsets? Sure, and they got there by 'cheating.' Which is winning.
Ain't capitalism grand?
For all you losers who don't know who the great White Goodman is: http://www.imdb.com/title/tt0364725/ [imdb.com]
Re:Why a bad hack when you are close to much more? (Score:4, Interesting)
It's funny that Intel simply creates an INF file and uses it to detect apps and enable performance optimizations. I mean, if you are detecting a file name and enabling performance optimizations, why not detect the app's behaviour itself and make the optimizations generic? Clearly you know the app's behaviour, and you know the performance optimizations work. This seems to me like a case where people were asked to ship fast, and instead of taking the time to plug the optimization into the tool properly, they just made it a hack. A really bad one, too!
Sure, but how hard would it actually be for a graphics driver to scan an arbitrary executable and determine a) that it's a game and b) how it will behave when executed? I suppose they could model it after the heuristic and behavioural detection features of some antivirus/antispyware applications, but nothing about this problem sounds trivial. There's also the question of how bloated a graphics driver you're willing to accept.
My guess is that the above concerns explain why this was a poorly-executed hack.
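For the curious, here's roughly what the two approaches look like side by side. This is a made-up Python sketch, not anything from Intel's actual driver; every name and threshold in it is invented for illustration:

    # Hypothetical sketch: per-app whitelist vs. generic detection in a driver.
    # None of these names come from Intel's driver; they're made up to show
    # the difference the posts above are talking about.

    OFFLOAD_WHITELIST = {"3dmarkvantage.exe", "crysis.exe"}  # INF-style app list

    def should_offload_by_name(exe_name):
        # The "bad hack": flip the optimization on only for known executables.
        return exe_name.lower() in OFFLOAD_WHITELIST

    def should_offload_by_behavior(stats):
        # The generic alternative: decide from measured behaviour, e.g. the
        # vertex load is high and the GPU's shader units are saturated.
        return (stats["vertex_ops_per_frame"] > 1_000_000
                and stats["gpu_busy"] > 0.95)

    print(should_offload_by_name("3DMarkVantage.exe"))            # True
    print(should_offload_by_behavior({"vertex_ops_per_frame": 2_000_000,
                                      "gpu_busy": 0.99}))         # True

The name check is trivial to ship but only helps whitelisted apps; the behavioural check would help everything but needs real profiling machinery in the driver, which is presumably why it didn't happen.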
Re:Do the optimizations work for anything else? (Score:3, Interesting)
Well, there is also the interesting tidbit that it doesn't enable those optimizations unless the CPU is an Intel CPU.
Hmm.
Re:Wonder if AMD plays fair? (Score:5, Interesting)
Oh, ATI was one of the first to cheat on a graphics benchmark. quack.exe, anyone?
Oh, this type of thing has been going on for a VERY long time. For example, there was the Chang Modification [pcmag.com] back in 1988 (it slowed down the system clock that the benchmark used as its timing base, resulting in inflated scores).
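The math behind that trick is simple. Here's a toy Python illustration (not the actual Chang code, obviously, just the principle): a benchmark that divides work done by elapsed time reports a higher score whenever the clock it reads runs slow.

    # Toy illustration of the timer trick; all numbers are invented.
    FRAMES_RENDERED = 1000
    REAL_SECONDS = 10.0

    def reported_score(clock_skew):
        # clock_skew < 1.0 means the timing clock runs slow, so the
        # benchmark thinks less time has passed than really did.
        measured_seconds = REAL_SECONDS * clock_skew
        return FRAMES_RENDERED / measured_seconds

    print(reported_score(1.0))   # honest: 100.0 "fps"
    print(reported_score(0.5))   # clock at half speed: 200.0 "fps"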
Re:Doesn't 3DMark cheat too? (Score:3, Interesting)
I did some searching and found the article you are probably referring to: http://arstechnica.com/hardware/reviews/2008/07/atom-nano-review.ars/6 [arstechnica.com]
Re:White Goodman Would be Proud (Score:3, Interesting)
But why they even bother trying to rig benchmarks like this is beyond me. No one who's serious about graphics performance would use Intel's built-in video.
No, but there are a lot of 3D games that aren't sky-is-the-limit, FPS-junkie fare. For example, I liked King's Bounty, which has minimum requirements of "Videocard nVidia GeForce 6600 with 128 Mb or equivalent ATI". Tales of Monkey Island says: "Video: 64MB DirectX 8.1-compliant video card (128MB rec.)". You won't find either of these in the latest AMD/nVidia card review, but even pretending to have a little 3D performance can make the difference between "no 3D games at all" and "some non-intensive 3D games". Outside the hardcore gaming market it might matter.
Re:Hmm... (Score:3, Interesting)
I'm inclined to give Intel the benefit of the doubt here, for a few reasons:
1) Nobody buys Intel integrated chips because of how they do in 3DMark. Nobody thinks of them as serious performance parts. Hell, most people are amazed to find out that these days they are good enough that you can, in fact, play some games on them (though nowhere near as well as on dedicated hardware). So I can't imagine Intel is gaining lots of sales out of this. Remember, these are chips on the board itself. You either got a board with one or you didn't. You don't pick one up later because you liked the numbers.
2) Individual program optimization in drivers is extremely common. Some programs do things an odd way, and sometimes the vendors can figure out a way to work around it. An example would be the Unreal Engine 3 and anti-aliasing in DirectX 9 mode. I don't know the details, but the upshot is that it normally doesn't work. However, nVidia (and probably others) have figured out a way around this. So you can force AA on in Mass Effect and other games that don't expose the controls, via the driver. However, the driver has a particular hack for that game to make it work. If you use a program like RivaTuner, you can mess with that sort of thing and flip the hacks on and off for various games.
3) Since Intel's integrated chips are exceedingly simple, it isn't surprising they have the CPU handle some things. I seem to recall that their older integrated chips did basically everything on the CPU, being little more than frame buffers themselves. The whole point of an integrated GPU is to be cheap and low power. That means it isn't going to have massive arrays of shaders to handle things. However, with a clever driver, the CPU could do some of that work. That would work particularly well in the integrated case, since those GPUs use system memory anyway.
So while I'm not sure I see the point in optimizing for 3DMark, I don't see the overall problem with specific optimizations for specific apps. If you discover that an app has a problem, and you can fix it, but the fix isn't something you can apply across the board, well then why not apply that fix just for that app?
Re:Wonder if AMD plays fair? (Score:4, Interesting)
Oh, ATI was one of the first to cheat on a graphics benchmark. quack.exe, anyone?
Oh, this type of thing has been going on for a VERY long time.
I even remember teapot-based hacks (although not the details, unfortunately; probably something along the lines of having the teapot hardwired somewhere) back when displaying rotating GL teapots was all the rage for testing graphics hardware (ancient history, obviously).
Of course, something like Quake was still the stuff of science fiction at the time.
No, it's not. (Score:3, Interesting)
The behaviour their driver exhibits in the benchmark is also used in several games, e.g. Crysis Warhead. RTFA.
Re:They all cheat (Score:3, Interesting)
Don't forget 3DMark itself has also cheated [arstechnica.com] to give Intel a higher score.
Re:They are all guilty of cheating at some point (Score:3, Interesting)
The difference between true cheating when benchmarking a GPU and legitimate optimization is something many people do not seem to understand. Cheating, when it comes to GPUs, really comes down to intentionally degrading visual quality just to get higher scores while tricking the benchmark application into thinking the quality is as high as specified.
An example of this would be running a test with antialiasing set to 8x in the application and antialiasing set to "application control" in the drivers, yet when the driver sees the benchmark running, it forces AA off to get better scores, no matter what the settings say. This is a clear case of trying to fool people into thinking that the GPU in question handles AA very well or has better performance than it does.
And that is where most of the big reports of cheating have come from in the past. If the image quality is degraded, that will generally increase the framerates, and that is what both NVIDIA and ATI were guilty of.
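In code terms, that cheat is depressingly small. Here's a hypothetical Python sketch of the settings logic; the function and process names are invented, and real drivers do this in native code, but the shape of the trick is the same:

    # Hypothetical sketch of the AA-off cheat described above.
    KNOWN_BENCHMARKS = {"3dmark.exe", "3dmarkvantage.exe"}

    def effective_aa_level(driver_setting, app_requested_aa, exe_name):
        if driver_setting == "application_control":
            if exe_name.lower() in KNOWN_BENCHMARKS:
                # The cheat: silently ignore the app's 8x request, but only
                # for benchmarks, so scores go up while reviewers believe
                # AA is still on.
                return 0
            return app_requested_aa  # honest path: honor the app's setting
        return driver_setting        # user forced a level in the control panel

    print(effective_aa_level("application_control", 8, "game.exe"))    # 8
    print(effective_aa_level("application_control", 8, "3dmark.exe"))  # 0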
Now, adjusting things in the drivers to provide the proper visual quality but changing HOW things are done to provide better framerates is fine, because what you are telling the card to do is what you get.
In this case with Intel, the GPU is what is being tested, not the CPU. If a GPU test's results depend on the CPU doing a lot of the work, then going with a lower-end CPU will have a MAJOR impact on apparent GPU performance. The whole point of benchmarking a GPU is to test how fast the GPU is, so you could put that GPU in a modern system with various processors and get roughly the same GPU performance (of course, when the application is CPU-limited, you will see reduced GPU performance anyway).
I have felt for a long time that drivers should compensate for missing hardware features when it comes to the APIs, though. If a GPU only accelerates DirectX 7 features, then with a sufficiently powerful CPU you SHOULD still be able to use DirectX 11 calls; since the GPU doesn't handle them, it would just be much slower. The real catch is that performance would be HORRIBLE when using software to handle those newer "unaccelerated" features, but it would still work.
From that perspective, it does not bother me that Intel would use the CPU to compensate for their GPUs, except that published benchmark results need to indicate the true performance of the product. By the same token, if SLI/Crossfire were used and the benchmark reported the results as single-GPU numbers, that too would give a false impression of the product's performance.
If Intel were to make it clear that faster CPUs will improve the performance of their GPUs, but at the cost of slower application performance (since CPU cycles are going to the GPU), then that wouldn't bother me as a description of how the product actually performs. The problem is that it still doesn't show just how fast a given GPU is in a benchmark environment.
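A back-of-the-envelope model shows why. This Python toy (all numbers invented) splits a frame into GPU work and quietly offloaded CPU work; watch what happens to the "GPU score" when only the CPU changes:

    # Toy model of why CPU offload breaks GPU comparisons; numbers invented.
    def frame_time_ms(gpu_work_ms, offloaded_ms_at_baseline, cpu_speed_factor):
        # Work moved to the CPU scales with CPU speed; the GPU part doesn't.
        return gpu_work_ms + offloaded_ms_at_baseline / cpu_speed_factor

    # Same "GPU", with 30% of the frame quietly offloaded:
    for cpu in (1.0, 2.0):  # baseline CPU vs. one twice as fast
        t = frame_time_ms(gpu_work_ms=14.0, offloaded_ms_at_baseline=6.0,
                          cpu_speed_factor=cpu)
        print(f"cpu x{cpu}: {1000.0 / t:.1f} fps")  # 50.0 fps vs. 58.8 fps

The "GPU score" jumps with the faster CPU, which is exactly the variable a GPU benchmark is supposed to hold constant.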
Re:Wonder if AMD plays fair? (Score:4, Interesting)
I think the point is to benchmark the performance of the GPU. If your fav-game-of-the-month looks fabulous on your friend's hopped-up system with the xyz graphics card, you expect to get the same graphics performance if you buy the same card, despite having a lower-class processor. If the game is already taxing your friend's CPU to play smoothly, imagine the reduced gameplay AND graphics you'll get when you try it on your system, since the driver is trying to offload GPU work onto your already-burdened CPU.
There's simply no excuse for changing your behavior when you detect a benchmark app is running. Fraud, fraud, fraud. That's no better than the driver software screwing with the benchmark app as it runs, or modifying its output before it's displayed, tricking it into showing completely made-up numbers of their choosing.