NVIDIA's New Flagship GeForce GTX 580 Tested
MojoKid writes "Even before NVIDIA's GF100 GPU-based GeForce GTX 480 officially arrived, there were myriad reports claiming the cards would be hot, loud, and power-hungry. Of course, NVIDIA knew that well before the first card ever hit store shelves, so the company got to work on a revision of the GPU and card that would attempt to address these concerns. Today the company launched the GeForce GTX 580, and as its name suggests, it's billed as a next-gen product, but the GF110 GPU powering the card is largely unchanged from the GF100 in terms of features. However, refinements have been made to the design and manufacturing of the chip, along with its cooling solution and PCB. In short, the GeForce GTX 580 turned out to be the fastest single-GPU card currently on the market. Its in-game benchmark scores are 30% to 50% higher than those of AMD's current flagship single-GPU card, the Radeon HD 5870. Take synthetic tests like Unigine into account and the GTX 580 can be up to twice as fast."
Purely out of curiosity... (Score:3, Interesting)
Re:Good write ups, good card (Score:3, Interesting)
The problem is, how much does it cost? Radeon 5770s can be had for $120 at Newegg after rebate, so why the hell would I waste $500 on this card? I could hook up a pair of 5770s for much less and get similar performance.
And what the hell PC games actually require it?
The last-gen AMD cards do just fine, from back when they were beating NVidia's cards. And I'm willing to bet that the "next gen" AMD card will see similar performance increases when it hits next month.
Re:Get 'em while they're hot (Score:3, Interesting)
If you want to see the board, back off a few price/performance tiers, and you'll get a 90% bare PCB with a dinky little slug of aluminum or copper on the main chip.
Next gen? (Score:2, Interesting)
That looks like a 480 with the 4 replaced by a 5. Hardly a revolution.
Just watercool the 480, it's how it's supposed to be used.
Synthetic Benchmarks - (Score:3, Interesting)
Does anyone assume that the synthetic benchmarks achieved by either AMD or NVIDIA are representative of anything more than these companies' efforts to tweak their driver sets against the pre-existing criteria for getting a "good score"?
I believe both companies have been accused over the years of doing just that, while pointing the finger at the other for taking part in shenanigans.
Terrible Summary (Score:3, Interesting)
The /. summary ends with:
Its in-game benchmark scores are 30% to 50% higher than those of AMD's current flagship single-GPU card, the Radeon HD 5870.
But if you read the original article, the one flaw in the (otherwise good) nVidia card is that it still loses to the 5970, which is -- according to the article -- 'about a year old'. So why is the article mentioned in the summary talking about the 5870 as if it's the flagship? Clearly the 5970 is. Or am I missing something?
Fast open source drivers coming.. (Score:3, Interesting)
Re:SLI/Crossfire isn't always valid (Score:3, Interesting)
If you have a 30" monitor and want to drive it at its native, beyond HD rez (2560x1600) you need some heavy hitting hardware to take care of that, particularly if you'd like the game to run nice and smooth, more around 60fps than around 30. You then need still more if you'd like to crank up anti-aliasing and so on.
Isn't the point of AA to make things look better at lower resolutions? Running at resolutions beyond the HD rez, even on large screens, eliminates any need for FSAA. At that point, you just don't get jaggies that need to be smoothed.
Re:Good write ups, good card (Score:3, Interesting)
By now, we should have had a plethora of different applications running on such a card: audio encoding, compression, encryption, gaming AI. I know about CUDA, but why aren't we seeing such applications?
Flash, web-based video, PowerDVD, and various others at the consumer end of the spectrum (where accuracy is not important). When I first bought an ION-based netbook (about 12 months ago), half the websites on the net could not play video on it without dropping a hideous number of frames. Since I've owned it, there has been a gradual stream of updates to various libs/SDKs/apps (Flash video was the most obvious!) that have made my netbook usable (by utilising the ION GPU).
Are they held back because of lacking OS support? Lacking driver support? Lacking deployment infrastructure? Lacking developer initiative? Is the GPU architecture (disparate memory) unsuitable? Or is CUDA just woefully inadequate to express parallel problems, seeing as it's based on (one of) the most primitive of imperative languages?
Disappointed minds want to know...
It's much simpler than that - it's all about available dev time. For any given app, any new feature has to work on all available systems (and by that I mean, it has an Intel GPU). This means you have to target your code to run on the CPU first. Later, if you have time (or performance is sucky enough to warrant the development effort) you can add in a GPU codepath in places where it makes sense. Sadly, most users don't tend to notice the difference between an app using 30% of the CPU, or one using 5%. As a result, GPU codepaths tend to get dropped down the priority list somewhat.
Writing code for the GPU is not fun (well, it is fun in the hobby project sense, but not so much for a paid job). You have to target your code for GL2.1 Intel, GL2.1 ATI, GL2.1 NVidia, GL3.3 ATI, GL3.3 Nvidia, GL4.0 ATI, GL4.0 Nvidia. At best you've just added an extra week to your QA process. At worst it's batted back and forth between QA and the dev team for a month or more. The bean counting senior management do a quick cost/benefit analysis, and almost always find that the added development time cannot be justified.
Finally..... there were a few features lacking from GPUs (until very recently) that tended to prevent them from being used in any serious environment (the lack of double-precision or ECC memory support springs to mind). That is slowly changing, but until the costs of development on the GPU start to fall, I doubt you'll see too many apps moving to the GPU.....
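The "CPU first, GPU later" pattern described above can be sketched roughly like this. This is a hypothetical example, not any real app's code: `has_gpu` and `gpu_saxpy` are stand-ins for a real driver capability probe and a CUDA/OpenCL codepath, and here the probe simply reports no GPU so the baseline path always runs.

```python
# Sketch of the CPU-first pattern: a baseline codepath that works
# everywhere, with an optional GPU fast path bolted on later.

def has_gpu():
    # A real app would probe the driver here (e.g. via a CUDA or
    # OpenCL runtime query). We pretend no GPU is present.
    return False

def cpu_saxpy(a, xs, ys):
    # Baseline path: must run on every system, Intel IGPs included.
    return [a * x + y for x, y in zip(xs, ys)]

def gpu_saxpy(a, xs, ys):
    # Optional fast path, only implemented if dev time allows.
    raise NotImplementedError("GPU codepath not built for this target")

def saxpy(a, xs, ys):
    if has_gpu():
        try:
            return gpu_saxpy(a, xs, ys)
        except NotImplementedError:
            pass  # fall back to the CPU rather than fail
    return cpu_saxpy(a, xs, ys)
```

The point of the structure is that the GPU path can be dropped from the schedule at any time (or fail QA on one GL target) without breaking the feature, which is exactly why it tends to slide down the priority list.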
Re:Good write ups, good card (Score:3, Interesting)
Re:CPU, GPU... (Score:4, Interesting)
With the prior generation of Atoms, the usual pairing was an Atom plus a fairly antiquated Intel chipset with GMA950 and a fairly high TDP. For just a little extra, you could pair the Atom with Nvidia's chipset instead, which had as good or better TDP and much better integrated graphics. Intel wasn't happy; but the end result was good.
With the newer generation, Intel brought most of the chipset functions onboard, and played hardball with licensing, so that "Ion 2" ended up consisting of, in essence, Nvidia's lowest-end discrete GPU added on to the system via the few PCIe lanes available. Unlike Ion, which was a genuine improvement in basically all respects other than OSS Linux support, Ion 2 meant higher TDP, more board space, and a higher BOM.
Intel bears much of the blame for it; but Ion 2 is largely a dog, particularly when compared to the "CULV" options, which will get you a real (albeit low-end) Core 2 or i3 processor and a similar low-end GPU for not much more than the Atom...