NVIDIA's New Flagship GeForce GTX 580 Tested
MojoKid writes "Even before NVIDIA's GF100 GPU-based GeForce GTX 480 officially arrived, there were myriad reports claiming the cards would be hot, loud, and power-hungry. Of course, NVIDIA knew that well before the first card ever hit store shelves, so the company got to work on a revision of the GPU and the card itself to address these concerns. Today the company launched the GeForce GTX 580, and as its name suggests, it's a next-gen product, but the GF110 GPU powering the card is largely unchanged from the GF100 in terms of features. However, refinements have been made to the design and manufacturing of the chip, along with its cooling solution and PCB. In short, the GeForce GTX 580 turned out to be the fastest single-GPU card currently on the market. It can put up in-game benchmark scores 30% to 50% higher than AMD's current flagship single-GPU card, the Radeon HD 5870. Take synthetic tests like Unigine into account and the GTX 580 can be up to twice as fast."
Good write ups, good card (Score:5, Informative)
http://www.pcper.com/article.php?aid=1034 [pcper.com]
http://www.hardocp.com/article/2010/11/09/nvidia_geforce_gtx_580_video_card_review [hardocp.com]
http://www.anandtech.com/show/4008/nvidias-geforce-gtx-580 [anandtech.com]
http://www.legitreviews.com/article/1461/1/ [legitreviews.com]
http://www.techreport.com/articles.x/19934 [techreport.com]
http://www.bit-tech.net/hardware/graphics/2010/11/09/nvidia-geforce-gtx-580-review/1 [bit-tech.net]
Re:CPU, GPU... (Score:5, Informative)
In a "designing your next gaming build" sense, they largely already have. Unless you are a money-is-no-object-e-penis-must-get-longer type gamer, you can generally get better bang for your buck by going with a cheaper CPU and spending the savings on a nicer graphics card. It depends on the game, and there are situations where a truly epic graphics system (two or three top-of-the-line GPUs ganged together with SLI or CrossFire) will be CPU-bound without the best CPU available; but Joe Gamer is, most of the time, better off with a third-tier CPU and a second-tier GPU, or a second-tier CPU and a first-tier GPU.
In smaller systems (where board footprint really counts) or in cheap systems (where package costs and board size really count), the integration of CPU and GPU into a single package proceeds apace, with AMD rolling low-end ATI tech into certain of their newer parts, and Intel trying to make their GMA stuff suck less. The only real wild card is Nvidia: unlike Intel or AMD, they have no x86 cores to speak of; on the other hand, their GPU-computing initiatives are arguably the most advanced in terms of tool and driver maturity. The question is, will they eventually produce an Nvidia equivalent to AMD's and Intel's CPU/GPU combo packages (perhaps by buying VIA, who has adequate-but-deeply-unexciting x86 assets but utter shit GPUs), or will they persist purely as a maker of high-end gaming GPUs and GPU-based compute cards?
Unless the hereditary line of the "PC" as we know it is wholly extinguished, there will always be an x86 CPU floating around somewhere in the block diagram (and, in other types of systems, likely an ARM CPU); but it is already the case that, for many applications, the CPU has gotten fast enough to hit diminishing returns, and the GPU (or just the embedded H.264 decoder) is where the action is.
Re:Purely out of curiosity... (Score:3, Informative)
Re:Next gen? (Score:2, Informative)
From TFA:
Unigine Heaven Benchmark v2.0: 18% better (580: 879, 480: 742)
Enemy Territory: Quake Wars: 14% better (580: 176 FPS, 480: 154 FPS)
Far Cry 2: 14% better (580: 109 FPS, 480: 95 FPS)
Aliens vs. Predator: 16% better (580: 43 FPS, 480: 37 FPS)
Power consumption: 96% of the 480's (580: 377 W, 480: 392 W)
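The percentages quoted above are just the ratios of the raw scores; a quick Python sketch recomputing them from the numbers in the post (the poster's figures look truncated rather than rounded, e.g. Far Cry 2 is really 14.7%):

```python
# Recompute the "% better" figures from the raw 580 vs. 480 scores above.
scores = {
    "Unigine Heaven v2.0": (879, 742),
    "Quake Wars": (176, 154),
    "Far Cry 2": (109, 95),
    "Aliens vs. Predator": (43, 37),
}
for name, (gtx580, gtx480) in scores.items():
    gain = int((gtx580 / gtx480 - 1) * 100)  # truncated percent improvement
    print(f"{name}: {gain}% better")

# Power draw: the 580 pulls about 96% of what the 480 does (377 W vs. 392 W).
print(f"Power: {int(377 / 392 * 100)}% of the 480")
```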
Woot, a ~15% performance increase for the same power consumption! Clearly the 580 is, "as its name suggests, a next-gen product."
If you mean same-gen as the 480, right. If you mean next-gen compared to the 480, clearly not.
So go read some non-synthetic ones (Score:3, Informative)
HardOCP is famous for its real-gameplay ratings. They actually play through the game while testing performance, then find the highest settings the reviewer considers playable. While there is some subjectivity to it, they back it up with FPS numbers, and it's the same reviewer trying everything out. So it gives real, in-game, actually-playing results. I find it maps nicely to what actually happens when I get a card and play games.
http://hardocp.com/article/2010/11/09/nvidia_geforce_gtx_580_video_card_review [hardocp.com]
Re:Terrible Summary (Score:3, Informative)
It can put up in-game benchmark scores between 30% and 50% faster than AMD's current flagship single-GPU, the Radeon HD 5870.
The 5970 is a dual-GPU solution. TBH, it's no surprise that it's faster than a single-GPU solution that is a year newer. I would expect the last-gen card in a dual-GPU setup (whether on one card or via SLI/CrossFire) to outperform the latest next-gen card, especially when the new card is really just an iteration of the architecture used in the last gen. Nothing really surprising about it at all. And I bet you if you get two GTX 580s in SLI, they'll stomp the 5970. That's a bit more of an apples-to-apples comparison (although not 100%, since there are specific bottlenecks that tend to keep two GPUs on a card from performing as well as two discrete GPUs in SLI).
Re:SLI/Crossfire isn't always valid (Score:3, Informative)
Running at resolutions beyond HD, even on large screens, eliminates any real need for FSAA. At that point, you just don't get jaggies that need to be smoothed.
You still get pixel-shimmer though, which FSAA greatly reduces.
Re:CPU, GPU... (Score:3, Informative)
To the best of my knowledge, though, neither Nvidia with their ARM SoCs, nor Intel with their on-package GMAs, nor AMD with their upcoming on-die ATI tech is creating what you might call a full "hybrid" (i.e., a CPU whose instruction set also includes GPU-esque instructions, like MMX or SSE on steroids). At present, they are all just more heavily integrating, for economic and latency reasons, a discrete "CPU" block and a discrete "GPU" block.
Re:Competition is good. (Score:3, Informative)
I'm gonna feed this troll.
What about the Radeon 9700, 9800, X800, 4800, and 5800 before Fermi, and the 6850 before GF110?
Also, ATI cards play games and do it well. I don't know what driver issues you're talking about.
Re:Good write ups, good card (Score:3, Informative)
P.S. My furnace is 93% efficient (16 SEER on the AC side) and my house is only 1,200 sq ft, so in percentage terms it can be a large cost.
Re:Purely out of curiosity... (Score:1, Informative)
Well, the chips of the late 1970s ranged from a few thousand transistors in the 6502 (used in the Apple II, with a variant in the original Nintendo) to tens of thousands for the 8086 (IBM PC) and 68000 (Mac).
So if we only count transistors in CPUs, we'd hit 3 billion before the millionth computer of that kind was sold.
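As a back-of-envelope check (using the rough ~4,000-transistor figure for a late-1970s CPU, not an exact count), a few hundred thousand such machines already add up to one GF110's worth of transistors:

```python
# Back-of-envelope: late-1970s CPUs vs. one ~3-billion-transistor GF110 die.
transistors_per_cpu = 4_000      # rough figure for a 6502-class chip
gf110_transistors = 3_000_000_000

# How many machines does it take to accumulate 3 billion transistors?
machines_to_match = gf110_transistors // transistors_per_cpu
print(machines_to_match)         # 750,000 -- well before the millionth sale

# And a million machines comfortably pass the mark.
print(transistors_per_cpu * 1_000_000)  # 4,000,000,000
```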
However, integrated circuits weren't the first uses of transistors. They replaced vacuum tubes in radios back in the 1950s. Not many transistors in each device, but LOTS of devices sold. Radios, TVs, touchtone phones... surely we hit the billionth transistor sometime in the 1960s.
As for today, Wikipedia's entry on the transistor says we're currently making about 60 million per person on Earth per year.
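Scaled up, that per-person rate is a staggering yearly total; a quick sketch (the ~6.9 billion world-population figure for 2010 is my assumption, not from the post):

```python
# Rough global annual transistor output implied by the 60-million-per-person
# figure. World population of ~6.9 billion (2010) is an assumed round number.
per_person_per_year = 60_000_000
world_population = 6_900_000_000

total_per_year = per_person_per_year * world_population
print(f"{total_per_year:.1e} transistors per year")  # on the order of 10**17
```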