NVIDIA's New Flagship GeForce GTX 580 Tested
MojoKid writes "Even before NVIDIA's GF100 GPU-based GeForce GTX 480 officially arrived, there were myriad reports claiming the cards would be hot, loud, and consume a lot of power. Of course, NVIDIA knew that well before the first card ever hit store shelves, so the company got to work on a revision of the GPU and card itself that would attempt to address these concerns. Today the company has launched the GeForce GTX 580 and, as its name suggests, it's a next-gen product, but the GF110 GPU powering the card is largely unchanged from the GF100 in terms of its features. However, refinements have been made to the design and manufacturing of the chip, along with its cooling solution and PCB. In short, the GeForce GTX 580 turned out to be the fastest single-GPU card currently on the market. It can put up in-game benchmark scores 30% to 50% higher than AMD's current flagship single-GPU card, the Radeon HD 5870. Take synthetic tests like Unigine into account and the GTX 580 can be up to twice as fast."
Competition is good. (Score:5, Insightful)
I am very glad to see the performance crown handed back and forth.
Now if only this were happening in the CPU market...
Re:Good write ups, good card (Score:2, Insightful)
Re:Purely out of curiosity... (Score:1, Insightful)
That's only 7462 Intel 286 processors. A very low number. So somewhere between 1954 and 1982.
SLI/Crossfire isn't always valid (Score:3, Insightful)
For one, there are a lot of motherboards that don't support it. Even new, reasonably high end boards. I have an Intel P35 board with a Core 2 Quad at home, but it has only one x16 slot. At work, a Dell Precision T1500 with an i7, again only one x16 slot. Crossfire/SLI cannot be done in these cases. You have to buy a single, heavier hitting card if you want performance.
Also you need to do a bit more research if you think multi-card solutions work well all the time. They can, but they also can have some serious problems. Some games work great, others can't use a second card at all. There is something to be said for the simplicity of a single card that does what you need.
In terms of needing the speed? Well, it depends on what you have and your tastes. You certainly don't need it to play any game; all games are playable on less. However, you might need it if you desire extremely high resolutions and high frame rates. If you have a 30" monitor and want to drive it at its native, beyond-HD resolution (2560x1600), you need some heavy-hitting hardware to take care of that, particularly if you'd like the game to run nice and smooth, more around 60fps than around 30. You then need still more if you'd like to crank up anti-aliasing and so on.
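To put some rough numbers on that, here is a back-of-the-envelope sketch of the pixel throughput a 2560x1600/60fps target demands versus a more modest setup. The resolutions and frame rates below are just illustrative picks, not benchmark figures, and anti-aliasing multiplies the per-pixel shading work on top of this:

```python
# Back-of-the-envelope comparison of raw pixel throughput demands.
# The monitor sizes and frame-rate targets are illustrative assumptions.

def pixels_per_second(width, height, fps):
    """Pixels the GPU must shade and output each second."""
    return width * height * fps

# 30" panel at its native 2560x1600, targeting a smooth 60 fps
high_end = pixels_per_second(2560, 1600, 60)

# A common 1680x1050 panel where 30 fps is deemed acceptable
mainstream = pixels_per_second(1680, 1050, 30)

print(f"2560x1600 @ 60 fps: {high_end / 1e6:.0f} M pixels/s")
print(f"1680x1050 @ 30 fps: {mainstream / 1e6:.0f} M pixels/s")
print(f"Ratio: {high_end / mainstream:.1f}x")   # roughly 4.6x the work
```

Same game, same settings, yet the high-resolution, high-refresh target asks the card to shade several times as many pixels every second, which is exactly the gap a flagship card exists to fill.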
Now that clearly isn't for everyone, but that's fine. There is no reason not to have a high end as well as a mid range. You shouldn't hate on people who want more performance than you do. In fact, you should thank them. Know why the 5770 is so cheap? Because the 5870 is not. That high-end card financed the development of the new tech; it recouped a lot of the R&D costs, making an economical midrange card a reality.
This is also why nobody seems to be able to break in and compete with nVidia and ATi in graphics. Would-be challengers target the midrange or low end, because development costs on high-end parts are so high. However, nVidia and ATi have extremely solid mid and low range lineups, because they can take the tech in their high-end cards and scale it down.
Re:About 6 days from never (Score:3, Insightful)
What may happen, and what AMD would like to see happen, is for GPU functions to become part of the CPU, so that discrete GPUs go away because CPUs can do the job. However, that'll only be because CPUs have GPU-like logic in addition to their own.
The problem, as Intel found out with Larrabee, is that a cache that works well for CPU tasks does not work well for GPU tasks, and vice versa. For a GPU, bandwidth is everything, while for a CPU it's latency that matters most.
Our CPUs' L1 caches are 32K/64K in size because smaller caches have significantly lower latencies than larger ones. It's quite obvious that a 64K cache is way too small for a GPU, which could literally process 64K of data in only a few of its clock cycles.
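A quick sketch of that "few clock cycles" claim, with illustrative numbers loosely modeled on a GTX 580-class part (the core count and bytes-per-core figure are assumptions, not spec-sheet values):

```python
# Rough sketch of why a 64 KB cache is tiny from a GPU's point of view.
# CORES and BYTES_PER_CORE_PER_CYCLE are illustrative assumptions,
# loosely in line with a GTX 580-class chip (512 stream processors).

CACHE_BYTES = 64 * 1024           # a 64 KB, L1-sized cache
CORES = 512                       # shader cores working in parallel
BYTES_PER_CORE_PER_CYCLE = 8      # e.g. two 32-bit operands per FMA

bytes_per_cycle = CORES * BYTES_PER_CORE_PER_CYCLE   # 4096 bytes/cycle
cycles_to_drain = CACHE_BYTES / bytes_per_cycle

print(f"GPU streams roughly {bytes_per_cycle} bytes per cycle")
print(f"A 64 KB cache is consumed in ~{cycles_to_drain:.0f} cycles")
# ~16 cycles of data: far too little reuse for a small, low-latency
# cache to matter; the GPU lives and dies by raw memory bandwidth.
```

Under those assumptions the whole cache is drained in about 16 cycles, which is why GPU designs trade small low-latency caches for wide, high-bandwidth memory paths.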
Intel never could solve the problem. Larrabee could either be a GPU with poor CPU capabilities, or a multi-core CPU with poor GPU capabilities.
Maybe in the future... not with today's memory types.