Graphics Technology

NVIDIA's New Flagship GeForce GTX 580 Tested

MojoKid writes "Even before NVIDIA's GF100 GPU-based GeForce GTX 480 officially arrived, there were myriad reports claiming the cards would be hot, loud, and power-hungry. Of course, NVIDIA knew that well before the first card ever hit store shelves, so the company got to work on a revision of the GPU and the card itself to address those concerns. Today the company launched the GeForce GTX 580, and as its name suggests it's billed as a next-gen product, but the GF110 GPU powering the card is largely unchanged from the GF100 in terms of features. However, refinements have been made to the design and manufacturing of the chip, along with its cooling solution and PCB. In short, the GeForce GTX 580 turns out to be the fastest single-GPU card currently on the market. It can put up in-game benchmark scores between 30% and 50% faster than AMD's current flagship single-GPU, the Radeon HD 5870. Take synthetic tests like Unigine into account and the GTX 580 can be up to twice as fast."
  • by fuzzyfuzzyfungus ( 1223518 ) on Tuesday November 09, 2010 @12:01PM (#34174604) Journal
    This GTX 580 is a 3-billion-transistor chip (not counting the RAM on the same card, just the GPU die itself). Does anybody know what year the number of transistors on the entire planet reached the number on this die?
  • by Moryath ( 553296 ) on Tuesday November 09, 2010 @12:11PM (#34174744)

    The problem is, how much does it cost? Radeon 5770s can be had for $120 at Newegg after rebate, so why the hell would I need to waste $500 on this card? I could hook up a pair of 5770s for much less and get similar performance.

    And what the hell PC games actually require it, anyway?

    The last-gen AMD cards, from back when they were beating NVidia's, do just fine. And I'm willing to bet that the "next gen" AMD card will see similar performance increases when it hits next month.

  • by fuzzyfuzzyfungus ( 1223518 ) on Tuesday November 09, 2010 @12:23PM (#34174856) Journal
    With a 244-watt TDP, I suspect that they need every inch of the front of the card, and are constrained only by PCIe form-factor concerns from using more of the back, just to keep the thing from burning out without a fan that sounds like a legion of the damned every time you boot. The entire front of the card is a combination of heatsink (not your extruded-aluminum jobby, but a phase-change vapor-chamber unit) and a shroud to direct airflow.

    If you want to see the board, back off a few price/performance tiers, and you'll get a 90% bare PCB with a dinky little slug of aluminum or copper on the main chip.
  • Next gen? (Score:2, Interesting)

    by Issarlk ( 1429361 ) on Tuesday November 09, 2010 @12:26PM (#34174880)
    So, this card is about as fast as, and consumes about the same power as, a 480, but it's "next gen" anyway?

    That looks like a 480 with the 4 replaced by a 5. Hardly a revolution.
    Just watercool the 480, it's how it's supposed to be used.
  • by Fibe-Piper ( 1879824 ) on Tuesday November 09, 2010 @12:34PM (#34174966) Journal

    Does anyone assume that the synthetic benchmarks achieved by either AMD or NVIDIA are representative of anything more than these companies' efforts to tweak their driver sets against the pre-existing criteria for getting a "good score"?

    Both companies, I believe, have been accused over the years of doing just that while pointing the finger at the other for taking part in the shenanigans.

  • Terrible Summary (Score:3, Interesting)

    by Godai ( 104143 ) * on Tuesday November 09, 2010 @01:14PM (#34175490)

    The /. summary ends with:

    It can put up in-game benchmark scores between 30% and 50% faster than AMD's current flagship single-GPU, the Radeon HD 5870.

    But if you read the original article, the one flaw in the (otherwise good) nVidia card is that it still loses to the 5970, which is -- according to the article -- 'about a year old'. So why is the article mentioned in the summary talking about the 5870 as if it's the flagship? Clearly the 5970 is. Or am I missing something?

  • by xiando ( 770382 ) on Tuesday November 09, 2010 @01:22PM (#34175576) Homepage Journal
    ..never, as long as Nvidia refuses to release even a hint of documentation and insists that GNU/Linux users accept their Binary Blob World Order. I don't really care if this new card is faster than the fastest AMD card; at least I can (ab)use the AMD ones for something. I still have an Nvidia PCI (not PCIe) card on a shelf which does NOT work with the Binary Blob under GNU/Linux, nor does it work with the nouveau joke of a free driver.
  • by Slime-dogg ( 120473 ) on Tuesday November 09, 2010 @02:03PM (#34176154) Journal

    If you have a 30" monitor and want to drive it at its native, beyond HD rez (2560x1600) you need some heavy hitting hardware to take care of that, particularly if you'd like the game to run nice and smooth, more around 60fps than around 30. You then need still more if you'd like to crank up anti-aliasing and so on.

    Isn't the point of AA to make things look better at lower resolutions? Running at resolutions beyond the HD rez, even on large screens, eliminates any real need for FSAA. At that point, you just don't get jaggies that need to be smoothed.
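
    A quick sanity check on that claim: the pixel density of a 30" 2560x1600 panel is only modestly higher than that of a common 24" 1920x1200 panel, so whether the jaggies actually vanish depends more on viewing distance than on the resolution itself. A back-of-envelope sketch in Python (panel sizes assumed purely for illustration):

        import math

        def ppi(width_px, height_px, diagonal_in):
            """Pixels per inch for a panel of the given resolution and diagonal size."""
            return math.hypot(width_px, height_px) / diagonal_in

        print('30" 2560x1600: %.0f PPI' % ppi(2560, 1600, 30))  # ~101 PPI
        print('24" 1920x1200: %.0f PPI' % ppi(1920, 1200, 24))  # ~94 PPI

    At roughly 100 PPI, edges are finer than on older panels, but aliasing is reduced rather than eliminated at typical desk distances.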

  • by robthebloke ( 1308483 ) on Tuesday November 09, 2010 @02:40PM (#34176808)

    By now, we should have had a plethora of different applications running on such a card: audio encoding, compression, encryption, gaming AI. I know about CUDA, but why aren't we seeing such applications?

    Flash, web-based video, PowerDVD, and various others at the consumer end of the spectrum (where accuracy is not important). When I first bought an ION-based netbook (about 12 months ago), half the websites on the net could not play video on it without dropping a hideous number of frames. Since I've owned it, there has been a gradual stream of updates to various libs/SDKs/apps (Flash video was the most obvious!) that have made my netbook usable (by utilising the ION GPU).

    Are they held back because of lacking OS support? Lacking driver support? Lacking deployment infrastructure? Lacking developer initiative? Is the GPU architecture (disparate memory) unsuitable? Or is CUDA just woefully inadequate to express parallel problems, seeing as it's based on (one of) the most primitive of imperative languages?

    Disappointed minds want to know...

    It's much simpler than that: it's all about available dev time. For any given app, any new feature has to work on all available systems (and by that I mean systems with an Intel GPU). This means you have to target your code to run on the CPU first. Later, if you have time (or performance is sucky enough to warrant the development effort), you can add a GPU codepath in the places where it makes sense (a pattern sketched at the end of this comment). Sadly, most users don't tend to notice the difference between an app using 30% of the CPU and one using 5%. As a result, GPU codepaths tend to get dropped down the priority list somewhat.

    Writing code for the GPU is not fun (well, it is fun in the hobby-project sense, but not so much for a paid job). You have to target your code for GL2.1 Intel, GL2.1 ATI, GL2.1 Nvidia, GL3.3 ATI, GL3.3 Nvidia, GL4.0 ATI, GL4.0 Nvidia. At best you've just added an extra week to your QA process. At worst it's batted back and forth between QA and the dev team for a month or more. The bean-counting senior management do a quick cost/benefit analysis, and almost always find that the added development time cannot be justified.

    Finally, there were a few features lacking from GPUs (until very recently) that tended to prevent them from being used in serious environments; the lack of double-precision and ECC memory support springs to mind. That is slowly changing, but until the cost of developing for the GPU starts to fall, I doubt you'll see too many apps moving to it.
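
    For the curious, here is a minimal sketch of the "CPU path first, optional GPU path later" pattern described above. It assumes CuPy as the GPU backend purely for illustration (any NumPy-compatible array library would do), and the workload is a toy row filter, not anything from a real product:

        import numpy as np

        try:
            import cupy as cp  # only importable on machines with an NVIDIA GPU and CUDA installed
            HAVE_GPU = True
        except ImportError:
            HAVE_GPU = False

        def blur_rows(data):
            """Toy workload: 3-tap box filter along each row, on the GPU if one is available."""
            xp = cp if HAVE_GPU else np                  # same array API on either codepath
            a = xp.asarray(data, dtype=xp.float32)
            out = (a[:, :-2] + a[:, 1:-1] + a[:, 2:]) / 3.0
            return cp.asnumpy(out) if HAVE_GPU else out  # callers always get a NumPy array back

        img = np.random.rand(1080, 1920).astype(np.float32)
        print(blur_rows(img).shape)                      # (1080, 1918) on either codepath

    The point is the shape of the code rather than the library: the CPU path is the one that always has to work, and the GPU path is an optional extra that has to justify its own QA cost.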

  • by robthebloke ( 1308483 ) on Tuesday November 09, 2010 @02:44PM (#34176874)
    p.s. As for CAD/CAM software: it actually doesn't push the GPU as much as you might expect. Those apps tend to use the simplest single-pass shading available, so they don't actually need too many GPU cores. What's more important for them is lots and lots of fast GDDR5 RAM....
  • Re:CPU, GPU... (Score:4, Interesting)

    by fuzzyfuzzyfungus ( 1223518 ) on Tuesday November 09, 2010 @04:57PM (#34178950) Journal
    I don't think that they have much choice about "Ion 2" pretty much sucking.

    With the prior generation of Atoms, the usual pairing was an Atom plus a fairly antiquated Intel chipset with GMA950 graphics and a fairly high TDP. For just a little extra, you could pair the Atom with Nvidia's chipset instead, which had an as-good-or-better TDP and much better integrated graphics. Intel wasn't happy; but the end result was good.

    With the newer generation, Intel brought most of the chipset functions onboard and played hardball with licensing, so "Ion 2" ended up consisting, in essence, of Nvidia's lowest-end discrete GPU added to the system via the few PCIe lanes available. Unlike Ion, which was a genuine improvement in basically all respects other than OSS Linux support, Ion 2 meant higher TDP, more board space, and a higher BOM.

    Intel bears much of the blame for it; but Ion 2 is largely a dog, particularly when compared to the "CULV" options, which will get you a real (albeit low-end) Core 2 or i3 processor and a similar low-end GPU for not much more than the Atom...
