Hands On With Nvidia's New GTX 280 Card

notdagreatbrain writes "Maximum PC magazine has early benchmarks on Nvidia's newest GPU architecture — the GTX 200 series. Benchmarks on the smokin' fast processor reveal a graphics card that can finally tame Crysis at 1900x1200. 'The GTX 280 delivered real-world benchmark numbers nearly 50 percent faster than a single GeForce 9800 GTX running on Windows XP, and it was 23 percent faster than that card running on Vista. In fact, it looks as though a single GTX 280 will be comparable to — and in some cases beat — two 9800 GTX cards running in SLI, a fact that explains why Nvidia expects the 9800 GX2 to fade from the scene rather quickly.'"
  • by Anonymous Coward on Monday June 16, 2008 @11:31AM (#23811277)
    http://anandtech.com/video/showdoc.aspx?i=3334 [anandtech.com]

    http://techreport.com/articles.x/14934 [techreport.com]

    Conclusion: 9800GX2 is faster and cheaper
  • by Colonel Korn ( 1258968 ) on Monday June 16, 2008 @11:39AM (#23811361)
    In most reviews, the 9800GX2 is faster, and it's also $200 cheaper. As a multi-GPU card it has some problems with scaling, and micro-stutter makes it very jumpy like all existing SLI setups.

    I'm not well versed in the cause of micro-stutter, but the result is that frames aren't spaced evenly from each other. In a 30 fps situation, a single card will give you a frame at 0 ms, 33 ms, 67 ms, 100 ms, etc. Add a second card in SLI and let's say you get 100% scaling, which is overly optimistic. Frames now render at 0 ms, 8 ms, 33 ms, 41 ms, 67 ms, 75 ms, 100 ms, and 108 ms. You get twice the frames per second, but they're not evenly spaced. In this case, which uses realistic numbers, you're getting 60 fps, but you might say that the output looks about the same as 40 fps, since the delay between every other frame is 25 ms. (The rough sketch at the end of this comment works through these numbers.)

    It would probably look a bit better than 40 fps, since between each 25 ms delay you get an 8 ms delay, but beyond the reduced effective fps there are other complications as well. For instance, the jitter is very distracting to some people. Also, most LCD monitors, even those rated at 2-5 ms response times, will have issues showing the 33 ms frame completely free of ghosting from the 8 ms frame before the 41 ms frame shows up.

    Most people only look at fps, though, which makes the 9800 GX2 a very attractive choice. Because I'm aware of micro-stutter, I won't buy a multi-GPU card or SLI setup unless it's more than 50% faster than a single-GPU card, and that's still ignoring price. That said, I'm sort of surprised to find myself now looking mostly to AMD's 4870 release next week instead of going to Newegg for a GTX280, since the 280 results, while not bad, weren't quite what I was hoping for in a $650 card.
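
    Here's a rough Python sketch of the arithmetic above, using the illustrative timestamps from my example rather than measured data:

        # Frame timestamps in milliseconds: a single GPU at ~30 fps vs. an SLI
        # pair with perfect scaling but uneven pacing (numbers from the example).
        single_gpu = [0, 33, 67, 100, 133]
        sli_pair = [0, 8, 33, 41, 67, 75, 100, 108, 133]

        def pacing_report(timestamps):
            gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
            avg_fps = 1000 * len(gaps) / (timestamps[-1] - timestamps[0])
            worst_fps = 1000 / max(gaps)  # rate implied by the longest gap
            return gaps, avg_fps, worst_fps

        for name, ts in [("single GPU", single_gpu), ("SLI pair", sli_pair)]:
            gaps, avg_fps, worst_fps = pacing_report(ts)
            print(f"{name}: gaps={gaps}, avg={avg_fps:.0f} fps, worst-gap rate={worst_fps:.0f} fps")

        # The SLI pair averages ~60 fps, but its 25-26 ms gaps mean motion can
        # look closer to 40 fps, which is the micro-stutter point.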
  • Re:Power vs Intel (Score:5, Informative)

    by the_humeister ( 922869 ) on Monday June 16, 2008 @11:39AM (#23811363)
    Because graphics operations are embarrassingly parallel, whereas regular programs aren't.
  • Noise level (Score:5, Informative)

    by eebra82 ( 907996 ) on Monday June 16, 2008 @11:39AM (#23811369) Homepage
    Looks like NVIDIA went back to the vacuum cleaner solution. Blatantly taken from Tom's Hardware:

    During Windows startup, the GT200 fan was quiet (running at 516 rpm, or 30% of its maximum rate). Then, once a game was started, it suddenly turned into a washing machine, reaching a noise level that was frankly unbearable - especially the GTX 280.
    Frankly, reviews indicate that this card is too f*cking noisy and extremely expensive ($650).
  • by pdusen ( 1146399 ) on Monday June 16, 2008 @11:40AM (#23811377) Journal

    and while they support DX10, who needs that when games under the wonderful OS Vista run twice as slowly as they do on XP?
    That would be a good point, if it were true.

    Only you can prevent FUD!
  • Re:Power Consumption (Score:4, Informative)

    by SolidAltar ( 1268608 ) on Monday June 16, 2008 @11:43AM (#23811423)
    More detail (sorry):

    You may be wondering, with a chip this large, about power consumption, as in: Will the lights flicker when I fire up Call of Duty 4? The chip's max thermal design power, or TDP, is 236W, which is considerable. However, Nvidia claims idle power draw for the GT200 of only 25W, down from 64W in the G80. They even say GT200's idle power draw is similar to AMD's righteously frugal RV670 GPU. We shall see about that, but how did they accomplish such a thing? GeForce GPUs have many clock domains, as evidenced by the fact that the GPU core and shader clock speeds diverge. Tamasi said Nvidia implemented dynamic power and frequency scaling throughout the chip, with multiple units able to scale independently. He characterized G80 as an "on or off" affair, whereas GT200's power use scales more linearly with demand. Even in a 3D game or application, he hinted, the GT200 might use much less power than its TDP maximum. Much like a CPU, GT200 has multiple power states with algorithmic determination of the proper state, and those P-states include a new, presumably relatively low-power state for video decoding and playback. Also, GT200-based cards will be compatible with Nvidia's HybridPower scheme, so they can be deactivated entirely in favor of a chipset-based GPU when they're not needed.
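
    To picture the "algorithmic determination of the proper state" part, here is a rough Python sketch of utilization-driven P-state selection. The state names, power figures, and thresholds are made up for illustration; they are not Nvidia's actual states or policy:

        # Hypothetical P-states: (name, rough board power in W). All values are
        # illustrative stand-ins, not GT200's real numbers.
        P_STATES = [
            ("idle", 25),
            ("video", 35),
            ("light3d", 110),
            ("full3d", 236),  # TDP-class state
        ]

        def pick_pstate(gpu_utilization, video_decode_active=False):
            """Pick the lowest-power state that keeps up with current demand."""
            if video_decode_active and gpu_utilization < 0.2:
                return P_STATES[1]
            if gpu_utilization < 0.05:
                return P_STATES[0]
            if gpu_utilization < 0.5:
                return P_STATES[2]
            return P_STATES[3]

        print(pick_pstate(0.02))                            # ('idle', 25)
        print(pick_pstate(0.10, video_decode_active=True))  # ('video', 35)
        print(pick_pstate(0.90))                            # ('full3d', 236)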

  • Re:Power vs Intel (Score:3, Informative)

    by corsec67 ( 627446 ) on Monday June 16, 2008 @11:44AM (#23811443) Homepage Journal
    I think one huge thing is that graphics is a hugely parallelizable task. The operations aren't very complex, so they can just keep cramming more and more processing units onto the chip.

    Intel and AMD are having issues getting over 4 cores per die right now, while this card "... packs 240 tiny processing cores into this space, plus 32 raster-operation processors".
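
    A toy Python/NumPy sketch of what "embarrassingly parallel" means here: each pixel's result depends only on its own inputs, so the same small kernel can be spread across as many cores as you can cram onto the die (NumPy is just standing in for the GPU's cores):

        import numpy as np

        # A fake 1920x1200 RGB framebuffer.
        frame = np.random.rand(1200, 1920, 3)

        def shade(pixels, gamma=2.2, tint=(1.0, 0.9, 0.8)):
            """Per-pixel work: no pixel reads any other pixel's result, so every
            element could be handled by a different core at the same time."""
            return np.clip((pixels ** (1.0 / gamma)) * tint, 0.0, 1.0)

        out = shade(frame)
        print(out.shape)  # (1200, 1920, 3) -- about 2.3 million independent pixels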
  • no surprise (Score:3, Informative)

    by unity100 ( 970058 ) on Monday June 16, 2008 @11:45AM (#23811455) Homepage Journal
    The 8800 GTS cards were much louder than their 3870 counterparts too.

    I don't get why people fall for that: push a chip to its limits, put a noisy fan on it, and sell it as a high-performance card.

    At least with the ATI 3870 you can decide for yourself whether to overclock the card and endure the noise.
  • Re:Power Consumption (Score:5, Informative)

    by CastrTroy ( 595695 ) on Monday June 16, 2008 @11:48AM (#23811493)
    This is how graphics cards used to work. You would plug a VGA cable from your standard 2D graphics card to your, for example, Voodoo II card, and the Voodoo II card would go out to the monitor. You could just have the 3D card working in passthrough mode when not doing 3D stuff. Something like this could work on a single board though. There's no reason you couldn't power down entire sections of the graphics card that you aren't using. Most video cards support changing the clock speed on the card. I'm wondering if this is a problem at all, with any real effects, or whether it's just speculation based on the poster assuming what might happen. Anybody have any real numbers for wattage drained based on idle/full workload for these large cards?
  • The 9800GX2 may be cheaper but it most certainly is not faster, even considering your links. From Anandtech [techreport.com], the charts show a significant speed increase with the new hardware.

    In fact, from the article:
    The GTX 280 delivered real-world benchmark numbers nearly 50 percent faster than a single GeForce 9800 GTX running on Windows XP, and it was 23-percent faster than that card running on Vista. In fact, it looks as though a single GTX 280 will be comparable to--and in some cases beat--two 9800 GTX cards running in SLI, a fact that explains why Nvidia expects the 9800 GX2 to fade from the scene rather quickly.

    Which leads me to the question, are you trolling?
  • by DrYak ( 748999 ) on Monday June 16, 2008 @11:54AM (#23811599) Homepage

    I could see XP running say 20% better with both cards, but why does Vista penalize the new card so much?
    Crysis is a DirectX 10 game.
    When run under Vista, it features tons of additional effects. That is why the speed improvements in Crysis aren't as impressive under Vista.

    PS: And for the record, the Radeon HD3870X2 uses the exact same GDDR3, not GDDR4 as TFA's review says. ATI chose to go with GDDR3 to cut the costs of the dual-GPU setup. (Only a few non-standard boards from 3rd-party manufacturers use GDDR4 and a PCI Express 2.0 bridge.)
  • RTFA (Score:3, Informative)

    by ruiner13 ( 527499 ) on Monday June 16, 2008 @12:02PM (#23811703) Homepage
    If you'd RTFA, you'd note this part:

    Power Considerations

    Nvidia has made great strides in reducing its GPUs' power consumption, and the GeForce 200 series promises to be no exception. In addition to supporting Hybrid Power (a feature that can shut down a relatively power-thirsty add-in GPU when a more economical integrated GPU can handle the workload instead), these new chips will have performance modes optimized for times when Vista is idle or the host PC is running a 2D application, when the user is watching a movie on Blu-ray or DVD, and when full 3D performance is called for. Nvidia promises the GeForce device driver will switch between these modes based on GPU utilization in a fashion that's entirely transparent to the user.
    So, yes, they hear you, and are making improvements in this area.
  • by ericvids ( 227598 ) on Monday June 16, 2008 @12:22PM (#23811991)
    From the Tech Report [techreport.com]:

    TR: Does the removal of this "render pass during post-effect" in the DX10.1 have an impact on image quality in the game?

    Beauchemin: With DirectX 10.1, we are able to re-use an existing buffer to render the post-effects instead of having to render it again with different attributes. However, with the implementation of the retail version, we found a problem that caused the post-effects to fail to render properly.

    TR: What specific factors led to DX10.1 support's removal in patch 1?

    Beauchemin: Our DX10.1 implementation was not properly done and we didn't want the users with Vista SP1 and DX10.1-enabled cards to have a bad gaming experience.

  • by BronsCon ( 927697 ) <social@bronstrup.com> on Monday June 16, 2008 @12:30PM (#23812111) Journal
    Actually, they're comparing the GTX 280 to TWO 9800s in SLI configuration. RTFS FFS
  • by Kamidari ( 866694 ) on Monday June 16, 2008 @12:56PM (#23812431)
    Yes, they're comparing it to two 9800GTXs, which is what a 9800GX2 is: Two 9800GTXs on one board. RTFS indeed. It seems like a case of give and take. The GTX280 is more expensive, but is a single GPU solution, which tends to be more stable. The 9800GX2 is cheaper, runs about as fast, but is a dual GPU unit, so you might have a few more "issues" to deal with in your game playing adventures.
  • by Anonymous Coward on Monday June 16, 2008 @12:58PM (#23812461)

    When run under Vista, it features tons of additional effects.
    Stupidly enough, a 20% loss in framerate is all you get by running the game in DX10 mode. All of the 'DX10' effects can be enabled in DX9 (on XP as well) with simple modifications to the game's configuration files. When you do it, it looks exactly the same as DX10 without the deleterious effect on framerate.

    I suppose if the reviewers were stupid, they may not have run the game in DX9 mode on both XP and Vista, which would account for the difference even if the graphical options were set to the same levels. But running the game in DX10 mode doesn't make it look better; the only difference is that it's slower.

    For those who aren't familiar with this: in XP/DX9 mode the highest-level graphical options are grayed out. This is an entirely artificial limitation; the configuration changes I mentioned simply replace the graphical options with higher ones (they're just integers in a plaintext file), so for example 'High' (DX9) becomes 'Very High' (the 'DX10' effects) in practise.
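
    As a rough sketch of the kind of edit involved (the file name, the line format, and the quality levels here are hypothetical stand-ins, not the game's real config keys):

        import re

        # Bump every quality-level integer from 3 ("High") to 4 ("Very High")
        # in a plaintext config. Purely illustrative; edit a copy of the file.
        def bump_quality(path="game_settings.cfg", old=3, new=4):
            with open(path) as f:
                text = f.read()
            # Assumes lines that look like "some_quality_option = 3".
            patched = re.sub(rf"(=\s*){old}\b", rf"\g<1>{new}", text)
            with open(path, "w") as f:
                f.write(patched)

        # bump_quality()  # uncomment to run against your own config copy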
  • Re:Power Consumption (Score:3, Informative)

    by CastrTroy ( 595695 ) on Monday June 16, 2008 @01:00PM (#23812479)
    However, not all that power is needed for running 3D desktops. I can run Compiz (the Linux 3D desktop) on my Intel GMA 950 without a single slowdown, with all the standard 3D eye candy turned on. So you wouldn't need to run an nVidia 8800 at full clock speed to render the desktop effects. Also, Windows Vista and Linux both support turning off the 3D effects and running in full 2D mode. I'm sure Mac OS supports the same, although I've never looked into it, so it's hard to say for sure, especially since Mac OS runs on such a limited range of hardware.
  • Re:Noise level (Score:3, Informative)

    by clampolo ( 1159617 ) on Monday June 16, 2008 @01:15PM (#23812663)

    and extremely expensive ($650)

    Not at all surprising. Did you see the size of that chip die? You can fit six Penryns on it!! I used to work for a semiconductor company, and the larger the chip, the more expensive it gets. This is because the larger the die is, the less likely it is to be defect-free when it comes out of the fab.
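
    A rough Python sketch of why, using the classic Poisson yield approximation. The defect density and die areas are assumed, ballpark figures for illustration, not fab data:

        import math

        def poisson_yield(die_area_cm2, defects_per_cm2):
            """Fraction of dice with zero defects under a simple Poisson model."""
            return math.exp(-die_area_cm2 * defects_per_cm2)

        D0 = 0.5  # assumed defect density in defects per cm^2
        dies = [("Penryn-sized die (~1.1 cm^2)", 1.1), ("GT200-sized die (~5.8 cm^2)", 5.8)]
        for name, area in dies:
            print(f"{name}: ~{poisson_yield(area, D0):.0%} of dice defect-free")

        # The big die's yield collapses, and you also get fewer candidates per
        # wafer, so cost per good chip grows much faster than die area does.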

  • Re:Yeah but... (Score:3, Informative)

    by masterzora ( 871343 ) on Monday June 16, 2008 @01:44PM (#23813027)
    Actually, NVidia's pretty good about getting Linux drivers for new cards out relatively quickly.
  • Re:Power vs Intel (Score:3, Informative)

    by AmiMoJo ( 196126 ) on Monday June 16, 2008 @02:50PM (#23813813) Homepage Journal
    Also, graphics processors are evolving quickly, whereas CPUs have had basically the same instruction set for 30 years now.

    For example, with the 8000 series, pixel shaders had become very important in modern games, so the cards were optimised for pixel-shading performance much more than the 7000 series was. There is simply no equivalent for CPUs - even stuff like the SSE extensions is really just doing the same work in a more parallel way; it isn't a radically new way of doing things.
  • Those game benchmarks are probably CPU bound.

    So an increase in CPU speed would probably benefit the GTX 280 more than the 9800GX2.

    Of course, we can't know whether my theory is true unless we test it, but it seems logical to me.
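
    A back-of-the-envelope Python sketch of the idea, with made-up per-frame times (the frame rate is set by whichever of the CPU or GPU takes longer per frame):

        # Made-up per-frame costs in milliseconds, purely for illustration.
        cpu_ms = 16.0  # game logic, draw-call submission, etc.
        gpus = {"9800 GX2": 14.0, "GTX 280": 10.0}

        def fps(cpu_ms, gpu_ms):
            # The slower of the two stages sets the frame time.
            return 1000.0 / max(cpu_ms, gpu_ms)

        for name, gpu_ms in gpus.items():
            print(f"{name}: {fps(cpu_ms, gpu_ms):.0f} fps (CPU-bound)")
            print(f"{name} with a 2x faster CPU: {fps(cpu_ms / 2, gpu_ms):.0f} fps")

        # With the slow CPU both cards sit at ~62 fps; halve the CPU time and
        # the GTX 280 pulls ahead, which is the point above.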
  • Re:Power vs Intel (Score:4, Informative)

    by p0tat03 ( 985078 ) on Monday June 16, 2008 @05:08PM (#23815389)

    Precisely. This is something that can be solved by simply throwing more transistors in. Their biggest challenge is probably power and heat, not architecture.

    Not to mention that "programs" on GPUs are ridiculously simple compared to something on a general-purpose CPU. Next time you write a shader, try branching (i.e., if/else): your shader will slow to a relative crawl.
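
    A rough Python/NumPy sketch of why, and of the usual workaround: neighbouring pixels run in lockstep groups, so when an if/else splits them the hardware effectively executes both sides; computing both results and selecting keeps every pixel on the same instructions (NumPy standing in for per-pixel shader code):

        import numpy as np

        # Fake per-pixel brightness values for a small tile of pixels.
        brightness = np.random.rand(8, 8)

        # Branchy version, written out per pixel:
        #     if b > 0.5: out = b * 1.2
        #     else:       out = b * 0.8
        # Pixels in the same lockstep group that disagree about the condition
        # force the hardware to run both sides.

        # Branchless version: compute both sides, then select with a mask.
        bright_side = brightness * 1.2
        dark_side = brightness * 0.8
        out = np.where(brightness > 0.5, bright_side, dark_side)

        print(out.shape)  # (8, 8); every pixel executed the same instructions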

  • by TheNetAvenger ( 624455 ) on Tuesday June 17, 2008 @04:45AM (#23820313)
    Maybe your eyes are better than mine, but I don't think we're even getting that.

    Texture size and the number of objects in a scene in Crysis would be the best examples; there is a difference. Games are moving to levels of detail (especially for HD or 1920x1200-and-up players) that the texture limitations of DX9.0c can't deliver, and this is just one 'tiny' aspect of DX10.

    http://www.tomsgames.com/us/2007/09/18/dx10_part3/page3.html [tomsgames.com]

    Bundling it into Vista is bad, for a slew of reasons, and the shit they've pulled with several 'Vista only' titles,

    DX10 has specific reasons why it only runs on Vista. Go ask the people hacking the libraries onto XP: they will run, but DX10 expects the OS to handle aspects of the GPU that XP doesn't.

    http://arstechnica.com/journals/microsoft.ars/2007/2/14/7060 [arstechnica.com]

    DX10 is designed around Vista because it expects GPU RAM virtualization to be available from the OS (only Vista can do this). DX10 also expects even 'in-game' threads/processes to be prioritized and handled by the OS, and only Vista can do that because it has a pre-emptive scheduler for the GPU; XP doesn't (in fact, no other OS has one). To put these things in XP would mean building a full WDDM for XP, and that is not quite so easy.

    This DX10 design is a carry-over from the Xbox 360 development team; DX10 is MS and Robbie applying what they learned there to take gaming forward on the PC.

    As for the 'Vista Only' titles, there were reasons for them at the time. Take Halo 2: its online play is Games for Windows Live, which at the time used Vista's communication framework, and the Live gaming connection was a Vista-only technology. So Halo 2's development went forward with those constraints, and other internal optimizations in the game just expected the Vista WDDM to be there, etc. Microsoft later went back and wrote Games for Windows Live for XP from the ground up (hence some of the new networking features in XP SP3, added just to support it).

    So it may have seemed nefarious, but it was not a con, just platform-specific feature and optimization design, pure and simple... Sadly, MS was counting on NVidia and ATI to have their WDDM drivers at XP levels at the release of Vista, and this didn't happen. When MS jumped in with NVidia and ATI and 'helped' their driver development, the fruit of this was seen around June '07, as Vista was catching up to XP in gaming performance, and by Sept '07 it had equaled it.

    I see absolutely nothing to recommend Vista over XP, at this time or in the near future.
    This is where Microsoft's marketing sucks. They should do like Apple and list every tiny feature. (Remember the '300 new features' list for Leopard?) If Microsoft did a list like that for Vista, it would run to around 10,000 items.

    If I had time this morning, I could take your circumstances and make a very credible case for Vista. I also understand where you are coming from: Vista is a plumbing and architecture shift, and Microsoft burned the time it could have spent building features on top of those changes with the initial dump of Longhorn. Windows 7 is basically going to be a showcase of what is already in Vista, since it doesn't have any major architecture changes planned.

    Hardly a year out of date. The figures you post are one month old, and involve Vista SP1 final, vs SP3 of XP. I admit I am impressed by the evening out that Vista has managed to achieve, in those tests.

    OK, 'a year' was a bit tongue in cheek.

    A lot of people didn't realize that NVidia and ATI had to write the Vista WDDM drivers from scratch, as it is a dramatically different model from XPDM - from letting Vista do scheduling and RAM virtualization to handing more over to the OS, from the core driver level up to the Aero compositor.

    And even though I think NVidia and ATI could have done better at launch, as they didn't provide drivers to beta testers
