
Hands On With Nvidia's New GTX 280 Card

notdagreatbrain writes "Maximum PC magazine has early benchmarks on Nvidia's newest GPU architecture — the GTX 200 series. Benchmarks on the smokin' fast processor reveal a graphics card that can finally tame Crysis at 1920x1200. 'The GTX 280 delivered real-world benchmark numbers nearly 50 percent faster than a single GeForce 9800 GTX running on Windows XP, and it was 23 percent faster than that card running on Vista. In fact, it looks as though a single GTX 280 will be comparable to — and in some cases beat — two 9800 GTX cards running in SLI, a fact that explains why Nvidia expects the 9800 GX2 to fade from the scene rather quickly.'"
  • Power vs Intel (Score:4, Interesting)

    by SolidAltar ( 1268608 ) on Monday June 16, 2008 @11:34AM (#23811299)
    Let me say I do not know anything about chip design, but I have a question:

    How is Nvidia able, year after year, to make these amazing advances in performance while Intel's advances, though still impressive, are comparatively modest?

    As I said, I do not know anything about chip design, so please correct me on any points.

  • Power Consumption (Score:5, Interesting)

    by squoozer ( 730327 ) on Monday June 16, 2008 @11:34AM (#23811305)

    Something that has always concerned me (more so now that I play games less often) is how much power these cards draw when they aren't pumping out a zillion triangles a second playing DNF.

    Most of the time (90%+, probably) I'm just doing very simple desktop-type things. While it's obvious from the heat output that these cards aren't running flat out when redrawing a desktop, surely they must be using significantly more power than a simple graphics card that could perform the same role. Does anyone have any figures showing how much power is being wasted? (A rough back-of-the-envelope sketch follows this comment.)

    Perhaps we should have two graphics cards in the system now: one that just does desktop-type things and one for when real power is required. I would have thought it would be fairly simple to design a motherboard with an internal-only slot to accept the latest and greatest 3D accelerator card, supplementing an onboard dumb-as-a-brick graphics card.
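
    A rough back-of-the-envelope sketch, in Python, of the kind of figure being asked for. Every number here is an assumption made up for illustration (the idle draws, hours of desktop use, and electricity price), not a measurement:

        # Hypothetical idle-power comparison; swap in measured figures from a review.
        highend_idle_watts = 60      # assumed idle draw of a big gaming card
        basic_idle_watts = 10        # assumed draw of a simple/onboard GPU
        hours_desktop_per_day = 8    # time spent on plain desktop work
        price_per_kwh = 0.15         # assumed electricity price, local currency

        wasted_watts = highend_idle_watts - basic_idle_watts
        wasted_kwh_per_year = wasted_watts * hours_desktop_per_day * 365 / 1000
        print(f"{wasted_kwh_per_year:.0f} kWh wasted per year, "
              f"about {wasted_kwh_per_year * price_per_kwh:.2f} at the assumed rate")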

  • by Anonymous Coward on Monday June 16, 2008 @11:35AM (#23811315)
    The article talks about the new card smoking ATI and showing a 50% improvement, yet the benchmark chart at the end of the article shows only a couple of games getting 50% fps boosts, and ATI still outperformed it in Crysis with a card that is available today.

  • by unity100 ( 970058 ) on Monday June 16, 2008 @11:44AM (#23811441) Homepage Journal
    So the people who just bought a 9800 GX2 are royally screwed? It was a 'new' card and all.

    Well done, Nvidia. Very Microsoft of you.
  • by Colonel Korn ( 1258968 ) on Monday June 16, 2008 @11:45AM (#23811463)


    Why would Vista make the performance gains so much less? I could see XP running say 20% better with both cards, but why does Vista penalize the new card so much?

    Digital Restrictions Management strikes again, I guess...

    Vista: where do we want you to go today?

    TFA has some very weird numbers compared to Anandtech, Tom's Hardware, and all the other real review sites that actually tell you the details of their testing. The 280 looks more like it's 50-75% faster than the 9800 GTX in most reviews, and most of those are done in Vista. Framerate in XP vs. Vista is completely even on a 9800 GTX with current drivers (the Vista slowdown went away a long time ago), except in Oblivion, where Vista is about 20% faster for no apparent reason. Maybe the drivers Maximum PC used weren't the same as those used by the serious review sites, or maybe something is wrong with their Vista install.
  • ATI's Response? (Score:2, Interesting)

    by mandark1967 ( 630856 ) on Monday June 16, 2008 @11:49AM (#23811513) Homepage Journal
    Anyone know when the new ATI card will be released?

    Based on the information I've seen, it will be pretty comparable in terms of performance, but at a far lower price.

    I'm hoping that the new ATI card performs within 10% - 15% or so of the GTX280 because I'm getting a bit tired of the issues I have with my current nVidia 8800GTS cards. (SLI)

    I cannot set the fan speed in a manner that will "stay" after a reboot.

    My game of choice actually has some moderate-to-severe issues with nVidia cards and crashes at least a couple of times a week due to some issue with nvcpl, which I have filed bugs against for 10 different driver versions, and they never fix it.

    My last ATI Card was their 9700Pro. I switched to nVidia because, while their drivers are closed source, the installation in Linux is far easier and their performance was pretty top-notch. Now I'm considering switching back to ATI if they can deliver a decent competitor.
  • by wild_quinine ( 998562 ) on Monday June 16, 2008 @12:03PM (#23811731)
    I used to be near the front of the queue for each new line of graphics cards. I used to wait just long enough for the first price drops, and then stump up. Cost me a couple of hundred quid a year, after the sale of whatever my old card was, to stay top of the line. Compared to owning and running a car, which I don't, owning and running a super-rig was positively cheap (in the UK). Some might call it a waste of money, and I have sympathy for that argument, but it was my hobby, and it was worth it, to me.

    This year I put my disposable income towards getting in on all three next generation consoles, and the PC will languish for a long time yet.

    I don't think I've changed, I think the market has changed.

    Graphics cards are getting bigger and hotter, and they no longer feel like cutting-edge kit. They feel like an attempt to squeeze more life out of old technology.

    DirectX 10 as a selling point is a joke: with the accompanying baggage that is Vista, all it does is slow games down, and none of them look any better for it yet. In any case, there are only five or six DX10 games. You can pick up an 8800 GT 512 for less than 150 dollars these days, and it's a powerhouse unless you're gaming in full 1080p. There is no motivation to put one of those power-hungry bricks in my rig. Nothing gets any prettier these days, and FPS is well taken care of at 1680x1050 or below.

    Game over, graphics cards.

    I wonder what will happen if everyone figures this out? Imagine a world in which the next gen of consoles is no longer subsidised, or driven, by PC enthusiasts...

  • by Z34107 ( 925136 ) on Monday June 16, 2008 @12:06PM (#23811775)

    There is no "Vista DRM." That little copy protection stuff lets you play Blu-Ray discs.

    You can rip CDs, DVDs, and pirate t3h internetz if you want. I do so on a daily basis on my Vista x64 machine.

    Now, if OS support for DRM bothers you, take it up with the studios that require it. Not playing DVDs is not an option.

  • by Colonel Korn ( 1258968 ) on Monday June 16, 2008 @12:41PM (#23812243)

    You don't have SLI... I do, and micro stutter is barely noticeable at worst. And I can play at resolutions and anti-aliasing settings that no single card could've made playable.
    I've had three SLI setups (an ancient 3dfx X2 and two nVidia pairs). I liked my first SLI rig, but I wasn't quite satisfied with the feel of the last two compared to a single card, and now that I've learned about this issue I know why. Lots of people say that micro-stutter is barely noticeable, but lots of people also insist that a $300 HDMI cable gives "crisper" video over a 6-foot connection than a $10 HDMI cable. The micro-stutter effect that you can barely notice is the inconsistency in frame delay, which I mentioned ("For instance, the jitter is very distracting to some people."), but beyond that, there's the problem I described in the bulk of my comment. It's not just a question of whether you can tell that frames are arriving in clumps. It's a question of whether you can tell the difference between 60 fps frame delays, which is what you paid for, and 40 fps frame delays, which is what you get (a quick numeric sketch follows below). SLI definitely improves performance, and for those of us who don't mind the jitter (I never did, actually) it is an upgrade over a single card, but even with 100% scaling of fps, the benefit is more like a 33% increase in effective fps.

    It almost seems like micro stutter is some kind of viral ATI anti-marketing bs.
    Definitely not, and definitely not BS, but speaking of ATI, rumor has it that the 4870x2 may adapt the delay on the second frame based on the framerate, eliminating this problem. If it's true, then it will be the best dual-GPU card relative to its own generation of single card ever, by a very large margin. But of course, the rumor may just be some kind of viral ATI marketing bs. ;-) I hope it's true.
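
    A quick numeric sketch in Python of the effective-fps point above. The frame times are made up purely to illustrate how alternating short/long gaps (AFR micro-stutter) can make a nominal 60 fps feel closer to 40 fps:

        # 60 frames delivered in about one second, but arriving in uneven pairs
        frame_times_ms = [8.0, 25.3] * 30

        avg_ms = sum(frame_times_ms) / len(frame_times_ms)
        nominal_fps = 1000.0 / avg_ms                   # what the fps counter reports (~60)

        # Perceived smoothness is limited by the longer gap in each pair,
        # since that is the delay your eye actually waits through.
        effective_fps = 1000.0 / max(frame_times_ms)    # ~40

        print(f"nominal {nominal_fps:.0f} fps, effective {effective_fps:.0f} fps")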
  • Re:Power vs Intel (Score:5, Interesting)

    by cliffski ( 65094 ) on Monday June 16, 2008 @01:17PM (#23812699) Homepage
    As a game dev, and from what I see, I'm assuming it's a stability thing.
    Intel's chips have to WORK, and I mean WORK ALL THE TIME. Getting a single calculation wrong is mega, mega hell. Remember the Pentium calculation bug?
    People will calculate invoices and bank statements with that Intel chip. It might control airplanes or god knows what. It needs to be foolproof and highly reliable.

    Graphics chips draw pretty pictures on the screen.

    It's a different ballgame. As a game dev, my number-one priority for any new chips is that they ship with stable, tested drivers that are backwards compatible, not just great with DirectX 10 and 11.
    If someone wrote code that adhered correctly to the DirectX spec at version 5 or even 2, the new cards should render it faithfully. Generally, they don't, and we have to explain to gamers why their spangly new video card is actually part of the problem in some situations :(
  • Re:Power vs Intel (Score:4, Interesting)

    by mrchaotica ( 681592 ) * on Monday June 16, 2008 @02:12PM (#23813363)

    People will calculate invoices and bank statements with that intel chip. It might control airplanes or god knows what. It needs to be foolproof and highly reliable.

    Graphics chips draw pretty pictures on the screen.

    Nvidia is increasingly marketing its chips as "stream processors" rather than "graphics processors." They are increasingly being used for scientific computation, where reliability and accuracy matter just as much as in the general-purpose case (which reminds me, I need to check whether they support double precision and IEEE 754 yet). In a few years, the structural analysis for the building you're sitting in might be done by a program running on one of these chips.
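
    As a small CPU-side illustration of why double precision matters for that kind of work (plain Python/NumPy, nothing GPU-specific):

        import numpy as np

        # float32 has a 24-bit significand, so above 2**24 it can no longer
        # represent every integer; float64 has no trouble at this scale.
        print(np.float32(2**24) + np.float32(1) == np.float32(2**24))  # True: the +1 is lost
        print(np.float64(2**24) + np.float64(1) == np.float64(2**24))  # False: exact in double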

  • Re:Power Consumption (Score:2, Interesting)

    by willy_me ( 212994 ) on Monday June 16, 2008 @02:47PM (#23813773)

    Ok, I just went over your original posting again and wanted to address the issues you spoke of.

    There's no reason you couldn't power down entire sections of the graphics card that you aren't using.

    There is a price to pay for powering up/down sections of a chip. It takes time and power. One needs to limit the number of times this is done. But the basic idea is good and I believe this is exactly what the card in question does. However, it is not as simple as implied.

    Most video cards support changing the clock speed on the card.

    This is a good solution because, unlike the previous suggestion, it does not incur a time or power penalty. However, it still results in sections of the chip being left on when not in use. In addition, with leakage current being such a big deal in modern transistors, you really can't clock it down too much; one risks losing state in the dynamic elements of the chip. (At least this is how I understand it, but it is not my field; others here would know for sure.)

    I am willing to bet that the chip in question uses both of the techniques you suggested, so they were good suggestions (a toy sketch of how they might combine follows below). However, I still don't believe that the 3D accelerator would ever be "turned off" completely on a modern desktop computer, so implementing such techniques is essential; one cannot rely on a Voodoo II-style solution.
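
    A toy sketch in Python of how the two techniques above might combine. This is not any vendor's actual algorithm; the state names, thresholds, and hysteresis window are all made up for illustration:

        IDLE, LOW_CLOCK, FULL_CLOCK = "idle", "low-clock", "full-clock"

        def next_power_state(gpu_load, idle_frames):
            """Pick a power state from the current load (0.0-1.0) and how long
            the GPU has already been nearly idle, measured in frames."""
            if gpu_load > 0.60:
                return FULL_CLOCK        # heavy 3D work: full clocks, everything powered
            if gpu_load > 0.05:
                return LOW_CLOCK         # desktop drawing: reduced clocks, nothing gated
            # Nearly idle: only gate blocks off after sustained idleness, because
            # each gate/ungate transition itself costs time and power.
            return IDLE if idle_frames > 300 else LOW_CLOCK

        # Example: compositing a desktop at ~3% load for ten seconds at 60 fps
        print(next_power_state(gpu_load=0.03, idle_frames=600))   # -> idle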

  • Re:Power vs Intel (Score:1, Interesting)

    by Anonymous Coward on Tuesday June 17, 2008 @01:17AM (#23819095)
    These latest ones, the ones the article is about, support double precision. It's not so clear whether they support full IEEE 754. When I last messed with scientific computation on GPUs, the biggest problem wasn't the precision (there are techniques for making the most of single precision, after all); it was the fact that they silently carried on through floating-point exception conditions that, in a scientific context, mean "you have screwed up. Stop now." (see the sketch below). In purely graphical applications those conditions would mean a barely noticeable glitch at most, and even in game physics the error wouldn't matter: maybe a crate would bounce imperceptibly wrongly or slightly intersect a wall or whatever.

    But it matters a lot in grownup science land. :-)
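
    A small CPU-side Python/NumPy sketch of the policy difference being described: letting NaNs propagate silently (roughly what those GPUs did) versus trapping the exception and stopping. Nothing here is GPU code:

        import numpy as np

        np.seterr(all="warn")                  # the "quietly carry on" mode
        x = np.sqrt(np.array([-1.0]))          # -> [nan], and the computation keeps going

        np.seterr(all="raise")                 # the "you have screwed up, stop now" mode
        try:
            np.sqrt(np.array([-1.0]))
        except FloatingPointError as err:
            print("stopped:", err)             # stopped: invalid value encountered in sqrt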
