
Hands On With Nvidia's New GTX 280 Card

notdagreatbrain writes "Maximum PC magazine has early benchmarks on Nvidia's newest GPU architecture — the GTX 200 series. Benchmarks on the smokin' fast processor reveal a graphics card that can finally tame Crysis at 1920x1200. 'The GTX 280 delivered real-world benchmark numbers nearly 50 percent faster than a single GeForce 9800 GTX running on Windows XP, and it was 23 percent faster than that card running on Vista. In fact, it looks as though a single GTX 280 will be comparable to — and in some cases beat — two 9800 GTX cards running in SLI, a fact that explains why Nvidia expects the 9800 GX2 to fade from the scene rather quickly.'"
  • Yeah but... (Score:2, Funny)

    by Anonymous Coward
    Can it play Duke Nukem Forever?
    • Re: (Score:2, Insightful)

      by Anonymous Coward
      Can Linux run it?

      Same answer as all cool new hardware: NO!
      • Re: (Score:3, Insightful)

        Same answer as all cool new hardware: NO!

        Easy counter-example would be any new CPU architecture, which is generally adopted by Linux faster than the competition (especially Windows, which is probably what you're comparing Linux to, given the context). AMD64 (and Itanium 2, for that matter) is an example. While Linux can be slow to get support for some things, that's certainly not true for all cool new hardware. What about the PS3? Pandora? Heck, some cool hardware Linux supports would be impossible for a g

      • Re: (Score:3, Informative)

        by masterzora ( 871343 )
        Actually, NVidia's pretty good about getting Linux drivers for new cards out relatively quickly.
    • They still have 10 more years to develop video cards before Duke Nukem Forever comes out!
  • Power vs Intel (Score:4, Interesting)

    by SolidAltar ( 1268608 ) on Monday June 16, 2008 @10:34AM (#23811299)
    Let me say I do not know anything about chip design, but I have a question:

    How is Nvidia able, year after year, to make these amazing advances in performance, while Intel makes only modest (although still great) advances?

    As I said, I do not know anything about chip design, so please correct me on any points.

    • Re:Power vs Intel (Score:5, Informative)

      by the_humeister ( 922869 ) on Monday June 16, 2008 @10:39AM (#23811363)
      Because graphics operations are embarrassingly parallel, whereas regular programs aren't.
      • Re: (Score:3, Informative)

        by AmiMoJo ( 196126 )
        Also, graphics processors are evolving quickly, whereas CPUs have had basically the same instruction set for 30 years now.

        For example, with the 8000 series, pixel shaders had become very important in modern games, so those cards were optimised for pixel-shading performance much more than the 7000 series was. There is simply no equivalent for CPUs; even stuff like the SSE extensions is really just trying to do the same work in a more parallel way, not a radically new way of doing things.
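        To make the "embarrassingly parallel" point concrete, here is a minimal CUDA-style sketch (the kernel, names, and buffer sizes are made up for illustration, not taken from the article): every byte of a frame gets its own thread, no thread needs to know about any other, and that is why you can keep throwing more cores at the problem.

        #include <cuda_runtime.h>

        // Hypothetical per-pixel operation: darken an RGBA frame by 50%.
        // Each thread handles exactly one byte; there is no communication
        // between threads, which is what "embarrassingly parallel" means.
        __global__ void darkenPixels(unsigned char *image, int numBytes)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < numBytes)
                image[i] = image[i] / 2;   // independent work per element
        }

        int main()
        {
            const int numBytes = 1920 * 1200 * 4;   // one 1920x1200 RGBA frame
            unsigned char *d_image;
            cudaMalloc(&d_image, numBytes);
            cudaMemset(d_image, 0xFF, numBytes);    // dummy data: an all-white frame

            // Launch enough 256-thread blocks to cover every byte of the frame.
            int threads = 256;
            int blocks = (numBytes + threads - 1) / threads;
            darkenPixels<<<blocks, threads>>>(d_image, numBytes);
            cudaDeviceSynchronize();

            cudaFree(d_image);
            return 0;
        }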
      • Re: (Score:2, Troll)

        by Lord Ender ( 156273 )
        Why is parallelism embarrassing?
      • Re:Power vs Intel (Score:4, Informative)

        by p0tat03 ( 985078 ) on Monday June 16, 2008 @04:08PM (#23815389)

        Precisely. This is something that can be solved by simply throwing more transistors at it. Their biggest challenge is probably power and heat, not architecture.

        Not to mention that "programs" on GPUs are ridiculously simple compared to something on a general-purpose CPU. Next time you write a shader, try branching (i.e., if/else): your shader will slow to a relative crawl.
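        A rough CUDA illustration of that branching cost (hypothetical kernels, not from the article): threads run in warps of 32 on Nvidia hardware, and if the threads within one warp disagree on an if/else, the warp executes both sides one after the other, so the divergent version pays for both paths.

        #include <cuda_runtime.h>
        #include <math.h>

        // Divergent version: odd and even threads sit in the same warp, so
        // every warp executes BOTH the expensive path and the cheap path.
        __global__ void divergent(float *data, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;

            if (i % 2 == 0)
                data[i] = sinf(data[i]) * cosf(data[i]);  // "expensive" path
            else
                data[i] = data[i] * 2.0f;                 // "cheap" path
        }

        // Uniform version: same total work, but the condition depends on the
        // block, so all threads of a warp agree and only one path runs.
        __global__ void uniform(float *data, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;

            if (blockIdx.x % 2 == 0)
                data[i] = sinf(data[i]) * cosf(data[i]);
            else
                data[i] = data[i] * 2.0f;
        }

        int main()
        {
            const int n = 1 << 20;
            float *d;
            cudaMalloc(&d, n * sizeof(float));
            cudaMemset(d, 0, n * sizeof(float));

            divergent<<<(n + 255) / 256, 256>>>(d, n);
            uniform<<<(n + 255) / 256, 256>>>(d, n);
            cudaDeviceSynchronize();

            cudaFree(d);
            return 0;
        }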

    • by gnick ( 1211984 )

      How is Nvidia able, year after year, to make these amazing advances in performance, while Intel makes only modest (although still great) advances?

      There is more room for improvement in the graphics card/GPU arena than in the CPU arena. Since the CPU market is so much larger, more research has been done there, and those chips are closer to "perfectly" using the available technology while continually expanding the realm of what technology is available.

      And I'll echo the_humeister's statement that graphics operations are much more easily done in parallel than generic computing. You can throw processors/cores at the problem pretty easily and continue to s

    • Re: (Score:3, Informative)

      by corsec67 ( 627446 )
      I think one huge thing is that graphics is a hugely parallelizable task. The operations aren't very complex, so they can just keep cramming more and more processing units onto the chip.

      Intel and AMD are having issues getting over 4 cores per die right now, while this card "... packs 240 tiny processing cores into this space, plus 32 raster-operation processors".
    • Re:Power vs Intel (Score:5, Interesting)

      by cliffski ( 65094 ) on Monday June 16, 2008 @12:17PM (#23812699) Homepage
      As a game dev, and from what I see, I'm assuming it's a stability thing.
      Intel's chips have to WORK, and I mean WORK ALL THE TIME. Getting a single calculation wrong is mega, mega hell. Remember the Pentium calculation bug?
      People will calculate invoices and bank statements with that Intel chip. It might control airplanes or god knows what. It needs to be foolproof and highly reliable.

      Graphics chips draw pretty pictures on the screen.

      It's a different ballgame. As a game dev, my 100% priority for any new chip is that it ships with stable, tested drivers that are backwards compatible, not just great with DirectX 10 and 11.
      If someone wrote code that adhered correctly to the DirectX spec on version 5 or even 2, the new cards should render that code faithfully. Generally, they don't, and we have to explain to gamers why their spangly new video card is actually part of the problem in some situations :(
      • Re:Power vs Intel (Score:4, Interesting)

        by mrchaotica ( 681592 ) * on Monday June 16, 2008 @01:12PM (#23813363)

        People will calculate invoices and bank statements with that Intel chip. It might control airplanes or god knows what. It needs to be foolproof and highly reliable.

        Graphics chips draw pretty pictures on the screen.

        Nvidia is increasingly marketing its chips as "stream processors" rather than "graphics processors." They are being used more and more for scientific computation, where reliability and accuracy are just as important as in the general-purpose case (which reminds me, I need to check whether they support double precision and IEEE 754 yet). In a few years, the structural analysis for the building you're sitting in might be done by a program running on one of these chips.
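        (If anyone else wants to check, here is a quick sketch using the standard CUDA runtime API; the compute-capability-1.3 cutoff corresponds to the GT200 generation, the first with double-precision units. Treat it as a sketch, not vendor documentation.)

        #include <cuda_runtime.h>
        #include <stdio.h>

        // List each CUDA device and whether it reports double-precision support
        // (compute capability 1.3 or higher).
        int main()
        {
            int count = 0;
            cudaGetDeviceCount(&count);

            for (int dev = 0; dev < count; ++dev) {
                cudaDeviceProp prop;
                cudaGetDeviceProperties(&prop, dev);

                int hasDouble = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
                printf("%s: compute capability %d.%d, double precision: %s\n",
                       prop.name, prop.major, prop.minor, hasDouble ? "yes" : "no");
            }
            return 0;
        }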

    • by mikael ( 484 )
      A GPU is a large number of high-speed floating-point units (stream processors), parallelised memory (video RAM), and some clever memory caching (texture units). In earlier GPUs, integer operations were just a special case of floating-point calculations. Later GPUs have separate logic pathways for integer calculations.

      A CPU consists of a single pipelined processor with all sorts of tricks to optimise performance. The Intel P6 [wikipedia.org] article at Wikipedia gives some explanation of these:

      * Speculative executio
  • Power Consumption (Score:5, Interesting)

    by squoozer ( 730327 ) on Monday June 16, 2008 @10:34AM (#23811305)

    Something that has always concerned me (more as I play games less often now) is how much power these cards draw when they aren't pumping out a zillion triangles a second playing DNF.

    Most of the time (90%+, probably) I'm just doing very simple desktop-type things. While it's obvious from the heat output that these cards aren't running flat out when redrawing a desktop, surely they must be using significantly more power than a simple graphics card that could perform the same role. Does anyone have any figures showing how much power is being wasted?

    Perhaps we should have two graphics cards in the system now: one that just does desktop-type things and one for when real power is required. I would have thought it would be fairly simple to design a motherboard with an internal-only slot to accept the latest and greatest 3D accelerator card, supplementing an onboard dumb-as-a-brick graphics card.

    • Re:Power Consumption (Score:4, Informative)

      by SolidAltar ( 1268608 ) on Monday June 16, 2008 @10:43AM (#23811423)
      More detail (sorry):

      You may be wondering, with a chip this large, about power consumption. As in: will the lights flicker when I fire up Call of Duty 4? The chip's max thermal design power, or TDP, is 236W, which is considerable. However, Nvidia claims idle power draw for the GT200 of only 25W, down from 64W in the G80. They even say GT200's idle power draw is similar to AMD's righteously frugal RV670 GPU. We shall see about that, but how did they accomplish such a thing?

      GeForce GPUs have many clock domains, as evidenced by the fact that the GPU core and shader clock speeds diverge. Tamasi said Nvidia implemented dynamic power and frequency scaling throughout the chip, with multiple units able to scale independently. He characterized G80 as an "on or off" affair, whereas GT200's power use scales more linearly with demand. Even in a 3D game or application, he hinted, the GT200 might use much less power than its TDP maximum.

      Much like a CPU, GT200 has multiple power states with algorithmic determination of the proper state, and those P-states include a new, presumably relatively low-power state for video decoding and playback. Also, GT200-based cards will be compatible with Nvidia's HybridPower scheme, so they can be deactivated entirely in favor of a chipset-based GPU when they're not needed.

    • Re:Power Consumption (Score:5, Informative)

      by CastrTroy ( 595695 ) on Monday June 16, 2008 @10:48AM (#23811493)
      This is how graphics cards used to work. You would plug a VGA cable from your standard 2D graphics card to your, for example, Voodoo II card, and the Voodoo II card would go out to the monitor. You could just have the 3D card working in passthrough mode when not doing 3D stuff. Something like this could work on a single board though. There's no reason you couldn't power down entire sections of the graphics card that you aren't using. Most video cards support changing the clock speed on the card. I'm wondering if this is a problem at all, with any real effects, or whether it's just speculation based on the poster assuming what might happen. Anybody have any real numbers for wattage drained based on idle/full workload for these large cards?
      • But all modern operating systems support 3D-accelerated desktops. Mac OS has Quartz Extreme, Windows has Aero (I think), and even Linux has one (although the name escapes me). Your solution would have worked well 5 years ago, but times have changed. The 3D component is now always on.
        • Re: (Score:3, Informative)

          by CastrTroy ( 595695 )
          However, not all that power is needed for running the 3D desktops. I can run Compiz (the Linux 3D desktop) on my Intel GMA 950 without a single slowdown, with all the standard 3D eye candy turned on. So you wouldn't need to run an nVidia 8800 at full clock speed to render the desktop effects. Also, Windows Vista and Linux both support turning off the 3D effects and running in full 2D mode. I'm sure Mac OS supports the same, although I've never looked into it, so it's hard to say for sure. Especially since M
            • I agree; my point was that it is better to have a 3D accelerator that can scale down (as the one in the article does) than to completely bypass the 3D accelerator (as the Voodoo II did). This is because most desktops will have the 3D accelerator activated at all times, regardless of whether or not it is being used extensively.
            • Re: (Score:2, Interesting)

              by willy_me ( 212994 )

              Ok, I just went over your original posting again and wanted to address the issues you spoke of.

              There's no reason you couldn't power down entire sections of the graphics card that you aren't using.

              There is a price to pay for powering up/down sections of a chip. It takes time and power. One needs to limit the number of times this is done. But the basic idea is good and I believe this is exactly what the card in question does. However, it is not as simple as implied.

              Most video cards support changing

          • by anss123 ( 985305 )
            Quartz Extreme is off by default up to 10.4 (don't know about 10.5), so MacOSX can run without it.
          • I would like to see video cards slow down their clock speeds like CPUs do (e.g., AMD Cool'n'Quiet [amd.com]). I don't always play games, 3D stuff, etc. Most of the time, it is surfing the Web, e-mail, newsgroups, and watching videos and Flash (which might need the video card's acceleration to speed things up), etc.
    • by Barny ( 103770 )
      This is why NV came up with their new trick: build an integrated video adapter into all boards, let the high-end cards use the PCIe 2.0 bus to move the framebuffer over to it when playing games, and then, when just doing normal Windows tasks, use the SMBus to turn off these electric heaters.

      Works in Vista only, though, and of course that OS is still showing signs of flopping in the games area, despite DX10 and SP1.
    • by Xelios ( 822510 )
      ATI's last generation of cards had a feature called PowerPlay, which gears the card down when it's not being heavily used. The 4800 series will have the same feature, and judging from TFA, Nvidia's doing something similar with the GT200.
    • RTFA (Score:3, Informative)

      by ruiner13 ( 527499 )
      If you'd RTFA, you'd note this part:

      Power Considerations

      Nvidia has made great strides in reducing its GPUs' power consumption, and the GeForce 200 series promises to be no exception. In addition to supporting Hybrid Power (a feature that can shut down a relatively power-thirsty add-in GPU when a more economical integrated GPU can handle the workload instead), these new chips will have performance modes optimized for times when Vista is idle or the host PC is running a 2D application, when the user is watching a movie on Blu-ray or DVD, and when full 3D performance is called for. Nvidia promises the GeForce device driver will switch between these modes based on GPU utilization in a fashion that's entirely transparent to the user.

      So, yes, they hear you, and are making improvements in this area.

    • Aero graphics must surely be bad for the environment - it prevents most of the GPU from powering down.

    • Even current nVidia cards have power-saving. My QuadroFX1500 has PowerMizer, which will drop the GPU clock down as far as 100MHz (memory clock too) even while pushing polygons. For instance I've run an xscreensaver (blocktube with all options on, low speed though) in xwinwrap on my 1680x1050 desktop while xscreensaver-demo ran the same saver again in a small window and the GPU was toggling between the low and medium speeds. The new ones (as per sibling comment by SolidAltar [slashdot.org]) do much much more, but even the
  • I, for one, now plan on purchasing a new space heater for my box (NOTE: the Nvidia GT200 has a TDP of 236W!)... so long as I can FINALLY have Crysis playable at full resolution!

    -Another good article on the GTX280 (GT200 GPU) at TR: http://www.techreport.com/articles.x/14934 [techreport.com]
  • by Colonel Korn ( 1258968 ) on Monday June 16, 2008 @10:39AM (#23811361)
    In most reviews, the 9800 GX2 is faster, and it's also $200 cheaper. As a multi-GPU card it has some problems with scaling, though, and micro-stutter makes it very jumpy, like all existing SLI setups.

    I'm not well versed in the cause of micro-stutter, but the result is that frames aren't spaced evenly from each other. In a 30 fps situation, a single card will give you a frame at 0 ms, 33 ms, 67 ms, 100 ms, etc. Add a second SLI card and let's say you get 100% scaling, which is overly optimistic. Frames now render at 0 ms, 8 ms, 33 ms, 41 ms, 67 ms, 75 ms, 100 ms, and 108 ms. You get twice the frames per second, but they're not evenly spaced. In this case, which uses realistic numbers, you're getting 60 fps, but you might say that the output looks about the same as 40 fps, since the delay between every other frame is 25 ms.

    It would probably look a bit better than 40 fps, since between each 25 ms delay you get an 8 ms delay, but beyond the reduced effective fps there are other complications as well. For instance, the jitter is very distracting to some people. Also, most LCD monitors, even those rated at 2-5 ms response times, will have issues showing the 33 ms frame completely free of ghosting from the 8 ms frame before the 41 ms frame shows up.
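    To put rough numbers on it (treating the longest gap between frames as the perceived frame time):

    \[
    \bar{t} = \frac{8\,\text{ms} + 25\,\text{ms}}{2} = 16.5\,\text{ms} \approx 60\ \text{fps (what the frame counter reports)},
    \qquad
    f_{\text{eff}} \approx \frac{1000\,\text{ms}}{25\,\text{ms}} = 40\ \text{fps (what it feels like)}.
    \]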

    Most people only look at fps, though, which makes the 9800 GX2 a very attractive choice. Because I'm aware of micro-stutter, I won't buy a multi-GPU card or SLI setup unless it's more than 50% faster than a single-GPU card, and that's still ignoring price. That said, I'm sort of surprised to find myself now looking mostly to AMD's 4870 release next week instead of going to Newegg for a GTX280, since the 280 results, while not bad, weren't quite what I was hoping for in a $650 card.
    • You don't have SLI... I do, and micro-stutter is barely noticeable at worst. And I can play at resolutions and anti-aliasing settings that no single card could've made playable.

      Short answer: it's fine. If you have the money and want to play at extreme resolutions, get SLI.

      It almost seems like micro stutter is some kind of viral ATI anti-marketing bs.
      • by Colonel Korn ( 1258968 ) on Monday June 16, 2008 @11:41AM (#23812243)

        You don't have SLI... I do, and micro-stutter is barely noticeable at worst. And I can play at resolutions and anti-aliasing settings that no single card could've made playable.
        I've had three SLI setups (an ancient 3dfx X2 and two nVidia pairs). I liked my first SLI rig, but I wasn't too satisfied with the feel of the last two compared to a single card, and now that I've learned about this issue I know why. Lots of people say that micro-stutter is barely noticeable, but lots of people also insist that a $300 HDMI cable gives "crisper" video over a 6-foot connection than a $10 HDMI cable. The micro-stutter effect that you can barely notice is the inconsistency in frame delay, which I mentioned ("For instance, the jitter is very distracting to some people."), but beyond that, there's the problem I described in the bulk of my comment. It's not just a question of whether you can tell that frames are coming in in clumps. It's a question of whether you can tell the difference between 60 fps delays, which is what you paid for, and 40 fps delays, which is what you get. SLI definitely improves performance, and for those of us who don't mind the jitter (I never did, actually), it is an upgrade over a single card, but even with 100% scaling of fps, the benefit is more like a 33% increase in effective fps.

        It almost seems like micro stutter is some kind of viral ATI anti-marketing bs.
        Definitely not, and definitely not BS, but speaking of ATI, rumor has it that the 4870x2 may adapt the delay on the second frame based on the framerate, eliminating this problem. If it's true, then it will be the best dual-GPU card relative to its own generation of single card ever, by a very large margin. But of course, the rumor may just be some kind of viral ATI marketing bs. ;-) I hope it's true.
  • I was just about to go buy a new video card! Now I'll hold out!
  • Noise level (Score:5, Informative)

    by eebra82 ( 907996 ) on Monday June 16, 2008 @10:39AM (#23811369) Homepage
    Looks like NVIDIA went back to the vacuum cleaner solution. Blatantly taken from Tom's Hardware:

    During Windows startup, the GT200 fan was quiet (running at 516 rpm, or 30% of its maximum rate). Then, once a game was started, it suddenly turned into a washing machine, reaching a noise level that was frankly unbearable - especially the GTX 280.
    Frankly, reviews indicate that this card is too f*cking noisy and extremely expensive ($650).
    • no surprise (Score:3, Informative)

      by unity100 ( 970058 )
      The 8000-series GTS cards were much louder than their 3870 counterparts, too.

      I don't get why people fall for that: push a chip to its limits, put a noisy fan on it, and sell it as a high-performance card.

      At least with the ATI 3870 you can decide whether you're going to overclock the card and endure the noise or not.
    • Re: (Score:3, Informative)

      by clampolo ( 1159617 )

      and extremely expensive ($650)

      Not at all surprising. Did you see the size of that chip die? You can fit 6 Penryns on it!! I used to work for a semiconductor company, and the larger the chip, the more expensive it gets. This is because the larger the die is, the less likely it is to be defect-free when it comes out of the fab.
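      A rough way to see why, using the simple Poisson yield model (A is the die area; the defect density D0 below is purely illustrative, not a sourced number):

      \[
      Y = e^{-A \cdot D_0}
      \]

      At an assumed D0 of 0.3 defects/cm^2, a small ~1 cm^2 die yields about e^{-0.3} ≈ 74%, while a GT200-sized die (commonly cited at roughly 5.8 cm^2) yields only about e^{-1.74} ≈ 18% under this simple model, so far more of each wafer gets thrown away.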

  • by JohnnyBigodes ( 609498 ) <morphine@digita l m e nte.net> on Monday June 16, 2008 @10:43AM (#23811427)

    and in some cases beat — two 9800 GTX cards running in SLI, a fact that explains why Nvidia expects the 9800 GX2 to fade from the scene rather quickly.

    Bullshit. The 9800GX2 is consistently quite a bit faster (TechReport's very detailed review here [techreport.com]), and it costs around $450, while the GTX 280 costs $650 (with its younger brother, the 260, at $400), the only drawbacks being more power drawn and higher noise. Even then, I think it's a no-brainer.

    Don't get me wrong, these are impressive single-GPU cards, but their price points are TOTALLY wrong. ATI's 4870 and 4850 cards are coming up at $450 and $200 respectively, and I think they'll eat these for lunch, at least in the value angle.

    • by Xelios ( 822510 )
      The 4870 will be $350, [fudzilla.com] not $450. And at that price Nvidia is going to have a hard time convincing me to buy a GTX 280, even if it does turn out to be marginally faster.

      Let's see how the reviews of the 4800 series pan out.
    • ATI's 4870 and 4850 cards are coming up at $450 and $200 respectively, and I think they'll eat these for lunch, at least in the value angle.
      People buying $400 video cards aren't looking for value. Around $200, I could see the price being a factor. However, once you've decided to spend $400 on a video card, price isn't even something you are considering.
        • Not everyone who makes a relatively large investment is "mindless," so to speak. That's what I call an expensive graphics card: an investment. And even the big ones must pay off somehow, and these new cards don't.
          • I find it really hard to follow the logic that an object that will be worth 50% of its current value in a year (and in each consecutive year) is an investment. It would be hard to argue that a new production-model car is an investment. If you kept it in the garage, 25 years later it might be worth more than the original price, by 3 or 4 times. However, if you just took the original money you spent on the car and invested it for 25 years, you would end up way ahead. With a car, at least you could
            • I didn't say it was meant to be a collector's item; that would be quite ridiculous :) What I mean is that, for me, a proper investment is one that stays on top of what I intend to have in terms of resolution/image quality (1680 @ 4xAA) for a reasonable timeframe without upgrades. My 8800GTX has served me extremely well in this regard, as I can now still sell it for almost half its original value and cover roughly half or more of a new card, give or take.
          • by TheLink ( 130905 )
            Well if he can play his desired game NOW in its full "maximum quality" glory, and he typically spends USD400 a night on entertainment, then it could actually help him save money.

            Basically he spends USD400, plays computer games for a few nights, and actually ends up with more money than he would otherwise (I actually know someone who did save some money in a similar way). In contrast if it were USD4000 for a vid card, the calculation could be different - he could get bored of the various games and go back to
      • If you play video games for, say, two hours per day, compare the cost of a $400 graphics card every two years to the cost of seeing a two hour movie every day (or pick your own form of entertainment).

        A $400 video card really is a smart business decision when you look at entertainment-hours per dollar.
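        Putting illustrative numbers on that (the $10 ticket price is an assumption, not a quote):

        \[
        \frac{\$400}{2\ \text{hr/day} \times 730\ \text{days}} \approx \$0.27\ \text{per hour, versus}\quad
        \frac{\$10}{2\ \text{hr}} = \$5\ \text{per hour at the movies.}
        \]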
  • by unity100 ( 970058 ) on Monday June 16, 2008 @10:44AM (#23811441) Homepage Journal
    So people who just bought a 9800 are royally screwed? It was a 'new' card and all.

    Well done, Nvidia. Very Microsoft of you.
    • by Jellybob ( 597204 ) on Monday June 16, 2008 @10:53AM (#23811573) Journal
      AMD and NVidia are always going to release new cards. It's just the way of the industry.

      If you buy a graphics card in the hope that it's going to be the top-of-the-line card for longer than a few months, then you're very much mistaken.

      Buy a card that will do what you need it to, and then just stick with that until it stops being powerful enough for you. Anyone hoping their computer will be "future proof" is heading towards disappointment very fast.
    • So tell me, which idiots believe the world is going to stand still just because they paid out money for something?
      • Everyone who bought an iPhone at launch, apparently. Hell, I did - and didn't feel ripped off when the price dropped later, but apparently I was the only one...
    • by Barny ( 103770 )
      You must be new to the whole "computer part" thing...
    • Does the 9800 stop working, or in some way slow down, because NVidia came out with something new? Did they say they were going to cut off driver support for the 9800? You got what you paid for, and you still have what you paid for. New stuff will always come along that's cheaper and more powerful than what already exists.
    • The price of a 9800GTX has gone from $350 to $220 in a very short time. I just bought a 9800GTX for $220 from NewEgg a few hours ago. They are dropping the price to make room for these new cards, and you would be wise to take advantage of it. The new card may be 50% faster, but it is much more than 50% more expensive at today's prices.
  • ATI's Response? (Score:2, Interesting)

    by mandark1967 ( 630856 )
    Anyone know when the new ATI card will be released?

    Based on the information I've seen on it, it will be pretty comparable in terms of performance, but at a far cheaper price.

    I'm hoping that the new ATI card performs within 10% - 15% or so of the GTX280 because I'm getting a bit tired of the issues I have with my current nVidia 8800GTS cards. (SLI)

    I cannot set the fan speed in a manner that will "stay" after a reboot.

    My game of choice actually has some moderate-to-severe issues with nVidia cards and crashes at le
  • by L4t3r4lu5 ( 1216702 ) on Monday June 16, 2008 @10:54AM (#23811587)
    I run Crysis, all maxed out, on an 8800 GTX, and only drop below 30 fps in the end battle.

    If I want more speed, I'll get another 8800. That card is phenomenal, and about to get a lot cheaper.
    • by ady1 ( 873490 ) *
      Exactly what I was thinking. I am able to run Crysis at 1920x1200 with 8800 GTs in SLI at >40 fps (all settings on Very High).

      I understand using Crysis as a benchmark, but pretending that there wasn't any setup capable of running Crysis at 1920x1200 is an exaggeration.
  • But every time Nvidia releases "THE new, big thing," the prices of the previous and, especially, the second-previous generation of cards drop by a significant amount, making them worth the buck for an occasional gamer who doesn't want to spend a fortune to play games and is happy with his games running on low-to-medium detail settings.
  • Noise... (Score:3, Funny)

    by Guanine ( 883175 ) on Monday June 16, 2008 @11:00AM (#23811687)
    Yes, but will you be able to hear your games over the roar of the fans on this thing?
  • by wild_quinine ( 998562 ) on Monday June 16, 2008 @11:03AM (#23811731)
    I used to be near the front of the queue for each new line of graphics cards. I used to wait just long enough for the first price drops, and then stump up. Cost me a couple of hundred quid a year, after the sale of whatever my old card was, to stay top of the line. Compared to owning and running a car, which I don't, owning and running a super-rig was positively cheap (in the UK). Some might call it a waste of money, and I have sympathy for that argument, but it was my hobby, and it was worth it, to me.

    This year I put my disposable income towards getting in on all three next generation consoles, and the PC will languish for a long time yet.

    I don't think I've changed, I think the market has changed.

    The cards are getting bigger and hotter, and no longer feel like cutting-edge kit. They feel like an attempt to squeeze more life out of old technology.

    DirectX 10 as a selling point is a joke: with the accompanying baggage that is Vista, all it does is slow games down, and none of them look any better for it yet. In any case, there are only five or six DX10 games. You can pick up an 8800 GT 512 for less than 150 dollars these days, and it's a powerhouse unless you're gaming in full 1080p. There is no motivation to put one of those power-hungry bricks in my rig. Nothing gets any prettier these days, and FPS is well taken care of at 1680x1050 or below.

    Game over, graphics cards.

    I wonder what will happen if everyone figures this out? Imagine a world in which the next gen of consoles is no longer subsidised, or driven, by PC enthusiasts...

    • I've been holding out this generation, holding onto my 7900 GT. I like the GT because it delivers solid performance for only 60W, which is half the power ATI's X1900 series was drawing at the time. I've also been able to stall because games like Team Fortress 2 and Quake Wars: ET still look great on my current card.

      Only now with the release of the 8800GT and 9600GT is the power consumption/performance ratio getting reasonable (and yes, the ATI 3870 has similar power consumption to the 8800GT, but cannot mat
  • by alen ( 225700 ) on Monday June 16, 2008 @11:04AM (#23811749)
    How loud is it, and does it need the Hoover Dam to power it up?

    The way things are going, you will need two power supplies in a PC: one for the video card and one for everything else.
  • by kiehlster ( 844523 ) on Monday June 16, 2008 @11:18AM (#23811933) Homepage
    No wonder people say Console killed the PC game star -- "Alright, got my hardware list done. Time to order. Oh, look what just came out, guess I'll wait for prices to drop. Alright, they dropped! No wait, a new processor is out, think I'll wait. Sweet, think I can order now. No, nevermind, Crysis just came out, I'll have to wait until I can afford the current bleeding edge. Awesome, I can afford it now! No, a new GPU just came out that runs the game better. Oh, SATA 600 is coming out. Ah, forget this, I'm buying an Xbox."
    • by aliens ( 90441 )
      And this has been the same story for how many years now?
    • by Rhys ( 96510 )
      Those same people will sadly discover the consoles aren't much better. The PS3/Rock Band setup downstairs needs far-too-common patching (particularly the PS3), and that patching is usually horribly slow. Not to mention little nuggets like the fact that I don't think we can back up our Rock Band saves, so if the hard drive dies, so does our save file.
    • The problem is that people have this perception that you absolutely need these super high end setups just to play games with. The reality is that most of these reviews are done at giant resolutions like 2560 * 1600 on 30" LCDs or 1920 * 1200 on 24" LCDs, which is unlikely to be a common setup. Valve's hardware survey says 3 out of 4 people still use non-widescreen monitors, with the most common sizes being 16", 17", or 19" monitors. Meaning most people probably don't game any higher than 1280 * 1024 or 1
    • by Kelbear ( 870538 )
      My geek friend spent 3 years timing the purchase of his "new PC." He is aware that new tech constantly comes out, so he just kept waiting for new tech releases to coincide within a short time span so that he could maximize the value between iterations. His computer became obsolete within 3 weeks.

      To be fair, he's an extreme case. He's also waiting for an Xbox 360... waiting for the RRODs to be solved in the 3rd or 4th production generation, which will address graphics cooling (unlike the last one, which changed the h
    • It's so true. I just built my first computer in six years. Building it was ten times easier; buying my parts was ten times harder. I had to do so much research on parts to keep the price reasonable but still maintain decent power. I was so hesitant to order because I knew that if I just waited another month I could get something even better than what I'm getting now, but it had already been six months, so I just bit my lip and ordered.

      My worst experience was researching video cards. This GTX200 series just popped ou
  • and it's probably noisier too :D.
  • by fuzzyfuzzyfungus ( 1223518 ) on Monday June 16, 2008 @01:23PM (#23813479) Journal
    The GTX 280 part looks quite powerful, but its die size is really extreme. Anandtech claims that the maximum best-case yield for the 280 is 105 chips per 300mm wafer. TFA notes that Intel can put 6 dual-core Penryns in the same space. These guys [icknowledge.com] quote just under $3,400 for a single 300mm wafer. So, assuming absolutely optimal yield, the GTX 280's core costs on the order of $32 to manufacture, not counting R&D, packaging, distribution, etc., and realistic yields on a die this large could easily multiply that figure several times over. A gigabyte of RAM suitable for a high-end graphics card (read: not 10 dollars' worth of DDR2) adds some more, and the board, passives, and assorted other logic do as well.
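    Spelling that estimate out (the one-in-three yield figure is a guess for illustration, not a sourced number):

    \[
    \frac{\$3{,}400\ \text{per wafer}}{105\ \text{candidate dies per wafer}} \approx \$32\ \text{per die at perfect yield},
    \qquad
    \frac{\$3{,}400}{105 \times \tfrac{1}{3}} \approx \$97\ \text{per good die at a 1-in-3 yield.}
    \]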

    Obviously, the above numbers are wild speculation; but the punchline is that these parts can't possibly be cheap to manufacture. I suspect that NVIDIA will see some nice sales to lunatic early adopters, and they'll probably have a compute-only version of this card for high-end computing; but there is no way it could hit mass-distribution price points. Even at $650, I'm not sure that NVIDIA's margins are all that exciting on this particular part.
  • by schwaang ( 667808 ) on Monday June 16, 2008 @01:30PM (#23813561)
    More and more, these commodity graphics cards are being used for non-graphical high-speed computing, taking advantage of the insane parallelism of the GPUs.

    Someone please develop CUDA [wikipedia.org] benchmarks to be included in future reviews.

    We need several apps: one with a kernel that is trivial enough to be constantly starved for memory bandwidth, one that is the opposite (compute-heavy, memory-light), an integer vs. FP comparison, and something that specifically benefits from the double-precision floating point that only the newest hardware has.
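    Not a real benchmark suite, but here is a sketch of the two extremes I mean (kernel names and sizes are made up): one kernel that just streams memory, and one that does lots of arithmetic per byte touched.

    #include <cuda_runtime.h>

    // Memory-bound extreme: one read and one write per thread, almost no math.
    // Throughput is limited by memory bandwidth.
    __global__ void streamCopy(const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i];
    }

    // Compute-bound extreme: a single value is loaded, then hammered with
    // arithmetic. Throughput is limited by the ALUs, not memory.
    __global__ void computeHeavy(float *data, int n, int iters)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        float x = data[i];
        for (int k = 0; k < iters; ++k)
            x = x * 1.000001f + 0.5f;   // dependent FLOPs, no memory traffic
        data[i] = x;
    }

    int main()
    {
        const int n = 1 << 24;           // 16M floats, 64 MB per buffer
        float *d_in, *d_out;
        cudaMalloc(&d_in, n * sizeof(float));
        cudaMalloc(&d_out, n * sizeof(float));
        cudaMemset(d_in, 0, n * sizeof(float));

        int threads = 256, blocks = (n + threads - 1) / threads;
        streamCopy<<<blocks, threads>>>(d_in, d_out, n);
        computeHeavy<<<blocks, threads>>>(d_out, n, 1000);
        cudaDeviceSynchronize();

        cudaFree(d_in);
        cudaFree(d_out);
        return 0;
    }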

    Get back to me soon, mmmmK?

"If there isn't a population problem, why is the government putting cancer in the cigarettes?" -- the elder Steptoe, c. 1970

Working...