NVIDIA GTX 295 Brings the Pain and Performance

Vigile writes "Dual-GPU graphics cards are all the rage and it was a pair of RV770 cores that AMD had to use in order to best the likes of NVIDIA's GeForce GTX 280. This time NVIDIA has the goal of taking back the performance crown and the GeForce GTX 295 essentially takes two of the GT200 GPUs used on the GTX 280, shrinks them from 65nm to 55nm, and puts them on a single card. The results are pretty impressive and the GTX 295 dominates in the benchmarks with a price tag of $499."

Comments Filter:
  • Holiday Shopping (Score:2, Insightful)

    by Szentigrade ( 790685 )
    Just in time for holiday shopping!
    • I have a GTX 280, and I think it's suspicious that it's absent from their benchmarks, to say the least. Why would you include GTX 260 and an ATI card, but not GTX 280?
      • Because this new card uses the 260's memory bus and memory effects dominate at higher resolutions with AA turned up? Because its core clock is the same as the 260? Yes, its cores have the same number of shaders as a 280, but if you run at 1920x1200 or above it's probably more like a pair of 260's in SLI than a pair of 280's - and there's no point in having a $500 card if you play on a crappy screen.
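        For a rough sense of how much the bus matters, here's a back-of-the-envelope sketch in plain C++ (the bus widths and effective memory rates below are the commonly quoted specs for these cards, so treat the exact figures as approximate):

          #include <cstdio>

          // Theoretical peak memory bandwidth = (bus width in bytes) * (effective transfer rate).
          // Card specs are the commonly quoted figures, not measured numbers.
          static double bandwidth_gbs(int bus_bits, double effective_mts) {
              return (bus_bits / 8.0) * effective_mts * 1e6 / 1e9;  // GB/s
          }

          int main() {
              printf("GTX 280 (512-bit, ~2214 MT/s): %.1f GB/s\n", bandwidth_gbs(512, 2214));
              printf("GTX 260 (448-bit, ~1998 MT/s): %.1f GB/s\n", bandwidth_gbs(448, 1998));
              // The GTX 295 pairs two GPUs that each get the narrower 448-bit bus, which is
              // why high resolutions with AA lean on it harder than the shader count suggests.
              return 0;
          }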
        • by Kagura ( 843695 )

          Because this new card uses the 260's memory bus and memory effects dominate at higher resolutions with AA turned up? Because its core clock is the same as the 260? Yes, its cores have the same number of shaders as a 280, but if you run at 1920x1200 or above it's probably more like a pair of 260's in SLI than a pair of 280's - and there's no point in having a $500 card if you play on a crappy screen.

          You must have completely missed what I was saying. I own a GTX 280 card, and I would like to see how my GTX 280 compares to the new GTX 295. Why would you benchmark a graphics card without comparing it to the card that was top-of-the-line for the last several months straight?

  • ugh (Score:3, Insightful)

    by larry bagina ( 561269 ) on Thursday December 18, 2008 @12:09PM (#26160781) Journal

    this is like the razor wars (double blade! triple blade! quad blade! pento blade!). With OpenCL and DirectGPU (or whatever MS is calling it this week), this should be good for anyone trying to build a cheap superGPU cluster.
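    If you do go down the cheap-GPU-cluster road, a dual-GPU board like this just shows up as two devices to the runtime. A minimal CUDA sketch of enumerating them (assumes the CUDA toolkit is installed; error handling omitted):

      #include <cstdio>
      #include <cuda_runtime.h>

      int main() {
          int count = 0;
          cudaGetDeviceCount(&count);  // a dual-GPU card like the GTX 295 reports two devices
          for (int i = 0; i < count; ++i) {
              cudaDeviceProp prop;
              cudaGetDeviceProperties(&prop, i);
              printf("Device %d: %s, %d multiprocessors, %.0f MB\n", i, prop.name,
                     prop.multiProcessorCount, prop.totalGlobalMem / (1024.0 * 1024.0));
          }
          return 0;
      }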

    • Re:ugh (Score:5, Funny)

      by Spazztastic ( 814296 ) <spazztastic&gmail,com> on Thursday December 18, 2008 @12:12PM (#26160821)

      this is like the razor wars (double blade! triple blade! quad blade! pento blade!).

      Clearly you don't value a close shave.

    • by nbert ( 785663 )
      At least those pento blades don't consume a whopping 289 W while you use them. Somehow most previews don't even mention power consumption. The author of the article linked above actually claims that this card "properly balances GPU performance and power efficiency". By that logic, anything that doesn't catch fire is power efficient...
      • Re:ugh (Score:5, Informative)

        by ThePhilips ( 752041 ) on Thursday December 18, 2008 @01:12PM (#26161657) Homepage Journal

        Somehow most previews don't even mention power consumption.

        Had you RTFA properly, you'd have seen that the card is not officially out yet, and that nVidia asked reviewers to withhold further details because the BIOS might still get tweaked.

        By that logic everything which does not start to burn is power efficient...

        This is not an absolute metric (or is it "yardic" in the US?). I presume they compare it to the 4870, on which the infamous GDDR5 alone - even when idle - draws a whopping 40W [techpowerup.com]. The 4870X2 already ships with a tweaked factory BIOS, and yet with twice as much GDDR5 it still consumes the same 40W. Yes - the RAM alone consumes 40W.

      • Well for one, this is a "preview", which usually means NVidia sent a bunch of propaganda for them to disseminate to the masses. These things come with NDAs and a variety of other restrictions, in exchange for getting the "scoop" on the new product, and raking in a few extra yen from the ad networks.

        You will get a real review once the product hits the shelves, and a real person performs real tests. Anything else should be taken with a grain of salt, as most "review" sites exist to make money first and foremost.

  • by Stereoface ( 1400061 ) on Thursday December 18, 2008 @12:16PM (#26160895)
    I might consider upgrading from my 2MB VGA after seeing it in action... :)
    • I remember when my friends at school called me a liar because I said our graphics card had 4MB of RAM. It was a sweet ATI VLB card for my dad's architecture workstation.
    • I might consider upgrading from my 2MB VGA after seeing it in action... :)

      And what exactly is wrong with 1024x768x24bpp?

      (No, seriously. I used that very setup from 1995 to 2001. Then I got a PCI Voodoo4 cheap because 3dfx had just gone bust, and then got drunk one night and bought a 19" CRT on an auction site, and discovered the joy of 1600x1200.)

      • by smoker2 ( 750216 )
        Unfortunately, you have to spend a good deal more to get a TFT that can handle 1600x1200. I "upgraded" from a 19" CRT to a 19" TFT and am stuck at 1024 (at 32-bit, admittedly). But the TFT cost more than the CRT at the time. Good old Dell.
  • Two years from now, we'll be able to saddle up a graphics card and fly through the skies on it.
    • by lymond01 ( 314120 ) on Thursday December 18, 2008 @12:43PM (#26161301)

      All I know is that my graphics box (I call it a graphical) houses a nice little motherboard with a cute Intel chip, some hard drives, and I think I even have a sound card plugged into it.

      I remember when the graphics "card" was simply part of the computer -- these days, all the other components are part of my graphical.

  • by JoeMerchant ( 803320 ) on Thursday December 18, 2008 @12:28PM (#26161085)
    I'm glad that people are out there buying graphics cards that can render the latest games in QuadHD resolution at 120 frames per second... it makes the integrated graphics in eee class PCs that much better when the tech trickles down 5 years later.
    • It's not just about the graphics. GPUs are being called upon to do much more, from AI to physics to Folding@home, and even encoding and decoding audio and video.
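      In case "general number crunching on a GPU" sounds exotic: the usual hello-world is a SAXPY-style loop offloaded via CUDA. A minimal, illustrative sketch (not anyone's production code):

        #include <cuda_runtime.h>

        // y[i] = a * x[i] + y[i] -- the classic first GPGPU kernel.
        __global__ void saxpy(int n, float a, const float* x, float* y) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) y[i] = a * x[i] + y[i];
        }

        void run_saxpy(int n, float a, const float* hx, float* hy) {
            float *dx, *dy;
            cudaMalloc(&dx, n * sizeof(float));
            cudaMalloc(&dy, n * sizeof(float));
            cudaMemcpy(dx, hx, n * sizeof(float), cudaMemcpyHostToDevice);
            cudaMemcpy(dy, hy, n * sizeof(float), cudaMemcpyHostToDevice);
            saxpy<<<(n + 255) / 256, 256>>>(n, a, dx, dy);
            cudaMemcpy(hy, dy, n * sizeof(float), cudaMemcpyDeviceToHost);
            cudaFree(dx);
            cudaFree(dy);
        }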

      • As a programmer who does a great deal of data crunching, I sincerely hope that Intel's 80+ core CPU comes along quickly to crush the silliness out of people who are trying to find applications for GPU "cores."

        • In order to do that, Intel would have to adopt much of the architecture of a GPU, and hence become the very thing you dislike. Right now, GPUs are here and being used. Your fictional core isn't, and with present limitations it most likely won't be showing up for some time.

          • I signed up for CUDA when it launched (early '07?). Due to conditional execution, only about 20% of what we were crunching at the time would have benefited. We were far better off optimizing and parallelizing to take advantage of 8 slightly memory-bound cores that could actually do conditional execution.
            • For me it was the opposite. I started with the Cell, but because its SIMD requires the operands to be adjacent to really gain anything (I could have loaded from multiple locations and then moved them all into the 128-bit register, but by then the gains would be lost), I switched to CUDA, and it's working, although I do have to jump through hoops on the conditionals to make sure the threads stay in sync with their instructions.
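              For anyone wondering what that hoop-jumping looks like: threads in a warp execute in lockstep, so a data-dependent branch can force the hardware to walk both sides with some threads masked off. A toy CUDA sketch of the idea (not the actual code being described above, obviously):

                // Neighbouring threads may disagree at the branch, so the warp can end up
                // executing both paths with threads masked off (divergence).
                __global__ void clamp_divergent(const float* in, float* out, int n) {
                    int i = blockIdx.x * blockDim.x + threadIdx.x;
                    if (i >= n) return;
                    if (in[i] > 0.0f)
                        out[i] = in[i];
                    else
                        out[i] = 0.0f;
                }

                // The same result expressed branch-free, so every thread in the warp runs
                // the same instruction stream (the compiler often handles trivial cases
                // like this anyway; the hoops get uglier as the conditionals get bigger).
                __global__ void clamp_branchless(const float* in, float* out, int n) {
                    int i = blockIdx.x * blockDim.x + threadIdx.x;
                    if (i >= n) return;
                    out[i] = fmaxf(in[i], 0.0f);
                }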
        • by caerwyn ( 38056 )

          Take a look at the latest issue of Computer - there's an interesting (if technically fairly light) look at the performance ratios of various multicore designs. The present "CPU surrounded by lots of simple processing cores" design, as implemented by a CPU + GPU combination, turns out to have the best performance/space and performance/watt ratios.

          Sure, there are some situations where you can do massive parallelization but each individual thread needs the flexibility of a full CPU, but there are at least as many, i

        • By "silliness" do you mean the fact that my first pass (simple and unoptimized) at running my calcs on GPU gave me a 10x speed up (including memory xfer)? I can get through thousands of generations of my simulation in hours instead of days.

          If Intel brings out their 80-core proc, I will be one of the first to compare it to the GPU to see which I should continue to use; until then I will use my GPU for real performance gains.
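          For what it's worth, the "including memory xfer" part is easy to measure honestly: wrap the whole copy-in / kernel / copy-out round trip in CUDA events. A rough sketch (the kernel below is just a stand-in, not the actual simulation):

            #include <cuda_runtime.h>

            // Placeholder kernel standing in for one generation of the simulation.
            __global__ void step(float* data, int n) {
                int i = blockIdx.x * blockDim.x + threadIdx.x;
                if (i < n) data[i] = data[i] * 0.5f + 1.0f;
            }

            // Returns wall time in ms for transfer in + compute + transfer out.
            float timed_generation(float* host, float* dev, int n) {
                cudaEvent_t start, stop;
                cudaEventCreate(&start);
                cudaEventCreate(&stop);

                cudaEventRecord(start);
                cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
                step<<<(n + 255) / 256, 256>>>(dev, n);
                cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
                cudaEventRecord(stop);
                cudaEventSynchronize(stop);

                float ms = 0.0f;
                cudaEventElapsedTime(&ms, start, stop);
                cudaEventDestroy(start);
                cudaEventDestroy(stop);
                return ms;
            }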
          • You can claw your way up to 16 cores today if you work with 4 socket motherboards...

            The data I work with isn't typically "fully massaged", there are a lot of sparse areas to consider quickly to identify areas that need more attention.

            I still find cases where the algorithms can be sped up more than 10x by eliminating un-necessary work - this might not happen in more mature fields, but especially in cases where the new programmer has implemented something, there's usually more speed to be gained in a good c

        • Yeah, and that general-purpose CPU will also consume about 80x the power needed by a vector processor that is twice as fast at 2D/3D rendering, scientific modeling, financial calculations, video transcoding, data compression, and all the other tasks that GPGPU/stream processors are used for.

          A general-purpose scalar CPU will never replace specialized vector co-processors because it's the wrong tool for the job. Most applications that vector coprocessors are used for involve processing very large d

    • by IorDMUX ( 870522 )

      it makes the integrated graphics in eee class PCs that much better when the tech trickles down 5 years later.

      This is also good news right now for the "sweet spot" gaming PC builders. Each time these new bleeding-edge-$500-200W-XXTREME cards come out, the previous two generations of graphics cards tend to suddenly drop drastically in price.

      When I built my current middle-of-the-road gaming computer, I put in an ATI HD3850 for $150, with the expectation of adding a second on Crossfire once the price drop occurred. Looking at Newegg.com, the 3850's have hit ~$55.00 this week. My computer looks to be due for an

  • by je ne sais quoi ( 987177 ) on Thursday December 18, 2008 @12:30PM (#26161115)
    As a somewhat mystified recent purchaser of a GTK 260 from eVGA, I was amazed to discover that NVIDIA has such problems with their linux drivers [nvnews.net]. I owned one of their older cards before, and when I built a new computer I thought it was a no-brainer to pick NVIDIA for Linux (freedom issues notwithstanding, I decided to go with the pragmatic choice). Only after I ran afoul of the PowerMizer slow-switching crap, and other weird issues such as the misreporting of the screen refresh frequency, did I start digging and realize how many problems there are. As it is, I've got the beta 180.16 driver installed and it's better, but I still had to do some tricks to shut off the PowerMizer feature. Just this morning I had some other weird problem with screen corruption that never happened with my old hardware running more or less the same software on top of it.

    For me personally, I couldn't care less how great the card hardware is if the drivers suck. NVIDIA, fix your linux drivers please. Next time I'll give a much harder look at amd.
    • Next time I'll give a much harder look at amd.

      I'll save you 5 minutes of research....stick with nVidia.

      But in all seriousness, I agree with your point. It seems like their Linux drivers have taken a shit compared to previous releases. Personally, I have a lot of artifacting issues in KDE4 that are apparently related directly to nVidia's drivers, from what I've read.

      • But in all seriousness, I agree with your point. It seems like their Linux drivers have taken a shit compared to previous releases. Personally, I have a lot of artifacting issues in KDE4 that are apparently related directly to nVidia's drivers, from what I've read.

        Performance issues, by any chance? I've been baffled why my KDE 4 performance is so terrible compared to my Gnome performance, and I have a reasonable nVidia notebook chip (Quadro NVS 140M).

        A lot of the forums have similar complaints, but most people seem to indicate that their problems went away with the 177.80 drivers, which I have installed.

        I was hoping the forthcoming nVidia driver will help, but from how people are talking, I've got to wonder if it's even wise to install it when it's released.

      • by Splab ( 574204 )

        Bad advice - the new AMD cards run fine under Linux. Since the Radeon 9600 I've never had any problems getting ATI/AMD cards to run under Linux.

    • I've still been buying nVidia for my Linux boxes. Is ATI finally a better choice?

      • by cecom ( 698048 )

        I don't know about ATI, but NVidia is a _terrible_ choice! I also used to buy NVidia for all my Linux boxes and recommend it to everybody. I was wrong. The problem is that the free NVidia drivers are extremely slow. They are actually slower than using the generic VESA driver with Intel graphics.

        I don't know why - I suspect it has something to do with reading from display memory - but it is a fact. I have a relatively fast quadcore machine, and yet when I am using the free NV drivers, it is unusable for Inte

        • Generally what I've done is prefer Intel for my work laptops, because I wanted the good quality of the Intel open source drivers.

          But on my home computers, which I sometimes use for gaming in Linux and/or Windows, it's really a choice between nVidia and ATI. I don't care much whether I use the open or closed source drivers on those cards, as long as they work well.

          The problem for now seems to be that at least in KDE 4, nVidia's closed drivers aren't a good match for the implementation of KDE 4, but ATI is

        • However I am using 64-bit kernel with 32-bit userspace (this is the most reasonable choice if you have more than 1 GB of RAM)

          I'm sorry... what? I don't see how that's a reasonable choice at all.

    • fix (Score:4, Insightful)

      by doti ( 966971 ) on Thursday December 18, 2008 @12:51PM (#26161411) Homepage

      NVIDIA, fix your linux drivers please.

      NVIDIA, open your linux drivers please.

    • by Kjella ( 173770 )

      Next time I'll give a much harder look at amd.

      Good advice - don't. Maybe when their open source driver rolls around, but the fglrx driver has a lot of really bad issues; take a look at the Phoronix forums whenever a new AMD driver comes out. Personally I run KDE3 on an 8800 GTS and all the 3D acceleration and whatnot is working just fine for me. No tearing in video (a problem that seems to plague AMD users endlessly), but then I use it in a desktop machine.

      • So... you're recommending against ATI without having actually used it?

        I'm currently using Nvidia (on price), but I was on an ATI card 6 months ago and it worked fine. The Nvidia drivers were slightly better when I switched, but only very slightly.

    • Hah ha, whoops -- that should be GTX 260. Too much time using gtk apps. :)
  • Microstutter (Score:3, Interesting)

    by Anonymous Coward on Thursday December 18, 2008 @12:54PM (#26161443)

    I wonder if this card will suffer from microstutter. The 9800GX2 benchmarked very well but real world performance was lacking because the card essentially varied between very high fps and very low fps, so it still lagged even though it got decent average fps.

    With these dual cards it's best to look at their low fps rating. An average fps is often misleading.
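    A quick way to see why the average misleads: compute both the average fps and the worst-frame fps from per-frame times. A plain host-side C++ sketch with made-up frame times alternating fast/slow, the microstutter pattern:

      #include <algorithm>
      #include <cstdio>
      #include <vector>

      int main() {
          // Hypothetical frame times in ms: fast/slow alternation like a stuttering dual-GPU setup.
          std::vector<double> frame_ms = {8, 40, 8, 42, 9, 38, 8, 41, 9, 39};

          double total = 0;
          for (double t : frame_ms) total += t;
          double avg_fps = 1000.0 * frame_ms.size() / total;

          double worst_ms = *std::max_element(frame_ms.begin(), frame_ms.end());
          double worst_fps = 1000.0 / worst_ms;

          // Prints roughly 41 "average" fps, but the slow frames land near 24 fps --
          // that is the lag you feel despite a decent-looking average.
          printf("average fps: %.1f, slowest-frame fps: %.1f\n", avg_fps, worst_fps);
          return 0;
      }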

    • That's very true. I recently purchased a 1GB 4850 over the 512MB version because, while in the reviews they both showed identical frame rates, actual gameplay was not equal. "One" of the problems Crysis has is a zoom lag when you use a scope or binocs, because it has to load the textures.

      The 512MB card had a bit of a lag, but the 1GB did not.
  • Ten years ago the video card wars were in full swing. Each generation brought amazing new advances and impressive technology.

    But nVidia and ATI haven't realized that we passed the point of diminishing returns years ago. Mobility and battery life are what matter. And I know there are hardcore PC gamers out there, but there are only a handful of companies even bothering to push the high-end graphics, so you buy a $500 video card and there are exactly ZERO games that take full advantage of it. Wait a year or

    • by TheKidWho ( 705796 ) on Thursday December 18, 2008 @01:04PM (#26161559)
      See, this is where you're uninformed: the new GTXes have lower power consumption than the 9000 series at idle and in 2D applications.
    • I'd say let the early adopters go for it. It is usually the early adopters that help pay for the lion's share of the development anyway. A year from now, the same performance will be had for less than half, and there should be several games that can play it.

    • No, the render farms don't need this level of graphics either... they render off-screen at seriously low frame rates (frames per minute, not per second); they go for quality, not speed.

      They do use GPUs to render, but they're using them as fast general-purpose processors (in a cluster), not as GPUs at all...

      The only people using high-end graphics cards are gamers and a very few graphics artists...

      • Not true - a lot of engineering departments depend on computers running high-end Quadro video cards. The difference between a computer with one and a computer without is staggering in high-end modelling software.

        I also expect in the near future to see accelerated CAE/FEA built into new CAD packages that utilize the power of the GPU for processing.
    • If there is enough of a market for them to develop these cards and stay in business, then what's the problem? There is a need, and they are filling it.
    • I agree with you, but I also think that raster 3D is hitting a downward slope in "realistically programmable" feature sets, and hopefully ray tracing or hybrid rendering will start to pick up in its place. I actually think keeping the bleeding-edge market going is a good thing for both attainable real-time ray tracing and lower power consumption. Even today, nVidia/ATI have to reduce energy costs to go faster.

    • by ceoyoyo ( 59147 )

      ATI and nVidia know all about the mobile, embedded and low power markets. That's where they make most of their money. You only see the product announcements for the latest and greatest, super fast, gamer wet dream cards because those are the only ones that make the news.

      ATI and nVidia have to do R&D, develop faster cards even if they are impractical for their target market. Then, once those features have been tried and tested, they get put into regular, production cards. Just like Formula 1, they've

    • And I know there are hardcore PC gamers out there, but there are only a handful of companies even bothering to push the high-end graphics, so you buy a $500 video card and there are exactly ZERO games that take full advantage of it. Wait a year or so, and you may find that one or two of the few high-end PC game makers decide to throw you a bone and add support.

      This is blatantly false. What a decent ($200+) graphics card buys you today is being able to play games at a decent resolution. To play today's games at

    • But nVidia and ATI haven't realized that we passed the point of diminishing returns years ago. Mobility and battery life are what matter.

      In the context of a card designed for desktop PCs, specifically for people who play games on gigantic monitors? You can't be serious.

      And I know there are hardcore PC gamers out there, but there are only a handful of companies even bothering to push the high-end graphics, so you buy a $500 video card and there are exactly ZERO games that take full advantage of it.

      Most modern games can use the full capabilities of this card. It's designed for people who want to play the latest games at extremely high resolutions with maximum quality settings. That takes an unholy amount of processing power. It's really the only use for a card like this. Most people won't buy it.

      And as a bonus, you get SIGNIFICANTLY increased power consumption, and the video card addicts are just wasting resources so they can all whack-off to Shader 30.0 soft shadows on eyelashes.

      Nobody who buys a dual-GPU card gives a singular shit about their power

  • Point: Missed (Score:4, Interesting)

    by jrronimo ( 978486 ) on Thursday December 18, 2008 @01:04PM (#26161575)
    A while back, AMD/ATI said that they were not competing at the high end anymore: "There were also very specific admissions that AMD/ATI isn't competing at the high end with Nvidia, nor do they intend to match up to the GTX 280 with a release of their own uber-chip." source [hardwarezone.com]. So to say "ATI had to combine two cards to be on top!" kind of completely misses the point. (Emphasis added.)

    For the interested, there's a great article at anandtech [anandtech.com] talking about how the RV770 came to be pretty awesome... Really, though, it's not a super-high-end part.
    • Hey, you forgot to mention the best part: while the Nvidia card is about 15-20% faster, the ATI costs about half as much.
  • by tangent3 ( 449222 ) on Thursday December 18, 2008 @02:04PM (#26162451)

    Nvidia never lost the performance crown. AMD did not even bother to compete with Nvidia for performance at the high end.
    Read this excellent article [anandtech.com].

    What AMD did with the RV770 series was to totally pwn everything below the super high end.
    When the 4870 was released at $299, it was generally worse than the GTX 280, but it easily beat the GTX 260, which was priced at $399.
    When the 4850 was released at $199, it easily matched the 9800 GTX, which was priced at $249.

  • The real question is: Did Nvidia simply make enough of these (~30 - 200) certainly very carefully hand-picked chips to homologate them as a valid offering in claiming the top spot on the performance charts...

    ...or are they actually producing them in quantity such that anyone who wants one can buy one at the stated price?

    Personally I'm betting on the former being far more likely than the latter.
  • You guys have got it all wrong... ATI cards are all the rage... Or, I guess they were until the Radeon...
