Graphics Technology

NVIDIA Previews GF100 Features and Architecture

MojoKid writes "NVIDIA has decided to disclose more information regarding their next generation GF100 GPU architecture today. Also known as Fermi, the GF100 GPU features 512 CUDA cores, 16 geometry units, 4 raster units, 64 texture units, 48 ROPs, and a 384-bit GDDR5 memory interface. If you're keeping count, the older GT200 features 240 CUDA cores, 42 ROPs, and 60 texture units, but the geometry and raster units, as they are implemented in GF100, are not present in the GT200 GPU. The GT200 also features a wider 512-bit memory interface, but the need for such a wide interface is somewhat negated in GF100 due to the fact that it uses GDDR5 memory which effectively offers double the bandwidth of GDDR3, clock for clock. Reportedly, the GF100 will also offer 8x the peak double-precision compute performance as its predecessor, 10x faster context switching, and new anti-aliasing modes."
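As a rough sanity check on that memory claim (illustrative only, assuming equal base memory clocks and GDDR5 moving twice the data per clock of GDDR3):

\[
\frac{BW_{\mathrm{GF100}}}{BW_{\mathrm{GT200}}} \approx \frac{384\ \text{bits} \times 2}{512\ \text{bits} \times 1} = 1.5
\]

So despite the narrower bus, GF100's memory interface should come out ahead, clock for clock.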
  • Wait... (Score:4, Insightful)

    by sznupi ( 719324 ) on Monday January 18, 2010 @08:48AM (#30807274) Homepage

    Why more disclosure now? There doesn't seem to be any major AMD or, gasp, Intel product launch in progress...

    • Re:Wait... (Score:5, Insightful)

      by Anonymous Coward on Monday January 18, 2010 @08:52AM (#30807312)

      Because I needed convincing not to buy a 5890 today.

      • Re: (Score:3, Interesting)

        by ThePhilips ( 752041 )

        I do not need convincing: the 5870 (and, likely, the rumored 5890) simply do not fit my PC case.

        The question left open, though, is whether GF100-based cards would. Or rather: would a GF100 card, together with the PSU it would likely require, fit in my case?

        • Re: (Score:3, Interesting)

          by Kjella ( 173770 )

          Considering the rumor is that it'll pull 280W, almost as much as the 5970, my guess would be no. I settled for the 5850, though; plenty of oomph for my gaming needs.

          • Agreed. I ordered the 5850 just last night. The bundled deal at Newegg comes with a free 600 watt Thermaltake power supply (limited time of course: http://www.newegg.com/Product/Product.aspx?Item=N82E16814102857 [newegg.com]). I'll be gaming at 1920x1080, so this should be quite enough for me for a fair while (though I wish I could've justified a 2 GB card).

            Normally I wouldn't do $300 for a vid card. I've paid the $600 premium in the past and that made me realize that the $150 - $200 cards do just fine. Last night'

        • Re: (Score:3, Interesting)

          by L4t3r4lu5 ( 1216702 )
          No, the question is:

          Is the price / performance difference worth the investment in the pricier card, or does opting for the cheaper option allow me to buy a case which will fit the card for a net saving?

          If GF100 price > 5870 + New case, you have an easy decision to make.
        • Re: (Score:3, Informative)

          Those who saw the actual card at CES report it's ~10.5 inches long, similar to the 5870.

          I would wait for a GF100 or 5870 refresh first. AMD is rumored to be working on a 28nm refresh. GlobalFoundries has been showing off wafers fabbed on a 28 nm process [overclock.net], and rumors indicate that we'll be seeing 28nm GPUs by mid-year. I would imagine that nvidia is planning a 28nm refresh of GF100 not long after. Smaller GPU = less
          • by sznupi ( 719324 )

            28nm... that should be interesting for Nvidia, considering TSMC's 40nm process is still painful for them; and I don't see Nvidia going eagerly to GlobalFoundries.

          • 28nm isn't on the roadmap for Global Foundries for production until 2011 at the earliest.

        • Re: (Score:3, Insightful)

          by Tridus ( 79566 )

          Yeah, seriously. The board makers don't take this problem as seriously as they should. The GTX 260 I have now barely fit in my case, and I only got that because the ATI card I wanted outright wouldn't fit.

          It doesn't matter how good the card is if nobody has a case capable of actually holding it.

      • Re: (Score:3, Interesting)

        by w0mprat ( 1317953 )
        With AMD's 5000 series now coming down in price, by the time NVIDIA gets its GF100 series shipping it won't be *that* much faster, since it's a similar generational leap on a similar process size, but it will be priced high until a significant amount of stock hits the channel... oh, and to sting the early-adopter fanboys.

        Just like with the success of AMD's 4800 cards, many people will go for the significantly better bang for the buck in the 5800 line. It looks like AMD is in a good position.
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      At the end of TFA it states that the planned release date is Q1 2010, so releasing this information now is simply an attempt to capture the interest of those looking to buy now or soon... with the hope that they'll hold off on a purchase until it hits the store shelves.

    • Re:Wait... (Score:5, Informative)

      by galaad2 ( 847861 ) on Monday January 18, 2010 @09:10AM (#30807456) Homepage Journal

      280W power drain, 550mm^2 chip size => no thanks, I'll pass.

      http://www.semiaccurate.com/2010/01/17/nvidia-gf100-takes-280w-and-unmanufacturable [semiaccurate.com]

      • Re:Wait... (Score:5, Insightful)

        by afidel ( 530433 ) on Monday January 18, 2010 @09:28AM (#30807596)
        Compared to the watts you would need to run a Xeon or Opteron to get the same double-precision performance, it's a huge bargain.
        • Re:Wait... (Score:5, Insightful)

          by Calinous ( 985536 ) on Monday January 18, 2010 @09:41AM (#30807764)

          But most of us will compare it with the watts needed to run two high-end AMD cards.

        • Re:Wait... (Score:4, Informative)

          by EmagGeek ( 574360 ) on Monday January 18, 2010 @09:47AM (#30807820) Journal

          I think he's talking about dissipation of such a large amount of power in such a small package size.

          The die is nearly a square inch, and 280W is a tremendous amount of power to dissipate through it.

          Cooling these things is going to be an issue for sure.

          • Re: (Score:3, Informative)

            by afidel ( 530433 )
            Prescott dissipated 105W from only 112 mm^2, about twice the power density of this chip, so I don't think cooling will be a major problem. (Rough numbers below.)
              Prescott dissipated 105W from only 112 mm^2, about twice the power density of this chip, so I don't think cooling will be a major problem.

              That's why it was called Preshot and why Intel gave up on NetBurst. It was a bitch to cool down, and - unlike GPUs - it had the benefit of bigger heatsinks and larger fans.

            • also made your PC sound like a vacuum cleaner.
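            For scale, a quick power-density comparison using only the figures quoted in this thread (the 280W / 550mm^2 numbers are rumors, so treat this as rough arithmetic):

            \[
            \text{Prescott: } \frac{105\ \mathrm{W}}{112\ \mathrm{mm}^2} \approx 0.94\ \mathrm{W/mm^2}
            \qquad
            \text{GF100 (rumored): } \frac{280\ \mathrm{W}}{550\ \mathrm{mm}^2} \approx 0.51\ \mathrm{W/mm^2}
            \]

            So Prescott ran at roughly 1.8x the power density of the rumored GF100, even though GF100's total board power would be far higher.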

      • No one knows what the power draw actually is for Fermi except the engineers at Nvidia; 280W is highly suspect.

        Also, SemiAccurate? Come on, all that site does is bash Nvidia because of the writer's grudge against them.

        Remember when G80 was being released and Charlie said it was "too hot, too slow, too late"? Yeah, that turned out real well, didn't it...

    • The engineers had a cool idea and asked the sales guys. And the sales guys said, "Dude, that's fucking awesome! What are you waiting for? Stick it on the web!"

  • For making the better GPU? :P

    • Well... Intel's past anti-competitive practices were never really a secret. Dell in the past was constantly bragging about the deals they were getting by remaining loyal to Intel.

      Also, AMD/ATI make the better GPUs at the moment - as far as consumers are concerned. Buying a GT200 card now is pointless, as it is a well-known fact that nVidia literally abandons support for the previous GPU generation when they release a new one. Waiting for GF100-based cards just to find that one has to sell an arm and a leg to afford one (especially

      • Re: (Score:3, Informative)

        Buying a GT200 card now is pointless, as it is a well-known fact that nVidia literally abandons support for the previous GPU generation when they release a new one.

        Such bullshit. For example, the latest GeForce 4 drivers date to Nov 2006, which is when the GeForce 8 series came out - 4 years after the initial GeForce 4 card. Even the GeForce 6 has Win7 drivers that came out barely 2 months ago, and that's 5 series back from the current 200 series.

        • As a proud ex-owner of a TNT2 and a GF7800 I can personally attest: yes, the drivers exist.

          But all the problems that pretty much everybody experienced, complained about, and reported to nVidia never got fixed in the drivers for the older cards. (*) And those were ordinary stability, screen-corruption, and game-performance problems.

          I'm sorry, but I have to conclude that they do in fact abandon support. OK, that's my personal experience. But with two f***ing cards at different times I got pretty much the same experience w

          • I'm sorry, but I have to conclude that they do in fact abandon support.

            Which is an entirely different claim. You said they abandon a product right after the next generation, which is completely unadulterated bullshit. Of course they will eventually abandon support for old products, because they get no revenue from supporting them, and most of the older cards they drop support for have a tiny market share.

          • You think ATI is any better? They abandon support after 3 years too, to the point where you'd better not even bother trying to install newer drivers on your system.

            • What he's complaining about is what ALL hardware companies do. No piece of hardware has indefinite support unless you're paying them a bunch of money to do so.

        • Seriously, nobody should complain about the lack of Win7 driver support for cards that came out before XP SP1.
  • I understand that most of the time the people who write about computers aren't exactly literature graduates, but wtf, at least write correctly. Use a spell checker or have someone proofread it.
  • Anandtech (Score:5, Informative)

    by SpeedyDX ( 1014595 ) <speedyphoenix&gmail,com> on Monday January 18, 2010 @09:02AM (#30807398)

    Anandtech also has an article up about the GF100. They generally have very well written, in-depth articles: http://www.anandtech.com/video/showdoc.aspx?i=3721 [anandtech.com]

  • From the article:
    "The GPU will also be execute C++ code."

    They integrate a C++ interpreter (or JIT compiler) into their graphics chip?

    • Re: (Score:3, Informative)

      by TeXMaster ( 593524 )

      From the article: "The GPU will also be execute C++ code."

      They integrate a C++ interpreter (or JIT compiler) into their graphics chip?

      That's a misinterpretation of part of the NVIDIA CUDA propaganda: better C++ support in NVCC.

    • Without more details, I suspect they've just made a more capable language that lets you write your shaders and stuff in something that looks just like C++.

      • No, the GPU can execute native C++ code now. It's one of the big new features of Fermi for GPGPU.

        • Although I may not have worded it very well, that's pretty much what I meant. I was thinking that others might be under the incorrect impression that it could offload code bound for the CPU and run it on the GPU instead.

        • by Suiggy ( 1544213 )
          You have been misled. Their CUDA compiler toolchain uses LLVM as the backend to compile C++ code into CUDA machine code. There is no such thing as "native C++ code," but there is "standard C++ code," which is what nVidia's marketing goons really mean. (A minimal sketch of what that looks like follows at the end of this thread.)
          • You're just picking at semantics here.

            • by Suiggy ( 1544213 )
              Well, I've already had to correct a number of other people on a couple of other forums I frequent who misunderstood. They thought it would be possible to run compiled windows/linux x86/x86-64 executables that were originally C++, without any translation whatsoever. They thought C++ uses a virtual machine somewhat akin to the JVM or .NET, and that such executables were not native machine code. And we're not even getting into platform specific libraries. In other words, they thought it would be a complete rep
              • Yes, well, most forums tend to be filled with what I'd call Intelligent Idiots. Especially ones that focus on gaming.

                You're right however, I did state it wrong.

    • by dskzero ( 960168 )
      That's rather ambiguous.
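    To make the distinction in this thread concrete, here is a minimal, hypothetical CUDA C++ sketch (my own illustration, not NVIDIA's example): nvcc compiles ordinary-looking C++, templates included, into device machine code for the kernel, while everything else remains plain host code. It does not mean the GPU runs pre-built x86 C++ executables.

    ```cpp
    // Hypothetical sketch: "C++ on the GPU" in the CUDA sense.
    // The templated __global__ function below is compiled by nvcc into
    // device machine code; the rest is ordinary host-side C++.
    #include <cstdio>
    #include <cuda_runtime.h>

    template <typename T>
    __global__ void saxpy(int n, T a, const T* x, T* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        // Host buffers
        float* hx = new float[n];
        float* hy = new float[n];
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        // Device buffers
        float *dx = 0, *dy = 0;
        cudaMalloc((void**)&dx, bytes);
        cudaMalloc((void**)&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        // Launch the templated kernel on the GPU: y = 3*x + y
        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

        printf("y[0] = %f\n", hy[0]);  // expect 5.0

        cudaFree(dx); cudaFree(dy);
        delete[] hx; delete[] hy;
        return 0;
    }
    ```

    Built with something like `nvcc saxpy.cu -o saxpy`; the point is that the same C++ source is split by the toolchain into host and device parts, rather than any existing CPU binary being executed on the GPU.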
  • GPU computing is where it's at for scientific computation. Folks are moving their codes to GPUs now, betting the double-precision performance will get there soon. An 8x increase in compute performance looks promising, assuming it translates into real-world gains.

  • by Immostlyharmless ( 1311531 ) on Monday January 18, 2010 @09:10AM (#30807458)
    Why is it that they would stick with a narrower 256-bit memory bus (aside from the fact that, clock for clock, it's really the same speed as a 512-bit bus of slower memory)? Is it just because the rest of the card is the bottleneck? I can't recall another card where, all other things being equal, a wider bus didn't result in a sizable increase in performance. The wide bus was obviously implemented in the previous generation of cards, so why not stick with it, use the GDDR5, and end up with a card that's even faster?

    Can anyone explain to me why they would do this (or not do this, depending on how you look at it)?
    • Re: (Score:1, Informative)

      by Anonymous Coward

      A wide memory bus is expensive in terms of card real estate (wider bus = more traces), which increases cost. It also increases the amount of logic in the GPU and requires more memory chips for the same amount of memory.

      • Re: (Score:3, Interesting)

        by afidel ( 530433 )
        This monster is already 550 mm^2; I don't think the couple million transistors needed for a 512-bit bus would be noticed, nor would the cost of the pins to connect to the outside. The more likely explanation is that they aren't memory starved, and that routing the extra high-precision lanes on the board was either too hard or was going to require more layers in the PCB, which would add significant cost.
        • They _think_ that the card won't be memory starved at the usual loads. More memory lanes also mean more complexity in assuring the same "distance" (propagation time) for all the memory chips.

          People already think the newest AMD card (the 5970) is huge; I wonder how big Fermi cards will be, and how much bigger they would have to be if they needed even more memory chips and memory lanes.

        • nor would the cost of the pins to connect to the outside.

          Are you kidding? The pin driver pads take up more die real-estate than anything else (and they suck up huge amounts of power as well). Even on now-ancient early 80's ICs, the pads were gargantuan compared to any other logic. E.g. a logic module vs. a pad was a huge difference... like looking at a satellite map of a football field (pad) with a car parked beside it (logic module). These days, that's only gotten orders of magnitude worse as pin drivers haven't shrunk much at all when compared to current lo

          • by afidel ( 530433 )
            Huh? The outer left and right rows in this [techtree.com] picture are the memory controllers; that's what, 5-10% of the total die area? Adding a third more pins would add a couple of percent to the overall cost of the chip. On lower-end parts, where there are half as many logic units, it would be more significant, but there's a reason lower-end parts have less memory bandwidth (and they need less, since they can process less per clock).
            • Increasing the bus width has the effect of increasing the perimeter of the chip, which drives up cost because of the increased die area.

      • by Creepy ( 93888 )

        Not to mention there is certainly not a 1:1 gain in speed from doubling the bandwidth. Double bandwidth is nice for, say, copying blocks of memory, but it doesn't help when performing operations, and sometimes the added latencies can make it underperform slower memory - early DDR3, for instance, had CAS latencies double or more those of DDR2 without a huge gain in bandwidth (800 to 1066) and could often be beaten by much cheaper DDR2. Without a more comprehensive analysis it is hard to say which is faster.
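        To put the latency point in absolute terms (a rough sketch with typical launch-era timings, not exact part specs): CAS latency in nanoseconds is the cycle count divided by the I/O clock, so

        \[
        t_{\mathrm{CAS}} = \frac{\mathrm{CL}}{f_{\mathrm{I/O}}}:\qquad
        \text{DDR2-800 CL4} \approx \frac{4}{400\ \mathrm{MHz}} = 10\ \mathrm{ns},
        \qquad
        \text{DDR3-1066 CL8} \approx \frac{8}{533\ \mathrm{MHz}} \approx 15\ \mathrm{ns}
        \]

        which is how early DDR3 could lose to cheaper DDR2 in latency-sensitive workloads despite the higher transfer rate.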

    • Costs more (Score:4, Informative)

      by Sycraft-fu ( 314770 ) on Monday January 18, 2010 @10:04AM (#30808018)

      The wider your memory bus, the greater the cost; the reason is that it is implemented as more parallel controllers, so you want the smallest one that gets the job done. Also, faster memory gets you nothing if the GPU isn't fast enough to access it. Memory bandwidth and GPU speed are very intertwined. Have memory slower than your GPU needs, and it'll be bottlenecking the GPU. However have it faster, and you gain nothing while increasing cost. So the idea is to get it right at the level that the GPU can make full use of it, but not be slowed down.

      Apparently, 256-bit GDDR5 is enough.

      • by hattig ( 47930 )

        Apparently, 256-bit GDDR5 is enough.

        (figures from http://www.anandtech.com/video/showdoc.aspx?i=3721&p=2 [anandtech.com])

        GF100 has a 384-bit memory bus, likely with a 4000MHz+ data rate. HD5870 has a 4800MHz data rate, so let's assume the same.

        The GTX 285 had a 512-bit memory bus with a 2484MHz data rate.

        So the bandwidth is (384/512) * (4800/2484) = ~1.45x that of the GTX 285.
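        Spelling that out as absolute bandwidth, with the same assumed data rates:

        \[
        \text{GF100: } \frac{384}{8}\ \text{bytes} \times 4.8\ \mathrm{GT/s} \approx 230\ \mathrm{GB/s}
        \qquad
        \text{GTX 285: } \frac{512}{8}\ \text{bytes} \times 2.484\ \mathrm{GT/s} \approx 159\ \mathrm{GB/s}
        \]

        i.e. roughly a 45% increase despite the narrower bus (again assuming GF100 actually ships with 4800MHz-effective GDDR5).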

      • Have memory slower than your GPU needs, and it'll be bottlenecking the GPU. However have it faster, and you gain nothing while increasing cost. So the idea is to get it right at the level that the GPU can make full use of it, but not be slowed down.

        My old 7900GS was the first card where I felt like the memory wasn't being fully utilized by the GPU.

        It had a near negligible performance impact running 4xAA on most games.

        My next card (8800GS) had a higher framerate, but also a bigger hit from 4xAA.

  • by LordKronos ( 470910 ) on Monday January 18, 2010 @09:17AM (#30807498)

    So we've had this long history of nvidia part numbers gradually increasing: 5000 series, 6000 series, etc., up until the 9000 series. At that point they needed to go to 10000, and the numbers were getting a bit unwieldy, so understandably they decided to restart with the GT100 and GT200 series. But now, instead of continuing with a 300 series, we're going back to 100. So we had the GT100 series and now we get the GF100 series? And GF? Seriously? People already abbreviate GeForce as GF, so now when someone says GF we can't be sure what they're talking about. Terrible marketing decision, IMHO.

  • wait a minute... (Score:3, Insightful)

    by buddyglass ( 925859 ) on Monday January 18, 2010 @09:33AM (#30807658)
    What happened to GDDR4?
  • by Colonel Korn ( 1258968 ) on Monday January 18, 2010 @10:37AM (#30808374)
    Now that graphics are largely stagnant in between console generations, the PC's graphics advantages tend to be limited to higher resolution, higher framerate, anti-aliasing, and somewhat higher texture resolution. If the huge new emphasis on tessellation in GF100 strikes a chord with developers, and especially if something like it gets into the next console generation, games may ship with much more detailed geometry which will then automatically scale to the performance of the hardware they're run on. This would give PC graphics the additional advantage of an order-of-magnitude increase in geometry detail, which would make more of a visible difference than any of the advantages it currently has, and it would come with virtually no extra work by developers. It would also allow performance to scale much more effectively across a wide range of PC hardware, letting developers hit the casual and enthusiast markets simultaneously much more effectively.
    • by crc79 ( 1167587 )

      I remember seeing something like this in the old game "Sacrifice". I wonder if their method was similar...

    • Now that graphics are largely stagnant in between console generations

      I'm afraid that you've lost me. XBox to XBox 360, PS2 to PS3, both represent substantial leaps in graphics performance. In the XBox/PS2 generation, game teams clearly had to fight to allocate polygon budgets well, and it was quite visible in the end result. That's not so much the case in current generation consoles. It's also telling that transitions between in-game scenes and pre-rendered content aren't nearly as jarringly obvious as they used to be. And let's not forget the higher resolutions that cu

      • I think he meant that until the next generation of consoles launches, graphics within a generation are fairly stagnant. This has pretty much always been the case, but these days it seems to be affecting the PC games market more than it used to.
    • I'd hardly consider graphics stagnant.

      A perfect example is the difference between Mirror's Edge on the Xbox and PC. There is a lot more trash floating about, there are a lot more physics involved with glass being shot out and blinds being affected by gunfire and wind in the PC version. Trust me when I say that graphics advantages are still on the PC side, and my 5770 can't handle Mirror's Edge (a game from over a year ago) at 1080p on my home theatre. Now it's by no means a top end card, but it is relat

    • by mjwx ( 966435 ) on Monday January 18, 2010 @10:46PM (#30815892)
      The PC market isn't going anywhere. Not even EA is willing to abandon it, despite the amount of whinging they do.

      Now that graphics are largely stagnant in between console generations

      Graphical hardware power is a problem on consoles, not the PC. Despite their much-touted power, the PS3 and Xbox 360 cannot do FSAA at 1080p. Most developers have resorted to software solutions (hacks, for all intents and purposes) to get rid of the jaggedness.

      Most games made for consoles will run the same, if not better, on a low-end PC (assuming they don't do a crappy job on the port, but Xbox-to-PC is pretty hard to screw up these days). The problem with PC gaming is that it is not utilised to its fullest extent. Most games are console ports, or PC games bought up at about 60% completion and then consolised.

      the PC's graphics advantages tend to be limited to higher resolution

      PC graphics at 1280x1024 and upwards tend to look pretty good. Compare that to the Xbox (720p) or PS3 (1080p), which still look pretty bad at those resolutions. Check out screenshots of Fallout 3 or Far Cry 2; the PC version always looks better no matter the resolution. According to the latest Steam survey, 1280x1024 is still the most popular resolution, with 1680x1050 second.

      anti-aliasing, and somewhat higher texture resolution

      If you have the power, why not use it?

      If the huge new emphasis on tessellation in GF100 strikes a chord with developers

      Don't get me wrong, though: progress and new ideas are a good thing, but the PC gaming market is far from in trouble.

  • What video card do people recommend you fit in your PC nowadays?
    a) on a budget (say £50)
    b) average (say £100)
    c) with a bigger budget (say £250)

    Bonus points if you can recommend a good (fanless) silent video card....

    • Re: (Score:3, Informative)

      by TheKidWho ( 705796 )

      a) 5670 or GT240 if you can find one cheap enough... However, depending on how British pounds convert, the true budget card is a GT 220 or a 4670.
      b) 5770 or GTX260 216 core
      c) Radeon 5870 or 5970 if you can afford it.

    • Re: (Score:1, Informative)

      by Anonymous Coward
      What's that funny squiggly L-shaped thing where the dollar sign is supposed to be? :-P

      I'm running a single Radeon 4850 and have no problem with it whatsoever.

      A friend of mine is running two GeForce 260 cards in SLI mode which make his system operate at roughly the same temperature as the surface of the sun.

      We both play the same modern first person shooter games. If you bring up the numbers, he might get 80fps compared to my 65fps. However I honestly cannot notice any difference.

      The real differ
    • A: Nothing. Save money and get a B-type card later; these cards are not good value. Alternatively, try the used market for a cheap GeForce 8800GT or Radeon HD4850, which will serve you pretty well.

      B: Radeon HD4870. Great card, extremely good value. [overclockers.co.uk]

      C: Radeon HD5850 kicks ass. Diminishing value for money here though. [overclockers.co.uk]
  • from GF100 to MRS100.
