Graphics Technology

NVIDIA Previews GF100 Features and Architecture

MojoKid writes "NVIDIA has decided to disclose more information regarding their next-generation GF100 GPU architecture today. Also known as Fermi, the GF100 GPU features 512 CUDA cores, 16 geometry units, 4 raster units, 64 texture units, 48 ROPs, and a 384-bit GDDR5 memory interface. If you're keeping count, the older GT200 features 240 CUDA cores, 32 ROPs, and 80 texture units, but the geometry and raster units, as they are implemented in GF100, are not present in the GT200 GPU. The GT200 also features a wider 512-bit memory interface, but the need for such a wide interface is somewhat negated in GF100 because it uses GDDR5 memory, which effectively offers double the bandwidth of GDDR3, clock for clock. Reportedly, the GF100 will also offer 8x the peak double-precision compute performance of its predecessor, 10x faster context switching, and new anti-aliasing modes."
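To put the narrower-bus claim in perspective, here is a rough back-of-envelope bandwidth comparison. GDDR3 transfers 2 bits per pin per base clock while GDDR5 effectively transfers 4, so a 384-bit GDDR5 bus can out-run a 512-bit GDDR3 bus at the same clock. The 1100 MHz figure below is an illustrative placeholder, not a confirmed GF100 spec:

    # Peak memory bandwidth: bus width x base clock x transfers per clock.
    def bandwidth_gbs(bus_width_bits, base_clock_mhz, transfers_per_clock):
        """Peak bandwidth in GB/s for a given memory interface."""
        return bus_width_bits / 8 * base_clock_mhz * 1e6 * transfers_per_clock / 1e9

    gt200 = bandwidth_gbs(512, 1100, 2)  # 512-bit GDDR3, GTX 280-class clock
    gf100 = bandwidth_gbs(384, 1100, 4)  # 384-bit GDDR5 at the same base clock

    print(f"512-bit GDDR3: {gt200:.0f} GB/s")  # ~141 GB/s
    print(f"384-bit GDDR5: {gf100:.0f} GB/s")  # ~211 GB/s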
  • Wait... (Score:4, Insightful)

    by sznupi ( 719324 ) on Monday January 18, 2010 @09:48AM (#30807274) Homepage

    Why more disclosure now? There doesn't seem to be any major AMD or, gasp, Intel product launch in progress...

  • Re:Wait... (Score:5, Insightful)

    by Anonymous Coward on Monday January 18, 2010 @09:52AM (#30807312)

Because I needed convincing not to buy a 5870 today.

  • Re:Wait... (Score:2, Insightful)

    by Anonymous Coward on Monday January 18, 2010 @10:08AM (#30807436)

At the end of TFA it states that the planned release date is Q1 2010, so releasing this information now is simply an attempt to capture the interest of those looking to buy now/soon ... with the hope they'll hold off on a purchase until it hits the store shelves.

  • by LordKronos ( 470910 ) on Monday January 18, 2010 @10:17AM (#30807498)

So we've had this long history of nvidia part numbers gradually increasing: 5000 series, 6000 series, etc., up until the 9000 series. At that point they needed to go to 10000, and the numbers were getting a bit unwieldy. So, understandably, they decided to restart with the GT100 series and GT200 series. Now, instead of continuing with a 300 series, we're going back to a 100. We had the GT100 series and now we get the GF100 series? And GF? Seriously? People already abbreviate GeForce as GF, so now when someone says GF we can't be sure what they're talking about. Terrible marketing decision IMHO.

  • Re:Wait... (Score:5, Insightful)

    by afidel ( 530433 ) on Monday January 18, 2010 @10:28AM (#30807596)
    Compared to the watts you would need to run a Xeon or Opteron to get the same double-precision performance, it's a huge bargain.
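
    A rough sanity check of that claim, using ballpark 2010-era figures. The GF100 TDP and the 8x-of-GT200 double-precision rate are assumptions carried over from the summary, not measured numbers:

        # Double-precision GFLOPS per watt, with illustrative numbers.
        def dp_gflops_per_watt(peak_dp_gflops, tdp_watts):
            return peak_dp_gflops / tdp_watts

        # Quad-core 3 GHz Xeon: 4 cores * 4 DP FLOPs/cycle * 3 GHz ~= 48 GFLOPS
        xeon = dp_gflops_per_watt(48, 95)
        # GT200 peaked near 78 DP GFLOPS; 8x that per NVIDIA's claim, 250 W assumed
        gf100 = dp_gflops_per_watt(8 * 78, 250)

        print(f"Xeon : {xeon:.2f} DP GFLOPS/W")   # ~0.51
        print(f"GF100: {gf100:.2f} DP GFLOPS/W")  # ~2.50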
  • wait a minute... (Score:3, Insightful)

    by buddyglass ( 925859 ) on Monday January 18, 2010 @10:33AM (#30807658)
    What happened to GDDR4?
  • Re:Wait... (Score:5, Insightful)

    by Calinous ( 985536 ) on Monday January 18, 2010 @10:41AM (#30807764)

    But most of us will compare it with the watts needed to run two high end AMD cards

  • by ZERO1ZERO ( 948669 ) on Monday January 18, 2010 @10:52AM (#30807882)
    I find your ideas intriguing and would like to subscribe to your newsletter.
  • by Colonel Korn ( 1258968 ) on Monday January 18, 2010 @11:37AM (#30808374)
    Now that graphics are largely stagnant in between console generations, the PC's graphics advantages tend to be limited to higher resolution, higher framerate, anti-aliasing, and somewhat higher texture resolution. If the huge new emphasis on tessellation in GF100 strikes a chord with developers, and especially if something like it gets into the next console generation, games may ship with much more detailed geometry which will then automatically scale to the performance of the hardware on which they're run. This would give PC graphics the additional advantage of an order-of-magnitude increase in geometry detail, which would make more of a visible difference than any of the advantages it currently has, and it would occur with virtually no extra work by developers. It would also allow performance to scale much more effectively across a wide range of PC hardware, allowing developers to simultaneously hit the casual and enthusiast markets much more effectively.
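
    The "scales automatically" idea can be sketched in a few lines. Triangle count grows roughly with the square of the tessellation factor, so an engine can pick the largest factor that fits the hardware's per-frame budget. The helper name and the budgets below are hypothetical, not any real engine's API:

        # Pick the largest power-of-two tessellation factor that fits a
        # per-frame triangle budget (hypothetical numbers throughout).
        def pick_tess_factor(tri_budget, base_mesh_tris, max_factor=64):
            factor = 1
            # Triangle count scales roughly with the square of the factor.
            while (factor * 2 <= max_factor
                   and base_mesh_tris * (factor * 2) ** 2 <= tri_budget):
                factor *= 2
            return factor

        base = 10_000                              # coarse art asset
        print(pick_tess_factor(2_000_000, base))   # console-class budget  -> 8
        print(pick_tess_factor(50_000_000, base))  # enthusiast-PC budget  -> 64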
  • Re:Wait... (Score:3, Insightful)

    by Tridus ( 79566 ) on Monday January 18, 2010 @12:23PM (#30808870) Homepage

    Yeah, seriously. The board makers don't take this problem as seriously as they should. The GTX 260 I have now barely fit in my case, and I only got that because the ATI card I wanted outright wouldn't fit.

    It doesn't matter how good the card is if nobody has a case capable of actually holding it.

"Gravitation cannot be held responsible for people falling in love." -- Albert Einstein

Working...