NVIDIA Previews GF100 Features and Architecture 101
MojoKid writes "NVIDIA has decided to disclose more information regarding their next-generation GF100 GPU architecture today. Also known as Fermi, the GF100 GPU features 512 CUDA cores, 16 geometry units, 4 raster units, 64 texture units, 48 ROPs, and a 384-bit GDDR5 memory interface. If you're keeping count, the older GT200 features 240 CUDA cores, 32 ROPs, and 80 texture units, but the geometry and raster units, as they are implemented in GF100, are not present in the GT200 GPU. The GT200 also features a wider 512-bit memory interface, but the need for such a wide interface is somewhat negated in GF100 due to the fact that it uses GDDR5 memory, which effectively offers double the bandwidth of GDDR3, clock for clock. Reportedly, the GF100 will also offer 8x the peak double-precision compute performance of its predecessor, 10x faster context switching, and new anti-aliasing modes."
Wait... (Score:4, Insightful)
Why more disclosure now? There doesn't seem to be any major AMD or, gasp, Intel product launch in progress...
Re:Wait... (Score:5, Insightful)
Because I needed convincing not to buy a 5890 today.
Re: (Score:3, Interesting)
I do not need convincing: the 5870 (and likely the rumored 5890) simply does not fit my PC case.
The open question is whether GF100-based cards would. Or rather: would a GF100 card, together with the PSU it would likely require, fit in my case?
Re: (Score:3, Interesting)
Considering the rumor is that it'll pull 280W, almost as much as the 5970, my guess would be no. I settled for the 5850 though; plenty of oomph for my gaming needs.
Re: (Score:1)
Agreed. I ordered the 5850 just last night. The bundled deal at Newegg comes with a free 600 watt Thermaltake power supply (limited time of course: http://www.newegg.com/Product/Product.aspx?Item=N82E16814102857 [newegg.com]). I'll be gaming at 1920x1080, so this should be quite enough for me for a fair while (though I wish I could've justified a 2 GB card).
Normally I wouldn't do $300 for a vid card. I've paid the $600 premium in the past and that made me realize that the $150 - $200 cards do just fine. Last night'
Re: (Score:1)
Re: (Score:3, Interesting)
Is the price / performance difference worth the investment in the pricier card, or does opting for the cheaper option allow me to buy a case which will fit the card for a net saving?
If GF100 price > 5870 + New case, you have an easy decision to make.
Re: (Score:3, Informative)
I would wait for a GF100 or 5870 refresh first. AMD is rumored to be working on a 28nm refresh that should be available by mid-year. GlobalFoundries has been showing off wafers fabbed on a 28nm process [overclock.net], and rumors indicate that we'll be seeing 28nm GPUs by then. I would imagine that nvidia is planning a 28nm refresh of GF100 not long after. Smaller GPU = less
Re: (Score:2)
28nm... that should be interesting for Nvidia, considering TSMC's 40nm process is still painful for them, and I don't see NV going eagerly to GlobalFoundries.
Re: (Score:2)
28nm isn't on the roadmap for Global Foundries for production until 2011 at the earliest.
Re: (Score:3, Insightful)
Yeah, seriously. The board makers don't take this problem as seriously as they should. The GTX 260 I have now barely fit in my case, and I only got that because the ATI card I wanted outright wouldn't fit.
It doesn't matter how good the card is if nobody has a case capable of actually holding it.
Re: (Score:3, Interesting)
Just like the success of AMD's 4800 cards, many people will go for the significantly better bang for the buck in the 5800 line. It looks like AMD is in a good position.
Re: (Score:3, Informative)
Re: (Score:2, Insightful)
At the end of TFA it states that the planned release date is Q1 2010, so releasing this information now is simply an attempt to capture the interest of those looking to buy now/soon ... with the hope they'll hold off on a purchase until it hits the store shelves.
Re:Wait... (Score:5, Informative)
280W power drain, 550mm^2 chip size => no thanks, i'll pass.
http://www.semiaccurate.com/2010/01/17/nvidia-gf100-takes-280w-and-unmanufacturable [semiaccurate.com]
Re:Wait... (Score:5, Insightful)
Re:Wait... (Score:5, Insightful)
But most of us will compare it with the watts needed to run two high-end AMD cards.
Re:Wait... (Score:4, Informative)
I think he's talking about dissipation of such a large amount of power in such a small package size.
The die is nearly a square inch in area, and 280W is a tremendous amount of power to dissipate through it.
Cooling these things is going to be an issue for sure.
Re: (Score:3, Informative)
Re: (Score:2)
Re: (Score:2)
Prescott dissipated 105W from only 112 mm^2, or about twice the power density of this chip, so I don't think cooling will be a major problem.
That's why it was called Preshot and why Intel gave up on Netburst. It was a bitch to cool down, and - unlike GPUs - it had the benefit of being cooled by bigger heatsinks and larger fans.
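For the record, the power densities implied by the figures being thrown around (the GF100 numbers are rumoured, not confirmed):

    Prescott: 105 W / 112 mm^2 ~= 0.94 W/mm^2
    GF100:    280 W / 550 mm^2 ~= 0.51 W/mm^2

Roughly twice the power per square millimetre for Prescott, as claimed above, but 280 W in total is still a lot of heat for a graphics card cooler to move.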
Prescott... (Score:2)
also made your PC sound like a vacuum cleaner.
Re: (Score:2)
No one knows what the power draw actually is for Fermi except for engineers at Nvidia, 280W is highly suspect.
Also, SemiAccurate? Come on, all that site does is bash Nvidia because of the writer's grudge against them.
Remember when the G80 was being released and Charlie said it was "too hot, too slow, too late"? Yeah, that turned out real well, didn't it...
Re: (Score:1)
The engineers had a cool idea and they asked the sales guys. And the sales guys said "Dude, that's fucking awesome! What are you waiting for? Stick it on the web!"
Re: (Score:2, Funny)
Re: (Score:2)
It changed back to the "But will it run Linux?" meme because, with the current hardware roadmap from the hardware manufacturers, nothing will be able to run Crysis until Duke Nukem hits store shelves.
Re: (Score:1)
Should AMD sue them too? (Score:2)
For making the better GPU? :P
Re: (Score:2)
Well... Intel's past anti-competitive practices were never really a secret. Dell in the past was constantly bragging about the deals they were getting by remaining loyal to Intel.
Also, AMD/ATI make the better GPUs at the moment, as far as consumers are concerned. Buying a GT200 card now is pointless, as it is a well-known fact that nVidia literally abandons support for the previous GPU generation when they release a new one. Waiting for GF100-based cards just to find that one has to sell an arm and a leg to afford one (especially
Re: (Score:2)
THE HORROR!!! THE HORROR!!!!
The problem was that Intel treated different OEMs differently. There were many other vendors who were capable of selling in volumes, yet Intel dealt only with selected OEMs.
And that is discriminatory, unfair and often illegal.
P.S. IANAL
Re: (Score:3, Informative)
Buying a GT200 card now is pointless, as it is a well-known fact that nVidia literally abandons support for the previous GPU generation when they release a new one.
Such bullshit. For example, the latest GeForce 4 drivers date to Nov 2006, which was when the GeForce 8 series came out, 4 years after the initial GeForce 4 card. Even the GeForce 6 has Win7 drivers that came out barely 2 months ago, and that's 5 series back from the current 200 series.
Re: (Score:2)
As a proud ex-owner of a TNT2 and a GF7800 I can personally attest: yes, the drivers exist.
But all those driver problems that pretty much everybody experienced, complained about, and reported to nVidia never got fixed for the older cards. (*) And those were ordinary stability, screen corruption, and game performance problems.
I'm sorry but I have to conclude that they do in fact abandon support. OK, that's my personal experience. But with two f***ing cards at different times I got pretty much the same experience w
Re: (Score:1)
I'm sorry but I have to conclude that they do in fact abandon support.
Which is an entirely different claim. You said they abandon a product right after the next generation, which is completely unadulterated bullshit. Of course they will eventually abandon support for old products, because they get no revenue from continuing it and most of the older cards they drop support for have a tiny market share.
Re: (Score:2)
You think ATI is any better? They abandon support after 3 years too, to the point where you'd better not even bother trying to install newer drivers on your system.
Re: (Score:1)
What he's complaining about is what ALL hardware companies do. No piece of hardware has indefinite support unless you're paying them a bunch of money to do so.
Re: (Score:1)
Re: (Score:1)
Especially when one of the cards he's whining about getting no support anymore is a fucking 11-year-old TNT2.
Wow, that article is terribly written... (Score:2, Informative)
Anandtech (Score:5, Informative)
Anandtech also has an article up about the GF100. They generally have very well written, in-depth articles: http://www.anandtech.com/video/showdoc.aspx?i=3721 [anandtech.com]
Re:Anandtech (Score:5, Interesting)
"The GPU will also be execute C++ code." (Score:1)
From the article:
"The GPU will also be execute C++ code."
They integrate a C++ interpreter (or JIT compiler) into their graphics chip?
Re: (Score:3, Informative)
From the article: "The GPU will also be execute C++ code."
They integrate a C++ interpreter (or JIT compiler) into their graphics chip?
That's a misinterpretation of part of the NVIDIA CUDA propaganda stuff: better C++ support in NVCC.
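For what it's worth, here's a minimal sketch of what "C++ on the GPU" means in practice with CUDA: you write CUDA C++ (templates, classes, and so on) and nvcc compiles the device-side parts for the GPU; it is not arbitrary host C++ being offloaded automatically. (Illustrative example only; the names are made up, not from the article.)

    #include <cstdio>
    #include <cuda_runtime.h>

    // A templated kernel: the kind of C++ feature nvcc compiles to device
    // code. Still CUDA C++, not ordinary host C++ run unchanged on the card.
    template <typename T>
    __global__ void scale(T *data, T factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;
    }

    int main()
    {
        const int n = 1024;
        float *d_buf = NULL;
        cudaMalloc((void **)&d_buf, n * sizeof(float));   // device buffer
        cudaMemset(d_buf, 0, n * sizeof(float));

        // Instantiate the template for float and launch it on the GPU.
        scale<float><<<(n + 255) / 256, 256>>>(d_buf, 2.0f, n);
        cudaDeviceSynchronize();

        printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));
        cudaFree(d_buf);
        return 0;
    }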
Re: (Score:2)
Without more details, I suspect they've just made a more capable language that lets you write your shaders and stuff in something that looks just like C++.
Re: (Score:2)
No, the GPU can execute native C++ code now. It's one of the big new features of Fermi for GPGPU.
Re: (Score:2)
Although I may not have worded it very well, that's pretty much what I meant. I was thinking that others might be under the incorrect impression that it could offload code bound for the CPU and run it on the GPU instead.
Re: (Score:2)
Re: (Score:2)
You're picking at semantics here.
Re: (Score:2)
Re: (Score:2)
Yes, well, most forums tend to be filled with what I've coined "Intelligent Idiots." Especially ones that focus on gaming.
You're right, however; I did state it wrong.
Re: (Score:1)
double-precision (Score:1)
is where it's at for scientific computation. Folks are moving their codes to GPUs now, betting that double-precision performance will get there soon. An 8x increase in compute performance looks promising, assuming it translates into real-world gains.
Can someone who is more knowledgeable tell me... (Score:3, Interesting)
Can anyone explain to me why they would do this (or not do this, depending on how you look at it)?
Re: (Score:1, Informative)
A wide memory bus is expensive in terms of card real estate (wider bus = more lines), which increases cost. It also increases the amount of logic in the GPU and requires more memory chips for the same amount of memory.
Re: (Score:3, Interesting)
Re: (Score:2)
They _think_ that the card won't be memory-starved at the usual loads. More memory lanes also means higher complexity in assuring the same "distance" (propagation time) to all the memory chips.
People already think that the newest AMD card (the 5970?) is huge; I wonder how big Fermi cards will be, and how much bigger they would have to be if they needed even more memory chips and memory lanes.
Re: (Score:2)
nor would the cost of the pins to connect to the outside.
Are you kidding? The pin driver pads take up more die real-estate than anything else (and they suck up huge amounts of power as well). Even on now-ancient early 80's ICs, the pads were gargantuan compared to any other logic. E.g. a logic module vs. a pad was a huge difference... like looking at a satellite map of a football field (pad) with a car parked beside it (logic module). These days, that's only gotten orders of magnitude worse as pin drivers haven't shrunk much at all when compared to current lo
Re: (Score:2)
Re: (Score:2)
Increasing the bus size has the effect of increasing the perimeter of the chip, which drives up costs because of the increased die area.
Re: (Score:2)
Not to mention there is certainly not a 1:1 gain in speed from doubling the bandwidth. Double the bandwidth is nice for, say, copying blocks of memory, but it doesn't help for performing operations, and sometimes added latencies can make it underperform slower memory. Early DDR3, for instance, had CAS latencies double or more those of DDR2 without a huge gain in bandwidth (800 to 1066) and could often be beaten by much cheaper DDR2. Without a more comprehensive analysis it is hard to say which is faster.
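To put rough numbers on that (illustrative timings, not exact launch specs): CAS latency is counted in bus clock cycles, so the absolute delay is cycles divided by the clock. Taking, say, DDR2-800 at CL5 versus a cheap early DDR3-1066 module at CL9:

    DDR2-800,  CL5: 5 cycles / 400 MHz  = 12.5 ns
    DDR3-1066, CL9: 9 cycles / 533 MHz ~= 16.9 ns

So despite the higher transfer rate, the DDR3 part can take noticeably longer to start returning data.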
Costs more (Score:4, Informative)
The wider your memory bus, the greater the cost. The reason is that it is implemented as more parallel controllers, so you want the smallest bus that gets the job done. Also, faster memory gets you nothing if the GPU isn't fast enough to use it; memory bandwidth and GPU speed are very intertwined. Have memory slower than your GPU needs, and it'll bottleneck the GPU. Have it faster, however, and you gain nothing while increasing cost. So the idea is to get it right at the level where the GPU can make full use of it, but not be slowed down.
Apparently, 256-bit GDDR5 is enough.
Re: (Score:1)
Apparently, 256-bit GDDR5 is enough.
(figures from http://www.anandtech.com/video/showdoc.aspx?i=3721&p=2 [anandtech.com])
GF100 has a 384-bit memory bus, likely with a 4000MHz+ data rate. HD5870 has a 4800MHz data rate, so let's assume the same.
The GTX 285 had a 512-bit memory bus with a 2484MHz data rate.
So the bandwidth works out to (384/512) * (4800/2484) ~= 1.45x higher.
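Spelled out in absolute terms (using the data rates assumed above, with the GF100 figure being an assumption rather than a confirmed spec):

    GTX 285: 512 bits x 2484 MT/s / 8 bits per byte ~= 159 GB/s
    GF100:   384 bits x 4800 MT/s / 8 bits per byte ~= 230 GB/s

230 / 159 ~= 1.45, so the narrower bus still comes out well ahead thanks to GDDR5's data rate.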
Re: (Score:2)
Have memory slower than your GPU needs, and it'll bottleneck the GPU. Have it faster, however, and you gain nothing while increasing cost. So the idea is to get it right at the level where the GPU can make full use of it, but not be slowed down.
My old 7900GS was the first card where I felt like the memory wasn't being fully utilized by the GPU.
It had a near negligible performance impact running 4xAA on most games.
My next card (8800GS) had a higher framerate, but also a bigger hit from 4xAA.
What's with the terrible naming (Score:5, Insightful)
So we've had this long history of nvidia part numbers gradually increasing: 5000 series, 6000 series, etc., up until the 9000 series. At that point they needed to go to 10000, and the numbers were getting a bit unwieldy. So, understandably, they decided to restart with the GT100 series and GT200 series. So now, instead of continuing with a 300 series, we're going back to a 100. So we had the GT100 series and now we get the GF100 series? And GF? Seriously? People already abbreviate GeForce as GF, so now when someone says GF we can't be sure what they are talking about. Terrible marketing decision IMHO.
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:3, Informative)
GF100 is the name of the chip. The cards will be called the GT300 series.
Re:What's with the terrible naming (Score:5, Funny)
GF100 is the name of the chip. The cards will be called the GT300 series.
Great! That's not confusing at all.
wait a minute... (Score:3, Insightful)
Re:wait a minute... (Score:4, Funny)
Was renamed GDDR5...
Only joking
Re: (Score:2)
Re: (Score:2)
The sad thing is, the cheapo non-brand razor I used this morning had 7 blades... And a strip...
Re: (Score:2)
Re: (Score:2)
Because in just a few years reality overtook the exaggeration in an Onion article.
Insufficient Hyperbole (Score:2)
I was disappointed by that article as well. It was obvious even then that they simply weren't exaggerating enough.
Re: (Score:2)
Re: (Score:1)
It was bundled with Leisure Suit Larry 4.
Tessellation could rescue PC gaming (Score:5, Insightful)
Re: (Score:1)
I remember seeing something like this in the old game "Sacrifice". I wonder if their method was similar...
Re: (Score:2)
Now that graphics are largely stagnant in between console generations
I'm afraid that you've lost me. XBox to XBox 360, PS2 to PS3, both represent substantial leaps in graphics performance. In the XBox/PS2 generation, game teams clearly had to fight to allocate polygon budgets well, and it was quite visible in the end result. That's not so much the case in current generation consoles. It's also telling that transitions between in-game scenes and pre-rendered content aren't nearly as jarringly obvious as they used to be. And let's not forget the higher resolutions that cu
Re: (Score:1)
Re: (Score:2)
I'd hardly consider graphics stagnant.
A perfect example is the difference between Mirror's Edge on the Xbox and PC. There is a lot more trash floating about, and there is a lot more physics involved, with glass being shot out and blinds being affected by gunfire and wind in the PC version. Trust me when I say that graphics advantages are still on the PC side, and my 5770 can't handle Mirror's Edge (a game from over a year ago) at 1080p on my home theatre. Now it's by no means a top-end card, but it is relat
What do you mean by rescue? (Score:4, Informative)
Graphical hardware power is a problem on consoles, not PCs. Despite their much-touted power, the PS3 and Xbox 360 cannot do FSAA at 1080p. Most developers have resorted to software solutions (hacks, for all intents and purposes) to get rid of jaggedness.
Most games made for consoles will run the same, if not better, on a low-end PC (assuming they don't do a crappy job on the porting, but Xbox to PC is pretty hard to screw up these days). The problem with PC gaming is that the platform is not utilised to its fullest extent. Most games are console ports, or PC games bought up at about 60% completion and then consolised.
PC graphics from 1280x1024 upwards tend to look pretty good. Compare that to the Xbox 360 (720p) or PS3 (1080p), which still look pretty bad at those resolutions. Check out screenshots of Fallout 3 or Far Cry 2; the PC version always looks better no matter the resolution. According to the latest Steam survey, 1280x1024 is still the most popular resolution, with 1680x1050 second.
If you have the power, why not use it?
Don't get me wrong, however; progress and new ideas are a good thing, but the PC gaming market is far from in trouble.
Someone please tell me (Score:1)
What video card do people recommend you fit in your PC nowadays?
a) on a budget (say £50)
b) average (say £100)
c) with a bigger budget (say £250)
Bonus points if you can recommend a good (fanless) silent video card....
Re: (Score:3, Informative)
a) 5670 or GT240, if you can find one cheap enough... However, depending on how British pounds convert, the true budget card is a GT 220 or a 4670.
b) 5770 or GTX260 216 core
c) Radeon 5870 or 5970 if you can afford it.
Re: (Score:1, Informative)
I'm running a single Radeon 4850 and have no problem with it whatsoever.
A friend of mine is running two GeForce 260 cards in SLI mode which make his system operate at roughly the same temperature as the surface of the sun.
We both play the same modern first person shooter games. If you bring up the numbers, he might get 80fps compared to my 65fps. However I honestly cannot notice any difference.
The real differ
Re: (Score:2)
B: Radeon HD4870. Great card, extremely good value. [overclockers.co.uk]
C: Radeon HD5850 kicks ass. Diminishing value for money here though. [overclockers.co.uk]
Beware the future upgrade (Score:1)