Zooming in on the GeForce 3
A couple more in-depth, hands-on looks at the GeForce 3 have popped up lately, including Sharky's coverage (with DX8 screenshots) and AnandTech's take. Same basic story: good card, ahead of its time, overpriced, and nothing will take advantage of its best features. I bet in 12 months we think differently.
I bet right now (Score:1)
If you owned a Mac, you could Think Different right now.
--
Anonymous cowards live for moments like these.
Re:Yawn, we need other features (Score:1)
1. Built-in analog capture
2. Built-in FireWire I/O
3. Built-in IDCT MPEG-2 decompression/compression
4. Built-in video overlay for MPEG streams
In other words, the same shit $6000 cards do, but for $300.
Re:Wonderful (Score:1)
People complain about the video card upgrades but looking back, this isn't such a bad situation. I don't buy the latest video card every time one comes out. I tend to go every other generation or so. At home I've used a Voodoo 1, a TNT 1, and GeForce 1 so far. Total money into these is around 400 bucks because I didn't buy them all hot off the production line.
Now compare that to the pre-hardware acceleration days. If some game was pushing your computer beyond its limits you had to buy a new CPU. A single CPU would easily cost more than the three upgrades I've done on video cards. So I like the situation we are in these days, hardware-wise.
As for gameplay value - well that's something people determine pretty easily. Better games will rise to the top - whether they have the latest greatest effects or not. Fallout and Halflife were both great sellers. Fallout was 2D, nothing flashy, while Halflife was using the Quake engine. Yet they both sold well because they were fun to play, which is the bottom line.
Why I don't buy Nvidia. (Score:1)
I wish the drivers were open, or even just more open -- but that's not the main reason I don't eagerly follow Nvidia video cards (and instead look more toward Matrox and ATI). FWIW, the kernel driver is fairly actively hacked on, so when kernel module changes break it, an (unofficial) fix is usually posted within a few hours or days.
The reason I don't go for Nvidia cards is the (often forgotten) issue of 2D video quality. I've used TNTs and GeForces, but gee, text on the desktop is just not very sharp. There's a world of difference in image quality between Matrox-, ATI-, and Nvidia-based cards. Nvidia's licensees tend to cheap out on the filters on the card. I hope this situation changes, but since they're all competing on price -- well, I'm not holding my breath. I play the occasional game, but I have to stare at the desktop for long periods... GeForces just don't have the sharpness of their competitors.
FWIW, I found that Number Nine (alas, now defunct) made the sharpest, fastest 2D graphics and text displays. The Imagine128 and Revolution series (T2R-based; the chips they made themselves) were legendary in that regard.
Re:Colour depth. (Score:1)
Once you calculate the exact amounts of light [using that vast dynamic range] you then post-process the entire scene with an iris simulator which brings the pixels back into the dynamic range of the monitor ("retina").
The whole complex process is required when you want to properly simulate scenes which are mostly dark but have extremely bright parts - e.g. a prison escape with spotlights.
Some of the work at SIGGRAPH 96 in this area was awesome.
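Roughly, the "iris simulator" step looks something like this in code. This is only a toy global exposure operator I made up for illustration (function names are mine, and it's not a model of any particular SIGGRAPH paper):

```c
#include <math.h>
#include <stdio.h>

/* Map a high-dynamic-range luminance (arbitrary linear units) into the
 * 0..255 range of a monitor, with an "iris" that adapts to the average
 * scene brightness.  Toy operator for illustration only. */
static unsigned char tone_map(double luminance, double scene_average)
{
    double exposure = 0.18 / scene_average;    /* open/close the iris      */
    double scaled   = luminance * exposure;
    double mapped   = scaled / (1.0 + scaled); /* compress into 0..1       */
    return (unsigned char)(pow(mapped, 1.0 / 2.2) * 255.0 + 0.5); /* gamma */
}

int main(void)
{
    /* A dark prison yard (~0.01) with a spotlight (~1000) in the frame. */
    double scene[] = { 0.01, 0.02, 0.05, 1000.0 };
    double avg = 0.0;
    for (int i = 0; i < 4; i++) avg += scene[i] / 4.0;
    for (int i = 0; i < 4; i++)
        printf("%g -> %d\n", scene[i], tone_map(scene[i], avg));
    return 0;
}
```

The dark pixels crush toward black and the spotlight stays bright without clipping, which is the whole point of doing the lighting math in a wide range first.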
Yawn (Score:2)
Face it, graphics hardware has hit the same plateau that CPU power has--for the overwhelming majority of users what we have is fine, and there's no reason to upgrade.
But... (Score:4)
Baz
Colour depth. (Score:2)
While this is true, most cards add a few extra bits per colour component internally to keep roundoff errors in blending from causing visible artifacts. 30 bits was standard for that, if I recall correctly (more if you count the alpha channel).
128 would just be silly, of course...
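A quick numeric illustration of why those extra internal bits matter (a toy calculation, not how any particular chip implements its blender):

```c
#include <stdio.h>

/* Blend four 50%-opacity layers over a background: once truncating to
 * whole 8-bit levels after every blend, once carrying extra precision
 * (plain floating point here) until the end. */
int main(void)
{
    double layers[4] = { 200.0, 180.0, 40.0, 90.0 };
    double exact = 10.0;     /* background, kept at high precision       */
    int    quant = 10;       /* same background, truncated at each step  */

    for (int i = 0; i < 4; i++) {
        exact = 0.5 * exact + 0.5 * layers[i];
        quant = (int)(0.5 * quant + 0.5 * layers[i]);  /* 8-bit roundoff */
    }
    /* Even a one- or two-level error like this shows up as visible
     * banding in a smooth gradient. */
    printf("high precision: %.2f   truncated per step: %d\n", exact, quant);
    return 0;
}
```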
Re:Colour depth. (Score:2)
Doubtful, IMO. Human colour vision isn't infinitely acute. Black-and-white (luminance) vision is a bit more acute than colour vision, needing somewhere in the range of 10-12 bits before banding stops being visible, but that's about it.
I'm skeptical of conditions changing this very much. How would you modify the environment to give us the ability to see finer variations in colour?
Still silly, if you can see the entire image. (Score:2)
However, with all current display technologies, you can see all of the screen at one time. Thus, varying the iris size just brightens or darkens the scene by a constant factor. Within the scene as a whole, the dynamic range you can perceive is just the dynamic range of the retina - which is quite low, as you point out.
Thus, being able to accurately represent sunlit and moonlit scenes on the same monitor would be useless; your iris would respond to the average brightness of the screen, which would make the sunlit scene look washed out and the moonlit scene look black.
If you're looking at the images one at a time, you might as well just normalize both to the same 256-level brightness range.
In 12 months... (Score:1)
Cheers,
Tim
in 12 months.... (Score:1)
Oooh! (Score:2)
(Send all "yeah, but it won't have games" replies to
Re:Wonderful (Score:1)
In short, if you want games with plot and gameplay, perhaps these Uber-cards aren't the enemy after all.
Re:kick ass card (Score:1)
Of course, when they say 128-bit colour they mean internally, and trust me, it's a good thing: three or four 4-channel (RGBA) polygons layered on top of each other can introduce a fair amount of rounding error in a 36-bit colour space before it gets trimmed down to 24bpp for output.
And before you jump up about being in 32-bit colour mode: you're really in 24-bit colour mode, with an extra byte per pixel to speed up access.
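In C terms, a "32-bit" framebuffer pixel is really just this (a sketch; the byte order shown is the usual little-endian XRGB8888 layout and will differ on other platforms):

```c
#include <stdint.h>
#include <stdio.h>

/* Only 24 bits are colour; the fourth byte is padding (or alpha) that
 * keeps each pixel dword-aligned so it moves in one 32-bit access. */
typedef union {
    uint32_t dword;                    /* what the card actually moves   */
    struct { uint8_t b, g, r, x; } c;  /* typical little-endian layout   */
} Pixel32;

int main(void)
{
    Pixel32 p = { .dword = 0x00FF8040u };  /* pad=00 r=FF g=80 b=40 */
    printf("r=%02X g=%02X b=%02X pad=%02X\n", p.c.r, p.c.g, p.c.b, p.c.x);
    return 0;
}
```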
--Gfunk
Re:Who's the top manufacturer? (Score:1)
They only vary in the heatsink/fan used and the quality of the RAM (some kinds of RAM can overclock higher than others, so that's something to keep a lookout for). All the cards run at the same default MHz for memory and core speed, and the RAMDACs are identical because they're built into NVidia's chip.
In the end, buy the cheapest one you can find because they're all the same.
In 12 months, there probably WONT be anything... (Score:2)
So, yeah, in 12 months, we'll probably *still* be saying "nothing takes advantage of it".
Re:This brings up a funny story (Score:1)
Re:Colour depth. (Score:2)
Re:Order one now... (Score:1)
-PovRayMan
----------
Re:Yawn (Score:2)
By the blockiness of the models (low poly), and stuttering framerate with all the "high-detail" options turned on.
> Face it, graphics hardware has hit the same plateau that CPU power has
Oh please.
CPUs are still dog-slow. Realistic cloth movement sucks the CPU dry on the physics calcs alone.
The GeForce 3 can render "Luxo Jr." in real time (check the GF3 preview at MacExpo) and that's nowhere even _close_ to photo-realism. We are *finally* *starting* to see the ability to render heat waves and to render hair properly. (Along with the end of billboarded trees - thank god.)
Graphics (and video cards) have quite a way to go still.
Re:Still silly, if you can see the entire image. (Score:1)
For folks who aren't used to working in dB: 80 dB means a factor-of-ten-to-the-eighth change - about 100 million to one. For example, full sunlight is about 1000 watts per square meter, yet you can see quite well in a room lit by a single 1-watt lightbulb, which puts about ten milliwatts per square meter on the walls (assuming a fairly large room).
Way back when, researchers found that for a change to be perceptible, it had to be about a one percent change. In other words, if I give you a one-kilogram mass to hold and then hand you a mass of 1001 grams, you probably won't be able to tell the difference, but if I give you a 1010-gram mass, you will. Researchers at Bell Labs decided to make a unit to measure this and called it the Bel (modest, aren't they?). However, a Bel was defined as log10(change), and that was just a little bit coarse, so in true metric fashion they used tenths of a Bel, or deci-bels (which is why decibel is properly written dB, not db).
Now, graphics cards use a linear mapping of values: 128 is twice as bright as 64 and a little more than half as bright as 255. If they made the colour scale logarithmic (10 is ten times as bright as 1, 20 is ten times as bright as 10, etc.), they would do much better at matching the eye's response. However, that makes the math of working with the data MUCH more complicated, so they just add bits.
(In fact, this sort of mapping is actually used in the telephone system: the 8 bits used to represent the voice signal are mapped to a logarithmic response, to better cover the dynamic range from a whisper to a shout.)
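Here's what that looks like numerically - a simplified, continuous version of the mu-law curve (mu = 255, the constant used in North American telephony), not the exact segmented G.711 encoding:

```c
#include <math.h>
#include <stdio.h>

#define MU 255.0

/* Linear 8-bit quantisation vs. a mu-law style logarithmic mapping.
 * Quiet signals get far more code levels under the log mapping. */
static int linear_code(double x) { return (int)(x * 255.0 + 0.5); }
static int mulaw_code(double x)
{
    return (int)(255.0 * log(1.0 + MU * x) / log(1.0 + MU) + 0.5);
}

int main(void)
{
    double samples[] = { 0.001, 0.01, 0.1, 0.5, 1.0 };  /* whisper..shout */
    for (int i = 0; i < 5; i++)
        printf("x=%.3f  linear=%3d  mu-law=%3d\n",
               samples[i], linear_code(samples[i]), mulaw_code(samples[i]));
    return 0;
}
```

A whisper-level signal (0.001) gets code 0 under the linear mapping but around code 10 under the log mapping, which is exactly the "just add bits vs. change the curve" trade-off described above.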
Re:I will be happy when it's out (Score:2)
Mark Duell
Wonderful (Score:1)
-Chris
Re:Yawn (Score:1)
Game developers.
In 18-24 months, the GF3 level of "computing" performance will be more accessible and commonplace. Incidentally, this is about the amount of time it takes to build a game... and developers like to be able to debug on REAL hardware. :)
-Chris [nondot.org]
Re:Wonderful (Score:4)
Actually by making it pretty easy to put "flashy" effects in, this card might allow the developer to spend more time working on the other parts of the game.
Perhaps, but not likely. Almost every effect is a special case hack that is designed to look good, at the expense of generality. This means that the engines for games (Quake 2/3 is a perfect example) become very specialized. If you want to base a second game off of the same engine, you get exactly the same set of effects as before, because the design is not very extensible.
My reference to simpler games of yesteryear goes farther, though... try to get a group of people together for a netparty, for example... how many people do you think will be able to run stuff that requires a GeForce 1, much less a brand-new GeForce 3? The second question is: why would you want to?
It seems like the technology trend is to push the envelope to the next step (GF3 is a logical progression from GF2), and then software has to play catchup (no, not ketchup :). It seems that, especially with the first few games for a technology, games focus on doing things because they "can", not because they "should".
Instead of designing games with a clear focus, plot, and motivation, games tend to get diluted into flashy, silly things. Sure they look cool and have nifty features, but are they really FUN to play? Where is the replay value and interactivity with other human players?
The problem with, for example, vertex shaders and the other DX8 features is that they are not a simple extension of a uniform graphics architecture.
Instead of being able to design a simple graphics engine that supports a variety of features and then enhance it as the game progresses, they have to "target" a "technology" (for example DX7 or DX8). Of course, the two are essentially completely different APIs, and they are mutually exclusive. This means that you get to rewrite your engine every time you want to support a new "technology".
In a lot of ways, I'm really happy that cards (specifically the Radeon and GF3) are moving towards programmability and higher quality visuals. It's quite a different race than pushing fillrates and making up stats on textured polys per second. Maybe when things are fully generalized, we won't NEED an API like DX that gets constantly mutated, torturing developers...
Oh wait, that's called OpenGL. (sorry, couldn't help it. ;)
-Chris [nondot.org]
Hard to tell much from screenshots (Score:1)
SGI Iris Crimson 1993 (Score:3)
With 128bpp, they may be talking about different buffers.
The high-end SGI workstations in '93 had an effective 140bpp video memory. (I thought I recalled 142, but this is from my hazy recollections.)
8+8+8bpp RGB front buffer.
8+8+8bpp RGB back buffer.
32bpp Z front buffer.
32bpp Z back buffer.
24bpp Windowing buffer.
4 bpp (rle compressed) per-pixel video mode selector.
I'd like to see more of that (plus today's dedicated memory for texture, vertex, transform, lightmaps, etc.)
As for color bit depth, 8bps (RGB 24bpp) is the most you'll see on most CRTs. You won't see 32bpp onscreen, usually the other 8 bits are just dword alignment for speed or an alpha channel for video source weaving.
However, the human eye is quite capable of seeing more colors in other situations; Hollywood typically does 16bps (RGB 48bpp) on their special effects, because they don't like to see 1"x1" jaggies or dithering on the 30' screen.
Re:kick ass card (Score:1)
Now all they need to make realistic looking graphics is 128bits for color and a few orders of magnitude of the speed improvement.
You're kidding me, right? Why do you need 128 bits of color? Your eye can't really discern above 24 bits of color. Anything more than that would just be a waste. As for speed, you can't really tell above 70 fps, either, which is what some of the better setups are hitting nowadays. (You can get up to about 125 or so, but you're not going to be able to discern a real difference.)
The GeForce 3 allows for essentially infinitely many different effects, since its "vertex shader" unit (it just applies whatever transforms you want to vertices) is fully programmable. You can do fisheye lenses, motion blur, lens flare, whatever with the unit. That's where the beauty of it is, since nVidia knows that pushing the speed and color envelope isn't really important anymore. Now their focus is on getting more effects out there for programmers to play with.
It should be rather interesting, as hopefully programmers won't have to worry about optimizing graphics code so much, sacrificing time from AI, physics, STORY, etc. Maybe this whole post sounds like some pro-nVidia propaganda, and I hope it hasn't come off that way, but I've just been rather impressed with the *reviews* of the card so far (in other words, no, I haven't seen it in action yet).
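To make "whatever transforms on vertices" concrete, here's a CPU-side sketch of the kind of per-vertex math such a programmable unit could run - a crude fisheye-style warp I made up for illustration, not actual shader code:

```c
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

/* Bend eye-space vertices outward based on their distance from the
 * view axis.  Purely illustrative; a real vertex program would do the
 * equivalent math per vertex on the card. */
static Vec3 fisheye_warp(Vec3 v, float strength)
{
    float r    = sqrtf(v.x * v.x + v.y * v.y);
    float bend = 1.0f + strength * r;       /* push the edges of the view out */
    Vec3 out   = { v.x * bend, v.y * bend, v.z };
    return out;
}

int main(void)
{
    Vec3 v = { 0.5f, 0.25f, -2.0f };
    Vec3 w = fisheye_warp(v, 0.3f);
    printf("(%.2f, %.2f, %.2f) -> (%.2f, %.2f, %.2f)\n",
           v.x, v.y, v.z, w.x, w.y, w.z);
    return 0;
}
```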
Re:I bet right now (Score:1)
Re:GeForce advantages? (Score:4)
Take OpenGL as an example. The "T" (in "T&L") functions are glRotatef, glTranslatef, glScalef, and glMultMatrixf. Before there was hardware T&L, people didn't use these functions often - they wrote their own. And it was amazing that even very simple, unoptimized matrix transform code performed better than these gl functions most of the time.
What hardware T&L does (in terms of OpenGL) is accelerate these functions in hardware - formerly, the OpenGL library did an inefficient software transform. Now the driver just blasts the arguments to some chip registers and lets the chip do the rest. And it is fast, not only because it reduces bandwidth use (intra-chip communication is fast), but also because it frees up CPU cycles for other uses, which inevitably has a positive impact on performance.
So, in short, if developers ditch their own matrix libraries and use the ones provided by the graphics API, they're already making use of hardware T&L. And, yes, unfortunately, hardware T&L is only about frame rate - there's no advantage other than frame rate that hardware T&L provides.
Just remember - ALL effects are achievable in software. The more you offload from the CPU to the GPU, the more CPU cycles you can save for physics, AI, and graphic effects that the hardware doesn't do yet. So even hardware that "only" increases framerate sounds good enough to me.
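A minimal sketch of what "use the API's matrices" means in practice (GLUT is used here just to get a window and context; link against GL and GLUT or your platform's equivalent):

```c
#include <GL/glut.h>

/* Let the GL driver (and, on T&L hardware, the GPU) do the transform
 * instead of multiplying vertices by a hand-rolled CPU matrix library. */
static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -0.5f);     /* the "T" in T&L                 */
    glRotatef(30.0f, 0.0f, 0.0f, 1.0f);  /* done in hardware on a T&L card */
    glScalef(0.5f, 0.5f, 0.5f);

    glBegin(GL_TRIANGLES);
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.0f,  0.5f);
    glEnd();
    glFlush();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("hardware T&L via the GL matrix stack");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```

The same code runs everywhere; whether the matrix work lands on the CPU or the GPU depends entirely on the driver and the card underneath.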
Re:so im looking at the screenshot (Score:1)
(if you don't catch the meaning of that joke, well... then you didn't catch the initial joke, either)
Troll??? (Score:1)
Re:kick ass card (Score:1)
The output would still be 32-bit. It would only be processed at 128-bit internally.
------
Re:Optimized for DX8 (Score:2)
Those are just a few of the new extensions NVidia has added to OpenGL to support the new features of the GeForce 3. They are every bit as good as the DX8 stuff.
------
Re:Hard to tell much from screenshots (Score:4)
Another thing to note is that a lot of the really cool effects this thing can do are only noticeable in moving images. Bump mapping, for example, can be faked using regular texturing if nothing is moving. You really only see the difference when you see it in action.
------
Re:Oooh! (Score:1)
So even if the Indrema has a 750MHz CPU, on paper the Xbox is still a stronger platform (in situations where the GPU's extra pipes help).
Also, the Xbox has developers and financial backing. I think it's a neat idea, but don't hold your breath for Indrema.
Re:kick ass card (Score:1)
Re:I will be happy when it's out (Score:2)
One of the reasons for the high price of the Ultras is the high cost of components: they use very fast (and expensive) memory chips, and this is the main reason for the performance increase. If memory serves, the memory on the GeForce Ultra is faster than on the GeForce3 - the latter uses new technology to make better use of available bandwidth instead of increasing it.
So until the prices on this kind of memory decrease quite a bit, I don't see the Ultras coming down.
blah blah.. bitch bitch bitch.... (Score:1)
Re:The GeForce3 is great, but... (Score:1)
--
Re:Why I dont buy Nvidia. (Score:1)
Incidentally, the Xpert98 is the card I recommend to people wanting to build cheap x86 Unix-workalike boxes, because in addition to its crisp 2D and low price, it also boasts excellent support in XFree86.
ATI, despite their failings, will probably always lead nVidia in this area. I got another SUN workstation a few weeks ago, and its Riva chipset is very bright and crisp. (Offtopic: In fact, possibly too bright... when viewing large fields of white, I seem to hear a high-pitched whirring/humming coming from the case. It's not the monitor or other interference, and it's bugging me. It's not the annoying sound so much as I'm worried that there's something wrong with the chip.)
IIRC Number Nine cards had legendary 2D and were very popular for x86 UNIX and Unix-workalike boxes, but I think they went out of business a year or two ago.
--
moderation (Score:1)
Re:NV20 at GDC (Score:2)
That used to be a typical price for a "professional" 3D board, with hardware geometry transformation. I used it to run Softimage.
NV20 at GDC (Score:4)
Yes, it cost $500, but I paid over $2000 for a far-inferior 3D board just a few years ago.
It looks like Hercules will be the lead board vendor on this round. Creative is dropping out of graphics boards.
Carmack has written that all developers should get one of these boards as soon as possible. Gamers may want to wait.
Re:kick ass card (Score:2)
GeForce advantages? (Score:1)
Re:GeForce advantages? (Score:1)
Optimized for DX8 (Score:1)
I think this is a big mistake in the creation of the GeF3.
Re:Yawn (Score:1)
The fact that these babies can pull off this full-screen anti-aliased stuff with multiple transparencies and all sorts of multi-textured goodness at a high framerate sets them apart from anything available today.
Graphics cards will keep getting better, always getting faster in order to achieve more detail, hence realism.
Re:so im looking at the screenshot (Score:1)
pass me some of what you're smoking
Re:Underwhelming (Score:1)
Well, then you're probably just not into that sort of thing. Nothing wrong with that, but it doesn't make sense for you to go around disparaging it just because it doesn't float your boat.
assembler (Score:1)
Re:Optimized for DX8 (Score:1)
I assume you're talking about programs using the video card... I don't know about you, but 100% of the programs I run, Windows and Linux, all use OpenGL.
Who's the top manufacturer? (Score:1)
Re:I will be happy when it's out (Score:1)
Expect to see the likes of ELSA, Guillemot/Hercules and VisionTek (nothing from Creative this time around) fight it out for GeForce3 shelf space sometime in the coming month. Not only will they be releasing a new product line but also all of these companies have told us at one time or another that their current range of Geforce2's will have prices cut (but probably not the Ultras). This is a good thing..
I tend to agree; these are the top end, in a different category than the Ultras. It's not just an upgrade (as the Ultra 2 was to the GeForce). More of an evolution. Prices drop when products are either not selling well or they are old technology (and a replacement is out). I don't believe that to be the case with the Ultras.
I showed you my two cents...
Re:Who's the top manufacturer? (Score:1)
Also, I'm a Radeon person. I get my drivers from ATI. Do Nvidia drivers come from Nvidia or the board manufacturer? Does it matter? Is Nvidia still committed to Linux as it was in the past? Or have things rolled into XFree86 as the Radeon drivers have?
In 12 Months (Score:1)
I bet in 12 months we think differently.
....
In 12 months we would have a GeForce 5 (Nvidia releases a new card, either speed-bumped or a new architecture, every six months).
Not about Frame Rates anymore... (Score:3)
Gameplay would be next. AI would improve tremendously, storylines would improve, though sometimes you just don't care a f@#$ about the story and just wanna jump in and let that chaingun rip.
My only worry is that DirectX 8 is fast becoming the API of choice among developers (except Carmack, who claims he'll use only OpenGL till kingdom come). And considering that Nvidia now has an unfair advantage over other cards, since they developed DirectX 8 along with M$, well, my guess is as good as yours.
However, a couple of weeks back there was much stirring among gamers when the Kyro II kind of beat the Ultra with its tile-based rendering. I would welcome someone like that anytime.
Re:Optimized for DX8 (Score:1)
Re:But... (Score:1)
http://www.nvidia.com/Demos
They are as real as it gets...Impressive.
Re:Who's the top manufacturer? (Score:1)
The only tweaking is done by the board manufacturer, and then only in the drivers.
Do Nvidia drivers come from Nvidia or the board manufacturer? Does it matter?
If you want to use the manufacturer's drivers, feel free. I personally use Nvidia's reference drivers (Detonator). But some manufacturers' drivers enable overclocking, etc., without changing much of anything except the card's name in Device Manager.
In short, it really doesn't matter who you buy it from. Buy the cheapest card with the best software bundle (my GeForce 2MX came with WinDVD2000 and Soldier of Fortune...it was in an Asus box). :)
the unbeliever
aim:dasubergeek99
yahoo!:blackrose91
ICQ:1741281
Re:Who's the top manufacturer? (Score:1)
My fiance has a generic GeForce 2MX, and I have an Asus GeForce 2MX; I can't tell a difference between the two. We have identical monitors at the same resolution and refresh rate, and as far as I can see, there is no difference. Then again, we are both running the Detonator drivers.
the unbeliever
aim:dasubergeek99
yahoo!:blackrose91
ICQ:1741281
not even close (Score:1)
But as far as games are concerned, programmers are constantly pushing the limits. Why? Because you still can't render a photo-realistic 3D scene. People still have angular heads and rectangular hands. Trust me, the video card market is still on the steep part of the parabola.
Re:Why 3dfx is no longer around (Score:1)
Re:Why 3dfx is no longer around (Score:1)
Re:But... (Score:1)
-fragbait
Price my biggest concern (Score:1)
In 12 months the price will be what, half what it is now? Still too high for this potential customer.
If current games are any indication of what to expect I doubt I'll be buying any for some time:
Counter-Strike (a Half-Life modification) is filling up with egotistical people who think a year is a long time to have been playing the game.
Quake3 was a *big* disappointment. Especially with CTF, where there is no grapple and there are bottomless pits that do instant kills, stuff you didn't see in Quake2 CTF, probably the biggest mod for the Quake trilogy. Rocket arena is impressive, but doesn't have quite the same loyal following.
The Baldur's Gate series and the various MMORPGs just use graphics as eye candy to lure 30-something men and women who are into fantasizing and playing roles; the meat of the games often misses the point of strategic play, and they are a big downer for those of us hoping for more.
Sacrifice is, IMHO, the best game to have come out to take advantage of what some of the new graphics cards can do. Unfortunately for the rest of us, developers like Shiny are dropping the PC platform and now going back to console programming.
I hate to say it, but it looks like the future of gaming is going to rest back in the hands of consoles. I guess they finally figured out broadband was the device to attract users who used to use consoles before computers.
Only problem is that I don't want to go back to consoles. Guess I'm just screwed. Hell awaits me.
This brings up a funny story (Score:2)
Everyone glanced at us when I loudly asked, "You don't have a Voodoo5 do you?!?".
He sheepishly packed up his computer and left.
Re:Optimized for DX8 (Score:1)
Yeah we will (Score:1)
By the way, you also claim that all the "hardware features of nvidia's last few generations" - which I'll assume means hardware-assisted T&L - have "no software taking full advantage of it." This is just outright false. Every game made using the Quake 3 engine uses T&L (like Alice, FAKK, and that new Star Trek shooter), as do Giants: Citizen Kabuto and Sacrifice - all examples of current 3D titles using T&L. You may say "they're not taking FULL advantage! They're not using hardware mipmapping!" or some other obscure function, but we all know that T&L was the GeForce's flagship technical feature, and developers have rallied behind it.
What am I trying to say? Don't dismiss nVidia's new innovations as things developers are not going to support, because there's a damned good chance they will.
kick ass card (Score:1)
Re:kick ass card (Score:1)
Why 3dfx is no longer around (Score:1)
Re:kick ass card (Score:1)
so im looking at the screenshot (Score:2)
Ummm... (Score:1)
you never thought it was possible
It keeps getting
BIGGER
--
Re:Hard to tell much from screenshots (Score:1)
Still nothing is going to compare to my Riva TNT
Re:Geek Guide to getting women (Score:1)
No one cares about your lame ass site..
We Geeks don't even like girls, we have an attachment to our puters
Re:Face reality: (Score:1)
Re:Optimized for DX8 (Score:2)
It's somewhat wrongheaded to bash NVidia for having good DirectX support, as they, more than any other card manufacturer, have pushed OpenGL as a viable API for gaming under Windows. A lot of current NVidia staff are old SGI employees who helped develop OpenGL and related libs (GLUT, etc.).
And while I have no first-hand knowledge of such things, from all outward reports it's more the other way around: Microsoft checks up on the big manufacturers of video cards (hint: there aren't many left in the high-end consumer 3D space) to see what features they are looking to implement, and works those into DirectX... So if anything, DX8 is optimized for the GF3 rather than the other way around.
In any case this makes good business sense for Microsoft and NVidia because of the Xbox; both stand to make a lot of money if it is as successful as they hope, and the chipset is essentially a 'better GF3' (primarily because it can throw off some of the shackles of standard PC architecture and backwards compatibility).
Cannot wait to see this with XFree86!! (Score:1)
Underwhelming (Score:2)
Tom's Hardware Guide (Score:1)
Hardware testers: Please show the MINIMUM fps! (Score:1)
In all tests of new graphics cards, the average FPS is shown. And with new cards it's usually something like 165 fps. But who cares? The only interesting figure is how much time there is between frames when there are a large number of enemies on the screen doing something that takes calculation time, such as shooting. So instead of giving average fps, make a demo that does some beyond-the-usual action and show us the WORST fps. THAT's what matters. You don't want that dropping below 40 or so if you play something like Counter-Strike.
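To show why the average hides the problem, here's a rough sketch of the numbers reviewers could report instead (the frame times are made up for illustration):

```c
#include <stdio.h>

/* Given per-frame render times in milliseconds, report the average fps
 * (what reviews usually quote) and the worst-case fps (what you feel
 * in the middle of a firefight). */
int main(void)
{
    double frame_ms[] = { 6, 6, 7, 6, 40, 38, 7, 6, 6, 35 }; /* spikes = big fight */
    int n = sizeof(frame_ms) / sizeof(frame_ms[0]);

    double total = 0.0, worst = 0.0;
    for (int i = 0; i < n; i++) {
        total += frame_ms[i];
        if (frame_ms[i] > worst) worst = frame_ms[i];
    }
    printf("average fps: %.1f\n", 1000.0 * n / total); /* looks great     */
    printf("minimum fps: %.1f\n", 1000.0 / worst);     /* what you notice */
    return 0;
}
```

With those numbers the card averages about 64 fps but drops to 25 fps exactly when the action gets heavy, which is the figure that actually matters.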
I will be happy when it's out (Score:1)
Can it do calculus? (Score:1)
Order one later... (Score:1)
--
My, what an insightful yawn! (Score:1)
--
Quake III (Score:1)