Zooming in on the GeForce 3

A couple more in-depth, hands-on looks at the GeForce 3 have popped up lately, including Sharky's coverage (with DX8 screenshots) and AnandTech's take. Same basic story. Good card, ahead of its time, overpriced, nothing will take advantage of its best features. I bet in 12 months we think differently.
  • by Anonymous Coward
    "I bet in 12 months we think differently."

    If you owned a Mac, you could Think Different right now.

    --
    Anonymous cowards live for moments like these.

  • by Anonymous Coward
    That's why nvidia should invest effort in other features.

    1. built in analog capture
    2. built in firewire IO
    3. built in IDCT mpeg2 decomp/ compression
    4. built in video overlay for mpeg streams

    In other words, the same shit $6000 cards do, but for $300.

  • by Anonymous Coward
    Games have always pushed the limits of hardware, and have been responsible for a lot of the advances in hardware over the years. Sure there are always some strategy games out there like Civilization which will not need the latest greatest stuff. But other games will try new tricks, push new features, and see what kind of magic they can perform.

    People complain about the video card upgrades but looking back, this isn't such a bad situation. I don't buy the latest video card every time one comes out. I tend to go every other generation or so. At home I've used a Voodoo 1, a TNT 1, and GeForce 1 so far. Total money into these is around 400 bucks because I didn't buy them all hot off the production line.

    Now compare that to the pre-hardware acceleration days. If some game was pushing your computer beyond its limits you had to buy a new CPU. A single CPU would easily cost more than the three upgrades I've done on video cards. So I like the situation we are in these days, hardware-wise.

    As for gameplay value - well that's something people determine pretty easily. Better games will rise to the top - whether they have the latest greatest effects or not. Fallout and Halflife were both great sellers. Fallout was 2D, nothing flashy, while Halflife was using the Quake engine. Yet they both sold well because they were fun to play, which is the bottom line.
  • by Anonymous Coward

    I wish the drivers were open, or even more open -- but that's not the main reason why I don't eagerly follow Nvidia video cards (and instead look more toward Matrox and ATI). FWIW, the kernel driver is fairly actively hacked on, so when kernel module changes break it, an (unofficial) fix is usually posted within a few hours or days.

    The reason I don't go for Nvidia cards is the (often forgotten) issue of 2D video quality. I've used TNTs and GeForces, but gee, text on the desktop is just not very sharp. There's a world of difference in image quality between Matrox-, ATI- and Nvidia-based cards. Nvidia's licensees tend to cheap out on the filters on the card. I hope this situation changes, but since they're all competing on price -- well, I'm not holding my breath. I play the occasional game, but I have to stare at the desktop for long periods... GeForces just don't have the sharpness of their competitors.

    FWIW, I found that Number Nine (alas, now defunct) made the sharpest, fastest 2D graphics and text displays. The Imagine128 and Revolution series (T2R-based; the chips they made themselves) were legendary in that regard.
  • by Anonymous Coward
    128 is not silly at all. The problem is only partially accuracy - it's also dynamic range. 32-bit integers *still* don't carry enough dynamic range to properly differentiate between a moonlit and a sunlit scene. Ideally you would use 32-bit floats. That's 23 bits of precision [more than enough] but, more importantly, an exponent covering roughly 256 powers of two of dynamic range. I forget the dynamic range of the human eye, but it's vast due to the ability of the iris to allow more or less light onto the retina [the dynamic range of the retina is much smaller].

    Once you calculate the exact amounts of light [using that vast dynamic range] you then post-process the entire scene with an iris simulator which brings the pixels back into the dynamic range of the monitor ("retina").

    The whole complex process is required when you want to properly simulate scenes which are mostly dark but have extremely bright parts - e.g. a prison escape with spotlights.

    Some of the work at SIGGRAPH 96 in this area was awesome.
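
    To make the "iris simulator" step concrete, here is a minimal sketch in C of that kind of post-process: compute the scene in floating point, then squash it back into the display's 8-bit range with an exposure factor standing in for the iris. The exposure constant and the x/(1+x) roll-off curve are illustrative choices, not anything a particular card implements.

        #include <math.h>

        /* Map a high-dynamic-range value (linear floating-point radiance)
         * into the display's 0..255 range.  'exposure' plays the role of
         * the iris: large for dark (moonlit) scenes, small for bright
         * (sunlit) ones.  x/(1+x) is one simple compression curve. */
        static unsigned char tonemap_channel(float radiance, float exposure)
        {
            float x = radiance * exposure;     /* simulated iris opening */
            float y = x / (1.0f + x);          /* compress into 0..1     */
            if (y < 0.0f) y = 0.0f;
            if (y > 1.0f) y = 1.0f;
            return (unsigned char)(y * 255.0f + 0.5f);
        }

        /* Same scene, two very different radiances, one iris setting:
         *   tonemap_channel(30000.0f, 0.001f)  -> spotlight stays visible
         *   tonemap_channel(0.3f,     0.001f)  -> dark wall goes near black */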

  • by Anonymous Coward
    Honestly, except for a very small number of rabid gamers, plus professionals with very special computing needs, who cares about this stuff? For that matter, how many gamers can tell the difference between a bleeding-edge graphics card and an "old" one from 12 or 18 months back, without running a benchmark?

    Face it, graphics hardware has hit the same plateau that CPU power has--for the overwhelming majority of users what we have is fine, and there's no reason to upgrade.

  • by Bazman ( 4849 ) on Friday March 23, 2001 @04:16PM (#343813) Journal
    Who's going to be the first to say the screenshots have been edited in Photoshop? (or gimp) :)

    Baz
  • 128-bit color? At 24-bit color most people can no longer distinguish between individual changes in color; 32-bit color is quite enough.

    While this is true, most cards add a few extra bits per colour component internally to keep roundoff errors in blending from causing visible artifacts. 30 bits was standard for that, if I recall correctly (more if you count the alpha channel).

    128 would just be silly, of course...
  • 8 bits per component suffices for display on CRTs in typical office viewing environments, but change the display technology and/or the viewing conditions and you'll need more.

    Doubtful, IMO. Human colour vision isn't infinitely acute. Black-and-white vision is a bit better than colour, requiring somewhere in the range of 10-12 bits before we can't see colour bands, but that's about it.

    I'm skeptical of conditions changing this very much. How would you modify the environment to give us the ability to see finer variations in colour?
  • 128 is not silly at all. The problem is only partially accuracy - it's also dynamic range. 32-bit integers *still* don't carry enough dynamic range to properly differentiate between a moonlit and a sunlit scene. [...] I forget the dynamic range of the human eye, but it's vast due to the ability of the iris to allow more or less light onto the retina [the dynamic range of the retina is much smaller].

    However, with all current display technologies, you can see all of the screen at one time. Thus, varying the iris size just brightens or darkens the scene by a constant factor. Within the scene as a whole, the dynamic range you can perceive is just the dynamic range of the retina - which is quite low, as you point out.

    Thus, being able to accurately represent sunlit and moonlit scenes on the same monitor would be useless; your iris would respond to the average brightness of the screen, which would cause the sunlit scene to look washed-out and the moonlit scene to look black.

    If you're looking at the images one at a time, you might as well just normalize both to the same 256-level brightness range.
  • ...it won't be ahead of its time anymore :-)

    Cheers,

    Tim
  • " Good card, ahead of its time, overpriced, nothing will take advantage of its best features. I bet in 12 months we think differently."

    ...and in 12 months we'll pay about a third for it too.... think i'll wait

  • And, of course, if the Indrema comes out, it's gonna have one of these puppies in it. :)

    (Send all "yeah, but it wont have games" replies to /dev/null)
  • Actually by making it pretty easy to put "flashy" effects in, this card might allow the developer to spend more time working on the other parts of the game. The "next big effect" is required in order to get noticed, otherwise people bitch up and down about how behind the times you are. If hardware T&L and the like make it easier to get nifty effects, then we'll have more time to work on AI, gameplay, stuff like that.

    In short, if you want games with plot and gameplay, perhaps these Uber-cards aren't the enemy after all.

  • Actually, you can't even see 24-bit colour from an emissive source such as a monitor or a TV. On reflective surfaces such as paper you can see about 50 million colours, but then of course we forget about the small percentage of the female population who are essentially anti-colourblind and have 4 colour receptors in their eyes as opposed to our 3 (or 2 in most colour blindness).

    Of course, when they say 128-bit colour, they mean internally, and trust me, it's a good thing: 3 or 4 four-channel (RGBA) polygons on top of each other can introduce a fair amount of clipping in a 36-bit colour space before it gets trimmed down to 24bpp for output.

    And don't jump up about being in 32-bit colour mode; you're in 24-bit colour mode with an extra byte per pixel to speed up access.


    --Gfunk
  • All the cards are based on NVidia's reference design (with very few exceptions.. very very few) and so all the cards are basically the same.

    They only vary with the heatsink/fan used and the quality of the RAM (some kinds of RAM can overclock higher than others so that's something to keep a look out for). All the cards run at the same default MHz for memory and core speed, and the RAMDACs are identical because they're built into NVidia's chip.

    In the end, buy the cheapest one you can find because they're all the same.
  • In 12 months, there probably WON'T be anything taking full advantage of the hardware features in the GF3; sure, there will be plenty of titles using the full raw horsepower for frame and fill rate, but the advanced hardware features that are unique to the GF3 probably won't be used. Look at the unique hardware features of nvidia's last few generations - there's still no software taking full advantage of them. And with the development time required by PC games today, games in development *right now* will be lucky to see stores in 12 months, and it's unlikely the developers are going to start re-coding their engines to add more cruft in the current belt-tightening economy.

    So, yeah, in 12 months, we'll probably *still* be saying "nothing takes advantage of it".

  • The Voodoo 5 5500 only uses an internal power connector. The 6000, which had the external power brick, was never released. So your story is not very funny.
  • 8 bits per component suffices for display on CRTs in typical office viewing environments, but change the display technology and/or the viewing conditions and you'll need more.
  • Last I heard, the GF3 was coming out for the Mac at the same time as for the PC.

    -PovRayMan
    ----------
  • > For that matter, how many gamers can tell the difference between a bleeding-edge graphics card and an "old" one from 12 or 18 months back, without running a benchmark?
    By the blockiness of the models (low poly), and stuttering framerate with all the "high-detail" options turned on.

    > Face it, graphics hardware has hit the same plateau that CPU power has

    Oh please.

    CPUs are still dog-slow. Realistic cloth movement sucks the cpu dry just on the physics calcs alone.

    The GeForce 3 can render "Luxo Jr" in real time (check the GF3 preview at MacExpo), and that's nowhere even _close_ to photo-realism. We *finally* are *starting* to see the ability to render heat waves, the ability to render hair properly. (Along with the end to billboarded trees - thank god.)

    Graphics (and video cards) have quite a way to go still.
  • Most human senses run about 80 dB of dynamic range, with about a 40 dB "AGC" (automatic gain control). For example, your eye has the iris and the changing levels of visual purple to change the sensitivity, but at any given time you can see about an 80 dB range. Further, our senses are logarithmic.

    For folks who aren't used to working in dB: 80 dB means a ten-to-the-eighth change: about 100 million to one. For example, full sunlight is about 1000 watts per square meter. However, you can see quite well in a room lit by a single 1 watt lightbulb, which is putting about ten milliwatts per square meter of wall (assuming a fairly large room).

    Way back when, researchers found that, for a change to be perceptible, it had to be about a one percent change. In other words, if I give you a one kilogram mass to hold, and then hand you a mass of 1001 grams, you probably won't be able to tell the difference, but if I give you a 1010 gram mass, you will. Researchers at Bell Labs decided to make a unit to measure this, and called it the Bel (modest, aren't they?). However, a Bel was defined as log10(change), and that was just a little bit coarse, so in true metric fashion they used tenths of a Bel, or deci-bels (which is why decibel is properly written dB, not db).

    Now, in graphics cards they use a linear mapping of values: 128 is twice as bright as 64 and a little more than half as bright as 255. If they were to make the color scale logarithmic (10 is ten times as bright as 1, 20 is ten times as bright as 10, etc.), they would do much better at matching the eye's response. However, this makes the math of working with the data MUCH more complicated, so they just add bits.

    (In fact, this sort of mapping is actually used in the telephony system: the 8 bits used to represent the voice signal are mapped to a log response, to better get the dynamic range to represent a whisper to a shout.)
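
    For the curious, here is a small sketch in C of the two mappings being contrasted: a plain linear 8-bit scale versus a logarithmic (mu-law style) companding like the telephony one mentioned above. The mu = 255 constant is the standard G.711 value; the rest is just illustrative.

        #include <math.h>

        /* Linear mapping: equal code steps are equal intensity steps, so
         * most of the 256 codes are spent at the bright end of the range. */
        static double linear_decode(unsigned char code)
        {
            return code / 255.0;
        }

        #define MU 255.0   /* standard mu-law constant (G.711) */

        /* Logarithmic companding: equal code steps are (roughly) equal
         * RATIOS, matching the ~1% just-noticeable difference of the
         * senses, so 8 bits stretch over a much wider dynamic range. */
        static unsigned char mulaw_encode(double x)        /* x in 0..1 */
        {
            double y = log(1.0 + MU * x) / log(1.0 + MU);  /* 0..1 */
            return (unsigned char)(y * 255.0 + 0.5);
        }

        static double mulaw_decode(unsigned char code)
        {
            double y = code / 255.0;
            return (pow(1.0 + MU, y) - 1.0) / MU;
        }
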
  • Nah, they both use 230MHz DDR (460MHz effective) SDRAM. But, you are correct, the GeForce better utilizes the bandwidth.

    Mark Duell
  • So this means that we will see more flashy games with no plot or gameplay. Some of the simplest games are the best (starcraft & warcraft come to mind for one particular genre)... Should we really encourage game developers to spend all of their time on the "next big effect"?

    -Chris

  • Honestly, except for a very small number of rabid gamers, plus professionals with very special computing needs, who cares about this stuff?

    Game developers.

    In 18-24 months, the GF3 level of "computing" performance will be more accessible and commonplace. Incidentally, this is about the amount of time it takes to build a game... and developers like to be able to debug on REAL hardware. :)

    -Chris [nondot.org]

  • by sabre ( 79070 ) on Friday March 23, 2001 @04:38PM (#343832) Homepage

    Actually by making it pretty easy to put "flashy" effects in, this card might allow the developer to spend more time working on the other parts of the game.

    Perhaps, but not likely. Almost every effect is a special case hack that is designed to look good, at the expense of generality. This means that the engines for games (Quake 2/3 is a perfect example) become very specialized. If you want to base a second game off of the same engine, you get exactly the same set of effects as before, because the design is not very extensible.

    My reference to simpler games of yesteryear goes farther though... try to get a group of people together for a netparty, for example... how many people do you think will be able to run stuff that requires a Geforce 1, much less a brand new Geforce 3? Second question is: why would you want to?

    It seems like the technology trend is to push the envelope to the next step (GF3 is a logical progression from GF2), and then software has to play catchup (no, not ketchup :). It seems that, especially with the first few games for a technology, games focus on doing things because they "can", not because they "should".

    Instead of designing games with a clear focus, plot, and motivation, games tend to get diluted into flashy, silly things. Sure they look cool and have nifty features, but are they really FUN to play? Where is the replay value and interactivity with other human players?

    The problem with, for example, vertex shaders and the other DX8 features is that they are not a simple extension of a uniform graphics architecture.

    Instead of being able to design a simple graphics engine that supports a variety of features, and then enhance it as the game progresses, they have to "target" a "technology" (for example DX7 or DX8). Of course, the two are essentially completely different APIs, and they are mutually exclusive. This means that you get to rewrite your engine every time you want to support a new "technology".

    In a lot of ways, I'm really happy that cards (specifically the radeon and GF3) are moving towards programmability and higher quality visuals. It's quite a different race than pushing fillrates and making up stats on textured poly's a second. Maybe when things are fully generalized, we won't NEED an API like DX that gets constantly mutated, torturing developers...

    Oh wait, that's called OpenGL. (sorry, couldn't help it. ;)

    -Chris [nondot.org]

  • Looks like my machine, and the "goodness" of the image depends on the rendering engine a lot more than the video card....
  • by Speare ( 84249 ) on Friday March 23, 2001 @05:49PM (#343834) Homepage Journal

    With 128bpp, they may be talking about different buffers.

    The high-end SGI workstations in '93 had an effective 140bpp video memory. (I thought I recalled 142, but this is from my hazy recollections.)

    8+8+8bpp RGB front buffer.

    8+8+8bpp RGB back buffer.

    32bpp Z front buffer.

    32bpp Z back buffer.

    24bpp Windowing buffer.

    4 bpp (rle compressed) per-pixel video mode selector.

    I'd like to see more of that (plus today's dedicated memory for texture, vertex, transform, lightmaps, etc.)

    As for color bit depth, 8bps (RGB 24bpp) is the most you'll see on most CRTs. You won't see 32bpp onscreen, usually the other 8 bits are just dword alignment for speed or an alpha channel for video source weaving.

    However, the human eye is quite capable of seeing more colors in other situations; Hollywood typically does 16bps (RGB 48bpp) on their special effects, because they don't like to see 1"x1" jaggies or dithering on the 30' screen.

  • Now all they need to make realistic-looking graphics is 128 bits for color and a few orders of magnitude of speed improvement.

    You're kidding me, right? Why do you need 128 bits of color? Your eye can't really discern above 24 bits of color. Anything more than that would just be a waste. As for speed, you can't really tell above 70 fps, either, which is what some of the better setups are hitting nowadays. (You can get up to about 125 or so, but you're not going to be able to discern a real difference.)

    The GeForce 3 allows for essentially infinitely many different effects, as its "vertex shader" unit (it just does whatever transforms you like on vertices) is fully programmable. You can do fisheye lenses, motion blur, lens flare, whatever with the unit. That's where the beauty of it is, since nVidia knows that pushing the speed and color envelope isn't really important anymore. Now their focus is on getting more effects out there for programmers to play with.

    It should be rather interesting, as hopefully programmers won't have to worry about optimizing graphics code so much, sacrificing time from AI, physics, STORY, etc. Maybe this whole post sounds like some pro-nVidia propaganda, and I hope it hasn't come off that way, but I've just been rather impressed with the *reviews* of the card so far (in other words, no, I haven't seen it in action yet).
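    To make "fully programmable vertex unit" a bit more concrete, here is a CPU-side sketch in C of the kind of per-vertex routine such a unit runs; on the real card this would be written as a vertex program rather than C, and the fisheye math and constants here are purely illustrative.

        #include <math.h>

        typedef struct { float x, y, z, w; } vec4;

        /* A per-vertex function: given one eye-space vertex, emit one
         * transformed vertex.  A programmable unit lets the application
         * replace this whole function instead of only tweaking the fixed
         * transform-and-lighting settings.  'strength' bends the result
         * toward a fisheye projection; 0.0 gives plain perspective. */
        static vec4 fisheye_vertex(vec4 v, float strength)
        {
            vec4 out;
            float r    = sqrtf(v.x * v.x + v.y * v.y);
            float bend = 1.0f / (1.0f + strength * r);  /* pull edges in */

            out.x = v.x * bend;
            out.y = v.y * bend;
            out.z = v.z;
            out.w = -v.z;   /* perspective divide term (eye looks down -z) */
            return out;
        }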

  • But if the mac ever overtakes PCs... how differently will you be thinking?
  • by Wolfier ( 94144 ) on Friday March 23, 2001 @09:04PM (#343837)
    Transform and Lighting is nothing new - all 3D programs do transforms: rotations, scales, translations, skews, projections, etc.

    Take OpenGL as an example. The "T" (in "T&L") functions are glRotatef, glTranslatef, glScalef, and glMultMatrix. Before there was hardware T&L, people didn't use these functions often - they wrote their own. And it was amazing that even very simple, unoptimized matrix transform code performed better than these gl functions most of the time.

    What hardware T&L does (in terms of OpenGL) is accelerate these functions in hardware - formerly, the OpenGL library did inefficient software transforms; now it just blasts the arguments to some chip registers and lets the chip do the rest. And it is fast, not only because it reduces bandwidth use (intra-chip communication is fast), but also because it releases CPU cycles for other uses, which inevitably has a positive impact on performance.

    So, in short, if developers ditch their own matrix libraries and use the ones provided by the graphics API, they're already making use of hardware T&L. And, yes, unfortunately, hardware T&L only affects frame rate - there's no advantage other than frame rate that hardware T&L provides.

    Just remember - ALL effects are achievable in software. The more you offload from the CPU to the GPU, the more CPU cycles you can save for physics, AI, and graphic effects that the hardware does not do yet. So even hardware that "only" increases framerate sounds good enough for me.
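
    A minimal sketch in C of the difference described above: hand your transforms to OpenGL's matrix calls (which a hardware T&L card executes on the GPU) instead of multiplying every vertex yourself on the CPU. The angles and triangle here are made up for illustration.

        #include <GL/gl.h>

        /* Hardware T&L path: the matrix and vertex calls below are carried
         * out by the card's transform unit, leaving the CPU free. */
        static void draw_spinning_triangle(float angle)
        {
            glMatrixMode(GL_MODELVIEW);
            glPushMatrix();
            glTranslatef(0.0f, 0.0f, -5.0f);     /* move into view */
            glRotatef(angle, 0.0f, 1.0f, 0.0f);  /* spin about Y   */
            glScalef(2.0f, 2.0f, 2.0f);          /* uniform scale  */

            glBegin(GL_TRIANGLES);               /* illustrative geometry */
            glVertex3f(-1.0f, 0.0f, 0.0f);
            glVertex3f( 1.0f, 0.0f, 0.0f);
            glVertex3f( 0.0f, 1.0f, 0.0f);
            glEnd();

            glPopMatrix();
        }

        /* The old habit: transform each vertex with your own matrix code on
         * the CPU and pass the results to glVertex, which leaves the
         * hardware transform unit idle. */
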
  • Set your monitor to an 85 Hz refresh rate, and you're PAST that!

    (if you don't catch the meaning of that joke, well... then you didn't catch the initial joke, either)
  • wtf?
  • When 32-bit colors go through 20 rendering passes, the error adds up, and the output looks like crap. To stop this, we need colors to be represented internally by four floating-point values, each one being 32 bits in size. 4*32 = 128.

    The output would still be 32-bit. It would only be processed at 128-bit internally.

    ------

    • NV_vertex_program
    • NV_register_combiners2
    • NV_texture_shader
    • NV_evaluators

    That's just a few of the new extensions NVidia has added to OpenGL to support the new features of the GeForce 3. They are every bit as good as the DX8 stuff.

    ------
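
    A quick sketch in C of how a program would probe for those extensions before using them, once a GL context is current. glGetString(GL_EXTENSIONS) returns a space-separated list; the strstr search is the quick-and-dirty check most code of this era used (a strict version would match whole tokens).

        #include <string.h>
        #include <GL/gl.h>

        /* Nonzero if 'name' appears in the driver's extension string. */
        static int has_gl_extension(const char *name)
        {
            const char *ext = (const char *)glGetString(GL_EXTENSIONS);
            return ext != NULL && strstr(ext, name) != NULL;
        }

        /* Usage:
         *   if (has_gl_extension("GL_NV_vertex_program"))
         *       ...load vertex programs...
         *   if (has_gl_extension("GL_NV_texture_shader"))
         *       ...set up per-pixel shading... */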

  • by Temporal ( 96070 ) on Friday March 23, 2001 @04:25PM (#343842) Journal
    The main thing to notice is the combined usage of bump maps and environment maps. This wasn't possible before, and it looks really cool. Trust me, the video card makes a HUGE difference in the "goodness" of the image, provided that the rendering engine is set up to take advantage of it. Image quality is mainly determined by the detail of the art. However, newer cards support better features for displaying higher-detail images. Sure, you could do these shots on a Voodoo, but without hardware acceleration for all the rendering algorithms, it would run very, very slowly (measured in spf rather than fps).

    Another thing to note is that a lot of the really cool effects this thing can do are only noticeable in moving images. Bump mapping, for example, can be faked using regular texturing if nothing is moving. You really only see the difference when you see it in action.

    ------

  • One thing people forget about the GF3 vs the Xbox - they won't be the same chip. They're _similar_, but the GF3 cards you'll be able to buy still won't be as powerful as the Xbox GPU - the Xbox GPU will be more like a modified NV20 core with a couple of extra pipelines at different spots... iirc it was 2 pixel shaders and 2 vertex shaders or something silly like that...

    So, even if the Indrema has a 750MHz CPU, on paper the Xbox is still a stronger platform (in situations where the GPU's extra pipes help).

    Also, the Xbox has developers and financial backing. I think its a neat idea, but don't hold your breath for Indrema :)
  • In an ideal world, all pixel computation would take place in floating point, period. You'd choose color that way. 24/32 bit is entirely inadequate for sophisticated image processing. It's not about what gets on the display; it's about the computations that got it there.
  • One of the reasons for the high price of the Ultras is the high price for components: It uses very fast (and expensive) memory chips, and this is the main reason for the performance increase. If memory serves, the memory on the GeForce Ultra is faster than on the GeForce3 - the latter uses new technology to make better use of available bandwidth instead of increasing it.

    So until the prices on this kind of memory decrease quite a bit, I don't see the Ultras coming down.

  • this card isn't about higher FPS numbers, it's about the programmable hardware. The Carmack is wringing all the 31337 out of this thing, and if you don't have one, then you aren't going to experience Doom3 as it will be meant to be experienced... But by the time Doom3 ships (Christmas 2002, anyone?)... the GF3 will be about $350 and the next best will be there to make us all turn green.
  • No. I don't read Wired. I get all of my news from Steve Jobs. He Lives In My Closet And Tells Me Things.

    --

  • This is so true. A $40 ATI Xpert98 will, amazingly, give you a brighter and crisper desktop than nVidia's latest and greatest. This problem has been around for at least a couple years too.

    Incidentally, the Xpert98 is the card I recommend to people wanting to build cheap x86 Unix-workalike boxes, because in addition to its crisp 2D and low price, it also boasts excellent support in XFree86.

    ATI, despite their failings, will probably always lead nVidia in this area. I got another SUN workstation a few weeks ago, and its Riva chipset is very bright and crisp. (Offtopic: In fact, possibly too bright... when viewing large fields of white, I seem to hear a high-pitched whirring/humming coming from the case. It's not the monitor or other interference, and it's bugging me. It's not the annoying sound so much as I'm worried that there's something wrong with the chip.)

    IIRC Number Nine cards had legendary 2D and were very popular for x86 UNIX and Unix-workalike boxes, but I think they went out of business a year or two ago.

    --

  • Uh, why is this a troll? Come on, moderation downwards is for abuses, not for controversial or strange viewpoints.
  • You spent $2000 for a video card? I can't tell if you're just an idiot or a developer.

    That used to be a typical price for a "professional" 3D board, with hardware geometry transformation. I used it to run Softimage.

  • by Animats ( 122034 ) on Friday March 23, 2001 @07:00PM (#343851) Homepage
    The NV20 is all over the Game Developer's Conference this week, usually running a demo with a truly beautiful animated frog. Properly programmed, this thing can produce output that looks like it's from something at the Renderman level. Finally, the picture on the screen looks as good as the picture on the box. Maybe better.

    Yes, it cost $500, but I paid over $2000 for a far-inferior 3D board just a few years ago.

    It looks like Hercules will be the lead board vendor on this round. Creative is dropping out of graphics boards.

    Carmack has written that all developers should get one of these boards as soon as possible. Gamers may want to wait.

  • 128-bit color? At 24-bit color most people can no longer distinguish between individual changes in color; 32-bit color is quite enough.
  • I still own a TNT2 Ultra, and am happy with it (I'm content with playing at 800x600 resolution). Now, one of the big features of the GeForce was its hardware Transform and Lighting capabilities. Question: what game(s) out there actually take advantage of this (or the GeForce in general) in a way other than frame rate? Remember, it's been over a year since the GeForce was introduced, so consider this before running out to get a GeForce 3.
  • Ok, you've got a good point. But, again, what games currently on the market actually _take advantage of_ HW T&L? I think there's Quake3 and MDK2, but can't recall anything else.
  • Should hardware manufacturers really "optimize" around proprietary (e.g. Windows-only) APIs/languages?

    I think this is a big mistake in the creation of the GF3.
  • Err, if you can't even tell the difference from the screenshots alone, then obviously you're not paying enough attention.

    The fact that these babies can pull off this full-screen anti-aliased stuff with multiple transparencies and all sorts of multi-textured goodness at a high framerate sets them apart from anything available today.

    Graphics cards will keep getting better, always getting faster in order to achieve more detail, hence realism.

  • that's like, deep, man

    pass me some of what you're smoking

  • Perhaps I'm missing something significant though.
    Yup, you're missing something. For one thing, the stills don't show the dynamics of motion involved - you can have an original GeForce SDR and compare it with a GeForce 2 Ultra on Quake 3, and they look pretty much the same. But they sure as hell play different.

    I find this 3d "revolution" disappointing anyway.
    Well, then you're probably just not into that sort of thing. Nothing wrong with that, but it doesn't make sense for you to go around disparaging it just because it doesn't float your boat.

  • I read in another article that in order for games to use the chip, the programmer needs to learn assembler code to use the special effects. Wasn't the purpose of that hardware speed overkill to make it easier for programmers? They didn't like the PS2 for that reason, and the GeForce3 has the same thing. Why don't they make a chip which makes it easier to program faster and more realistic games? As far as I have heard, the hardware is way ahead of the software, and that is part of the current crisis in the market. There is no software (e.g. good voice recognition) that boosts productivity and needs faster processors. Therefore one could classify nVidia's decision to go that way as dangerous for the world economy (they're almost alone now, next to ATI).
  • uhh, what? how does DX8 "power" 95% of cards NVidia sells?

    I assume you're talking about programs using the video card.. I don't know about you, but 100% of the programs I run, windows and linux, all run OpenGL.

  • Let's say you're a wealthy dot-commer (you were smart enough to sell your shares before the NASDAQ tanked). Which manufacturer makes the top NVIDIA based board? Guillemot/Hercules, ELSA, etc? I know that Creative isn't doing the Nvidia uber-board this time around; but does it matter? In terms of quality, drivers, support, etc.
  • According to the article:

    Expect to see the likes of ELSA, Guillemot/Hercules and VisionTek (nothing from Creative this time around) fight it out for GeForce3 shelf space sometime in the coming month. Not only will they be releasing a new product line but also all of these companies have told us at one time or another that their current range of Geforce2's will have prices cut (but probably not the Ultras). This is a good thing..

    I tend to agree; these are the top end, in a different category than the Ultras. It's not just an upgrade (as the Ultra 2 was to the GeForce). More of an evolution. Prices drop when products are either not selling well or they are old technology (and a replacement is out). I don't believe that to be the case with the Ultras.

    I showed you my two cents...

  • Right, I understand that. I was kind of assuming it was very NASCAR, i.e. there are small differences that make the difference between a winner and a loser. On the surface, all the boards look and smell the same, but underneath, is anyone tweaking things one way or the other? Or does that just not apply here?

    Also, I'm a Radeon person. I get my drivers from ATI. Do Nvidia drivers come from Nvidia or the board manufacturer? Does it matter? Is Nvidia still committed to Linux as it was in the past? Or have things rolled into XFree86 as the Radeon drivers have?

  • ....
    I bet in 12 months we think differently.
    ....

    In 12 months we'd have the GeForce 5 (Nvidia releases a new card, either speed-bumped or a new architecture, every six months).

  • by cOdEgUru ( 181536 ) on Friday March 23, 2001 @04:03PM (#343865) Homepage Journal
    The average eye cannot discern any difference beyond 70 fps. And now that we have hit that plateau, it's quite obvious that image quality is next. Even the GeForce2 Ultra hits a measly 25 fps when you run it in 32-bit, 1600 * 1200 splendour with 4x FSAA enabled. This is what's gonna keep the vendors busy for the next year or more: bringing ultra-realism to the graphics. Halo and Unreal II will both be capitalizing on that.

    Gameplay would be next. AI would improve tremendously, storylines would improve, though sometimes you just don't care a f@#$ about the story and just wanna jump in and let that chaingun rip.

    My only worry would be that DirectX 8 is fast becoming the API of choice among developers (except Carmack, who claims he'll use only OpenGL till kingdom come). And considering that Nvidia now has an unfair advantage over other cards, since they developed DirectX 8 along with M$, well, my guess is as good as yours.

    However, a couple of weeks back there was much stirring among gamers when the Kyro II kind of beat the Ultra with its tile-based rendering capabilities. I would welcome someone like that anytime.
  • It's actually optimized for both OpenGL and Direct3D, this is just marketoid information for people with blinking VCR clocks and lots of cash.
  • Well, if you can get hold of one of these babies, you can run the demos from the NVIDIA site:
    http://www.nvidia.com/Demos

    They are as real as it gets...Impressive.
  • On the surface, all the boards look and smell the same, but underneath, is anyone tweaking things one way or the other?

    The only tweaking is done by the board manufacturer, and then only the drivers.

    Do Nvidia drivers come from Nvidia or the board manufacturer? Does it matter?

    If you want to use the manufacturer's drivers, feel free. I personally use Nvidia reference drivers (Detonator). But some of the manufacturer's drivers enable overclocking, etc without changing much of anything except the card's name in Device Manager.

    In short, it really doesn't matter who you buy it from. Buy the cheapest card with the best software bundle (my GeForce 2MX came with WinDVD2000 and Soldier of Fortune...it was in an Asus box). :)

    the unbeliever
    aim:dasubergeek99
    yahoo!:blackrose91
    ICQ:1741281

  • Well...

    My fiance has a generic GeForce 2MX, and I have an Asus GeForce 2MX; I can't tell a difference between the two. We have identical monitors, at the same resolution and refresh rate, and as far as I can see, there is no difference. Then again, we are both running the Detonator drivers.

    the unbeliever
    aim:dasubergeek99
    yahoo!:blackrose91
    ICQ:1741281

  • no way. video cards are nowhere near a plateau, horsepower-wise. processors don't really need any more oomph to do what they're used for, which is mostly manipulating windows and graphics.

    but as far as games are concerned, programmers are constantly pushing the limits. why? cuz you still can't render a photo-realistic 3d scene. people still have angular heads and rectangular hands. trust me, the video card market is still on the steep part of the parabola.

  • Actually I was at work and didn't want to log in.
  • Just because I don't respond to people all day doesn't make me a newbie, stop assuming things.
  • uh...that would be you

    -fragbait
  • Same basic story. Good card, ahead of its time, overpriced, nothing will take advantage of its best features. I bet in 12 months we think differently.

    In 12 months the price will be what, half it is now? Still too high for this potential customer.

    If current games are any indication of what to expect I doubt I'll be buying any for some time:

    Counter-Strike (a Half-Life modification) is filling up with egotistical people who think a year is a long time to have been playing the game.

    Quake3 was a *big* disappointment. Especially with CTF, where there is no grapple and there are bottomless pits that do instant kills, stuff you didn't see in Quake2 CTF, probably the biggest mod for the Quake trilogy. Rocket arena is impressive, but doesn't have quite the same loyal following.

    The Baldur's Gate series and the various MMORPGs just use graphics as eye-candy to lure 30-something men and women who are into fantasizing and playing roles; the meat of the games often misses the point of strategic play, and they are a big downer for those of us hoping for more.

    Sacrifice is, IMHO, the best game to have come out to take advantage of what some of the new graphics cards can do. Unfortunately for the rest of us, developers like Shiny are dropping the PC platform and now going back to console programming.

    I hate to say it, but it looks like the future of gaming is going to rest back in the hands of consoles. I guess they finally figured out broadband was the device to attract users who used to use consoles before computers.

    Only problem is that I don't want to go back to consoles. Guess I'm just screwed. Hell awaits me.

  • I attended a LAN party where a guy came in and hooked up his computer, monitor, and speakers and then asked if there was a fourth outlet he could use.

    Everyone glanced at us when I loudly asked, "You don't have a Voodoo5 do you?!?".

    He sheepishly packed up his computer and left.
  • Yes, if this "proprietary" API is used to power 95% of cards they sell.
  • Psh, you have no basis for your claims. Unreal 2 is specifically using several GeForce3-specific features, if you read the previews. ELSA is shipping with their GF3 card a new build of Giants: Citizen Kabuto that specifically takes advantage of the new features.

    By the way, you also claim that all the "hardware features of nvidia's last few generations" (which I'll assume means hardware-assisted T&L) have "no software taking full advantage of it." This is just outright false. Every game made using the Quake 3 engine uses T&L (like Alice, FAKK, and that new Star Trek shooter), as do Giants: Citizen Kabuto and Sacrifice; all are examples of current 3D titles using T&L. You may say "they're not taking FULL advantage! They're not using hardware mipmapping!" or some other obscure function, but we all know that T&L was the GeForce's flagship technical function and developers have rallied behind it.

    What am I trying to say? Don't dismiss nVidia's new innovations as things developers are not going to support, because there's a damned good chance they will.

  • Coolest thing I've heard about the GeForce 3 is from Tim Sweeney - the guy who made the Unreal engine. In the last PC Gamer he said that the card is essentially feature-complete for the purpose of creating photo-realistic realtime graphics. Now all they need to make realistic-looking graphics is 128 bits for color and a few orders of magnitude of speed improvement.
  • People don't seem to get this -- 128 bits per color for on-board processing! How you configure your monitor output is entirely up to you. It doesn't matter if you change your settings to output a sampled-down equivalent of the color; what matters is the ability of the graphics programmer to perform multipass operations on the color components of the pixels. That includes all your RGBs, alpha blending, pixel shading, etc. Don't forget to consider the error in calculations.
  • Products like the GeForce3 are the reason why Nvidia is flying high and 3dfx is out of business. Now who was really going to buy a huge video card that needed its own power supply? I wonder how many people were going to buy it only to find out it wouldn't fit into their present machine.
  • 32 bit color is NOT enough, as John Carmack pointed out. With a single pass, it's sufficient, but with 4-20 passes, the one bit errors accumulate and the result looks like shit. Why do so many passes? If you have the fillrate to burn, you can do realistic shadows, multiple-light source per-pixel lighting, blurred reflections, BRDF materials, and other impressive looking effects.
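
    A toy illustration in C of that accumulation: add twenty dim contributions to a pixel once in floating point and once snapped back to 8 bits per channel after every pass, the way a 32-bit framebuffer does. The pass count and the 0.4 contribution are arbitrary.

        #include <stdio.h>

        int main(void)
        {
            float         fp    = 20.0f;  /* pixel in a float pipeline    */
            unsigned char fixed = 20;     /* same pixel at 8 bits/channel */
            int pass;

            for (pass = 0; pass < 20; pass++) {
                fp += 0.4f;                                    /* keeps the light */
                fixed = (unsigned char)(fixed + 0.4f + 0.5f);  /* rounds it away  */
            }
            printf("float: %.1f   8-bit: %d\n", fp, fixed);    /* 28.0 vs 20 */
            return 0;
        }
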
  • and I'm thinking wow, that looks great, much better than my video card... BUT THAT IS MY VIDEO CARD showing that image!
  • you never thought it was possible

    It keeps getting

    BIGGER


    --

  • I think they took a leaf out of the X-box book, and added a few lens flares in Photoshop :0)

    Still nothing is going to compare to my Riva TNT

  • Rot in hell spammer

    No one cares about your lame ass site..

    We Geeks don't even like girls, we have an attachment to our puters :)

  • It's funny how geeks started out being nice and accepting of everyone from all walks of life, but then turned into the same social-ladder-climbing, labeled-clothes-wearing preps that they were trying to escape from in the first place :)

  • As others have pointed out, NVidia is exposing the functionality of the GF3 through OpenGL extensions.

    It's somewhat wrongheaded to bash NVidia for having good DirectX support, as they, more than any other card manufacturer, have pushed OpenGL as a viable API to use for gaming under Windows. A lot of current NVidia staff are old SGI employees that helped develop OpenGL and related libs (GLUT, etc).

    And while I have no first-hand knowledge of such things, from all outward reports it's more the other way around: Microsoft checks up on the big manufacturers of video cards (hint: there are not many left in the high-end consumer 3D space) to see what features they are looking to implement, and works those into DirectX. So if anything, DX8 is optimized for the GF3 as opposed to the other way around...

    In any case this makes good business sense for Microsoft and NVidia because of the XBox..both stand to make a lot of money if it is as successful as they hope, and the chipset is essentially a 'better GF3' (primarily due to being able to throw off some of the shackles of standard PC architecture and backwards compatibility).

  • It would certainly give my GeForce2 a run for its money.

  • Am I the only one looking at these screenshots and thinking "this is underwhelming"? I mean, with all the hype surrounding these cards and all the research that went into them, I really expected to see photographic-quality 3D within a couple of years. From these cartoonish characters it looks like we're decades away from anything that resembles reality. I, for one, couldn't tell whether these shots are any better than those of Quake 3. Perhaps I'm missing something significant though. I find this 3d "revolution" disappointing anyway.
  • Tom's Hardware Guide [tomshardware.com] has a good article on it too. It's 'the longest article he has ever written and it doesn't even have any bar graphs.'
  • I just have to give this obligatory rant now.. since it's almost on topic:

    In all tests of new graphics cards, the average FPS is shown. And with new cards, it's usually something like 165 fps and such. But who cares? The only interesting figure is how much time there will be between frames when there are a large number of enemies on the screen, doing something that takes calculation time, such as shooting. So, instead of giving average fps figures, make a demo that does some beyond-the-usual action and show us the WORST fps. THAT's what matters. You don't want that below 40 or something if you play something like Counter-Strike.
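
    A small sketch in C of the measurement being asked for: record per-frame times during a demo and report the worst frame (hence the minimum instantaneous fps) alongside the average. The timings in main() are placeholders; capturing real frame times is engine-specific.

        #include <stdio.h>

        /* Given per-frame times in milliseconds, report the average fps and
         * the minimum fps implied by the single slowest frame. */
        static void report_fps(const float *frame_ms, int n)
        {
            float total = 0.0f, worst = 0.0f;
            int i;

            for (i = 0; i < n; i++) {
                total += frame_ms[i];
                if (frame_ms[i] > worst)
                    worst = frame_ms[i];
            }
            printf("average fps:    %.1f\n", n * 1000.0f / total);
            printf("worst-case fps: %.1f (slowest frame %.1f ms)\n",
                   1000.0f / worst, worst);
        }

        int main(void)
        {
            /* placeholder timings: mostly fast frames, one heavy firefight */
            float demo[] = { 6.0f, 6.1f, 5.9f, 6.0f, 41.7f, 6.2f, 6.0f };
            report_fps(demo, (int)(sizeof demo / sizeof demo[0]));
            return 0;
        }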

  • I make a practice of buying slightly behind the cutting edge hardware. I find it's much cheaper and nearly as powerful. In this case, I've been waiting for the GeForce3 to be released so the price of the GeForce2 Ultras will come down.
  • If the GeForce 3 can't derivatate, I want nothing to do with it.
  • When you can get it for $200 and games actually take advantage of it.
    --
  • I'm always happy when stuff like this comes out because it drives down the prices in a domino effect: 2nd best has to be cheaper than best, 3rd best has to be cheaper than 2nd best, et c. down to the cheap junk I buy.
    --
  • Well, it seems like Sharky really knows his stuff. He tested Quake 3 with DX8. Well, that's nice, except Quake 3 uses OPENGL for rendering.