Graphics Software

3dfx Unveils Info Regarding Voodoo 4 & 5

Posted by Hemos
from the better-graphics-coming-your-way dept.
A reader wrote to us about the latest press release from 3dfx regarding the Voodoo 4 and 5. The V4 and V5 will apparently be released in March of 2000. The V4 will be single-processor, while the V5 will come in both a consumer and a professional version, supporting up to 4 and up to 32 VSA-100 processors, and up to 128 MB and 2 GB of RAM, respectively. The release for the V4 and V5 is rolled in with the VSA-100 announcement - definitely worth checking out.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • For the consumer market, products based on the VSA-100 deliver from 333 megatexels/megapixels per second up to 1.47 gigatexels/gigapixels per second fill rates using 16-128 MB of video memory and one to four processors per board.

    This has got to be the greatest bit of kit I've seen in a long time!

    Powerful? Without question!

    Linux support? I sincerely hope so!
  • we'll see when this board's released if it truly has these specs. plus, don't forget about nvidia, they have released new hardware so quickly this past year, i wouldn't put it past them to have a voodoo4/5 killer out by that time (or before). the geforce shows promise, but we'll have to wait until software is coded for these chips in order to see it perform in all of its glory.
  • by The Wing Lover (106357) <awh@awh.org> on Monday November 15, 1999 @07:26AM (#1531620) Homepage

    I can't figure out why everyone is so happy about 3dfx putting out another Voodoo chip. They're pushing a proprietary interface (Glide), where a perfectly good standard exists instead (OpenGL). They're using market pressure to get game manufacturers to adopt their standard, and lawsuits against developers who try to write Glide wrappers so that Glide-only games can be played on other video cards.

    Doesn't this sound a bit like another company that everyone is up in arms about?


    - Drew

  • by pen (7191) on Monday November 15, 1999 @07:26AM (#1531621)
    Let's get this off our chests...
    • Linux support?
    • This is just another trick by Microsoft
    • Wow, I'd like to see a Beowulf cluster of these..
    Personally, I think that this is great... let's hope iD is keeping up and giving us RealGuts(tm) in Quake 4.

    Remember how the cheapo motherboards used to be able to allocate some of the system RAM for video RAM? It would be pretty funny if these cards could do the opposite.

    --

  • I've heard of SMP computers, but SMP video cards?

    Looks like we'll have to all compile our kernels for SMP machines now.
    --
  • by marcos76 (113405) on Monday November 15, 1999 @07:29AM (#1531624)
    3dfx unveils good numbers for its new 3D architecture... but it lacks geometry acceleration. Do you still want 200 fps at 1600x1200 32-bit with 3 big polys in each frame? :) No thanks, I'd prefer a lower fill rate and a higher polygon count.
  • This thing is incredible. My jaw dropped to the floor and didn't stop till it went down a few floors. This is the first time 3dfx will put something out that is good not only for gaming. They are using a completely new architecture this time. It's about time; I was wondering when the kings of 3D would retake their crown.
  • That's great and all, but these little babies are wonderful Beowulfs by themselves :)))
  • Tom's Hardware noted that the new parallel-processing video card from ATI, the Rage MAXX, has to wait 2 frames to accept a user input, as opposed to 1 for most single-chip solutions. Even though SLI has every chip working on the same frame, does it still suffer from the same delayed-input problem?

    It wouldn't have been a problem in the days of the Voodoo2 SLI setup, as any player with one could get frame rates typically twice as fast or faster than pretty much every 3D accelerator out there, so the 2-frame lag would be the same or less time as the single-frame delay. However, with the ungodly frame rates offered by a single GeForce 256 with Double Data Rate RAM, if there were a two-frame delay for someone with a Voodoo5 5500, in a LAN game of Quake III the Voodoo5 user would be toast.
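    The arithmetic behind that comparison is simple enough to sketch (the frame rates below are made-up illustrations, not benchmarks of either card):

```python
# Input lag = frames of delay / frame rate. The fps values here are
# invented purely to illustrate the point, not measured numbers.
def input_lag_ms(frames_delay, fps):
    """Milliseconds between user input and the frame reflecting it."""
    return frames_delay * 1000.0 / fps

# A 2-frame delay on a fast card can still beat a 1-frame delay on a
# slower one -- but only if the fast card runs at more than twice the rate.
lag_two_frames_fast = input_lag_ms(2, 100.0)  # 20.0 ms
lag_one_frame_slow = input_lag_ms(1, 40.0)    # 25.0 ms
print(lag_two_frames_fast, lag_one_frame_slow)
```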

  • Still no geometry acceleration?? Bah!
  • by Haven (34895) on Monday November 15, 1999 @07:35AM (#1531630) Homepage Journal
    "Windows 95, 98, NT 4.0 and Windows 2000 drivers: allows you to run the Voodoo5 6000 AGP with all popular operating systems."


    Okay, not only am I defending Linux on this one, I am also wondering where the Mac drivers are. If 3dfx wanted to have some incredible benchmarks they should write a Mac driver and throw it into a G4. They say the Voodoo5s aren't only for gamers, so why not port the drivers to the most popular graphics design platform?
  • by jalefkowit (101585) <jason@nOSPam.jasonlefkowitz.net> on Monday November 15, 1999 @07:38AM (#1531631) Homepage

    I fail to understand why this stuff excites people. I've always thought that the market for add-on 3D graphics cards was going to develop a lot like the market for add-on sound cards did, and so far I'm seeing nothing that indicates otherwise.

    What I mean is -- consider for a moment how the market for add-on sound cards developed. Up to 1992, sound on the x86 PC was basically nonexistent, unless you owned a flaky almost-compatible like the Tandy 1000. Then the multimedia tidal wave hit and suddenly there was consumer demand for hardware sound support -- and a market sprang up to fill the demand.

    Once the demand for sound cards sprang up, the market developed through 3 distinct stages in the next 5 or so years:

    1. Race for Market Position: Five thousand companies hit the market selling sound cards that are all completely incompatible with each other. Software developers pull their hair out trying to decide which to support. Consumers pull their hair out trying to decide which to buy. Eventually one (Creative Labs' Sound Blaster) ekes out enough sales to justify making it the default choice for software developers to support, which launches a virtuous circle of consumers buying it because that's what the software supports and developers supporting it because that's what the consumers have.
    2. Hegemony through De Facto Standards. Soon the virtuous circle described above means that, for good or ill, the Sound Blaster becomes the de facto standard in the marketplace. Other products either become Sound Blaster compatible or are consigned to the margins. Creative maintains its profit margins by releasing a new board every so often (SB, SB Pro, SB16, SB32), upping features and performance. But eventually the feature set becomes Good Enough (TM) for most users, and adding new features becomes a less and less compelling reason for consumers to upgrade. (In the sound card market, this happened, IMHO, with the release of the Sound Blaster 16.) This puts downward pressure on prices, which broadens the market for these Good Enough products (and strains the market for the latest and greatest), which leads to...
    3. Integration and Commoditization. The fact that suddenly the hardware is cheap enough for everyone to own leads to integration -- the Good Enough hardware starts to become part of the motherboard, and the software APIs get rolled into the OS. This effectively kills the mass market for upgrade hardware -- if you can get a Good Enough sound card built right into your PC at the point of purchase, why spend $200 for the Latest and Greatest, especially since you'll never use most of those snazzy features anyway?

    So this is where we are today in sound cards -- while a few enthusiasts care about buying the latest Sound Blaster Live! or whatever, the vast majority of users are happy with the 16-bit audio that's hardwired into their motherboards. It's Good Enough!

    And that's what's going to happen in the 3D card marketplace, IMHO, fairly soon. We've already passed through stage 1 (I remember agonizing over whether to buy a Voodoo1 or a Rendition Verite card) and stage 2 (with 3Dfx milking their brand name for all it's worth through the Voodoo3). But now Good Enough 3D hardware is starting to come integrated on motherboards, and 3Dfx's Voodoo-only APIs have been almost entirely forsaken in favor of Direct3D, which is integrated into the OS. I've run 3D games on cheapo PCs using this integrated hardware, and while the performance isn't great, it's Good Enough -- while the add-on card companies fight over which card can provide 80 fps in Q3Test, or other "features" which would be lost on the average consumer anyway. So watch for it -- in a year I'd be amazed if there's still a market for whizbang add-on cards. Most people will be just fine with the Voodoo2-level hardware they'll get free with their PC.

    -- Jason A. Lefkowitz

  • by Hobbex (41473) on Monday November 15, 1999 @07:41AM (#1531632)

    Upon seeing the specs for that baby, part of me just screams I want it, but the other, more rational part of me wonders what the point really is.

    I mean, great: gigatexels per second. As much RAM as I currently have on my mainboard. Meaning what? I can now play Quake3 at 4,000*3,000 resolution? Yay. Yes, I know about anti-aliasing, but this is overkill for even that if not running very high resolutions (1024*768 and above).

    Read my lips: 90% of all speed problems with games on current hardware are the geometry setup bogging down the processor. Unless you play at the above-mentioned resolutions, or happen to have dual Athlon 700s and are playing at 100 fps already (and if I am right in assuming that this does not have a geometry chip like the GeForce), this card will be exactly 0% faster for you.

    In my opinion Nvidia has taken a much wiser approach to the whole 3D acceleration question, concentrating on the weakest point instead of just pouring in endless amounts of pixel fillrate that the processor can't feed anyway unless you are staring at a blank wall.

    -
    We cannot reason ourselves out of our basic irrationality. All we can do is learn the art of being irrational in a reasonable way.
  • just a bit of pedantry..

    Glide significantly outperforms OpenGL (OpenGL is just too complicated, but I thought everyone knew this).

    As for the licensing.. I know :-(

    roll out crystal space.. :-)
  • This is great, and I totally agree with you, but I'd just like to know where your "90%" stat comes from.
  • by bwoodring (101515) on Monday November 15, 1999 @07:49AM (#1531636)
    The page with detailed info concerning these boards is www.3dfx.com/prod/voodoo/newvoodoo.html

    The really interesting thing is that *once again* 3dfx promised us more than it will deliver. On the low end (Voodoo4 4500) these babies are getting smoked by the GeForce 256, which will be half a year older! The GeForce can do 480 Megapixels per second, about 1.3 times as fast as a Voodoo4 (which clocks in at 367 Megapixels per second).

    If the past is any indication, it will take at least a few more months for the Voodoo5 to be released (ignore what 3dfx says), and by that time Nvidia will probably already have a better card.

    In summary, the Voodoo4 is slower and less feature-rich than the GeForce 256, plus it won't be out for 4 more months. It could take longer for the Voodoo5, which will probably be an anachronism before it is released.

    Come on 3dfx! This is *not* the technology that will keep us ahead of the PSX2!!!
  • I've been waiting to upgrade my old 90 Mpixel Voodoo2 (which is still pretty decent on most games, really)...
    If Linux drivers come out, I'll probably go for the Voodoo5 6000 quad beasty.. That should hold out for a while..

    Anyone find anything on the memory technology yet?
    This card would have to have some massive memory bandwidth to keep up with those fillrates..
    I know even the SDRAM GeForce is memory-bandwidth limited at a much lower fillrate.
    It appears from the specs that each graphics chip has its own dedicated 32 megs (with up to 64 being addressable by each chip), so that is one memory trick I'm sure they are using.. Any other details?

    (anyone notice that 3Dfx, long saying "32 bits doesn't make much difference" cuz they didn't have the technology, is now pushing it ;-)
  • by Anonymous Coward
    And nobody will ever need more than 640k...
  • WOW !! You are amazing, I've never seen such a high concentration of incorrectness in one post. Let's do this blow-by-blow, shall we ?



    > It's nice to see 3dfx sticking to the Glide standard rather than some proprietary OpenGL nonsense that doesn't port anywhere.

    Glide is a proprietary interface owned and VIOLENTLY defended by 3Dfx. They sue the ass off of anybody who tries to figure out how it works, make a wrapper, etc. Search the slashdot archives for details. OpenGL, on the other hand, is an open, complete graphics API that is FULLY portable to basically ALL platforms.



    > Ever tried to run a TNT in anything except Windows? The 3D works like shit.

    Oh, then I guess the fact that I was just playing Quake 3 was just my imagination, because it looked great at 1024x768x32 bit color. I could have SWORN I got a TNT2.
    Try updating your drivers.


    > Sure, Glide might be slower in Linux/FreeBSD than Windows, but it is a simple, fast, and more efficient protocol than the crap that nvidia is turning out anyday.

    nVidia didn't 'turn out' OpenGL, SGI did. And Glide actually works FASTER in Linux than in Windows. Even when you're making concessions, you're blatantly wrong. Dolt.


    > nvidia users, you own a very nice 32MB card that can do wonders on a 2D desktop...look at X go!

    The only part of your post with a shred of accuracy. The 2D on this card is almost as impressive as the 3D.

  • by Anonymous Coward
    Why should they bother writing Mac drivers when Macs don't even have an AGP slot to plug the card into??
  • by Anonymous Coward on Monday November 15, 1999 @07:57AM (#1531644)
    a typical Slashdotter...I will pay $5000 for a video card, but $40 for Word Perfect is a travesty, because I can't see the source code.
  • Dude...I don't want to touch any of that. First, your eyes can take in and process orders of magnitude more information, bit for bit, than the ears. Sound cards these days are approaching levels of auditory saturation...and pushing standard, affordable speaker setups to their absolute limits. Hence the leveling off of sound card innovation.

    Now.....ugh. Your argument is the equivalent of saying that people will be happy with their PIII 550's next year and won't ever need to buy an Athlon 1 GHz. This...makes no sense at all. Video cards still have a LONG LONG way to go before they reach the maximum levels of performance they can achieve with monitors. Until we have a graphics card that processes information on a pixel-by-pixel basis with 64 million colors and real-world geometry at over 1600x1200 resolution with 60 FPS, we won't be anywhere near graphics cards levelling off.

    Sound cards have already pushed speakers to their limits . Video cards have a long, long way to go.

  • don't forget about the 3dfx mini-opengl driver.
  • >But now Good Enough 3D hardware is starting to come

    For most people "Good Enough" hardware is here right now. A huge majority of gamers should be happy with the current generation of 3D cards; the improvements between generations are getting smaller and smaller. Also, the limiting factor in multiplayer is not the 3D card but your modem.

    Maybe it's just me, but pretty pictures are impressive for the first hour. After that I want game play. It's the software which is more important to me.

    I still have to go home and see how Q3Demo plays on my machine at home :)
  • How do you know more geometry will be better? I'm not sure myself, but IMO 3dfx has been king, more or less, up to this point because they choose the best features to include. That is the heart of engineering. Deciding what to include and what to sacrifice, which trade-offs are the best trade-offs. Other manufacturers have included more and "better" features, yet when you look at price and speed and quality, I feel that 3dfx has been the best so far. Although the TNT2s are very good products also, and I'm sure the GeForce will be too.

    What makes geometry better than fillrate?
  • More T&L...
    Right now it's just a little triangle setup and lighting; there is much more that can be done in hardware..
    And we need faster T&L, faster memory interfaces, more fillrate, etc...
  • I wonder if this heralds an era in which a video accelerator has several processors dedicated to individual tasks(i.e. one for bump mapping, one for texture mapping, etc.) We could be on the virge of very fast graphics indeed.
  • makes sense... but they could port the PCI versions...
  • That sounded bad...what I meant to get across was that video cards have a long way to go until they saturate the capabilities of monitors. This is already possible, but graphics cards can, in theory, virtually mimic the real world on a 2D surface given good algorithms and ample processing power. I'm not too sure about how this would be done...but they need tremendous amounts of power and clocks to mimic the intricacies of the real world. The evolution of graphical power will continue to move quickly and steadily as we progress, just as the power of CPU's will.

  • Check your facts. [apple.com]
  • The Ge256 has proven itself to be no more than a logical extension of the TNT2 line of cards. The drivers are even the same for both cards. My oc'd Voodoo3 3000 is competitive with the Ge256 up until >1280x1024, and my poor card doesn't have a fabled 'nVidia GPU'. 3dfx know what they are doing, and to all my fellow gamers out there, save your ducats until March. Quake3 or UT on a quad-processor 128mb card? Oh ya.....
  • If you have seen ray-traced graphics and compared them to the current crop of 3D graphics accelerators, you will realise there is a huge difference in quality. It will take many years for you to have that level of graphics in realtime; maybe then it will end up as an integrated part. I would guess we are 5 to 10 years away from that.
  • That's what it might look like at face value, but I have rarely found that engineering makes these decisions:
    >because they choose the best features to include
    This is usually done by the marketing department, and often to the chagrin of the engineers, who would like to think they know better...

    Seth

  • Examples of "SMP video".
    • Quantum's dual Voodoo1.
    • 3dfx Voodoo2 (SLI)
    • ATI Rage Fury Pro 128 Max Turbo Fast Thing, or whatever the hell it is called.
    • Bitboys Glaze3D (if/when it comes out).
    Everyone implements it a bit differently, but the idea of SMP video is not new. CAD people, who use cards that cost more than a decent car, have had this stuff forever.
  • Ok, on page 54 of their Quantum3D PDF presentation, it mentions the Quantum3D Alchemy (the high-end cards with 8-32 processors) and its memory bandwidth of "over 100GB/sec".
    I assume this is extrapolated from the 32-processor version, each with its own memory interface..
    That means memory bandwidth per processor is about 3 GB/sec.
    Methinks this will be a bit memory limited in games that don't use its texture compression...
    However, if games start supporting 3dfx's texture compression, they should fly...
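    As a quick sanity check of that per-processor estimate (the total figure is from the post; the even split across chips is my assumption):

```python
# "over 100GB/sec" quoted for the 32-chip board; assume an even split.
total_gb_per_s = 100.0
chips = 32
per_chip_gb_per_s = total_gb_per_s / chips
print(per_chip_gb_per_s)  # 3.125 -- i.e. "about 3 GB/sec" per processor
```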
  • I can't see why people are up-in-arms about this particular release, it's just the continuation of the graphics-card cycle. nVidia, S3, and the rest will continue to release new chipsets along with 3dfx. 3dfx is far from king of the hill: namely, they release products at such a slow rate that for much of the time it's better to get a TNT2 Ultra, for instance, instead of a Voodoo3.

    Sometimes 3dfx will be on top, sometimes nVidia will be reigning champion (the NV15 is in the works, i hear...)

    The Savage2000 and ATI Maxx are almost out...it really doesn't matter. These cards are way more powerful than anyone really needs, or will need, for a while. Until software comes out that really needs a couple hundred megatexels a second, I don't really care who's on top of the hill (and by that time, there will be even more powerful cards coming out)...
  • And not only that, but there are Voodoo 3 drivers available in Beta from 3dfx for those cards to run on Macs. I'd imagine they will continue evolving that code base to support the 4 and 5 cards on the Mac platform as well.
    Seth

  • OK. From the neat little Shockwave Flash thinger on the 3dfx website, it says "So powerful, it's kind of ridiculous." Actually, I find it highly ridiculous.

    Come on. Do we -really- need to be able to frag at 3200x2400x32bpp at 60 frames per second? Well, ok, I guess piping this to a 57" big screen TV would be nice.

    But seriously, isn't this approaching overkill? I find Quake II to be quite fun on a Voodoo3 3000 AGP. Granted, my shitty 14" monitor is the limit, and why I'm only running at 8x6. But alas.

  • by Haven (34895)
    now do we have the hardware to support Virtual Reality?
  • speed
  • I'm not trying to be flamebait, but I think most "hardcore gamers" decide how great a card is before they even try it. If the numbers are high, it rules. 2GB sounds awfully extreme to me, and can the architecture on the card handle all of that? I mean, what kind of game has 2GB of textures in it? I guess you could write several "versions" of each texture, but would that make it much faster than just re-rendering it? I'm curious to see if there's much difference. According to numbers, it should look amazing, but I'm not expecting much.
  • Macintouch [macintouch.com] is reporting that the PCI version is already supported by the company's existing Macintosh drivers. You can read the FAQ [3dfx.com] yourself. No doubt, if there is enough interest, drivers for AGP Macintoshes will be forthcoming.
  • T&L is a gamble. High fillrate is a gamble. Bump mapping is a gamble. Any new feature a chip manufacturer puts on their chip is a gamble.

    No one knows what game companies are going to try next.

    There is no way of telling whether hardware transformation and lighting is going to make any difference at all in future games. Sure, nvidia is going to tell you that future games will depend on it! There is no way of knowing that 5 gazillion texels/sec is going to really make much of a difference to future games, although 3DFX doubtless wants you to think that. No one knows whether game companies are going to stuff their games with bump-mapped polygons, no matter how much Matrox tells you it's the truth.

    Point is, each of these hardware developers is hedging its bets that game companies will favor its technology.

    As for us consumers, I would take a "wait and see" approach. I'd never go out and buy the latest and greatest until I see what games run well on them and what games do not run well on them. Specs from pre-released hardware are meaningless, and even released hardware that runs a FEW games spectacularly is nothing to base a purchase on.

    Look for the architecture that stands the test of time, and has support for the platform and games you play.
    The GeForce 'chip' does 480 MegaPixels/sec, true, but without textures. The RAM on the GeForce boards is not fast enough to serve textures at that speed. When the dual-cycle RAM boards come out, it will actually go at 480, but in the meanwhile this is a meaningless number - you are limited by the speed of texture and pixel data through memory before you get close to 480 MPix/sec.

    I would say they are the same speed (Before GeForce gets new ram), but that is disappointing considering that the 3dfx also cannot do geometry.
  • In order to push that many pixels/texels, fast writes will need to be enabled, since the current memory bus is just too slow.

    Also, is the current PC-133 standard fast enough (for workstations, not gamers) to send 2GB worth of textures?

    I don't know. That's why I'm asking you guys...
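    For what it's worth, a rough upper bound: PC-133 SDRAM has a 64-bit bus at 133 MHz, which peaks at about 1.06 GB/s, so streaming a full 2GB texture set once would take on the order of two seconds before any overhead. A sketch of that arithmetic:

```python
# Theoretical peak of PC-133 SDRAM: 64-bit bus * 133 MHz, one transfer
# per clock (single data rate). Real sustained rates are lower.
bus_bytes = 8
clock_hz = 133_000_000
peak_bytes_per_s = bus_bytes * clock_hz   # 1,064,000,000 ~= 1.06 GB/s
texture_set_bytes = 2 * 1024**3           # the full 2GB texture pool
seconds_per_pass = texture_set_bytes / peak_bytes_per_s
print(round(seconds_per_pass, 2))  # about 2 s to stream the set once
```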
  • >We could be on the virge of very fast graphics
    >indeed.

    virge, LOL! please tell me that was intentionally funny (the virge being a highly crap excuse for a 3d card)
  • Do you seriously think that we have reached the ultimate plateau of graphics excellence, and there is nothing more to achieve? These boards aren't designed for playing Quake II at 800x600; they are designed for Quake III and other yet-to-be-released games. Current PC games still don't compare to the quality of demos for the PSX2, which itself isn't on par with the quality of movie effects or animation (eg, A Bug's Life, The Phantom Menace). Faster is still definitely better; the only question is whether 3dfx is making their cards fast the right way.
  • Well, you could implement a fully RAM-caching Apache in this thingy and blow away NT in the next Mindcraft test ;-).
  • All of these Nvidia GeForce/3dfx Voodoo 4 and 5 boards are technically amazing but this level of consumer 3D hardware is in desperate need of a new killer app. I have a Voodoo 1 and I'll likely be more than satisfied with the performance of Quake Arena on that thing. I simply refuse to drop hundreds of dollars more to play games at higher resolutions/framerates.

    Why, when we all have a global network right in front of us that is ablaze with information and commerce, is no one strapping a hardware-accelerated 3D engine onto the net? When am I going to be able to navigate the web in 3D? When can I use my 3dfx or Nvidia board to do real work, or to shop, or to explore real information and news? Why is the web still 2D? Wake the fuck up. Screw VRML - I'm not even asking for any sort of server tech - just give me a fly-through 3D-abstraction of the HTML/XML content that is already there. If people know the engines are out there they will start to build for them.

    Bottom line - I won't give 3dfx or anyone else more of my money to play another FPS. I will part with more money for 3D hardware when I can use a Voodoo 5 6000 to give me an Ono-Sendai Cyberspace VII-like window into the net.

    Night
  • Billy "Wicked" Wilson of Voodoo Extreme asked a bunch of high-profile devs exactly this question. [voodooextreme.com]

    The majority response was that if they had to choose, they'd pick a card with accelerated geometry processing and a mediocre fill rate over a card with an insane fill rate and no geometry acceleration.

    What does that tell you about the direction the game developers want to go? They want to build games with higher-polygon engines/content. My guess is that's what we're gonna see.

  • First some quotes (from the press release):

    Re: the Voodoo4
    The boards, which render two fully featured pixels per clock, will deliver between 333 and 367 megatexels/megapixels per second

    Re: the Voodoo5 5000/5500
    The board, which renders four fully featured pixels per clock, will deliver between 667 and 733 megatexels/megapixels per second fill rate

    Re: the Voodoo 5 6000
    The Voodoo5 6000 AGP, which renders eight fully featured pixels per clock, will deliver between 1.33 and 1.47 gigatexels/gigapixels per second fill rate


    Now, if you'll notice they state how many "fully featured pixels per clock" each card delivers. Also, notice that the V4 does 2, the V5-5500 4, and the V5-6000 8. Along with that, as I guess one would expect, the V5-6000 has double the fillrate of the 5500 which has double the fillrate of the V4.

    So? What's my point? Well, with the Voodoo2 -- which could render two pixels per clock -- the full fill rate was achieved only if the app was rendering two textures per pixel (ie. multitexturing). If the app wasn't multitextured, the effective fillrate was actually only half the "marketing" fillrate. I think this was also the case with the Voodoo3, although I'm not positive.

    I'm not saying that this is definitely the case with these cards, but:

    • Correct me if I'm wrong, but I think these cards are still based on the same architecture as the V1, V2, and V3.
    • 3DFX is somewhat notorious for advertising the higher "marketing" fillrate as opposed to the true fillrate.
    • The fact that they qualify the fillrate of each card by stating the number of pixels rendered per clock kind of worries me.


    If this is the case, apps that don't take full advantage of the high-end cards (ie. have less than 8-pass multitexturing) may leave you with nothing more than a glorified and expensive Voodoo4.
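    A minimal sketch of that concern, assuming (as with the Voodoo2 case above) that the quoted fill rate counts one texel per texture unit per clock and that units with no texture to apply sit idle -- an assumption about the VSA-100 pipeline, not a confirmed spec:

```python
# Quoted ("marketing") fill rate counts one texel per texture unit per
# clock; an app applying fewer textures per pixel leaves units idle.
def effective_texel_rate(clock_mhz, texture_units, textures_per_pixel):
    busy_units = min(texture_units, textures_per_pixel)
    return clock_mhz * busy_units  # megatexels/sec actually delivered

# Voodoo2-style example: 90 MHz core, 2 texture units.
print(effective_texel_rate(90, 2, 2))  # 180 -> the advertised number
print(effective_texel_rate(90, 2, 1))  # 90  -> half, without multitexturing
```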
  • 3dfx appeals to the greed in developers with their Glide API. By this I mean that the developers can port to Glide, and the framerates are 10-15% better than OpenGL or Direct3D. The reality is that if the developers want to kill the Glide API, they should just exercise some restraint and NOT code to it. It's not like 3dfx doesn't support OpenGL (although I have yet to see a fully compliant ICD, correct me if I missed it...since I use TNTs for development now) or Direct3D.

    As for being happy about 3dfx putting out another competing 3D board...you better damn well believe I'm happy. I love competition...it makes BOTH of the leaders better in the long run. Think about how competition has improved those frame rates and image quality just over the past 3 years. (That's even in spite of an initial head start with Glide by 3dfx.) Simply amazing really...

    -- DW
  • yes, i'll agree with your assessment of the sound card market, and yes, there is always a group of followers who will be happy with older equipment, but graphics cards still have a long way to go before commoditization.

    it's much easier to reproduce a sound that sounds real than a 3d image that looks real, and it won't be for another 5-10 years that the video card market becomes totally commoditized.

    now, the low end is already that way, and will slowly continue in that fashion for some time.

    -jm3
  • Do you know what a winmodem does? It makes the CPU do the work that a $16 chip could do, for a 10% performance hit.

    Good. I saved $16. Why do I really care if it uses 10% of the CPU? My PC is idle most of the time anyway. Linux is single-user on my machine anyway.
  • Hmm...It's fair to say that most people don't use 100% of the CPU very often. Why is saving 16 dollars (manufacturer's cost; the consumer's cost difference will be greater) at the expense of giving up 10% of your CPU so bad? It's not my choice personally. But for the general market, there's no reason not to save a few bucks and let a $50, 3 or 4 hundred MHz CPU do the work.
  • I agree with you about the initial stages, but not regarding the later ones. I don't think we are anywhere close to seeing commoditisation of 3D graphics cards. The point is that sounds cards are fundamentally simple. They pump out waveforms, and that's about it. The system isn't at all complex, and a 44kHz output is fine for human ears. Sound content is by and large recorded, so there's not a lot of scope for sound processing beyond a bit of filtering and a few special effects. It was obvious very early on that a CD-quality sound card was going to turn up soon, and that this would be enough for most people. 3D Graphics cards are a completely different ballgame. Sure, they need to be able to reach sufficient resolutions, bit depths and refresh frequencies. No problem, 1600*1200*32 at 60Hz is about as much as a human eye needs. That kind of spec can and will become a commodity. But 3D rendering is insanely complex. Nothing in the world is even remotely close to being able to render complex realistic scenes in realtime. Even having a pentium processer for every pixel wouldn't be fast enough to do complex raytracing. Even if we double rendering capabilities every year, we're still over twenty years away from being able to do this in realtime on a commodity platform. Point is, there will always be scope for enormous innovation in the 3D cards market to implement new techniques. There are unlimited optimisations and innovations to be made. 3D cards are all about producing an approximation of a visual scene, and the winner will be the card that makes the best approximations, the most "realistic" scene while maintaining some kind compatibility with standards. Take, for example, hardware T&L technologies. Good idea, now becoming feasible to implement. Once it catches on cards without it won't have a chance. But even hardware T&L will get replaced as cards start to implement scene description langauges etc. 
My guess is that you will see ever more processing delegated to the graphics subsystem until you have what is in effect a fully programmable, dedicated graphics computer. These will become the standard, and start to ramp up ever more powerful specs, rather like PCs at the moment. Clearly the graphics supercomputer-on-a-card is a long way off right now. This means there is going to be technological "leapfrogging" for the foreseeable future. Sure, some people will be happy once technology gets to a certain point. These are the people who are already content with a decent hi-res 2D card to do their word processing and run a few business apps. But in the 3D arena, there will *always* be something that looks and feels a lot better just around the corner, and that is exactly what all the hardcore gamers will want to buy. Still, I reckon that the first device to create photorealistic 3D scenes in realtime won't be a graphics card at all. It'll be a very clever genetic algorithm that "paints" the scene. Just give me ten years or so......
  • Yeah, I remember that. Not really a good test of the NV10. They were comparing fill rates at 640x480; the NV10's fill rate shines when you start doing 1024x768, 1280x1024, 1600x1200 etc. Also, I don't remember any of the tests taking advantage of hardware T&L. All the T&L code for those tests is done in software, even if you have the hardware for it. So, again, those tests didn't really touch anything the NV10 does well.
  • Exactly... 3dfx has done a good job in the past of making good trade-offs in their design. I will definitely be waiting to see how these two technologies compare to each other before buying a card.
  • I think the real reason that Linux has no drivers for Winmodems is not that they suck, but that the manufacturers have chosen not to make programming information available.

    I haven't ever used a Winmodem, so I don't know how hard they suck, if at all.

    Molly.
  • This is because each chip has its own dedicated memory, in order to give decent overall memory rates..
    2GB / 32 chips per high-end card = 64 megs a chip, the max that the chips can address...
    The reason they have the consumer market top out at 4 chips is that there are diminishing returns with each chip:
    2 should be nearly twice as fast, 4 less than four times as fast, 8 perhaps 6 times as fast, etc..
    I don't believe they can pull off very linear scaling with this architecture.
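    The sublinear scaling claim can be sketched numerically; the per-chip efficiency factor below is an invented illustration chosen to match the intuition above, not a 3dfx figure:

    ```python
    # Per-chip memory: assumed numbers from the post above.
    TOTAL_RAM_MB = 2048
    CHIPS = 32
    per_chip = TOTAL_RAM_MB // CHIPS
    print(per_chip)  # 64 (MB per chip, the stated addressing limit)

    # Hypothetical diminishing-returns model: each added chip contributes
    # slightly less than the last. The 0.93 efficiency is a guess, chosen
    # to reproduce the post's "2x, <4x, ~6x" pattern, not a measured value.
    def speedup(n_chips, efficiency=0.93):
        return sum(efficiency ** i for i in range(n_chips))

    for n in (1, 2, 4, 8):
        print(n, round(speedup(n), 2))  # 1 1.0 / 2 1.93 / 4 3.6 / 8 6.29
    ```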
  • How your post avoided being moderated down as flamebait is beyond me.

    Matrox offers an AGP card with two outputs. It's called the G400. Unlike the V4 and V5, it's already released and available.

    Anyway, I'm somewhat off topic, but I needed to correct this. The issue with PCI is bandwidth and texture swapping. If the card truly can have up to 2GB of RAM (yes, there are many simulation visualizations and mappings that can use this), you'll need more than the 533 MB/s provided by the PCI bus. Even full AGP 4x (w/ RDRAM) has only 1.06GB/s, and the roughly 2s delay that implies is damn noticeable.

    If they can put multiple graphics processors on one card, why can't they put multiple output ports on the same card?
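    The bandwidth argument checks out as back-of-the-envelope arithmetic (bus figures from the post above; real-world transfers would be slower still due to protocol overhead):

    ```python
    # Time to stream a full 2GB texture set across the bus, ignoring overhead.
    def swap_time_s(data_mb, bus_mb_per_s):
        return data_mb / bus_mb_per_s

    pci = swap_time_s(2048, 533)     # PCI at 533 MB/s
    agp4x = swap_time_s(2048, 1060)  # AGP 4x at ~1.06 GB/s
    print(f"PCI:    {pci:.2f} s")    # PCI:    3.84 s
    print(f"AGP 4x: {agp4x:.2f} s")  # AGP 4x: 1.93 s -- the "approximately 2s" delay
    ```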
  • Good point; the bottom line is that 3dfx and nVidia and all the other companies are building these chips and cards to make money. 3dfx has gotten some bad press because they didn't include 32-bit color on their Voodoo3 line. To me, that doesn't seem indicative of a company that is throwing on features mindlessly to market their product. On the other hand, one could argue that 3dfx didn't include 32-bit color for other reasons: 1. 3dfx sucks, nVidia rules; 2. they just missed the boat on this one. I don't know which is the case; probably no one but employees of 3dfx truly knows. I do like 3dfx, but my BS meter goes up a little when they rant about no one needing 32-bit color, 60 fps being more important. Come on, any company is going to say what they have is better.
  • Yes, these are all gambles in a sense. But some things are not gambles. Anti-aliased polygons can make a scene look a lot better (on the order of magnitude that bilinear and trilinear interpolation do)... so I would consider this to be less of a gamble. I imagine that nVidia will support this soon; 3dfx may have just done it first. As for the "revolutionary" T-buffer [tm]... this is just the accumulation buffer, and I've been wondering when someone would support it. The lighting in T&L, I think, may be a bit more of a gamble. Most games still use precalculated lightmaps. I must admit that hardware lighting looks really nice and allows for a lot of cool effects, but the problem is the limit on the number of active lights in a scene (which results in the coder resorting to lightmaps again, or perhaps implementing a partial lightmap/dynamic light engine). As for the transforms... this is a good idea and one that I think we'll see more of. I would wager that there will be a point when a 3D card has ALL of these nifty features in hardware (meaning everything that OpenGL can do and more is done in the hardware *drool*), but until then... we get to wait and see what the 3D card makers and game writers think is MOST important.

    So I agree that these things are all gambles in a way...but I think they are all bound to be supported by everyone eventually anyway...its really just a matter of priorities.
  • Hey, it's a great idea. Probably slower than normal RAM, but A LOT faster than virtual HD RAM... just give it a priority in between those two. (Well, I dunno if Windows/Linux has RAM priorities like the Amiga....)
  • Now.....ugh. Your argument is the equivalent of saying that people will be happy with their PIII 550's next year and won't ever need to buy an Athlon 1 GHz. This...makes no sense at all.

    But you're proving my point! Have you looked at the PC hardware marketplace recently? Most people are happy with P3 550's. In fact, most people are happy with K6-350s! All the action in the consumer marketplace is happening at the low end around cheap PCs, not at the high end around 1GHz Athlons. Nobody but a few enthusiasts (myself included) gives a hoot about 1GHz Athlons.

    Why is this? Because for the vast majority of computing tasks, a K6-350 is... Good Enough (TM). The increase in utility for the average user from a CPU upgrade becomes vanishingly small -- certainly smaller than the difference in price. And when an upgrade's cost exceeds its perceived utility, people won't make the upgrade -- which leads to commoditization and integration.

    Sound cards have already pushed speakers to their limits. Video cards have a long, long way to go.

    Again, the issue here is not theoretical limits of graphics processing. You could make sound cards that would wring even more accuracy and fidelity out of PC speakers if you wanted to. The issue is increase in performance versus increase in cost. Most people think they won't see enough of a benefit from a 1GHz CPU to shell out a premium for it; most people think they won't see enough of a benefit from a 256-bit Sound Blaster to shell out a premium for it; and soon, most people won't see enough of a benefit from a Voodoo6 to shell out a premium for that, either -- regardless of how many more polygons you can squeeze out of it. Once the hardware gets Good Enough, most people stop caring if it will ever get any better.


    -- Jason A. Lefkowitz

  • Of course, if no-one has high poly-count cards, then these games will be longer coming, because the market won't be there.

    I truly hope that nVidia can gather enough support for the GeForce's geometry engine so that this battle is fought and won on which solution is technically better, not which company bullies the other better.

  • Damn, when does a 42-karma slashdotter get to moderate again? I'd unload all my points on the post which is the parent to this one. It's pretty much the most insightful post I've read in a long time.

    Anyway, we've already seen some moves to Stage 3. Apple tried it with ATI, but got screwed, I think, because of ATI's crappy drivers (heh, I'll take any chance I can get to bash ATI). Plus, it seems like ATI would still rather build cards than sell chips to stick onto Apple mobos, probably because Apple was such a niche market for them anyway.

    Then Intel was also trying it, with their AGP nonsense, and their 3d graphics chipset that they tried to use to corner the markets so they could foist it onto mobo manufacturers as the "de facto" standard. This failed. I'm not sure why. Maybe it just wasn't "good enough" enough.

    What will suck about this integration of "good enough" hardware, is the proprietary API that will be tied to it, probably DirectX, which will prevent game manufacturers from taking the sensible approach and using a cross platform 3d API like OpenGL, or MESA, which will probably mean that most games will continue to be written for Windows first, and Linux and Mac will continue to be the afterthought.

    While Office 98 and the iMac have built Apple an impressive marketshare recovery in the past few years, and all the press and business support have bolstered Linux, these platforms will not gain broad acceptance outside of their niche markets until the game-software availability problem is resolved. And for that to happen, there needs to exist a compelling cross-platform 3D API that the game industry actually uses.

    For this, I'd say that Carmack is the industry leader, and a great example to follow. Not only does Id design GREAT games, but they do it in a manner which facilitates cross-platform adoption, which gives them access to markets that are otherwise not available. I hope that this is enough of a competitive advantage that it persuades other game developers to follow suit. (at least indications for Bungie are that they will be releasing Halo cross-platform, so that's an encouraging trend.)

    I wish I had a nickel for every time someone said "Information wants to be free".
  • I agree. Sound cards compare much better to 2D cards, which have basically leveled off...
    Almost all a sound card does is D/A conversion and *very* minimal processing; sound cards are mostly analog devices with a D/A converter.
    2D cards are the same sort of basic: a very small amount of window processing, a D/A conversion, and analog components.
    3D cards, on the other hand, are very complex processors, in recent generations as complex as the general CPUs in our computers. Just as CPUs are getting pretty fast but are not yet (nor ever will be, I suspect ;-) fast enough, current 3D cards aren't either, and are still evolving at least as fast as CPUs, if not faster.
    However, you can also get less than cutting-edge graphics cards (as well as CPUs) and still have a decent system.
    I for one am still using a Voodoo 2, and am only now, 2 years after the purchase, looking to upgrade. I also have my K6-2 300 from the same era running my gateway/server, though my main workstation has a Celeron 400 in it (commodity tech, bought darn cheap).
  • Just about every vid card maker over-promises. nVidia is the worst offender: the TNT2 Ultra finally provided what the TNT2 was promised to do. There will probably be a GeForce+ or some such that will meet the published specs of the original press releases for the GeForce.

    S3 has done the same with their new chip as well, with lower mem and processor speeds than originally promised.

    matt
  • One of the FAQs (check the website), says that both Linux and BeOS will be supported... so I'd imagine that eventually drivers for both will be released, though probably not as early as the Windows drivers. Also in the FAQ, there are no plans for Mac support at this time.
  • Hehe. I'm sitting here, rendering in 3D Studio MAX. Linux has a long way to go... you've got pretty computers, but no high-end software to run. (Don't push POV-Ray at me, either.) As far as dissing SGI... when was the first time you could build your TNT/Xeon? How long was the Onyx2 on the market before that? However... this rendering I'm working on, at a piddly 640*480, 100 frames... is taking 20 minutes to do on a dual 500MHz P3 box. I would love to have dedicated 3D hardware that could kick out pretty images. Anybody else feel my pain?
    The G4s have a 133MHz AGP 2x slot.
  • In case you did not read further, the 32-chip version with 2GB of RAM will cost around $40k. Also, why in the world would you want one, unless you were rendering high-quality video in faster than realtime? The thing could render Toy Story (for lack of a better example) in about 10-20 minutes. Also, if you do the math, it could run resolutions of 4096x3072 at 30FPS, 32-bit, or higher, but what monitor could support it? At least the thing does 32-bit, finally. I'm opting for the V5-6000 (AGP, 4 chip, 64MB) for myself, and I think it costs over $300.
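    For what it's worth, the arithmetic roughly holds even using the 1.47 Gpixel/s figure quoted for the one-to-four-chip consumer parts (the 32-chip board should be far higher), assuming zero overdraw:

    ```python
    # Frames per second a given fill rate supports at 4096x3072, no overdraw.
    FILL_RATE = 1.47e9        # pixels/s, the press release's top consumer figure
    pixels_per_frame = 4096 * 3072
    fps = FILL_RATE / pixels_per_frame
    print(round(fps))         # 117 -- comfortably above 30 FPS
    ```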
  • Why would you want to BUY TWO video cards to run dual monitor support when you can buy a Matrox G400 dual head and get dual monitor support from a single graphics card???
  • Geometry and lighting setup is extremely important. For my stuff, it is a very significant bottleneck. It is the perfect type of operation to offload to hardware. Not only is 3dfx missing out on that, they have been dicks about the source code. When they go down, they'll have no one to blame but themselves.
  • And why is this insightful? Though I should point out, in counterpoint to the previous post, that nVidia has NEVER reached the hype of their products in the past. The TNT was supposed to be a TNT2. That's why they never released any GeForce specs until release was imminent. They didn't need to release 'real' specs, though; they had Tom to blatantly plug the NV10 long before it existed.

    And on the other hand, 3dfx has never failed to meet their specs. Even though their products have severely lacked in many departments (AGP texturing, 32-bit colour), at least they didn't play the Hype Game that nVidia did.

    And I don't buy 3dfx products anymore. My first accelerator was an original Voodoo Graphics, that cost me close to $300(cdn). Since then it was an i740 (hey, I was on a budget) and now a G400. And the G400 is the best card I've ever had the pleasure of using, it's fast, pretty (environmental bump mapping BABY!) and more Open Source friendly (specs vs un-improvable open source driver). Granted, 3dfx is definitely the least open source friendly.

    Anyway, that was my rant. I'm happy now.
  • Note:
    Each processor will have its own dedicated 64 megs of RAM in this system. If a texture appears in more than one processor's section of the screen (basically guaranteed), then it must be in each of those processors' memory. I would assume that because of this, max textures would actually be in the 128-256 meg range; if you have a lot of large textures, it could be as little as 64.
  • "All the action in the consumer marketplace is happening at the low end around cheap PCs, not at the high end around 1GHz Athlons."

    I understand what you are trying to say (I think), but the explosion in growth in the low-end is because the prices are falling. This makes PC's more accessible to people who aren't willing to spend $$$ on a computer. In other words, different consumers are buying the cheap PC's, not the older PC consumers who have been buying every couple years. And definitely not the hardcore gamer.

    I also believe that this enthusiast 3d market has a chance to expand and grow over time into more main stream. Sure, it's a toy now, but being able to render near photo-realistic worlds in real time in response to user inputs...that has potential. As the internet and technology grow and find new ways to solve problems, I think we may see some very neat applications for 3d technology.

    One final note: I just helped my girlfriend order a new Dell computer. It's not bottom-of-the-barrel cheap, but it was only $1100. It included an Aureal 3D chipset sound card. Sound isn't a commodity. Common, "good enough" sound is a commodity.
  • To the best of my knowledge, the GeForce meets the original published specs because they didn't publish any specs until the chipset was actually available. Nvidia learned something from the whole TNT thing.

    But one thing to note: unlike S3 and 3dfx, Nvidia doesn't actually make cards. They just sell the chips to OEMs, and the OEMs set the final clock speeds, memory used, etc.

  • It took 3dfx long enough to realize that they needed to redo their 3D core, rather than keep kludging new features and marketing hype onto the same old tired Voodoo1 that, until now, all their chipsets have been based on. Finally they support real 32-bit rendering, 24-bit z-buffering, and stencils, which many programmers have long been clamoring for. Now maybe the 3dfx-induced standstill in game rendering technology can finally come to an end. 3D cards have had stencils, which are useful for several things (realtime, dynamic shadows being just one of them), for a couple of years now, but nobody has yet released a game using them; I have a feeling this is due largely to the fact that 3dfx owners would get hostile when their holiest card was no longer good enough.

    Finally getting rid of the 256x256 texture resolution limit is a Good Thing as well. Even Quake2 uses textures larger than that, and on the Voodoo[1-3] chips it just looked blurry and crappy because of it.

    That said, I wonder what sort of marketing spin 3dfx's wonderful PR people will put on this decision, when for the longest time they were constantly saying how worthless 32bpp rendering and large textures and the like were. I also wonder if these chips will have true accumulation buffers (the press release didn't say anything about this) or their bastardized, crippled "T-buffer" crap. I also wish they'd stop extending Glide (for a number of reasons) and keep Glide 3 only for backwards compatibility, especially since Glide can be relatively trivially implemented in terms of OpenGL, and adding more features to Glide to try to make it catch up will just cause more cumbersomeness and an even greater rift between their Windows and Linux support. (I feel even sorrier for Darryl Strauss if he's got to do even more for-free work on extending Glide for a relatively thankless company.)

    On the whole, though, 3dfx has a chance to actually redeem themselves with this new card. I hope they don't blow it; I'm all for giving them another chance. I just hope that they decide to actually have a good product instead of good marketing. For the longest time they seem to have just been resting on their laurels from having been the first usable (and not even decent) 3D card on the market. Maybe now that can finally change.
    ---
    "'Is not a quine' is not a quine" is a quine.

  • Sure, high fill rates are a good thing to have and all, but I am not sure I like how they are going about getting it. My problem with the card(s) (ignoring the lack of geometry support) is that they do it by means of multiple processors using SLI. If I remember correctly, SLI has some visual drawbacks which make it look like less than it should (though I may have heard wrong). Having fullscreen AA might clear that up, but it isn't that they have a gigapixel chip on their cards; it's that they have several less powerful ones. I don't know. It just seems to underwhelm me (but dual Athlons are okay, because single Athlons are just plain wicked!)

    Ibag (I'll Buy A Geforce...with linux support, I do believe)

  • The GeForce supports OpenGL and Direct3D, but not Glide, which happens to be the Voodoo's native API. The Voodoo supports all three, at varying qualities. So, if a game is released *only* under Glide, you won't be able to play it on a GeForce; otherwise, you're fine. Now, two caveats:

    (1) Very, very few games are Glide-only. Don't worry about it. The sole recent exception I know of was the initial Unreal Tourney demo.

    (2) Creative Labs ships all of their cards with a Unified Driver that allows you to use Glide with a TNT/TNT2/GeForce. It works quite well, from what I've heard (again, on the UT Demo).

    As for the others, Savage seems to have a proprietary API of some sort. Then again, no one buys Savage ;-)
  • yeah... for real... if anything it should have been moderated interesting
  • I think that the newer game consoles have this already, but the next great feat of accelerating games on a computer is a chip designed specifically to do the physics for you. Then we'd obviously need a standard world-description API, and then there'd be one more thing to fight over.

    I also think that a very cool thing would be to have a Dreamcast or PlayStation *card* that you insert into your computer. It could use the CD/DVD drive to read its ROMs, and then you could play some really cool-looking game while recompiling something, with a minimal drop in speed for each of them.
  • In the words of the immortal Buzz Lightyear:

    "You poor, sad little man".

  • I understand what you are trying to say (I think), but the explosion in growth in the low-end is because the prices are falling. This makes PC's more accessible to people who aren't willing to spend $$$ on a computer. In other words, different consumers are buying the cheap PC's, not the older PC consumers who have been buying every couple years. And definitely not the hardcore gamer.

    Exactly! Exactly! Yes, all the action is at the low end because prices are falling. Why do you think prices are falling? It's because demand for the Latest and Greatest is extremely soft. Nobody needs a 1GHz Athlon to do anything except play the most demanding games -- and the audience for the most demanding games is tiny. For every hardcore member of a Quake clan, there are ten people for whom "computer gaming" means firing up Deer Hunter for an hour a week. What's more, it's the Deer Hunter audience that is the one that's growing by leaps and bounds -- and you can bet that that audience isn't going to shell out $300 every year to get the Latest and Greatest 3D card. Hardcore gamers will? Who cares? "Hardcore gamers" are already an eensy tiny segment of the market.

    I also believe that this enthusiast 3d market has a chance to expand and grow over time into more main stream. Sure, it's a toy now, but being able to render near photo-realistic worlds in real time in response to user inputs...that has potential. As the internet and technology grow and find new ways to solve problems, I think we may see some very neat applications for 3d technology.

    OK, so we may see some neat 3D applications in the future that require massive polygon-pushing. Can anyone honestly say that we will see consumer versions of these applications within the next 3 years? If not, why should anyone pony up for the beefiest 3D card? Why not wait until these vaporous applications materialize and buy then, when the cards will be even beefier -- and cheaper to boot?

    One final note, I just helped my girlfriend order a new Dell computer. It's not bottom of the barrel cheap, but it was only $1100. It included an Aureal 3d chipset sound card. Sound isn't a commodity. Common, "good enough" sound is a commodity.

    OK, so you are too demanding for an el cheapo sound card. That's great. Heck, I'm too demanding for an el cheapo sound card when it's my own money on the line. But don't confuse techno-literate Slashdot readers like you and me with the broad buying public. "Common, good enough sound" is plenty good for the Deer Hunter crowd, and common, good enough 3D will be good enough for them too.


    -- Jason A. Lefkowitz

  • Whatever happened to the days when I was a kid and dreamt of the faraway year 2000, where I could watch TV in real 3D made up of lasers and holograms? Here we are, a month and a half away, and all I have to hope for is a Voodoo5! (sigh)

    Then again those dreams may have come from my parents putting too much acid in my Froot Loops!

  • 3dfx fails to deliver in so many other interesting ways, though. IIRC, the Voodoo4 was supposed to be released last month, then this month, then 'really early next year', and so on.

    No hype? What about the T-Buffer? If you read over some of the interviews with 3dfx's marketing people, you'd think it was the greatest innovation since VGA graphics. (And the Voodoo4 is the second coming, just so you know....) They may not be official hype-mongers, but 'unofficially' they're just as bad (note: I said just as bad; I'm not defending nVidia, I'm attacking 3dfx).

    Not to mention dumb. (No, I don't give a crap about fill-rate, I want *quality*. 40 fps is fine, now give me 32bit color, dammit!)

    *coughcough* That was fun. Anyway, at this point I'm a dedicated nVidia owner, mostly because of my beloved Hercules TNT2. I would have gone with Matrox, but they were just half a generation down from what I wanted; the G600 (or whatever) will almost certainly be my next card, if it can do what I expect.
  • by Haven (34895)
    That is an unfair moderation. That is an intelligent statement regarding what the poster wants. Who do you submit unfair moderations to?
  • I mean, great: gigatexels per second. As much RAM as I currently have on my mainboard. Meaning what? I can now play Quake3 at 4,000*3,000 resolution? Yay. Yes, I know about anti-aliasing, but this is overkill even for that if not running very high resolutions (1024*768 and above).

    Actually, if you had been feverishly keeping up on the hype for the last few months as I have, some of the features of this involve an accumulation buffer which will be used for doing motion-blur/depth-of-focus blur. The way these will be implemented is by doing multiple renders per frame, and this sure as hell uses a very high fill rate.

    Furthermore, the number of pixels rendered per frame will often be significantly higher than the number of pixels on the screen, as some polys are obscured by others. Even if you fully z-sort your polys you can't be guaranteed to be free of having to draw over an old poly (which is why we have and need z-buffers).
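    Both points compound: a rough sketch of how sub-frame renders and overdraw multiply the pixels actually written per displayed frame (the subframe and overdraw counts below are illustrative guesses, not measured figures):

    ```python
    # Pixels actually written per second = resolution x fps x renders x overdraw.
    def pixels_per_second(width, height, fps, subframes=1, overdraw=1.0):
        return width * height * fps * subframes * overdraw

    plain = pixels_per_second(1024, 768, 60)
    blurred = pixels_per_second(1024, 768, 60, subframes=4, overdraw=2.5)
    print(f"{plain / 1e6:.0f} Mpixels/s")    # 47 Mpixels/s
    print(f"{blurred / 1e9:.2f} Gpixels/s")  # 0.47 Gpixels/s -- 10x the naive figure
    ```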
  • No, it is not Good Enough(tm). When I can't tell the difference between looking out a window and looking at a monitor, then it will be Good Enough. And I don't think that the Sound Blaster 16 is Good Enough either. I used to have a very bah-humbug attitude towards the bells and whistles, but with my last machine I splurged and got an SBLive with a Dolby digital speaker system. When a thunderbolt cracks through the speakers, people turn and look outside for lightning. THAT is Good Enough, because it is almost indistinguishable from reality. I have found the 3D effects of Environmental Audio to be incredibly immersive. The more the game resembles reality, the easier it is to suspend your disbelief and really get into the game.

    While 3D graphics accelerators are very impressive these days, they are not Good Enough. When they can do realtime radiosity and raytracing (or something of similar quality) at a resolution that is below the physical limits of the human eye, that will be Good Enough.

    Even if you don't have the uncontrollable urge to play the latest and greatest game, you have to admit that the demand for The Next Big Thing has fueled a lot of innovation in the last few years. A number of the big players involved (Matrox, Nvidia, Creative, etc...) have released specs to their hardware and started open source driver projects. This lends industrial credibility to the Open Source model in general, as well as helping to make Linux a viable platform for gaming and more serious pursuits like graphical design. And that's a Good Thing(tm).

  • I bought a Voodoo3 3000 because it supports 3dfx's MiniOpenGL, Direct3D, and Glide. It also worked great under X (which the TNT2 didn't, at the time I bought it).
  • Just a note: 16-bit audio is not Good Enough...

    The human ear has a dynamic range of about 120dB, while 16-bit audio is theoretically limited to 96dB, and in reality the best consumer sound cards have a dynamic range of about 80dB due to noisy D/A converters...

    I didn't say the parent post claims that current sound cards are good enough for absolute sound realism; it's just some of the replies that do.
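    The 96dB figure is the standard quantisation-noise bound of roughly 6dB per bit; a quick check (the formula is textbook signal theory, not from the post):

    ```python
    import math

    # Theoretical dynamic range of n-bit PCM: 20*log10(2**n), ~6.02 dB per bit.
    def dynamic_range_db(bits):
        return 20 * math.log10(2 ** bits)

    print(round(dynamic_range_db(16), 1))  # 96.3 -- the 96dB quoted above
    print(round(dynamic_range_db(24), 1))  # 144.5 -- 24-bit exceeds the ear's ~120dB
    ```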
  • At a threshold of 3 I saw a lot of "so what" posts. Here's my response.

    Naturally, I'm all for a "let's just see when it comes out" attitude, but to answer the "oh great, even more texels" argument, remember that a massive fill rate means your geometry engine can have more overdraw without hurting performance. (Overdraw is where you "draw" multiple pixels into the frame buffer at the same place, and the one with the lowest "z", or distance from the observer, is the one that actually shows up.) If you had an infinite fill rate, you could draw the entire world as fast as your geometry setup would give you vertices. Up until now, engine designers have had to use tricks like BSP trees (a la DOOM) and portals (Descent) to get overdraw as close to 0 as possible. With a high enough fill rate, you can get sloppy with your hidden surface removal and focus on other things. Of course, this is an over-simplification, but the point remains that more texels/s is not a bad thing.

    Also, CPUs are still getting faster and cheaper. It will not be unusual to see dual-processor machines in homes next year. With the Athlon using Digital's bus technology, quad-processor machines could become Christmas presents in 2000.

    To answer the "what do I need 2G of textures for" question, think computed textures and textures with more information than just colors. If a texture has depth (bump mapping) or material information (alpha channel, refraction), it adds up. Quake 3 uses 32-bit textures: 8 bits each for red, green, blue and alpha (transparency). Now let's imagine what we could do with another 32 bits: 8 bits of depth, 8 bits of reflection (I forget what this is called), and 16 bits for whatever effects would look good if they varied over the face of a polygon. Also, animated textures will quickly use up texture memory.

    Yes, what we have now is pretty cool. Yes, 3DFX is being unfriendly to open standards. Yes, other cards may be a better bet. No, this is not the end-all-be-all of real-time scene rendering. Personally, I can't wait to get my hands on almost any of the cards that's going to be coming out next year.

    Disclaimer:
    I do not work for or even know anyone who works for 3DFX. I know two people who work for Creative Labs, and they hate 3DFX.
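    The texture-memory arithmetic above adds up quickly; a rough sketch, where the 64-bit texel layout (extra depth/reflection channels) is the poster's hypothetical, not a shipping format:

    ```python
    # Memory footprint of a texture: width x height x bytes/texel x frames.
    def texture_mb(width, height, bits_per_texel, frames=1):
        return width * height * (bits_per_texel // 8) * frames / 2**20

    print(texture_mb(256, 256, 32))        # 0.25 -- a classic 256x256 RGBA texture
    print(texture_mb(1024, 1024, 64))      # 8.0 -- large texture, doubled channels
    print(texture_mb(1024, 1024, 64, 16))  # 128.0 -- the same, animated over 16 frames
    ```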
  • Don't think 3200x2400x32x60: think 1024x768x32x60...
    ...with five specular highlights from five dynamic light sources, flickering of torchlight on faces and more sharply off metal surfaces, and every barrel or crate or object slightly different from having each one overlay about three slightly different 'dent' or 'dirt' layers.
    I take it you don't read Cinefex ;) if you did, you'd know that this is _precisely_ what ILM did to make the battle droids photorealistic: they were all identical models, but you had the texture map for the robot, and then five different overlay textures putting different patterns of dirt and wear onto the droids, which were applied in combinations, of course.
    'Cinematic' means impressive. It means multitexturing that would _choke_ a GeForce (or indeed a Voodoo3, but that's a given). It means the modellers will still be caring about polys, but the _skinners_ can go HOG WILD. Surfacing is not merely choosing a really big texture map; talk to rendering people: overlaying translucencies and transparent textures is when you start getting really startlingly impressive effects. This throws the door _wide_ open for really amazing stuff. Polys aren't everything (it should be OK on polys anyhow, but polys aren't everything).
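    The layering idea is ordinary alpha compositing applied several times per surface; a minimal single-channel sketch (the shade and opacity values are invented for illustration):

    ```python
    # Standard "over" operator for one colour channel; alpha in [0, 1].
    def alpha_blend(base, layer, alpha):
        return layer * alpha + base * (1 - alpha)

    base = 200.0  # bright base texel (e.g. clean metal)
    dirt_layers = [(60.0, 0.3), (30.0, 0.2), (90.0, 0.1)]  # (shade, opacity) overlays

    texel = base
    for shade, alpha in dirt_layers:
        texel = alpha_blend(texel, shade, alpha)
    print(round(texel, 2))  # 128.16 -- each overlay darkens and varies the result
    ```

    Applying different subsets or orderings of the overlays to identical models is what makes each instance look unique without any new geometry.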
  • I'd agree with this - and take it a little farther. The market will hit that "leveling"-off curve when your eyes can't tell the difference between the display & reality (including the "3d-ness" of the images). This will include the processing required to make a 3d-world for each eye, and the display technology.
  • *g*
    Seriously. It's coming out in PCI, and I want that because I'm hanging onto my nice old PowerMac for a while. I know _exactly_ what to do with all that texture bandwidth: multitexturing, babeee! *g* Forget polys; you'll just end up with really boringly textured, well-sculpted shapes. Geometry is NOT the weak link. I've appreciated the 3dfx strong points even through the drawbacks of 16/22-bit color: I've seen its transparencies, shading, and tonal values look better at 16 bit than the competition at 32 (not always, but in a number of cases, and always because the 32-bit card drew washed-out tonal values). Now that 3dfx is ready to ship a card with antialiasing that works with all my existing games, and with so much texture memory and fill rate that you could use it for fscking _filmmaking_ without it breathing hard *hehe*, well, I'm there. Build it, I'll buy it. My Voodoo2 needs replacing, and I've never been more pleased that I didn't start planning to get a GeForce or something.
    The output of this card _will_ look better than the GeForce's, by an order of magnitude. That's a prediction. It also assumes a lot of multitexturing, but hey: if it's good enough for ILM, it's good enough for _you_ ;)
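    For what it's worth, the multitexturing claim is easy to sanity-check against the fill-rate range quoted in the article (333 megatexels/s up to 1.47 gigatexels/s). The layer count and overdraw figure below are assumptions for illustration, not 3dfx numbers:

```python
# Back-of-the-envelope texel throughput for heavy multitexturing.
# Layer count and overdraw are assumed, not vendor figures.

width, height, hz = 1024, 768, 60
layers = 5        # e.g. base + lightmap + three detail/dirt passes
overdraw = 2.0    # assumed average scene overdraw

texels_per_sec = width * height * hz * layers * overdraw
print(f"{texels_per_sec / 1e6:.0f} Mtexels/s needed")
```

    Under these assumptions you need roughly 472 Mtexels/s, which already exceeds the low end of the quoted single-chip range; that headroom is exactly what the multi-chip V5 boards are selling.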
  • Microsoft's Direct3D has been tracking the latest developments in cards: D3D 7.0 will have direct support for lightmaps and stencil buffers, for example.

    Where is OpenGL headed? Is anyone furthering its development? Is it going to track the features it needs to stay competitive as a games API?

    Otherwise we're stuck with Glide and D3D. Talk about a Hobson's choice.
  • 97% of all statistics are made up on the spot.

    -
    We cannot reason ourselves out of our basic irrationality. All we can do is learn the art of being irrational in a reasonable way.

  • Where I'm coming from on that:

    * I care about today
    * Most of us have good but not amazing CPUs
    (overclocked 366 celeries)
    * The only game worth playing is Quake3
    * At normal res, Q3 is geometry-limited with the above hardware
    * Quake3 already supports T&L.


    -
    We cannot reason ourselves out of our basic irrationality. All we can do is learn the art of being irrational in a reasonable way.

