

3dfx Unveils Info Regarding Voodoo 4 & 5
A reader wrote to us about the latest press release from 3dfx regarding the Voodoo 4 and 5.
The V4 and V5 will apparently be released in March of 2000. The V4 will be
single processor, but the V5 will have both a commercial and professional
version, respectively supporting up to 4 and up to 32 VSA-100 processors, and up to 128MB and 2GB of RAM
each. The release for the V4 and V5 is rolled in with the VSA-100 talks - definitely worth checking out.
Oh baby! (Score:1)
This has got to be the greatest bit of kit I've seen in a long time!
Powerful? Without question!
Linux support? I sincerely hope so!
we'll see...oh and NVIDIA rules (Score:1)
Why is everyone so excited? (Score:3)
I can't figure out why everyone is so happy about 3dfx putting out another Voodoo chip. They're pushing a proprietary interface (Glide) where a perfectly good standard (OpenGL) already exists. They're using market pressure to get game manufacturers to adopt their standard, and filing lawsuits against developers who try to write Glide wrappers so that Glide-only games can be played on other video cards.
Doesn't this sound a bit like another company that everyone is up in arms about?
- Drew
The usual... (Score:3)
Remember how the cheapo motherboards used to be able to allocate some of the system RAM for video RAM? It would be pretty funny if these cards could do the opposite.
--
SMP video (Score:1)
Looks like we'll have to all compile our kernels for SMP machines now.
--
umh...crap? (Score:3)
jaw drops (Score:1)
Re:The usual... (Score:1)
serious question (Score:1)
It wouldn't have been a problem in the days of the Voodoo2 SLI setup, since any player with one typically got frame rates twice as fast or faster than pretty much every other 3D accelerator out there, so a 2-frame lag would take the same or less wall-clock time than a single-frame delay on anything else. However, with the ungodly frame rates offered by a single GeForce 256 with Double Data Rate RAM, if there were a two-frame delay for someone with a Voodoo5 5500, in a LAN game of Quake III the Voodoo5 user would be toast.
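To put some rough numbers on that (the frame rates below are my own illustrative guesses, and the 2-frame figure is the rumored SLI lag, not anything 3dfx has confirmed):

    /* back-of-the-envelope: wall-clock cost of an N-frame lag at a given frame rate */
    #include <stdio.h>

    static double lag_ms(int frames, double fps) { return frames * 1000.0 / fps; }

    int main(void) {
        printf("Voodoo2 SLI, 2-frame lag at 60 fps:  %.0f ms\n", lag_ms(2, 60.0));  /* ~33 ms */
        printf("other card,  1-frame lag at 30 fps:  %.0f ms\n", lag_ms(1, 30.0));  /* ~33 ms */
        printf("GeForce DDR, 1-frame lag at 90 fps:  %.0f ms\n", lag_ms(1, 90.0));  /* ~11 ms */
        return 0;
    }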
What? (Score:1)
hrm... (Score:4)
Okay, not only am I defending Linux on this one, I am also wondering where the Mac drivers are. If 3dfx wanted some incredible benchmarks, they should write a Mac driver and throw one of these into a G4. They say the Voodoo 5s aren't only for gamers, so why not port the drivers to the most popular graphics design platform?
Why does anyone care? (Score:5)
I fail to understand why this stuff excites people. I've always thought that the market for add-on 3D graphics cards was going to develop a lot like the market for add-on sound cards did, and so far I'm seeing nothing that indicates otherwise.
What I mean is -- consider for a moment how the market for add-on sound cards developed. Up to 1992, sound on the x86 PC was basically nonexistent, unless you owned a flaky almost-compatible like the Tandy 1000. Then the multimedia tidal wave hit and suddenly there was consumer demand for hardware sound support -- and a market sprang up to fill the demand.
Once the demand for sound cards sprang up, the market developed through three distinct stages over the next five or so years: first, a scramble of competing, incompatible cards and APIs; second, consolidation around a single winner (Creative, milking the Sound Blaster brand for all it was worth); and third, commoditization, with Good Enough audio integrated right onto the motherboard.
So this is where we are today in sound cards -- while a few enthusiasts care about buying the latest Sound Blaster Live! or whatever, the vast majority of users are happy with the 16-bit audio that's hardwired into their motherboards. It's Good Enough!
And that's what's going to happen in the 3D card marketplace, IMHO, fairly soon. We've already passed through stage 1 (I remember agonizing over whether to buy a Voodoo1 or a Rendition Verite card) and stage 2 (with 3Dfx milking their brand name for all it's worth through the Voodoo3). But now Good Enough 3D hardware is starting to come integrated on motherboards, and 3Dfx's Voodoo-only APIs have been almost entirely forsaken in favor of Direct3D, which is integrated into the OS. I've run 3D games on cheapo PCs using this integrated hardware, and while the performance isn't great, it's Good Enough -- while the add-on card companies fight over which card can provide 80 fps in Q3Test, or other "features" which would be lost on the average consumer anyway. So watch for it -- in a year I'd be amazed if there's still a market for whizbang add-on cards. Most people will be just fine with the Voodoo2-level hardware they'll get free with their PC.
-- Jason A. Lefkowitz
Divided (Score:3)
Upon seeing the specs for that baby, part of me just screams I want it, but the other, more rational part of me wonders what the point really is.
I mean, great: gigatexels per second. As much RAM as I currently have on my mainboard. Meaning what? I can now play Quake3 at 4,000*3,000 resolution? Yay. Yes, I know about anti-aliasing, but this is overkill even for that if you're not running very high resolutions (1024*768 and above).
Read my lips: 90% of all speed problems with games on current hardware are the geometry setup bogging down the processor. Unless you play at the above-mentioned resolutions, or happen to have dual Athlon 700s and are already playing at 100 fps (and if I am right in assuming that this card does not have a geometry chip like the GeForce), this card will be exactly 0% faster for you.
In my opinion Nvidia has taken a much wiser approach to the whole 3D acceleration problem, concentrating on the weakest point instead of just pouring in endless amounts of pixel fillrate that the processor can't feed anyway unless you are staring at a blank wall.
-
We cannot reason ourselves out of our basic irrationality. All we can do is learn the art of being irrational in a reasonable way.
Re:Why is everyone so excited? (Score:1)
Glide significantly outperforms OpenGL (OpenGL is just too complicated, but I thought everyone knew this).
As for the licensing.. I know
roll out crystal space..
Re:Divided (Score:1)
Details... and analysis (Score:3)
The really interesting thing is that *once again* 3dfx promised us more than it will deliver. On the low end (Voodoo4 4500) these babies are getting smoked by the GeForce 256, which will be half a year older! The GeForce can do 480 megapixels per second, about 1.3 times as fast as a Voodoo4 (which clocks in at 367 megapixels per second).
If the past is any indication, it will take at least a few more months beyond that for the Voodoo 5 to be released (ignore what 3dfx says), and by that time Nvidia will probably already have a better card.
In summary, the Voodoo 4 is slower and less feature-rich than the GeForce 256, plus it won't be out for 4 more months. It could take longer for the Voodoo 5, which will probably be an anachronism before it is released.
Come on 3dfx! This is *not* the technology that will keep us ahead of the PSX2!!!
Memory technology? (Score:1)
If Linux drivers come out, I'll probably go for the Voodoo 5 6000 quad beasty.. That should hold out for a while..
Anyone find anything on the memory technology yet?
This card would have to have some massive memory bandwidth to keep up with those fillrates..
I know even the SDRAM GeForce is memory bandwidth limited at a much lower fillrate.
It appears from the specs that each graphics chip has its own dedicated 32 megs (with up to 64 being addressable by each chip), so that is one memory trick I'm sure they are using.. Any other details?
(Anyone notice that 3Dfx, after long saying "32 bits doesn't make much difference" because they didn't have the technology, is now pushing it?)
Re:Why does anyone care? (Score:1)
Re:Hehe...this guy's funny (Score:2)
> It's nice to see 3dfx sticking to the Glide standard rather than some proprietary OpenGL nonsense that doesn't port anywhere.
Glide is a proprietary interface owned and VIOLENTLY copyrighted by 3Dfx. They sue the ass off of anybody who tries to figure out how it works, make a wrapper, etc. Search the Slashdot archives for details. OpenGL, on the other hand, is an open, complete graphics API that is FULLY portable to basically ALL platforms available.
> Every tried to run a TNT in anything except Windows? The 3D works like shit.
Oh, then I guess the fact that I was just playing Quake 3 was just my imagination, because it looked great at 1024x768x32 bit color. I could have SWORN I got a TNT2.
Try updating your drivers.
> Sure, Glide might be slower in Linux/FreeBSD than Windows, but it is a simple, fast, and more efficient protocol than the crap that nvidia is turning out anyday.
nVidia didn't 'turn out' OpenGL, SGI did. And Glide actually works FASTER in Linux than in Windows. Even when you're making concessions, you're blatantly wrong. Dolt.
> nvidia users, you own a very nice 32MB card that can do wonders on a 2D desktop...look at X go!
The only part of your post with a shred of accuracy. The 2D on this card is almost as impressive as the 3D.
Re: The answer (Score:1)
I am... (Score:3)
Re:Why does anyone care? (Score:2)
Now.....ugh. Your argument is the equivalent of saying that people will be happy with their PIII 550s next year and won't ever need to buy an Athlon 1 GHz. This...makes no sense at all. Video cards still have a LONG LONG way to go before they reach the maximum levels of performance they can achieve with monitors. Until we have a graphics card that processes information on a pixel-by-pixel basis with 64 million colors and real-world geometry at over 1600x1200 resolution at 60 FPS, we won't be anywhere near graphics cards leveling off.
Sound cards have already pushed speakers to their limits. Video cards have a long, long way to go.
Re:Why is everyone so excited? (Score:2)
Current Generation Good Enough? (Score:1)
For most people, "Good Enough" hardware is here right now. A huge majority of gamers should be happy with the current generation of 3D cards; the improvements between generations are getting smaller and smaller. Also, the limiting factor in multiplayer is not the 3D card but your modem.
Maybe it's just me, but pretty pictures are impressive for the first hour. After that I want gameplay. It's the software that is more important to me.
I still have to see how Q3Demo plays on my machine at home.
3dfx picks the "right" features (Score:1)
What makes geometry better than fillrate?
Re:Question. (Score:1)
Right now it's just a little triangle setup and lighting; there is much more that can be done in hardware..
And we need faster T&L, faster memory interfaces, more fillrate, etc...
Multi-processor based video cards? (Score:1)
Re: The answer (Score:2)
Re:Why does anyone care? (Score:1)
Re: The answer (Score:1)
Re:serious question (Score:1)
Sound is different to 3D graphics (Score:1)
re: 'heart of engineering' (Score:2)
>because they choose the best features to include
This is usually done by the marketing department, and often to the chagrin of the engineers, who would like to think they know better...
Seth
Nothing new. (Score:2)
Re:Memory technology? (Score:1)
I assume this is extrapolated from the 32-processor version, each chip with its own memory interface..
That means memory bandwidth per processor is about 3 GB a second.
Methinks this will be a bit memory limited in games that don't use its texture compression...
However, if games start supporting 3dfx's texture compression, they should fly...
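For what it's worth, here's the back-of-the-envelope way to land near 3 GB a second per chip (the bus width and memory clock below are my guesses, not numbers from the press release):

    /* peak SDRAM bandwidth = bus width (bytes) * memory clock; guessed figures below */
    #include <stdio.h>

    int main(void) {
        double bus_bits  = 128.0;  /* assumed per-chip memory bus width */
        double clock_mhz = 183.0;  /* assumed per-chip memory clock     */
        double gb_per_s  = (bus_bits / 8.0) * clock_mhz * 1e6 / 1e9;
        printf("peak per-chip bandwidth: %.1f GB/s\n", gb_per_s);  /* ~2.9 GB/s */
        return 0;
    }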
And the cycle continues... (Score:1)
Sometimes 3dfx will be on top, sometimes nVidia will be reigning champion (the NV15 is in the works, I hear...)
The Savage2000 and ATI Maxx are almost out...it really doesn't matter. These cards are way more powerful than anyone really needs, or will need, for a while. Until software comes out that really needs a couple hundred megatexels a second, I don't really care who's on top of the hill (and by that time, there will be even more powerful cards coming out)...
Re: these cards come in PCI (Score:1)
Seth
Yeeeow! (Score:1)
Come on. Do we -really- need to be able to frag at 3200x2400x32bpp at 60 frames per second? Well, ok, I guess piping this to a 57" big screen TV would be nice.
But seriously, isn't this approaching overkill? I find Quake II to be quite fun on a Voodoo3 3000 AGP. Granted, my shitty 14" monitor is the limit, and is why I'm only running at 8x6. But alas.
finally (Score:2)
Re:Question. (Score:1)
Re:Divided (Score:1)
Re:PCI Version Already Supports Macs (Score:2)
Each are gambles (Score:2)
No one knows what game companies are going to try next.
There is no way of telling whether hardware transformation and lighting is going to make any difference at all in future games. Sure, nVidia is going to tell you that future games will depend on it! There is no way of knowing that 5 gazillion texels/sec is going to really make much of a difference to future games, although 3DFX doubtless wants you to think that. No one knows whether game companies are going to stuff their games with bump-mapped polygons, no matter how much Matrox tells you it's the truth.
Point is, each of these hardware developers is gambling that game companies will favor its technology.
As for us consumers, I would take a "wait and see" approach. I'd never go out and buy the latest and greatest until I see what games run well on them and what games do not run well on them. Specs from pre-released hardware are meaningless, and even released hardware that runs a FEW games spectacularly is nothing to base a purchase on.
Look for the architecture that stands the test of time, and has support for the platform and games you play.
Re:Details... and analysis (Score:1)
I would say they are the same speed (before the GeForce gets new RAM), but that is disappointing considering that the 3dfx part also cannot do geometry.
I saw nothing about fast writes (Score:1)
Also, is the current PC133 standard fast enough (for workstations, not gamers) to send 2GB worth of textures?
I don't know. That's why I'm asking you guys...
Re:Multi-processor based video cards? (Score:1)
>indeed.
ViRGE, LOL! Please tell me that was intentionally funny (the ViRGE being a highly crap excuse for a 3D card).
Re:Yeeeow! (Score:1)
Re:Divided (Score:1)
What About When I'm Done Playing Games? (Score:2)
Why, when we all have a global network right in front of us that is ablaze with information and commerce, is no one strapping a hardware-accelerated 3D engine onto the net? When am I going to be able to navigate the web in 3D? When can I use my 3dfx or Nvidia board to do real work, or to shop, or to explore real information and news? Why is the web still 2D? Wake the fuck up. Screw VRML - I'm not even asking for any sort of server tech - just give me a fly-through 3D-abstraction of the HTML/XML content that is already there. If people know the engines are out there they will start to build for them.
Bottom line - I won't give 3dfx or anyone else more of my money to play another FPS. I will part with more money for 3D hardware when I can use a Voodoo 5 6000 to give me an Ono-Sendai Cyberspace VII-like window into the net.
Night
Do game developers want fill rate or T&L? (more) (Score:1)
The majority response was that if they had to choose, they'd pick a card with accelerated geometry processing and a mediocre fill rate over a card with an insane fill rate and no geometry acceleration.
What does that tell you about the direction the game developers want to go? They want to build games with higher-polygon engines/content. My guess is that's what we're gonna see.
Caveats (Score:2)
Re: the Voodoo4
The boards, which render two fully featured pixels per clock, will deliver between 333 and 367 megatexels/megapixels per second
Re: the Voodoo5 5000/5500
The board, which renders four fully featured pixels per clock, will deliver between 667 and 733 megatexels/megapixels per second fill rate
Re: the Voodoo 5 6000
The Voodoo5 6000 AGP, which renders eight fully featured pixels per clock, will deliver between 1.33 and 1.47 gigatexels/gigapixels per second fill rate
Now, if you'll notice they state how many "fully featured pixels per clock" each card delivers. Also, notice that the V4 does 2, the V5-5500 4, and the V5-6000 8. Along with that, as I guess one would expect, the V5-6000 has double the fillrate of the 5500 which has double the fillrate of the V4.
So? What's my point? Well, with the Voodoo2 -- which could render two pixels per clock -- the full fill rate was achieved only if the app was actually rendering two pixels per clock (i.e. multitexturing). If the app wasn't multitextured, the effective fillrate was actually only half the "marketing" fillrate. I think this was also the case with the Voodoo3, although I'm not positive.
I'm not saying that this is definitely the case with these cards, but:
Correct me if I'm wrong, but I think these cards are still based on the same architecture as the V1, V2, and V3.
3DFX is somewhat notorious for advertising the higher "marketing" fillrate as opposed to the true fillrate.
The fact that they qualify the fillrate of each card by stating the number of rendered pixels per clock kind of worries me.
If this is the case, apps that don't take full advantage of the high-end cards (i.e. have less than 8-pass multitexturing) may leave you with nothing more than a glorified and expensive Voodoo4.
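Here's the kind of math I'm worried about. The clock is just the quoted low-end fillrate divided by the two pipes, and whether the single-texture penalty really applies to the VSA-100 is exactly the open question:

    /* "marketing" fillrate vs. what a single-textured app would actually see */
    #include <stdio.h>

    int main(void) {
        double clock_mhz = 166.0;  /* implied Voodoo4 clock: ~333 M/s quoted / 2 pixels per clock */
        int    pipes     = 2;      /* "two fully featured pixels per clock"                       */
        double marketing       = clock_mhz * pipes;        /* the number on the box                 */
        double single_textured = clock_mhz * pipes / 2.0;  /* if only multitexturing gets full rate */
        printf("marketing fillrate:      %.0f M/s\n", marketing);
        printf("single-textured (worst): %.0f M/s\n", single_textured);
        return 0;
    }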
Re:Why is everyone so excited? (Score:1)
As for being happy about 3dfx putting out another competing 3D board...you better damn well believe I'm happy. I love competition...it makes BOTH of the leaders better in the long run. Think about how competition has improved those frame rates and image quality just over the past 3 years (that's even in spite of 3dfx's initial head start with Glide). Simply amazing really...
-- DW
Re:Why does anyone care? (Score:1)
It's much easier to reproduce a sound that sounds real than a 3D image that looks real, and it will be another 5-10 years before the video card market becomes totally commoditized.
Now, the low end is already that way, and will slowly continue in that fashion for some time.
-jm3
Re:STUPID IDIOT (Score:1)
Good. I saved $16. Why do I really care if it uses 10% of the CPU? My PC is idle most of the time anyway. Linux is single-user on my machine anyway.
Re:STUPID IDIOT (Score:1)
Re:Why does anyone care? (Score:2)
Re:It's fillrate, not HW T&L (Score:1)
Re:Each are gambles (Score:1)
Winmodems (Score:1)
I haven't ever used a Winmodem, so I don't know how hard they suck, if at all.
Molly.
Re:Divided (Score:1)
2GB/32 chips per high end card = 64 megs a chip, the max that the chips can address...
The reason that they have the consumer market top out at 4 chips is that there are diminishing returns with each chip..
2 should be nearly twice as fast, 4 less than four times as fast, 8 perhaps 6 times as fast, etc..
I don't believe with this architecture they can pull off very linear scaling.
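Spelling out the arithmetic (the scaling factors are just my guesses to illustrate the diminishing returns, not anything 3dfx has published):

    /* per-chip RAM on the 32-chip board, plus guessed SLI scaling efficiency */
    #include <stdio.h>

    int main(void) {
        int total_mb = 2048, chips = 32;
        printf("RAM per chip: %d MB\n", total_mb / chips);  /* 64 MB, the per-chip addressing limit */

        int    n[]       = {1, 2, 4, 8};
        double speedup[] = {1.0, 1.9, 3.5, 6.0};  /* illustrative, not measured */
        for (int i = 0; i < 4; i++)
            printf("%d chip(s) -> ~%.1fx\n", n[i], speedup[i]);
        return 0;
    }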
Re:The importance of PCI: dual-head kings of tomor (Score:3)
Matrox offers an AGP card with two outputs. It's called the G400. Unlike the V4 and V5, it's already released and available.
Anyway, I'm somewhat off topic. But I needed to correct this. The issue with PCI is bandwidth and texture swapping. If the card truly can have up to 2GB of RAM (yes, there are many simulation visualizations and mappings that can use this), you'll need more than the 533 MB/s provided by the PCI bus. Even full AGP 4x (w/ RDRAM) has 1.06GB/s. An approximately 2s delay is damn noticeable.
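Roughly where that ~2 second figure lands, if you just divide the texture set by the bus bandwidth:

    /* time to push a texture set over the bus = size / bandwidth */
    #include <stdio.h>

    int main(void) {
        double textures_gb = 2.0;
        printf("over PCI at 533 MB/s:    %.1f s\n", textures_gb * 1024.0 / 533.0);  /* ~3.8 s */
        printf("over AGP 4x at 1.06GB/s: %.1f s\n", textures_gb / 1.06);            /* ~1.9 s */
        return 0;
    }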
If they can put multiple graphics processors on one card, why can't they put multiple output ports on the same card?
Re: 'heart of engineering' (Score:1)
Re:Each are gambles (Score:1)
So I agree that these things are all gambles in a way...but I think they are all bound to be supported by everyone eventually anyway...its really just a matter of priorities.
Re:The usual... (Score:1)
Re:Why does anyone care? (Score:1)
But you're proving my point! Have you looked at the PC hardware marketplace recently? Most people are happy with P3 550's. In fact, most people are happy with K6-350s! All the action in the consumer marketplace is happening at the low end around cheap PCs, not at the high end around 1GHz Athlons. Nobody but a few enthusiasts (myself included) gives a hoot about 1GHz Athlons.
Why is this? Because for the vast majority of computing tasks, a K6-350 is... Good Enough (TM). The increase in utility for the average user from a CPU upgrade becomes vanishingly small -- certainly smaller than the difference in price. And when an upgrade's cost exceeds its perceived utility, people won't make the upgrade -- which leads to commoditization and integration.
Again, the issue here is not theoretical limits of graphics processing. You could make sound cards that would wring even more accuracy and fidelity out of PC speakers if you wanted to. The issue is increase in performance versus increase in cost. Most people think they won't see enough of a benefit from a 1GHz CPU to shell out a premium for it; most people think they won't see enough of a benefit from a 256-bit Sound Blaster to shell out a premium for it; and soon, most people won't see enough of a benefit from a Voodoo6 to shell out a premium for that, either -- regardless of how many more polygons you can squeeze out of it. Once the hardware gets Good Enough, most people stop caring if it will ever get any better.
-- Jason A. Lefkowitz
I hope the _best_ card wins (Score:1)
I truly hope that nVidia can gather enough support for the GeForce's geometry engine so that this battle is fought and won on which solution is technically better, not which company bullies the other better.
Re:Why does anyone care? (Score:1)
Anyway, we've already seen some moves to Stage 3, Apple tried it with ATI, but got screwed, I think, because of ATI's crappy drivers (heh, I'll take any chance I can get to bash ATI), plus, it seems like ATI would still rather build cards than sell chips to stick onto Apple mobo's, probably because Apple was such a niche market for them anyways.
Then Intel was also trying it, with their AGP nonsense, and their 3d graphics chipset that they tried to use to corner the markets so they could foist it onto mobo manufacturers as the "de facto" standard. This failed. I'm not sure why. Maybe it just wasn't "good enough" enough.
What will suck about this integration of "good enough" hardware, is the proprietary API that will be tied to it, probably DirectX, which will prevent game manufacturers from taking the sensible approach and using a cross platform 3d API like OpenGL, or MESA, which will probably mean that most games will continue to be written for Windows first, and Linux and Mac will continue to be the afterthought.
While Office 98 and the iMac have given Apple an impressive market-share recovery in the past few years, and all the press and business support have bolstered Linux, these platforms will not gain broad acceptance outside of their niche markets until the game-software availability problem is resolved. And for that to happen, there needs to exist a compelling cross-platform 3D API that the game industry actually uses.
For this, I'd say that Carmack is the industry leader, and a great example to follow. Not only does Id design GREAT games, but they do it in a manner which facilitates cross-platform adoption, which gives them access to markets that are otherwise not available. I hope that this is enough of a competitive advantage that it persuades other game developers to follow suit. (at least indications for Bungie are that they will be releasing Halo cross-platform, so that's an encouraging trend.)
I wish I had a nickel for every time someone said "Information wants to be free".
Re:Why does anyone care? (Score:1)
All most sound cards do is D/A conversion and *very* minimal processing.. sound cards are mostly analog devices with a D/A converter.
2D cards are basically the same sort of thing: a very small amount of windowing processing, a D/A conversion, and analog components.
3D cards, on the other hand, are very complex processors, in recent generations as complex as the general CPUs in our computers. Just as CPUs are getting pretty fast, but are not yet (nor ever will be, I suspect
However, you can also get less-than-cutting-edge graphics cards (as well as CPUs) and still have a decent system.
I for one am still using a Voodoo 2, and am only now, 2 years after the purchase, looking to upgrade. I also have my K6-2 300 from the same era running my gateway/server, though my main workstation has a Celeron 400 in it (commodity tech, bought darn cheap).
Re:Details... and analysis (Score:1)
S3 has done the same with their new chip as well, with lower mem and processor speeds than originally promised.
matt
Re:hrm... (Score:1)
Re:Linux IS THE 3D POWERHOUSE (Score:1)
Re: The answer (Score:2)
2GB/32chip reality check.... (Score:1)
Re:The importance of PCI: dual-head kings of tomor (Score:1)
agreed: 3dfx NEEDS geometry setup (Score:1)
Re:we'll see...oh and NVIDIA rules (Score:2)
And on the other hand, 3dfx has never failed to meet their specs. Even though their products have severely lacked in many departments (AGP texturing, 32-bit colour), at least they didn't play the Hype Game that nVidia did.
And I don't buy 3dfx products anymore. My first accelerator was an original Voodoo Graphics, which cost me close to $300 (CDN). Since then it was an i740 (hey, I was on a budget) and now a G400. And the G400 is the best card I've ever had the pleasure of using: it's fast, pretty (environmental bump mapping BABY!), and more open-source friendly (specs vs. an un-improvable open source driver). Granted, 3dfx is definitely the least open source friendly.
Anyway, that was my rant. I'm happy now.
Re:I saw nothing about fast writes (Score:1)
Each processor will have its own dedicated 64 megs of RAM in this system. If a texture is needed for more than one processor's section of the screen (basically guaranteed), then the texture must be in each of those processors' memory. I would assume because of this, max textures would actually be in the 128-256 meg range. If you have a lot of large textures, it could be as little as 64..
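Roughly how the duplication eats into capacity; the duplication factor is the unknown here, so these cases are only illustrative:

    /* unique texture capacity when textures get replicated into several chips' local RAM */
    #include <stdio.h>

    int main(void) {
        int chips = 4, per_chip_mb = 64;  /* a Voodoo5 6000-style setup */
        printf("physical texture RAM:                       %d MB\n", chips * per_chip_mb);      /* 256 MB */
        printf("worst case (all chips need every texture):  %d MB\n", per_chip_mb);              /*  64 MB */
        printf("if each texture lives on ~2 chips:          %d MB\n", chips * per_chip_mb / 2);  /* 128 MB */
        return 0;
    }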
I Care!!!! (Score:1)
I understand what you are trying to say (I think), but the explosion in growth in the low-end is because the prices are falling. This makes PC's more accessible to people who aren't willing to spend $$$ on a computer. In other words, different consumers are buying the cheap PC's, not the older PC consumers who have been buying every couple years. And definitely not the hardcore gamer.
I also believe that this enthusiast 3D market has a chance to expand and grow over time into something more mainstream. Sure, it's a toy now, but being able to render near photo-realistic worlds in real time in response to user inputs...that has potential. As the internet and technology grow and find new ways to solve problems, I think we may see some very neat applications for 3D technology.
One final note: I just helped my girlfriend order a new Dell computer. It's not bottom-of-the-barrel cheap, but it was only $1100. It included an Aureal 3D chipset sound card. Sound isn't a commodity. Common, "good enough" sound is a commodity.
GeForce meets original published specs (Score:1)
But one thing to note: Nvidia, unlike S3 and 3DFX, doesn't actually make cards. They just sell the chips to OEMs, and the OEMs set the final clock speeds, memory used, etc.
Finally their heads are out of their asses (Score:2)
Finally getting rid of the 256x256 texture resolution limit is a Good Thing as well. Even Quake2 uses textures larger than that, and on the Voodoo[1-3] chips it just looked blurry and crappy because of it.
That said, I wonder what sort of marketing spin 3dfx's wonderful PR people will put on this decision, when for the longest time they were constantly saying how worthless 32bpp rendering and large textures and the like were. I also wonder if these chips will have true accumulation buffers (the press release didn't say anything about this) or their bastardized, crippled "T-buffer" crap. I also wish they'd drop extending Glide (for a number of reasons) and only keep Glide 3 for backwards compatibility, especially since Glide can be relatively trivially implemented in terms of OpenGL, and adding more features to Glide to try to make it catch up will just cause more cumbersomeness and an even greater rift between their Windows and Linux support. (I feel even sorrier for Darryl Strauss if he's got to do even more for-free work on extending Glide for a relatively thankless company.)
On the whole, though, 3dfx has a chance to actually redeem themselves with this new card. I hope they don't blow it; I'm all for giving them another chance. I just hope that they decide to actually have a good product instead of good marketing. For the longest time they seem to have just been resting on their laurels from having been the first usable (and not even decent) 3D card on the market. Maybe now that can finally change.
---
"'Is not a quine' is not a quine" is a quine.
Not quite what I was expecting... (Score:1)
Ibag (I'll Buy A Geforce...with linux support, I do believe)
Re:Details... and analysis (Score:1)
(1) Very, very few games are Glide-only. Don't worry about it. The sole exception I know of recently was the initial Unreal Tournament demo.
(2) Creative Labs ships all of their cards with a Unified Driver that allows you to use Glide with a TNT/TNT2/GeForce. It works quite well, from what I've heard (again, on the UT Demo).
As for the others, Savage seems to have a proprietary API of some sort. Then again, no one buys Savage
Re:What kind of monkey moderated this as Informati (Score:2)
Physics processing (Score:1)
I also think that a very cool thing would be to have a Dreamcast or PlayStation *card* that you insert into your computer. It could use the CD/DVD drive to read its roms, and then you can play some really cool-looking game while recompiling something, with a minimal drop in speed for each of them.
Linux IS THE 3D HOBBYHORSE (Score:1)
"You poor, sad little man".
Re:I Care!!!! (Score:1)
Exactly! Exactly! Yes, all the action is at the low end because prices are falling. Why do you think prices are falling? It's because demand for the Latest and Greatest is extremely soft. Nobody needs a 1GHz Athlon to do anything except play the most demanding games -- and the audience for the most demanding games is tiny. For every hardcore member of a Quake clan, there are ten people for whom "computer gaming" means firing up Deer Hunter for an hour a week. What's more, it's the Deer Hunter audience that is the one that's growing by leaps and bounds -- and you can bet that that audience isn't going to shell out $300 every year to get the Latest and Greatest 3D card. Hardcore gamers will? Who cares? "Hardcore gamers" are already an eensy tiny segment of the market.
OK, so we may see some neat 3D applications in the future that require massive polygon-pushing. Can anyone honestly say that we will see consumer versions of these applications within the next 3 years? If not, why should anyone pony up for the beefiest 3D card? Why not wait until these vaporous applications materialize and buy then, when the cards will be even beefier -- and cheaper to boot?
OK, so you are too demanding for an el cheapo sound card. That's great. Heck, I'm too demanding for an el cheapo sound card when it's my own money on the line. But don't confuse techno-literate Slashdot readers like you and me with the broad buying public. "Common, good enough sound" is plenty good for the Deer Hunter crowd, and common, good enough 3D will be good enough for them too.
-- Jason A. Lefkowitz
Re:Why does anyone care? (Score:1)
Then again those dreams may have come from my parents putting too much acid in my Froot Loops!
Re:we'll see...oh and NVIDIA rules (Score:1)
No hype? What about the T-Buffer? If you read over some of the interviews with 3dfx's marketing people, you'd think it was the greatest innovation since VGA graphics. (And the Voodoo4 is the second coming, just so you know....) They may not be official hype-mongers, but 'unofficially' they're just as bad (note: I said just as bad; I'm not defending nVidia, I'm attacking 3dfx).
Not to mention dumb. (No, I don't give a crap about fill-rate, I want *quality*. 40 fps is fine, now give me 32bit color, dammit!)
*coughcough* That was fun. Anyway, at this point I'm a dedicated nVidia owner, mostly because of my beloved Hercules TNT2. I would have gone with Matrox, but they were just half a generation down from what I wanted; the G600 (or whatever) will almost certainly be my next card, if it can do what I expect.
Re:What? (Score:2)
Re:Divided (Score:2)
Actually, if you had been feverishly keeping up on the hype for the last few months as I have, some of the features of this involve an accumulation buffer which will be used for doing motion blur and depth-of-focus blur. The way these will be implemented is by doing multiple renders per frame, and this sure as hell uses a very high fill rate.
Furthermore, the number of pixels rendered per frame will often be significantly higher than the number of pixels on the screen, as some polys are obscured by others - even if you fully z-sort your polys you can't be guaranteed to be free of having to draw over an old poly (which is why we have and need z-buffers).
Re:Why does anyone care? (Score:1)
While 3D graphics accelerators are very impressive these days, they are not Good Enough. When they can do realtime radiosity and raytracing (or something of similar quality) at a resolution that is below the physical limits of the human eye, that will be Good Enough.
Even if you don't have the uncontrollable urge to play the latest and greatest game, you have to admit that the demand for The Next Big Thing has fueled a lot of innovation in the last few years. A number of the big players involved (Matrox, Nvidia, Creative, etc...) have released specs to their hardware and started open source driver projects. This lends industrial credibility to the Open Source model in general, as well as helping to make Linux a viable platform for gaming and more serious pursuits like graphical design. And that's a Good Thing(tm).
Re:Details... and analysis (Score:2)
Re:Why does anyone care? (Score:2)
The human ear is limited to a dynamic range of about 120dB, whereas 16-bit audio is theoretically limited to 96dB, and in reality the best consumer sound cards have a dynamic range of about 80dB due to noisy D/A converters...
I didn't say the parent post claims that current sound cards are good enough for absolute sound realism; it's just some of the replies that do..
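That 96dB figure is just the usual "about 6dB per bit" rule for linear PCM:

    /* theoretical dynamic range of N-bit linear PCM: 20*log10(2^N) ~= 6.02*N dB */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        for (int bits = 8; bits <= 24; bits += 8)
            printf("%2d-bit: %5.1f dB\n", bits, 20.0 * log10(pow(2.0, bits)));
        /* 16-bit comes out to ~96.3 dB; real cards land nearer 80 dB once converter noise is added */
        return 0;
    }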
Why I care (Score:2)
Naturally, I'm all for a "let's just see when it comes out" attitude, but to answer the "oh great, even more texels" argument, remember that a massive fill rate means your geometry engine can have more overdraw without hurting performance. (Overdraw is where you "draw" multiple pixels into the frame buffer at the same place, and the one with the lowest "z", or distance from the observer, is the one that actually shows up.) If you had an infinite fill rate, you could draw the entire world as fast as your geometry setup would give you vertexes. Up until now engine designers have had to use tricks like BSP trees (a la DOOM) and portals (Descent) to get overdraw as close to 0 as possible. With a high enough fill rate, you can get sloppy with your hidden surface removal and focus on other things. Of course, this is an over-simplification, but the point remains that more texels/s is not a bad thing.
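If it helps, here's a toy sketch of the depth test; the thing to notice is that a fragment costs fill rate whether or not it wins the compare, which is exactly why overdraw matters:

    /* toy z-buffer: every fragment drawn costs fill rate, even the ones that lose the depth test */
    #include <stdio.h>
    #define W 320
    #define H 240

    static float    zbuf[W * H];
    static unsigned color[W * H];

    static void draw_fragment(int x, int y, float z, unsigned rgba) {
        int i = y * W + x;
        if (z < zbuf[i]) {   /* closer to the observer than what is already there */
            zbuf[i]  = z;
            color[i] = rgba;
        }
        /* either way, the card just spent a cycle of fill rate on this fragment */
    }

    int main(void) {
        for (int i = 0; i < W * H; i++) zbuf[i] = 1.0f;  /* clear to the far plane    */
        draw_fragment(10, 10, 0.8f, 0xff0000ffu);        /* far poly drawn first      */
        draw_fragment(10, 10, 0.3f, 0x00ff00ffu);        /* nearer poly overdraws it  */
        printf("pixel (10,10) ends up 0x%08x\n", color[10 * W + 10]);
        return 0;
    }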
Also, CPUs are still getting faster and cheaper. It will not be unusual to see dual-processor machines in homes next year. With the Athlon using Digital's bus technology, quad-processor machines could become Christmas presents in 2000.
To answer the "what do I need 2G of textures for" question, think computed textures and textures that carry more information than just colors. If a texture has depth (bump mapping) or material information (alpha channel, refraction), it adds up. Quake 3 uses 32-bit textures: 8 bits each for red, green, blue and alpha (transparency). Now let's imagine what we could do with another 32 bits: 8 bits of depth, 8 bits of reflection (I forget what this is called), and 16 bits for whatever effects would look good if they varied over the face of a polygon. Also, animated textures will quickly use up texture memory.
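And the memory adds up fast once you go past plain RGBA; the texture size here is just an example:

    /* uncompressed texture memory = width * height * bytes per texel (mipmaps ignored) */
    #include <stdio.h>

    int main(void) {
        int w = 1024, h = 1024;
        int rgba_bits = 32, extended_bits = 64;  /* RGBA vs. RGBA plus depth/reflection/extra channels */
        printf("1024x1024 at 32 bits/texel: %d KB\n", w * h * (rgba_bits / 8) / 1024);      /* 4096 KB */
        printf("1024x1024 at 64 bits/texel: %d KB\n", w * h * (extended_bits / 8) / 1024);  /* 8192 KB */
        return 0;
    }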
Yes, what we have now is pretty cool. Yes, 3DFX is being unfriendly to open standards. Yes, other cards may be a better bet. No, this is not the end-all-be-all of real-time scene rendering. Personally, I can't wait to get my hands on almost any of the cards that's going to be coming out next year.
Disclaimer:
I do not work for or even know anyone who works for 3DFX. I know two people who work for Creative Labs, and they hate 3DFX.
Simple. (Score:2)
I take it you don't read Cinefex
'Cinematic' means impressive- means multitexturing that would _choke_ a GeForce (or indeed a Voodoo3, but that's a given). It means the modellers will still be caring about polys, but the _skinners_ can go HOG WILD. Surfacing is not merely choosing a really big texturemap- talk to rendering people- overlaying translucencies and transparent textures is when you start getting really startlingly impressive effects. This throws the door _wide_ open for really amazing stuff. Polys aren't everything (it should be OK on polys anyhow, but polys aren't everything).
Re:Why does anyone care? (Score:2)
MINE. *grab* (Score:2)
Seriously. It's coming out in PCI, I want that because I'm hanging onto my nice old powermac for a while. I know _exactly_ what to do with all that texture bandwidth- multitexturing babeee! *g* forget polys. You'll end up with really boringly textured well sculpted shapes- geometry is NOT the weak link. I've appreciated the 3dfx strong points even through the drawbacks of 16/22 bit color- I've seen the transparencies and shading and tonal values looking better at 16 bit than the competition at 32 (not always, but in a number of cases, and always due to the 32 bit card drawing washed out tonal values). Now that 3dfx is ready to do the card with antialiasing that works with all my existing games, and with so much texture memory and fill rate that you could use it for fscking _filmmaking_ without it breathing hard *hehe*, well, I'm there. Build it, I'll buy it. My voodoo2 needs replacing, and I've never been more pleased that I didn't start planning to try and get a GeForce or something.
The output of this card _will_ look better than GeForce, by an order of magnitude. That's a prediction. That's also assuming a lot of multitexturing, but hey- if it's good enough for ILM, it's good enough for _you_
General OGL question (Score:2)
Where is OpenGL headed? Is anyone furthering its development? Is it going to track the features it needs to stay competitive as a games API?
Otherwise we're stuck with Glide and D3D. Talk about a Hobson's choice.
Re:Divided (Score:2)
-
We cannot reason ourselves out of our basic irrationality. All we can do is learn the art of being irrational in a reasonable way.
Re:Each are gambles (Score:2)
Where I'm coming from on that:
* I care about today
* Most of us have good but not amazing CPU's
(overclocked 366 celeries)
* Only game worth playing is Quake3
* At normal res, Q3 is geometry limited with above hardware.
* Quake3 already supports T&L.
-
We cannot reason ourselves out of our basic irrationality. All we can do is learn the art of being irrational in a reasonable way.