Jonas writes,
"I spotted that ATI has announced their next generation intentions where the 3D industry is concerned with their "Charisma Engine" and "Pixel Tapestry" technologies at this year's GDC. There's also an interesting article discussing the technology involved on their next gen 3D part. "
Re:What happened to pixel volume rendering? (Score:1)
Re:ATI next nVidia? (Score:1)
I believe their glory days are numbered. Voodoo is just too overpriced and has no stunning performance advantage anymore to justify the cost.
Re:ATI next nVidia? (Score:1)
++ indicates increasing market share, -- indicates decreasing.
Re:More features for no one to support (Score:1)
For this purpose, one would need proper raytracing acceleration and I don't think 3D accelerators do raytracing stuff...
Anyway, that would be a nice feature on a videocard... hardware povray :-) (like povray compiled on a playstation 2 which came pretty high in the povray benchmark)
Oh well, that's just me mumbling about how great it would be to get cheapo 3D hardware into hospitals to do therapy planning...
---
time to rethink for intel/AMD/etcetc (Score:1)
This made me think that maybe the x86 arch needs to be made to play catch-up ASAP. What use is a 64MB, 200MHz 3D card that can pump a billion pixels and so on if the core architecture it runs on is still based on '70s/'80s tech?
bain
Re:Moore's Law on Graphic Cards? (Score:1)
there's a 486 SX 25 (8MB) cranking out a score of 281.81 at 8-bit 640x480 (built-in 2MB S3).
A Gateway Pentium III "other" (128MB) with an nVidia TNT (16MB) gets 260.4 at 16-bit 640x480. Yet a Gateway Pentium MMX 233 (not II, MMX) (128MB) with a 4MB ATI Rage gets 266.4 at the same resolution and bit depth.
The same chart shows an 8MB ATI card running at 32-bit 1600x1200, which is just silly... (264 on a Celeron 333)
Re:Any evidence? (Score:1)
All those figures are irrelevant anyhow
with release dates for the hardware
Anything I could say (besides the fact it would be breaking people's confidences in me) might very well be wrong. For example, a lot of what 3dfx said (under NDA) to developers at GDC '99 about the geometry acceleration abilities of the Voodoo4/5 turned out to be wrong, because of last-minute design changes.
Much faster (Score:1)
This should be no surprise: Intel can adjust their R&D budget to hit Moore right on the nose. They haven't had much competition, so there's no reason for them to spend money going any faster. But the 3D hardware market is cutthroat (and no there hasn't been any reduction in competition -- all the key chipset manufacturers are still out there, with the exception of Rendition, who disappeared a long time ago). These guys are pushing their cycle time about as fast as possible -- they're basically looking at a few months of design, then fab on each generation...
Their stuff is easily as complex (element-count wise; I can't say if it's easier or harder to design) as the big CPU makers'...
Don't forget the Velocity Engine! (Score:1)
I will say that G4 sounds a heck of a lot cooler than 7400. 7400 sounds like the name of a Volvo.
Re:What happened to pixel volume rendering? (Score:1)
The reason that you've seen it run better on better-equipped systems is that they probably were not MS-oriented. Win32 OpenGL does not have 3D texture mapping (it's still only OpenGL 1.1), so you have probably been seeing these on Suns or SGIs, where there is already hardware support. The reason your system ran slow was that it was an all-software implementation.
Re:Linux and ATI (Score:1)
Quake II/III can draw using either Glide or OpenGL. If you select OpenGL, Glide isn't involved at all. So you're not quite right when you say you need a card with Glide support to play Quake III on Linux. You have a choice between Glide (3dfx only) and OpenGL (all others).
That's not to say that any card with Linux OpenGL drivers will perform as well under Linux as it does under Windows. This is where that direct rendering thing comes in. My TNT2 card only manages about 20 FPS in Q3A on my machine because the OpenGL calls need to pass through the X server. When XFree 4.0 comes out (and somebody said "early March") and drivers using its Direct Rendering Infrastructure (DRI) are released, this bottleneck will be removed and owners of TNT2 and other non-3dfx cards can enjoy the full power offered by their hardware.
(*) I believe there's a library that translates Glide calls to OpenGL calls, enabling software that was written only for the Glide API to run on any OpenGL implementation. 3dfx's lawyers were hassling the authors and I don't know how that turned out.
--
ArtX? (Score:1)
(Don't flame me about how ArtX's PC stuff suck[s|ed]!)
Re:ATI is the top company (Score:1)
ATI or NVidia, either way I like to see competition. When companies don't give competitors a chance to sit on their laurels, the consumer benefits. As long as we have choice. With the recent trend toward integrating video support into the mainboard chipsets, and the threat of integrated solutions like the PSX2 and X-Box driving choice out of the market, there are bigger worries.
I bought a Matrox Millennium to play Duke Nukem II years back hoping to get a glimpse of a virtual world, and I'd surely hate to see the road to that dream take a sharp turn. Not when recent video hardware advancements make the goal seem so near.
Re:First impressions... (Score:1)
First impressions... (Score:1)
However, most of this is beside the point. For any of these features to be useful, they really need to be implemented on the majority of 3D cards out there; otherwise it's too much effort for too little return to make use of the extra features. Just look at most 3D games out there - they're still mainly using single-pass rendering. (Well, apart from first-person shooters, but they're mainly only using 2 passes for their static shadow maps...)
Yawn! Call me when they catch up to SGI (Score:1)
use this stuff.
Re:Voodoo5 5000 (Score:1)
Both the V4 and the V5 use the same chip, 3dfx's VSA-100. The V4 uses one chip, while the V5 uses 2 or 4 chips, depending on the model. The two cards should come out about the same time.
You have missed the point (Score:1)
That SGI stuff is expensive. This stuff will ship on a (pulling a number out of my ass) $250 graphics card for PeeCees. Then it'll be on a $150 graphics card the year after that.
The pro equipment will always be ahead. When ATI/3DFX/etc catch up to where SGI is, SGI will be at the next level by then. But it doesn't matter. This is about affordable stuff that you'll put into a $2000 box that you'll play Carmack's games on.
"Yawn, this Learjet is lame. Me-262 did this 55 years ago."
---
Re:Voodoo5 5000 (Score:1)
Re:I am crazy (Score:1)
Monkey (Score:1)
Due to this post, a monkey was allowed to live. Yes! That's right! Instead of dismembering or poisoning another of our hairy friends, we figured Bain would make a proper substitute.
We'll be coming, Bain. Be ready for us! We've selected yet another interesting and creative method of termination just for you! :)
(Note: This message was meant in jest. No matter how much we want to do violent things to Bain, a proper monkey will be selected, dressed up as Bain, and killed in Bain's stead in a Monkey Moderation to follow.)
LouZiffer
Kinda OT - What I would like... (Score:1)
A high-quality TV output
Why, you might ask? Well, my use of it wouldn't be typical (I want an easy way to plug in a homebrew, or el cheapo, HMD that uses a composite video signal), but I can see others using it for more normal usage - to play games on the TV, to watch their DVDs on the TV (I know many DVD decoder cards have TV output on them, but this should be a function of the video card), or to set up a TV-based internet connection (yeah, I know - yech! - but it should be an option!). One other use I could think of would be recording a 3D movie to a VCR (of course, you and I would just keep it digital and render it to an MPEG or AVI file).
I don't want the cheesy TV-out system either (where you have to set the system to 640x480x30Hz or something, then the scan-rate conversion is done, but you can't see it on the monitor) - I want to be able to view the image on the monitor and TV at the same time. For my purpose, this would allow me to preview a world rendered for an HMD on my monitor as I work, but then put on the HMD to do actual testing. I also want something that can handle the fast motion that can accompany FPS games and VR sims.
Now, for my purpose, all of this would be moot if el cheapo HMDs had SVGA quality LCDs and interfaces - but they don't. My application is a niche anyhow - I am sure many have wanted to play their FPS game on the TV from their computer (esp. when the TV is generally much larger than the monitor on the PC).
I know that there exists external hardware to do this - but much of it isn't cheap, and the cheap stuff isn't great. I think, though, that an integrated, low-cost solution could be done, if some company would do it.
Rage 128 (Score:1)
Ever seen the "Flying Windows" screen saver crash? Always fun!
Voxels on games.. (Score:1)
Hmm.. Delta Force I and II use volumetric pixels, and both look good, but use way too much CPU.. I tried the demo of DF II, but it was too much for my 400 MHz AMD.
Voxels might be the next feature to be added to 3D accelerators. I think SGI already does voxels in hardware...
Hardware without drivers == Practice in Futility (Score:1)
Shouts to 3dfx for staying on top.. I junked my rage fury and bought a voodoo.
Re:Rage 128 (Score:1)
basis with everything from an S3 ViRGE to a Matrox... I don't think it's necessarily the driver.
Only one beef. (Score:1)
The Divine Creatrix in a Mortal Shell that stays Crunchy in Milk
Re:Rage 128 (Score:1)
I was looking at the All-in-Wonder 128 Pro
or the Voodoo3 3500. Is there anything I should know about these or similar cards?
...I run RH 6.1, BeOS, and Windows 98.
Re:Rage 128 (Score:1)
Is the AiW 128 a true MPEG-2 (DVD) decoder?
...the only thing that kept me looking at the Voodoo3 3500 was the radio tuner, and the fact that BeOS supports Glide pretty well.
Thanks again
Re:3D Texture mapping comments (Score:1)
Practically every 3D game out there makes extensive, if not sole, use of hand-drawn texture maps. There's no way you can reasonably procedurally texture a low-poly model and make it look like, say, a face.
Practically every broadcast or film 3D project uses hand-drawn or photographically based texture maps.
I agree, these are often combined with procedural maps and volume shaders, but a chrome sphere on a checkerboard with a marble column in the background rendered in POV-Ray doesn't exactly cut it when it comes to games and other professionally produced 3D content.
Whoever moderated this as 'informative' needs to get their head checked.
Re:ATI next nVidia? (Score:1)
Re:Depth of Field effects in 3dfx's new Voodoo5 (Score:1)
Back in 1995 the NV1 multimedia accelerator performed curved surface rendering. It could perform forward texturing of a quadratic surface defined by 9 control points. Unlike any current card, it could do a realistic curved tire with a dozen patches, instead of needing hundreds of triangles to avoid that caveman/combine harvester appearance. It could do an unrealistic Lara Croft breast in a single patch.
Absolutely amazing stuff, and since it was (and still is) totally unrelated to any standard 3-D API, there was an SDK which exposed the interesting programming model of the chip. It was more than the hardware registers but less than a software API.
At the time no developer wanted to write for non-standard hardware unless the hardware vendor shelled out the money. Plus, quadratic patches (parabolas) aren't supported by low-end 3-D authoring tools, and high-end tools use bicubic (NURBS) splines, which don't always degrade to parabolas well. So apart from Martin Hash's Animation Master, no 3-D authoring tool ever supported quadratic textures, which made development tricky. Plus it's pretty hard to do collision detection of curved surfaces.
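The quadratic surfaces described above, defined by 9 control points, are biquadratic Bézier patches, and evaluating one is plain Bernstein-basis math. The following is a sketch of that evaluation only; the NV1's actual forward-texturing pipeline is not public at this level of detail, so nothing here represents how the chip itself worked:

```python
def bernstein2(i, t):
    """Quadratic Bernstein basis polynomial B_{i,2}(t) for i in {0, 1, 2}."""
    return [(1 - t) ** 2, 2 * t * (1 - t), t ** 2][i]

def eval_patch(ctrl, u, v):
    """Evaluate a biquadratic patch at (u, v) in [0,1]^2.

    `ctrl` is the 3x3 grid of (x, y, z) control points - the 9 points
    that define the quadratic surface mentioned above.
    """
    x = y = z = 0.0
    for i in range(3):
        for j in range(3):
            w = bernstein2(i, u) * bernstein2(j, v)
            px, py, pz = ctrl[i][j]
            x += w * px; y += w * py; z += w * pz
    return (x, y, z)

# Degenerate example: a flat 3x3 grid in the z=0 plane, so the patch is
# just the plane itself and interior points interpolate the grid.
flat = [[(i, j, 0.0) for j in range(3)] for i in range(3)]
print(eval_patch(flat, 0.5, 0.5))  # center of the patch: (1.0, 1.0, 0.0)
```

With curved control grids, a single patch like this sweeps out a smooth quadric piece, which is why a tire needs a dozen patches instead of hundreds of triangles.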
I saw the future back in 1995 but the game industry wasn't interested.
ATI == low quality (Score:1)
I myself will probably never take ATI seriously when considering a high-end gaming card. Why? Nearly all of their products in the past have been quite substandard in the quality department. If the hardware was good, then the drivers and software suffered badly and vice versa.
I know of no serious gamer that owns an ATI card. My friend bought one once (Rage 128, I think) based on a good review in PC Gamer, and has regretted it ever since. (He doesn't read PC Gamer anymore, either.)
A few people mentioned that ATI has the majority market share in the 3D card racket. This is true, but misleading when you look at the big picture. The reason for this is that ATI ships a lot of cards and graphics chipsets to OEMs like Dell and Gateway for use in lower-budget machines ($500-$1500). In the last couple of years, this price range has dominated all others due to the huge number of people buying a PC for the first time. These people do not even know what a video card is, therefore Gateway & Co. can get away with putting anything in their machines that is barely capable of generating graphics.
Need I go on?
Of course, it is completely feasible that ATI could turn themselves around. AMD rose up from the ashes to compete directly against Intel. (Yeah, apples and oranges, but I think the principle is the same.) And 3dfx, once the 3D king, has fallen to almost last place because they were awfully good at getting chips out the door but simply sat around while companies like nVidia started to innovate and add features to their products.
Could write more, but I'm about to go home right now and play Q3 on my GeForce.
Re:Why NVidia is on top (Score:1)
This is great because there is a HUGE fight for quality and price-control, and it gets done at TWO levels! This means that NVidia chips from a good card-company with great options and speed will be top-notch.
Once NVidia gets out their XFree4 drivers (IF XFree4.....), they will destroy everything in sight. Supposedly, these drivers will "kick the snot out of anything else out there", says their Senior Engineer, Nick.
Because of this, I am in full support of NVidia
Mike Roberto
- roberto@apk.net
-- AOL IM: MicroBerto
Re:Depth of Field effects in 3dfx's new Voodoo5 (Score:1)
This is all fine and dandy, but this is hardly hardware support for things like depth of field. It requires that you render the scene n times just to achieve a single frame. Real hardware support (for depth of field) would let you specify the focal length and have the hardware automatically blur the out-of-focus areas.
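To make concrete what "specify the focal length and let the hardware blur" could mean, here is a sketch of the standard thin-lens circle-of-confusion formula, which gives the blur-circle size for a point at a given distance. This is textbook optics offered for illustration, not any shipping card's actual method:

```python
# Thin-lens circle of confusion: the blur-circle diameter for a point at
# distance d when the lens (focal length f, aperture diameter a) is
# focused at distance s. All distances in the same units.
def circle_of_confusion(d, s, focal_length, aperture):
    return abs(aperture * focal_length * (d - s)) / (d * (s - focal_length))

# Hypothetical example: a 50mm f/2 lens (25mm aperture) focused at 2000mm.
f, a, s = 50.0, 25.0, 2000.0
print(circle_of_confusion(2000.0, s, f, a))  # point in focus: 0.0
print(circle_of_confusion(4000.0, s, f, a))  # point behind focus: blurred
```

Hardware with this kind of support could evaluate the formula per pixel from the depth buffer and blur accordingly, instead of requiring n full scene renders.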
While the 3dfx "T-Buffer" effects are nice, I can't imagine developers using them, since it would lock the developers into certain hardware (again!). This is something that I thought was insane when Glide first came around. These types of proprietary extensions are exactly why I have never and will never buy 3dfx-only games or 3dfx video cards. You said it above, "basically an accumulation buffer" - instead of using this type of thing, why not make an open extension to OpenGL/Direct3D to support these types of effects?
Full scene anti-aliasing is one of the few "new" features that 3dfx cards sport that I actually like, and that I think will be useful in future games.
Just my 2c
The "Top 10" Reasons to procrastinate:
Re:Depth of Field effects in 3dfx's new Voodoo5 (Score:1)
I believe you are correct about the T-Buffer hardware "jitter" thing. I'll have to check 3dfx's documentation. My main point was that the card has to render the same scene multiple times (into each of its n T-buffers), then merge the scenes together to produce the effect. This is how it was shown in documentation I read, and in the demos that 3dfx gave (of course these were emulated).
I agree with you that Glide was needed at the time, but I would have rather seen them work with others, or with the OpenGL standards committee to get the job done. I realize (and don't think I ever said otherwise) that 3dfx was never a monopoly (although they acted like one, @ least in the 3D consumer market). And I agree that they did hold onto Glide longer than was good for the gaming/development community.
But I don't think nVidia or ATI ever had a proprietary 3D API (please correct me if I'm wrong). I know of S3's (wasn't it much older than Glide?) and PowerVR's (I used to own one... yuck!). I didn't agree with PowerVR's work with PowerSGL either; it was just much cheaper than buying a Voodoo1 when I got it.
I don't think 3dfx's Glide was wrong for the time (and you're right, the hardware was quite amazing for the time period [jeesh! not too long ago]), but the fact that it even began shows how short-sighted the entire 3D industry was at the time. Don't be mistaken, 3dfx (3Dfx at the time) wanted to become the M$ of 3D, lucky for us that they failed.
The "Top 10" Reasons to procrastinate:
Re:3dfx is actually a fraction of nVidia's size (Score:1)
>as much as ATI doesn't tell me anything about how
>many people actually use ATI or nVidia cards
You are completely correct - the number of computers with an ATI Rage Pro chip *FAR* exceeds that of all nVidia cards combined. Now count all the other ATI cards in...
I'd say nVidia's worth is largely due to stock fluctuations, which in turn are largely thanks to their announcements (often hype).
Re:More features for no one to support (Score:1)
I agree that a lot of these advances are being driven more by the marketing dept than any real requirement from developers or users. But on your point of having to code for multiple cards - surely this is the point of things like (gasp) DirectX? My (limited) understanding of it is that basically the architecture gives a whole load of API calls for texturing, rendering, bump mapping, whatever - and software implementations of them all. Then along comes your lovely new EmotionalPixelCarpet Engine(tm) and says - "hang on - I can do X, Y and Z in hardware". The idea is that to the application it's transparent - it says - "here's a cube, please bump & tex map it, then phong shade it and put it at this position". Whether that's slow or fast depends on the underlying drivers & hardware.
The remaining problem is keeping the API up to date with the new features which are not just "good implementations" but rather completely new stuff. One for the architects, I think - but it can't be that hard.
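The capability-reporting idea described above can be sketched as a dispatch table: the application asks for an operation by name, and the API routes it to the driver's hardware path when the device advertises it, or to a software fallback otherwise. The class and method names here are hypothetical illustrations (including the poster's joke "EmotionalPixelCarpet" idea), not real DirectX calls:

```python
# Sketch: transparent hardware/software dispatch behind one API.
class SoftwareRenderer:
    """Baseline software implementation of every operation."""
    CAPS = set()  # advertises no hardware acceleration

    def bump_map(self, surface):
        return f"bump-mapped {surface} (software)"

    def phong_shade(self, surface):
        return f"phong-shaded {surface} (software)"

class FancyCard(SoftwareRenderer):
    """A hypothetical driver that accelerates some operations."""
    CAPS = {"bump_map"}  # advertised hardware capabilities

    def bump_map(self, surface):
        return f"bump-mapped {surface} (hardware)"

def draw(device, op, surface):
    # The application doesn't care where the work happens; only the
    # speed differs between the hardware and software paths.
    impl = device if op in device.CAPS else SoftwareRenderer()
    return getattr(impl, op)(surface)

print(draw(FancyCard(), "bump_map", "cube"))     # hardware path
print(draw(FancyCard(), "phong_shade", "cube"))  # software fallback
```

Real APIs expose this via capability flags queried at startup rather than per-call dispatch, but the principle is the same: the call succeeds either way, and the hardware just makes it fast.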
their feature set beyond TCL (Score:1)
If ATI follows through on their promise and loads their hardware with things like vertex skinning and all that, and a 3D programmer comes up with a more advanced version of the feature in software, what then?
Does the game stick with the eventually outdated hardware version of the feature, or do they implement a newer optimized one via software, which'll sort of defeat one or more purposes of the card?
(Am I making any sense here?)
========================
63,000 bugs in the code, 63,000 bugs,
ya get 1 whacked with a service pack,
Re:Rage 128 (Score:1)
I have found the All in Wonder 128 to be an excellent card, and the drivers under XFree86 and Windows 98 (though not 95) top-notch (but, then again, my last card was a 2 meg Diamond Trio).
I don't know about BeOS, though.
You might want to know that the AiW128 can do real-time MPEG-2 compression, though not in hardware, and now has 3D support in Linux. Also, the V3500 cannot do 640x480 capture.
Ah, why have me tell you when Tom can tell you? There is a comparison of these cards and others on Tom's Hardware...
Multitalented All in One Graphic Boards [tomshardware.com] on tomshardware.com
Shop wisely, and don't forget that ATI has amazing DVD playback :)
different expectations != low quality (Score:1)
Surely that was a flame.
While ATI had some problems getting its products to market in enough time to compete with the likes of NVIDIA and 3DFX, you cannot judge them based on the fact that their cards are not as fast as the cards in a completely different market segment. I have never seen an ATI card that failed to excel at the job it was intended to do.
Aside from a few past driver issues, ATI has, in my opinion, done a fine job (I am biased, though, as I own and work with ATI cards these days).
On another note, if I hear one more person talk about the rage 128 being slow I'm going to scream. Why don't we all just throw those TNT2 cards out, too?
J. T. MacLeod
Re:Moore's Law on Graphic Cards? (Score:1)
The interesting thing about graphics chips is that they are still not on the bleeding edge as far as total number of transistors or speed and throughput. Eventually they may catch up to the CPU, simply because the road is already paved for them. I won't be surprised to see a 500 MHz graphics engine within the year and possibly a 1 GHz one sometime next year. The technology is already there and "proven" by the CPUs; it's just a matter of pushing the graphics chips into that ballpark.
What does this all mean? Well, I think our Star Trek Holodeck idea isn't too far-fetched. Movies will involve more digital actors, who may even become the majority in the acting world. Games of course will continue to boom and will only get better, more lifelike and realistic. I think simulators are the next big step: ones that really fool your brain into thinking that you're actually within your virtual environment. What I don't understand is why the pipeline between the CPU and the graphics card isn't opened up more, or at least made more direct. If this were done, truly astounding products would result.
Nathaniel P. Wilkerson
NPS Internet Solutions, LLC
www.npsis.com [npsis.com]
Re:3DLabs cards DO rock OpenGL (Score:1)
A side note: if you saw the simulations for Robbie Knievel's train and Grand Canyon jumps, then you saw my brother-in-law's work using this system.
Re:Moore's Law on Graphic Cards? (Score:1)
The best result for a Banshee (which I believe was a really fast card 18 months ago) was 168.5.
The fastest result for a modern card was 462.1, but that could have been overclocked. There are several results that are over 300. This isn't a totally accurate way of comparing, given the effect of the processor, but it does suggest that graphics cards are matching Moore's law pretty closely.
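Taking the two scores above at face value, one can back out the doubling time they imply. This is only a back-of-envelope check under the poster's own caveats (an assumed 18-month gap, CPU effects and possible overclocking ignored):

```python
import math

# If performance went from a Banshee's 168.5 to a modern card's 462.1
# in roughly 18 months, what doubling time does that imply?
old_score, new_score, months = 168.5, 462.1, 18
growth = new_score / old_score  # roughly 2.74x
doubling_time = months * math.log(2) / math.log(growth)
print(f"{growth:.2f}x in {months} months -> doubling every "
      f"{doubling_time:.1f} months")
```

The implied doubling time comes out a bit under the classic 18-month Moore's-law figure, so these numbers, if accurate, suggest graphics cards were improving at least as fast as Moore's law, not merely matching it.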
Re:Am I stupid (Score:1)
Any evidence? (Score:1)
Do you have any figures for actual fill rates and triangle drawing rates with release dates for the hardware?
> and no there hasn't been any reduction in competition
There were a lot of graphics chip makers about 4 years ago. Most of them died very quickly. I'm not suggesting the reduction in competitors has reduced the strength of competition - in fact, quite the opposite. Natural selection means that only strong, aggressive companies have survived.
Re:Hopefully not there! (Score:1)
What happened to pixel volume rendering? (Score:1)
Looking forward, when can we expect to see mainstream games that use geometry acceleration?
thanx
Drivers (Score:1)
Re:their feature set beyond TCL (Score:1)
Unfortunately for them, and for us in general, we are stuck with this situation while using current-generation PC technology. Many components of the PC architecture, including even the AGP bus, are just too slow to allow for this flexibility, even though processor speeds are relatively blazingly fast these days.
As we move up to the next generation (64bit PCs, fast buses), I'd expect a lot of the 3D stuff that is hardware accelerated now to move back into software on general purpose CPUs, until the ceiling starts being hit there too.
Re:More features for no one to support (Score:1)
Re:Where next for high-end graphic cards? (Score:1)
It would be cool to see some OEMs releasing some extension to allow Bézier patch rendering.
As far as depth of field goes the new 3dfx part is supposed to be able to do it with their T-Buffers. Also full scene AA (blurring the edges).
I doubt a physics engine or hit detection is a possibility though, as these are pretty implementation-dependent on the game. With Bézier patches, I could send the control points to the card and it could render them without further intervention. Things like physics and hit detection require return values (and a stall in the video card).
ATI's new features - And Why OGL rocks. (Score:1)
Re:More features for no one to support (Score:1)
Re:Don't forget the Velocity Engine! (Score:1)
Re:waiting for the iMac comment, Carmack? (Score:1)
Three thoughts/points to ponder:
1. Apple has way too much invested in ATI right now to jump ship.
2. It's true that Apple's ATI-sourced video lags behind PCs, and that the drivers are a little under-featured - but usually because Apple hasn't provided enough memory for the ATI hardware to do its job properly, or because of other hardware constraints.
3. The latest Apple/ATI efforts, coupled with the vector-based Quartz graphics in Mac OS X, make for screen drawing so slick, you'll wonder why everyone's spending so much money on nVidia cards.
Moore's Law on Graphic Cards? (Score:2)
I sure need more power!
Drivers (Score:2)
Re:3dfx is actually a fraction of nVidia's size (Score:2)
Here's to hoping that 3dfx dies a miserable death.
You've got it backwards (Score:2)
And that number's only going up. First there was the Voodoo, the Voodoo, and only the Voodoo. If you wanted a choice, you shopped between different cards with the same Voodoo chipset. If you wanted a high end card, it had TV out.
There was much rejoicing when the TNT came out and started heating up the graphics card competition... and even Matrox seemed to want to do 3D, although it took them until the G200 before they even had an OpenGL driver using their card... but now they have the G400, and their card's actually good at it. For people buying a new system, it's generally worthwhile to look at the Voodoo 3, TNT 2, G400, GeForce(of course), and ATI cards... and that's a good thing.
Why NVidia is on top (Score:2)
ATI sure is talking a good game, but do they have what it takes to back it up? NVidia did three things that changed the video card industry:
ATI has shown it can produce a good video solution, but lacks in meeting retail market demand and driver support. Matrox builds awesome chipsets and cards (and excellent driver support) but doesn't give a damn about meeting demand. 3dfx kept their 3D spec closed, thereby limiting potential developer support, lost momentum, and didn't provide good reference drivers.
NVidia has proven they can do all three consistently. And let's not forget #4: support from software developers. Meeting the first three criteria directly impacts the fourth. Developers don't waste their time developing for hardware no one owns.
Voodoo5 5000 (Score:2)
Highly parallelisable (Score:2)
Depth of Field effects in 3dfx's new Voodoo5 (Score:2)
Re:Depth of Field effects in 3dfx's new Voodoo5 (Score:2)
I'm not too familiar with the details of 3dfx's approach (not being a 3D programmer myself), but my understanding is that the VSA-100 uses a hardware method for producing "jittered" samples slightly offset from the target pixel, which are then blended together in the accumulation buffer.
The "T-Buffer" effects can then be specified to be applied to certain objects, but not others. If a sub-pixel jitter is specified, you get anti-aliasing. A larger jitter gives softening, and a very large jitter gives blurring. So far as I can tell, a program does not need to perform "n renderings". The program still needs to specify the object (or it can specify a mask and just do an area, I think) and the degree and quality of jittered samples, but from there I believe the chip does the rest automatically.
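The jitter-and-blend idea can be sketched in software: render the scene at several slightly offset positions and average the results, accumulation-buffer style. This is a generic illustration of the technique, not 3dfx's actual T-Buffer hardware path; the toy `render_edge` scene and the jitter offsets are made up for the example:

```python
def accumulate(render, jitters):
    """Average several jittered renderings, accumulation-buffer style.

    `render(dx, dy)` returns a 2D list of grayscale pixels for the scene
    shifted by (dx, dy). Sub-pixel jitters give anti-aliasing; larger
    jitters give soft-focus or motion-blur style smearing.
    """
    frames = [render(dx, dy) for dx, dy in jitters]
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / len(frames) for x in range(w)]
            for y in range(h)]

def render_edge(dx, dy, w=8, h=1):
    # Toy "scene": a hard vertical edge whose apparent position shifts
    # with the sub-pixel offset (a stand-in for re-rendering the scene).
    return [[1.0 if x + dx >= 4 else 0.0 for x in range(w)] for y in range(h)]

# Four sub-pixel jitters soften the hard edge into a gradient.
out = accumulate(render_edge, [(-0.75, 0), (-0.25, 0), (0.25, 0), (0.75, 0)])
print(out[0])
```

The point of doing this in hardware is that the blend happens per pixel as the samples arrive, so the application does not have to orchestrate the separate passes itself.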
> While the 3dfx "T-Buffer" effects are nice, I can't imagine developers using them, since it would lock the developers into certain hardware (again!). This is something that I thought was insane when Glide first came around. These types of proprietary extensions are exactly why I have never and will never buy 3dfx-only games or 3dfx video cards. You said it above, "basically an accumulation buffer" - instead of using this type of thing, why not make an open extension to OpenGL/Direct3D to support these types of effects?
When the Voodoo1 was first introduced, competing accelerators included the S3 ViRGE (the most numerous by a wide margin), the Rendition Verite, the nVidia NV1, etc... Good, fast OpenGL support was still the province of ultra-high-end professional accelerators, while Direct3D (still below version 3 at that time, I think) was slow and glitchy. As a result, in addition to OpenGL and D3D, *everyone* at the time had their own proprietary API which matched their own hardware closely, thus giving a sometimes substantial performance boost. nVidia had one for their NV1, PowerVR was pushing their PowerSGL, and I seem to remember S3 and ATI had their own as well. You still see PowerSGL support in Unreal, and I think S3 still pushes programmers to use Metal for the Savage2000, but of all the cards from that era only the Voodoo1 is still fast enough to be usable today, and only Glide is still programmed for.
Glide survived because it appeared at a time when consumer-level 3D was first starting to appear, plus the hardware was good, plus it was fast, and doubleplus because it was easier to learn and program for than D3D at the time. I think 3dfx held onto Glide far too tightly for too long, but its existence is due more to history than to any monopoly power on 3dfx's part. (In fact, ATI and S3, both then and now, are each several times larger than 3dfx, both in terms of market cap and number of cards shipped. nVidia has something like 5x the market share that 3dfx has.)
Re:3dfx is actually a fraction of nVidia's size (Score:2)
I'm well aware of that, but I didn't have any hard statistics on hand for the more interesting data, like marketshare. I was researching this a few days ago, but I can't seem to find my source again.
Anyway, skipping the hard statistics, in terms of 3D accelerator market share (IE, who sells the most $$ worth of cards/chipsets), nVidia is number one. I forget who comes next, but I believe it's ATI, then S3, then Intel (Big with OEMs). Then way in the back comes everyone else, including 3dfx and Matrox. 3dfx has a pretty strong retail presence, but that's a small slice compared with OEM pie.
Here's to hoping that 3dfx dies a miserable death.
I have a hard time understanding the anger directed towards 3dfx. They don't have any monopoly power over the market (Never did), have released just about all of their source code, and their cards offer a pretty good bang for the buck.
At any rate, you could very well get your wish. 3dfx has been experiencing severe and accelerating losses for the last few quarters. They just had a layoff a few weeks ago. At the current rate, they only have enough cash to last for a year or two.
And one less 3D company means less competition in the marketplace. In the past few years we've seen a huge number of companies leave the field: Tseng Labs (out of business), Cirrus (now doing audio/modem chips instead), Trident (still around but minuscule), Real3D (remains bought by Intel), Rendition (remains bought by Micron), Hercules (remains bought by Guillemot), Number9 (still around, but just a brand that sells S3/nVidia chips), and Chromatics (bought by ATI). I think Permedia might be out too.
That's a pretty big number of companies that used to design chips but no longer do. Now everybody else, like Diamond and Creative, just slaps a label and an S3/nVidia chip onto a board. A lot of industry analysts think the consolidation is going to continue.
Think nVidia wouldn't try to establish a lock on the market if they get big enough? Intel, ATI, and nVidia have all been looking at integrated chipsets--in the short term as a low-cost part, but in the long term as a possible way to get that lock. And their investors seem to like the idea.
ArtX has *serious* ethical problems (Article Link) (Score:2)
The article details what happened when Jon "Hannibal" Stokes, a writer for Ars Technica, posted a negative article on an ArtX trade show appearance. Afterwards, a number of anonymous posts appeared on the Ars Technica forum which appeared to support ArtX, but which turned out to be from ArtX's Director of Marketing.
This incident appeared on Slashdot as ArtX, Hannibal and Consumer Fraud [slashdot.org].
Re:What happened to pixel volume rendering? (Score:2)
Anyway, we may find out if any of the rumors are true at the Game Developer's Conference that is taking place March 8-12.
Re:Voodoo5 5000 (Score:2)
"We have placed orders for production silicon already. Our software development is right on track. We are on the same release schedule as when the VSA-100 product was introduced at Comdex, which we stated would be in the Spring. That product will include all the features that have been promised. It will deliver real time, full scene anti aliasing. It will support dazzling cinematic effects via our t-buffer. It will feature 32-bit color depth, SLI implementations and astronomical fill rates. Despite the outstanding state of this first silicon, the boards used in the Cebit demonstrations do not represent production silicon. Shortly after GDC, we expect to be demonstrating Voodoo4 and Voodoo5 boards that are much closer to production quality."
The GDC is being held right now, March 8-12, so we should be getting some reports soon. Right now it looks like 3dfx is shooting for late April or May.
Stupid Idea; Port Linux to a Graphics card? (Score:2)
What with cards with 64mb+ of memory, 'GPU', etc.
I.e., framebuffer and display are no problem. Data would be loaded through a simple, stupid microprocessor across the AGP bus; that's all you'd need. I'm sure there are Linux distros that could fit comfortably within 64mb ^^
Anyone?
-AS
Hopefully not there! (Score:2)
Depth of field is not appropriate for interactive games. In RL you refocus your eyes to look at different things; if you couldn't refocus just by moving your eyes, you'd be half blind. It'd drive people crazy.
Integrated physics would lock the programmer into a certain physics model. Physics is not terribly CPU intense, and the demands vary a lot from game to game. Having specialized physics hardware on the video card is about as appropriate as having specialized AI hardware (IOW, it's not).
Voxels are either huge memory pigs or butt ugly. They might make nice 3d texture maps (if you're okay with fuzzy interpolation), but I wouldn't want to bother with them for whole 3d models.
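To put a rough number on the "memory pig" point, here's a back-of-the-envelope sketch; the 32-bit-RGBA texel size and the resolutions are illustrative assumptions, not any particular card's format:

```python
# Rough memory cost of a dense voxel grid, assuming 32-bit RGBA texels.
def voxel_grid_bytes(resolution, bytes_per_texel=4):
    """Memory for a dense resolution^3 voxel volume."""
    return resolution ** 3 * bytes_per_texel

# A modest 256^3 volume already needs 64 MB -- the entire memory of a
# top-end card -- while a 2D 256x256 texture needs only 256 KB.
print(voxel_grid_bytes(256) // (1024 * 1024))   # 64 (MB)
print(256 * 256 * 4 // 1024)                    # 256 (KB)
```

Cubing the resolution is what kills you: every doubling of detail costs 8x the memory, which is why full-model voxels end up either huge or ugly.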
Chromatic effects are a waste. They're so rarely useful that it would be better to special-case the lighting effects when needed.
Radiosity would be nice, but it's not something you can just pipeline in (ditto for casting rays). However, there might be cheaper ways to get the same effect.
Am I stupid (Score:2)
-----------
"You can't shake the Devil's hand and say you're only kidding."
Re:waiting for the iMac comment, Carmack? (Score:2)
IIRC Apple did rewrite parts of the RagePro driver library for the Mac, although I don't know if they're working on Linux versions as well. I'm guessing they're working very hard on BSD versions, though.
The G4 and even the newer iMacs make a quite decent Q3A platform thanks to the Rage128, but they still lag behind the latest PC video cards. Hopefully Apple will persuade developers to write the appropriate drivers for OS X so you can stick any AGP card in a G4 and have it work out of the box.
Re:Moore's Law on Graphic Cards? (Score:2)
Re:Hopefully not there! (Score:2)
To look into the future of consumer 3D one might want to look at high-end companies like SGI. Their machines can do cool stuff with the various buffers (i.e. render into texture memory, multiple independently controllable paths, etc.)
Finally, in reference to nVidia vs. ATI: it seems that ATI has always scrambled to get competitive products to market (good marketing and channels, though), whereas nVidia has been following a well-controlled technology curve and introducing innovative consumer products that are well-rounded and work. Following this trend, I'll bet that in 6 months nVidia will have a good solid product with useful features, while ATI and 3dfx will have products with quirky features and questionable quality (how 3dfx got away with saying that 16bpp is "good enough" and all we really need or want for so long is beyond me). Example: hardware T&L at the consumer level is truly useful (it moves a major part of the rendering pipeline onto the card!), whereas FSAA, which is very cool, is nothing more than oversampling; technically, it can be done on any general-purpose 3D graphics hardware that supports an accumulation buffer. I hope that 3dfx can do something useful besides pushing fill rate, and I hope that ATI can come up with a truly powerful and timely product, but history doesn't bode well for these two. I'd love to be proven wrong by either company.
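The "FSAA is just oversampling" point is easy to demonstrate. This toy sketch renders a made-up 1-bit scene (a diagonal edge, purely illustrative) at 2x resolution and box-filters it down -- exactly what a supersampling/accumulation-buffer approach does:

```python
# Full-scene anti-aliasing as plain oversampling: render at a higher
# resolution, then average subsamples down to each output pixel.
def render(x, y):
    """Made-up 1-bit 'scene': white above the diagonal, black below."""
    return 255 if y > x else 0

def supersample(width, height, factor=2):
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            # Average factor*factor subsamples per output pixel.
            total = sum(render(x * factor + i, y * factor + j)
                        for i in range(factor) for j in range(factor))
            row.append(total // (factor * factor))
        out.append(row)
    return out

img = supersample(4, 4)
# Pixels straddling the edge get intermediate grey values instead of a
# hard black/white step -- that softening is all FSAA really does.
```

Nothing card-specific here, which is the point: any hardware that can render into an off-screen buffer and average can do it.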
Re:Where next for high-end graphic cards? (Score:2)
How about Phong instead of Gouraud shading? Fast Phong algorithms for hardware implementation have been around for years. They're still more computationally intensive than Gouraud, but they remove the need for specular texture maps and reduce Mach-banding artifacts.
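The difference is easy to see along one interpolated span. A minimal sketch, assuming a single directional light and a crude power-law specular term (neither taken from any real pipeline): Gouraud lights the vertices and interpolates the colours, Phong interpolates the normal and lights per pixel.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def lighting(normal, light=(0.0, 0.0, 1.0), shininess=32):
    """Illustrative light model: diffuse plus a crude specular term."""
    n = normalize(normal)
    diffuse = max(0.0, sum(a * b for a, b in zip(n, light)))
    return diffuse + diffuse ** shininess

n0, n1 = (1.0, 0.0, 1.0), (-1.0, 0.0, 1.0)   # vertex normals

def gouraud(t):
    # Light once per vertex, then interpolate the *colours*.
    c0, c1 = lighting(n0), lighting(n1)
    return c0 + (c1 - c0) * t

def phong(t):
    # Interpolate the *normal*, light once per pixel.
    return lighting(lerp(n0, n1, t))

# Midway across the span the interpolated normal points straight at the
# light, so Phong recovers the specular highlight (2.0) that Gouraud
# washes out (~0.71) -- the highlight "between vertices" problem.
print(gouraud(0.5), phong(0.5))
```

That mid-span highlight is exactly the case where Gouraud needs specular texture-map tricks and Phong doesn't.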
Real-time radiosity? Not for a very long time, methinks. Radiosity is usually pre-computed. I remember reading one of John Carmack's
HH
Yellow tigers crouched in jungles in her dark eyes.
Re:3D Texture mapping comments (Score:2)
As I mentioned elsewhere in this discussion, precomputed 3D texture maps would take up vast amounts of memory on your video card. IMHO it would probably be better to let the CPU compute the procedural textures and transfer them to the card using AGP.
Better still, provide a texture compiler that produces bytecode that can be executed directly by the card. Now that would be cool. Procedural displacement mapping (like RenderMan uses) would be ultra-cool.
So the first 3D card that can execute RenderMan shader bytecode will get my money
HH
Yellow tigers crouched in jungles in her dark eyes.
3D Textures (Score:2)
Traditionally, polygons are used to represent 3D objects. However, with 3D textures, volumes of texels (texture elements) may also be used. In a 2D texture map (the kind that we see "glued" to a wall, for instance) indexing occurs via two texture coordinates, whereas in a 3D texture there are three coordinates.
One good example of 3D texture use would be that of a marble cube. If the corner of the cube were to be chipped off, any veins running through the marble would already be defined and visible without any additional textures being generated.
This means that you could chop a block of wood up and have the wood grain on the cut surfaces rendered correctly. However, the article then goes on to say:
Unfortunately, we feel 3D textures will have to be used incredibly sparingly because in order to implement the marble cube example explained above, an artist would have to draw the entire 3D surface (including the veins inside the cube which may never be seen!).
This is incorrect. How can an artist draw the inside of a solid cube of marble or wood? I've never heard of a 3D texture being created that way. They are normally generated procedurally: you have a function that mathematically calculates the texture colour given the x,y,z coordinates within the texture. This does mean that you can't store 3D textures on the card unless you pre-calculate an array of texels, which would require vast amounts of texture memory.
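A minimal sketch of what "generated procedurally" means in practice -- the constants and the cheap hash-based noise stand-in here are illustrative, not POV-Ray's actual functions:

```python
import math

def noise(x, y, z):
    """Cheap deterministic pseudo-noise in [0, 1); a stand-in for a
    real noise function like Perlin noise."""
    h = math.sin(x * 12.9898 + y * 78.233 + z * 37.719) * 43758.5453
    return h - math.floor(h)

def marble(x, y, z):
    """Colour intensity at any (x, y, z) inside the block. No stored
    texels: a freshly chipped corner gets its veins 'for free' because
    the function is defined everywhere in the volume."""
    vein = math.sin(x * 4.0 + 6.0 * noise(x, y, z))
    return (vein + 1.0) / 2.0          # map [-1, 1] to [0, 1]

# Evaluate anywhere in the volume, including points that were interior
# until the geometry changed.
sample = marble(0.3, 0.7, 0.1)
```

The function itself is a handful of bytes; the equivalent precomputed texel array at any decent resolution would be tens of megabytes, which is the storage problem described above.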
HH
Yellow tigers crouched in jungles in her dark eyes.
Charisma Engine? Emotion Engine? (Score:2)
Yeah, that's me, trendspotter extraordinaire. Takes a genius these days, eh?
Re:Where next for high-end graphic cards? (Score:2)
What comes after that? Well, I have to fight hard to keep my erection down when thinking of this... Hardware-based realtime radiosity. *uNF*
BTW, the idea of hardware-based collision detection... I hope to GAWD that the hardware manufacturers out there, nVidia in particular (cause I haven't read what they're doing after the geForce256, everyone else has something in the public works) read your post. It would be most gorgeous and possible to have such a thing...
Esperandi
Re:Hopefully not there! (Score:2)
Esperandi
Full-screen AA - the geForce does it (Score:2)
Esperandi
Re:3D Texture mapping comments (Score:2)
Esperandi
ATI Track History (Score:2)
I honestly hope that they support this card well and do a good job with the drivers, because even if you've got the best card, it's only as good as its drivers.
Esperandi
Re:Charisma Engine? Emotion Engine? (Score:2)
I'm kidding. It was a joke.
ATI next nVidia? (Score:2)
I hope they'll use this chipset to target the hardcore gamers and start a good battle against nVidia's supremacy. Like the AMD vs. Intel thing, us consumers will only benefit :)
(Another cool article on the charisma is here [gamersdepot.com].)
3D Texture mapping comments (Score:3)
The one major thing the author misses about 3D texture maps is that rarely are they hand drawn by an illustrator. A typical map is a procedural texture (think of rendering a marble texture using POVRay) so generating a lump of marble is not that difficult a thing to do.
For games, the programmer just needs to fire up POV-Ray, 3DS, etc. and get it to generate the appropriate texture volume, then put that in the image cache with the standard 2D versions. I'm sure a lot of game engines will handle this pretty quickly.
Charisma and Emotions explained. (Score:3)
I feel that those two names come real close to describing these two very excellent engines.
Let's see...
On charisma, David Jenson wrote:
When scientists and technicians hear the word charisma, they may first think of sales reps or politicians. But you'd be hard pressed to find a person in any influential biotechnology position who doesn't have some measure of charisma. Those on the scientific track are not exempt from this need. Charisma is derived from the ancient Greek word kharis, meaning "to cause to strive or desire." The ability to motivate others to strive and succeed is a major building block of successful management, whether in a QC lab or in a corner office. Charismatic people describe goals by painting word pictures, thereby motivating others to a particular end. They have an exceptional ability to win the devotion and support of others. They have no fear of presenting their ideas to anyone who may be able to help them. And they have excellent persuasion and negotiation skills.
But more to the point, I see charisma here as deriving from the Indian (as in South Asia) sense of the word. It has a similar meaning to the ordinary charismatic sense, but it comes closer to a powerful healing force dispersed around its subject than anything else. This is a very visual word, a very charismatic word: it conjures a halo around its subject and renders it in a light that leaves a very strong impression on anyone who hears it uttered. Thus it is a fitting name for this chip (which I believe will live up to it).
The same goes for the Emotion Engine in the PlayStation 2, which also conjures strong vibes and powerfully drawn meanings about what the chip can do. Human emotions are powerful, and machines, from the start of time (except for Marvin), are known to lack them. The very thought of a machine having these emotions takes a strong stab at anyone who looks at the PS2. The engine was made for its artistic quality, its ability to render something so beautiful that it is almost real. That is the emotion; that is the charisma.
--
Another article, plus ATI's Charisma White Paper (Score:3)
3dfx is actually a fraction of nVidia's size (Score:3)
Here's a comparison of some market caps (data from The Motley Fool).
ATI: 4,141.51 million
S3: 1,607.80 million
nVidia: 1,808.46 million
TDFX: 218.09 million
Re:What happened to pixel volume rendering? (Score:3)
Realistically, the boost is less than you may think. An average game doesn't spend more than 15% of a frame doing transformation. So the Ultra-Fast-Geometry-Accelerator-of-the-Future is going to buy you a 15% speedup in that case. The other issue is that geometry acceleration is only useful when you pump the data straight to the card and don't want or need intermediate results. For example, you'll have to transform points (one way or another) to do collision detection against instanced objects. But you can't use the geometry acceleration in that case, because the CPU needs the results.
Geometry acceleration is good, but it's not the panacea that many people are expecting it to be.
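The arithmetic behind that 15% claim is just Amdahl's law: even an infinitely fast geometry unit only removes the transform share of the frame. A quick sketch, using the 15% figure from above:

```python
# Amdahl's law: overall speedup when only a fraction of the work is
# accelerated.
def speedup(fraction_accelerated, factor):
    """Overall speedup when `fraction_accelerated` of the frame runs
    `factor` times faster and the rest is unchanged."""
    return 1.0 / ((1.0 - fraction_accelerated)
                  + fraction_accelerated / factor)

# Transform is 15% of the frame; make it completely free and the frame
# is still only ~1.18x faster.
print(round(speedup(0.15, float('inf')), 2))   # 1.18
```

Which is why "1 billion transformed vertices per second" marketing numbers translate into much smaller frame-rate gains in real games.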
Re:time to rethink for intel/AMD/etcetc (Score:3)
What kind of use is the card? LOTS! Go check out the Intergraph Intense3D Wildcat 4110. It runs in most prebuilt P2/P3 graphics workstations (huge card) and takes so much of the processing off the CPU that the CPUs are only needed for starting the software, other extended math calculations (we love those f-curves!), and rendering of the final image. By the way, this card does everything the GeForce AND V5 do. I'm not sure of its fill rate, but 21fps in a scene with 80000 polys is impressive. And game cards are catching up quite quickly to the power of the "industrial card".
Acceleration... What needs to be done is the acceleration of the front side bus. It's just poking along at roughly 133 MHz now, maybe 200 on Athlon systems (but that's only RAM to CPU). It'd be better if that were 1:1 with the CPU speed, or faster still, leaving the CPU with a wide-open pipe to the RAM and the other system peripherals.
Totally not answering the question.... (Score:4)
Every 18 months, the number of people making 3D graphics cards halves. There's only about 6 companies making 3D chips now.
More features for no one to support (Score:5)
On the first point, there's not enough time to sit down and focus on where all the rendering time is going in a complex game. Well, more like there are so many card and driver combos out there that the best we can do is try to write generic code and have it work across the board. If we could focus on one card, say a Voodoo 2, then we could push the limits of that chipset out beyond what people only expect from a GeForce. But there's no time for that, so we plow ahead using about 50% of each card's capabilities for the three month window until the next card comes along.
On the second point, 3dfx, Nvidia, Matrox and ATI (and S3, and...) are all branching out into odd and card specific feature sets. 3dfx has their T-buffer. Nvidia has "8 lights per triangle hardware lighting." Matrox has a certain kind of hardware bump-mapping. ATI has all sorts of wacky stuff. The bottom line becomes "Do we want to just focus on writing a great game, or do we want to spend an extra six months of development so we can support special features of all these cards that were considered hot eight months ago when we were still pre-beta?" And tacking in special Matrox-only support, for example, is hell on QA. It makes a lot of sense to ignore such features, unless we're getting a bundle from the card company to cover us for the trouble.
Where next for high-end graphic cards? (Score:5)
It seems that while the push for ever increasing image quality is going on, we are getting much closer to realistic, real time rendering of scenes. I wonder just what else is needed to really be able to push the envelope of visualization and realism further. Here's my current wish list.
There must be others - it looks like ATI is going to finally give us proper bump mapping and range-based fogging. Do we also need a proper chromatic model so that we can get rainbows through glass objects? Should there be real-time ray-casting or radiosity support so that real lighting effects (say carrying a flaming torch down a corridor and having proper soft-edged shadows) can be achieved?
Cheers,
Toby Haynes