7 Years of 3D Graphics
xtra writes "At Accelenation they are running a nice timeline covering 7 years of PC 3D graphics. It contains much info and even talks about some of the not-so-well-known players. Anyone still remember Rendition? Or BitBoys?" How many cards on their timeline chart have you used?
Number Nine (Score:1, Interesting)
Re:Number Nine (Score:2, Interesting)
Re:Number Nine (Score:2)
Re:Number Nine (Score:5, Insightful)
You only need 16MB to handle the highest-resolution computer graphics displays ever made.
Re:Number Nine (Score:2, Informative)
Again, I could be wrong.
Re:Number Nine (Score:2)
I also remember the horrid Windows 3.1 drivers. S3 was known for the best drivers. My Trident card had lousy drivers as well. ATI was notorious for bad drivers, funny how that reputation lingers.
Re:Number Nine (Score:4, Informative)
This is true for 2D displays, but when you start having double and triple buffering plus z-buffers it starts to add up. Then add the texture requirement and you can see why most newer cards have 64-128MB of memory on the cards.
Milalwi
Re:Number Nine (Score:2)
This is true for 2D displays, but when you start having double and triple buffering plus z-buffers it starts to add up. Then add the texture requirement and you can see why most newer cards have 64-128MB of memory on the cards.
Technically, though, he's still correct. Sure, your card would be awfully slow playing something like Quake III without on-card memory for textures, etc.
More memory (Score:4, Informative)
> resolution computer graphics displays ever made
You will always need more memory (in 3D graphics accelerators), even if display resolutions don't increase. Let's say we settle for a nice 2000x2000-ish display. That's 4M pixels, which at 32 bits is 16MB for the display.
At least double (32MB) but preferably triple (48MB) buffer this so you can create a new frame while the old one is being displayed. Then we need a Z-buffer (or W-buffer) to hold the depth values (24-bit) for each pixel, so we know what is in front of what. Typically you might also want to do some stencil effects (8 bits, which can be packed with the Z-buffer); that would be another 16MB. Now we have the basics for a 3D graphics display and are at 48-64MB.
But we are not done yet; now for some more interesting effects:
- Texture memory. Typically you use the leftover graphics memory and swap the rest from host memory (but we don't like swapping, so preferably all textures should be in onboard mem): 2-64MB
- 2x antialiasing (1 backbuffer + 1 Z-buffer at 2*2 the size of the display buffer) = 64MB (4x antialiasing = 256MB)
- Shadow buffer (rendering into a kind of Z-buffer from the light source to create realistic shadows): 16MB
- Accumulation buffer effects like motion blur (very expensive; a good blur could take 4 to 32 frames) or depth of field could make us want another 4-32*16 = 64-512MB
I for one could easily use more than 1GB of onboard graphics memory.
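To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in C, using only the assumptions from this post (the hypothetical 2000x2000 display, 32-bit color, packed 24+8 Z/stencil; note the 16MB figure above rounds up from 15.3MB):

#include <stdio.h>

int main(void) {
    const long width = 2000, height = 2000;   /* hypothetical display */
    const long pixels = width * height;       /* 4M pixels */
    const long MB = 1024 * 1024;

    long color = pixels * 4;      /* 32-bit color: one buffer, ~16MB */
    long depth = pixels * 4;      /* 24-bit Z + 8-bit stencil, packed */

    long triple = 3 * color;      /* display buffer + two back buffers */
    long basics = triple + depth; /* the "48-64MB" baseline above */

    printf("one color buffer: %ld MB\n", color / MB);
    printf("triple buffered:  %ld MB\n", triple / MB);
    printf("plus Z/stencil:   %ld MB\n", basics / MB);
    return 0;
}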
Re:Number Nine (Score:3, Informative)
Only 7 years? (Score:2, Interesting)
Re:Only 7 years? (Score:2, Interesting)
Re:Only 7 years? (Score:2)
You mean Void [pdabusiness.com]? Or did you have another one in mind?
Cheers,
Toby Haynes
Infinity and beyond... (Score:5, Funny)
Re:Infinity and beyond... (Score:2)
Some machines might just need math modules (all you SETI junkies.)
Though unfortunately computing for the home seems to be moving towards "all in one" motherboards and such.
Where are the real physics engines? (Score:3, Interesting)
Re:Where are the real physics engines? (Score:3, Insightful)
The processor itself would likely be a little specialized, to handle (x,y,z,vx,vy,vz)-style location/velocity vectors and such.
The closest thing I've seen to it was something one of my Materials Science TAs wrote to show and simulate forces on beams.
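As a rough guess at the kind of inner loop such a chip would be built around (a sketch in plain C; Body and step are made-up names, and real hardware would presumably run this for thousands of objects in parallel):

typedef struct { float x, y, z, vx, vy, vz; } Body;

/* The operation a physics coprocessor would hard-wire: accumulate
   acceleration into velocity, then velocity into position, for
   every object, every tick. */
void step(Body *b, int n, float ax, float ay, float az, float dt) {
    for (int i = 0; i < n; i++) {
        b[i].vx += ax * dt;
        b[i].vy += ay * dt;
        b[i].vz += az * dt;
        b[i].x  += b[i].vx * dt;
        b[i].y  += b[i].vy * dt;
        b[i].z  += b[i].vz * dt;
    }
}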
Re:Where are the real physics engines? (Score:2)
Re:Infinity and beyond... (Score:2)
Re:Infinity and beyond... (Score:3, Informative)
Athlon:
add eax, ebx ; where eax and ebx are 32-bit scalars
GF3:
add reg0, reg1 ; where reg0 and reg1 are vectors of scalars
If you wanted to do the same work as the GF3 on an Athlon, you'd need 32 (or however deep the vector registers are) successive instructions.
Once you ditch the instruction overhead for doing an operation on X number of successive scalars, the processor spends more time doing the math and the FLOPS goes up. Take a look at the AltiVec unit in the G4, or CRAY vector supercomputers.
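For a concrete (if anachronistic) illustration in C: here is the scalar loop next to a 4-wide vector version, using x86 SSE intrinsics as a stand-in for a GPU's vector unit. The GF3's registers are wider, but the principle is identical (this sketch assumes n is a multiple of 4):

#include <xmmintrin.h>  /* SSE intrinsics: 4-wide float vectors */

/* Scalar path: one add instruction per element. */
void add_scalar(const float *a, const float *b, float *out, int n) {
    for (int i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

/* Vector path: one add instruction covers 4 elements, so the
   per-element instruction overhead drops by a factor of 4. */
void add_vector(const float *a, const float *b, float *out, int n) {
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
}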
Still running Riva TNT (Score:1)
Why 7 years? (Score:3, Interesting)
What about Stunts, Elite, and other 3D games?
Re:Why 7 years? (Score:2, Informative)
The point was accelerated 3D, not "software" 3D. I still remember bombing runs from Ace of Aces on a Commodore 128, though it definitely wasn't a hardware-accelerated game.
Re:Why 7 years? (Score:2)
It wasn't even 3D. Sublogic's Flight Simulator and "Jet" were 3D, which is why they ran at about 1fps on a C64!
Re:Why 7 years? (Score:2)
Hmm. Release year 1989. Nope, after my time!
or how about the mystical SENTINEL?
I had a copy of that one. I recall it was one of very few games that I couldn't figure out how to play by just playing around pressing all the keys. So no *good* memories, anyway.
Re:Why 7 years? (Score:2)
The point was accelerated 3D not "software" 3D.
The Amiga had polygon filling in 1985 using the blitter. $DFF058 was the register used to fire up that beast, if I remember correctly.
Re:Why 7 years? (Score:2)
The article might better be titled "7 Years of *Consumer* 3D Graphics Cards"...
Re:Why 7 years? (Score:3, Funny)
Given that ZX81 (aka TS1000) emulators for the PC were around from day dot (or a ZX81 emulator running on an Atari ST emulator running on a PC...) how about 3D Monster Maze [u-net.com]? It's not that much more primitive than Doom. Run! Run from the scary Tyrannosaur!
(Yes, I know, the article is probably about hardware that draws triangles real fast, but it's Slashdotted hard, so we may as well have some fun reminiscing. If nothing else, it'll confuse the young 'uns ;-) )
How fast do we really need to go? (Score:5, Informative)
Is there really much visual difference between 700 fps and 135 fps? I'm not really sure if the human eye can make the distinction. They're sure pretty-looking numbers, but do the results show for it?
And how long before video cards can render essentially photo-realistic graphics? Soon games will be more like interactive movies.
Re:How fast do we really need to go? (Score:2)
Re:How fast do we really need to go? (Score:2)
> 700 fps and 135 fps?
Remember, Quake 3 is fairly old now; already games like MoH have parts that will make most above-average machines struggle (like that mission with all the trees); newer engines, larger cheaper monitors etc are only going to push that further.
Re:How fast do we really need to go? (Score:2)
While you're right that there's little point in rendering faster than the monitor refresh beyond benchmarks (which is really what we're talking about), being able to go above the refresh rate on average scenes is important because complex scenes will be, um, more complex and drop the framerate.
You may be able to push an average of 90FPS in Quake 3, but that's not so wonderful if you drop down to 20FPS every time you leave a corridor and drop to 10FPS every time someone throws a smoke grenade or fires a rocket.
Re:How fast do we really need to go? (Score:2)
So it may not help the visuals, but aside from using the extra frames for motion blur, etc., it also provides your physics engine with a more 'correct' version of your path.
Re:How fast do we really need to go? (Score:2, Insightful)
Is there really much visual difference between 700 fps and 135 fps? I'm not really sure if the human eye can make the distinction. They're sure pretty-looking numbers, but do the results show for it?
Quake 3 just happens to be a benchmark (and an old one at that) whose numbers are relative, but not necessarily realistic. Imagine if they benchmarked hard drives by always saying "The IBM 75GXP can load 40,000 10KB hello.c files / second", to which everyone follows up by commenting that they only need to load 1 hello.c file, etc. In other words, for a more demanding task like the new Doom, Quake 3 with complex mods like Urban Terror, or much more complex games like Operation Flashpoint, 135fps in stock Q3 equals ~15fps in a complex outdoor scene in OpFlash. And as has been recapped many times in the past: we are just scratching the surface of realistic environments (i.e. try to model nature in a dynamic fashion and the best boards put out single-digit FPS, if that).
Two words : (Score:3, Informative)
Or how about "rendering passes"?
Or how about "anti-aliasing"? (Kind of cheating on that one.)
Or how about "soft shadows"?
In short, more is better. If you give me higher framerate, I'll figure out what to do with those extra cycles.
Re:Two words : (Score:3, Informative)
If you can render the entire image in the 1/700th of a second before the screen refresh, you can much more accurately track events than if it took you 1/60th of a second to render the image. USB mice allow a much higher sampling rate that you could take advantage of, for instance. When you incorporate things like motion-tracking polhemus [polhemus.com] devices, and a head-mounted display, things get really interesting.
Re:Two words : (Score:2)
Your points have nothing to do with framerate. Yes, more power for more visual quality is better, but a higher number of frames per second makes ZERO difference, once you surpass the refresh rate of the monitor. In fact, it's worse, since you're wasting resources that could be allocated to other parts of the system.
Re:Two words : (Score:2)
The rate of preparing frames and the rate of displaying frames are related, but saying that additional framerate beyond the refresh rate "makes ZERO difference" is incorrect.
Also, my latency point holds quite well. Framerate is the rate at which you may draw frames, wouldn't you agree? If you have an increased framerate, you may either draw more frames than you render (which is foolish, unless you combine them, somehow - as in motion blur, anti-aliasing, soft shadows), or you can simply delay drawing the frame until the last possible instant before you need it. Doing so will decrease the perceived latency, especially if you have a higher sampling rate for your input devices than your refresh rate on your display - which I believe is quite possible with USB mice, and I'm certain it's possible with firewire devices.
To borrow an analogy, framerate is the speed of your car. If you only intend on driving a certain overall speed (a given refresh rate), there are other ways you can use that extra speed. You can travel a greater overall distance (motion blur, multiple rendering passes, anti-aliasing, soft shadows), or you can simply leave later and still arrive on time (for decreased latency.)
By the way, I'd like to analyze your term, "power for more visual quality"; "Power" is the rate of doing work. Therefore it sounds to me like what you're describing is "framerate," the rate of doing the work of preparing images. My analysis is that you're saying that "higher framerate is better," which was my original point.
Re:Two words : (Score:2)
Framerate is the number of completed frames per second that are written to the framebuffer. Hence, any excess beyond the refresh rate of the monitor makes no difference whatsoever to the viewer. The intricacies of the internal rendering procedures are irrelevant to the measure of framerate.
you may either draw more frames than you render (which is foolish, unless you combine them, somehow - as in motion blur, anti-aliasing, soft shadows), or you can simply delay drawing the frame until the last possible instant before you need it.
You're mixing your terminology. If you "draw more frames than you render", framerate is a function of how many times per second you "render". If you "delay drawing the frame until the last possible instant", the framerate is a function of how many times per second you "draw the frame".
My analysis is that you're saying that "higher framerate is better,"
Sorry, no.
Re:Two words : (Score:2)
The proposed OpenGL 2.0 specification supports more arbitrary blending modes under the title "Frame Buffer Operations". By their definition, the framebuffer does not only store "completed frames," so your definition of framerate is incorrect.
Why are the intricacies of the internal rendering procedure irrelevant? I thought that's what we were talking about. If you want to pretend that rendering is a black box, and then magically an image is painted on the screen, feel free - but don't pretend you understand "framerate."
Maybe you're correct - maybe I should use the term "render rate" to refer to partially-completed frames per second. But I can't understand why you'd even _hint_ that more "render rate" isn't better.
Re:How fast do we really need to go? (Score:2)
Granted, you won't notice any difference past your monitor's refresh rate. But there are a lot of reasons to have a much higher frame rate. When you do a benchmark you are getting the AVERAGE framerate; there are times when the actual framerate is much lower than this. I figure if I can keep my average framerate above 100fps then it shouldn't dip below my refresh rate.
One other thing is that you might be able to get 200-300fps in Quake 3 right now, but some benchmarks [anandtech.com] of the new Unreal engine on Anandtech [anandtech.com] show that those same cards that are spanking the Q3 engine will be spanked by the new Unreal engine. And I don't imagine the new Doom engine will be any easier. Having extra horsepower now helps when the newer games come out.
Re:How fast do we really need to go? (Score:2)
Re:How fast do we really need to go? (Score:2)
Wrong. Excessive framerate is pointless.
What you mean is: "It's good to have the power capable of rendering 700 fps in benchmark X, because that much power will allow you to maintain the optimum framerate in all circumstances."
Re:How fast do we really need to go? (Score:5, Informative)
We've still got a very long way to go until we get Monsters Inc. quality real-time games. As you say, current cards render triangles. Curved surface rendering (e.g. NURBS) may come next. Anti-aliasing takes a lot of power. I think that current cards are still using Gouraud shading, which is the most primitive shading model there is (correct me if I'm wrong here). The next step is Phong shading for highlight effects (there are hardware-optimised Phong shading algorithms, but they're still slower than Gouraud). Then there's deformation mapping (Renderman again), etc. etc.
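For anyone who has never seen the two side by side in code, the whole difference is what gets interpolated across the triangle. Here is a minimal sketch in C (diffuse term only, one point interpolated between two vertices; all names are made up for illustration - the gap is most dramatic with specular highlights, which Gouraud smears away whenever they fall inside a triangle):

#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 norm(Vec3 v) {
    float len = sqrtf(dot(v, v));
    Vec3 r = { v.x/len, v.y/len, v.z/len };
    return r;
}

/* Diffuse (Lambert) intensity; 'light' must be a unit vector. */
static float shade(Vec3 n, Vec3 light) {
    float d = dot(norm(n), light);
    return d > 0 ? d : 0;
}

/* Gouraud: shade the two vertices, then interpolate the COLORS. */
static float gouraud(Vec3 n0, Vec3 n1, float t, Vec3 light) {
    return (1 - t) * shade(n0, light) + t * shade(n1, light);
}

/* Phong: interpolate the NORMAL, then shade at the pixel. */
static float phong(Vec3 n0, Vec3 n1, float t, Vec3 light) {
    Vec3 n = { (1-t)*n0.x + t*n1.x,
               (1-t)*n0.y + t*n1.y,
               (1-t)*n0.z + t*n1.z };
    return shade(n, light);
}

int main(void) {
    Vec3 n0 = { -1, 1, 0 }, n1 = { 1, 1, 0 }; /* normals leaning apart */
    Vec3 light = { 0, 1, 0 };                 /* light straight above */
    /* Halfway across, Phong sees a normal pointing straight up (full
       brightness 1.0); Gouraud just averages two dimmer vertex values. */
    printf("gouraud: %.3f  phong: %.3f\n",
           gouraud(n0, n1, 0.5f, light), phong(n0, n1, 0.5f, light));
    return 0;
}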
I believe that Quake 3 etc. does use radiosity algorithms, but that doesn't need to be done in real time, just when the level is compiled.
HH
Re:How fast do we really need to go? (Score:5, Informative)
You're referring to PhotoRealistic Renderman (PRMan), the actual product developed by Pixar. It uses the REYES algorithm.
RenderMan is a specification for defining 3D scenes, much as PostScript is a specification for defining 2D documents/images. There are many renderers that are RenderMan compatible, including raytracers such as BMRT.
primitive shading model there is (correct me if I' (Score:2)
Re:How fast do we really need to go? (Score:2)
Although, from a bandwidth perspective hardware tessellation will be a huge step.
If we could describe a sphere as a sphere instead of 300 triangles, an arc as an arc instead of 200 triangles, then who needs AGP graphics cards, or even PCI for that matter?
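To put a number on the bandwidth point: a sphere is fully described by a center and a radius (four floats), but to draw it today the application ships triangles over the bus. A small C sketch of the usual latitude/longitude tessellation (the 16x16 grid is an arbitrary choice for illustration) shows where those hundreds of triangles come from:

#include <stdio.h>
#include <math.h>

int main(void) {
    const double PI = 3.14159265358979323846;
    const int stacks = 16, slices = 16;
    /* Two triangles per grid cell (ignoring the degenerate cells at
       the poles): 2 * 16 * 16 = 512 triangles, each with three
       vertices, for a shape that four floats describe exactly. */
    long tris = 2L * stacks * slices;

    for (int i = 0; i <= stacks; i++) {
        double phi = PI * i / stacks;             /* latitude */
        for (int j = 0; j < slices; j++) {
            double theta = 2 * PI * j / slices;   /* longitude */
            printf("v %f %f %f\n",
                   sin(phi) * cos(theta),  /* x */
                   cos(phi),               /* y */
                   sin(phi) * sin(theta)); /* z */
        }
    }
    printf("# %ld triangles for one unit sphere\n", tris);
    return 0;
}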
Re:How fast do we really need to go? (Score:2)
Experiments and research have shown (take a look at some SIGGRAPH 99-01 papers) that even at high triangle counts, Gouraud-shaded objects still look inferior to the same object Phong-shaded using fewer triangles.
And the bomb is, texture-based Phong shading is a relatively simple technique to implement, gives good results if you have good texture hardware, and is fast.
Even with T&L, I highly doubt a Gouraud-shaded object with triangles "dense enough" to look on par with the same Phong-shaded object with fewer triangles would render faster. (In fact I'll bet you it'll render slower.)
Re:How fast do we really need to go? (Score:2)
But even in raytracing, the overwhelming majority of objects are made up of triangles. Besides quadrics, most things are just converted to triangles anyway.
With a ray-traced image, shadows, reflection, and refraction are accurate and free (no extra CPU time needed), but the rendering itself takes a while.
While these effects do somewhat come "for free", reflection and refraction do take extra CPU time. For each reflection/refraction point, you have to shoot another ray. It's free in the respect that it's the same algorithm, there isn't any fancy special-case nonsense to get these effects. But, for example, if the original ray hits an object that is both reflective and translucent, two more rays are shot, and each of those rays need to interact with other objects, possibly shooting more rays. Try rendering a scene with a tracing depth of 1, then try it with a depth of 10. Big difference.
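The depth cost is easy to see in code. Below is a toy sketch of the recursive core in C (trace and its constants are invented for illustration, not taken from any real renderer). A surface that both reflects and refracts spawns two rays per hit, so tracing depth d can cost up to 2^d - 1 rays per pixel:

#include <stdio.h>

typedef struct { double r, g, b; } Color;

/* Toy recursion skeleton: scene intersection and shading are
   stubbed out; only the branching structure matters here. */
Color trace(int depth) {
    Color black = { 0, 0, 0 };
    if (depth <= 0)
        return black;                /* recursion cut off */

    Color local = { 0.2, 0.2, 0.2 }; /* ...local shading stub... */

    /* A hit on a reflective AND translucent surface: */
    Color refl = trace(depth - 1);   /* one more ray */
    Color refr = trace(depth - 1);   /* and another */

    Color out = { local.r + 0.5 * (refl.r + refr.r),
                  local.g + 0.5 * (refl.g + refr.g),
                  local.b + 0.5 * (refl.b + refr.b) };
    return out;
}

int main(void) {
    for (int d = 1; d <= 10; d++)
        printf("depth %2d: up to %4ld rays per pixel, red = %.3f\n",
               d, (1L << d) - 1, trace(d).r);
    return 0;
}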
It's a ways off, but I'd guess that twenty years might see such technology within the reach of wealthier consumers.
Man, I hope it's not that far off...really, if you think about how far things have progressed even in the past 3 years, it's not hard to imagine Toy Story-like graphics in games in just a few years.
Re:How fast do we really need to go? (Score:2)
I suppose letting the user adjust the raytracing depth would provide an easy way to let the user adjust the balance between frame rate and graphical detail.
Current games usually have about 20 different variables you can tweak to achieve the same result, but it's often hard to tell which variables are actually having an effect, let alone what some of them do!
Re:How fast do we really need to go? (Score:2, Interesting)
What was being discussed was not 1x1 traced, static geometry localisation demos with environment mapping and scanline rendering. What was being discussed was full-fledged, true real-time raytracing of complex scenes with lighting and reflection taken into account in the trace passes. Show me something with even the complexity of Doom in a real-time raytraced environment, I'll be impressed. Half-way demos like this (not that they aren't pretty) just don't cut it.
Re:How fast do we really need to go? (Score:2)
Only 7 Years? (Score:4, Interesting)
Rendition? Sure I remember... (Score:2, Insightful)
Anyway, thanks for asking if I remembered Rendition!
Cheers.
Re:Rendition? Sure I remember... (Score:2)
Pretty darn good game still, hell, damn GREAT game still, heh.
I remember looking at a screenshot from GPL printed on the then Highest Resolution Printer In The World(tm), a Lexmark Z(whatever). It damn nearly looked like a photograph.
When I saw screenshots of it directly, hell, it DOES look damn nearly like a photograph!
Whatever API they used, and however it interacted with that chip, was damn powerful; it blew the living shit out of anything to come for another 2 years or so.
Demo scene. (Score:2)
--saint
Re:Demo scene. (Score:3, Informative)
Future Crew reassembled, broke up, faded out, tried to get into the games biz, etc. Their site has been offline for a while.
Fortunately the demo scene lives on; pouet [pouet.net] hosts links to nearly every demo in existence across multiple platforms. And to keep us on topic, most demos nowadays are 3-D accelerated. It's become less a game of "What techie tricks can you do?" and more a game of "How artistically can you use the technology?". There are some visually striking demos being made nowadays, and not just because they have shadebobs or glenz cubes.
scene.org! (Score:2)
Re:Demo scene. (Score:2)
How artistically can you use the technology?"
For me it was the music in sync with the visuals. A demo can have the coolest visuals ever, but if the tunes suck...
Re:Demo scene. (Score:2)
Ok...anyone remember Kosmic Free Music Foundation's "Little Green Men"? What about good old Cubic Player and .mod's?
Re:Demo scene. (Score:2, Informative)
The actual Future Crew is no more, but many of the members have been active in various projects, for example in the gaming industry. You may have heard of Max Payne, made by Remedy Entertainment [remedy.fi], or 3DMark, made by MadOnion [madonion.com]. Though not really related to FC, they both employ former FC members, and may be the best-known examples. As for other demosceners, some Byterapers members were involved in Rally Trophy, made by Bugbear [bugbear.fi]. It also features some music by Purple Motion/FC.
Any other examples of demosceners, perhaps from outside of Finland?-)
Re:Demo scene. (Score:3, Informative)
Palm version here [astraware.co.uk], but I'm guessing it lacks the tunes
Slashdot new Slogan (Score:2, Offtopic)
Just seems to be a lot of history stuff lately.
Yes i remember bitboys... (Score:4, Informative)
- Glaze3D claims 300fps performance, Sept 29, 1999 'The Glaze3D chip is at the moment in the last stages of the silicon implementation, and when it is done, the database goes to Infineon for processing. We will get the first prototypes in the very beginning of the next year. We have at the moment a 100% accurate simulation model of the chip, which can do millisecond accurate runs. From these we have been able to 'calculate' that to draw one Quake3 frame takes 3-5 milliseconds, which means 200-300fps on behalf of the graphics card.'
- Glaze3D announcement in March, Feb 15, 2000 [shacknews.com]
- Glaze3D misses announcement date, Apr 1, 2000 [shacknews.com]
- Glaze3D shooting for 2001 , Apr 10, 2000 [shacknews.com]
- John Carmack on BitBoys, Oy?, Jan 9, 2001 [shacknews.com]
I have never had any contact with Bitboys, let alone seen their technology. I DO NOT ENDORSE THEM, and I am disturbed that they are saying that I do. There is room in the abridged quote to have this be a misunderstanding, perhaps something along the lines of me endorsing some particular track of future technology which they implement. John Carmack
- BitBoys claim silicon being manufactured, Jan 9, 2001 [aceshardware.com]
"I know we have been shallow in our comments, but I can reveal that chip design is complete and chip is being manufactured", says Juha Taipale, general manager.
Much of this was shamelessly compiled from a great news source Shacknews.com [shacknews.com]
Hmmm... (Score:2)
When I look at the timeline on the first page, I realize that every video card I've bought since the Voodoo1 was bought within two months of release (Voodoo1, Matrox Millennium II, Voodoo2 and GeForce 256 DDR (which I still use)). I bought the Voodoo1 in November 1996, and that was really a quantum leap. I remember I had to crack the beta version of the Tomb Raider 3dfx patch to make it run with the CD rip :) And glQuake... ooohhh...
My timeline is kind of similar (Score:2)
Upgraded to 4mb
ATI All-in-Wonder Rage Pro (with a K6200)
Voodoo2 add-on (Needed this for Unreal)
GeForce (with an Athlon 550) (Needed this for Unreal Tournament)
I still use the GeForce Athlon combo.
If only all that 3D research had gone into.... (Score:2)
Anyone got the story text cached? Because the server is El Hosed.
Yes, I remember Rendition (Score:2)
I seem to remember there was even a custom Quake binary written for the Rendition API, because the card didn't have OpenGL drivers when it came out.
Heh. Remember when a "good" ping was anything under 300?
C-X C-S
I remember 100k tri/s for $100k (Score:2)
For fun, I hacked up xbiff to make pexbiff, complete with a 3D mailbox and 3 bouncing point lights. It pushed the limits of the machine -- every time you got mail, the mailbox would animate and the lights would bounce. Aaahh... the days before spam...
more than that (Score:2)
VGXT "Skywriter":
http://antero.reputable.com/~skywri
Reality Engine:
http://www.futuretech.vuurwerk.nl/re.htm
http://www.futuretech.vuurwerk.nl/rehicig.html
calling Dr. Jim Clark... (Score:3, Insightful)
I love playing with the SGIs at work and I enjoy playing with the wizbang PCs that my roommates and I have, but to be honest, I'm really not that impressed with modern gfx accelerators. The original geForce was pretty neat, and SGI's last big leap (InfiniteReality in '95) was cool... but golly, things really haven't changed much since Clark and his gang from Stanford opened our eyes to 3D in '82.
We've gone from cabinets to cards to chips to a single chip. We've added some gfx extensions and now do multiple rendering passes to make things look prettier... but really, nothing has changed much in the recent years. It's smaller, faster, cheaper. Steady evolution... but so is the scum growing in my bathroom sink.
Please excuse me while I yawn.
Re:calling Dr. Jim Clark... (Score:2)
3D could use some work. Very few applications and displays make good use of it.
Rendition? Did you say Rendition? (Score:3, Interesting)
I still have nightmares about developing for the Rendition Verite 1000, which was a lovely graphics decelerator on anything faster than a P100. When we got our first batch of Voodoo 1's delivered, there was a brief but very ugly struggle to get our clammy hands on them. You ain't seen pathetic until you've seen geeks wrestling and squealing like stuck pigs over 4Mb graphics cards, let me tell you.
Question to anyone else who has developed 3D graphics: who did you find driving the demand? In our games house, there was a running battle between the programmers and the artists. Us code monkeys were forever on at the artists to cut down the polygon counts, but they kept trying to slip in models that were barely stripped down from the FMV sequences. In the end, we came to an equitable solution: they won, the game ran at 10fps, and all the programmers left.
I wonder how many other games were ahead of their time in that regard, and how many of them would be rescuable given cards that scoff at polygons and eat dozens of 256x256 textures before breakfast?
Moore's law applied to 3D graphics (Score:2)
http://www.3dlabs.com/product/technology/mooresla.htm
Unfortunately it's a company paper and very biased towards the 3Dlabs Wildcat. That, and it's a bit dated. Then I found a Microsoft Research pdf:
http://amp.ece.cmu.edu/ECESeminar/slides/Whitte
it's an interesting read, but not 100% relevant. Anyone else have relevant info?
Re:Moore's law applied to 3D graphics (Score:2)
~GoRK
Ah, 3d hardware. The beginning of ugly. (Score:2)
Yep, after all the fanboys started demanding games in 3D and the game companies turned to supplying them, the effective graphic quality of computer games plummeted, and has only now maybe reached the beauty that we had at the pinnacle of sprite-based games. Sure, you could only see one side of the monsters, etc., but they were good-looking monsters - none of these chunky triangular-looking things that didn't even have fingers, toes, etc. and were plastered with dim-looking repetitive textures.
3d is almost getting good enough that I can stand to look at it. But for a while there, it really made games look a lot worse, just for some undefined promise of realism that was never really satisfied until maybe recently - those early 3d games just looked unrealistic in different ways than the 2d ones had. It's like the gaming industry fired anyone with taste and just kept all the techs.
OK, I think I'm done ranting now.
Re:Ah, 3d hardware. The beginning of ugly. (Score:2)
As an avid console gamer who had stopped gaming for a couple of years after the PSX succeeded the throne of console dominance from the Super NES, I understand _exactly_ what you're talking about. In my absence from the gaming world, I lamented the "death" of 2-D at the hands of ugly, boring, primitive 3-D graphics. Even within the SNES era, people raved over games like StarFox, a 3-D game which had very little appeal for me, but which for many was a vision of how games should look and play. Meanwhile, I foresaw that it would be many years before 3-D graphics would even approach the beauty of sprite-based graphics, and that's turned out to be true IMO.
However, it should be noted that there are examples of games that utilize 3-D graphics while maintaining 2-D gameplay and feel to great effect. One example that comes to mind for no real reason is ThunderForce V, which is a great horizontal-scrolling shooter (aka shoot-em-up or "shmup [shmups.com]") that uses 3-D graphics which are small enough to be somewhat detailed. The kicker comes when encountering boss enemies, where the camera seamlessly zooms and rotates around the scenery from the standard side view, taking obvious advantage of the 3-D nature of the graphics.
I now happen to enjoy a lot of games that have made the switch, in all sorts of genres. Some quick examples include Mario (platformer), Zelda (action RPG/platformer), Final Fantasy (RPG), Hundred Swords (SRPG), etc. Street Fighter EX in any incarnation will never be able to replace its 2-D progenitor for me, but in many other ways, I've come to tolerate 3-D graphics in games where its usage adds more to the gameplay than it detracts from the visual appeal.
In this last regard, I think Nintendo's Legend of Zelda: Ocarina of Time was revolutionary for me (as it's the one game that brought me back to console gaming). It was full of rough graphical edges, but to be fair, its immediate 2-D predecessor on the Super NES used graphics that were small and fairly undetailed, and in comparison were less impressive overall at the time. Ocarina of Time is still beautiful, and 3-D graphics helped that game achieve incredible depth, as well as a fantastic sense of the sheer vastness of the game world.
< tofuhead >
Interesting trend. (Score:3, Interesting)
The companies that have long red lines (meaning the time it took them to ship after their announcements, i.e. HYPE) are all gone!
The ones that kept a relatively consistent schedule are still around. Once again, a smart business plan wins, not super-hyped, unproven stuff [xbox.com].
(On a side note, I wonder how long the line would've been for the Xbox!)
Re:Interesting trend. (Score:2)
Nvidia and ATI (Score:2)
Heh. Could this have any factor in their success?
3d hardware was the end ... (Score:2, Insightful)
I had been involved in demo coding for a while as a high school student, and we had managed to implement a 3D software engine which worked really well, even at higher resolutions. The most important thing is, we were having lots of fun.
Maybe I'm wrong now, but I believe the first version of Quake came out without any kind of 3D acceleration (everything was done in software; they just wrote almost perfect code).
But one day, well, 3D hardware came out and the whole thing wasn't fun any more. In the beginning it was very difficult for a single person to develop something decent using 3D hardware (because of a lack of good docs), while big companies started to produce lots of games using 3D acceleration, which were very badly optimized.
Well, I don't know, I still think that 3D acceleration took away a big part of the intellectual work due to the optimization process of code in games. Of course there were, and still are, exceptions.
Re:3d hardware was the end ... (Score:2)
I remember how much I thought it sucked as soon as I got my Voodoo 1, because the Mystique couldn't do texture smoothing.
graspee
Is this really an event worth tracking? (Score:2, Funny)
Any processor intensive application will spawn modular add-ons to take some of the burden off the CPU. So long as the task itself, of course, is generic enough to have a sufficiently large market. Basic economics.
By saying there was no proper 3D graphics before the advent of the accelerators, you are doing a great injustice to the demo [hornet.org] scene as it was back then. Remember the 256 byte competitions? The 1 kb and 4 kb competitions? Now here were people who knew how to milk code for every iota of juice that was there. The (almost) forgotten art of Code Optimisation.
Heck, there was 3D graphics on my old Commodore 128; I still have Elite. What do you call the original Battlezone? The only difference was, there wasn't any specialised add-on card to do this task on the market back then.
I don't mean to disrespect current makers, researchers, coders, and gamers. I just think there's got to be many more significant birthdays to commemorate.
How about a feature on the demo scene on slashdot? The younger crowd will appreciate the demos, and we'll get these funny comments from the war-torn 386 vets about how they used to make their own transistors out of sand...
Re:Is this really an event worth tracking? (Score:2)
High-End stuff? (Score:2)
But there was some killer high-end stuff for the PC architecture!
Anyone remember the Intergraph workstations? They had custom 3D hardware. In late '98 (or was it early '99?) we had an Intergraph with Wildcat graphics. 16MB framebuffer and 64MB texture (I may have it backwards). Highly accelerated, and killer. We used it to run ballistics and weather simulations.
I think they misunderstand Matrox's position... (Score:2, Insightful)
1: The best multimonitor around
They started it, they perfected it, they can do different resolutions under Windows 2000 (they were the first, if not the only ones).
2: Excellent overlay characteristics
Wanna use a TV tuner card at high resolutions? Ignore nVidia. From my experience, programs that use overlay really like Matrox's cards; with the DVDMax feature that allows any overlay to be displayed on the secondary monitor, you can port DivX video out to the TV. Also, overlay works at much higher resolutions than on nVidia solutions. I don't want to turn my 19" down to 1024x768x16-bit just to watch a DVD; my 14" runs more than that.
3: Acceptable 3D performance, exceptional 3D quality
Although it's not the fastest card on the block, it will still play virtually all games at least acceptably. And when you are playing them, they have a low amount of artifacts and the textures are well drawn.
4: 2D quality
Although it's much overlooked, it's what most people stare at a majority of the time. Matrox makes their own boards so they can have tight control over the filtering components.
I've used a couple of S3 cards (low end), Permedia 1 and 2 cards, a Riva128ZX, TNT, and TNT2, and Matrox MGA, G400 and G450 cards. So far I have to give props to Matrox for a product that matches my needs. Granted, my needs are different from most.
(triple monitors w/ TV tuner and a lot of video player programs)
Re:I think they misunderstand Matrox's position... (Score:2)
Matrox could really clean house.... (Score:2)
This is why I'm REALLY hoping that Matrox makes another stab at a high-end 3-D graphics card that can compete against the Radeon 8500 and GeForce4 Ti4200/Ti4600 series but still offer the unrivalled 2-D display quality Matrox is famous for. Using the modern 0.13-micron process to make the next-generation Matrox chip, they could easily offer industry-leading graphics acceleration and MPEG-2 decoding equal to that of the GeForce4 series. Such a card--even if it costs slightly more than the cards that use the GeForce4 Ti chipsets--would be instantly lapped up by gamers who want the clearest graphics display.
Incomplete picture. (Score:2)
To ignore the early GLINT work from 3DLabs and not give them their own column in the table is a bit unfair.
The Number9 stuff is missing (no great loss).
Other early work is missing, for example SGI's PC graphics card, which predates all of this by about 5 years.
Enough with the Polygons, How about Ray Tracing (Score:2)
Has anyone tried to make a GPU for ray tracing? Good ray-traced scenes can be much better than the scenes drawn by polygon engines.
Yeah, it would mean a whole change of code for current software. D3D would have to change, or maybe have another API beside it, say DirectRay. But the rendering would really get better. Today's hardware should be able to handle the load. And it should scale well also: more GPUs equals more parallel rendering of pixels.
Imagine a truly ray-traced virtual world. {shudder with anticipation}
Re:Enough with the Polygons, How about Ray Tracing (Score:2)
(other than the vast decrease in memory bandwidth usage from a real scene description language, compared to describing stuff in terms of triangles)
Smoother silhouettes due to the use of NURBS rather than triangles? Sure. We have T&L - just increase the number of triangles in the scene.
Different lighting effects? Pixel shaders can do that too.
Refraction and Reflection? Even Voodoos could fake that reasonably well with environmental maps.
Both approaches fail to render good-looking global illumination effects without faking it with textures or "ambient" lights.
I say, add support for non-triangle scene description and back that up with hardware-accelerated meshing. Then the quest for the ultimate triangle rasterizer is complete.
The real beginning of 3D hardware (Score:3, Interesting)
I saw one once, at Case Tech, in 1969. About six racks worth of hardware. Nobody really knew what to do with it.
Best performance (Score:2)
I have tried the ATi Rage 128, Voodoo2/3, nVidia GeForce 2 and nVidia GeForce MX. Without exception, all these cards perform better under Linux than under Windows.
No, this is not a troll. I have got people to try Linux after they have seen Tactical Ops running on my Slackware-powered laptop.
The best example of this was running Return To Castle Wolfenstein on my GeForce 2. It played OK under Win2K (latest DirectX, drivers etc.) - but I had to run it at 800x600 for it to be playable. Running the same binaries under Transgaming's WineX, I could bump it up to 1280x1024 and get a better framerate than under Windows at 800x600!
The best supported cards I have owned have been the nVidia cards. Regular driver releases available for both Windows and Linux from their web site - I challenge you to find another gfx card supplier that does the same!
Ok, so part of the drivers are binary only. I say : so what? Nvidia are good at maintaining them, they know the card best and seem happy to support us, so why moan?
Now I'm waiting for my Geforce 4 Ti4600 to arrive...
More than 7 years ago. (Score:2)
PPA, the girl next door.
Re:Just incremental improvements (Score:3, Interesting)
Amen, brotha... my only real PC at home is a PII/400 with a pair of Creative Labs Voodoo2 cards running SLI. Win98SE is stable (enough) for the few games and utilities I run on it. And 56 FPS in Quake2 and 41 FPS in Unreal is good enough for me.
Re:Just incremental improvements (Score:2)
I bought Shogo in the old-games area at the store, tried to play it, and had to turn off the Z-buffer/W-buffer to play.
Kind of a shame when your old games won't play on new hardware. Lucky I have a P233, Voodoo2 SLI/TNT2/SB AWE32/128 megs with Win98.
Re:I remember tons of stuff. (Score:2)
And that's why you shower regularly!
I thank you...
Re:Hmmmm Matrox looks pretty good... (Score:2)
I use a Matrox G400 DualHead AGP myself and frankly, I have YET to see a graphics card using the ATI or nVidia chipsets that matches the amazing sharpness of the current Matrox AGP cards on 2-D graphics. These cards are perfect for business users, desktop publishers and CAD/CAM users, where picture quality takes precedence over 3-D graphics speed.
This is why I really want to plead to Matrox to develop a no-holds-barred 3-D graphics card that can match the ATI Radeon 8500 card and any card that uses the nVidia GeForce4 Ti4400/Ti4600 in terms of 3-D graphics acceleration and also assist in MPEG-2 decoding for DVD movies, but still maintain the legendary display quality sharpness Matrox is famous for. I can say that even if the resulting card costs slightly more than cards that use the GeForce4 Ti chipsets you know gamers are going to lap this potential Matrox card up in a New York minute, mostly because many gamers have been disappointed by the sub-par sharpness of ATI and nVidia chipset graphics cards.
Re:I'm a 3D gaming junkie.... (Score:2)
Also, older cards can be sold on Usenet when you upgrade. I did that quite a few times (I sold my stuff at half the price I was going to pay for my upgrade, and it worked) until one day I decided to put all in-use hardware into my girlfriend's computer when I upgrade, then sell hers. This chain has been working pretty well.
Re:Video Card Woes (Score:2)
However, you can really open your eyes by paying $10 for a used i740 card.