Graphics Software

Voxel/Polygon Accelerator 153

G. Waters writes: "Ars Technica writes that '3DLabs and Real Time Visualization have teamed up to design an accelerator that accelerates both voxels and polygons in the same scene.' A link to the announcement can be found here. Perhaps voxels will become more mainstream with similar developments." I'm still waiting for the cards with accelerated Bezier patches, but this is cool too. *grin*
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • I'm pretty sure he used dict [dict.org] which checks quite a few dictionaries including Websters, the jargon file, and the gazetteer.
  • by eval ( 8638 ) on Friday August 11, 2000 @05:06AM (#862710) Homepage
    That definition is only partially correct. Unfortunately, it falls into one of the oldest mental traps of the graphics world, thinking of pixels as squares and voxels as cubes.

    Pixels and voxels are zero-dimensional samples of some 2D image or 3D volume. Thinking about them as squares, gaussian splats, or something other than samples is the path to the Dark Side.

    For more info, read Alvy Ray Smith's Tech Memo, "A Pixel is Not a Little Square, a Pixel is Not a Little Square, a Pixel is Not a Little Square! (And a Voxel is Not a Little Cube)" available here [alvyray.com].

  • by Anonymous Coward
    Really pisses me off when people forget that the ACCELERATOR is on the RIGHT.

    Over here, the ACCELERATOR is on the left. I know /. is hosted in the US, but hackers from other countries read it too, you know.
  • I agree. Novalogic made a great game with this technology. Nearly all of the reviews I've read have suggested that it would be an exceptional game "if only" it had used a polygon engine. Much advancement has been made with polygons, so it's easy to compare a relatively mature technology with one that is being freshly implemented in a field it was not initially intended for. Have any of you guys seen DF2 at 1024x768? The terrain looks gorgeous. The frame rate (at least on my system) is unplayable, but so would be an unaccelerated polygon scene with the same level of detail and distance. Keep in mind that the maps in DF2 are rendering 2000+ meters of visibility, including grass, stones, and special textures like railroad tracks (made with voxels and not polygons). To do all of this, while rendering misc. polygon objects at an acceptable frame rate @ 640x480, without acceleration, seems to show just how valuable this technology could be with a little advancement. The fact that this announced graphics card will support both polygons and voxels seems to make this a technology we really should be talking more about. G. Waters "Sigs Cause Cancer"
  • My undereducated opinion is that a subdivision surface scheme would make more sense than a NURBS or Bezier Patch one, both from a modeling and a hardware implementation point of view. There was an interesting paper (http://24.19.151.16/RI_DSS.html) in the subdivision session at Siggraph this year which describes a technique the presenter claimed will be useful for a hardware implementation (this was supposedly in contrast to a similar paper which followed his in the same session: http://www.cs.caltech.edu/~ivguskov/papers.html#normalmeshes).

  • I think you mean "vowels," No? :)

    -Vercingetorix
  • The chip can do 256^3 at 20fps. Larger sized volumes can be held on the board (the current board has 256 MB of RAM) and are handled transparently by the library they supply. So, you probably have nothing to worry about. I routinely use 512^3 volumes and get maybe 4 frames per second. I would guess future boards would support larger volumes and be faster at doing it.
  • I know the Edge 3d had one version with hardware Sega Genesis emulation. The Saturn was very similar to the Genesis (backwards compatible, I believe?) so I think you're right on that....

    Walter H. Trent "Muad'Dib"
    Padishah Emperor of the Known Universe, IMHO
  • Oops. Please make that ~1 fps for 512^3.
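The jump from 20 fps at 256^3 to ~1 fps at 512^3 tracks the data growth. A back-of-the-envelope sketch (the one-byte-per-voxel figure is an illustrative assumption, not the board's actual format):

```python
# Dense volume sizes: doubling the resolution multiplies the data by 8.
# Bytes-per-voxel here is an assumed illustrative value.
def volume_bytes(n, bytes_per_voxel=1):
    return n ** 3 * bytes_per_voxel

mib = 1024 * 1024
print(volume_bytes(256) / mib)  # 16.0 MB at one byte per voxel
print(volume_bytes(512) / mib)  # 128.0 MB -- 8x the data to march through
```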
  • Can't anybody read English anymore?

    he said: Really pisses me off when people forget that the ACCELERATOR is on the RIGHT.

    So it pisses him off when people FORGET that the accelerator IS on the right.

    Should I repeat that? He is stating that the accelerator is on the right, but some people forget about that from time to time, and that pisses him off.

    He was trying to be humorous.....

    Johan V.
  • Basically for everything but the landscape.

    But it used bumpmapping and anti-aliasing so those looked quite nice for software too.
  • Genesis emulation? Possible, but why?

    The Saturn hardware has dual Hitachi CPUs. The Genesis used a 68000 chip. Now, there was an upgrade for the Genesis that plugged into the cartridge slot, and then cartridges went into this thing. I forget the name, but it also had dual Hitachi CPUs, but they were slower.

    The dual CPUs are why the Saturn flopped. The PS also has dual CPUs, but the software library that came with it took advantage of that. The software libraries that came with the Saturn didn't, so programmers had to balance their programs across both CPUs themselves. This meant that the initial games only used a little over 50% of the power available, whereas the initial PS games used over 80% of the available power.

    Also, the second CPU of the PS had the same core instructions, but they stripped parts of it and added some other parts, so the stuff they added might have given it an additional edge. Well, that's what I heard at least. The docs I'm able to find aren't clear on whether that second chip is a modified r3000 or if it is totally different, and I'm not familiar enough with the r3000 to tell just by looking at the instruction sets.
  • Supporting voxels for things like medical imaging is good, but I don't think it's such a big thing for games. Yes, voxels can be used for landscapes, and even characters. But do we even need that third dimension of resolution? When I'm playing a game, I don't see through a character, so wouldn't it be a waste to define elements inside the character? (Though a neat trick there would be to define his guts so shooting him would reveal them at any level of penetrati--Geez, I'm disturbed. There are other ways of doing that anyway.)

    A landscape could be defined with things like curved surfaces and probably be much more efficient. From what I understand about 3D graphics, using voxels to store something like a 3D object as opposed to polygons is akin to using a huge array instead of a linked list to store data in memory.
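The array-versus-linked-list analogy can be made concrete: a dense voxel grid stores every interior sample of a solid, while a surface representation only needs the skin. A rough lattice-counting sketch (the sphere and radius are arbitrary illustrations):

```python
import math

# Count the voxels a solid sphere of radius r occupies, versus a rough
# estimate of its surface cells -- the interior term dominates as r grows.
def solid_sphere_voxels(r):
    return sum(1 for x in range(-r, r + 1)
                 for y in range(-r, r + 1)
                 for z in range(-r, r + 1)
                 if x * x + y * y + z * z <= r * r)

r = 20
interior = solid_sphere_voxels(r)   # ~ (4/3) * pi * r^3 samples
surface = 4 * math.pi * r * r       # ~ cells a surface mesh must cover
print(interior, round(surface))     # the volume term dwarfs the surface term
```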
  • so..umm...what is one? you seem to know... What advantages do they provide? What are nurbs and what do they provide?

    ---
  • Way OT but in the UK the accelerator is on the right. It's the pedal nearest the door.

  • by ceswiedler ( 165311 ) <chris@swiedler.org> on Friday August 11, 2000 @05:16AM (#862724)
    No, not Apache the web server: a helicopter-sim game for the PC several years ago had voxel-based rendering. I remember the lead programmer saying that they had constructed simple shapes for the landscape, then used an erosion simulator to wear away the voxels. Take a flat surface, run a "river" through it, and calculate which voxels are removed. That's something you can't really do with polygons.

    It generated much more realistic landscapes than anything else at the time. Does anyone remember the title?
  • Comanche. It had a really impressive landscape engine for its time. The heli movement near the landscape and the shading left something to be desired, but the overall landscape is still hardly matched.
  • by Mike Schiraldi ( 18296 ) on Friday August 11, 2000 @05:21AM (#862726) Homepage Journal
    Pointless unless you're gaming or rendering...

    On the contrary, i use voxels for word processing.
    --
  • It was called Comanche (therefore, not an Apache game). It was cool.

    - Rackham

    "You can't protect anyone.... You can only love them."

  • It was "Comanche", IIRC there were sequels also.

    More recently there was "Delta Force II", using software voxel rendering. As a game I didn't really like it, and as an engine, well, I didn't really like it ;)

    It just couldn't perform well enough in software to compare to the combined might of CPU+3D card that polygon based engines get to use. It's a shame, because it had some really good points:

    * The terrain was really 'curvey', none of the up-a-ramp, down-a-ramp of the polygon style.

    * The grass was cool.

    * Erm... ;)

    Who knows, if these cards actually deliver (modulo cost, entry point, programming, etc), it might become a more popular approach.

    best wishes,
    Mike
    ps) I notice they will have development kits for linux - hooray!
  • Hmm, if it's several years ago, maybe you meant "Comanche" from Novalogic. It definitely looked voxel-based. There could also be a game called "Apache" but I can't remember an old one; there is a somewhat new one by EA called Longbow or something like that which uses Apache helicopters, I think. Haven't played it though, so I don't know if it's voxel-based or not.
  • After TaOS was mentioned here a while back, I was inspired to go and check... and sure enough I had the issue of Edge magazine that had TaOS as its cover. But more to the point.

    While perusing other issues, I recall reading about a voxel-based 3/4 view helicopter game made for the Amiga. It sounded quite interesting, because it also allowed for realtime landscape deformation. You could blow a huge hole in the ground to force the enemy someplace else where you had a more strategic advantage. It sounded quite neat. I also can't remember the name. Plus I'm not even sure it's what you're talking about ;)

    Just thought I'd share...

  • I'm waiting for the hardware companies to start using more intelligent bus systems like I2O (over PCI or multiple AGP) to allow for new and improved systems.

    For gamers, this could mean a 3D card that stores scene description data and allows the sound card and video card to intercommunicate with it, doing co-rendering (one card handles the scene itself as a mathematical entity, the others handle mapping the sounds and/or images).

    These types of interactions between hardware are difficult because of competition, of course.
  • don't flame, I don't game...

    there's not too much point to acceleration. Memory, yes, but acceleration for desktop machines that are used for practical purposes besides rendering is worthless... I think we're going to hit critical i-dont-care faster with video cards than with CPUs.
    Critical i-dont-care being the point where it doesn't matter anymore what is in your system.
  • No because they're already compressed. Can't compress a compressed file, you won't get anything out of it!
  • And that's where the high powered cpus everyone claims no one needs come in.

    For everything that no one needs, it has a complement that everyone says the same about :)
  • Actually, voxel terrains (from the computer gaming world) are not actually voxels. They are 2D heightfields. The misnomer is due to two unfortunate traits that voxel data sets and heightfields share:
    • They both represent 3D data, though one represents data (usually scalars) at XYZ positions and the other represents a Z at each XY position.
    • They can both be rendered via raycasting (raytracing without the bounces), though again, one is 3D raycasting and the other is a sort of 2.5D method.
    The "voxel terrain" rendering method was first described by P.K. Robertson in "Fast Perspective Views of Images Using One-Dimensional Operations" (IEEE CG&A, February 1987, page 47). It's been rediscovered by various people since then. I believe (but don't quote me) that it could be considered a limited form of McMillan's Occlusion Compatible ordering [mit.edu].
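For the curious, the 2.5D method can be sketched in a few lines: march a ray outward over a heightfield, project each sample's height to a screen row, and fill only rows not already covered by nearer terrain. All sizes below are made-up toy values, not any engine's actual numbers:

```python
# One screen column of a Comanche-style heightfield raycast (2.5D, not
# true voxels): nearer terrain occludes farther terrain via a rising floor.
def raycast_column(heights, cam_x, cam_h, screen_h, max_dist=50, scale=20.0):
    column = [None] * screen_h        # screen rows, bottom (0) to top
    highest = 0                       # highest row filled so far (occlusion)
    for d in range(1, max_dist):
        x = int(cam_x + d) % len(heights)
        # perspective divide: height above the camera shrinks with distance
        row = int((heights[x] - cam_h) * scale / d) + screen_h // 2
        row = max(0, min(screen_h - 1, row))
        for y in range(highest, row):
            column[y] = x             # terrain sample x is visible on row y
        highest = max(highest, row)
    return column

heights = [0] * 100
heights[5] = 10                       # a single hill in flat terrain
col = raycast_column(heights, cam_x=0, cam_h=1, screen_h=40)
print(col[20])  # 5 -- the hill at x=5 fills most of the column
```

The flat terrain behind the hill never appears in the column: once the hill raises the occlusion floor, farther samples project below it and are skipped.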
  • Are voxels somehow superior to polygons? It seems like we'll soon have these graphics processors which can render an incredible number of textured/shaded/AA'ed polygons without breaking a sweat. I'm not sure I see what the advantages of voxels are...

    --

  • Pixel: picture-element
    texel: texture-element
    voxel: volume-element

    Have a nice day!
  • by CIHMaster ( 208218 ) on Friday August 11, 2000 @04:42AM (#862738)
    And guess who it's geared for?

    That's right! Gamers and CG people! Really, the more we can dump on hardware for those who need it, the more useful everything is. Useful, that is, to those who want/need it.

    I want hardware based disk compression (do any hds do this already?)!
  • It's meant to render a 256^3 voxel set in real time with full transparency, not to quickly render a displacement mapped surface... which I agree is very useful and has bugger all to do with voxels.

    This chip has near zero relevance to gaming.
  • Only half true. Everyone who's ever used a low-res monitor knows that a pixel really is a little square (or at least a little rectangle). This image-processing stuff is of tenuous applicability - it applies to 1D waveforms, not 2D images. And it works for 1D waveforms, because the basic unit of perception is the tone - a 1D sinewave. The basic unit of visual perception is NOT a 2D sinewave. The fact that JPEG works at all is a fluke; the nastiness of actual JPEG implementations shows how inapplicable this model really is.

    In any case, show me a monitor that does correct 2D reconstruction of an image from these samples. Can't? That's because it doesn't exist. In 1D audio processing there are known ways to reconstruct the 1D "image" given the samples. There is no such postprocessing on any modern monitors. And all this image processing stuff tacitly assumes there is. Ergo, again, it is not applicable.

    To ram the point home, remember that "little square" and "sample" are just two MODELS of limited applicability in different situations. Mankind DOES NOT HAVE a model for image processing which is in any way "correct".

    Calling it the "path to the Dark Side" is just silly.

  • This voxel acceleration isn't even being pushed for gaming. It's being pushed for Augmented/Virtual Reality surgery and oil drilling types of applications. Sure it'd be nice to have a voxel accelerator so when you blow some guy's arm off in a game you can see chunks fly correctly, but it's more important for other applications. I do research in AR, and the faster the accelerator the better. We've already hit walls with $1400 OpenGL accelerators. Sure gaming is nice, but put on a head mounted display and try to make CG things look like they're in the real world and you'll see that acceleration has PLENTY of room to grow.

    Links for those interested in AR:
    rit.edu [rit.edu]
    Media Lab [mit.edu]
    The Navy [navy.mil]

    There are plenty more out there also. VR stuff looks fine for now, but when you're trying to make CG stuff look like real world stuff and have it line up with real world objects, you can use all the acceleration you can get. Until CG looks real we're not there yet.
  • Yup, remember it well. Only it was the Comanche, not the Apache. Check it out over at Novalogic's site [novalogic.com] Also, both Delta Force games utilized the Voxel Space engine. Nifty terrain effects, but the people and buildings tended to not look so hot. Plus, you can't accelerate voxels with any existing card, so that huge Geforce2 you just bought won't give you any huge, accelerated advantage.

    The engine was also used in Armored Fist 3, IIRC...

    For further interest, check out a good review of Delta Force 2 [cdmag.com]. It talks quite a bit about Novalogic's voxel engine.

    -------------
  • I want hardware light wave-tracing in real-time. :)

    Seriously, the use of polygons in graphics has not really done games any favours. Instead of slow but textured graphics, we now have fast but clumsy & low-res graphics.

    IMHO, I'd rather have the quality than the quantity.

  • No, it helps you stop being French more quickly.
  • by tolldog ( 1571 ) on Friday August 11, 2000 @05:40AM (#862745) Homepage Journal
    Instead of knocking out the cobwebs, I will give you the links that I learned from.
    bezier patches [ucdavis.edu]

    Bezier curves [mtu.edu]

    Nurbs [mtu.edu]

    What it boils down to is an easy way to store a curved data set. The display part is trickier... and that is where the acceleration would be nice.
    If you had a curved object, you could break it into polys and have all the triangle points stored in memory, or you can have the control points (and the weights, if used) stored in memory.

    Obviously the math for the polys is faster, but the display isn't as smooth (such as Quake 2). With bezier patches, the display takes more math but is smoother because you are representing curves and not lines.
    When it is all said and done, the math isn't too bad, it is just additional math that needs to be done at 30+ fps.
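That per-point "additional math" can be sketched with de Casteljau's algorithm, which evaluates a Bezier curve by repeatedly interpolating between adjacent control points (a software sketch for illustration, not how any particular card implements it):

```python
# De Casteljau: repeatedly lerp adjacent control points until one remains.
def decasteljau(points, t):
    pts = list(points)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]  # cubic control points
print(decasteljau(ctrl, 0.5))  # (2.0, 1.5) -- a smooth point on the curve
```

Four control points describe the whole cubic curve; the triangle mesh for the same shape would need a vertex for every sample.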
  • And one big disadvantage: the card would have to have the entire scene in fast, local RAM.

    The performance wouldn't be close to what you can get with polygons. A certain console renders a flat polygon in 2 cycles. TWO CYCLES! You can get a lot further with that sort of power than you can having complex recursive algorithms-on-a-chip.

  • The fact you haven't heard the term "voxel" before is more an indicator of your ignorance than anything else. The term voxel goes back at least twenty years, if not thirty.
  • And Comanche 3 runs splendidly on a Pentium 166.
    Warning: Some Rambling follows ;)
    C3 runs decent on my P75 laptop with a 2 meg video card too, of course with detail turned down. I purchased Comanche 3, Armored Fist 2 (another nice game using the same graphics engine) and F-22 Lightning II (which used the graphics engine too) for like $30 in a package called "The Art of War" I believe. This turned me on to Novalogic (www.novalogic.com, as someone pointed out earlier I believe). That prompted me to buy Delta Force, which again used the graphics engine. DF came bundled with F-22 Raptor, which I'm not completely sure if it's the same graphics engine or not. Anyways, multiplayer on DF is really decent; the distance the terrain can get mapped to makes it really nice for long-range combat. If only it didn't lack weapons. I played F-22 Raptor multiplayer on the net a couple times too, although most of the time I was flying a parachute. The original three games, AF2, C3, F-22 Lightning, I played them all over my home network (two or three people at a time depending on which computers were working :) and they were all pretty fun, C3 probably the most, because of the afore-mentioned "popping up out of canyons and blasting off a couple hellfires" or whatever. Uh, oh, anyways, Novalogic really comes out with some decent games. If I only had a newer computer... Delta Force 2 doesn't really run well on my computer :\
  • It's interesting to note that bilinear filtering and trilinear filtering, now implemented in all 3D cards, are exactly the proper point-sample scaling techniques this fellow talks about when discussing how images should be scaled up for use on a monitor.

    Think how much better textures look in Quake 2 and 3 when they are properly sampled with their neighbours and blended for use on the walls, rather than just pixel-replicated (like walking up to a wall in Doom and seeing a square of some ugly, solid colour). Although there are still other ways to make the image quality look worse (compare how the blood/smoke clouds look on a Voodoo2 or a Voodoo5 vs. the square-ish-grid look that seems to be inside them on an nVidia chipset [at least on the NV3, NV4, and NV5 chips] :-)).
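A minimal sketch of the bilinear blend being described (a toy 2x2 texture, and coordinates assumed to fall strictly inside it; real hardware also handles wrapping, clamping, and mipmaps):

```python
# Bilinear filtering: weight the four surrounding texels by distance,
# instead of replicating the single nearest one (Doom-style blockiness).
def bilinear(tex, u, v):
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    return (tex[y0][x0] * (1 - fx) * (1 - fy) +
            tex[y0][x0 + 1] * fx * (1 - fy) +
            tex[y0 + 1][x0] * (1 - fx) * fy +
            tex[y0 + 1][x0 + 1] * fx * fy)

tex = [[0.0, 1.0],
       [1.0, 0.0]]
print(bilinear(tex, 0.5, 0.5))  # 0.5 -- an even blend of all four texels
```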
    ---
  • Voxels are interesting because, once you get to a certain resolution using triangles, most of them project to be about 1 pixel large (or less) on the screen anyway. The idea is that using a cloud of points, for high density images you can get equal or nearly equal image quality with less data (it theoretically takes more data, and processing time, to handle a structured triangle mesh than it does a cloud of points).

    At SIGGRAPH this year there were a number of papers about direct point rendering. (And also about lightfield rendering, which is about drawing scenes without using any geometry at all). Try digging up the proceedings if you are interested in this.

    Hardware accelerated Bezier patches are a lot like hardware accelerated Phong shading: they sound like a great idea, the "obvious next step", unless you're trying to use them to do something real. Just as Phong shading is not a particularly interesting lighting model once you reach a certain level of sophistication, Bezier patches are not very interesting shapes. Yeah, they're curvy, but they are curvy *without surface detail at a higher resolution than the curve*, which is just not very interesting.

    John Carmack had a .plan file 1.5 or 2 years ago about why he thinks Bezier patches are a bad idea and I pretty much agree with him. For the amount of data it takes you to create the shapes you want with Bezier patches, you can construct triangle meshes for the same shapes using less data and less headache.

    Jonathan Blow
    Game Research Scientist
    Bolt Action Software
  • Actually, Quake 1's renderer did this. When a non-static polygonal object reached a certain distance from the camera, it would jump into a different renderer that just plotted the object's vertices. If a particular triangle in the model happened to be more than 3 pixels "large" at that distance (meaning there would be gaps), it would recursively subdivide the triangle, plotting the vertices along the way, until the subdivision yielded triangles 3 pixels in size. Crude, but fast.

    To stay on topic, this accelerator IMHO represents a fairly significant advance in graphics hardware (not that it's new, but that it represents an intent to bring the hardware closer to the consumer market). As good as textured/lit/AA/bumpmapped/envmapped polys look, people need to remember that they're still just approximations. Take any polygonal/curved object, and keep increasing the resolution of detail. Eventually you're going to end up with just vertices. So while those approximations are the hip and in thing now, it's important to remember that eventually they will no longer be sufficient.

    It should also be noted that when they say "voxels" they are talking about actual volume data, meaning a 3d array of samples. Delta Force/Comanche/Bladerunner/Tiberian Sun are all 2d simplifications.
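The recursive subdivision described above can be sketched like this (using an edge-length threshold in world units rather than Quake's 3-pixel screen-space test, to keep the example self-contained):

```python
# Split a triangle at its edge midpoints until every edge is short enough,
# then plot only the vertices -- a cloud of points instead of filled polys.
def plot_as_points(a, b, c, max_edge, out):
    def dist2(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    def mid(p, q):
        return tuple((pi + qi) / 2 for pi, qi in zip(p, q))
    if max(dist2(a, b), dist2(b, c), dist2(c, a)) <= max_edge ** 2:
        out.update((a, b, c))                 # small enough: plot vertices
        return
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    for tri in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
        plot_as_points(*tri, max_edge, out)

pts = set()
plot_as_points((0.0, 0.0), (8.0, 0.0), (0.0, 8.0), 3.0, pts)
print(len(pts))  # 15 -- a regular grid of vertex samples over the triangle
```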

  • "The basic unit of visual perception is NOT a 2D sinewave."

    You're right, it's not the basic unit, but there's evidence that one of the basic elements of visual perception is a 2D sine wave. The receptive fields of some neurons in the human visual cortex can be modeled using Gabor functions [ruhr-uni-bochum.de], which consist of a plane wave and a Gaussian function. This model is useful in describing and modeling pattern perception and edge detection [ptloma.edu].
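For concreteness, a 1D Gabor function is just a cosine wave under a Gaussian envelope (the sigma and frequency values below are arbitrary illustrations, not measured receptive-field parameters):

```python
import math

# Gabor function: a plane wave windowed by a Gaussian envelope.
def gabor(x, sigma=1.0, freq=1.0, phase=0.0):
    envelope = math.exp(-(x * x) / (2 * sigma * sigma))
    wave = math.cos(2 * math.pi * freq * x + phase)
    return envelope * wave

print(gabor(0.0))  # 1.0 -- both factors peak at the center
print(gabor(3.0))  # near zero -- the envelope suppresses distant input
```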

  • A lot of the posts thus far have focused on how cool (or how useless) such a thing would be for games. Though the press release doesn't specifically state it, you can count on cards using this technology to cost thousands if not tens of thousands of dollars at first. This is more for desktop medical imaging, etc. than for games. Of course, as with all things, I'm sure the price will come down and it will eventually find mainstream usage... but that is likely years off.
  • ..and that this was a Novalogic game. I remember almost having a seizure when I saw this running on a 486 for the first time.

    Novalogic have recently received a new patent [ibm.com] on their use of voxels for rendering realtime 3d terrains (see also this patent [ibm.com], or here [digitalgamedeveloper.com]).

    IIRC, the big problem with the use of voxels in the past is that Novalogic have actively enforced their patent, which has made many games companies reluctant to use voxels in games (to represent terrain, in any case. Bladerunner used voxels to represent characters, IIRC). Hardware acceleration may be good, but I'm wondering how many games companies will take advantage of the technology. From the Yahoo article it sounds like the technology is going to be aimed at the professional marketplace anyway.

    In my own opinion, voxels are great for representing distant terrain, but they look horrible at short range (not to mention the memory requirements needed to represent a detailed scene with voxels). With today's TnL acceleration, polygon based scenery is more likely to provoke the response I had when seeing Comanche for the first time.

  • Why don't you just shut your pie hole.

    There is nothing wrong with finding and posting useful information. Isn't that the point of allowing people to post here?

  • > I want hardware based disk compression (do any hds do this already?)!

    Let me guess, for your jpegs, mp3s, and dvds?
    Ryan
  • they are i believe two different methods of providing for curved surfaces. bezier curves for instance are used in q3:a. but i think it's quite meaningless to provide acceleration for them, because current geometry acceleration [geforce 1, 2, mx] *i believe* does that already. hope i could help.
  • #define GLAMATRON_IS_NOT_AN_EXPERT
    #include <grain_of_salt.h>
    /* hopefully if I got it wrong, someone will correct me */

    I believe NURBS is an acronym for Non-Uniform Reticular B-Splines. B-Spline in turn is, I think, bilinear spline. Bilinear I think means that it's got 2 dimensions in which it extends. Of course, since it's curved, it takes up 3 dimensions.. like a piece of cloth. Whereas a normal spline would be like a piece of string. Bezier curves are a form of spline. I would guess that bezier patches are the 2D extension thereof.

    Anyway, splines are a mathematical way to describe smooth curves that change direction a lot. (well, I guess you _could_ describe a hyperbola with splines, but you'd be better off just saying x = 1/y) So, when you take the spline model and extend it into 2 dimensions, you can make nifty curved surfaces like automobile bodies or rippled water or flux capacitance diagrams.. all with a relatively low number of control points.

    Of course, the process of turning a bunch of control points into a matrix of really small triangles takes quite a bit of floating point math.. so it would be way cool for it to be accelerated in hardware. What would be even cooler would be for the hardware to translate it directly into hundreds or thousands of projected pixels.
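The control-points-to-triangles step described above can be sketched by its indexing alone: sample the surface on an n x n grid, then stitch each grid cell into two triangles (the evaluation of the sample positions themselves is omitted here):

```python
# Stitch an n x n grid of surface samples into a "matrix of really small
# triangles": each cell of the grid becomes a pair of triangles.
def grid_triangles(n):
    tris = []
    for j in range(n - 1):
        for i in range(n - 1):
            a, b = j * n + i, j * n + i + 1              # top edge of the cell
            c, d = (j + 1) * n + i, (j + 1) * n + i + 1  # bottom edge
            tris.append((a, b, c))
            tris.append((b, d, c))
    return tris

print(len(grid_triangles(2)))  # 2 -- one quad, two triangles
print(len(grid_triangles(9)))  # 128 -- finer sampling, smoother surface
```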

  • "Jon" Carmack ?

    At LEAST spell his name right. "John Carmack"

    nf
  • It's an older voxel-based game with polygon buildings and vehicles. The terrain is very well shaped, though it's low on detail by today's standards. I especially like the desert missions where you're sneaking through canyons and popping up to fire off a few Hellfires... This sort of terrain really worked well with voxels. And Comanche 3 runs splendidly on a Pentium 166. (Now if only they hadn't implemented bag-of-hammers stupid wingmen it would be the perfect "classic" game)
  • Hm, I thought that the next generation of 3d cards (e.g. the NV20) could accelerate such organic 'primitives', as they could do the necessary tessellation directly in the hardware.

    This would actually give a good performance boost as you could reduce the data on the AGP bus quite a lot ('compressed geometry')

    Besides, doesn't DirectX 8 support that kind of stuff? Even if you don't like DirectX (I also prefer OpenGL), it makes the standards...

    CU,
    Maori

  • That says more about the limits of applied mathematics than it does about neurons. When your only tool is a hammer, everything looks like a nail. Corollary: when your only tool is the Fourier transform, everything looks like a sine wave.
  • Novalogic's games all use the Voxel Space engine and are really kewl. novalogic.com
  • The good ol' NV1 from NVidia did this... it wasn't utilized at all, so the future generations of NVidia cards didn't incorporate it.
  • Yeah, but keep in mind that those folks who just *have* to have those chunks fly correctly when they blow an arm off some digital character are helping drive down prices on 3d graphics hardware.
  • You are forgetting the old Voodoo 2's: with two in one box and an SLI cable (anyone know what SLI stands for??), each one does an alternate line of each image. This is PCI only, but I don't see motherboards with two AGP slots appearing before the "next big thing"(tm) comes along.
  • Voxels in consumer products can actually prove to be useful. Treating things as having actual mathematical volume and substance can add to the realism of 3D environments: objects act differently as they pass through things like water, whereas in Quake you just assign different universe properties to a certain area. A boulder as a three-dimensional construct could have better physical properties, large enough that Lara Croft's mass couldn't easily move it; said boulder could also be blown up without storing a 3D model for the smaller pieces of rock.
  • More beneficial for games possibly, but voxels tend to be used for some rather important stuff like medical imaging. Nurbs, like polygons, only represent the surface and not the interior. If we were talking 2D instead of 3D, your claim would become that accelerated spline drawing for vector graphics would be 'more beneficial' than accelerated raster graphics. I'd disagree - to me voxel rendering is a much more radical step than accelerated Nurbs, and voxels have a host of advantages that Nurbs don't; try comparing polygon-based morphing with voxel-based morphing. Polygon-based morphing has real difficulty with topological changes such as torus / sphere morphs, which voxels handle easily, and few advantages over voxel morphing methods, while voxels seem to have a considerable edge for purposes of medical imaging, etc.

    Savant
  • Ok.. looks like tolldog posted some good links..
    From reading those links, it also looks like I got the R in NURBS wrong.. it's "rational" not "reticular" (wonder where I got the latter :-) )
  • nope, chevy. (at least, nova and cavalier are chevrolet)
    "They think its sexist"
  • Anyone working in 3D CAD or 3D engineering simulation software (like me) will have an almost insatiable demand for faster 3D. Many companies are using 3D simulation software for plant layouts, offline robotic programming, human ergonomic analysis, finite element analysis, and of course CAD. 3D provides some enormous advantages for communication of concepts.

    It isn't about games for everyone. Games are great, but I personally would rather see the acceleration hardware aimed at major CAD vendors (Autodesk [autodesk.com], PTC (ProEngineer) [ptc.com], SDRC (I*DEAS) [sdrc.com], Dassault Systemes (CATIA) [dsweb.com], UG Solutions Inc [ugsolutions.com]) rather than games, because that would help me more. 3D graphics available today are really pretty slow compared to what I really need. (And yes, we have some pretty high end hardware to work on too.) Try rendering an entire plant in 3D with product in it and flying around in real time with a reasonable level of detail. (No, you don't use a CAD system for this; you use dedicated VR or 3D simulation software like QUEST [delmia.com].) The currently available hardware still only permits fairly crude cartoonish models. It has been quickly improving though...

    Actually, what I'd really love is to see any of them release their products for Linux, but that's another topic... (funny thing is, most of them have unix versions already, so it shouldn't be all that hard a port)

  • From their web page, it renders a 256^3 space in real-time. Okay... Is that only a color sample at each point? Or does each point get a normal, a diffuse, ambient, and specular lighting component - what? Because already, a 256^3 * RGB adds up to 48 Megs - which is not too small. What more are they going to do?
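The poster's arithmetic checks out, assuming one byte per colour channel:

```python
# 256^3 voxels, three colour bytes each: exactly the "48 Megs" quoted.
voxels = 256 ** 3
rgb = voxels * 3
print(rgb / (1024 ** 2))  # 48.0
```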
  • Problem with this approach is that the ray tree can get arbitrarily deep in complex scenes. So you don't know with any certainty how long a frame will take to render, or even how long it MIGHT take, without directly testing it. Sure, you can do it in parallel, but it's still a huge system of equations that needs to be solved. The benefit of polys and voxels is that they are relatively simple. A triangle, for instance, is easy to draw and clip.

    Everything gets much harder in ray tracing, much harder. And ray tracing means you have to have really accurate models (no blocky models) because you're doing real ray intersections; as such, you can't fake things very well with a chunky model. With our current hardware, we can make 300 polys look like 10000 polys, even under dynamic lighting conditions, in real time. There may be some degenerate cases, but hey, nothing's perfect :)

    - Paradox
    Man of the C!!!
  • Switching to another modelling paradigm is not just a case of 'ooh it goes fast!'. The use of polygons is so ingrained in computer graphics today that moving to another primitive would involve a lot of effort.

    AFAIK, artists' 3D tools are geared towards polygon-based rendering. It's not a given that artists will find voxels intuitive to model with (although it may be that the raster-vector analogy stands). I'm not an artist, so I don't know. But one of the challenges will be providing decent tools.

    More significantly, many of the fundamental problems in computer graphics, such as visibility and lighting, have been solved in efficient ways for polygons in particular. I'm not greatly familiar with the current state of affairs in voxel-based research, but there are a lot of basic techniques used today in polygon-based rendering, aside from drawing filled spans, that may not translate directly to a new paradigm.

    Perhaps the fact that the accelerator is a hybrid is key, since the different representations can be applied to more suitable constructions. But I think there's a way to go before voxels become mainstream, simply because they don't translate directly to polygons, and hence the class of problems associated with primitive rendering is not already solved.

    Gingko

  • I was just thinking. I read a SIGGRAPH paper about adaptive voxels for real time fly-overs.
    The idea was to swap in different voxel resolutions as objects get nearer to the camera.
    A system like this could use voxels for all distant objects that need little detail, swapping in polygonal objects when the object is near.

    Just another idea from a sleep deprived soul...
  • When games people talk about using voxels (usually for representing landscape) aren't they really often talking about height fields and optimized height field rendering? I've always been a bit fuzzy on this.

    And what was up with "voxel-based" characters in westwood's Blade Runner game a few years back?
  • We are using the current card from rtviz - the VolumePro 500 - for medical applications. It's a PCI card that can fit into PC (NT), Sun, or SGI systems. It can render 256^3 volumes at about 20 fps (it can handle larger volumes with slower framerates). To put this in perspective, that is faster than a low end SGI infinite reality! Keep in mind that the card costs only $4k (maybe 4-5% of the cost of IR) and you can see why this is a boon for those who need it (medical, geophysical, etc). Furthermore, the quality is very good. It supports some things you cannot easily do on SGI hardware, like high-quality per-voxel lighting with no performance penalty.

    On the down side, there are some limitations in the current card: no perspective projection (needed for applications like virtual endoscopy) and no way to mix surfaces with volume data (needed for surgical simulation, etc). That's why this news is exciting for us medical folks. As far as the rest of you (gamers, etc), my feeling is that if you build it, they will come. When it gets to the point that voxel data and surface data are handled by the same chip on a $200 video card with 1gig of memory, the game makers will use it.

  • BTW, the Blade Runner game released two or so years ago used Voxels for the characters and character animation.
  • If we hadn't gone the polygon route, but used ray casting instead, I could see two big advanges:

    You wouldn't need much from a graphics card (fill rate + 2D for the UI)

    You would need a fast CPU (for software ray casting). In fact, you'd need so much CPU, and ray casting works so well in parallel, that gamers would be driving the SMP market, and SMP machines would be common.

    I'd love the simplicity of more CPU == better quality graphics. It's a pity we missed it :(

    best wishes,
    Mike.
  • Actually, I was very surprised the other day when I loaded an applet using Java3D, and saw that MS's Java3D implementation seems to bind into Direct3D! Maybe I'm wrong and the applet was just really well designed, but there was dynamic lighting and the thing looked great; it even had perspective-correct texture mapping and bilinear filtering. It was the coolest thing I've seen. I'm not an MSBoy, but I had to hand it to whoever made that decision.

    What we need is for someone to patch opengl into a java applet, for all platforms. Hardware acceleration for 3d web content is way too cool an idea.
    - Paradox
    Man of the C!!!
  • Don't forgot "Outcast"
  • Well, voxels are great ways to model landscapes and such, but to make a voxel-based engine with the versatility and freedom of current polygon engines, and one that has similar memory requirements, is a pain. It's a big pain. And it's not even worth it; you don't stand to gain that much. In more limited (non-6DOF) situations, voxels can be pretty easy to implement, though.
    This is why Comanche used them. Remember, Comanche didn't have full 6DOF, and that made the voxel engine design easier.
    - Paradox
    Man of the C!!!
  • > Are voxels somehow superior to polygons?

    > Usually - yes. Mostly, we are rendering objects that are actually 3D - people, monsters, houses, whatever.

    But for all those things we only see a 2D surface. And most of the transparent things you can look into are monochrome, so you can do them with a few polys. If you put tea in a transparent cup and add milk so that it isn't homogeneous and so that you can still see through the tea - then you have an effect that is easy to do via voxels and difficult (not impossible!) to do via polys.

    So, for the visualization of almost all things the 2D surface is enough. Notable exceptions are scientists who want to visualize a field, or doctors who want transparent organs and flesh, since this helps visualize the anatomy.

    > Granted, sometimes we render things that actually are purely 2D (CAD, FEM models, etc), but a 3D representation (ie Octree vs. BSP) is far more natural.

    Funny, it's just the other way round: when rendering things you almost always see only the 2D surface. When doing CAD and caring about the center of gravity, weight, (rotational) inertia etc, the volume is important.

    > To put it another way - we actually spend time first creating a lot of polygons from a solid 3D model - for no other reason than the fact we need them for rendering, *and* decimating the very same meshes so that we have fewer facets! Bleugh! ;)

    I am not sure I understand you. If you, for example, create the polys with a 3D scanner and then reduce them, that is because it's faster to do it this way than to have a reasonable tessellation from the start.

    > My boss was looking at these at SIGGRAPH, so I might have one to play with with a bit of luck...

    What will you use it for? I am not saying voxels are useless, but I personally see many more uses for polys than for voxels.
  • No, this is demonstrably incorrect. And I'll tell you why.

    Consider, for instance, a non-ideal reconstruction filter on an audio channel. This distorts the output. Now you are saying that the input is still a point sample, but the output is distorted. This is totally the wrong way to view things. The output is "correct" - a priori. That's what you hear. There is no way to tell your ears that the actual physical output is somehow "wrong", and instruct your ears to hear the correctly-reconstructed version ... your ears hear what they hear.

    In which case, we have to push the interpretation back up the line, and ask the question: if this is what my signal gives me through this reconstruction filter, then what signal would give the same results through perfect reconstruction?

    And THAT is the definition of what your samples mean. Therefore, the samples are only point samples if the reconstruction filter is ideal. We like reconstruction filters to be close to ideal, PURELY so we can use the point sample model, because it's much easier than any other sampling model.

    This is a somewhat moot point in audio theory, since you can get arbitrarily close to perfect reconstruction; however in image processing the reconstruction filters are nowhere NEAR ideal. Therefore, it is necessary to reinterpret your number sequence as something other than point samples. Unfortunately, this doesn't fit into image processing's usage of 18th-century mathematics, so it's not even ACKNOWLEDGED by teachers of the subject - of course; when you're teaching Newtonian Dynamics you don't waste time explaining that all of it is actually incorrect.

    As to the idea that an image can be "bandlimited" - I reject that idea as plain nonsense. It works mathematically, but gives (as you say) visually impaired results in practice. Images just aren't made from frequencies in the same way that sound is. They just aren't.

    So, 2D signal processing is a field well-grounded in irrelevant mathematics that doesn't work in practice. In terms of reconstructing images from samples, it's NOT provably correct, unless you take on board this ridiculous and counter-intuitive idea that images can somehow be "bandlimited". They can't!

    I'm not saying that 2D image processing isn't a useful field. It clearly is; using 2D image processing ideas you can do high-quality work. BUT it is totally incorrect to try and force this MODEL from image processing down people's throats, when the MODEL is demonstrably not reality.

    Or, as Stroustrup puts it in "Design and Evolution of C++": If the map and the terrain differ, trust the terrain.

  • I meant proper as in "better than how Doom scaled up pixels," rather than "proper for best possible image quality."

    You are right. I still think it's an alright tradeoff at this point, at least until anisotropic filtering gets implemented in hardware :-)
    ---
  • by codemonkey_uk ( 105775 ) on Friday August 11, 2000 @06:01AM (#862804) Homepage
    > They are just orders of magnitude more expensive than polygons, that's all.

    No they are not. They are significantly less expensive than polygons.

    The problem is that a single voxel only models a single point in 3D space, whereas a polygon can model a whole surface. Using voxels can be more expensive than using polygons because you often need many more of them to model a given subject (when viewed close up).

    This development is significant because voxels come into their own when viewed from such a distance that a single voxel/polygon is reduced to a few pixels or less - for example, landscape rendering. This development gives the developer the flexibility to render voxels in the distance, and switch to polygons (which provide more detailed visual information, but are more expensive to render) for objects close to the camera.

    Thad

  • Vogon Accelerator.
  • by bartok ( 111886 ) on Friday August 11, 2000 @06:53AM (#862808)
    Outcast (http://www.outcast-thegame.com/) was released last year and it's based on a voxel engine. It's the best adventure game I ever played, and if you can stand a little pixelation, its graphics look like what Quake 6 will probably look like...
  • Further evidence that the moderators have been smoking crack... Again
    What is it lately? Are the drug cartels offloading a whole load of cheap crack at the minute?


    Strong data typing is for those with weak minds.

  • by Salsaman ( 141471 ) on Friday August 11, 2000 @04:46AM (#862814) Homepage
    Is it something that helps you give up smoking quicker ?
  • by molo ( 94384 ) on Friday August 11, 2000 @04:47AM (#862815) Journal

    > dict voxel
    1 definition found

    From The Free On-line Dictionary of Computing (15Feb98) [foldoc]:

    voxel

    <jargon> (By analogy with "{pixel}") Volume element.

    The smallest distinguishable box-shaped part of a
    three-dimensional space. A particular voxel will be
    identified by the x, y and z coordinates of one of its eight
    corners, or perhaps its centre. The term is used in three
    dimensional modelling.

    (10 Mar 1995)
  • But why hasn't Intel started offering true multi-bus boards with 4 PCI slots, 2 64 bit PCI slots and 2 AGP slots?

    I'd love to have my Ultra3 SCSI on an AGP port instead -- imagine what it could do then!
  • by Evangelion ( 2145 ) on Friday August 11, 2000 @04:50AM (#862817) Homepage

    Pose the question like this : are raster graphics somehow superior to vector graphics?

    At one point, video games were done with vector graphics (Tempest [klov.com] was the most memorable =) because raster graphics were too expensive computationally to do. Once they were possible, much more freedom was allowed.

    Polygons are basically vector graphics in 3d - an approximation generated by drawing lines through space to simulate the construction of objects. Whereas voxels are much more like pixels - you choose a resolution, and then you fill in each 3d point with a colour. They are just orders of magnitude more expensive than polygons, that's all.

    The advantages? More freedom and realism in what can be designed.

    --
  • by grahamsz ( 150076 ) on Friday August 11, 2000 @04:52AM (#862822) Homepage Journal
    I'm not sure here, but I'm fairly suspicious that the original NV1 graphics processor (found on the Diamond Edge 3D series) rendered splines instead of polygons. I had one about 5 years ago, and for the 2 games that were actually written for it, it was quite impressive.

    From what I recall they went back to polygons because they were easier and you could create a better impression just using a lot of poly's.
    > there's not too much point to acceleration

    Agreed, but then the whole 3D acceleration market is almost entirely geared towards the gaming industry anyway. 2D cards reached your critical I-don't-care limit long ago. There is simply no market for hugely fast 2D cards any more because they're all already fast enough that users won't notice any increase in speed. 3D cards haven't yet reached that point, and are relying on bigger and better games coming out that force users to upgrade. Eventually, there will come a point at which it won't matter any more, and my guess is it won't be all that long. That said, I'm still waiting for a poly-based 3D game that can cope with the number of enemies on screen that Doom managed. That was what gave Doom its frenzied atmosphere, and ultimately what made it such a good game.

  • by iapetus ( 24050 ) on Friday August 11, 2000 @06:05AM (#862824) Homepage

    That's a slightly inaccurate answer, because you don't mention any of the disadvantages of using voxels.

    The main disadvantage comes when you choose to view the shape at a higher resolution than that at which it was created. With a polygon (or other surface type) based model you still get a smooth image. With voxels, unless you're doing something clever to approximate the effect (which still won't work as well as using a surface definition), you don't. If you're generating your textures procedurally then you can zoom into a surface-based model as far as you like, whereas with a voxel-based model eventually you'll end up with a single voxel filling your screen. Yum.

    Both have their uses, and some games software in the past has used both (hey, there's a reason this card is supposed to be able to do both simultaneously, you know...), but to imply that voxels are somehow better than polygons is, IMHO, more than a little misleading.
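    The "something clever" is usually interpolation between neighbouring samples. A minimal trilinear lookup over a dense grid might look roughly like this (the grid layout is a made-up example, and coordinates are assumed to lie strictly inside the grid):

```python
def trilinear(grid, x, y, z):
    """Blend the 8 voxels surrounding a fractional coordinate.
    grid is a dense nested list indexed grid[z][y][x]; coordinates are
    assumed non-negative and strictly inside the grid so that the +1
    neighbour always exists."""
    x0, y0, z0 = int(x), int(y), int(z)      # floor, for non-negative coords
    fx, fy, fz = x - x0, y - y0, z - z0
    def g(i, j, k):
        return grid[k][j][i]
    # interpolate along x on each of the four edges...
    c00 = g(x0, y0, z0) * (1 - fx) + g(x0 + 1, y0, z0) * fx
    c10 = g(x0, y0 + 1, z0) * (1 - fx) + g(x0 + 1, y0 + 1, z0) * fx
    c01 = g(x0, y0, z0 + 1) * (1 - fx) + g(x0 + 1, y0, z0 + 1) * fx
    c11 = g(x0, y0 + 1, z0 + 1) * (1 - fx) + g(x0 + 1, y0 + 1, z0 + 1) * fx
    # ...then along y, then z
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz
```

    It smooths the blockiness out, but as the parent says, it can never recover surface detail that was never sampled in the first place.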

  • by superlame ( 48021 ) on Friday August 11, 2000 @06:23AM (#862827)
    An accelerated bezier patch just means that the hardware can draw bezier patches rather than just polygons. Currently, if you want bezier patches (like the curved surfaces in Quake 3) you have to tessellate the patch into a set of polygons before the accelerator card can render it. The tessellation takes a lot of CPU power, so having the video card do it would be a great speed improvement. It would mean that the animated characters in games wouldn't have to keep being so blocky.
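    The tessellation being offloaded is essentially repeated curve evaluation. A rough sketch of uniformly evaluating a bicubic bezier patch into a vertex grid (scalar "heights" stand in for full 3D control points to keep it short; the 4x4 control grid is a made-up example):

```python
def bezier3(p0, p1, p2, p3, t):
    """Evaluate a cubic bezier curve at parameter t via the Bernstein basis."""
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3

def tessellate_patch(ctrl, n):
    """Turn a 4x4 grid of control values into an (n+1)x(n+1) vertex grid.
    A real tessellator emits 3D vertices and triangle indices; scalar
    values keep the sketch short."""
    verts = []
    for i in range(n + 1):
        u = i / n
        # collapse the 4x4 grid along u into one 4-point curve
        col = [bezier3(ctrl[0][j], ctrl[1][j], ctrl[2][j], ctrl[3][j], u)
               for j in range(4)]
        for k in range(n + 1):
            v = k / n
            verts.append(bezier3(col[0], col[1], col[2], col[3], v))
    return verts
```

    The CPU cost the parent mentions is this double loop (plus triangle assembly), run every frame for every curved surface; that's exactly the part you'd want a card to take over.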
  • I can play UT with 15 bots... on a PII-350 w/GF256DDR.

    Hmmm. Yes, you can play with large numbers of bots, but UT slows down for me when there are more than about 7 or 8 visible on screen at any one time. I have various machines ranging from an AMD K6-2/450 to a PIII-550, with Rage 128, Voodoo 3 and G400 cards, and all with 128MB or more of RAM. Admittedly, I can't persuade UT to use anything other than software rendering on the G400, even with the latest Matrox drivers :-(

  • Chevy Nova was discontinued in 1987. From 85-88 it was a rebadged Toyota Corolla, not Tercel. In 88 the Geo line came out, they made the Corolla into the Geo Prizm. Then they dumped the Geo name (wonder why), and no longer sell rebadged Corollas, and the Geo Tracker, Metro -> Chevy Tracker, Metro. Long before that the Nova was a bigass car.
    ---
  • They're actually "surfaces of elevation". But Novalogic, starting with their groundbreaking Comanche game, abused the terminology and called their clever rendering method for surfaces of elevation "Voxel Space[tm]". (They tried to patent it too.) The terminology stuck.

    Whatever. In 5 years surfaces of elevation will rule the 3D game world. Call them what you want to.
    --
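    The Voxel Space-style trick for surfaces of elevation is usually described as a per-screen-column march across the heightmap, drawing vertical runs front to back. A much-simplified single-column sketch (the numbers and the occlusion scheme here are illustrative assumptions, not Novalogic's actual code):

```python
def render_column(heights, cam_h, horizon, screen_h, scale, max_dist):
    """March one screen column front-to-back over a 1D slice of terrain
    heights, filling screen rows upward as nearer terrain occludes
    farther terrain. Returns the per-row height drawn (None = sky)."""
    column = [None] * screen_h
    y_min = screen_h                 # lowest unfilled row; shrinks as we draw
    for dist in range(1, max_dist):
        h = heights[dist % len(heights)]
        # perspective-project the terrain height to a screen row
        row = max(int(horizon + (cam_h - h) * scale / dist), 0)
        if row < y_min:              # visible above everything drawn so far
            for y in range(row, y_min):
                column[y] = h        # in a real engine: a colour from the map
            y_min = row
    return column
```

    Repeating this for every screen column, with the rays fanned out across the view, is essentially why the technique only handles looking across a heightfield - which is also why it maps so poorly to full 6DOF scenes.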
  • by AstynaxX ( 217139 ) on Friday August 11, 2000 @04:54AM (#862840) Homepage
    It's been said before, quite often, that /. isn't just for group X. You don't game; fine, that's your choice, live long and prosper, etc. etc. But many of us on /., myself included, enjoy a good fragfest every so often, or like a detailed flight sim, etc. etc. So stuff like this is interesting to us. Also, having seen 3D surgical applications in action here at my University, a card with the capabilities they describe could be very useful to the medical and scientific communities. So, really, it's gaming, rendering, training, experimenting, simulating, teaching, etc. Not for Joe Average maybe, but far from pointless.

    -={(Astynax)}=-
  • Hmmm... takes me back to my Senior year in Surfaces and Modeling.
    I think that accelerated NURBS would be more beneficial. At least NURBS are the choice of Maya... I can't remember what the other packages prefer.
    But accelerated bezier patches are a step towards faster NURBS.
  • 3D accelerators do not accelerate high-quality rendering, where ray tracing and radiosity and such are used.

    They're great for when you're modelling, so you can get a quick preview and get a decent idea of how highlights and textures are going to look, but, for the final render, they're not very useful.
  • I can play UT with 15 bots... on a PII-350 w/GF256DDR. Granted, they're novice bots, but they still shoot in about the right direction. I kick the crap out of them, but you do that in Doom too... 15 bots, double speed, insta-gib... 1550 FPH... Not a great frame rate, but playable (30 maybe?) And I think the bots are more intelligent than Doom monsters. It gets old kinda fast, so I only do that when in serious need of stress relief.

    ---
  • FWIW this was called Comanche, not Apache. It was (IMO) an excellent, although not very realistic, flight simulator for the Comanche helicopter. (This was back when the AH-64 Apache was the main helicopter of the US Army and the Comanche was still mostly a prototype.)

    I'll try not to stray too far off-topic, but Comanche made excellent use of a voxel landscape that was extremely realistic looking, but dark at times. (Heh, this was when a 486DX2/66 was a high-end computer.) The biggest drawback of the game at the time was that voxels used up huge amounts of memory at a time when most people only had 4 or 8 MB RAM, so the Comanche worlds were pretty but small. Another drawback to the game was that the landscape was so pretty that it made other visual elements--rockets, oil tanks, other helicopters--look cheesy in comparison.

    When Win95 came out MS disabled real-mode hard drive access under DOS, which is something Comanche needed to run (anyone remember c.exe?). I still have the box sitting on a shelf. It's a cool-looking trapezoidal shape, which might be what influenced me to buy it in the first place.
    --

  • > Are voxels somehow superior to polygons?

    Usually - yes. Mostly, we are rendering objects that are actually 3D - people, monsters, houses, whatever. Granted, sometimes we render things that actually are purely 2D (CAD, FEM models, etc), but a 3D representation (ie Octree vs. BSP) is far more natural.

    To put it another way - we actually spend time first creating a lot of polygons from a solid 3D model - for no other reason than the fact we need them for rendering, *and* decimating the very same meshes so that we have fewer facets! Bleugh! ;)

    Advantages of volume viz. include the ability to really parallelise the rendering in image space (ie one CPU per screen pixel). This works well in software, but I'm not sure how well it scales in hardware. You can also do really cool QOS - degrading the rendering depending on available CPU - much *much* more easily than with polygon-based systems.

    My boss was looking at these at SIGGRAPH, so I might have one to play with with a bit of luck...

    I also want it just cause it's got 256Meg of RAM on it ;)

    best wishes,
    Mike.
    ps) http://www.rtviz.com/technology/index.html
  • No one does hardware disk compression because we don't need it. We've always been able to manufacture larger-capacity drives. Now, when the fsck-hits-the-fan (as it were) and the engineers can't cram any more bits onto a platter, THEN we'll see a boom in the compression industry (hardware AND software).

    chris
  • by Anonymous Coward on Friday August 11, 2000 @05:05AM (#862861)
    That depends entirely on what you are trying to do.

    Most games & CAD systems are polygon based because what you see and work with are surfaces, which polys are ideal to represent. Another advantage is that efficient polygon rendering is pretty easy to implement.

    This changes when you are looking at volumetric data - this can be anything from medical scans to computational fluid dynamics results.

    Volume rendering with "standard" 3d hardware is quite a rich research topic at the moment, but there are a few ways to do it.

    1. Isosurface extraction - you have a field of, say, temperature values and you decide to pull out a surface at t=100 centigrade. You can use an algorithm such as "marching cubes" or "marching triangles" to give you a mesh that corresponds to the value you are looking for.

    The problem is that this is expensive and you get *lots* of polygons. This is one of the reasons why "high end" boards are good for millions of tiny polygons, but fall flat when asked to do "game" type work.

    2. "Splatting" - this is where you just draw semi-transparent blobs where "active" voxels are and get some kind of image out. It is more complex than that (of course), but you can get good images.

    3. Cunning stuff involving stencil buffers & 3D textures - there is a paper in siggraph proceedings from (i think) '98 or '99 that covers this. I didn't really get it to be honest.

    The trouble with these approaches is that they are really just tortuous ways of visualising information that you would be able to just see if you could render your volume directly. Surface reconstruction is simple, but can take ages. Other algorithms are tricky to write & debug.

    One final note is that 3DLabs do some of the more fully featured accelerators, some of which support 3D textures. I would not be surprised if the volume representation was tied to texture memory in some way. Certainly 3D texture/voxel compression algorithms would be a likely place to start sharing technology.

    And in answer to your question: polygons and voxels are both better, depending on what you want to do with them.
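    As a rough illustration of option 2, a minimal splatting sketch: project each active voxel straight along z and composite back to front. (Real splatting uses filtered Gaussian footprints and a proper camera transform; the point-sized splats and orthographic view here are simplifications.)

```python
def splat(voxels, width, height):
    """Composite point 'splats' back-to-front along z (orthographic view).
    voxels: list of (x, y, z, brightness, alpha) tuples."""
    image = [[0.0] * width for _ in range(height)]
    # sort farthest-first, so nearer splats blend over farther ones
    for x, y, z, value, alpha in sorted(voxels, key=lambda v: -v[2]):
        if 0 <= x < width and 0 <= y < height:
            # standard "over" compositing of this splat onto the pixel
            image[y][x] = alpha * value + (1 - alpha) * image[y][x]
    return image
```

    The back-to-front sort is what makes the transparency come out right; it's also one of the things dedicated volume hardware can avoid by traversing the volume in depth order to begin with.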

One man's constant is another man's variable. -- A.J. Perlis

Working...