Graphics Software

Mesh Compression for 3D Graphics

Posted by michael
from the this-is-not-a-hack dept.
IanDanforth writes "A new algorithm that uses successive approximations of detailed models to get significant compression has been revealed by researchers at the University of Southern California. Just as MP3s remove high frequencies we can't hear, this algorithm removes the extra triangles in flat or near-flat surfaces that we can't see. Experts in the field are giving this work high praise and imply that it will be immediately applicable to 3D modeling in games, movies, CAD and more."
  • This will usher in a new age of video piracy!
  • by Alphanos (596595) on Wednesday June 16, 2004 @11:34PM (#9448836)
    Wide-spread use of graphics on the web didn't really take off until jpeg and gif compression became common. Will the easy compression of 3D models allow use of 3D content on the web to take off?
    • by Anonymous Coward on Wednesday June 16, 2004 @11:54PM (#9448950)
      Bandwidth probably isn't the problem, because 3D models can be described in ways that don't require much space. A renderman .rib file is far smaller than an image of the scene it describes, and a renderman shader can also be quite small. I'd expect something similar is the case for OpenGL.

      I'd guess the bandwidth would really be taxed by the transmission of bitmaps used for textures. That won't be helped by removing triangles from the model.

      I expect any acceleration would be in the processing on your computer. The CPU and/or GPU would have less work to do, because of the reduced number of triangles to render. So your game gets a higher frame rate, and/or uses fewer cycles, or can perform faster with less powerful hardware.

      The real reason 3D content hasn't taken off is that it frankly isn't very useful for every-day browsing.
      • by Lord Kano (13027) on Thursday June 17, 2004 @03:49AM (#9449864) Homepage Journal
        The real reason 3D content hasn't taken off is that it frankly isn't very useful for every-day browsing.

        Just wait until the porno industry gets involved. Imagine being able to freeze frame and get Matrix-like fly arounds of the money shot.

        Seriously, my first jpgs and gifs were of porno. Not schematics, or technical info. But big bouncing boobies. I'd be willing to bet that most of you who go back to the 1980s or before had a similar experience. Or how about streaming video? Porno and Mac World expos were the first streaming videos that I ever heard about. If this type of thing is going to take off it'll be because of smut. Sad isn't it?

        LK
        • by zero_offset (200586) on Thursday June 17, 2004 @07:15AM (#9450648) Homepage
          I was with you until that last sentence, then you lost me.
        • by rsmith-mac (639075) on Thursday June 17, 2004 @10:13AM (#9452103)
          Sad? How is it sad? As far as I'm concerned, the porno industry is the "perfect" industry from a geek perspective. They are technological innovators that are always willing to try something new and are always on the bleeding edge of technology, they believe in free speech instead of trying to squish it, and they, unlike their **AA counterparts, aren't trying to sue the pants off of the online world, or run to Congress whining.

          It's not a sad thing, it's a great thing. The fact that the content is what it is, is unimportant; what counts is that there's an industry out there that's willing to "do things right" the first time, rather than be dragged kicking and screaming.
          • Uh (Score:3, Insightful)

            by bonch (38532)
            they believe in free speech instead of trying to squish it, and they, unlike their **AA counterparts, aren't trying to sue the pants off of the online world, or run to Congress whining.

            Nice random MPAA/RIAA dig there (is it all Slashdotters think about anymore that they have to interject it at every opportunity?), but the fact is that there have been several articles in the past five years about how the porn industry is worried about P2P because it pirates their material. Ever done a search on eMule to s
          • Right. I already know how this will be responded to, but I'm going to say it anyway.

            Most of the girls you see in porn movies and pictures aren't there because they really enjoy doing porn.

            They are probably there because at first they needed money (porn pays well), and started out by doing some non-nude or semi-nude pictures, then they just got tangled up in all of it.

            I don't have statistics or anything, but honestly, do you think a lot of women just decide one day that they want to receive anal sex from
    • No. (Score:5, Informative)

      by autopr0n (534291) on Wednesday June 16, 2004 @11:56PM (#9448961) Homepage Journal
      This isn't about compressing the data required to store a mesh, although it will help.

      This is about reducing the complexity of meshes so that they can render faster.
      • As others have pointed out this is a new solution to a classic computer graphics problem. The first technique I know of to automatically reduce the poly count of meshes, while preserving the overall appearance was Garland and Heckbert's QSLIM [uiuc.edu] algorithm. This was first published in SIGGRAPH 97 [uiuc.edu]. Or actually, hmmm, no, it looks like Hoppe's work on mesh optimization [microsoft.com] came a good bit earlier (1993).

        Anyway, it's a pretty old problem in graphics. The USC press release that prompted this slashdot story is simpl
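        The quadric error idea behind QSLIM can be sketched in a few lines of Python (a toy illustration of the metric only, not Garland and Heckbert's actual implementation): each face contributes its plane, a vertex accumulates squared point-to-plane distances as a 4x4 quadric, and the edge collapses that add the least error go first.

```python
import math

def plane(p0, p1, p2):
    # Unit-normal plane (a, b, c, d) of a triangle: ax + by + cz + d = 0
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(c * c for c in n))
    a, b, c = (comp / length for comp in n)
    d = -(a * p0[0] + b * p0[1] + c * p0[2])
    return (a, b, c, d)

def quadric(pl):
    # Q = p p^T for one plane; a vertex's quadric is the sum over its faces
    return [[pl[i] * pl[j] for j in range(4)] for i in range(4)]

def add_q(qa, qb):
    return [[qa[i][j] + qb[i][j] for j in range(4)] for i in range(4)]

def error(q, v):
    # h^T Q h with h = (x, y, z, 1): total squared distance to Q's planes
    h = (v[0], v[1], v[2], 1.0)
    return sum(h[i] * q[i][j] * h[j] for i in range(4) for j in range(4))
```

        For coplanar faces, any point in the shared plane has zero error, which is why flat regions collapse almost for free; QSLIM additionally solves for the collapse position that minimizes the quadric error.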
    • Wide-spread use of graphics on the web didn't really take off until jpeg and gif compression became common.

      Wasn't GIF a format developed by Compuserve, as in predating the commercial development of the Internet? I thought that web servers and browsers had it already once the US government opened up the Internet to commercial uses (IIRC, 1993?).
    • Quite simply, no. This may well help to lower the bar for rendering 3D graphics on low-powered hardware, which could indeed serve to speed the wider adoption of real-time 3D graphics on the web -- but it won't have anything to do with file size reductions in 3D models, which are negligibly small to begin with. This particular compression technique isn't aimed at smaller file sizes, but rather reductions in the complexity of 3d meshes: fewer triangles mean simpler geometry, resulting in increased rendering e
    • Wrong. Widespread use of graphics on the Internet didn't really take off until JPEG and GIF compression became common. The Web-- which is only one part of the Internet, and not a synonym for it-- had GIF and JPEG from day one.
    • Notice that this is mesh compression. It is a way to compress an irregular and aperiodic surface. If you want a compact 3D model the first thing to try is using simpler geometry.

      Bruce

    • I could have sworn that someone came up with a format for streaming 3D on the web ages ago. No, not VRML, something else. I just tried to do a Google search for it, but came up with too many results [google.com]. It was supposed to allow 3D content on the web to take off as well.

      VRML was supposed to do that, for that matter, and has been around since around 1996. I think 3D has never really taken off on the web because of the way you have to navigate through 3D worlds. I recall navigating through VRML was a real pain

      • I remember using Viewpoint about 5 years ago for web-based 3d. It would gradually stream in the points of the mesh so that as it was receiving the data, the model would gradually build itself as you watched. Maybe this is what you are thinking of?

        IIRC it was a very impressive piece of technology. It could be placed in an html layer and was transparent with a nice drop shadow over whatever content was beneath it. It used a binary format to deliver the actual models, then an XML file defined how they should
  • Patented? (Score:5, Interesting)

    by CharAznable (702598) on Wednesday June 16, 2004 @11:36PM (#9448846)
    So, is this something everyone can use, or will it be patented?
  • by Raindance (680694) * <johnsonmx.gmail@com> on Wednesday June 16, 2004 @11:37PM (#9448847) Homepage Journal
    I think this is interesting, but the analogy drawn between MP3s and this 3d-object compression is a bit strained.

    The MP3 compression routine revolves around 'frequency masking' much more than it does "remov[ing] high frequencies we can't hear". Most of the work in MP3 is done through 'frequency masking'. That is, imagine a graph of the frequencies being played at any given time: find the high points, then draw sloping lines down to either side of those points. Humans can't hear anything under those lines; they're 'masked' by the nearby strong frequency.

    Nothing very much like that goes on in this algorithm. There might be some other mesh-compression-analogous process that goes on in MP3 that's like this, but that ain't it.

    Sorry to nitpick, but I figured it's important that
    1. MP3 compression is not just simply throwing out high frequencies (a lot of these are actually retained) and
    2. This isn't anything analogous to that, anyway.

    Looking over my post, I'd have been fine if the submitter had said "Just as MP3s remove frequencies we can't hear, this algorithm removes..." but that's not very descriptive anyway.

    RD
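    To make the masking idea concrete, here is a deliberately crude Python sketch (real MP3 encoders use per-band psychoacoustic models, not a single linear slope): a bin counts as inaudible when some louder bin, discounted by distance, still towers over it.

```python
def masked_bins(spectrum, slope_db_per_bin=10.0):
    """Toy frequency masking: a bin is inaudible if some louder bin's level,
    discounted by a sloping penalty per bin of distance, still exceeds it.
    `spectrum` is a list of levels in dB, one per frequency bin."""
    masked = []
    for i, level in enumerate(spectrum):
        threshold = max(
            (spectrum[j] - slope_db_per_bin * abs(i - j)
             for j in range(len(spectrum)) if j != i),
            default=float("-inf"),
        )
        masked.append(level < threshold)
    return masked

# A loud 60 dB tone in bin 1 masks the quiet bins around it
print(masked_bins([0, 60, 5, 0]))  # [True, False, True, True]
```

    Real encoders do this per critical band, then allocate quantizer bits so the quantization noise stays under the masking threshold.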
    • by Mister Transistor (259842) on Thursday June 17, 2004 @12:28AM (#9449119) Journal
      A much better analogy would have been to refer to the digital vocoder in cellular phones. They take a phonemic audio sample and find the nearest match, then replace it with a compression token that represents that bit of speech.

      That achieves compression effectively by recreating a high-bandwidth audio stream from a low-bitrate stream of tokens.

      A thought I had years ago is:

      3-D imaging via raytracing can be thought of as one of the most aggressive forms of compression, in that you represent a fantastically complex high-bitrate stream (i.e. The World, or at least the 3-D scene in question) with a very small (usually under 1K) stream of "tokens" (the raytracer's command repertoire). That "compresses" billions of voxels of 3-D space into a tiny scene description stream, and vice-versa during "decompression".

    • Actually, your description of what MP3 is doing is almost identical to what the algorithm is doing to remove unnecessary triangles.

      He's not just throwing out high definition data either (which would be a poor compression algorithm). He's finding a seed point, and then trying to build the largest flat surface that masks the underlying points, because they don't really give much detail anyways (not always true).

  • CAD??? ;-) (Score:5, Funny)

    by PaulBu (473180) on Wednesday June 16, 2004 @11:37PM (#9448850) Homepage
    Well, if THAT surface was there I bet there was someone to put it there, and (s)he thought that it had some useful function...

    How would you like to fly a plane designed without those thin "thingies" called "wings"? ;-)

    Paul B.
    • by keefey (571438)
      Try flying the Dodo in Grand Theft Auto 3 to find out. Bloody difficult.
    • The actual data might be retained, but the models used for visualisation could be compressed, no?

    • Re:CAD??? ;-) (Score:2, Interesting)

      by iLEZ (594245)
      Also, we 3d-modellers usually do this by hand anyway.
      Does this mean that we are going to see crappy 3d-game modellers making hi-poly objects and simply running them through a little wizard to "make 'em good for the game"? =)

      Guess I'm just bitter for not working with CGI yet.
  • by Speare (84249) on Wednesday June 16, 2004 @11:38PM (#9448855) Homepage Journal
    Man, this has been around for years. I'd bet a decade. Almost all GPSes with mapping features use a 2D variant of this to store less line segment data for roads. 3D systems with multiple levels of detail choose among a number of differently-optimized models to reduce vertex transformation overhead on far-away objects. Where have you guys been?
    • Indeed. It appears to be an interesting new approach to the not-so-new field of mesh optimization [google.com].
    • Autodesk 3D Studio R4 (for DOS, from 1994, which I still use now and again) has a plug-in which does mesh optimization, simplifying objects by combining faces that are nearly coplanar. Depending on the complexity of the object, a savings of between 30% and 70% can be achieved.

      Yes, I RTFA, and I don't see how this is such a big deal. Now, if I could reduce face count by 90% with no loss of detail...

      k.
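      The near-coplanar test such optimizers start from is easy to sketch in Python (a toy check only; the real work is in adjacency bookkeeping and re-triangulating the merged region):

```python
import math

def normal(tri):
    # Unit normal of a triangle given as three (x, y, z) tuples
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = tri
    ux, uy, uz = x1 - x0, y1 - y0, z1 - z0
    vx, vy, vz = x2 - x0, y2 - y0, z2 - z0
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

def mergeable(tri_a, tri_b, tol_deg=5.0):
    # Candidates for merging: face normals agree to within tol_deg degrees
    na, nb = normal(tri_a), normal(tri_b)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(na, nb))))
    return math.degrees(math.acos(dot)) <= tol_deg
```

      Tightening or loosening the tolerance angle is what trades file size against visible faceting.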
    • by MrBigInThePants (624986) on Wednesday June 16, 2004 @11:55PM (#9448951)
      You both should try reading the article:
      Computer scientists have struggled with the problem of finding an optimal mix of large and small elements for years. In 1998, theoreticians proved that the problem was "NP hard": no general solution exists that can be solved by a computer in a finite length of time. They did find work-arounds: fast methods to simplify meshes, which were unable to guarantee accuracy, and accurate techniques, which were too slow.


      The Desbrun team's novel approach comes from the seemingly unrelated field of machine learning, using a technique invented in 1959 called Lloyd Clustering, named after its inventor Stuart Lloyd. Desbrun's algorithm uses it to automatically segment an object into a group of non-overlapping connected regions: an instant draft alternative to the too-numerous triangles of the original scan.
      If you actually read it, it would be pretty obvious why this is new...sheesh!

      Also, game data is built of far fewer triangles and in a much easier form than raw data read from a real-life source (such as a laser range finder). LOD mesh reduction is usually done by full or partial MANUAL selection.
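      Plain Lloyd clustering itself is short to write down; Desbrun's method runs the same assign-then-recenter loop over mesh triangles with a geometric distortion metric, so take this 2-D point version only as the generic skeleton:

```python
def lloyd(points, k, iters=20):
    """Lloyd clustering on 2-D points: assign each point to its nearest
    centroid, move each centroid to its cluster's mean, and repeat. Seeding
    with the first k points keeps this toy version deterministic."""
    centroids = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centroids[i][0]) ** 2
                                      + (p[1] - centroids[i][1]) ** 2)
            clusters[nearest].append(p)
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids
```

      Each pass can only lower the total squared distortion, which is why the loop settles quickly in practice.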
    • by Ibag (101144) on Wednesday June 16, 2004 @11:57PM (#9448975)
      While I can't say for sure that nobody has used this method before for 3D models, the article seems to suggest that this is slightly different from using differently optimized models. Instead, this seems to be a way to optimize the models so that they look good up close as well.

      The concept of lossy compression of 3D models might not be new, but that doesn't mean that the method for doing it isn't.

      Also, even if the problem were trivial in 2 dimensions, it wouldn't necessarily be so in 3. The 2-body problem has a simple solution; the 3-body problem has no solution in elementary functions. Random walks are recurrent in 1 and 2 dimensions but transient in 3 or more. I can think of several other mathematical examples where the difference between 2 and 3 dimensions (or 2 and 3 objects) changes things completely.

      Don't judge unless you know you understand the subtleties of this algorithm compared to others :-)
    • by pavon (30274) on Thursday June 17, 2004 @12:08AM (#9449037)
      Read the fine article. You are correct that mesh optimization has been one of the most popular MA/PhD thesis subjects for over two decades. Which is exactly why someone coming up with a method that is an order of magnitude better than any previous method is big news.

      Also, for all those questioning its usefulness, you need look no further than 3D scanning. When it comes to detailed models, very few things are done from scratch; instead they are digitized using one of many scanning techniques. The resulting model is then massaged by hand by an artist. This technique would allow you to get a much better first cut, saving time for the artists.

      Lastly, Quake and others generated meshes from smooth NURBS objects. This is quite different from, and much easier than, generating one mesh object from another. Those techniques are not useful for scanned objects, where you start with a dense mesh object.
  • Greatness! (Score:3, Funny)

    by Milo of Kroton (780850) <milo.of.kroton ( ... l.com minus poet> on Wednesday June 16, 2004 @11:41PM (#9448871) Journal
    I am for cannot waiting able frequency to this have! I too am so greatness compression going to get.

    I am ask: can use this games? UT2k4 is good. It is very big game however maybe some for people.

    Can this technology fast enough for gaming be?


  • Link to publication (Score:5, Informative)

    by j1m+5n0w (749199) on Wednesday June 16, 2004 @11:44PM (#9448890) Homepage Journal

    The actual paper can be downloaded from here [usc.edu].

    -jim

  • This kind of mesh simplification has been around for years, in many of the high-end programs such as Lightwave and Maya. Also, when you're dealing with data that is triangulated, most likely you're dealing with mathematical constructions based on DEMs, or other automated processes, and not the type of graphics that you see on TV and movies. All in all, not too groundbreaking; it just means that some scientists' computers can relax just a little bit more....
  • by Anubis333 (103791) on Wednesday June 16, 2004 @11:48PM (#9448903) Homepage
    This is a great way to minimize scan data, but it isn't as useful as the article makes it out to be. Most modeled 3d objects are as low resolution as possible. Shrek has as many polygons as he needs to have; to take away some, or swap their location, would destroy the model. For instance, I am a Modeler/TD and most animatable character models have 5 divisions, or 'loops', around a deformable joint. Any less would not allow for the deformation control we need. As with most background scenery, it is modeled by hand and as low resolution as possible.

    This could come into more handy later if it is built into a renderer.

    A subpixel displacement renderer that can nullify coplanar polys in this way (though there usually aren't that many in detailed organic objects) could speed things up quite a bit.
    • There is a lot of work on mesh simplification and compression happening right now because there _is_ a real need for it. Meshes that are modeled by hand may not benefit from it, but many, many mesh datasets are being produced by laser range scanners or isosurface extraction of volume data (from some kind of medical imager, such as MRI, say). These meshes are often messy and generally have far more polygons than they need.
    • I disagree. I built a polyreducer for a game company and I can say first-hand that, despite the fact that we had models built by hand, despite the fact that we had really skilled artists, despite the fact that they *knew* triangles were at a premium, the polyreducer I constructed was able to get rid of an easy 10% of the triangles before visual quality decreased noticeably. 20%-30% if the camera was far away (which it was through most of our game, so we polyreduced our models a lot :) )

      I don't know how many
  • This has been something that LightWave (and probably other big 3d apps) could easily do for years and years. How's this different?
  • How new is this (Score:3, Informative)

    by SnprBoB86 (576143) on Wednesday June 16, 2004 @11:50PM (#9448915) Homepage
    The article is short on technical details but...

    While the algo may be new, the idea certainly isn't. Direct3D has built-in support for optimized meshes, and the ROAM algo http://gamasutra.com/features/20000403/turner_01.htm is in wide use. In fact, pretty much all 3D geometric level-of-detail techniques rely on collapsing "flat" areas. The geometry's source data can also be compressed with stuff like NURBS and other parametric surfaces, which is probably much better than some sort of lossy compression. With the coming "DirectX Next", OGL 2, and newer video cards, parametric surfaces (read: infinite curve detail) will easily become the norm.
  • "Just as MP3s remove high frequencies we can't hear"

    Not quite. The primary brunt of MP3 focuses on areas of repeated sound (which can easily be compressed). All of the MPEG codecs attempt to find areas where change is infrequent, then tell the system "from frame X to frame Y, don't change the vast majority of the sound/video".

    In the case of 3D graphics in particular, the image changes. Often. Actually, it's more like an action movie than anything else (Ever see the artifacts on a poor digital cable or
    • Hi (Score:2, Informative)

      by autopr0n (534291)
      I just wanted to let you know that you seem not to have any idea what you're talking about, and you definitely don't have any idea what the article is talking about.
    • Actually, the primary brunt of MP3 focuses on perceptual coding; to put it simply, it uses a model of how important a given sound is based on its frequency and position in time. These 'importance' numbers are used to determine how much accuracy should be used to store the specific time/frequency you're looking at. More accuracy, more bits, less accuracy, less bits.

      You're thinking of the video versions, which work the way you described (to my knowledge; they probably also do some perceptual stuff, but I'm
      • they probably also do some perceptual stuff, but I'm not familiar with video perceptual coding

        They do. Your eyes have better resolution when dealing with luminosity than colour, and also detect lower frequency changes better than high frequency ones. JPEG uses both these effects, as do all video compressors AFAIK.

        Cheers,
        Dave
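        That luma/chroma asymmetry is exactly what 4:2:0 chroma subsampling exploits; a toy Python sketch (each 2x2 block of a chroma plane becomes its average, while the luma plane, not shown, stays at full resolution):

```python
def subsample_420(chroma):
    # Replace each 2x2 block of a chroma plane with its average value;
    # assumes even width and height for brevity
    h, w = len(chroma), len(chroma[0])
    return [
        [(chroma[y][x] + chroma[y][x + 1]
          + chroma[y + 1][x] + chroma[y + 1][x + 1]) / 4
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]
```

        This quarters the stored chroma samples before any entropy coding even starts, and viewers rarely notice.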
  • How's the lighting meant to work if the extraneous triangles are removed from flat surfaces? You'll end up with shading that isn't very pleasing to look at. You need those extra triangles, even though you can't see them and the surface is relatively flat, if you want the model to look nicely shaded.
  • Impressive. (Score:4, Insightful)

    by autopr0n (534291) on Wednesday June 16, 2004 @11:53PM (#9448943) Homepage Journal
    I'm surprised no one's done this before, actually. Good texture maps, and especially bump maps can alleviate the need for a lot of triangles. I wonder if this compression routine takes those things into account. It would be great if you could pass in a detailed mesh, and get a simple mesh + texture + bump map back out.
    • It does seem surprising, doesn't it?

      I think this is one of those inventions that "anyone could have invented," but nobody ever did... which makes it all the more impressive, doesn't it? :)

    • There are already programs that generate bumpmaps from detailed models.

      The only reason they are good is that sending a large texture once is a lot better than sending a ton of geometry to a video card every frame. It is unrealistic for storage- unless the model is extremely detailed the texture will probably be larger than the model.
    • Re:Impressive. (Score:2, Interesting)

      by gtada (191158)
      It exists. Check out what ZBrush is doing. [209.132.69.82]

      Also, I believe ATI has a tool to do this as well.
    • In production code, for that matter.

      I wrote a polyreducer for a game I worked on. It would take as input a mesh, bone data, and an input texture map, crunch over them for a few minutes, and spit out a mesh with fewer triangles (and a new texture map). It would have been easy to make it spit out a bump map as well, except we were targeting PS2 and a bump map would have taken another rendering pass.

      Quite effective. We stripped about 25% of the triangles out of most models. I kinda wish I'd gotten time to ap
  • by Anonymous Coward
    I've often played with triangle meshes for various software packages. One thing that many do is to merge groups of adjacent triangles with the same surface normals. This is lossless, versus something like JPG. There are programs for POVRay that will do this, essentially iterating through the grid of triangles, calculating the normals, then merging.

    The POVRay mesh format is a good place to start if you want to learn about triangle meshes. Check the povray.org site for lots of good info.

    You can also do something si
  • by Hektor_Troy (262592) on Wednesday June 16, 2004 @11:55PM (#9448959)
    It's a MOON!
    • That's No Icosahedron, It's a MOON!

      Well, even if no one else did, I considered that one of the most insightful (and funny) comments in this thread so far. :-)

      I did find some of the examples at the link amusing, however... Sure, it reduces polygon counts - But makes your spiffy model of a human head look like someone attacked a styrofoam hat form with a cheese-slicer.
  • This isn't new? (Score:5, Informative)

    by grammar fascist (239789) on Wednesday June 16, 2004 @11:59PM (#9448983) Homepage
    Un-disclaimer: I'm currently pursuing a PhD in machine learning.

    Yes, it is new. First of all, y'all need to read the article and find out how.

    It is for two reasons, both of which are stated:

    The Desbrun team's novel approach comes from the seemingly unrelated field of machine learning...

    Machine learning: getting a computer to generalize (invent hypotheses) given data instances. Work in machine learning has proven that generalization and compression are equivalent. That someone has applied those ideas to 3D model compression is at least notable.

    We believe this approach to geometry approximation offers both solid foundations and unprecedented results...

    In other words, it's not using some hacked-up heuristics. The bias behind the generalizations it makes is solidly described, and can be tuned. Machine learning consistently beats heuristics in accuracy, so their expectation of "unprecedented results" has a good foundation.
    • Re:This isn't new? (Score:3, Insightful)

      by WasterDave (20047)
      Ok. So, your post summarises exactly what is wrong with Slashdot that never used to be wrong with Slashdot.

      We have hordes and hordes of "lightwave/maya/povray/myarse has had this for years" posts, some completely wrong understandings of MP3s, a few dozen Soviet Russia and Profit! posts, then this.

      Modded +5, like everything else, but actually *genuinely* insightful, and written with a confidence and succinctness that comes from knowing WTF you are talking about.

      Jesus. Problem with Slashdot is that there'
  • by CompSci101 (706779) on Thursday June 17, 2004 @12:00AM (#9448994)

    The immediate problem that springs to mind for me is that current graphics cards and APIs don't produce good shading effects when the geometry is turned down. Gouraud shading (color-per-vertex interpolated across the face of the triangle) is the best that hardware acceleration will handle right now, and turning down the number of vertices will lead to problems with detailed color operations under normal circumstances (complicated lighting/shadow effects, etc.)

    Shouldn't the industry be pushing further toward graphics cards that can accelerate true Phong shading, rather than shortcuts and texture mapping tricks? Or even automatic interpolation between meshes of different complexity depending on how much of the scene a particular model takes up? If that functionality was developed first, then this mesh optimization would make perfect sense. But, for now, anyway, it seems like getting rid of the geometry is going to force developers to continue to rely on tricks to get the best look out of their engines.

    Not that you'd HAVE to use it, though...

    C
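    The Gouraud/Phong difference is easy to demonstrate numerically. Below, Gouraud lights two vertices whose normals tilt 45 degrees apart and interpolates the resulting intensities, while Phong interpolates the normals and lights the midpoint (a toy specular-only sketch, with the view direction folded into the light direction for brevity):

```python
import math

def norm3(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def spec(normal, light=(0.0, 0.0, 1.0), power=32):
    # Specular intensity only; a higher power gives a tighter highlight
    n = norm3(normal)
    return max(0.0, sum(a * b for a, b in zip(n, light))) ** power

# Vertex normals tilted 45 degrees to either side of the light
n0, n1 = norm3((1.0, 0.0, 1.0)), norm3((-1.0, 0.0, 1.0))

# Gouraud: light at the vertices, then interpolate the colors
gouraud_mid = (spec(n0) + spec(n1)) / 2

# Phong: interpolate the normals, then light at the midpoint
phong_mid = spec(tuple((a + b) / 2 for a, b in zip(n0, n1)))

# Phong keeps the bright highlight between the vertices; Gouraud loses it
print(gouraud_mid, phong_mid)
```

    This lost-highlight effect is exactly why coarser geometry hurts more under per-vertex lighting than under per-pixel lighting.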

    • Phong shading isn't _that_ much better-looking than gouraud shading in most cases (diffusely reflecting surfaces). I'd say more effort should be expended looking into high-dynamic-range rendering, depth-of-field and focusing effects, global lighting and soft shadows, and better techniques for animating human characters.

      The last one, in particular, is my pet peeve. Game developers have been so obsessed with graphics over the past years that they have failed to notice that the graphics are actually becomi

    • Shouldn't the industry be pushing further toward graphics cards that can accelerate true Phong shading, rather than shortcuts and texture mapping tricks?

      No it shouldn't. At least not in the sense of special hardware. Modern cards use programmable shaders, and it's up to the programmer whether to spend limited GPU resources on Phong shading or on other effects which could be more important at the moment (for example spherical harmonics lighting).

      Or even automatic interpolation between meshes of different complexity de
  • This might be slightly off topic, but it seems to me that an idea very similar to this is already being used in development. What I am talking about is the new Unreal engine. From the videos I have seen, it seems like the technology strives to create complex surfaces without using many polygons. One of the examples they show in the game is a box with a complex grated surface which interacts with light and is shadowed appropriately, but when viewed in wireframe mode is simply a flat box made of very few
  • Has there been any word on licensing? Considering that JPEG and GIF are both subject to the whims of private groups (Joint Pictures Expert Group and Compuserve, respectively), it'd be nice to have a good free image format. I haven't "R"ed the "FA," so if my question's answered there I apologize.

    • Joint Pictures...
      You mean like XRays of people's knees?

      JPEG stands for Joint Photographic Experts Group, which is an open committee. The standard is royalty free.
      GIF is owned by Unisys, but its patent expired in June 2003.
  • I've been using 3D Studio for about 12 years. I can't remember when this type of triangle reduction feature came in, but 3DS had it.

    It would basically reduce the number of triangles more where together they made flattish surfaces, and practically not touch the triangles that made up significant details.

    "Mathieu Desbrun, assistant professor of computer science at the USC Viterbi School of Engineering says that digital sound, pictures and video are relatively easy to compress today but that the complex files
    • This is really hyped. This is not compression in the sense of MP3, where you have to decode it. It's just replacing lots of small triangles that make up a flattish surface with fewer large triangles or polygons. Big deal!

      Uh... using your analogy, DCT+quantization based video compression is just replacing lots of different frequencies of similar magnitudes with one magnitude. Transforms aren't necessary for compression, especially if the input data is already in a somewhat analyzed state, like triangle ver
    • Nice rant. See my earlier post [slashdot.org] on why this is new and cool.

      "It has a strong formal basis. You can make up extreme cases that will trick it, but for ordinary shapes, it works remarkably well."

      Cool, Shrek 3 will be nothing but primitives! Move along, nothing to see here...

      Ordinary != primitives. Ordinary = things you generally find in reality. That would be faces, bodies, hands, everyday objects like trees, toasters and television sets...

      The technique is borrowed from machine learning (which is my cur
  • I couldn't tell from the article. To have an algorithm is nice, to have an efficient one is nicer. I will get excited when I see some benchmarks or at least a time analysis of it.
  • but the level of detail for a 3d mesh is affected by how close you will end up zooming in. This can be tweaked by using vertex normals to smooth out a mesh, but the loss of detail for this sort of compression is a pretty risky tradeoff.

    Can it reliably restore the level of detail after compression? How does it handle animated objects vs static objects? What is the intended use for this compression?

    Still, it is interesting enough to warrant a closer look, I suppose.

    END COMMUNICATION
  • by davenz (788969) on Thursday June 17, 2004 @01:10AM (#9449264)
    is probably not going to be seen by the end user in games or movies or otherwise; as has been noted, 3d models are already as low-poly as they can be. The only use that comes to mind is in the area of scanning real models into computers, which outputs huge files and many, many polys. This is where an algorithm like this would be very useful to get a model that can be used without overtaxing system resources.
  • by t_aug (649093) on Thursday June 17, 2004 @01:17AM (#9449289)
    I get the feeling this technique won't be so useful for what most people consider to be CAD. That is, defining the size and shape of parts (a la Pro/Engineer, Catia or the like). The part of CAD that I feel may benefit is Finite Element Analysis (encompassed by the phrase: computer aided design). Meshes of 3D shapes can get VERY complex VERY fast, and this complexity has to be stored in large files. The hangup is probably that this technique was developed to retain visual similarity. That doesn't mean that the data it retains will provide a good numerical solution.
  • There's already a process for taking a 3D world, flattening all of the surfaces, removing all of the surfaces that are not within the view of the camera so that they are no longer included, and then compressing the result...

    It's called rasterizing... the process of taking a 3D world down to a 2D image.
  • Doah... (Score:2, Troll)

    by MacWiz (665750)
    "Just as MP3s remove high frequencies we can't hear..."

    This is the single most idiotic comment I've heard this year.
  • Splines (Score:2, Interesting)

    by Bones3D_mac (324952)
    This is nice and all, but is it really necessary now that curved surfaces can be accurately represented by splines? While it may require more data per point in the model, the total number of points in a spline-based model is already far lower than the number of vertices needed to create the same model using polygons.

    I can't imagine why else anyone would need a high density of vertices unless they were trying to represent a curved surface.
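The point about splines can be made concrete with a quick sketch: a cubic Bézier segment stores only four control points, yet can be tessellated at whatever vertex count the renderer needs. (This is just an illustration of the parent's argument, not anything from the article; the function names are made up.)

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate one 2D cubic Bezier segment at parameter t in [0, 1]."""
    s = 1.0 - t
    x = s**3 * p0[0] + 3 * s**2 * t * p1[0] + 3 * s * t**2 * p2[0] + t**3 * p3[0]
    y = s**3 * p0[1] + 3 * s**2 * t * p1[1] + 3 * s * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Four control points describe the whole curve...
ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]
# ...but we can tessellate to any vertex count the renderer needs.
samples = [cubic_bezier(*ctrl, t=i / 100) for i in range(101)]
print(len(ctrl), len(samples))  # prints: 4 101
```

The storage-versus-resolution tradeoff the parent describes falls out directly: the four control points are the model; the 101 sampled vertices are a rendering decision.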
  • It's about time someone came up with lossy compression for 3D graphics.

    Given advances in bump mapping, texture mapping and antialiasing, I feel I could live with a few less detailed polygons in exchange for faster download times. This could have a big impact on blue vs red, no?

    I think this question harks back to the old argument about levels of detail. Should we spend an extra 1000 CPU cycles giving counterstrike bots a more refined nose, or spend it on more advanced AI?

    I think the nose job camp is winning
  • Could the same techniques be used to reduce mouse/pen gestures so that things like joined up handwriting recognition become possible
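The 2D analogue of this idea already exists: polyline simplification, e.g. the classic Ramer-Douglas-Peucker algorithm, drops stroke points that don't change the shape beyond a tolerance. A minimal sketch (not the article's algorithm, just the gesture-reduction idea the parent is asking about):

```python
import math

def _point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def simplify(stroke, tol):
    """Keep only points that deviate more than tol from the endpoint chord."""
    if len(stroke) < 3:
        return list(stroke)
    dists = [_point_line_dist(p, stroke[0], stroke[-1]) for p in stroke[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tol:
        return [stroke[0], stroke[-1]]   # whole span is nearly straight
    left = simplify(stroke[:i + 1], tol)
    return left[:-1] + simplify(stroke[i:], tol)

# A wobbly but basically straight pen stroke collapses to its two endpoints.
stroke = [(x, 0.01 * (x % 2)) for x in range(20)]
print(simplify(stroke, tol=0.1))  # prints: [(0, 0.0), (19, 0.01)]
```

Handwriting recognizers do use this kind of resampling/simplification as a preprocessing step, so the intuition in the parent comment is sound.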
  • Um... Wow. According to the article the only thing they are doing is running over the meshes and lowering the polygon count to reduce filesize... This technique has been around for a very long time.

    If you really want to lower the file size of large, complex 3d models, you should be looking into things such as using Bézier paths to describe curved surfaces, which also has the advantage of providing infinite levels of detail and thus can be dynamically scaled depending on the power of the rendering engine b

  • by Arkaein (264614) on Thursday June 17, 2004 @12:01PM (#9453221) Homepage
    Because the linked article was a little light on details, and because 90% of the posts in this discussion show very little understanding of what techniques exist in 3D mesh optimization, I thought I'd actually skim the paper (linked to in an above comment) and describe a summary of why this new technique is innovative. I studied the basics of Computer Graphics while working for my BS in CS and worked for several years on a project where I wrote code to triangulate and decimate (i.e. reduce the triangle count of) range data, so I do have an idea of what I'm talking about here.

    First of all, as many posts have stated there are quite a few algorithms out there for mesh optimization. Two of the classic techniques were developed by Schroeder and Turk.
    Schroeder's method [columbia.edu] (PDF) is fast and is able to reuse a subset of the original vertices, but the quality is not great. Essentially, the mesh is simplified mainly by collapsing edges (eliminating two triangles for each edge collapsed) in the flattest parts of the mesh.
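The core move in Schroeder-style decimation is the edge collapse: merge one endpoint of an edge into the other, and the triangles sharing that edge degenerate and vanish. A toy sketch on an indexed triangle mesh (error metrics, flatness tests and collapse ordering are omitted; names are illustrative, not from the paper):

```python
def collapse_edge(triangles, keep, remove):
    """Merge vertex `remove` into vertex `keep`; drop triangles that degenerate."""
    out = []
    for tri in triangles:
        tri = tuple(keep if v == remove else v for v in tri)
        if len(set(tri)) == 3:   # triangles that used both endpoints collapse away
            out.append(tri)
    return out

# A small fan: triangles (0,1,2) and (1,3,2) share edge (1, 2).
mesh = [(0, 1, 2), (1, 3, 2), (2, 3, 4)]
print(collapse_edge(mesh, keep=1, remove=2))  # prints: [(1, 3, 4)]
```

Each collapse removes one vertex and (in the interior of a mesh) two triangles, which is why repeatedly collapsing edges in flat regions shrinks the mesh quickly.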

    Turk's method [gatech.edu] (PDF) is more accurate, but cannot reuse the original vertices. Basically a new set of vertices are scattered across the original surface, forced to spread out from their neighbors. The amount of local spreading or repulsion is determined using local curvature, allowing greater point density where curvature and therefore detail is high. A new mesh is generated through these points using the original as a guide.

    Further work has been done to create progressively decimated meshes, much like progressive JPEG images work. A model sent over the web could be displayed in low resolution very quickly while the bulk of the geometry is still in transit. Methods for this tend to be closer to Schroeder's approach because obviously it is desirable to reuse the existing data at each level of representation.

    This new method is quite a bit different. It clusters triangles into patches that can be represented simply. These patches are optimized iteratively. Finally a new mesh is created, using the patches as partitions and reusing vertices where the partitions meet.

    Some benefits to this method:
    • High Accuracy: The total surface deviations are small, and the partitions fit very well to the contours of the original surface
    • Speed: the method is apparently reasonably fast, though not as fast as greedy methods
    • Ability to allow user interaction for variable refinement of specific regions, without requiring it in general cases
    • Iterative process means that in time constrained situations a time/quality tradeoff can be made without modifying the algorithm
    • Possible future applications in animation and simulation by introducing a time variable into the partitioning process
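To make the "cluster triangles into nearly-flat patches" step concrete, here is a greedy stand-in sketch: each triangle joins an existing patch if its face normal is within an angle tolerance of that patch's seed normal. (The paper's method iteratively optimizes the partition rather than assigning greedily; this only illustrates the partitioning idea, and all names are made up.)

```python
import math

def normal(a, b, c):
    """Unit face normal of triangle (a, b, c) in 3D."""
    ux, uy, uz = (b[k] - a[k] for k in range(3))
    vx, vy, vz = (c[k] - a[k] for k in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    m = math.sqrt(sum(x * x for x in n))
    return tuple(x / m for x in n)

def cluster_by_normal(tris, max_angle_deg=10.0):
    """Greedily group triangles whose normals agree within max_angle_deg."""
    cos_tol = math.cos(math.radians(max_angle_deg))
    patches = []  # list of (seed_normal, [triangle indices])
    for i, tri in enumerate(tris):
        n = normal(*tri)
        for seed, members in patches:
            if sum(a * b for a, b in zip(n, seed)) >= cos_tol:
                members.append(i)
                break
        else:
            patches.append((n, [i]))
    return [members for _, members in patches]

# Two coplanar floor triangles and one vertical wall triangle:
floor1 = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
floor2 = ((1, 0, 0), (1, 1, 0), (0, 1, 0))
wall   = ((0, 0, 0), (0, 0, 1), (0, 1, 0))
print(cluster_by_normal([floor1, floor2, wall]))  # prints: [[0, 1], [2]]
```

Once triangles are grouped into near-planar patches like this, each patch can be re-meshed with far fewer triangles, which is where the size reduction comes from.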

    To me the potential animation capabilities and optional interactivity sound most interesting. Accurate decimation methods are already available that work well offline, and faster methods are available for online LOD management. Merging decimation with animation could lead to higher quality, lower computational cost 3D animation. Allowing high interactivity could help artists improve the aesthetics of scanned artifacts.
