Mesh Compression for 3D Graphics
IanDanforth writes "A new algorithm that uses successive approximations of detailed models to get significant compression has been revealed by researchers at The University of Southern California. Just as MP3s remove high frequencies we can't hear, this algorithm removes the extra triangles in flat or near-flat surfaces that we can't see. Experts in the field are giving this work high praise and imply that it will be immediately applicable to 3D modeling in games, movies, CAD and more."
Excellent! (Score:2, Funny)
Re:Excellent! (Score:5, Funny)
Proliferation of 3D Content on the Web? (Score:5, Interesting)
Re:Proliferation of 3D Content on the Web? (Score:5, Interesting)
I'd guess the bandwidth would really be taxed by the transmission of bitmaps used for textures. That won't be helped by removing triangles from the model.
I expect any acceleration would be in the processing on your computer. The CPU and/or GPU would have less work to do, because of the reduced number of triangles to render. So your game gets a higher frame rate, and/or uses fewer cycles, or can perform faster with less powerful hardware.
The real reason 3D content hasn't taken off is that it frankly isn't very useful for every-day browsing.
Re:Proliferation of 3D Content on the Web? (Score:5, Insightful)
Just wait until the porno industry gets involved. Imagine being able to freeze frame and get Matrix-like fly arounds of the money shot.
Seriously, my first jpgs and gifs were of porno. Not schematics, or technical info. But big bouncing boobies. I'd be willing to bet that most of you who go back to the 1980s or before had a similar experience. Or how about streaming video? Porno and Mac World expos were the first streaming videos that I ever heard about. If this type of thing is going to take off it'll be because of smut. Sad isn't it?
LK
Re:Proliferation of 3D Content on the Web? (Score:5, Funny)
Re:Proliferation of 3D Content on the Web? (Score:4, Insightful)
It's not a sad thing, it's a great thing. The fact that the content is what it is, is unimportant; what counts is that there's an industry out there that's willing to "do things right" the first time, rather than be dragged kicking and screaming.
Uh (Score:3, Insightful)
Nice random MPAA/RIAA dig there (is it all Slashdotters think about anymore that they have to interject it at every opportunity?), but the fact is that there have been several articles in the past five years about how the porn industry is worried about P2P because it pirates their material. Ever done a search on eMule to s
Re:Proliferation of 3D Content on the Web? (Score:3, Insightful)
Most of the girls you see in porn movies and pictures aren't there because they really enjoy doing porn.
They are probably there because at first they needed money (porn pays well), and started out by doing some non-nude or semi-nude pictures, then they just got tangled up in all of it.
I don't have statistics or anything, but honestly, do you think a lot of women just decide one day that they want to receive anal sex from
No. (Score:5, Informative)
This is about reducing the complexity of meshes so that they can render faster.
Classic problem in computer graphics (Score:3, Informative)
Anyway, it's a pretty old problem in graphics. The USC press release that prompted this slashdot story is simpl
No. Re:No. Re:No. (Score:5, Informative)
Skimming the article, this just seems to be polygon aggregation on the model ( not HSR, which is certainly not what grandparent was implying anyway ). It's certainly not a method for compressing the stored mesh, it's just discarding arguably redundant detail.
Desbrun explains that his accomplishment was to simplify such a mesh, by combining as many of the little triangles as possible into larger elements without compromising the actual shape. Nearly flat regions are efficiently represented by one large, flat mesh element while curved regions require more mesh elements.
( My emphasis ). I was pretty sure this was nothing new, although I'm sure a general case algorithm, let alone a fast and accurate general case would be novel. But I was writing polygon aggregation code for my undergraduate computer graphics subjects ( much simpler meshes though ), and I would expect anyone with any CSG education to not confuse the subject matter with an actual storage optimisation.
Re:No. Re:No. Re:No. (Score:2)
What have they done that's different, I wonder? Hopefully something that does a LOT better job of keeping the shape with respect to the number of polys removed, and/or leaves a more workable mesh. The mess Optimize leaves sometimes is atrocious if you want to do more work on a model.
Mycroft
Re:Proliferation of 3D Content on the Web? (Score:2)
Wasn't GIF a format developed by Compuserve, as in predating the commercial development of the Internet? I thought that web servers and browsers had it already once the US government opened up the Internet to commercial uses (IIRC, 1993?).
Re:Proliferation of 3D Content on the Web? (Score:2, Informative)
Re:Proliferation of 3D Content on the Web? (Score:2, Informative)
Re:Proliferation of 3D Content on the Web? (Score:2)
I'm not sure that's the problem. (Score:3, Interesting)
Bruce
3D Content could have proliferated long ago (Score:2, Informative)
I could have sworn that someone came up with a format for streaming 3D on the web ages ago. No, not VRML, something else. I just tried to do a Google search for it, but came up with too many results [google.com]. It was supposed to allow 3D content on the web to take off as well.
VRML was supposed to do that, for that matter, and has been around since around 1996. I think 3D has never really taken off on the web because of the way you have to navigate through 3D worlds. I recall navigating through VRML was a real pain
Re:3D Content could have proliferated long ago (Score:2)
IIRC it was a very impressive piece of technology. It could be placed in an html layer and was transparent with a nice drop shadow over whatever content was beneath it. It used a binary format to deliver the actual models, then an XML file defined how they should
Re:slow connections (Score:4, Interesting)
Today you can get a cable modem connection at 5 Mb down.
You can watch multiple mp4 video/audio streams at this speed - so why not one 3D model?
Re:slow connections (Score:3, Insightful)
Because we're all still using 2D cameras and monitors... and that's the real hold-up in 3D content production. Things like QuickTime VR have been around for years, but haven't really caught on because they're not easy to make content with and the results are not exactly stunning sometimes.
Patented? (Score:5, Interesting)
Re:Patented? (Score:4, Insightful)
Re: (Score:2)
Re:Patented? (Score:2)
Re:That's one of two questions I have (Score:2)
Whether it will be used like that I don't know. But seeing as how 3D software has already had options to reduce poly count based on detail level for quite some time, I don't see how it hasn't already been used, unless this is some significant improvement on what is already done.
Mycroft
MP3 compression == complicated (Score:5, Informative)
The MP3 compression routine revolves around 'frequency masking' much more than it does "remov[ing] high frequencies we can't hear". That is, imagine a graph of the frequencies being played at any given time- find the high points, then draw sloping lines down to either side of those points. Humans can't hear anything under those lines- they're 'masked' by the nearby strong frequency.
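In rough Python, the masking curve works something like this. This is only a toy illustration of the idea: the 10 dB-per-bin slope and the whole data layout are invented, not the real MP3 psychoacoustic model.

```python
# Toy frequency masking: each strong spectral peak raises the audibility
# threshold for nearby bins, falling off linearly with distance.
# The slope value is invented for illustration.
def masking_threshold(spectrum, slope_db_per_bin=10.0):
    thresholds = []
    for i in range(len(spectrum)):
        best = float("-inf")
        for j, level in enumerate(spectrum):
            best = max(best, level - slope_db_per_bin * abs(i - j))
        thresholds.append(best)
    return thresholds

def audible(spectrum):
    """Drop components that fall under the masking curve of a neighbour."""
    thr = masking_threshold(spectrum)
    return [lvl if lvl >= thr[i] else None for i, lvl in enumerate(spectrum)]

# The weak 10 dB and 5 dB components hide under the 80 dB peak's slope.
print(audible([80, 10, 5, 70]))
```

Only the components that poke above every neighbour's masking slope survive; everything else can be quantized away without the listener noticing.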
Nothing very much like that goes on in this algorithm. There might be some other mesh-compression-analogous process that goes on in MP3 that's like this, but that ain't it.
Sorry to nitpick, but I figured it's important that
1. MP3 compression is not simply throwing out high frequencies (a lot of these are actually retained) and
2. This isn't anything analogous to that, anyway.
Looking over my post, I'd have been fine if the submitter had said "Just as MP3s remove frequencies we can't hear, this algorithm removes..." but that's not very descriptive anyway.
RD
Re:MP3 compression == complicated (Score:5, Interesting)
That achieves compression effectively by recreating a high bandwidth audio stream from a low bitrate stream of tokens.
A thought I had years ago is:
3-D imaging via raytracing can be thought of as one of the most aggressive forms of compression, in that you represent a fantastically complex high-bitrate stream (i.e. The World, or at least the 3-D scene in question) with a very small (usually under 1K) stream of "tokens" (the raytracer's command repertoire). That "compresses" billions of voxels of 3-D space into a tiny scene description stream, and vice-versa during "decompression".
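The arithmetic is easy to make concrete. The scene string and volume size below are arbitrary examples, not taken from any real renderer:

```python
# A raytracer scene description as extreme "compression": a ~47-byte
# POV-Ray-style line stands in for a full voxelisation of the same scene.
scene_description = "sphere { <0, 1, 2>, 0.5 pigment { color Red } }"

voxel_bytes = 256 ** 3  # a modest 256^3 volume at one byte per voxel
description_bytes = len(scene_description.encode("utf-8"))
ratio = voxel_bytes / description_bytes

print(f"{voxel_bytes} voxel bytes vs {description_bytes} bytes of scene text "
      f"(~{ratio:,.0f}:1)")
```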
Re:MP3 compression == complicated (Score:3, Interesting)
He's not just throwing out high definition data either (which would be a poor compression algorithm). He's finding a seed point, and then trying to build the largest flat surface that masks the underlying points, because they don't really give much detail anyways (not always true).
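That seed-and-grow idea can be sketched as a region-growing pass over triangle normals. The data layout and the 0.95 cosine tolerance below are invented for illustration, not taken from the paper:

```python
# Grow the largest nearly-flat patch from a seed triangle: accept a
# neighbouring triangle while its normal stays close to the seed's.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def grow_flat_patch(normals, adjacency, seed, cos_tolerance=0.95):
    patch, frontier = {seed}, [seed]
    while frontier:
        tri = frontier.pop()
        for nbr in adjacency[tri]:
            if nbr not in patch and dot(normals[nbr], normals[seed]) >= cos_tolerance:
                patch.add(nbr)
                frontier.append(nbr)
    return patch

# Triangles 0-2 are nearly coplanar; triangle 3 faces a different way.
normals = [(0, 0, 1), (0, 0, 1), (0.01, 0, 0.9999), (1, 0, 0)]
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(grow_flat_patch(normals, adjacency, 0))  # triangle 3 is excluded
```

As the parent notes, "doesn't give much detail" isn't always true, which is why the tolerance matters.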
Re:MP3 compression == complicated (Score:2)
This has nothing to do with MP3 compression. This is a direct result of the way sampling works, as proven by the Nyquist theorem [wikipedia.org] (as referred to in your aliasing link.)
MP3, on the other hand, takes a sampled signal and applies psychoacoustic encoding, removing stuff we effectively can't hear, as stated by the grandparent post.
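The sampling point is easy to demonstrate: a tone above half the sample rate produces samples indistinguishable (up to sign) from a lower-frequency tone. The frequencies here are arbitrary:

```python
import math

# A 7 kHz tone sampled at 10 kHz aliases: its samples mirror a 3 kHz tone,
# since sin(2*pi*0.7*i) = sin(2*pi*i - 2*pi*0.3*i) = -sin(2*pi*0.3*i).
def sample_tone(freq_hz, rate_hz, n):
    return [math.sin(2 * math.pi * freq_hz * i / rate_hz) for i in range(n)]

high = sample_tone(7000, 10000, 20)
low = sample_tone(3000, 10000, 20)
mirrored = all(abs(h + l) < 1e-9 for h, l in zip(high, low))
print("7 kHz aliases onto 3 kHz:", mirrored)
```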
CAD??? ;-) (Score:5, Funny)
How would you like to fly a plane designed without those thin "thingies" called "wings"?
Paul B.
Re:CAD??? ;-) (Score:3, Funny)
Re:CAD??? ;-) (Score:2)
Re:CAD??? ;-) (Score:2, Interesting)
Does this mean that we are going to see crappy 3D-game modellers making hi-poly objects and simply running them through a little wizard to "make 'em good for the game"? =)
Guess I'm just bitter about not working with CGI yet.
This has been around for many years. (Score:5, Interesting)
Re:This has been around for many years. (Score:2)
Re:This has been around for many years. (Score:2)
Yes, I RTFA, and I don't see how this is such a big deal. Now, if I could reduce face count by 90% with no loss of detail...
k.
Re:This has been around for many years. (Score:5, Informative)
If you actually read it, it would be pretty obvious why this is new...sheesh!
Also, game data is built of far fewer triangles and in a much easier form than raw data read from a real-life source (such as a laser range finder). LOD mesh reduction is usually done by full or partial MANUAL selection.
Re:This has been around for many years. (Score:5, Informative)
Wow. That's pretty far from what "NP hard" actually means.
seems vaguely embarrassing (Score:2)
I was going to give them a friendly heads-up that they're publishing information most undergraduates in the field know to be flatly wrong, but I couldn't find a relevant contact address.
Re:This has been around for many years. (Score:4, Informative)
The concept of lossy compression of 3D models might not be new, but that doesn't mean that the method for doing it isn't.
Also, even if the problem were trivial for 2 dimensions, it wouldn't necessarily be so in 3. The 2 body problem has a simple solution; the 3 body problem has no solution in elementary functions. Random walks are recurrent in 1 and 2 dimensions but transient in 3 or more. I can think of several other mathematical examples where the difference between 2 and 3 dimensions (or 2 and 3 objects) changes things completely.
Don't judge unless you know you understand the subtleties of this algorithm compared to others
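The random-walk claim is easy to check empirically with a quick Monte Carlo run (step and trial counts here are chosen arbitrarily):

```python
import random

# Fraction of simple random walks that return to the origin within `steps`
# steps: high in 1D, noticeably lower in 3D (Polya's recurrence theorem).
def return_rate(dim, steps=200, trials=500, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pos = [0] * dim
        for _ in range(steps):
            pos[rng.randrange(dim)] += rng.choice((-1, 1))
            if not any(pos):
                hits += 1
                break
    return hits / trials

print("1D:", return_rate(1), " 3D:", return_rate(3))
```

The 1D rate comes out above 90% while the 3D rate sits around Polya's ~34% eventual-return probability.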
That is why it is news. (Score:4, Informative)
Also, for all those questioning its usefulness, you need not look any further than 3D scanning. When it comes to detailed models, very few things are done from scratch; instead they are digitized using one of many scanning techniques. This model is then massaged by hand by an artist. This technique would allow you to get a much better first cut, saving time for the artists.
Lastly, Quake and others generated meshes from smooth NURBS objects. This is quite different, and much easier, than generating one mesh object from another. Those techniques are not useful for scanned objects where you start with a dense mesh object.
Re:3D scanning methods (Score:2)
Re:This has been around for many years. (Score:2)
Greatness! (Score:3, Funny)
I am ask: can use this games? UT2k4 is good. It is very big game however maybe some for people.
Can this technology fast enough for gaming be?
Re:Greatness! (Score:5, Funny)
Slashdot's using lossy compression on posts now?
Re:Greatness! (Score:2)
Link to publication (Score:5, Informative)
The actual paper can be downloaded from here [usc.edu].
-jim
Abstract (Score:3, Informative)
David Cohen-Steiner, Pierre Alliez, and Mathieu Desbrun
To appear, ACM SIGGRAPH '04.
Abstract: Achieving efficiency in mesh processing often demands that overly verbose 3D datasets be reduced to more concise, yet faithful representations. Despite numerous applications ranging from geometry compression to reverse engineering, concisely capturing the geometry of a surface remains a tedious task. In this paper, we present both theoretical and practical contributions that result in a novel and versatile fram
Re:Link to publication (Score:2, Funny)
Make $5250 Guaranteed!!! All you need is a PayPal account and $25. We'll do the rest. Click here to find out how. [flamingboard.com]
This is NOT new technology (Score:2)
Re:This is NOT new technology (Score:2)
Useful, but over stated... (Score:5, Informative)
This could come into more handy later if it is built into a renderer.
A subpixel displacement renderer that can nullify coplanar polys in this way (though there usually aren't that many in detailed organic objects) could speed things up quite a bit.
Re:Useful, but over stated... (Score:2, Insightful)
Re:Useful, but over stated... (Score:2, Interesting)
I don't know how many
uhh LightWave (Score:2)
How new is this (Score:3, Informative)
While the algo may be new, the idea certainly isn't. Direct3D has built-in support for optimized meshes, the ROAM algo http://gamasutra.com/features/20000403/turner_01.
A little skeptical, at least based on post (Score:2)
Not quite. The primary brunt of MP3 focuses on areas of repeated sound (which can easily be compressed). All of the MPEG codecs attempt to find areas where change is infrequent, then tell the system "from frame X to frame Y, don't change the vast majority of the sound/video".
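In its simplest form, that "don't change from frame X to frame Y" idea reduces to delta encoding against the previous frame. A toy sketch, not any real MPEG structure:

```python
# Store a frame only when it differs from its predecessor; otherwise store
# a "SAME" marker. Real codecs store partial deltas, but the idea is this.
def delta_encode(frames):
    encoded, prev = [], object()  # sentinel that equals nothing
    for frame in frames:
        encoded.append("SAME" if frame == prev else frame)
        prev = frame
    return encoded

def delta_decode(encoded):
    frames, prev = [], None
    for item in encoded:
        prev = prev if item == "SAME" else item
        frames.append(prev)
    return frames

frames = ["sky", "sky", "sky", "explosion", "sky", "sky"]
print(delta_encode(frames))
```

(A real frame literally equal to the marker would break this; production formats use out-of-band flags instead of magic values.)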
In the case of 3D graphics in particular, the image changes. Often. Actually, it's more like an action movie than anything else (Ever see the artifacts on a poor digital cable or
Hi (Score:2, Informative)
Re:A little skeptical, at least based on post (Score:3, Informative)
You're thinking of the video versions, which work the way you described (to my knowledge; they probably also do some perceptual stuff, but I'm
Re:A little skeptical, at least based on post (Score:3, Informative)
They do. Your eyes have better resolution when dealing with luminosity than colour, and also detect lower frequency changes better than high frequency ones. JPEG uses both these effects, as do all video compressors AFAIK.
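In one dimension the colour trick looks like this. A toy 4:2:2-style layout; real codecs subsample in 2D and convert colour spaces first:

```python
# Keep luminance at full resolution but store only one chroma sample per
# pair of pixels -- halving colour data with little visible loss.
def subsample(pixels):
    """pixels: list of (luma, chroma) tuples."""
    luma = [y for y, _ in pixels]
    chroma = [pixels[i][1] for i in range(0, len(pixels), 2)]
    return luma, chroma

def reconstruct(luma, chroma):
    """Each chroma sample is shared by the pair of pixels it came from."""
    return [(luma[i], chroma[i // 2]) for i in range(len(luma))]

pixels = [(10, 1), (12, 2), (20, 3), (22, 4)]
luma, chroma = subsample(pixels)
print(reconstruct(luma, chroma))
```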
Cheers,
Dave
Problems with lighting interpolation (Score:2, Interesting)
Impressive. (Score:4, Insightful)
Re:Impressive. (Score:2)
It does seem surprising, doesn't it?
I think this is one of those inventions that "anyone could have invented," but nobody ever did... which makes it all the more impressive, doesn't it? :)
Re:Impressive. (Score:2)
The only reason they are good is that sending a large texture once is a lot better than sending a ton of geometry to a video card every frame. It is unrealistic for storage- unless the model is extremely detailed the texture will probably be larger than the model.
Re:Impressive. (Score:2, Interesting)
Also, I believe ATI has a tool to do this as well.
It has been done before. (Score:2, Interesting)
I wrote a polyreducer for a game I worked on. It would take as input a mesh, bone data, and an input texture map, crunch over them for a few minutes, and spit out a mesh with fewer triangles (and a new texture map). It would have been easy to make it spit out a bump map as well, except we were targeting PS2 and a bump map would have taken another rendering pass.
Quite effective. We stripped about 25% of the triangles out of most models. I kinda wish I'd gotten time to ap
Many algorithms do this (Score:2, Informative)
The POVRay mesh format is a good place to start if you want to learn about triangle meshes. Check the povray.org site for lots of good info.
You can also do something si
That's No Icosahedron (Score:5, Funny)
Re:That's No Icosahedron (Score:2)
Well, even if no one else did, I considered that one of the most insightful (and funny) comments in this thread so far.
I did find some of the examples at the link amusing, however... Sure, it reduces polygon counts - But makes your spiffy model of a human head look like someone attacked a styrofoam hat form with a cheese-slicer.
This isn't new? (Score:5, Informative)
Yes, it is new. First of all, y'all need to read the article and find out how.
It is for two reasons, both of which are stated:
The Desbrun team's novel approach comes from the seemingly unrelated field of machine learning...
Machine learning: getting a computer to generalize (invent hypotheses) given data instances. Work in machine learning has proven that generalization and compression are equivalent. That someone has applied those ideas to 3D model compression is at least notable.
We believe this approach to geometry approximation offers both solid foundations and unprecedented results...
In other words, it's not using some hacked-up heuristics. The bias behind the generalizations it makes is solidly described, and can be tuned. Machine learning consistently beats heuristics in accuracy, so their expectation of "unprecedented results" has a good foundation.
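The clustering machinery behind methods like this is typically a Lloyd-style iteration. The paper itself fits planar proxies to patches of triangles; the 1D scalar version below only illustrates the assign-then-refit loop, and all the values are made up:

```python
# Lloyd iteration: assign each item to its nearest proxy, then refit each
# proxy as the mean of its cluster, and repeat until it settles.
def lloyd(values, proxies, iterations=10):
    for _ in range(iterations):
        clusters = [[] for _ in proxies]
        for v in values:
            nearest = min(range(len(proxies)), key=lambda i: abs(v - proxies[i]))
            clusters[nearest].append(v)
        proxies = [sum(c) / len(c) if c else p for c, p in zip(clusters, proxies)]
    return proxies

print(lloyd([0.0, 0.1, 0.2, 9.8, 9.9, 10.0], [0.0, 5.0]))
```

Two proxies started at 0 and 5 settle onto the two natural clusters, which is exactly the "generalize from data instances" behaviour the post describes.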
Re:This isn't new? (Score:3, Insightful)
We have hordes and hordes of "lightwave/maya/povray/myarse has had this for years" posts, some completely wrong understandings of MP3s, a few dozen Soviet Russias and Profit! posts, then this.
Modded +5, like everything else, but actually *genuinely* insightful and written with a confidence and succinctness that comes from knowing WTF you are talking about.
Jesus. Problem with Slashdot is that there'
Re:This isn't new? (Score:2)
I'd say multilevel meshes is a better answer... (Score:5, Interesting)
The immediate problem that springs to mind for me is that current graphics cards and APIs don't produce good shading effects when the geometry is turned down. Gouraud shading (color-per-vertex interpolated across the face of the triangle) is the best that hardware acceleration will handle right now, and turning down the number of vertices will lead to problems with detailed color operations under normal circumstances (complicated lighting/shadow effects, etc.)
Shouldn't the industry be pushing further toward graphics cards that can accelerate true Phong shading, rather than shortcuts and texture mapping tricks? Or even automatic interpolation between meshes of different complexity depending on how much of the scene a particular model takes up? If that functionality was developed first, then this mesh optimization would make perfect sense. But, for now, anyway, it seems like getting rid of the geometry is going to force developers to continue to rely on tricks to get the best look out of their engines.
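The difference between the two shading models comes down to what gets interpolated. A one-dimensional sketch with a minimal diffuse term and made-up normals:

```python
import math

def diffuse(normal, light_dir=(0.0, 1.0)):
    """Clamped dot product of unit vectors -- a minimal lighting model."""
    return max(normal[0] * light_dir[0] + normal[1] * light_dir[1], 0.0)

def midpoint(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

def normalize(v):
    n = math.hypot(*v)
    return tuple(c / n for c in v)

n0, n1 = (1.0, 0.0), (0.0, 1.0)  # vertex normals at an edge's endpoints

# Gouraud: light the vertices, interpolate the resulting colours.
gouraud_mid = (diffuse(n0) + diffuse(n1)) / 2
# Phong: interpolate the normals, then light per pixel.
phong_mid = diffuse(normalize(midpoint(n0, n1)))

print(gouraud_mid, phong_mid)  # the two models disagree mid-edge
```

With fewer vertices there are fewer lighting samples to interpolate between, so the gap between the two results grows, which is exactly the concern above.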
Not that you'd HAVE to use it, though...
C
Re:I'd say multilevel meshes is a better answer... (Score:2)
The last one, in particular, is my pet peeve. Game developers have been so obsessed with graphics over the past years that they have failed to notice that the graphics are actually becomi
Re:I'd say multilevel meshes is a better answer... (Score:2)
Similar Idea already in use... (Score:2, Interesting)
Licensing Concerns (Score:2)
Re:Licensing Concerns (Score:2)
You mean like XRays of people's knees?
JPEG stands for Joint Photographic Experts Group, which is an open committee. The standard is royalty free.
GIF is owned by Unisys, but it's patent expired in June 2003.
Is this maybe a little hyped? (Score:2, Insightful)
It would basically reduce the number of triangles more where they together made flattish surfaces and practically not touch the triangles that made up significant details.
"Mathieu Desbrun, assistant professor of computer science at the USC Viterbi School of Engineering says that digital sound, pictures and video are relatively easy to compress today but that the complex files
Re:Is this maybe a little hyped? (Score:2)
Re:Is this maybe a little hyped? (Score:3, Informative)
"It has a strong formal basis. You can make up extreme cases that will trick it, but for ordinary shapes, it works remarkably well."
Cool, Shrek 3 will be nothing but primitives! Move along, nothing to see here...
Ordinary != primitives. Ordinary = things you generally find in reality. That would be faces, bodies, hands, everyday objects like trees, toasters and television sets...
The technique is borrowed from machine learning (which is my cur
What is the time complexity of it? (Score:2)
This almost sounds good, (Score:2)
Can it reliably restore the level of detail after compression? How does it handle animated objects vs static objects? What is the intended use for this compression?
Still, it is interesting enough to warrant a closer look, I suppose.
END COMMUNICATION
the use of this technology... (Score:3, Insightful)
Here's one: Re:the use of this technology... (Score:2)
Re:the use of this technology... (Score:3, Informative)
Useful for CAD? yes/no (Score:4, Interesting)
Prior art... (Score:2)
It's called rasterizing... the process of taking a 3D world down to a 2D image.
Doah... (Score:2, Troll)
This is the single most idiotic comment I've heard this year.
Re:Doah... (Score:2)
You're new here, aren't you?
Splines (Score:2, Interesting)
I can't imagine why else anyone would need a high density of vertices unless they were trying to represent a curved surface.
Long Overdue (Score:2)
Given advances in bump mapping, texture mapping and antialiasing, I feel I could live with a few less detailed polygons in exchange for faster download time. This could have a big impact on blue vs red, no?
I think this question harks back to the old argument about levels of detail. Should we spend an extra 1000 CPU cycles giving counterstrike bots a more refined nose or spend it on more advanced AI?
I think the nose job camp is winning
gestures (Score:2)
Umm... wow... (Score:2)
If you really want to lower the file size of large, complex 3D models, you should be looking into things such as using bezier paths to describe curved surfaces, which also has the advantage of providing infinite levels of detail and thus can be dynamically scaled depending on the power of the rendering engine b
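The appeal in miniature: four control points describe a whole curve, and the renderer tessellates it at whatever density it can afford. The control points below are arbitrary:

```python
# Evaluate a cubic Bezier curve from its four control points, then sample
# it at any resolution on demand -- resolution-independent geometry.
def cubic_bezier(p0, p1, p2, p3, t):
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def tessellate(p0, p1, p2, p3, segments):
    return [cubic_bezier(p0, p1, p2, p3, i / segments)
            for i in range(segments + 1)]

# The same 4-point curve at coarse and fine density.
coarse = tessellate((0, 0), (0, 1), (1, 1), (1, 0), 4)
fine = tessellate((0, 0), (0, 1), (1, 1), (1, 0), 64)
print(len(coarse), len(fine), coarse[0], coarse[-1])
```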
A Summary for the Uninitiated (Score:3, Informative)
First of all, as many posts have stated, there are quite a few algorithms out there for mesh optimization. Two of the classic techniques were developed by Schroeder and Turk.
Schroeder's method [columbia.edu] (PDF) is fast and is able to reuse a subset of the original vertices, but the quality is not great. Essentially, the mesh is simplified mainly by collapsing edges (eliminating two triangles for each edge collapsed) in the flattest parts of the mesh.
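A minimal version of that collapse step (the mesh layout here is invented, and real decimators also choose which edge to collapse using an error metric):

```python
# Collapse edge (a, b): move a to the edge midpoint, retarget b's triangles
# to a, and drop triangles containing both endpoints (now degenerate).
def collapse_edge(vertices, triangles, a, b):
    vertices = list(vertices)
    vertices[a] = tuple((p + q) / 2 for p, q in zip(vertices[a], vertices[b]))
    kept = []
    for tri in triangles:
        if a in tri and b in tri:
            continue
        kept.append(tuple(a if v == b else v for v in tri))
    return vertices, kept

verts = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.0), (1.0, -1.0), (3.0, 1.0)]
tris = [(0, 1, 2), (0, 1, 3), (1, 2, 4)]
new_verts, new_tris = collapse_edge(verts, tris, 0, 1)
print(new_tris)  # two triangles removed, one retargeted
```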
Turk's method [gatech.edu] (PDF) is more accurate, but cannot reuse the original vertices. Basically, a new set of vertices is scattered across the original surface, forced to spread out from their neighbors. The amount of local spreading or repulsion is determined using local curvature, allowing greater point density where curvature, and therefore detail, is high. A new mesh is generated through these points using the original as a guide.
Further work has been done to create progressively decimated meshes, much like progressive JPEG images work. A model sent over the web could be displayed in low resolution very quickly while the bulk of the geometry is still in transit. Methods for this tend to be closer to Schroeder's approach because obviously it is desirable to reuse the existing data at each level of representation.
This new method is quite a bit different. It clusters triangles into patches that can be represented simply. These patches are optimized iteratively. Finally a new mesh is created, using the patches as partitions and reusing vertices where the partitions meet.
Some benefits to this method:
To me the potential animation capabilities and optional interactivity sound most interesting. Accurate decimation methods are already available that work well offline, and faster methods are available for online LOD management. Merging decimation with animation could lead to higher quality, lower computational cost 3D animation. Allowing high interactivity could help artists improve the aesthetics of scanned artifacts.
Re:Nice add-on to 3d movies (Score:3, Informative)
Re:Nice add-on to 3d movies (Score:2)
What this would do in the case of Shrek 3 et al is reduce how long it takes to pre-render the movie, and possibly reduce the quality of the render. Assuming the level of 'compression' used is settable like JPEG's, then it might be possible to save some render time on background objects and less important details to spend more time on the important parts. Movie
Re:Not Written by the Scientists (Score:2)
"Informally this class can be described as containing the decision problems that are at least as hard as any problem in NP."
(Has anyone else been impressed lately at the quality of CS- and math-related pages at Wikipedia?)
What the article described was pretty close to Turing-undecidable (though it flubbed that, too), which is much different. Of course, explaining NP-hard to someone unfamiliar with complexity theory is