Mesh Compression for 3D Graphics 297
IanDanforth writes "A new algorithm that uses successive approximations of detailed models to achieve significant compression has been revealed by researchers at the University of Southern California. Just as MP3s remove high frequencies we can't hear, this algorithm removes the extra triangles in flat or near-flat surfaces that we can't see. Experts in the field are giving this work high praise and imply that it will be immediately applicable to 3D modeling in games, movies, CAD and more."
Proliferation of 3D Content on the Web? (Score:5, Interesting)
Patented? (Score:5, Interesting)
This has been around for many years. (Score:5, Interesting)
Re:slow connections (Score:4, Interesting)
Today you can get a cable modem connection at 5 Mbps down.
You can watch multiple MP4 video/audio streams at that speed - so why not one 3D model?
Problems with lighting interpolation (Score:2, Interesting)
Re:Proliferation of 3D Content on the Web? (Score:5, Interesting)
I'd guess the bandwidth would really be taxed by the transmission of bitmaps used for textures. That won't be helped by removing triangles from the model.
I expect any acceleration would be in the processing on your computer. The CPU and/or GPU would have less work to do, because of the reduced number of triangles to render. So your game gets a higher frame rate, and/or uses fewer cycles, or can perform faster with less powerful hardware.
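A rough back-of-envelope comparison makes the parent's point about textures dominating bandwidth concrete. All the figures here are assumptions chosen for illustration (triangle count, index width, texture size), not numbers from the article:

```python
# Back-of-envelope: bytes for a mesh vs. bytes for one texture.
# All figures are illustrative assumptions, not measurements.
tris = 10_000                             # assumed model complexity
verts = tris // 2                         # rough vertex count for a closed mesh
mesh_bytes = verts * 12 + tris * 3 * 2    # 3 x 32-bit floats/vertex + 3 x 16-bit indices/tri
tex_bytes = 1024 * 1024 * 3               # one uncompressed 1024x1024 RGB texture
ratio = tex_bytes / mesh_bytes            # the texture is far larger than the geometry
```

Even a fairly heavy mesh comes in around a hundred kilobytes, while a single uncompressed texture is megabytes - so shaving triangles helps the GPU far more than it helps the download.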
The real reason 3D content hasn't taken off is that it frankly isn't very useful for every-day browsing.
I'd say multilevel meshes is a better answer... (Score:5, Interesting)
The immediate problem that springs to mind for me is that current graphics cards and APIs don't produce good shading effects when the geometry is turned down. Gouraud shading (color-per-vertex interpolated across the face of the triangle) is the best that hardware acceleration will handle right now, and turning down the number of vertices will lead to problems with detailed color operations under normal circumstances (complicated lighting/shadow effects, etc.)
Shouldn't the industry be pushing further toward graphics cards that can accelerate true Phong shading, rather than shortcuts and texture mapping tricks? Or even automatic interpolation between meshes of different complexity depending on how much of the scene a particular model takes up? If that functionality were developed first, then this mesh optimization would make perfect sense. But for now, anyway, it seems like getting rid of the geometry is going to force developers to continue relying on tricks to get the best look out of their engines.
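The Gouraud/Phong distinction the parent describes can be shown in a few lines. This is a minimal sketch of the two interpolation orders for a single sample point inside a triangle (simple Lambertian diffuse only; function names and the barycentric-sample setup are mine):

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / l for c in v)

def lambert(normal, light_dir):
    # Diffuse intensity: clamped dot product of unit normal and unit light.
    return max(0.0, sum(a * b for a, b in zip(normalize(normal), normalize(light_dir))))

def gouraud(bary, vertex_normals, light_dir):
    # Gouraud: light each VERTEX, then interpolate the resulting intensities.
    intensities = [lambert(n, light_dir) for n in vertex_normals]
    return sum(w * i for w, i in zip(bary, intensities))

def phong(bary, vertex_normals, light_dir):
    # Phong: interpolate the NORMALS, then light the interpolated normal.
    n = tuple(sum(w * nrm[i] for w, nrm in zip(bary, vertex_normals)) for i in range(3))
    return lambert(n, light_dir)
```

With vertex normals tilted away to either side and the light straight overhead, Phong recovers full brightness at the triangle's center while Gouraud averages it away - exactly the detail that gets lost when vertices are removed.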
Not that you'd HAVE to use it, though...
C
Similar Idea already in use... (Score:2, Interesting)
Re:MP3 compression == complicated (Score:5, Interesting)
That achieves compression effectively by recreating a high-bandwidth audio stream from a low-bitrate stream of tokens.
A thought I had years ago is:
3-D imaging via raytracing can be thought of as one of the most aggressive forms of compression, in that you represent a fantastically complex high-bitrate stream (i.e. The World, or at least the 3-D scene in question) with a very small (usually under 1K) stream of "tokens" (the raytracer's command repertoire). That "compresses" billions of voxels of 3-D space into a tiny scene description stream, and vice-versa during "decompression".
Re:This has been around for many years. (Score:1, Interesting)
Useful for CAD? yes/no (Score:4, Interesting)
Re:Impressive. (Score:2, Interesting)
Also, I believe ATI has a tool to do this as well.
Re:CAD??? ;-) (Score:2, Interesting)
Does this mean that we are going to see crappy 3D-game modellers making hi-poly objects and simply running them through a little wizard to "make 'em good for the game"? =)
Guess I'm just bitter for not working with CGI yet.
I'm not sure that's the problem. (Score:3, Interesting)
Bruce
Re:Proliferation of 3D Content on the Web? (Score:1, Interesting)
Slow connections: This technique is meant to help slow connections, but it may still be slow for them. They just need to realize that for about twice the cost (thinking of AOL here) you can have something like 500 times the speed.
Text browsers: It would be inserted content, and therefore text browsers wouldn't display it, just as they don't display images. Not a problem here.
Re:Useful, but over stated... (Score:2, Interesting)
I don't know how many triangles the models in movies have, but I find it hard to believe that all of them are 100% necessary - like with large programming projects, the focus tends to move more towards "don't worry about making it as efficient as possible, just make it look good/feel good/work". A well-designed polyreducer could probably do quite a number on those.
It has been done before. (Score:2, Interesting)
I wrote a polyreducer for a game I worked on. It would take as input a mesh, bone data, and an input texture map, crunch over them for a few minutes, and spit out a mesh with fewer triangles (and a new texture map). It would have been easy to make it spit out a bump map as well, except we were targeting PS2 and a bump map would have taken another rendering pass.
Quite effective. We stripped about 25% of the triangles out of most models. I kinda wish I'd gotten time to apply it to the world geometry too - especially if I could have snuck it in before the lighting step. That might have been tricky though.
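The core move in a polyreducer like the parent describes - finding geometry that contributes nothing visible and collapsing it - can be sketched in miniature. This is NOT the shipped algorithm (which also re-baked textures and handled bones); it is a heavily simplified coplanar edge-collapse I wrote for illustration, with my own names and tolerances:

```python
import math
from collections import defaultdict

def _normal(tri, verts):
    # Unit normal of a triangle given vertex indices into verts.
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (verts[i] for i in tri)
    nx = (by - ay) * (cz - az) - (bz - az) * (cy - ay)
    ny = (bz - az) * (cx - ax) - (bx - ax) * (cz - az)
    nz = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    l = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
    return (nx / l, ny / l, nz / l)

def _collapse_once(verts, faces, tol):
    # Find one interior vertex whose triangle fan is coplanar and merge it
    # into a neighbour; degenerate faces disappear in the process.
    edge_count = defaultdict(int)
    fan = defaultdict(list)
    for f in faces:
        for i in range(3):
            e = tuple(sorted((f[i], f[(i + 1) % 3])))
            edge_count[e] += 1
        for v in f:
            fan[v].append(f)
    for u, ring in fan.items():
        edges_at_u = [e for e in edge_count if u in e]
        if any(edge_count[e] != 2 for e in edges_at_u):
            continue  # boundary vertex: removing it would change the silhouette
        normals = [_normal(f, verts) for f in ring]
        n0 = normals[0]
        if all(sum(a * b for a, b in zip(n0, n)) > 1.0 - tol for n in normals):
            v = next(x for e in edges_at_u for x in e if x != u)
            new = [tuple(v if x == u else x for x in f) for f in faces]
            return [f for f in new if len(set(f)) == 3], True
    return faces, False

def reduce_mesh(verts, faces, tol=1e-6):
    # Repeatedly collapse until no flat interior vertex remains.
    faces = [tuple(f) for f in faces]
    changed = True
    while changed:
        faces, changed = _collapse_once(verts, faces, tol)
    return faces
```

On a perfectly flat 3x3 grid (8 triangles), the single interior vertex collapses away and the grid drops to 6 triangles with the outline untouched - the "invisible detail" the article's algorithm targets.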
One amusing side effect is that I end up looking at people's examples of their algorithms (like, say, ZBrush [209.132.69.82]) and just laughing. They're not doing *any* of the hard parts - they're getting as input the target mesh, they're guaranteed the high-detail mesh is a subdivided version of the target mesh, what are they doing to earn their $500? Mine would take the high-detail mesh only and do *everything* from there!
Maybe I should talk to my old boss and see if he'll let me reimplement the algorithm and sell it as a plugin . . .
Splines (Score:2, Interesting)
I can't imagine why else anyone would need a high density of vertices unless they were trying to represent a curved surface.
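The parent's point is easy to quantify: a curve approximated by straight segments needs a lot of vertices to stay within a small error, which a spline would express with a handful of control points. A minimal sketch using the sagitta of an inscribed regular polygon (function names are mine):

```python
import math

def ngon_error(r, n):
    # Max distance (sagitta) from a circle of radius r to the chords
    # of its inscribed regular n-gon.
    return r * (1.0 - math.cos(math.pi / n))

def segments_needed(r, eps):
    # Smallest n whose chord error stays below eps.
    n = 3
    while ngon_error(r, n) > eps:
        n += 1
    return n
```

Holding a unit circle to a 1e-4 error takes on the order of two hundred line segments - versus a handful of spline control points for an exact-enough curve - which is exactly where the "extra" vertices come from.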
Re:MP3 compression == complicated (Score:3, Interesting)
He's not just throwing out high-definition data either (which would be a poor compression algorithm). He's finding a seed point and then trying to build the largest flat surface that masks the underlying points, because they don't really give much detail anyway (not always true).
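The seed-and-grow idea can be illustrated on a heightfield: start at a seed cell and flood outward, absorbing neighbours that stay within a tolerance of the seed's height (a stand-in for "within the seed's plane"). This is a toy region-growing sketch under those assumptions, not the paper's actual method:

```python
from collections import deque

def grow_flat_patch(height, seed, tol=0.05):
    # BFS flood fill: absorb 4-connected grid cells whose height stays
    # within tol of the seed cell's height.
    rows, cols = len(height), len(height[0])
    base = height[seed[0]][seed[1]]
    patch, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in patch
                    and abs(height[nr][nc] - base) <= tol):
                patch.add((nr, nc))
                queue.append((nr, nc))
    return patch
```

On a flat 5x5 field with one raised corner, the patch swallows every flat cell and stops at the bump - the bump is the detail that "really does give detail" and survives, which is the parent's caveat in (not always true).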