3D Mesh Simplification & Decimation



Hello, I recently started tackling the various mesh decimation & simplification algorithms, mostly from Hoppe & Melax. The algorithms in question are quadric error metrics (QEM) mesh simplification and progressive mesh simplification. I found a few good GitHub starting points (VTK, Hoppe's and Melax's repositories, etc.) to try and test the various implementations, but I ran into some problems:

1. I tried to simplify a few of my own meshes and a few provided inside the code bases, and I noticed that not all meshes survive the same amount of simplification. For example, a 400-vertex robot mesh can be simplified down to 10-20 vertices (about 3-7% of the source count) without any missing faces or serious topology distortion, yet a 3000-vertex eagle mesh can only be reduced to 1900-2000 vertices (about 60-70% of the source count) before the mesh starts to lose faces, which leaves it with a quite noticeable number of holes. I observed this with quite a few models, both mine and the provided ones. I would like to know what actually stops some meshes from being simplified as far as others. I am probably asking a rather obvious question.
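To make question 1 concrete, here is a minimal sketch of the Garland-Heckbert quadric error metric in pure Python (the names and data layout are my own, not taken from any of the repositories mentioned above). Each face contributes a plane quadric Q = p p^T for its plane p = (a, b, c, d); the cost of moving a vertex to position v is v^T Q v. A vertex in a flat neighborhood can move along the shared plane at near-zero cost, while a vertex on a high-curvature feature is expensive to move anywhere, which is one reason detailed meshes start losing faces much sooner under the same target count:

```python
# Sketch of the Garland-Heckbert quadric error metric (QEM).
# Triangles are assumed to be given as (x, y, z) vertex tuples.

def face_plane(v0, v1, v2):
    # Plane (a, b, c, d) of the triangle with unit normal: ax + by + cz + d = 0.
    ux, uy, uz = (v1[i] - v0[i] for i in range(3))
    wx, wy, wz = (v2[i] - v0[i] for i in range(3))
    nx, ny, nz = uy * wz - uz * wy, uz * wx - ux * wz, ux * wy - uy * wx
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    nx, ny, nz = nx / length, ny / length, nz / length
    d = -(nx * v0[0] + ny * v0[1] + nz * v0[2])
    return (nx, ny, nz, d)

def plane_quadric(p):
    # Outer product p p^T as a 4x4 row-major matrix.
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add_quadrics(a, b):
    # Per-vertex quadrics are the sum of the quadrics of adjacent faces.
    return [[a[i][j] + b[i][j] for j in range(4)] for i in range(4)]

def quadric_error(Q, v):
    # v^T Q v with homogeneous v = (x, y, z, 1): squared distance to the plane(s).
    h = (v[0], v[1], v[2], 1.0)
    return sum(h[i] * Q[i][j] * h[j] for i in range(4) for j in range(4))
```

Evaluating `quadric_error` for candidate collapse positions and always collapsing the cheapest edge is the core loop of QEM; the eagle simply runs out of cheap edges long before the robot does.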

2. Even if the simplification were perfect, I still have difficulty figuring out how I would preserve the vertex appearance attributes (UV coordinates, tangents, normals, etc.). I read a few papers, one in particular from Hoppe: New Quadric Metric for Simplifying Meshes with Appearance Attributes. But I did not find any real-world examples, or at least code snippets, to give me a better grasp of the technique.
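For what it's worth, a much simpler (and lossier) approach than Hoppe's attribute quadric is to just interpolate the attributes along the collapsed edge and renormalize the normal afterwards. A rough sketch, with a made-up per-vertex layout (this is plain interpolation, not the paper's method):

```python
# Simple attribute handling during an edge collapse: place the merged
# vertex at parameter t along the edge and lerp uv/normal to match.
# The {'pos', 'uv', 'normal'} dict layout is hypothetical.

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def collapse_attributes(va, vb, t=0.5):
    pos = lerp(va['pos'], vb['pos'], t)
    uv = lerp(va['uv'], vb['uv'], t)
    n = lerp(va['normal'], vb['normal'], t)
    length = sum(c * c for c in n) ** 0.5
    normal = tuple(c / length for c in n)  # renormalize after the lerp
    return {'pos': pos, 'uv': uv, 'normal': normal}
```

In practice you also have to forbid (or heavily penalize) collapses across UV seams and hard-normal edges, where the attributes are discontinuous and interpolation produces visible artifacts.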


Furthermore, what has been bugging me lately is techniques for dynamically rendering different LOD levels when walking over a large terrain mesh (a non-regular grid, NOT generated from a height map, noise, 2D data, etc., so clipmapping and similar techniques go in the bin). Take CS:GO, or League of Legends as perhaps the more appropriate example here. I am not sure they do exactly this, but presumably their maps are mostly hand-crafted: trees, boxes, walls, and buildings that are not interactive or destructible (i.e. static) are probably modeled and embedded along with the rest of the map and terrain. This can potentially allow fewer draw calls and a much richer environment. But then again, how would one optimize such a huge rendering step, where the entire map/terrain model may consist of at least 400-500k triangles?
Or is it just a flat plane, with everything else rendered as separate entities on top (instanced where possible, although given that most objects are quite unique, instancing will be limited)?
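A common way to handle such a large hand-crafted map (I can't speak for those specific games) is to split it into spatial chunks, bake a few simplified LOD meshes per chunk offline, and pick one per chunk per frame by camera distance, so only nearby chunks pay full triangle cost. A hedged sketch; the switch distances are invented for illustration:

```python
# Per-chunk discrete LOD selection by camera distance.
# LOD 0 is the full-detail mesh; higher indices are coarser bakes.

LOD_DISTANCES = [50.0, 150.0, 400.0]  # hypothetical switch distances (world units)

def select_lod(chunk_center, camera_pos):
    d = sum((a - b) ** 2 for a, b in zip(chunk_center, camera_pos)) ** 0.5
    for lod, limit in enumerate(LOD_DISTANCES):
        if d < limit:
            return lod
    return len(LOD_DISTANCES)  # coarsest level beyond the last threshold
```

With one draw call per visible chunk (plus frustum culling on the chunk bounds), the 400-500k triangle budget only applies to the handful of chunks near the camera.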



No answers, but some input...


An interesting remeshing project, mainly due to its performance and simplicity.

Remeshing and LODs are closely related. You should also try Simplygon, which offers both options. Its remeshing can fuse multiple objects and materials. Think of fusing terrain and individual rock models: in combination with megatexturing, you can turn geometry into texture texels at a distance. Another interesting use case for remeshing is geometry images and displacement mapping (although I'm not impressed by current hardware displacement mapping, the idea persists).

There is also the idea of morphing between fixed precalculated LOD levels. So instead of traversing something like a tree of collapsing edges, each LOD level has a morphing version with the vertex count of the higher-detail level, but which also knows the positions of the lower-detail level to allow a seamless transition (much more hardware friendly).
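That morphing scheme (often called geomorphing) can be sketched in a few lines; this is my own illustration, assuming each high-detail vertex also stores the position it maps to in the coarser level:

```python
# Geomorphing between two fixed LOD levels: each high-detail vertex
# lerps toward its precomputed position in the coarser level as the
# transition factor t goes from 0 to 1.  In a real renderer this runs
# in the vertex shader; it is done on the CPU here for illustration.

def geomorph(vertices_hi, coarse_targets, t):
    # t = 0: full detail; t = 1: every vertex sits on the coarse mesh,
    # so switching to the low LOD at that moment causes no visible pop.
    return [tuple(h + (c - h) * t for h, c in zip(vh, vc))
            for vh, vc in zip(vertices_hi, coarse_targets)]
```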

For high-frequency/detailed geometry (e.g. vegetation), none of the above can do wonders, but things like volume rendering, splatting, or automatic generation of billboards make sense.


Personally, I think the main issue is that we lack robust algorithms that can 'understand' geometry. Just collapsing edges is pretty naive if you think about it. The most interesting current approaches are in the field of quad remeshing (e.g. 'QuadCover' or 'Mixed Integer Quadrangulation') or the automatic generation of polycubes. Unlike the simple project linked above, those methods try to find a global (likely minimal) set of singularities on the surface, and from that you could calculate things like the minimal possible LOD or a seamless UV parametrization (the latter is my current interest).


There is also related research on texture mapping: http://www.cemyuksel.com/courses/conferences/siggraph2017-rethinking_texture_mapping/rethinking_texture_mapping_course_notes.pdf

E.g. you can use volume-encoded UV maps to decouple mesh LODs from texture coordinates.

Edited by JoeJ


