Members - Reputation: 127
Posted 23 April 2012 - 05:25 AM
I have seen impressive mesh optimization methods, but I don't want to optimize the mesh so much as extract part of it. What I want is a way to approximate the 'inside surface' of a mesh, since in the 'real world' this is the surface that would physically interact with whatever the mesh is being deformed over.
Take the images below; the second mesh contains no overlapping polygons - the lapels, shoulder straps and buttons are gone - it is a single surface consisting of the points closest to the character.
(Checking for and removing overlapping polygons would be one approach, I suppose, but how do you decide which polygons are 'outer' and which are 'inner', bearing in mind that the normals of the semantically inner polygons won't necessarily emanate from the geometric centre of the mesh?)
Does anyone know of an existing implementation that does something like this?
Members - Reputation: 1377
Posted 23 April 2012 - 08:56 AM
Usually the meshes to be displayed are modelled optimally by an artist. Removing the pieces you describe might be partly automatable, but in practice it is usually done with the aid of a 3D modelling package, then compiled/preprocessed/exported from that package into game- or simulation-ready media. So one possibility is to keep a human in the loop. I guess that's what you did above? I can imagine a workflow like this:
1) Draw all the meshes as shaded using additive blending...where there are overlapping meshes the image will appear brighter
2) 3D modeling human-in-the-loop user clicks one of the bright spots where it is clear there are overlapping meshes
3) Code ray-traces through the meshes until the ray hits the underlying character mesh. Sort the intersection results and tag every mesh *except* the one closest to the character mesh as "delete me!". Perhaps colour these differently and redraw to give the user visual feedback.
4) Repeat until all bright areas are handled.
5) Hit a button to delete all meshes tagged "delete me!"
So, human in the loop, but reasonably straightforward to do. Full automation is going to be pretty challenging, but you might use the above process as a starting point. The next step could be to build automation that tags the bright areas itself, while keeping a human in the loop to validate or undo the automated tagging before the meshes are deleted.
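A rough sketch of the core of step 3, assuming the scene has been flattened to a list of triangles each tagged with the id of the mesh it came from (the names `ray_triangle` and `tag_overlaps` are mine, not from any library):

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    # Moller-Trumbore ray/triangle intersection; returns ray parameter t,
    # or None on a miss.
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None  # ray parallel to triangle plane
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

def tag_overlaps(origin, direction, tris):
    # tris: list of (mesh_id, v0, v1, v2) for the clothing layers only.
    # The ray points inward toward the character, so the mesh whose hit
    # lies deepest along the ray is the one closest to the character;
    # every other hit mesh gets tagged for deletion.
    hits = {}
    for mesh_id, v0, v1, v2 in tris:
        t = ray_triangle(origin, direction, v0, v1, v2)
        if t is not None:
            hits[mesh_id] = max(hits.get(mesh_id, -np.inf), t)
    if not hits:
        return set()
    keep = max(hits, key=hits.get)
    return set(hits) - {keep}
```

The sort-and-keep-one logic is the interesting part; in a real tool you'd feed it the ray from the user's click in step 2 and stop the ray at the character mesh.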
Members - Reputation: 127
Posted 28 April 2012 - 05:17 AM
First, sorry for the late reply, I am starting to wonder if I am completely misunderstanding the "Follow This Topic" button!
To clarify, the first image is the 'detailed mesh' the second is the 'physical mesh'. The 'physical mesh' is literally the detailed mesh with overlapping polygons removed (and in this example, it was manual). This may require some explanation:
In my project, I am working on automatic mesh deformation whereby my algorithm fits one mesh over another. To do this, I reduce the target mesh to a simplified 'physical mesh' and check for collisions with a 'face cloud'. The 'face cloud' consists of the baked faces of every mesh making up the model(s) that the target mesh should deform to fit. (The target mesh when done will completely encompass the face cloud.)
For each point in the 'physical mesh', I project a ray and test for intersections with the face cloud, find the furthest intersection, then move that control point to its position.
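Roughly, the projection step looks like this (a Python sketch with names of my own invention; the face cloud is reduced to a plain list of triangle-vertex tuples):

```python
import numpy as np

def hit_t(o, d, v0, v1, v2, eps=1e-9):
    # Moller-Trumbore ray/triangle test; returns ray parameter t, or None.
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None
    s, inv = o - v0, 1.0 / det
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

def snap_to_face_cloud(point, direction, face_cloud):
    # Test the ray against every baked face, keep the furthest hit,
    # and move the control point there; leave it unchanged on a miss.
    ts = [t for tri in face_cloud
          if (t := hit_t(point, direction, *tri)) is not None]
    return point + max(ts) * direction if ts else point
```

Taking the *furthest* hit is what makes the target mesh end up outside the whole face cloud rather than snapping to the first surface the ray crosses.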
Before this is done, I 'skin' my detailed mesh to the 'physical mesh': for each point in the detailed mesh (regardless of position, normal, etc.) I find the closest four points in the 'physical mesh' and weight the point to each of them (each weight being that point's share of the summed distances). The result is that when the 'physical mesh' is deformed, each point in the 'detailed mesh' deforms linearly with it.
The purpose of this is to preserve features such as overlapping edges, buttons, etc. With these, the normals of each point cannot be relied upon to determine which side of the surface the point lies on, hence the need for a control mesh.
What I am attempting to create in the 'physical mesh' is simply a single surface where all the points' normals accurately describe that surface.
So far, I do this by using the skinning data to calculate a 'roaming' centre of mass for each point, which is the average position of that point plus all others sharing the same bones. Any point whose normal is contrary to (Point Position - Centre of Mass for that point) is culled. (It is still deformed correctly, though, because it is skinned to the surrounding points, which are not culled.)
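That culling test can be sketched as follows (per-point bone-index sets stand in for the real skinning data, and the names are my own):

```python
import numpy as np

def cull_inward(points, normals, bones):
    # bones[i] is the set of bone indices influencing point i.
    # The 'roaming' centre of mass for point i is the mean of every
    # point sharing at least one bone with it (including itself).
    # A point survives only if its normal agrees with the outward
    # direction (point - centre); inward-facing points are culled.
    keep = []
    for i in range(len(points)):
        shared = [j for j in range(len(points)) if bones[i] & bones[j]]
        centre = points[shared].mean(axis=0)
        if np.dot(normals[i], points[i] - centre) >= 0.0:
            keep.append(i)
    return keep
```

Using a per-bone centre rather than the mesh's single geometric centre is what lets this work on limbs and other parts far from the overall centroid.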
This whole setup is designed for user-generated content, which is why I can't do what normal sensible people do and just have artists build a collision mesh in Max; it is also why I cannot make any assumptions about the target mesh*.
*Well, I can make some assumptions: for one, I can assume it is skinned, and that the mesh it is deforming to fit is also skinned. Since I started using the skinning data, the performance (quality of results) has increased dramatically.
For more complex meshes, though, I still need a better solution, as this won't cull two points that sit very close together, one outside the collision mesh and one inside (so when deformed, the features are crushed, because only one of them pulls its skinned verts out).
Your idea of ray tracing to find overlapping polys sounds very promising; I will look into it. Thanks!