sevenfold1

Shape interpolation?


Does anyone know of a 3D modelling program or open-source library that can take two arbitrarily shaped 3D meshes that don't necessarily have the same vertex counts and transform one mesh into the other, without adding or removing vertices?

I would not specifically call it mesh morphing, since I think that requires a 1:1 mapping of vertices, although it's related. In any case, I would like to find something that works rather than try to write it myself; the math involved may be over my head.

I would like to morph one mesh into another, but before I can do that, I need to create some kind of mapping that says this vertex morphs into that vertex, and so forth.

Two completely arbitrary meshes cannot be smoothly interpolated between. The minimum common attribute is the topology (number of holes and/or open edges). When you do have the same topology, you can try to parametrize the surfaces in 2 dimensions to establish the common mapping between them. After you have that, it is relatively easy to actually interpolate between the geometries.
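To make that last step concrete, here is a minimal sketch (in Python, with made-up example vertex lists) of what the interpolation looks like once a common parametrization has produced a 1:1 vertex correspondence: it is just per-vertex linear blending.

```python
def lerp_meshes(src_verts, dst_verts, t):
    """Blend two vertex lists that are already in 1:1 correspondence.

    src_verts and dst_verts are equal-length lists of (x, y, z) tuples;
    t = 0 returns the source positions, t = 1 the target positions.
    """
    assert len(src_verts) == len(dst_verts)
    return [
        tuple(a + t * (b - a) for a, b in zip(v0, v1))
        for v0, v1 in zip(src_verts, dst_verts)
    ]

# Hypothetical example data: two tiny "meshes" of two vertices each.
src = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
dst = [(0.0, 1.0, 0.0), (1.0, 1.0, 1.0)]
halfway = lerp_meshes(src, dst, 0.5)  # vertices midway between the two
```

All of the difficulty lives in establishing the correspondence; once it exists, the blend itself is trivial.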

Thanks, I think spherical parameterization might help here, but I need to research it a bit more. I might have problems with overlapping faces, and I think if the points get smoothed out, the similarities between the two meshes will be lost.

If I have to smooth out the points, I would also like to preserve symmetry. Is it possible to pin two points (say, the poles) to preserve symmetry, or at least pin one point to maintain a point of reference?


Are you absolutely sure you need to do this with actual geometry? How complex is the topology?

 

Morphing in 2D is a lot easier. If you can get away with a screen-space transform, you'd be much better off doing that and crossfading the final few frames to fully transition to the target mesh. Morphing still requires identifying features as a preparation step, so you'd have to extract either a silhouette or dominant features (e.g. the eyes, nose, and mouth in the case of a monster) via projection. That being said, I can't imagine morphing alone being viable if your transition is slow and there are substantial changes in lighting, or if the shapes in question are wildly different and/or complex.

 

By the way, I wouldn't discount some form of a screen-space/texturing cheat (pdf, ~4.5 MB) as inferior, particularly if it is combined with actual topological changes.


> Does anyone know of a 3D modelling program or open-source library that can take two arbitrarily shaped 3D meshes that don't necessarily have the same vertex counts and transform one mesh into the other, without adding or removing vertices?

 

This is impossible without creating very fine-tuned simplifications of both the source and target meshes. Matching their energy (pdf file, ~2 MB) is not necessarily trivial, and as far as I know targeting a specific number of faces can be done, but may result in either a very slow simplification step or an unevenly simplified mesh.


As an alternative option you might consider a skeletal approach, which relies on a small number of intermediate ("shared") bind pose configurations that are easy to blend between and easy to transition to.

 

Also consider glitch effects to hide the change. Things like sucking the source mesh into a point while growing the target mesh. When it comes to transitions, it's all about speed and collateral fidelity. Can you add smoke or dust to hide the transition? Can you put the meshes in motion to disguise the switch? Some screen shake? The transformers in The Transformers don't really have the amount of gadgetry you can see on screen - they just use perspective, motion and the environment to hide the fact that almost none of the morphing process makes any sense.

Sorry, this isn't a visual effect, so I can't use any hacks. It's a general application of mesh processing. It has to be a real morph.

I'm trying to morph two humanoid figures, so they do share similar features. I was thinking about projecting both meshes to a unit sphere, but I'm not sure how to match up vertices where the triangles overlap (caused by cavities around the nose, mouth, ears, fingers, etc.). Overlapping triangles would give multiple vertex matches.

Is it really necessary to map each mesh to a domain where triangles don't overlap?
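For what it's worth, the projection step itself is simple; a sketch (pure Python, example data invented) that sends each vertex radially onto the unit sphere around the mesh centroid might look like this. The catch is exactly the one raised above: the map is only injective if the mesh is star-shaped with respect to that centre, so cavities (nose, ears, fingers) fold over and produce overlapping triangles.

```python
import math

def project_to_unit_sphere(verts):
    """Project each (x, y, z) vertex radially onto the unit sphere
    centred at the vertex centroid. Only injective for meshes that
    are star-shaped with respect to that centre."""
    n = len(verts)
    cx = sum(v[0] for v in verts) / n
    cy = sum(v[1] for v in verts) / n
    cz = sum(v[2] for v in verts) / n
    projected = []
    for x, y, z in verts:
        dx, dy, dz = x - cx, y - cy, z - cz
        r = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0  # guard r == 0
        projected.append((dx / r, dy / r, dz / r))
    return projected

# Hypothetical example: four vertices placed around the origin.
points = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0),
          (0.0, 2.0, 0.0), (0.0, -2.0, 0.0)]
on_sphere = project_to_unit_sphere(points)
```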


> Sorry, this isn't a visual effect, so I can't use any hacks. It's a general application of mesh processing. It has to be a real morph.
>
> I'm trying to morph two humanoid figures, so they do share similar features. I was thinking about projecting both meshes to a unit sphere, but I'm not sure how to match up vertices where the triangles overlap (caused by cavities around the nose, mouth, ears, fingers, etc.). Overlapping triangles would give multiple vertex matches.
>
> Is it really necessary to map each mesh to a domain where triangles don't overlap?

 

If you need access to the mesh at any given moment, then your best bet is to use mesh simplification to come up with a one-to-one vertex mapping and morph from one to the other.

 

One thing you might look into is unwrapping: as in texture unwrapping, but adapted for vertices. It'd be somewhat similar to your attempt to map your vertices to a sphere (which you can't do with something as complex as a humanoid). Instead, what might work is subdividing the mesh, "UV-unwrapping" the vertex space into 2D, and performing a regular morph there. You could then use the position difference of a "vertex pixel" as a lerp factor during the morph to calculate its object-space position between the source and target meshes.
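A rough sketch of the correspondence step under that scheme (Python; the UV lists are invented placeholders, and a real implementation would rasterize or use a spatial index rather than brute force): once both meshes are unwrapped into the same 2D domain, each source vertex can simply be matched to the nearest target vertex in UV space.

```python
def correspond_via_uv(uv_src, uv_dst):
    """For each source (u, v) coordinate, return the index of the
    nearest target (u, v) coordinate. Brute-force O(n*m); a k-d tree
    would be the usual choice for real mesh sizes."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return [
        min(range(len(uv_dst)), key=lambda j: dist2(uv, uv_dst[j]))
        for uv in uv_src
    ]

# Hypothetical unwrapped coordinates for two tiny meshes.
uv_src = [(0.1, 0.1), (0.9, 0.9)]
uv_dst = [(0.0, 0.0), (1.0, 1.0)]
mapping = correspond_via_uv(uv_src, uv_dst)  # source vertex i -> target index
```

The resulting index list is exactly the vertex correspondence needed before a per-vertex morph can run.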

 

If you do pull this off, please let me know how it worked out :).


I was doing some research on an unrelated topic and happened to come across this. I haven't had a closer look at it, so it may be something trivial, but the exe does do 3D morphing between similar objects with identical vertex counts loaded from .obj files, and it purportedly comes with source code and an article included. The project is from 1998, although that needn't be held against it. It was the year Half-Life was released, after all :ph34r:.


> I was doing some research on an unrelated topic and happened to come across this. I haven't had a closer look at it, so it may be something trivial, but the exe does do 3D morphing between similar objects with identical vertex counts loaded from .obj files, and it purportedly comes with source code and an article included. The project is from 1998, although that needn't be held against it. It was the year Half-Life was released, after all :ph34r:.

From the docs:
"For this technique to work, the two models you are morphing between must have identical vertex counts, and the vertices must correspond to each other."

Thanks, but my main issue is that I need to first establish a correspondence between two different sets of vertices. After that, everything becomes simple.
