

Member Since 10 Nov 2005
Online Last Active Today, 06:22 AM

Posts I've Made

In Topic: Shape interpolation?

19 June 2016 - 12:11 PM

I was doing some research on an unrelated topic and happened to come across this. I haven't had a closer look at it, so it may be something trivial, but the exe does do 3D morphing between similar objects with identical vertex counts loaded from obj files, and it purportedly comes with source and an article included. The project is from 1998, although that isn't necessarily a strike against it. It was the year Half-Life was released, after all  :ph34r: .

In Topic: what was the first video game?

11 June 2016 - 07:22 AM

A must-watch for any video game enthusiast:


In Topic: Shape interpolation?

07 June 2016 - 11:12 AM

Sorry, this isn't a visual effect, so I can't use any hacks. It's a general application of mesh processing. It has to be a real morph.

I'm trying to morph two humanoid figures, so they do share similar features. I was thinking about projecting both meshes onto a unit sphere, but I'm not sure how to match up vertices where the triangles overlap (caused by cavities around the nose, mouth, ears, fingers, etc.). Overlapping triangles would give multiple vertex matches.

Is it really necessary to map each mesh to a domain where triangles don't overlap?


If you need to have access to the mesh at any given moment of time, then your best bet is to use mesh simplification to come up with a one-to-one vertex mapping and morph from one to the other. 
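Once that one-to-one vertex mapping exists, the morph itself is just a per-vertex lerp. A minimal sketch (the vertex lists are made-up example data, not from any real mesh):

```python
def morph(source, target, t):
    """Linearly interpolate matched vertex pairs; t in [0, 1]."""
    return [
        tuple(a + (b - a) * t for a, b in zip(va, vb))
        for va, vb in zip(source, target)
    ]

# two tiny "meshes" with a one-to-one vertex correspondence
source = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
target = [(0.0, 2.0, 0.0), (1.0, 2.0, 2.0)]

print(morph(source, target, 0.5))  # halfway between the two shapes
```

At t = 0 you get the source shape back, at t = 1 the target; everything in between is the morph.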


One thing you might look into is unwrapping. As in texture unwrapping, but adapted for vertices. It'd be sort of similar to your attempt to map your vertices to a sphere (which you can't do with something as complex as a humanoid). Instead, what might work is subdividing the mesh, "UV-unwrapping" the vertex data to 2D space and performing a regular morph there. You could then use the position difference of a "vertex pixel" as a lerp factor during the morph to calculate its object space position between the source and target meshes.
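To make the unwrapping idea concrete, here's a hedged sketch: assume both meshes have already been UV-unwrapped so every vertex carries a 2D parameter coordinate. Correspondence is then found in UV space (nearest neighbour here, purely for brevity, a real implementation would resample on a grid) and the morph lerps between the matched 3D positions. All data below is invented:

```python
def match_in_uv(src, dst):
    """src/dst: lists of (uv, xyz) pairs. Pair each source vertex with
    the destination vertex whose UV coordinate is closest."""
    pairs = []
    for uv_a, xyz_a in src:
        _, xyz_b = min(
            dst,
            key=lambda v: (v[0][0] - uv_a[0]) ** 2 + (v[0][1] - uv_a[1]) ** 2,
        )
        pairs.append((xyz_a, xyz_b))
    return pairs

def morph(pairs, t):
    """Lerp each matched 3D position pair by t."""
    return [tuple(a + (b - a) * t for a, b in zip(xa, xb)) for xa, xb in pairs]

src = [((0.0, 0.0), (0.0, 0.0, 0.0)), ((1.0, 0.0), (2.0, 0.0, 0.0))]
dst = [((0.05, 0.0), (0.0, 1.0, 0.0)), ((0.9, 0.1), (2.0, 1.0, 1.0))]
print(morph(match_in_uv(src, dst), 0.5))
```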


If you do pull this off, please let me know how it worked out :).

In Topic: Identifying polygons of one mesh inside another mesh?

06 June 2016 - 10:43 AM

Generally you'd need to do a bunch of vertex-in-mesh tests, which can be of wildly different cost (although not necessarily complexity) depending on what kind of geometry you're dealing with (eg whether your geometry is concave or convex). If I had to do what you outlined myself and I was using something like PhysX, I'd generate low resolution collision meshes that are "good enough" and let the physics API do the tests for me (hopefully on the GPU). PhysX returns points of intersection, which you could then use to extrapolate the faces to conduct more thorough tests. Alternatively you could roll your own BVH of convex collision meshes to quickly approximate which faces are likely the offending ones and then do more elaborate tests.
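For reference, the point-in-mesh test itself is usually done by casting a ray from the point and counting triangle crossings (odd count means inside, for a closed mesh). A minimal sketch using Möller–Trumbore ray/triangle intersection, with a tetrahedron as made-up test geometry:

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ray_hits_tri(orig, d, a, b, c, eps=1e-9):
    """Moeller-Trumbore ray/triangle intersection test (hit with t > 0)."""
    e1, e2 = sub(b, a), sub(c, a)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return False              # ray parallel to the triangle's plane
    inv = 1.0 / det
    s = sub(orig, a)
    u = dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = cross(s, e1)
    v = dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return dot(e2, q) * inv > eps # hit only if in front of the origin

def point_in_mesh(point, verts, faces, d=(1.0, 0.0, 0.0)):
    """Odd number of ray crossings => inside (closed mesh assumed).
    A randomized ray direction would be safer against grazing edges."""
    hits = sum(ray_hits_tri(point, d, *(verts[i] for i in f)) for f in faces)
    return hits % 2 == 1

# a tetrahedron as a closed triangle mesh
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(point_in_mesh((0.1, 0.1, 0.1), verts, faces))  # → True (inside)
print(point_in_mesh((1.0, 1.0, 1.0), verts, faces))  # → False (outside)
```

The BVH mentioned above would sit in front of this to prune which triangles each ray even needs to test.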


That being said, if I understand your description correctly, what you mean by clipping is essentially Z-fighting. In that case (and I don't know the first thing about the format you're working with), have you tried good old polygon offsetting or simply scaling up the clothing mesh by a small margin? :) If this is the case, then removing some faces will not fix your problem, because your clothing and body geometries probably do not have perfectly overlapping faces, which would either generate holes in your body mesh or leave faces that still partially Z-fight the clothing.
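The "scale the clothing up a hair" workaround can be as simple as pushing every vertex slightly away from the mesh centroid. A hedged sketch with made-up data (real clothing would be better inflated along per-vertex normals, but this shows the idea):

```python
def inflate(verts, factor=1.02):
    """Scale every vertex away from the mesh centroid by `factor`."""
    n = len(verts)
    cx = sum(v[0] for v in verts) / n
    cy = sum(v[1] for v in verts) / n
    cz = sum(v[2] for v in verts) / n
    return [
        (cx + (x - cx) * factor, cy + (y - cy) * factor, cz + (z - cz) * factor)
        for x, y, z in verts
    ]

# hypothetical clothing vertices centred on the origin
shirt = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, -1.0, 0.0)]
print(inflate(shirt))
```

A factor of a percent or two is usually enough to clear the depth fighting without visibly changing the silhouette.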


Depending on a couple of factors, such as the type of clothing and the hardware you're targeting, a somewhat more general approach would be to render your character model in two passes: first the base mesh, then the clothing mesh with either depth testing disabled or, if reading from the depth buffer is not available, with a depth bias emulated in the shader (eg if what your favorite API provides is not sufficient).

In Topic: Shape interpolation?

06 June 2016 - 12:22 AM

Are you absolutely sure you need to do this with actual geometry? How complex is the topology?


Morphing in 2D is a lot easier. If you can get away with a screen space transform, you'd be much better off doing that and crossfading the final few frames to fully transition to the target mesh. Morphing still requires identification of features as a preparation step, so you'd have to either extract a silhouette or dominant features (eg the eyes, nose, mouth, etc in the case of a monster) via projection. That being said, I can't imagine morphing alone being viable if your transition is slow, if there are substantial changes in lighting, or if the shapes in question are wildly different and/or complex.
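The crossfade part of this is trivial on its own. A minimal sketch with tiny made-up grayscale "images" as row lists (a real implementation would do this in the compositor or with hardware alpha blending):

```python
def crossfade(img_a, img_b, alpha):
    """Per-pixel blend: alpha=0 shows img_a, alpha=1 shows img_b."""
    return [
        [pa * (1.0 - alpha) + pb * alpha for pa, pb in zip(ra, rb)]
        for ra, rb in zip(img_a, img_b)
    ]

frame_src = [[0.0, 0.0], [1.0, 1.0]]  # last frame of the source render
frame_dst = [[1.0, 1.0], [0.0, 0.0]]  # first frame of the target render
print(crossfade(frame_src, frame_dst, 0.25))
```

Ramping alpha from 0 to 1 over the last few frames hides the switch from one mesh to the other.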


By the way, I wouldn't discount some form of a screen space/texturing cheat (pdf, ~4.5 MB) as inferior, in particular if it is combined with actual topological changes.




Does anyone know of a 3D modelling program or open-source library that can take two arbitrarily shaped 3D meshes, that don't necessarily have the same vertex counts, and transform one mesh into the other, without adding or removing vertices?


This is impossible without creating very fine-tuned simplifications of both the source and target meshes. Matching their energy (pdf file, ~2 MB) is not necessarily trivial and, as far as I know, targeting a specific number of faces can be done, but may result in either a very slow simplification step or an unevenly simplified mesh.
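To illustrate the simplification direction: a toy simplifier that repeatedly collapses the shortest edge to its midpoint until a target face count is reached. A real simplifier would drive the collapse order with an error metric (quadric error, say) rather than raw edge length; the tetrahedron below is just example data:

```python
def simplify(verts, faces, target_faces):
    """Collapse shortest edges until len(faces) <= target_faces."""
    verts = list(verts)
    faces = [list(f) for f in faces]
    while len(faces) > target_faces:
        # find the shortest edge over all faces
        best = None
        for f in faces:
            for i in range(3):
                a, b = f[i], f[(i + 1) % 3]
                d = sum((verts[a][k] - verts[b][k]) ** 2 for k in range(3))
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        # collapse b into a at the edge midpoint
        verts[a] = tuple((verts[a][k] + verts[b][k]) / 2.0 for k in range(3))
        faces = [[a if v == b else v for v in f] for f in faces]
        # drop faces that degenerated to a line or point
        faces = [f for f in faces if len(set(f)) == 3]
    return verts, faces

tetra_verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
tetra_faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
new_verts, new_faces = simplify(tetra_verts, tetra_faces, 2)
print(len(new_faces))  # → 2
```

Running this independently on both meshes down to the same vertex budget is the (fiddly) part that precedes any matching step.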



As an alternative option you might consider a skeletal approach, which relies on a small number of intermediate ("shared") bind pose configurations that are easy to blend between and easy to transition to.


Also consider glitch effects to hide the change. Things like sucking the source mesh into a point while growing the target mesh. When it comes to transitions, it's all about speed and collateral fidelity. Can you add smoke or dust to hide the transition? Can you put the meshes in motion to disguise the switch? Some screen shake? The transformers in The Transformers don't really have the amount of gadgetry you can see on screen - they just use perspective, motion and the environment to hide the fact that almost none of the morphing process makes any sense.