Hey, I'm prototyping a face editor where a "generic" mesh can be tweaked to resemble someone's face. I have a database of facial ratios for a number of people, and the idea is that I should be able to input these values into the system and have it return a face that resembles that person.
I've started off simple by having meshes for the extremes and lerping between them; e.g. I have two meshes, thinnest nose and widest nose, I know what value each mesh represents, and I lerp between them using the value from the database. I'm a bit concerned whether this approach will hold up as the feature set grows. Say a vertex on the cheek is affected by multiple features: would an accumulated blend between all the blend shapes still look good? Performance is not an issue since the results can be baked. I would also like to know how this will affect facial animation.
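For reference, the current approach amounts to a straight per-feature lerp between two extreme meshes. A minimal sketch, assuming NumPy arrays of vertex positions (the mesh data and names here are made-up placeholders):

```python
import numpy as np

# Hypothetical tiny mesh: three vertices, (x, y, z) each.
base = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])

# Two authored extremes for one feature (only vertex 0 differs here).
thin_nose = base + np.array([[-0.1, 0.0, 0.0],
                             [ 0.0, 0.0, 0.0],
                             [ 0.0, 0.0, 0.0]])
wide_nose = base + np.array([[ 0.2, 0.0, 0.0],
                             [ 0.0, 0.0, 0.0],
                             [ 0.0, 0.0, 0.0]])

def lerp_feature(mesh_a, mesh_b, t):
    """Linear interpolation between two extreme meshes; t in [0, 1]."""
    return (1.0 - t) * mesh_a + t * mesh_b

# t = 0.5 gives the halfway nose width.
result = lerp_feature(thin_nose, wide_nose, 0.5)
```

The trouble the question anticipates starts when several such lerps want to move the same vertex, since each lerp is expressed against absolute positions rather than offsets.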
Has anybody faced a similar problem? How did you solve it? Are there any resources where I can read up? I'm not entirely sure what terms to google.
The way this is typically done in commercial products is to store the deltas of the morph targets relative to the starting point. That way, interpolation doesn't end up messing up the results, as everything is concatenative instead.
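A minimal sketch of that delta scheme, again assuming NumPy arrays for vertex positions (function and target names are hypothetical): each target is baked to an offset from the base, and applying morphs is a weighted sum of offsets, so overlapping features add up instead of fighting over absolute positions.

```python
import numpy as np

def bake_deltas(base, targets):
    """Store each morph target as an offset from the base mesh."""
    return {name: target - base for name, target in targets.items()}

def apply_morphs(base, deltas, weights):
    """Additive blend: each weighted delta is added onto the base,
    so morphs touching the same vertex concatenate."""
    result = base.copy()
    for name, w in weights.items():
        result += w * deltas[name]
    return result

# Hypothetical two-vertex example with two overlapping-ish targets.
base = np.zeros((2, 3))
targets = {
    "wide_nose":  base + np.array([[0.2, 0.0, 0.0], [0.0, 0.0, 0.0]]),
    "high_cheek": base + np.array([[0.0, 0.0, 0.0], [0.0, 0.1, 0.0]]),
}
deltas = bake_deltas(base, targets)
blended = apply_morphs(base, deltas, {"wide_nose": 0.5, "high_cheek": 1.0})
```

This also plays reasonably with facial animation: animation morphs can be authored as deltas against the same neutral base and added on top of the baked identity result.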
Unfortunately I'm not aware of any silver-bullet solutions; AFAIK most games just avoid the problem by being really, really careful at asset-authoring time and minimizing morph overlap.