Hardware vertex morphing, feedback and/or resources please



Hello, I'm working on a little research project on vertex morphing on the GPU, using a technique akin to frame tweening. It works just like the Morpher modifier in 3D Studio Max, using a base mesh and several target meshes, but in real time and with quite good performance (~330 fps for a mesh with 17,000 polygons and 3 active targets).

To get the vertices of the base mesh and the targets to the shader, I set up the standard D3DX meshes on different streams and set a vertex declaration on the device, so the vertex shader receives the vertices of the base mesh and the targets at the same time. The shader then performs a simple interpolation based on the difference between each target vertex and the base mesh vertex, scaled by the weight (blend factor) of that target. This was the simplest hardware-based technique I could come up with, since I couldn't find much useful information about hardware vertex morphing on the internet (the resources I did find deal either with terrain geomorphing or with software morphing). So I was wondering...

1) Can anyone point me to some resources on hardware vertex morphing? I'm planning to use vertex morphing in an animation toolkit I'm working on, and I'd like to try out some alternatives to see which approach would be best.

2) Is this an existing technique? It would sure be nice if I'd finally come up with a new technique with all my tinkering :) If it's already a known technique, could anyone comment on its performance and other characteristics compared to other techniques?
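To give an idea, here's a stripped-down sketch of the shader side (positions only, with made-up constant names; normals are blended the same way):

[code]
// Stream-based morphing: base mesh on stream 0, targets on streams 1-3.
// "Weights" holds the blend factor of each active target.
float4x4 WorldViewProj;
float3   Weights;

struct VS_INPUT
{
    float3 basePos    : POSITION0;   // stream 0: base mesh
    float3 target0Pos : POSITION1;   // stream 1: first target
    float3 target1Pos : POSITION2;   // stream 2: second target
    float3 target2Pos : POSITION3;   // stream 3: third target
};

float4 MorphVS(VS_INPUT input) : POSITION
{
    // Accumulate each target's weighted offset from the base position.
    float3 pos = input.basePos
               + Weights.x * (input.target0Pos - input.basePos)
               + Weights.y * (input.target1Pos - input.basePos)
               + Weights.z * (input.target2Pos - input.basePos);
    return mul(float4(pos, 1.0f), WorldViewProj);
}
[/code]

Each POSITIONn semantic corresponds to the usage index assigned to that stream in the vertex declaration.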

I remember walking someone through this technique a couple of years ago... he was looking to work with MD2 models and animations, and we set something up that loaded each keyframe as a separate buffer and then bound the two keyframes as separate streams to interpolate between them on the GPU. Gave very smooth results.

Something that may be of interest is Spherical Scale Mapping.

However you approach it, if you want to perform the morphing in hardware then you have to get the vertex data for the base and for each target across to the GPU somehow. Unless the data is small enough to fit into the constant buffer, I can only see this happening through either vertex streams or textures.

I suppose you could do something involving textures: give each vertex in the base mesh a set of texture coordinates, then, in the vertex shader, use those coordinates to look up that vertex's position in a particular target. Then interpolate in the normal way. The benefit of this approach might be compression; instead of storing a full set of vertex positions for a target, you could store some and let the rest be reconstructed by the texture filtering unit. Calculating texture coordinates for optimal compression seems like an incredibly hard problem, though... perhaps Hoppe has done some relevant work...
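Sketching it out (this assumes vs_3_0 vertex texture fetch and a floating-point position map for the target; all the names here are made up):

[code]
// One morph target stored as a floating-point position map, sampled in
// the vertex shader (requires vs_3_0 for vertex texture fetch).
float4x4  WorldViewProj;
float     Weight;                          // blend factor for the target
sampler2D TargetPositions : register(s0);  // e.g. D3DFMT_A32B32G32R32F

struct VS_INPUT
{
    float3 basePos : POSITION0;
    float2 morphUV : TEXCOORD0;   // per-vertex coordinate into the map
};

float4 MorphLookupVS(VS_INPUT input) : POSITION
{
    // Vertex shaders must use tex2Dlod; LOD 0 reads the top mip level.
    float3 targetPos = tex2Dlod(TargetPositions,
                                float4(input.morphUV, 0.0f, 0.0f)).xyz;
    float3 pos = lerp(input.basePos, targetPos, Weight);
    return mul(float4(pos, 1.0f), WorldViewProj);
}
[/code]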

I'm using the streams approach now, putting the complete vertex data from the base and target meshes onto 4 streams (which will be the minimum requirement: 1 base, 3 targets). I've been searching around and found a recent NVIDIA article from October 2005 which uses more or less the same technique. They use a special 'group' mesh, whereas my approach just loads the meshes from separate files. Here is a screenshot of my approach so far:

[screenshot]

And here's a little movie showing the morpher in action. I think I'll stick with this approach for the time being, but I am wondering how many concurrently active morph targets I will need to support to make the technique generally useful.

From what I've gathered, 2 active targets should be enough for a decent-looking lip-sync implementation (based on the MS Speech SDK: one target for the current viseme and another for the next viseme to blend into). This leaves one morph target for other (facial) expressions. I can't find the hardware caps sheet in the docs right now, but 4 vertex streams should be about the minimum for SM2 hardware, right?

The DirectX10 preview also provides a morphing implementation, using textures. It took a while for me to figure it out properly, but I think I understand it now. It seems to be a better approach than my streams, since it supports sparse morph targets (if that's the correct term: only data for the vertices that actually morph is needed). The implementation of the blending should be about the same, but I am wondering about the overhead of sampling the textures...

My streams approach supplies the vertices to the vertex shader directly, but the texture approach requires an additional three texture samplings per vertex (position, normal, tangent), and possibly all three per target as well (when blending the separate targets on the GPU). So all in all, I'm not so sure it would be faster, even though it's definitely more efficient in terms of memory use. And it supports fewer morph targets than the streams approach from what I can tell (again, when sticking completely to the GPU): my X850 supports 16 simultaneous streams but only 8 simultaneous textures, so that's 15 vs 8 morph targets supported.
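For reference, the per-target cost I'm worried about looks roughly like this (made-up sampler names, and again assuming vertex texture fetch):

[code]
// Per-target cost of the texture variant: three fetches per vertex,
// one each for the position, normal and tangent deltas of the target.
sampler2D PosDeltas : register(s0);
sampler2D NrmDeltas : register(s1);
sampler2D TanDeltas : register(s2);
float     Weight;

void ApplyTarget(float2 uv, inout float3 pos,
                 inout float3 normal, inout float3 tangent)
{
    float4 coord = float4(uv, 0.0f, 0.0f);
    pos     += Weight * tex2Dlod(PosDeltas, coord).xyz;
    normal  += Weight * tex2Dlod(NrmDeltas, coord).xyz;
    tangent += Weight * tex2Dlod(TanDeltas, coord).xyz;
}
[/code]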

But of course, that should only be an issue if I ever need more than 8 morph targets. I'm an application programmer by trade, so I can't tell whether this is ever needed in a normal graphics scenario. Since I'm aiming to create a generic animation toolkit, I'd really like to know more about the typical requirements for morphing, and also whether the performance so far (330 fps for a 17,000-polygon mesh with 3 active targets) is acceptable.

I've got no idea about the performance characteristics of the texture/geometry map approach... I was just looking to give you an alternative technique for comparison purposes [smile]

Re the limit on number of targets: you can pack multiple images into a single texture and do multiple lookups. Splitting textures into quarters and packing one geometry map per quarter would give you 32 targets on your X850.
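The lookup adjustment is simple enough; something along these lines (the quadrant numbering is arbitrary):

[code]
// Remap a map-local coordinate into one quadrant of the packed texture.
// quadrant: 0 = top-left, 1 = top-right, 2 = bottom-left, 3 = bottom-right.
float2 QuarterUV(float2 uv, float quadrant)
{
    float2 offset = float2(fmod(quadrant, 2.0f),
                           floor(quadrant * 0.5f)) * 0.5f;
    return uv * 0.5f + offset;
}
[/code]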

Hey, at least your reply got me figuring out the texture alternative in the DX10 preview, since I was a bit put off by the daunting-looking maps when I first looked at it... and to be honest, I thought it was some kind of 2D mapping technique, which would explain why it made absolutely no sense to me at all [smile]

I think I'll take a shot at implementing it anyway, as it does provide the added benefit that I can blend the target textures prior to rendering, keeping the clutter in the rest of the pipeline down. Plus this should prove useful when the morph doesn't change from one frame to the next, since in that case I can just reuse the texture from the last frame. The stream approach, on the other hand, would invariably have to redo the morph blending every time the mesh is rendered, which is also undesirable for multipass rendering.
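The pre-blend pass would then just be a full-screen quad with a small pixel shader along these lines (the four-target limit and the register layout are only assumptions on my part):

[code]
// Pre-blend pass: render a full-screen quad into a float render target,
// collapsing up to four weighted delta maps into a single texture.
sampler2D Target0 : register(s0);
sampler2D Target1 : register(s1);
sampler2D Target2 : register(s2);
sampler2D Target3 : register(s3);
float4    Weights;   // one blend factor per target

float4 PreBlendPS(float2 uv : TEXCOORD0) : COLOR
{
    // The weighted sum of the deltas is the total displacement for the
    // vertex that maps to this texel.
    float3 delta = Weights.x * tex2D(Target0, uv).xyz
                 + Weights.y * tex2D(Target1, uv).xyz
                 + Weights.z * tex2D(Target2, uv).xyz
                 + Weights.w * tex2D(Target3, uv).xyz;
    return float4(delta, 0.0f);
}
[/code]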

With your quarters approach I can indeed pack 32 targets into a single texture, which should be quite sufficient for nearly all purposes, I think. It'll still use 3 textures in the final pipeline for position/normal/tangent data, but that should be acceptable too.

So, the last questions I have (for now :) are whether there's already a standard format for packing geometry data into textures, like there is for normal maps, and whether a texture sampler with point sampling would be accurate enough for fetching the appropriate morph displacements from the texture, provided every vertex maps to exactly one texel.
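For the mapping itself I'd use the usual texel-center offset, something like:

[code]
// Map vertex i, stored at texel (x, y) of a width x height map, to a UV
// that lands exactly on the texel center, so point sampling returns the
// texel unfiltered. The +0.5 accounts for Direct3D 9's texel origin
// being at the texel's top-left corner.
float2 VertexToUV(float x, float y, float width, float height)
{
    return float2((x + 0.5f) / width, (y + 0.5f) / height);
}
[/code]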

Take a closer look at Hoppe's page. He's developed (or is developing?) something called 'Geometry Images' that may be exactly what you want.

Hmm, I missed that one... even though it seems to be his main research topic :) Thanks for pointing it out yet again.
