Best way to render skeletally animated meshes

3 comments, last by GameDev.net 19 years, 6 months ago
I was just wondering how most people handle their skeletally animated characters. I have a shared mesh (in bind pose) used by many characters. The shared mesh is never modified, but its vertices are used to compute the transformed positions, which are then stored in a separate vertex buffer. I was wondering which would be more efficient:

1) For each character, transform the shared mesh into one vertex buffer, then render it. (i.e. transform into VB, render VB, transform into VB, render VB, etc.)

2) Each character has its own vertex buffer. The shared mesh is transformed and stored for each character, then all the characters are rendered from their own vertex buffers. (i.e. transform into VB, transform into VB, transform into VB, render VB, render VB, render VB.)

What's your preference?

Thanks,
Jerec
JerecCM Software — "Oro?"
Each instance gets its own transformed vertices. You need this to support certain functionality, such as shadow volume extraction (Doom III does this) or sorting by material or depth (for transparent bits).

If you don't need to touch the transformed data, and don't draw anything transparent, then you could re-use one large dynamic vertex buffer for everything: lock it with NOOVERWRITE to append each object and DISCARD when it fills up in D3D, or orphan it by calling glBufferData with a NULL data pointer before mapping it for each object in OpenGL.
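For reference, the D3D9 side of that pattern looks roughly like this. A minimal sketch, assuming a D3DUSAGE_DYNAMIC buffer in D3DPOOL_DEFAULT; the variable names (`vb`, `offset`, `vbSize`, `skinnedVerts`) are hypothetical, not from the thread:

```cpp
// Append skinned vertices into a shared dynamic VB without stalling.
void* ptr = nullptr;
if (offset + bytes > vbSize) {
    // Buffer is full: DISCARD orphans it, the driver hands back fresh memory.
    offset = 0;
    vb->Lock(0, bytes, &ptr, D3DLOCK_DISCARD);
} else {
    // NOOVERWRITE promises we won't touch vertices the GPU may still be reading.
    vb->Lock(offset, bytes, &ptr, D3DLOCK_NOOVERWRITE);
}
memcpy(ptr, skinnedVerts, bytes);
vb->Unlock();
DrawFromBuffer(vb, offset, bytes);   // hypothetical draw helper
offset += bytes;
```

The GL equivalent is to call glBufferData with the same size and a NULL pointer before each map, which orphans the old storage the same way DISCARD does.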
enum Bool { True, False, FileNotFound };
I'd say 1), with a dynamic buffer. If you use a static buffer, it's going to stall either way you approach it.

Why 1? Because you're rendering while preparing the next mesh, processing in parallel.

If you require a card with shaders, it's probably faster to do the bone transform on the GPU in a shader. If you're doing many passes and have highly tessellated meshes, you might want to software transform to save the repeated transform cost. If your mesh isn't tessellated, the transform cost will likely be shadowed by the fill cost.

If you allow non-shader cards, it's probably faster to software skin using your #1 method. This means you do the work only once for as many passes as you need, and it still lets HW T&L do the rest. If you tried to keep everything the same as on high-end cards and used shader emulation (D3D only, I think), the T&L would become software too, and you'd likely recompute the data for each pass.
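To make the software-skinning path concrete, here's a minimal CPU matrix-palette skinning sketch. All the types and names are hypothetical (not from any poster's engine), it skins positions only, and in practice you'd write straight into the locked dynamic VB instead of a plain array:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Mat34 { float m[3][4]; };     // rows of a 3x4 bone matrix

struct SkinVert {
    Vec3  pos;        // bind-pose position from the shared mesh
    int   bone[4];    // bone palette indices
    float weight[4];  // blend weights, summing to 1
};

// Apply one 3x4 bone matrix to a point.
static Vec3 Transform(const Mat34& b, const Vec3& p) {
    return {
        b.m[0][0]*p.x + b.m[0][1]*p.y + b.m[0][2]*p.z + b.m[0][3],
        b.m[1][0]*p.x + b.m[1][1]*p.y + b.m[1][2]*p.z + b.m[1][3],
        b.m[2][0]*p.x + b.m[2][1]*p.y + b.m[2][2]*p.z + b.m[2][3],
    };
}

// Skin the shared bind-pose mesh once per character, writing into
// that character's output buffer.
void SkinMesh(const std::vector<SkinVert>& bindPose,
              const std::vector<Mat34>& palette,
              Vec3* out)
{
    for (std::size_t i = 0; i < bindPose.size(); ++i) {
        const SkinVert& v = bindPose[i];
        Vec3 acc{0.0f, 0.0f, 0.0f};
        for (int j = 0; j < 4; ++j) {
            if (v.weight[j] == 0.0f) continue;   // skip unused influences
            Vec3 t = Transform(palette[v.bone[j]], v.pos);
            acc.x += t.x * v.weight[j];
            acc.y += t.y * v.weight[j];
            acc.z += t.z * v.weight[j];
        }
        out[i] = acc;
    }
}
```

The key point from the post above: run this once per character per frame, and every render pass can then reuse the already-skinned buffer.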
You want to use separate vertex buffers. If you use the same vertex buffer for each instance, the mesh needs to be updated every frame, which slows things down (unless you VSync). If each mesh instance has its own VB, you can update the verts at around 30 updates per second or so instead.

In my engine I actually do it both ways; by default, though, each instance gets its own VB.
For each character I create two buffers:

- Static buffer
- Dynamic buffer

And each buffer has a set of primitives linked with it, which would set materials, etc.

The static buffer contains all rigid data, i.e. vertices that aren't deformed by CPU skinning.

The dynamic one contains the vertices that are updated every frame by the CPU: I skin on the CPU, then copy the skinned data into the dynamic buffer.

For instances of characters I point to the same vertex buffers. As an option they can have different materials, and so different primitives. That way I can give them different colors/textures/shaders so the instances still look distinct.

I update the dynamic buffers and render immediately after. Since the buffers are shared, I have to do it in that order.

So instanced characters use very little memory; their meshes are shared internally as well.

Cheers,
- John

