Skeletal animation with vertex shaders

Started by
2 comments, last by klajntib 20 years ago
Ok. Not doing blending at the moment, 1 joint per vertex only.

This is what I have so far:
- vertices, transformed by the inverse of their bone's reference matrix
- vertex normals
- vertex texcoords
- triangles (indices into the vertex array)
- joints: position and rotation in a matrix (this is the one whose inverse the vertices are multiplied by)
- each joint has its local transformation matrix and its vertex transformation matrix
- each joint also has keyframes with rotations stored in them

What I am doing for each bone each frame is:
1. Interpolate between 2 rotation keyframes.
2. Create a matrix from the interpolated rotation.
3. Postmultiply this bone's local matrix with the one created in step 2.
4. Go up the hierarchy and postmultiply the parent's matrix with the child's, putting the result in the vertex transformation matrix.

I end up with one vertex transformation matrix per bone. Now for each vertex I just multiply it by its bone's vertex transformation matrix, then assign the vertex array pointers and draw.

This works OK, but it has some drawbacks:
1. The vertex-by-matrix transformation is done on the CPU.
2. I constantly have to modify the vertex array.

Now I was thinking about doing this in a vertex shader. I could have a static vertex array, pre-transformed by the inverse matrices. Then I would only have to calculate each bone's vertex transformation matrix on the CPU (from the keyframes) and upload all the bones' matrices into the vertex shader's constant registers. I have 10 bones, each with a 4x4 matrix, but only the first 3 rows are needed, so that means 30 registers. Could be done. The shader would then do the matrix multiplication to transform the vertices and output the result.

This would free the CPU for other tasks and also save quite a lot of AGP bandwidth, because I wouldn't have to change the vertex array all the time.

Am I on the right track?
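Since the post above describes the per-frame math only in prose, here is a minimal C++ sketch of the same flow under some stated assumptions: rotations are assumed to be stored as quaternions (the post only says "rotations"), matrices are column-major, and joints are assumed to be ordered parent-before-child. All type and function names (Quat, Mat4, Joint, UpdatePose, PackConstants) are illustrative, not taken from the original code.

// Hedged sketch of the per-frame CPU work described in the post above.
// Assumptions not in the original: rotations stored as quaternions, column-major
// matrices, joints ordered so a parent always precedes its children.
#include <cmath>
#include <cstddef>
#include <vector>

struct Quat { float x, y, z, w; };
struct Mat4 { float m[16]; };            // column-major: element (row, col) at m[col*4 + row]

// Step 1: interpolate between the two surrounding rotation keyframes.
Quat Slerp(Quat a, const Quat& b, float t)
{
    float d = a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w;
    if (d < 0.0f) { d = -d; a.x = -a.x; a.y = -a.y; a.z = -a.z; a.w = -a.w; } // shortest arc
    float ka = 1.0f - t, kb = t;                       // nearly parallel: plain lerp is fine
    if (d < 0.9995f) {
        float theta = std::acos(d);
        ka = std::sin((1.0f - t) * theta) / std::sin(theta);
        kb = std::sin(t * theta) / std::sin(theta);
    }
    Quat r = { ka*a.x + kb*b.x, ka*a.y + kb*b.y, ka*a.z + kb*b.z, ka*a.w + kb*b.w };
    float len = std::sqrt(r.x*r.x + r.y*r.y + r.z*r.z + r.w*r.w);
    r.x /= len; r.y /= len; r.z /= len; r.w /= len;
    return r;
}

// Step 2: build a rotation matrix from the interpolated quaternion.
Mat4 QuatToMat4(const Quat& q)
{
    Mat4 r = {};
    float xx = q.x*q.x, yy = q.y*q.y, zz = q.z*q.z;
    float xy = q.x*q.y, xz = q.x*q.z, yz = q.y*q.z;
    float wx = q.w*q.x, wy = q.w*q.y, wz = q.w*q.z;
    r.m[0] = 1-2*(yy+zz); r.m[4] = 2*(xy-wz);   r.m[8]  = 2*(xz+wy);
    r.m[1] = 2*(xy+wz);   r.m[5] = 1-2*(xx+zz); r.m[9]  = 2*(yz-wx);
    r.m[2] = 2*(xz-wy);   r.m[6] = 2*(yz+wx);   r.m[10] = 1-2*(xx+yy);
    r.m[15] = 1.0f;
    return r;
}

Mat4 Mul(const Mat4& a, const Mat4& b)               // c = a * b
{
    Mat4 c = {};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                c.m[col*4 + row] += a.m[k*4 + row] * b.m[col*4 + k];
    return c;
}

struct Joint {
    int  parent;             // -1 for the root
    Mat4 local;              // the joint's local transformation matrix
    Mat4 vertexTransform;    // result: the per-frame vertex transformation matrix
};

// Steps 3 and 4: combine the interpolated rotation with the joint's local matrix,
// then concatenate up the hierarchy into the vertex transformation matrix.
void UpdatePose(std::vector<Joint>& joints, const std::vector<Quat>& interpolatedRot)
{
    for (std::size_t i = 0; i < joints.size(); ++i) {
        Mat4 animated = Mul(joints[i].local, QuatToMat4(interpolatedRot[i]));
        joints[i].vertexTransform = (joints[i].parent >= 0)
            ? Mul(joints[joints[i].parent].vertexTransform, animated)
            : animated;
    }
}

// Shader upload: only the first three rows of each 4x4 matter, so 10 bones fit in
// 30 four-float constant registers (3 registers per bone).
void PackConstants(const std::vector<Joint>& joints, float* out /* joints.size()*12 floats */)
{
    for (std::size_t b = 0; b < joints.size(); ++b)
        for (int row = 0; row < 3; ++row)
            for (int col = 0; col < 4; ++col)
                out[b*12 + row*4 + col] = joints[b].vertexTransform.m[col*4 + row];
}

With the matrices packed this way, the shader needs only three dot products per vertex (one per packed row) to produce the skinned position.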
Yup. That's pretty much exactly what I do with my SA code. I implement a check to see if the hardware supports vertex shaders, and can switch between transforming by the CPU into a buffer, or transforming via vertex shader. The pre-transformed model (pre-transformed by the inverse of the rest pose, that is) can be locked into a VBO, and you never have to write to it, only read from it.
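As a rough illustration of the runtime switch this reply describes, here is a small sketch. It assumes an OpenGL renderer; the extension tested (GL_ARB_vertex_program) and every function name here are my assumptions, not taken from the poster's actual code.

// Hedged sketch: choose between a CPU skinning path and a vertex-shader path.
#include <cstring>
#include <GL/gl.h>

bool HardwareSupportsVertexShaders()
{
    // One possible capability test on older OpenGL: scan the extension string.
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return ext != 0 && std::strstr(ext, "GL_ARB_vertex_program") != 0;
}

void DrawSkinnedModel(bool useShaderPath)
{
    if (useShaderPath) {
        // GPU path: the model pre-transformed by the inverse rest pose lives in a
        // static VBO that is only ever read; per-frame work is limited to uploading
        // the bone matrices as shader constants.
    } else {
        // CPU path: transform each vertex by its bone's matrix into a scratch
        // buffer every frame, then draw from that buffer.
    }
}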

Golem
Blender--The Gimp--Python--Lua--SDL
Nethack--Crawl--ADOM--Angband--Dungeondweller
I'd recommend storing the vertices in the VB without doing the inverse first. This allows you to do multi-bone blends really easily. If you move the vertices to be relative to a specific bone beforehand, you're going to be screwed when you need to use the same vertex with multiple bones. Each time the pose changes it does mean an extra matrix multiply per bone matrix, but the per-vertex cost is the same and you're free to add blending with another dozen lines of shader code.
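To make the blending suggestion concrete, here is a sketch of the arithmetic in plain C++; a vertex shader would run the same per-vertex sum. The 4-influence limit, the column-major layout, and all names are illustrative assumptions, not something stated in the thread.

// Hedged sketch of multi-bone blending with vertices kept in bind pose in the VB.
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; };            // column-major: element (row, col) at m[col*4 + row]

Mat4 Mul(const Mat4& a, const Mat4& b)   // c = a * b
{
    Mat4 c = {};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                c.m[col*4 + row] += a.m[k*4 + row] * b.m[col*4 + k];
    return c;
}

// Extra per-bone cost whenever the pose changes: fold the inverse bind-pose matrix
// into the animated matrix, since the vertices now stay in bind pose in the VB.
std::vector<Mat4> BuildSkinningMatrices(const std::vector<Mat4>& animated,
                                        const std::vector<Mat4>& inverseBind)
{
    std::vector<Mat4> skin(animated.size());
    for (std::size_t b = 0; b < animated.size(); ++b)
        skin[b] = Mul(animated[b], inverseBind[b]);
    return skin;
}

// Per-vertex blend: transform the bind-pose vertex by each influencing bone's
// skinning matrix and sum the results by weight.
Vec3 SkinVertex(const Vec3& v, const int bone[4], const float weight[4],
                const std::vector<Mat4>& skin)
{
    Vec3 out = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 4; ++i) {
        if (weight[i] == 0.0f) continue;
        const float* m = skin[bone[i]].m;              // column-major 4x4 times (v, 1)
        out.x += weight[i] * (m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12]);
        out.y += weight[i] * (m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13]);
        out.z += weight[i] * (m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]);
    }
    return out;
}

The extra work the reply mentions is the one Mul per bone in BuildSkinningMatrices; the per-vertex cost only grows by the weighted sum.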
Huh, so it looks like I pretty much nailed it.
I will definitely consider your advice, Namethatlalala (nick too long to remember lol).

I'm going to implement it tomorrow (or today by our time - 2 AM here) and report back.

VertexNormal - what kind of speed increase do you experience when using a shader instead of the CPU?

