Archived

This topic is now archived and is closed to further replies.

moggy

vertex buffers and animation, vertex weighting

Recommended Posts

Hi,

If I'm chunking my geometry and storing it in big lists so I can easily load it into vertex buffers, how on earth do I cope with animation? I want to still load the same data set into memory, but I can't, because the vertices have moved/been interpolated. I'm sure this is easily solvable, but I dunno how!

Thanks,
Alex

You could modify your vertex arrays on the fly (using the CPU) and stream them in via AGP every frame. That's not very efficient, though. Or, if you have hardware vertex shader support, you should do the entire animation on the GPU (for example, weighted skinning). In the latter case, your vertex arrays remain static all the time.
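For illustration, here's a minimal CPU-side sketch of the first approach; the names (Vertex, animateOnCpu) are made up, the point is just that the whole array has to be rebuilt and re-streamed every frame:

#include <cstddef>

struct Vertex { float px, py, pz, nx, ny, nz; };

// Approach 1: blend two key frames on the CPU, then re-upload the result.
// 't' runs from 0 (frameA) to 1 (frameB).
void animateOnCpu(Vertex* dst, const Vertex* frameA, const Vertex* frameB,
                  std::size_t count, float t)
{
    for (std::size_t i = 0; i < count; ++i) {
        dst[i].px = frameA[i].px + t * (frameB[i].px - frameA[i].px);
        dst[i].py = frameA[i].py + t * (frameB[i].py - frameA[i].py);
        dst[i].pz = frameA[i].pz + t * (frameB[i].pz - frameA[i].pz);
        // ...same for the normals...
    }
    // 'dst' now has to be streamed to the card again, every single frame.
    // With the GPU approach the key frames are uploaded once and only a
    // few shader constants change per frame.
}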

Vertex weight skinning sounds like the thing I'm after. I want my vertex arrays to remain static once they're in memory, and therefore to only have to interpolate between two sets of vertices in memory... Anyone have more info on this?

Alex

quote:
Original post by moggy
Vertex weight skinning sounds like the thing I'm after. I want my vertex arrays to remain static once they're in memory, and therefore to only have to interpolate between two sets of vertices in memory... Anyone have more info on this?


Now that's really easy; that's not even skinning, but simple vertex interpolation. You simply load the two vertex animation frames (start and end) into two separate vertex array attributes (OpenGL) or streams (D3D), and use a constant parameter register (with a value from 0 to 1) to interpolate between the two in a vertex shader. Don't forget to do the same with the normals for lighting (and the tangents, if you want bump mapping).
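Roughly, the per-vertex work the shader does looks like this (a CPU-side sketch of the math only; morphVertex and friends are illustrative names, not a real shader):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t)
{
    return { a.x + t * (b.x - a.x),
             a.y + t * (b.y - a.y),
             a.z + t * (b.z - a.z) };
}

static Vec3 normalize(const Vec3& v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// pos0/nrm0 come from the first attribute set (start frame), pos1/nrm1
// from the second (end frame); 'blend' is the constant register in [0, 1].
void morphVertex(const Vec3& pos0, const Vec3& pos1,
                 const Vec3& nrm0, const Vec3& nrm1,
                 float blend, Vec3& outPos, Vec3& outNrm)
{
    outPos = lerp(pos0, pos1, blend);
    // Interpolated normals get shorter, so renormalize before lighting.
    outNrm = normalize(lerp(nrm0, nrm1, blend));
}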

Vertex weighting/skinning means each vertex is linked to one or more bones, with a weight per linked bone.
What you want is simple vertex interpolation, usually called morphing or shape animation.
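For illustration, the per-vertex data looks quite different in the two cases (field names and the four-bone limit below are just assumptions):

// Morphing / shape animation: one position (and normal) per key frame,
// blended by a single factor. Data grows with the number of key frames.
struct MorphVertex {
    float posFrameA[3], nrmFrameA[3];
    float posFrameB[3], nrmFrameB[3];
};

// Skinning: one bind-pose position plus bone indices and weights; the
// per-frame data is just the bone matrices, shared by all vertices.
struct SkinnedVertex {
    float position[3], normal[3];
    unsigned char boneIndex[4];   // which bones influence this vertex
    float boneWeight[4];          // influence weights, usually summing to 1
};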

BTW Yann, do you really compute the animation on the card?
That is, do you interpolate between the matrices for each bone before applying weight * boneMatrix for each bone on a set of vertices, or do you 'simply' load the final bone matrices and compute weight * boneMatrix?


-* So many things to do, so little time to spend. *-

Got a sort of related question I've been curious about:

Sometimes you do stuff like shadows, or you need to do more texture passes than the card allows, and you have to render the same vertices twice. Is it possible to use a skeletal/bones-type shader for models to calculate the transformed geometry and cache the results, instead of going through all the skinning/bone-matrix work in the shader for each pass?

No way that I'm aware of.
Data just goes through the pipe to end up being displayed on a surface; internal computations can't be stored 'outside' the chip.



-* So many things to do, so little time to spend. *-

quote:

BTW Yann, do you really compute the animation on the card?
That is, do you interpolate between the matrices for each bone before applying weight * boneMatrix for each bone on a set of vertices, or do you 'simply' load the final bone matrices and compute weight * boneMatrix?


I load the transformed matrix skeleton (which is computed on the CPU, from quaternions) into constant VP registers, and do the weight*pos*matrix in the vertex shader. There is no way I'm aware of to compute the matrix concatenation on the GPU (yet).
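Roughly, the per-vertex sum the shader evaluates is the following (written here as a CPU-side sketch with made-up types; the real version lives in the vertex program and reads the bone matrices from the constant registers):

#include <cstddef>

struct Vec3 { float x, y, z; };
struct Mat3x4 { float m[3][4]; };   // upper 3x4 of a 4x4 bone transform

static Vec3 transform(const Mat3x4& M, const Vec3& p)
{
    return {
        M.m[0][0] * p.x + M.m[0][1] * p.y + M.m[0][2] * p.z + M.m[0][3],
        M.m[1][0] * p.x + M.m[1][1] * p.y + M.m[1][2] * p.z + M.m[1][3],
        M.m[2][0] * p.x + M.m[2][1] * p.y + M.m[2][2] * p.z + M.m[2][3],
    };
}

// Skin one vertex: sum of weight * (boneMatrix * bindPosePosition).
Vec3 skinVertex(const Vec3& bindPos,
                const Mat3x4* boneMatrices,   // from the constant registers
                const unsigned* boneIndices,  // per-vertex attribute
                const float* weights,         // per-vertex attribute
                std::size_t influences)
{
    Vec3 result = { 0.0f, 0.0f, 0.0f };
    for (std::size_t i = 0; i < influences; ++i) {
        Vec3 p = transform(boneMatrices[boneIndices[i]], bindPos);
        result.x += weights[i] * p.x;
        result.y += weights[i] * p.y;
        result.z += weights[i] * p.z;
    }
    return result;
}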

quote:

Sometimes you do stuff like shadows, or you need to do more texture passes than the card allows, and you have to render the same vertices twice. Is it possible to use a skeletal/bones-type shader for models to calculate the transformed geometry and cache the results, instead of going through all the skinning/bone-matrix work in the shader for each pass?


You can't really cache the vertices, but you could use deferred shading for that kind of task (where you basically store the vertices in the framebuffer). Requires a DX9/ARB_FP capable card, though.

Thank you both, your replies are very much appreciated.
