[DirectX9][C++]Animation Complexities

I have several questions regarding how animation is applied within DirectX.

1) Is the core concept behind 3D model animation simply updating the vertex buffer?

2) If so, how does one cope with hundreds of units? Many of them are identical and would quickly make the vertex buffer huge. Or can the vertex buffer easily store thousands of objects' worth of vertex data (one unique vertex/animation set per model)?

3) Should the vertex buffer itself only hold one set of vertex data per unique object?

4) Is it possible to transform a selected range of vertices from the vertex buffer? If so, how?

5) Currently the only process I know is using DrawPrimitive/DrawIndexedPrimitive to draw some of the vertices and transform them via the world transform. However, I know this must be wrong because of all the overhead associated with SetTransform and DrawPrimitive/DrawIndexedPrimitive. What other options do I have?

Really simplified example: you have an object, a cube; in fact you have a dozen of them. How would I animate (for simplicity's sake, rotate) say the top four vertices of the first, then do something similar to the four right-side vertices of another cube, and so on? I am trying to grasp how to update/modify "parts" of an object without using SetTransform or directly updating the vertex buffer. However, I am not even sure which is the proper and relatively efficient way.

I have tried searching for this information for some time, and everything keeps pointing me to meshes. However, I do not think that answers what I want. I wish to know how, internally and programmatically, DirectX updates and applies animation data to some arbitrary object. If anyone can elaborate on this it would make things much clearer.
You have asked for an extremely complicated answer, whether you knew it or not.

My numbers are not related to your numbers; I am not answering your questions one-by-one.


#1: Your graphics data should be shared. You load a model once, then create instances for each time it is used in the scene. This means all your Goombas each keep track of their own positions and data, but when it comes time to draw they use a pointer to the loaded-one-time vertex data that composes the visual representation of the Goomba.

#2: Goombas each run their own animations individually as well, which means each little Goomba has its own unique copy of animation data.
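In code, the idea looks roughly like this (a minimal sketch only; SharedModel, GoombaInstance, and their members are illustrative names, not a real API):

```cpp
#include <d3d9.h>
#include <d3dx9.h>
#include <vector>

// Loaded once from disk; every Goomba in the scene points at this.
struct SharedModel
{
    IDirect3DVertexBuffer9* pVB;
    IDirect3DIndexBuffer9*  pIB;
    DWORD                   numVertices;
    DWORD                   numFaces;
};

// One per Goomba in the scene: position and animation state are unique,
// but the heavy vertex data is not duplicated.
struct GoombaInstance
{
    const SharedModel* pModel;    // shared, loaded-one-time vertex data (#1)
    D3DXVECTOR3        position;  // per-instance data
    float              animTime;  // per-instance animation state (#2)
};
```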

#3: In many cases animations serve no purpose other than visual. Since you are sharing your model data, you have to re-apply the animation for each instance. If you have 10 little Goombas, each of them ticks their own animations, but the model is not morphed to fit the animation until render time. Applying an animation to a model is explained below. The point here is that it costs some time because you may have to manipulate vertices individually, so you want to avoid it when possible. Thus attaching animations to models should be done at rendering time, and only after you have done frustum culling to ensure that your Goomba will be drawn. If it is not, do not waste time applying the animation to the model (unless it is required for something more than visual).
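For example (a sketch only; FrustumContains(), ApplySkinning(), and DrawInstance() are hypothetical helpers, and GoombaInstance is the struct sketched above):

```cpp
// Tick every animation, but only pay for skinning and drawing when the
// instance actually survives frustum culling.
void UpdateAndRender(std::vector<GoombaInstance>& goombas, float dt)
{
    for (size_t i = 0; i < goombas.size(); ++i)
    {
        goombas[i].animTime += dt;               // animation always advances
        if (!FrustumContains(goombas[i].position))
            continue;                            // off-screen: skip the vertex work
        ApplySkinning(goombas[i]);               // morph the shared vertices for this instance
        DrawInstance(goombas[i]);
    }
}
```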

#4: There are many factors and methods involved with giving motion to the 3D model (which, as I mentioned, is shared). There are 2 types of skinning: rigid and soft. Both are able to work per-vertex rather than per-object. Soft-skinning is always per-vertex, but rigid-skinning can work on either a whole object all at once, or only on some vertices inside the object. Maya exports all skins per-vertex, so if your engine is going to be any good it needs to handle the case where rigid skins do not work on the whole object.

-> A: You can save a lot of time and reduce your workload by forcing rigid-skin data to work on objects as a whole, which they often do. If you do this (or if you make a special case which detects when rigid-skinned data is applied to the whole model), you can animate the body part simply by setting its transform based off the animation/skeletal data.
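As a rough sketch of case A, assuming the SharedModel struct above, that the vertex declaration/FVF has already been set, and that boneWorld is the matrix produced by the hierarchy walk in #5:

```cpp
// Rigidly-skinned whole part: no per-vertex work, just a world transform.
void DrawRigidPart(IDirect3DDevice9* pDev, const SharedModel& part,
                   const D3DXMATRIX& boneWorld, UINT vertexStride)
{
    pDev->SetTransform(D3DTS_WORLD, &boneWorld);
    pDev->SetStreamSource(0, part.pVB, 0, vertexStride);
    pDev->SetIndices(part.pIB);
    pDev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                               part.numVertices, 0, part.numFaces);
}
```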

-> B: In any case, rigid or soft, that is per-vertex, you have to update the vertices in your vertex buffer. This means keeping a copy outside of the vertex buffer, locking the vertex buffer, applying the transforms accordingly to each vertex, and storing the results into the locked vertex buffer (this requires updating the normals by the inverse transpose as well).
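A rough sketch of that per-vertex path, assuming one bone per vertex to keep it short (blend weights work the same way; the vertex layout here is only an example):

```cpp
struct SkinnedVertex
{
    D3DXVECTOR3 pos;
    D3DXVECTOR3 normal;
    DWORD       boneIndex;   // which bone drives this vertex (example layout)
};

void SkinOnCpu(IDirect3DVertexBuffer9* pVB,
               const std::vector<SkinnedVertex>& bindPose,  // untouched CPU-side copy
               const std::vector<D3DXMATRIX>& boneWorlds)   // from the skeleton walk
{
    SkinnedVertex* pDst = NULL;
    if (FAILED(pVB->Lock(0, 0, reinterpret_cast<void**>(&pDst), 0)))
        return;

    for (size_t i = 0; i < bindPose.size(); ++i)
    {
        const D3DXMATRIX& bone = boneWorlds[bindPose[i].boneIndex];

        // Positions take the full bone transform.
        D3DXVec3TransformCoord(&pDst[i].pos, &bindPose[i].pos, &bone);

        // Normals take the inverse-transpose (computed per vertex here for
        // clarity; in practice compute it once per bone).
        D3DXMATRIX invTrans;
        D3DXMatrixInverse(&invTrans, NULL, &bone);
        D3DXMatrixTranspose(&invTrans, &invTrans);
        D3DXVec3TransformNormal(&pDst[i].normal, &bindPose[i].normal, &invTrans);

        pDst[i].boneIndex = bindPose[i].boneIndex;
    }

    pVB->Unlock();
}
```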

#5: The process of attaching the animation to a model is done by recursively walking down the bone hierarchy, keeping track of transforms, and applying one of the above methods to each object/vertex you encounter.
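In rough code (the Bone struct is just an illustration of the idea, not a required layout):

```cpp
struct Bone
{
    D3DXMATRIX         localFromAnim;   // sampled from the animation track
    D3DXMATRIX         world;           // filled in by the walk below
    std::vector<Bone*> children;
};

// Each bone's world matrix is its animated local matrix combined with its
// parent's world matrix; recurse down the hierarchy from the root.
void WalkHierarchy(Bone* pBone, const D3DXMATRIX& parentWorld)
{
    D3DXMatrixMultiply(&pBone->world, &pBone->localFromAnim, &parentWorld);
    for (size_t i = 0; i < pBone->children.size(); ++i)
        WalkHierarchy(pBone->children[i], pBone->world);
    // pBone->world then feeds either SetTransform (#4A) or per-vertex skinning (#4B).
}
```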




So, now the answers to your numbered questions:
#1: If per-vertex manipulation is required, the vertex buffer will be updated. Otherwise it suffices simply to transform the object (which means arm, leg, hand, or any small part of the whole that is in itself a single model).
#2: By sharing the graphics data.
#3: Usually less than that. A single object can have several sets of vertex buffers, one for this texture, one for that texture, etc. Any time the pieces of a single object do not share all the same properties you will get a new vertex buffer. So you need to take this into account when you are manipulating vertices of an object, and make a map that takes the desired vertex index and gives back the buffer index and the index of the vertex inside that buffer.
#4: Yes. Lock that range of vertices, transform them all together, and unlock them (see the sketch after this list). But this goes against what I described above; you have to keep a local copy of any buffers you plan to update or (#1) the model will eventually distort into a frog and (#2) you cannot share the graphics data.
#5: Lock the vertex buffer, manipulate all the vertices you want (from the original copy into the locked buffer), unlock, and draw all at once.
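To illustrate #4 and #5 together, a sketch that locks only a sub-range and always transforms from an untouched CPU-side copy (the stride and offset are illustrative; the position is assumed to sit at offset 0 of each vertex):

```cpp
void TransformVertexRange(IDirect3DVertexBuffer9* pVB,
                          const D3DXVECTOR3* pOriginalPositions, // CPU-side copy of the range
                          UINT firstVertex, UINT count, UINT stride,
                          const D3DXMATRIX& transform)
{
    BYTE* pDst = NULL;
    if (FAILED(pVB->Lock(firstVertex * stride, count * stride,
                         reinterpret_cast<void**>(&pDst), 0)))
        return;

    for (UINT i = 0; i < count; ++i)
    {
        // Transform from the original data, never from what is already in the
        // buffer, or the errors accumulate and the model distorts over time.
        D3DXVECTOR3* pOut = reinterpret_cast<D3DXVECTOR3*>(pDst + i * stride);
        D3DXVec3TransformCoord(pOut, &pOriginalPositions[i], &transform);
    }

    pVB->Unlock();
}
```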


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

Thank you so much for replying YogurtEmperor. When I was trying to work this out I was thinking a single model was much larger than it really was (memory-wise). Big difference between 32 MB per model and 32 KB per model ha ha ha *slaps self*. Again, thank you for clearing up a lot of what I wanted to know. Right now I'm studying how instancing works. One technique mentioned caching all the animations and frames into the vertex buffer, provided you have that many objects using the information. This topic is so fascinating.

*ramblings removed, realized even 100 1,000-polygon models are only 32 MB worth of data*

[Edited by - Dhaos on October 24, 2008 1:40:34 PM]

