Quote:Original post by RobTheBloke
It's not done for efficiency, it's done to make the anim system re-usable and improve build times. Doing it the other way requires the anim system to know about bones, meshes, lights, materials, blend shapes, and anything else it might want to animate. Ultimately that breaks modularity and makes the system a PITA to work with. (btw, I have been writing commercial anim systems for the last 10 years or so, so I've seen the positives and negatives from most ways of doing it.)
Uhh, well, I'm reworking my first animation system, so your experience in this topic is by far greater. Nevertheless, there is something in between black and white: your abstraction is absolute in that you use a single data type only, floats throughout. Our abstraction is less strict, but we still have an abstraction. ATM we are speaking of float scalars, float vectors, and (unit) quaternions. You have these types too, of course, but you look at them as sequences of 1 to 4 floats. This allows you to concentrate all the state variables and to perform a unified blending operation on them; hence you can presumably blend more efficiently w.r.t. caching and/or SSE. On the other hand, you then require contiguous float buffers for each and every operation, or else the cache advantage is lowered. Moreover, you are restricted to operations that can be performed on all (vector) components separately: e.g. you could do a nlerp (although only with an "external" normalization step) but no slerp. Don't misunderstand me: that is okay so far. I believe this makes a performance difference in your favor (although I'm not able to quantify it).
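To make the trade-off concrete, here is a minimal sketch (my own illustration, not RobTheBloke's actual code) of blending a contiguous float buffer component-wise, with quaternions handled by the "external" normalization step mentioned above, i.e. nlerp:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Blend two contiguous float buffers: out = a*(1-t) + b*t.
// Because every channel (scalar, vector, quaternion) is just a run of
// floats, one loop handles all of them -- cache- and SSE-friendly.
void blendBuffers(const std::vector<float>& a, const std::vector<float>& b,
                  float t, std::vector<float>& out)
{
    assert(a.size() == b.size());
    out.resize(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        out[i] = a[i] * (1.0f - t) + b[i] * t;
}

// The "external" normalization step: a quaternion stored as 4 consecutive
// floats must be re-normalized after the component-wise lerp (nlerp).
// A slerp cannot be expressed as a per-component operation at all.
void normalizeQuat(float* q)
{
    const float len = std::sqrt(q[0]*q[0] + q[1]*q[1] + q[2]*q[2] + q[3]*q[3]);
    for (int i = 0; i < 4; ++i)
        q[i] /= len;
}
```

Note how the blend loop never knows where a quaternion starts; only the normalization pass needs that knowledge, which is exactly why slerp does not fit this model.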
Quote:Original post by RobTheBloke
Quote:Original post by godmodder
My last argument is about the logic of the code. Think about what happens in real life: a bone doesn't query its next position but is forced into it. Having the tracks explicitly impose a transformation on the bones seems more realistic. I like to imagine it like controlling a puppet with strings.
But that argument fails the second you have something like an aim constraint that is applied before the final stage of the animation. With your method, the character would have to be updated, then you'd have to get the aim constraint to pull data from those transforms, and then you'd fill the temp buffer with more data. You're adding another read where you don't need it, that or you'd always be a frame behind.
That depends on how the aiming is done. IK solutions need not necessarily be implemented the traditional way. Aiming, and any other animation that benefits from IK, can itself be implemented by blending (perhaps static) animations. Put this animation at a higher priority, and there is no need to "post-process".
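What I mean by "higher priority" could be sketched like this (purely illustrative names and structure, assuming one animated channel per layer): the aim pose is just another layer that is blended in on top of lower-priority layers, so no separate IK pass reads back the already-updated transforms.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical layer: one animated channel (e.g. a joint angle) with a
// blend weight and a priority. Higher priority is applied later, on top.
struct Layer {
    int   priority;
    float weight; // 0..1 blend weight of this layer
    float value;  // the channel value this layer wants to impose
};

// Apply layers in ascending priority; each layer lerps the accumulated
// result toward its own value. An "aim" layer at high priority simply
// overrides the base locomotion pose -- no post-process readback needed.
float resolveLayers(std::vector<Layer> layers, float base)
{
    std::sort(layers.begin(), layers.end(),
              [](const Layer& a, const Layer& b) { return a.priority < b.priority; });
    float result = base;
    for (const Layer& l : layers)
        result = result * (1.0f - l.weight) + l.value * l.weight;
    return result;
}
```

With this, the aim contribution is computed in the same blending pass as everything else, which is the point of the argument above.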
Quote:Original post by godmodder
Quote:Original post by RobTheBloke
It's not done for efficiency, it's done to make the anim system re-usable and improve build times. Doing it the other way requires the anim system to know about bones, meshes, lights, materials, blend shapes, and anything else it might want to animate. Ultimately that breaks modularity and makes the system a PITA to work with.
Our system doesn't need to know anything about those specifics either. We just need to pass a generic scenegraph node to the animation system, which could represent almost anything.
In fact, the scenegraph node here is nothing more than a scope for name resolution. As such, its purpose is to restrict the name resolution to a defined set of possible outcomes. This is done for both efficiency and re-usability. One can pass a specific NameResolver if wanted; it is even possible to drop the node entirely if one absolutely wishes to do so.
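A minimal sketch of what I mean by the node being "a scope for name resolution" (all identifiers are illustrative, not from an actual engine): the animation system binds tracks to named channels, and the node only limits which names are visible, so the system stays ignorant of what the node actually is.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// A scenegraph node reduced to the only thing the anim system cares
// about: a scope mapping channel names to animatable float values.
struct Node {
    std::unordered_map<std::string, float> channels;
};

// Resolve a channel name within one node's scope. Returns nullptr when
// the name is unknown, so a track can fail to bind gracefully. The anim
// system never learns whether the node is a bone, a light, or a material.
float* resolveChannel(Node& scope, const std::string& name)
{
    auto it = scope.channels.find(name);
    return it == scope.channels.end() ? nullptr : &it->second;
}
```

The re-usability claim follows directly: adding a new animatable object type only means publishing new names into a scope, not teaching the animation system a new class.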
In the meantime I've read some material on animation trees. You remember that little discussion about morpheme at the beginning of this thread? With the concept of animation trees in mind, I no longer wonder that morpheme provides 30 or so blend modes (that isn't meant in a negative sense). It also explains RobTheBloke's insistence on masking. I see two big differences: concentration of animated values vs. distribution of generally modifiable values, and the splitting of data and operations vs. "class" building. It's not surprising that exactly these two issues are currently the subject of discussion in this thread.
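For completeness, here is how I understand masking in such a blend node (again my own sketch, not morpheme's API): the mask selects which channels of the flat float buffer participate in the blend, so e.g. an upper-body clip can be layered over full-body locomotion.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>

// One animation-tree blend node with a per-channel mask: masked-in
// channels take the blended value, masked-out channels keep input a.
// This is what lets one clip drive only a subset of the skeleton.
void maskedBlend(const float* a, const float* b, const bool* mask,
                 float t, float* out, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        out[i] = mask[i] ? a[i] * (1.0f - t) + b[i] * t : a[i];
}
```

Note that masking fits the "floats throughout" model naturally, which may be part of why RobTheBloke insists on it: it is just one more per-component operation on the same buffers.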