Animation of arbitrary data

Started by
6 comments, last by haegarr 11 years, 4 months ago
I'd like to make most things in my engine animatable. This could be anything from the time input to a skeletal animation, to the x part of some vector.

My initial thought was to store a pointer reference to the item which can be animated, and link it to some kind of delegate which will apply the animation based on its type. For instance, if I had a vector type, LVec3, in my engine and I wanted to apply some kind of sine wave animation to its x component, I would do the following:

1) Create a new AnimationState object (this contains properties such as animation length, current time, speed, loop, etc), and store the address of the x component in a void pointer within the state object.

2) Create a new delegate (or function pointer) which takes the element (the data pointed to by the void pointer) and the animation state, and sets the element's value from a dataset based on the animation properties in the animation state. In this case, the function would simply set the value to a point on a sinusoidal curve.

3) Register the new animation state object with the animation controller (a manager which goes through each registered animation and updates it).

When the animation controller iterates and comes to this animation state, it calls the delegate method, which calculates the value; the physics simulation then just uses the vector normally.
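A minimal sketch of that scheme as described above (all names are illustrative, not the engine's real API; the real LVec3 is not shown):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Holds the animation properties plus the address of the animated element.
struct AnimationState {
    float time   = 0.0f;   // current time
    float length = 1.0f;   // animation length in seconds
    float speed  = 1.0f;
    bool  loop   = true;
    void* target = nullptr;                           // address of the animated element
    void (*apply)(void*, AnimationState&) = nullptr;  // the delegate
};

// Delegate: writes one sine-wave cycle per `length` seconds into a float
// (e.g. the x component of a vector).
inline void sineDelegate(void* element, AnimationState& s) {
    *static_cast<float*>(element) = std::sin(s.time / s.length * 6.2831853f);
}

// The manager that iterates every registered animation and updates it.
struct AnimationController {
    std::vector<AnimationState> states;

    void update(float dt) {
        for (AnimationState& s : states) {
            s.time += dt * s.speed;
            if (s.loop && s.time > s.length)
                s.time = std::fmod(s.time, s.length);
            s.apply(s.target, s);
        }
    }
};
```

Note that the void pointer erases the type, so nothing stops you from registering a double or an int against a float delegate; that weakness comes up again in the replies.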

Am I complicating things? I feel sure there are much better ways to do this and at the moment I'm not sure how flexible it would be. Any thoughts?

Thanks
Main question is: for what usage case?
Of course, if you want a generic vector to be animated, you'll have to provide that vector with a name.
And since, at problem solving stage, every vector might need to be animated, every vector needs to be uniquely identified.

It appears to me that at this abstraction level it does not make much sense.

How would you keyframe a mesh for example? Doing a func-call per-vertex isn't going to work.

Void pointers. Think about it again. When you're writing a [font=courier new,courier,monospace]reinterpret_cast[/font] (or avoiding it with a raw [font=courier new,courier,monospace]memcpy[/font]) you're setting yourself up for potential trouble, the kind of pain that established standards/documentation exist to avoid. A year from now, you're going to have forgotten those casts.

Notably, stuff that is static remains static in most engines I've looked at. World mesh is typically static, and there are very good reasons why destruction / dynamic content is still limited.

How flexible would it be?
Probably a lot; the question is whether this flexibility is "flexible" in the current direction. There's no doubt wheels are crucial to ground transportation, but they are much less relevant on submarines. My best bet is that you'll end up with a lot of unused flexibility in a dangerous form.

The physics simulation does not "just use the vector normally". Physics simulations have strict requirements on their input. In general, you cannot just update a position as you might expect. Are you aware of the details involved?

Previously "Krohm"

[quote]Main question is: for what usage case?[/quote]
I'm talking mostly about things like position, scale and transform of objects in the game. It would be nice when designing my levels to say "I'd like that satellite dish to spin on the y axis once every 3 seconds".
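For that satellite-dish case, the whole animation reduces to a one-line function of time. A hypothetical helper (the name and default period are mine, not the engine's):

```cpp
#include <cassert>
#include <cmath>

// Yaw angle in radians for one full revolution every `period` seconds;
// the result would be fed into the dish's y-axis rotation each frame.
inline float spinYaw(float timeSeconds, float period = 3.0f) {
    return std::fmod(timeSeconds, period) / period * 6.2831853f;
}
```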

[quote]How would you keyframe a mesh for example? Doing a func-call per-vertex isn't going to work.[/quote]

This part is already complete: my skeletal animation code just takes a few parameters to produce the necessary pose from keyframe interpolation, including full blending; none of the bone animation data would be exposed for generic animations. You could, of course, animate those parameters of the animation for some interesting effects (I doubt that's a use case, though).

[quote]Void pointers. Think about it again. When you're writing a reinterpret_cast (or avoiding it with a raw memcpy) you're setting yourself up for potential trouble, the kind of pain that established standards/documentation exist to avoid. A year from now, you're going to have forgotten those casts.[/quote]
Yes, now you've written it out, you're right. I don't use void pointers anywhere else in my engine (or at work!), I'm pretty strict with smart pointers - it's very true that if it doesn't feel right, it isn't and this didn't feel right at all. I'll rethink it.

[quote]The physics simulation does not "just use the vector normally". Physics simulations have strict requirements on their input.[/quote]
Sorry, by physics simulation, I just meant my game logic. So in the example above with the satellite dish, that wouldn't have much in the way of physics interaction other than perhaps collision. By using the vector normally, I just meant the drawing code for 'static' non-physics based objects would just use its position/transform vector.

This whole generic animation idea would mostly be for static things that don't interact with the physics environment too much, like a few bits of level eye candy.

[quote]Are you aware of the details involved?[/quote]
Not so much really, which is why I'm on here getting great advice from you guys. I've come back to my engine after a long break and I'm just playing with the areas I want to work on next. I generally work on the bits that interest me, because that's what I'm doing it for. I have a game idea in mind, so I'm not just writing engines instead of games (let's not get started on that again...).

With regard to physics integration, I assume I can pass the physics engine my intended position for an object, rather than have it tell me where it thinks the object should be? Take, for instance, a shooting range with targets that move from left to right over a controlled range. This is the sort of thing I would expect to use this generic animation for. In my level designer, I want to be able to choose an object and animate some of its properties, in this case the x or z component of the target's position. I can supply a range of data and tell the engine to use only that specific data range, or to interpolate between the values (compression of sorts), how fast it should be, whether it should loop, invert, etc. I would also like these targets to take collision hits for decals, etc.
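The left-to-right target motion described above could be sampled from a simple ping-pong curve, with the game pushing the result into the physics library each frame while the body stays kinematic. A minimal sketch under those assumptions (all names are mine):

```cpp
#include <cassert>
#include <cmath>

// Ping-pong a value between `lo` and `hi` once every `period` seconds.
// A real engine would sample an authored curve asset instead; this is the
// loop/invert behaviour from the post in its simplest closed form.
inline float pingPong(float t, float lo, float hi, float period) {
    float phase = std::fmod(t, period) / period;                    // 0..1
    float tri = phase < 0.5f ? phase * 2.0f : 2.0f - phase * 2.0f;  // 0..1..0
    return lo + (hi - lo) * tri;
}
```

Each frame the target's x would be set to `pingPong(elapsed, leftX, rightX, period)` before the collision step runs.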

Supplying a range of data might be quite neat, because I can play around with curves in the front end and create some interesting effects. When the game is running, I'm assuming I will have to create/register collision hulls/planes, etc., for the targets within the physics library, but unless I specify that the physics library should control their movement, I'm free to do that myself?

I haven't looked into physics libraries yet so forgive my lack of understanding.

Thanks for your response, much appreciated.
This throws me back. Way back.

Let me start with some critique. Modern game engines have abandoned the strict scene hierarchy model in favor of composition (You sound like you have a pretty strict hierarchy), where any "entity" is a game object. The scene graph is a list of game objects. And any game object can have a script attached to it. A script isn't specialized, it's just a script with an update method. The script does whatever it wants to the game object (such as apply a sine wave animation).
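A minimal sketch of that composition model, with every name hypothetical:

```cpp
#include <cassert>
#include <cmath>
#include <memory>
#include <utility>
#include <vector>

// A game object is just data; behaviour comes from attached scripts.
struct GameObject {
    float position[3] = {0.0f, 0.0f, 0.0f};
};

// A script isn't specialized: it's just an update method.
struct Script {
    virtual ~Script() = default;
    virtual void update(GameObject& obj, float time) = 0;
};

// This particular script happens to bob its object on a sine wave.
struct SineBobScript : Script {
    float amplitude = 1.0f;
    void update(GameObject& obj, float time) override {
        obj.position[1] = amplitude * std::sin(time);
    }
};

// The scene graph is a flat list of (object, script) pairs.
struct Scene {
    std::vector<std::pair<GameObject*, std::unique_ptr<Script>>> entries;

    void update(float time) {
        for (auto& e : entries)
            e.second->update(*e.first, time);
    }
};
```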

With that being said, back in the olden days we used to have Interpolator objects. At some point in your hierarchy you have a common base class that has a transform. The interpolator object stores a reference (pointer) to this, then applies its animation. You can interpolate linearly, on a curve, over custom data, whatever you want. If you want everything to be animatable, you will probably need to make specialized interpolators, like a VectorInterpolator, which is bulky and not fun to work with. But if that's your only option, by all means go for it.
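Such an Interpolator object might look roughly like this, assuming a bare Vec3 and linear interpolation only (looping, curves, etc. omitted; all names illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Typed interpolator: no void pointers, but one of these is needed per
// animatable type, which is the bulk the post above complains about.
struct Vec3Interpolator {
    Vec3* target = nullptr;   // pointer to the animated transform member
    Vec3  from   = {0, 0, 0};
    Vec3  to     = {0, 0, 0};
    float duration = 1.0f;

    void update(float t) {    // t = seconds since the animation started
        float a = duration > 0.0f ? t / duration : 1.0f;
        if (a > 1.0f) a = 1.0f;   // clamp at the end
        target->x = from.x + (to.x - from.x) * a;
        target->y = from.y + (to.y - from.y) * a;
        target->z = from.z + (to.z - from.z) * a;
    }
};
```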

Hope this helps.
[quote]You sound like you have a pretty strict hierarchy[/quote]

It's interesting you've said that, my setup is very heavily component/entity based, I spent a fair bit of time working on this and making things nice and simple. Out of interest, what made you think that?

Orientation (position, rotation, scale, etc.) just gets added as a component to an entity, and it's the vector(s) of that component which can be animated. My first post was just trying to consider ways of getting a handle on these vectors to generically animate them. I was going to just ask how it's done, but I feel that as people make the effort to help, it's only fair I show I've put at least some thought into my questions.

But thanks for your post, I like the script idea.
:) Glad to hear it! I haven't heard of anyone needing a specific vector animator since the hierarchy days; that was the assumption I was under.

In my opinion, the engine is responsible for providing an easy way to set object transforms. It's also responsible for being able to play animations associated with content (i.e. things that come down the pipeline). But animating objects on paths is not the engine's responsibility; that's up to the content programmer.

Best of luck!
Another thing to consider is the possibility of animating u,v coordinates of a texture to create cheap effects, like water flowing or flames shimmering.
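On the CPU side that effect is just a wrapped offset per frame; the result would typically be uploaded as a shader uniform. A tiny sketch with an assumed name:

```cpp
#include <cassert>
#include <cmath>

// Scroll the u coordinate over time and wrap it into [0,1) so a tiling
// texture repeats seamlessly (cheap flowing-water / shimmering-flame look).
inline float scrollUV(float u, float speed, float dt) {
    u += speed * dt;
    return u - std::floor(u);
}
```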

[quote]Of course, if you want a generic vector to be animated, you'll have to provide that vector with a name.
And since, at problem solving stage, every vector might need to be animated, every vector needs to be uniquely identified.[/quote]

Although this is true, the unique identification doesn't need to be explicit.

If one requests a Placement component (which is my name for a combination of position and orientation) for the address of the "position", or for the address of the "position.x" element, the component can resolve this with its knowledge of its own internal structure.
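One way such address-based resolution might look, with the layout and names purely illustrative:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// The component maps a requested address back to a named channel using
// knowledge of its own layout, so callers never need explicit names.
struct Placement {
    float position[3];
    float orientation[4];   // e.g. a quaternion

    std::string resolve(const void* addr) const {
        std::ptrdiff_t off = static_cast<const char*>(addr) -
                             reinterpret_cast<const char*>(this);
        std::ptrdiff_t oriOff = offsetof(Placement, orientation);
        if (off >= 0 && off < oriOff)
            return "position[" +
                   std::to_string(off / (std::ptrdiff_t)sizeof(float)) + "]";
        if (off >= oriOff && off < (std::ptrdiff_t)sizeof(Placement))
            return "orientation[" +
                   std::to_string((off - oriOff) / (std::ptrdiff_t)sizeof(float)) + "]";
        return "unknown";
    }
};
```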

However: Animating vectors without considering their semantics is dangerous. E.g. animating a point like a direction (or looking at scaling factors as a vector in the mathematical sense) will be problematic. So one has to distinguish things by type and semantics to build up a robust system.
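That distinction can be made explicit in the data itself; a minimal sketch with illustrative names:

```cpp
#include <cassert>

// Tag each animatable channel with its semantics so an animation authored
// for a point can never bind to a direction or to scaling factors.
enum class Semantics { Point, Direction, Scale };

struct Channel {
    Semantics semantics;
    float     value[3];
};

// Binding is only allowed when the semantics match.
inline bool canBind(Semantics animExpects, const Channel& c) {
    return animExpects == c.semantics;
}
```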

This lack of robustness is one reason why I, personally, have dropped the idea of being able to animate everything. Nowadays I'm using defined controllers like Parenting, Positioning, Aiming, Aligning, ... up to more complex ones like Grabbing and Animating (perhaps I should mention that Animating is a time dependent thing, while the others are automatisms depending on other Placements and perhaps a state) for controlling Placement. This list is to be expanded for other targets under control, of course.

