Character Animation in OpenGL

Started by
4 comments, last by psyjax 20 years ago
Hey All, I have been creating a game engine using OpenGL for a while. It's going fairly well. I have written a model loader that loads a custom model format, I have shadows, lighting, tons of other snazzy features, and things are looking great. I pull in wonderful framerates... whatever... I only say this because I'm proud of my little creation, it being my first serious OpenGL endeavour since I learned the API :D

Anyway, one thing I don't know how to do is any sort of model animation, and I was hoping some of you guys might point me in the right direction so that I may learn more about it. I have been toying around with various approaches. I was considering saving my model in a series of "frames", that is, intermediate poses, and cycling through them during animation as you would a 2D sprite. Is this a simple and feasible way of doing this? What is the best way to get your feet wet in what is obviously a very complicated part of 3D programming? Thank you very much.
quote:
I was considering saving my model in a series of "frames", that is, intermediate poses, and cycling through them during animation as you would a 2D sprite. Is this a simple and feasible way of doing this?


Certainly. This is the form that the Quake2 MD2 animations take. With this method (sometimes called vertex key-framing) you can use interpolation to smoothly transition from one frame to the next. Keyframes can be generated for, say, every 4 or 5 game display frames, then an interpolant between 0 and 1 is used to smoothly morph from the previous keyframe to the next keyframe. This eliminates the jerkiness that is inherent to key-framed animation (very visible with 2D sprite graphics, as interpolation is not feasible). To start with, this is a good animation route to take.

Another (more advanced) method is skeletal animation. In vertex keyframing, memory usage can get out of hand for large models and long animations, as each keyframe requires a local copy of the vertex data. In skeletal animation, a model is constructed and animated as a "skin" attached to a hierarchical frame or skeleton of "bones". Each vertex is transformed by a bone or a set of bones with weighting factors applied. Bones are represented as quaternions or some other form of rotation, a location relative to the parent hierarchy, and an endpoint to which any children in the chain are attached. With skeletal animation, instead of storing each vertex per keyframe, you can merely store the keyframe's bone orientations and maintain one reference copy of the mesh in its rest pose. Vertex transformations are performed on the fly, generated by transforming the reference copy of the mesh either into a temporary buffer or with the use of a vertex shader, and displaying the transformed geometry.
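To make the "set of bones with weighting factors" idea concrete, here is a minimal sketch of linear blend skinning. All of the names (Vec3, BoneTransform, skinVertex) are illustrative, not from any particular engine, and the bone transform is reduced to a pure translation for brevity; a real implementation would compose a full rotation (quaternion or matrix) down the hierarchy.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// A bone's final pose transform, already composed down the hierarchy.
// Translation only in this sketch; a real engine stores a rotation too.
struct BoneTransform { Vec3 translation; };

// Transform a rest-pose vertex by its weighted set of bones.
// The weights are assumed to sum to 1.
Vec3 skinVertex(const Vec3& restPos,
                const std::vector<BoneTransform>& bones,
                const std::vector<int>& boneIndices,
                const std::vector<float>& weights)
{
    Vec3 out{0.0f, 0.0f, 0.0f};
    for (std::size_t i = 0; i < boneIndices.size(); ++i) {
        const BoneTransform& b = bones[boneIndices[i]];
        const float w = weights[i];
        // Apply this bone's transform, scaled by its influence weight.
        out.x += w * (restPos.x + b.translation.x);
        out.y += w * (restPos.y + b.translation.y);
        out.z += w * (restPos.z + b.translation.z);
    }
    return out;
}
```

A vertex near the elbow, for example, might be weighted 0.5/0.5 between the upper-arm and forearm bones, so it ends up halfway between where each bone alone would put it.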

Skeletal animation offers additional benefits besides just decreased memory usage. With vertex key-framing, interpolation from one keyframe to the next is fastest accomplished with linear interpolation. However, in the case of mesh components that are hierarchical in nature and constrained, this can result in visual artifacts. Instead of, say, an arm rotating around an elbow joint, the mesh instead linearly morphs from one orientation to the next, causing shortening of the arm over the course of the movement. With keyframes that are spaced closely together this sort of shortening is not obvious. But in the case of rapidly moving animation with keyframes far apart, it becomes more apparent. Skeletal animation interpolates the orientation of the mesh sections, not their final positions, so that the arm is seen to rotate around the elbow joint rather than trying to morph "through" it.

With skeletal animation, it is also easier to compose different animation sequences together. For instance, it is easy to generate an animation sequence for walking, which modifies only the leg bones and perhaps the torso to a slight degree. Other animations might affect only the head, for head turning or gaze tracking, or the arms such as when a sword or other weapon is swung. With vertex key-framing, character multi-tasking in this manner is limited, but with skeletal animation these different animations which affect different parts of the body can easily be combined together, so that the character is seen to walk, turn his head, and swing his sword all at once.
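The composition trick works because each animation clip only needs to drive the bones it cares about. A minimal sketch, with illustrative names (Pose, Animation, applyAnimation) and a single angle standing in for a full bone orientation:

```cpp
#include <cassert>
#include <map>

// bone index -> angle (radians); a stand-in for a full orientation.
using Pose = std::map<int, float>;

// Each clip stores orientations only for the bones it animates.
struct Animation {
    std::map<int, float> boneAngles;
};

// Applying a clip overwrites just its own bones, leaving the rest
// of the pose untouched -- so clips driving disjoint bone sets
// (legs for walking, neck for head turning) compose freely.
void applyAnimation(Pose& pose, const Animation& anim)
{
    for (const auto& kv : anim.boneAngles)
        pose[kv.first] = kv.second;
}
```

Applying a walk clip (leg bones) and then a head-turn clip (neck bone) to the same pose yields a character doing both at once, which is exactly the multi-tasking that is hard to get from whole-mesh keyframes.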

Skeletal animation also allows the possibility of realistic physics, such as ragdoll physics and the like, wherein the members of a body are subject to real-time effects of impact, gravity, application of force, etc... Animations can then be generated that are not limited to what the designer creates in an animation package, but instead follow more (hopefully) realistic sequences created by the laws of the physics system applied.

As is to be expected, these latter methods can get to be extremely complex, and are not always suited to the application. Skeletal animation itself is far more complex, and thus more resource intensive, than simple vertex interpolation. Shaders and vertex programs can offload the grunt work of vertex transformation to the GPU, as long as the hardware supports them. Realistic physics, too, adds its complications, though some of these difficulties can be overcome by using pre-made physics packages such as OpenDE or Tokamak or others, which can handle many of the tricky calculations and allow you to concentrate on the big picture.

But not all games require realistic, powerful physics or modelling simulations. If you are willing to constrain animation to pre-packaged movements, ala traditional 2D animation, vertex key-framing can be more than sufficient for your needs. Even in fantastically modelled simulations, simple vertex morphing can have its place, in animations not so well suited to hierarchical skeletal arrangement.

Golem
Blender--The Gimp--Python--Lua--SDL
Nethack--Crawl--ADOM--Angband--Dungeondweller
VertexNormal,

Thanks a lot for that excellent, well thought out, and articulate reply. Because of your thorough explanation, I realize that linear interpolation sounds like the way I want to go.

My game is a turn-based RPG, so it doesn't need fancy physics or anything like that. My models are not incredibly complex either, so I don't think the memory usage will be too over the top.

I found some tutorials using .md2 models over at DigiBen. There is a good explanation of how it's done in the header file. Though my game doesn't use MD2s, but rather a much simpler format I devised myself, the tutorial still gives me a good enough understanding to hack through it for my own needs.

However, if anyone has any advice, links, or relevant suggestions concerning linear interpolation, please, by all means, let me know.

Thank you very much.
Linear interpolation is really very simple.

Say you have two points: PointA and PointB. Each is a vertex in a mesh (x,y, and z components). The standard linear interp. function is:

c = a + t*(b-a)

Where t is a number in the range [0, 1].

So, if PointA=(5, 2, 12) and PointB=(18, 4, 20) then we can find any location in between using the above formula. For instance, at t=0.5 (the exact midpoint between the two points):

PointC = PointA + t*(PointB-PointA);

PointC.x = 5 + 0.5*(18-5) = 11.5
PointC.y = 2 + 0.5*(4-2) = 3
PointC.z = 12 + 0.5*(20-12) = 16

So, PointC at t=0.5 is equal to (11.5, 3, 16)
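The worked example above translates directly into code. A minimal sketch, with Vec3 as an illustrative vertex type:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// c = a + t*(b - a), applied per component.
Vec3 lerp(const Vec3& a, const Vec3& b, float t)
{
    return Vec3{ a.x + t * (b.x - a.x),
                 a.y + t * (b.y - a.y),
                 a.z + t * (b.z - a.z) };
}
```

With PointA = (5, 2, 12), PointB = (18, 4, 20), and t = 0.5, this returns (11.5, 3, 16), matching the arithmetic above.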

Now, each keyframe of the animation is going to consist of an array or list of vertices for the entire mesh for that frame. An animating object needs to keep track of two keyframes: Previous and Next. It will also track a current value for t, which will increase in small increments each time the game logic updates. Each object will also track a Previous and Next Location as well.

Now, when you render the scene you need to generate a current "snapshot" of the object as it stands at that point in time, using its t value. You apply the linear interpolation equation above to PreviousLocation and NextLocation to generate an intermediate location -- the object's location at that point in time. By the same token, you apply the linear interpolation equation to each vertex in PreviousFrame and NextFrame to generate the in-between frame for time=t.

t can be updated and manipulated to increase in increments as fine as you need or as the frame-rate will allow. Each time it is incremented, you check to see if it goes greater than 1. If so, then you need to wrap it back around by subtracting one, then advance your animation and positional data. PreviousFrame is set to NextFrame, and a new NextFrame is generated to continue the animation. PreviousLocation is set to NextLocation, and a new location is generated for NextLocation. And so on, and so on.

The way I do it is I space all of my key-frames a constant number of frames apart (say, 4 logic frames per keyframe). This way, each time the game logic updates, I can advance t by 0.25 (1 / UpdateRate, or 1/4 for UpdateRate=4) to generate the in between frames. So, in sequence the render will draw frames at t=0 (or, PreviousFrame), t=0.25, t=0.5, t=0.75, t=1.0 (or, NextFrame). At t=1.0, I advance a frame and wrap t back around to 0 to start again.
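The advance-and-wrap logic described in the last two paragraphs can be sketched like this; AnimState and tick are hypothetical names, and updateRate is the number of logic frames between keyframes:

```cpp
#include <cassert>
#include <cmath>

// Per-object animation state: which two keyframes we are between,
// and how far along (t in [0, 1)).
struct AnimState {
    int   prevKeyframe = 0;
    int   nextKeyframe = 1;
    float t            = 0.0f;
};

// Called once per logic update.
void tick(AnimState& s, int numKeyframes, int updateRate)
{
    s.t += 1.0f / static_cast<float>(updateRate);  // e.g. +0.25 for rate 4
    if (s.t >= 1.0f) {
        s.t -= 1.0f;                       // wrap, keeping any remainder
        s.prevKeyframe = s.nextKeyframe;   // advance the animation
        s.nextKeyframe = (s.nextKeyframe + 1) % numKeyframes;
    }
}
```

Here the keyframe indices simply cycle through a looping sequence; a real state machine would also handle one-shot animations and transitions between clips.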

There are other methods for interpolation besides linear. Linear interpolation, as the name implies, can determine arbitrary points along a straight line between point a and point b-- thus the flattening or shortening of rotating arms and the like. Other forms of interpolation can approximate a curve between points rather than a straight line, thus possibly resulting in smoother, more realistic animation with less flattening and distortion.

Cosine interpolation is calculated thus:
  float ft = t * 3.1415927f;
  float f = (1 - cos(ft)) * 0.5f;
  return a*(1-f) + b*f;


This form of interpolation connects two points with an approximation of a curve. I say approximation, because that is all it is. A true curve takes into account more information to generate a smoother path. Such a true curve might be cubic interpolation:
  Given: Points a, b, c, d as control points (keyframes) on the curve

  Point e = (d - c) - (a - b);
  Point f = (a - b) - e;
  Point g = c - a;
  Point h = b;
  return e*t*t*t + f*t*t + g*t + h;


This gives a much smoother, more continuous curve connecting the points, but at the cost of having to remember 4 points: two Previous and two Next points. This may not be suitable for animation which can change state between one frame and the next, so it probably is not appropriate for vertex keyframing.

But, like I said before, if you generate your key-frames close enough together, any distortion from linear interpolation can be minimized so as to be nearly undetectable. More complex interpolation requires more processing time, which can have an effect on framerate when you need to do several hundred thousand interpolations per frame for a lot of objects.




If you structure your game loop correctly, it is possible to not only interpolate from one key frame to the next by frames (ie, advancing t by 0.25 each time, or whatever); it is also possible to go even finer, interpolating between even these intermediate frames to a degree allowed by the video frame rate.

For example, say I am running my simulation logic updating at 25 frames per second, advancing the game logic one step every 40 ms. That means that every 40 ms, t for all object animations advances by 0.25 (assuming, of course, that is the update interval I chose). 25 fps is great for complicated logic, as it gives a decent CPU plenty of time to calculate a frame and do all of the AI, but the drawback is it locks the video frame rate to 25 fps as well. t only increments every 40ms, so in the meantime we are just drawing the same exact scene over and over until t increments again. Our video card might be capable of 900 fps, but the game locks it to 25 by forcing it to redraw the same scene over and over for most of the time.

What we want to be able to do instead is interpolate from the Previous keyframe to the Next keyframe to generate the current frame, then advance the animation even further based on how far into the next frame we are. It's a little complicated, so I won't describe it in depth here. The implementation I use is pretty much exactly as Javier Arevalo details in his Tip of the Day at Flipcode.com. In his algorithm, he performs game logic updates at a fixed time step, and in the meantime calculates an interpolation factor (PercentWithinTick) to calculate how far along we are between this tick and the next. All rendering functions can use this factor to further interpolate animation and smooth things out.

Consider the case where we advance t for a model by 0.25 each tick. Now, say at a given point in time the loop calculates PercentWithinTick to be 0.5, meaning we are halfway to the next update tick. We can apply this PercentWithinTick to the update rate (PercentWithinTick * 0.25) and add this to an object''s current t value to account for how far into the next tick we are. This has the effect of smoothing out our 25 fps video framerate to take advantage of all the fps the card can pile on. 25fps jerks or steps in animation are interpolated and smoothed. It works very nicely, but can be a little difficult to understand at first.
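The calculation in the last paragraph is small enough to show directly. A sketch of the idea, with illustrative names (renderT and its parameters are mine, not from Arevalo's article):

```cpp
#include <cassert>
#include <cmath>

// Effective t for rendering: the object's t from the last logic
// update, pushed forward by how far we are into the current tick.
float renderT(float logicT,      // object's t at the last logic update
              float tPerTick,    // e.g. 0.25 if keyframes are 4 ticks apart
              float msSinceTick, // time elapsed since the last logic update
              float tickMs)      // fixed logic timestep, e.g. 40 ms
{
    float percentWithinTick = msSinceTick / tickMs;  // in [0, 1)
    return logicT + percentWithinTick * tPerTick;
}
```

With logic t = 0.25, a per-tick step of 0.25, and 20 ms elapsed of a 40 ms tick, the renderer interpolates at t = 0.375 instead of redrawing the same t = 0.25 frame, which is what smooths the 25 fps logic rate up to whatever the video card can manage.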

Anyway, I hope this helps and I hope my wandering all over the place hasn't confused you. Good luck and have fun.


Golem
Blender--The Gimp--Python--Lua--SDL
Nethack--Crawl--ADOM--Angband--Dungeondweller
VN,

I just had to reply with a tremendous thank you!

Your fantastic post totally got my model animated and walking around my 3D world!

Thank you so much.
Superb post, Vertex Normal. Very informative.

This topic is closed to new replies.
