
About this blog

Progress of my projects

Entries in this blog

MilchoPenchev

2D Skeleton Woes

About two months ago, I started writing a 2D game. Given that my previous work was on 3D deformable terrain, I figured a 2D game would be a nice change of pace and give me less hassle. I was right... mostly.

Character animation in 3D is not a simple task. There's some great software out there to help you animate it; heck, two years ago even I wrote a simple character animation program that could automatically attach a mesh to a skeleton. But enough nostalgia!

It seems that skeleton animation in 2D should be easier. You only have one simple rotation angle to worry about, and no need to account for the gimbal lock problem using those pesky yet mathematically beautiful quaternions.

Here's a simple 2d texture skeleton in my game:
Tld91Vs.jpg
Seems straightforward enough to animate. You don't necessarily have to worry about vertex attachments; you can give each bone its own sprite and design the sprites so that they blend in together.

But you don't want to design your animations twice, do you? A walking animation should be playable both walking LEFT and walking RIGHT, so I need a way to easily flip animations. Unlike in 3D, where you can rotate your animation along some axis to orient it in the proper direction, in 2D you have to actually mirror the animation.

I'm cheap, so I decided to go for a cop-out - I'm only going to flip the sprites instead of the whole skeleton. Sound good? Yeah. But we can't just flip the image in the sprite - we have to take the actual sprite rectangle and mirror all its vertices along a certain axis. Since SFML doesn't support this, it was time to write my own CustomSprite class.
Still, not the hardest thing. The simple code for flipping a vertex horizontally, across a vertical line at x = m_axisIntersect:

sf::Vector2f CustomSprite::FlipHorizontal ( const sf::Vector2f &point ) const
{
    sf::Vector2f pt = point;
    pt.x = m_axisIntersect - ( pt.x - m_axisIntersect );
    return pt;
}

The variable m_axisIntersect specifies where the mirror line sits. The same method can be used for a vertical flip across an arbitrary horizontal line.
So, here are the results:
zpcduWI.jpg

Ok, the actual bones (which may be a little hard to notice - they're the thin blue lines) aren't flipped, but the sprite flip seems to have worked fine. The results look promising so far.

Except, I forgot - my character isn't just going to stand always oriented up. Due to the physics of the game, he will lean on slopes and corners. Here's an example:
1r8bXGG.jpg

So, wait, what happens if I use the direction flip on a slope? Well...
SKocLSK.jpg

Oh, right. I'm flipping across the axis-aligned line that passes through the center of the character, so of course - the program is doing exactly what I told it to do, even if what I told it to do was wrong.

It looks like I'm going to have to apply a mirroring along an arbitrarily oriented line now. Mirroring around an arbitrary line isn't that bad, though it's certainly more involved.
Suppose we have a line passing through a point p, about which you want to mirror. The basics are then (a small sketch of these steps follows the list):
1. Translate all points by the vector -p - so now the line passes through the global origin.
2. Rotate all points so that the line you want to mirror about is aligned with one of the axes.
3. Mirror around that axis using the same method above
4. Undo step 2
5. Undo step 1
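
For reference, here's roughly what those five steps look like for a single point (this is not my actual CustomSprite code - the function name and the angle parameter are just for illustration):

#include <cmath>
#include <SFML/System/Vector2.hpp>

// Mirror 'point' about the line that passes through 'p' at angle 'lineAngle' (radians).
sf::Vector2f MirrorAboutLine( const sf::Vector2f &point, const sf::Vector2f &p, float lineAngle )
{
    // 1. translate so the mirror line passes through the origin
    float x = point.x - p.x;
    float y = point.y - p.y;

    // 2. rotate by -lineAngle so the mirror line lies along the x-axis
    float c = std::cos( -lineAngle ), s = std::sin( -lineAngle );
    float rx = x * c - y * s;
    float ry = x * s + y * c;

    // 3. mirror across the x-axis
    ry = -ry;

    // 4. undo the rotation (rotate back by +lineAngle)
    float ux = rx * c + ry * s;
    float uy = -rx * s + ry * c;

    // 5. undo the translation
    return sf::Vector2f( ux + p.x, uy + p.y );
}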

While I was thinking about this, I realized that I actually have all my sprites on the model in local coordinates already - they store their positions relative to the model's origin, which is the center point through which the mirroring line will have to pass. And I'm already setting the model's orientation when I touch a slope, so I already have a function that rotates it.
In fact, I was setting the rotation like this:

float angle = atan2( -up.x, up.y ) * 180.f / (float)PI;
m_rootBone.SetRotation( angle );
However, I knew that when I mirrored the model, I could simply re-adjust the angle computed from the 'up' vector so that the model faced the right direction:

float angle = atan2( -up.x, up.y ) * 180.f / (float)PI;
if ( m_rootBone.GetSpriteFlip().first == CustomSprite::xAxisFlip )
{
    m_rootBone.SetRotation( 360.f - angle );
}
else
{
    m_rootBone.SetRotation( angle );
}
And the results now looked good:
3sBxHj3.jpg

Sure, the actual skeleton was nowhere near where the displayed sprites were, but that doesn't matter. The skeleton is only used to draw sprites, not for collision or any other purpose.

In the end, making a 2D skeleton was easier overall than a full 3D skeleton, but it had some challenges that you don't face when dealing with 3D.
MilchoPenchev
I'm posting this as an exercise/lesson; hopefully it's useful to someone.
Eight years after I started learning C++, I was still caught off guard by this.

Basically I had code like this (ignore the LOG_DEBUG - that was just put there when I was testing this):

struct ViewMember
{
    ViewMember(Widget *wid, int wei) : widget(wid), weight(wei) { }
    ~ViewMember() { LOG_DEBUG << "calling View Member destructor"; delete widget; }

    Widget *widget;
    int weight;
};

Now, in a class called View, I have a member widgetList of type std::vector<ViewMember>, and a method that adds to it:

View& View::AddWidget (Widget *toAdd, int weight)
{
    if (weight < 1)
        weight = 1;
    if (Contains(toAdd))
        return (*this); // won't add same widget again

    widgetList.push_back(ViewMember(toAdd, weight));
    needToReorganize = true;
    return (*this);
}

This code (barring typos I may have made in copying and re-arranging) compiles fine, without warnings or errors.

However, once I actually tried to access something in widgetList, I got a crash - it turns out my widgetList.widget was an invalid pointer... as if something had deleted it.

I went through this, and I've figured it out, but I thought I'd share since I think it's somewhat important/interesting.

[rollup="The Problem"]
When pushing back into a std::vector (or, in fact, any other STL container), the container stores a copy of the object you pass in, and the original - in my case a temporary - is then destroyed. The copy is a shallow copy, unless you've specified your own copy constructor.

So in my case above, pushing a ViewMember caused a copy to be created (which is fine), and then the original was destroyed, which called its destructor, which in turn deleted the pointer. Since the copy that was put in the vector was shallow, the pointer in that ViewMember is now pointing to invalid memory.
[/rollup]

[rollup="The Solution"]
1. Don't delete the pointer in the destructor like that. Seems like a fairly bad way to handle this - now you have to manually delete the widget pointer somewhere else.
2. Insert pointers into the std::vector instead of copied objects - pretty much the same problem, because now you have to manually delete the objects you inserted into the container.
3. Don't use a raw pointer! Use std::shared_ptr instead! The best way - no manual memory management needed, it can be copied without trouble, and it won't delete its pointer while something still references it. (A small sketch of this is below.)
[/rollup]
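
Here's roughly what option 3 looks like (a minimal sketch, not my actual View code - the Widget stub and the names are just placeholders):

#include <memory>
#include <vector>

struct Widget { /* ... */ };

struct ViewMember
{
    ViewMember(std::shared_ptr<Widget> wid, int wei) : widget(wid), weight(wei) { }
    // No destructor needed: the shared_ptr releases the Widget automatically
    // once the last ViewMember referencing it is destroyed.
    std::shared_ptr<Widget> widget;
    int weight;
};

std::vector<ViewMember> widgetList;

void Example()
{
    std::shared_ptr<Widget> w = std::make_shared<Widget>();
    widgetList.push_back(ViewMember(w, 2)); // the copy shares ownership - nothing gets deleted here
}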
MilchoPenchev
I've been working on water, slowly progressing forward. To those who might wonder: keeping track of, generating, storing and updating water when you're dealing with a 5 km planetoid (our current test planet) isn't quite straightforward. This is sort of a back-post, since I already had basic water in my last post, but this goes a bit more in depth.

The water simulation we went with is not like anything I've read about. There were several methods I considered before going with what we have now.

First was particle water.
The pros: Good water simulation. Realistic waves, breaking etc. possible.
The cons: Hard to extract surface. Impossible to keep track of all particles on any significant planetary scale.
This was obviously not going to work for us.


Height-field based water.
The pros: Significantly less storage. Easy surface extraction. Decent water simulation.
The cons: Breaking waves are harder (though not impossible). And how do you do height-field based water on a spherical planet? The answer: not well. You can either split it into 6 separate height fields, or try to create one with polar coordinates based on an even point distribution.
This is too bad, because back before we went for an actual planet, on a flat 2D terrain, this was my top choice.

What I went with:
Storing water in a 3d voxel density grid. Much like terrain.
Pros: Storage concerns were already figured out - storing can be done in the same datablocks where we store terrain - thus it's possible on a planetary scale.
Cons: It's not a very realistic simulation. It's hard to make huge waves.


There was also one other pro, which I didn't realize until later - updating water was made somewhat easier by the fact that I stored water on a grid. Of course, the grid is NOT oriented with the surface, yet due to the range of densities [-127, 127] it was possible to achieve a perfectly smooth water surface anywhere on the planet, despite the grid being all squirrelly.
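
To give an idea of why the density range matters: when the surface gets extracted, the zero crossing between two neighbouring samples can be placed fractionally between them rather than snapping to grid cells. Roughly (a simplified sketch; the real extraction works on whole datablocks, and which sign means water vs. air is just my convention here):

// Densities are stored in [-127, 127]; say negative = air, positive = water.
// Given two neighbouring samples along a grid edge with opposite signs,
// return t in [0, 1]: how far along the edge the water surface sits.
float SurfaceCrossing( signed char densityA, signed char densityB )
{
    return (float)densityA / (float)( densityA - densityB );
}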

Here are some screenshots of the apparently misaligned grid and the nonetheless smooth water surface:

vw_small.jpg vnw_small.jpg




And here is a video of the new water shader:

[media]
[/media]

And a video with the older shader, but the only video of water spreading in a huge hole.

[media]
[/media]
Update: video of the water on a small planet (200m radius)


[media]
[/media]



For more info, and a demo of the project, you can visit http://blog.milchopenchev.com.

Thanks.
MilchoPenchev
Here's a video of the work we've done on water, grass and detail textures. There's also a new build with these features on the blog: http://blog.milchopenchev.com

We haven't really had time to post any detailed description on the technicals behind the water or detail maps, but hopefully soon we will. As always, thanks for reading.

[media]
[/media]

MilchoPenchev

[size="2"]How we handled doing normal maps when also doing tri-planar texturing.

[size="2"][size=2]Note: this is a duplicate post from our project blog: http://blog.milchopenchev.com - the formatting may be a bit off, sorry.

[size="2"][size=2][size=2]For our texturing, we had no choice but to use tri-planar texture mapping - since we generate an actual a planet and the terrain can be oriented in any direction. Combine that with the fact that the terrain is diggable, we had to make the texture adapt to any angle. Triplanar mapping was the perfect solution.

Doing normal mapping on top of triplanar mapping may seem hard at first, but it's just a little harder than triplanar texture mapping.


triplanar_diagram_01.png



To obtain the final fragment color for triplanar mapping, you basically sample the same texture as though it was oriented along the three planes (See diagram on right).

Once you have a sample from each of these planar projections, you combine the three samples depending on the normal vector of the fragment. The normal vector essentially tells you how close to each plane the projection actually is. So if you have a mostly horizontal plane, the normal vector would be vertical and thus you would sample mostly from the horizontal projection.

This same principle can be used to compute the normal from a normal map. Instead of sampling the texture, you sample the normal map; the RGB color you get gives you the normal vector as seen in that plane. Then you combine these normals using the same weights you used to blend the three texture samples.


triplanar_diagram_02.png



Basically you obtain three normal vectors, one on each plane, and each having a certain coordinate system that is aligned with the texture on the side.

In the picture on the right, the red, green and blue lines are the axes of each projection of the texture, while the dark purple is a sample normal vector. You can imagine that the closer the fragment's normal is to a plane's normal, the more it samples from that plane. One difference from plain texture mapping is that when the fragment's normal is close to a plane's normal but facing the opposite direction, you have to reverse the normal map's result.

This is what the code for obtaining the normal of one texture from its three normal projections looks like in our terrain shader:

vec4 bump1 = texture2DArray(normalArray, vec3(coordXY.xy, index));
vec4 bump2 = texture2DArray(normalArray, vec3(coordXZ.xy, index));
vec4 bump3 = texture2DArray(normalArray, vec3(coordYZ.xy, index));

vec3 bumpNormal1 = bump1.r * vec3(1, 0, 0) + bump1.g * vec3(0, 1, 0) + bump1.b * vec3(0, 0, 1);
vec3 bumpNormal2 = bump2.r * vec3(0, 0, 1) + bump2.g * vec3(1, 0, 0) + bump2.b * vec3(0, 1, 0);
vec3 bumpNormal3 = bump3.r * vec3(0, 1, 0) + bump3.g * vec3(0, 0, 1) + bump3.b * vec3(1, 0, 0);

return vec3(weightXY * bumpNormal1 + weightXZ * bumpNormal2 + weightYZ * bumpNormal3);
[font="inherit"]Where weightXY, weightXZ and weightYZ are determined like so from the normal that's calculated at that fragment:[/font]
[font="'Courier New"]weightXY = fNormal.z;[/font]
[font="'Courier New"]weightXZ = fNormal.y;[/font]
[font="'Courier New"]weightYZ = fNormal.x;[/font][font=inherit]
[/font]
[font="inherit"]I realize that it sounds a bit counter-[/font]intuitive[font="inherit"] that we need the normal before we can calculate the per-fragment normals, but this normal can be simply obtained by other means, such as per-vertex normal calculations. (We obtain it through density difference calculations of the voxels)[/font][font="inherit"]Finally, to get good results you need an actual good normal texture. We only had time to create one (neither of us are graphics designers), so here's a video of the rock triplanar normal map, with a short day length on our planet:[/font]

[media]

[/media]



MilchoPenchev
The Procedural Editable Terrain project is just what it sounds like - a project to make an engine for terrain that is both procedurally generated and allows for editing functionality (lowering, raising, etc.).

The project evolved from my previous project for simply procedural terrain. One other person joined me on this project, and has been helping with various tasks on the project.

Currently the project has the basic functionality as described above, and some additional things. The major features currently are:

  • Persistent perlin-noise based terrain generation
  • Data stored in discrete voxels, allowing for modification of the terrain
  • Planet generation of variable radius, currently tested with a 100 km radius, theoretically supporting much larger
  • Planet-based biomes - desert, savanna, temperate, polar - distributed with variation across the planet
  • Basic physics from extracted terrain information - accurate collision detection with terrain
  • Different LODs; more powerful PCs comfortably support a half-kilometer viewing distance
  • Custom terrain shader supporting custom lighting, triplanar texturing and blending between any two textures
  • Custom sky shader displaying the moving sun and related effects.

So, in an effort to increase the number of people aware of my work to 4, I'm going to be posting some blog entries describing some ideas.

    You can read more technical stuff on the blog, where you can also download the current version of the program: http://blog.milchopenchev.com
    Currently, a lot of the options are not exposed to the user through a nice interface, but some options are accessible via a console (~).


    Here are some screen shots:

    screen_desert_biome.jpg screen_savanna_temperate_transition.jpg

    screen_temperate_1.jpg screen_temperate_5.jpg

    screen_temperate_6.jpg screen_temperate_7.jpg

    screen_temperate_8.jpg screen_temperate_9.jpg

    screen_temperate_10.jpg screen_temperate_11.jpg

    screen_polar_1.jpg screen_polar_2.jpg


MilchoPenchev
The terrain is finally starting to look like..well.. terrain. With normal-based texture coordinate assignment and 3d texturing, the results are promising. The updated version is also up on my site, available for download, and in fact, I encourage you to try it and would appreciate any feedback.

The main problem was how to assign texture coordinates properly. Part of the solution was to use the normal vector and the elevation of the vertex: if the terrain at a point is not approximately horizontal (based on the normal vector), then grass or snow can't hold on to it, so it gets the rock texture.

The height is just used for determining how the horizontally-oriented faces are textured - with snow or grass. However, if only the height is used, there's a clear cut-off where the texture switches, which looks unnatural. So I added a (consistent) variance of +/- 100 m to the height, and the clear-cut line dissolved.
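
Conceptually, the per-vertex choice looks something like this (a sketch of the idea, not the actual engine code - the enum, thresholds and the noise input are placeholders):

enum TextureId { TEXTURE_ROCK, TEXTURE_GRASS, TEXTURE_SNOW };

// normalDotUp: dot product of the vertex normal with the local 'up' direction
// elevation:   height of the vertex in meters
// variance:    a consistent per-position noise value in [-1, 1]
TextureId PickTexture( float normalDotUp, float elevation, float variance )
{
    const float kMaxSlope = 0.7f;   // steeper than this -> rock (placeholder threshold)
    const float kSnowLine = 800.f;  // placeholder snow-line elevation, in meters

    if ( normalDotUp < kMaxSlope )
        return TEXTURE_ROCK;                          // too steep for grass or snow to hold on
    if ( elevation > kSnowLine + variance * 100.f )   // the +/- 100 m variance dissolves the hard line
        return TEXTURE_SNOW;
    return TEXTURE_GRASS;
}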

The visibility remains ~1.5km, which is illustrated in the last screenshot. That mountain is 1km high.

I also increased the amount of detailed perlin noise (I use 2 perlin noise functions), which now generates some interesting looking formations, like the overhang and the archway seen below.

So, here are the screenshots:

textterrtest01.jpg

textterrtest02.jpg

textterrtest03.jpg

textterrtest04.jpg

textterrtest05.jpg

textterrtest06.jpg

MilchoPenchev
I've completed the start of the procedural terrain. You can download the test on my site here. I know not a lot of people read this, but if you are one of the few, and you decide to download it, I'd appreciate some feedback on how well it runs.

If you do download it, you can move with wasd, and holding down the right mouse button. There's no load screen right now, so initially, please wait and don't move until the terrain loads (~10-20 seconds). After that you can move as far as you want, and the program should generate the terrain around you.

On my two year old PC I get around 200fps, (bottom left corner). I can crank up the terrain resolution higher, and I can make visibility past 0.5km, but it will probably have a very adverse effect.

Right now, I use two layers for the different LODs. The high resolution (colored blue), is generated in close proximity to the camera. (it's colored blue for testing purposes), and everything else is the low resolution. I'm thinking, I might get better performance if I add a medium resolution, and decrease the low-resolution's sampling density.

Update: Added a third LOD, which means there's visibility of 1.5km now, and I actually gained a small speed boost in rendering.

Also added collision with terrain, which prevents the camera (and any future objects) from going inside the terrain. Since the terrain is density-function based, there is an actual 'inside', so collision isn't based on polygon intersection. The method I'm using also allows for both elastic and inelastic collision. It also has a nice side effect: it's decoupled from the rendered polygons, allowing any object to collide properly without the ground needing to be rendered at all at the object's location.
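
The rough idea behind the position correction (a sketch, not the engine's actual collision code - the elastic/inelastic velocity handling is omitted, Density is a stand-in for the terrain's density function, and Vec3f stands in for my vector class):

// Hypothetical stand-in: density > 0 means the point is inside the terrain.
float Density( const Vec3f &p );

// Push a position that ended up inside the terrain back out along the density gradient.
Vec3f ResolveCollision( Vec3f pos )
{
    const float eps  = 0.1f;   // step used for the numerical gradient
    const float push = 0.05f;  // how far to move per iteration

    for ( int i = 0; i < 32 && Density( pos ) > 0.f; ++i )
    {
        Vec3f grad( Density( Vec3f( pos.x + eps, pos.y, pos.z ) ) - Density( Vec3f( pos.x - eps, pos.y, pos.z ) ),
                    Density( Vec3f( pos.x, pos.y + eps, pos.z ) ) - Density( Vec3f( pos.x, pos.y - eps, pos.z ) ),
                    Density( Vec3f( pos.x, pos.y, pos.z + eps ) ) - Density( Vec3f( pos.x, pos.y, pos.z - eps ) ) );

        // the gradient points toward increasing density (deeper into the ground), so move against it
        pos = pos - grad.normalized() * push;
    }
    return pos;
}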

Updated version has replaced the old download.

Anyway, here's a screenshot. Though the best way to experience the procedural (theoretically infinite, though 'arbitrarily large' is a better term) terrain is to download the program from the link above.




terrtest01.jpg




New screenshots, with 3 LODs

terrtest02.jpg

terrtest03.jpg

Edit: new screenshot. I might switch to a texture atlas, because 3D texturing has its quirks. Alternatively, I could make my own mipmaps for the 3D texture.

fullview.jpg

MilchoPenchev
Today, in a mini-update, some screenshots of terrain generated by a combination of the Marching Cubes algorithm and a perlin-noise density function.

The images below have 2 parts: a small grid in a 64x64x64 meter cube, sampled at 2 samples per meter, and a large grid in a 512x512x512 cube sampled at 0.25 samples per meter (1 sample every 4 meters). Internally both are represented by the same number of "blocks" - a 4x4x4 grid of "blocks", each block consisting of 32x32x32 samples of the same density function.
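
For context, the density function itself is conceptually just noise minus height above some base level - something along these lines (a simplified sketch; the actual function, octave count and constants differ, and PerlinNoise3D is a hypothetical stand-in):

// Hypothetical stand-in for the Perlin noise function used by the generator.
float PerlinNoise3D( float x, float y, float z );

// Density sampled by Marching Cubes: positive = solid ground, negative = air.
// The zero isosurface is the terrain surface.
float TerrainDensity( float x, float y, float z )
{
    float base   = -y;                                                          // flat ground plane at y = 0
    float hills  = PerlinNoise3D( x * 0.01f, y * 0.01f, z * 0.01f ) * 40.f;     // large features
    float detail = PerlinNoise3D( x * 0.1f,  y * 0.1f,  z * 0.1f ) * 4.f;       // small features
    return base + hills + detail;
}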

The denser grid is supposed to be around the character; however, there are some decisions to make about this method, so no specifics yet.

The whole thing contains something on the order of 80,000 faces.

First an overview of the whole 512x512x512 m cube

terrgen1.jpgterrgen2.jpg

And how it looks when the camera is standing at eye-level in the middle of the denser patch:

terrgen3.jpgterrgen4.jpg




Note that this method of generating terrain can easily generate overhangs and caves, unlike a heightmap-based approach. There's still a lot of work to do to make it completely usable, but for now it's a solid start for my engine's terrain generator.

MilchoPenchev
With skeleton animation now functional, animating a model was the next step. That is, however, no trivial step.

My skeleton's design gave me an easy way to move a point "with" a given bone. Each bone has its own Axes, which is always aligned with the bone. For the purposes of the skeleton this was called the "EndAxes", because it's the axes around which all child bones get rotated.

The Axes is literally that - an Axes class. I wrote it with some features useful specifically for this type of work. The two functions used are called GetGlobalPointFromLocalPoint and GetLocalPointFromGlobalPoint.

They're fairly simple functions that look like this:

Vec3f Axes::getGlobalPointFromLocalPoint(Vec3f point)
{
    // sum the local coordinates along each of the axes' basis vectors
    Vec3f res(0.0f, 0.0f, 0.0f);
    res = x_axis*point.x + y_axis*point.y + z_axis*point.z;
    return res;
}

Vec3f Axes::getLocalPointFromGlobalPoint(Vec3f point)
{
    // project the global point onto each basis vector
    Vec3f res(0.0f, 0.0f, 0.0f);

    res.x = point.dot(x_axis);
    res.y = point.dot(y_axis);
    res.z = point.dot(z_axis);

    return res;
}


Getting a global point from a local point is easy - simply take the local point's coordinates, multiply them by the Axes's axis, and add them up.

Doing the reverse is simply a matter of getting the projection of the global point's coordinates on to each of the Axis.

Note that x_axis, y_axis and z_axis are all of type Vec3, which is a self-written vector class, which functions as you'd expect it to.

My Axes class also doesn't store an origin. It assumes that if you pass in a point, it has already been translated as though the Axes are at the origin. This allows for the class to remain fairly generic, and since translation is just a matter of adding two vectors, it's hardly a difficulty.

So my idea was to load a model and, for each point in the model, associate it with a bone: store the point's coordinates relative to that bone's Axes using GetLocalPointFromGlobalPoint(). Then, after the bone has changed, simply call GetGlobalPointFromLocalPoint() with the stored original relative coordinates.

This works out fine, and the math isn't too costly either - simply multiplications and additions. (A small usage sketch is below.)
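
As a quick sketch of how the two functions get used (the Bone members here are illustrative names, not the actual class layout):

// At attachment time: store the vertex relative to the bone's EndAxes.
Vec3f localPos = bone.endAxes.getLocalPointFromGlobalPoint( vertexPos - bone.endPosition );

// Every frame, after the bone has rotated: recover the vertex's new global position.
Vec3f newVertexPos = bone.endPosition + bone.endAxes.getGlobalPointFromLocalPoint( localPos );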

[size="4"]The Vertex-Bone automatic attachment problem


The main problem is which bones to associate with which vertex. I posted on the forums here, and I got linked to an interesting article: http://blog.wolfire....usion-skinning/

The described method fills the model with voxels and then, for each vertex, attaches the closest bones by path through the filled-in voxels. The method works fine and could be used as a general-purpose attachment method. It successfully overcomes the problem that certain vertices are close to bones they aren't actually supposed to be attached to.

Unfortunately, it also required what seems to me like a lot of extra work: an algorithm to fill in the model with voxels correctly. According to the article, the method also took about a minute to run on the CPU (though he does mention that on the GPU it should be much faster).

Discarding the idea for an absolutely general purpose method, I decided to incorporate some extra knowledge about the skeleton structure I had.

The first simple step was to build a connectivity table. For each bone, I defined which bones are connected to it, and which one is connected at which end - meaning, for example, that the R_FOREARM bone is connected to the R_HAND bone and the R_ARM bone, but the R_ARM bone connects to the start of the R_FOREARM bone, while the R_HAND connects to its end. Note that while this method now requires extra information, the extra information isn't hard to enter, and it could be generated automatically from an existing bone structure. This means that there is some hope that this could be applied as a general-purpose algorithm. (A sketch of such a table is below.)
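
Something like this, conceptually (the bone IDs and the container choice are just illustrative, not the actual code):

#include <map>
#include <vector>

enum BoneId { WAIST, R_ARM, R_FOREARM, R_HAND /* , ... the rest of the bones */ };
enum BoneEnd { START, END };

struct Connection
{
    BoneId  bone;  // which bone is connected
    BoneEnd end;   // which end of *this* bone it connects at
};

// For each bone: the bones connected to it, and at which of its ends.
std::map<BoneId, std::vector<Connection>> connectivity =
{
    { R_FOREARM, { { R_ARM, START }, { R_HAND, END } } },
    // ... entries for the rest of the skeleton
};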

The actual vertex-bone attachment ran in two loops. The first loop simply picked the closest bone to the vertex and called it the "parent" bone. It's because of this that the arms of the skeleton start off outstretched. Again, a minor requirement, but certainly not a difficult one. The pseudocode is fairly simple:

For each Vertex in the Model
    For each Bone in the Skeleton
        if CurrentBone is closer than the CurrentMinDistance
            ParentBone = CurrentBone; CurrentMinDistance = distance to CurrentBone


Note: I'm not going to post the actual code because it looks a little hard to understand with all the indexing, but the code should be available for download on my site soon.

However, only attaching the vertex to one bone doesn't provide for good animation. Each vertex needs to be attached to multiple bones, and the attachments need to be weighted, for good looking results.


This is why after figuring out the parent bone, I use the bone connectivity information stored in the table to find out which other bones the vertex may be attached to. The pseudo code looks like this:

For each Vertex in the Model
{
    Get the projected length of the CurrentVertex onto the ParentBone's vector (the vector from the Bone's start to the Bone's end)
    If (Projected length / ParentBone's length is within a ratio [L,U])
    {
        The vertex is ONLY attached to the parent bone, so we add the ParentBone to the list of attached bones, with weight 1
    }
    else
    {
        Add the ParentBone to the Vertex's bone attachment list, with weight 1/distance to center of ParentBone
        Determine which end of the ParentBone is closer to the Vertex
        With that information, obtain a list of other bones this vertex is attached to from the ConnectivityTable
        For every Connected Bone
        {
            Add the Bone to the Vertex's bone attachment list, with weight 1/distance to center of Bone
        }
        Normalize all the weights (so that they add up to 1, but keeping their ratios)
    }
}
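
In C++-ish terms, the core of the second loop looks roughly like this (a sketch - the Attachment struct and the helper functions are made up for illustration, and Vec3f is my vector class):

#include <vector>

// Hypothetical helpers standing in for the real skeleton/model code:
Vec3f BoneStart( int bone );                                // start position of a bone
Vec3f BoneEnd( int bone );                                  // end position of a bone
float DistanceToBoneCenter( const Vec3f &v, int bone );
std::vector<int> ConnectedBones( int bone, bool atEnd );    // lookup in the connectivity table

struct Attachment { int boneIndex; float weight; };

std::vector<Attachment> AttachVertex( const Vec3f &vertex, int parentBone,
                                      float ratioLow, float ratioHigh )
{
    std::vector<Attachment> result;

    Vec3f boneVec   = BoneEnd( parentBone ) - BoneStart( parentBone );
    float boneLen   = boneVec.length();
    float projected = ( vertex - BoneStart( parentBone ) ).dot( boneVec ) / boneLen;
    float ratio     = projected / boneLen;

    if ( ratio >= ratioLow && ratio <= ratioHigh )
    {
        // "hard" attachment: only the parent bone, with weight 1
        result.push_back( { parentBone, 1.f } );
        return result;
    }

    result.push_back( { parentBone, 1.f / DistanceToBoneCenter( vertex, parentBone ) } );

    bool closerToEnd = ratio > 0.5f;   // which end of the parent bone the vertex is nearer to
    for ( int bone : ConnectedBones( parentBone, closerToEnd ) )
        result.push_back( { bone, 1.f / DistanceToBoneCenter( vertex, bone ) } );

    // normalize so the weights sum to 1, keeping their ratios
    float total = 0.f;
    for ( const Attachment &a : result ) total += a.weight;
    for ( Attachment &a : result ) a.weight /= total;

    return result;
}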


Obtaining the projection of the Vertex on the Bone vector is a crucial part, and since the code didn't explain it too well, here's a diagram:
vertex_bone_projection.png


The bounds for the ratio, [L, U], are two numbers normally in the range 0 to 1. They essentially allow for "hard" attachment to only the parent bone, and they can differ from bone to bone. For example, I wanted almost all the vertices of the head to be "hard" attached to the head bone, so the bounds for the ratio were [0, 1] for the Head.

The other important thing to note, is that I load my character model in parts. I do this because it makes it easy to interchange parts, like say, for example, a different pair of pants.

[size="4"]Some Results:

The above algorithm guarantees that if two vertices are at the same position at design time, then in any animation they will also move together. This guarantees that even if the model is loaded in different parts, it will still look like only one model.

The algorithm is also very quick: for a model of ~3000 vertices it ran in ~70 milliseconds, meaning I had no real need to store the vertex-bone attachment information, because generating it was cheap. The algorithm runs in linear time, because the bone list's size is constant and the only variable is the number of vertices in the model.


Here's a screenshot of the model with each vertex colored per parent bone:
human_model_automatic_attachment.jpg


And here's a video of two animations in progress:

[media]
[/media]




That covers it for my aim for animating the model. My .OBJ loader doesn't load textures yet, but that's completely unrelated to model animation, and is not a top priority right now.
I've already started working on terrain generation, and I'll post some quick results soon!
MilchoPenchev
So, with the bone system in place, and working well, it was time to add motion to it.

First, the animation file was going to be specific to this skeleton - if another skeleton was built with a different bone structure, it would have to use a different animation file.

There are actually 21 bones in my skeleton design - 20 visible, and one zero-length bone at the waist of the skeleton that isn't visible, but is the only bone that can be moved directly. All other bones always start from their parent's end position.

I had long decided that frame-based animation was the way to go. So the first part was storing the data that would uniquely determine a skeleton's rotations and position.

For the zero-length center bone, I stored the position and rotation. For all other bones, only storing the rotation was enough.

This meant that the animation could be applied to any skeleton that had the same number of bones as this. Further, it could be meaningfully applied to any skeleton that had the same parent-child bone structure. This is important because, for example, modeling human or humanoid children, you'll find that they have the same bone structure, but for them, the bone lengths are in different proportions.

For adults of different heights, a simple scale would suffice, because the bone length proportions are the same (or nearly so that it doesn't make a difference).

Besides storing bone rotations, each frame also had to store some other information for its animation.

I decided each frame would have a "wait time" which is the time we pause on that frame, and a "transition time" which is the time to move from this frame to the next.

To animate a skeleton, then, is a matter of interpolating between two frames' sets of stored angles and positions.

However, a straightforward linear interpolation would look unnatural. When we move our limbs - or any real object, for that matter - there's the notion of inertia: it takes time for an object to come up to speed, and to slow down before coming to rest. With linear interpolation, the movement would "jump" up to a certain rate, remain at that rate for the entire animation, and then suddenly stop when it reached its end position. This would look VERY unnatural.

Those of you who have done any sort of frame-based animation are probably familiar with "easing" equations. It's simply a function that ranges from 0 to 1. For example, this is what linear interpolation looks like as such a function (images from the free PowerToy Calc):

linearfunc.jpg




Note that the function should only be defined to work on the range [0, 1]. The derivative, or slope, of the function is the speed of movement. Since the slope is the same over this entire function, the movement speed is constant through the entire time of the animation.

There are two simple ways to do what's called easing in and easing out - that is, to use a quadratic function:

easeinfunc.jpg easeoutfunc.jpg

Notice how the ease-in function's slope (first graph) starts off low and increases linearly until reaching the end point. Any motion guided by this function would slowly accelerate towards its destination, and suddenly stop once it reaches it.

Similarly, the ease-out function's slope starts off steep and slowly diminishes until reaching the end point at 1. It does the opposite: it starts off suddenly, and slows down until reaching the end.

Now in order to combine both easing in and easing out, we have to combine two functions, which can be done with a simple IF statement in the code. The two functions are this:

easeinoutAfunc.jpg

and

easeinoutBfunc.jpg




When you combine the two - use the first function for x < 0.5 and the second for x >= 0.5 - you essentially get a function that looks like this:

easeinoutcombfunc.jpg

The above combination provides for very nice movement, as it somewhat realistically models how we move our limbs. Notice that the two functions not only have the same value at 0.5 (i.e. both reach "halfway" at 0.5), but also have the same slope at 0.5, meaning the motion speed won't have any jerky effects at all.


There are actually cubic and sinusoidal ease-in and ease-out functions, but for simplicity and quicker calculations I stuck with quadratic. (A sketch of the quadratic version is below.)
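
As a concrete sketch, the standard quadratic ease-in/ease-out looks like this (my actual functions may differ in the exact constants, but this has the properties described above - same value and same slope at 0.5):

// Quadratic ease-in-out: maps t in [0, 1] to [0, 1].
float EaseInOutQuad( float t )
{
    if ( t < 0.5f )
        return 2.f * t * t;            // ease in: slope starts at 0 and grows
    float u = 1.f - t;
    return 1.f - 2.f * u * u;          // ease out: slope shrinks back to 0
}

// Interpolating a bone angle between two frames would then look like:
// angle = startAngle + ( endAngle - startAngle ) * EaseInOutQuad( elapsed / transitionTime );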

So, that's the four types of motion I decided to have.

Combine a bunch of frames together, and you have an animation. With all that work done, my final skeleton animator looked like this:

human_skeleton_animator.jpg




The current frame properties is where all the different properties of a frame are set. Everything else is related to editing the skeleton, by individually rotating each bone. The animations are then saved to a file, and are easily read and applied to a skeleton by the Skeleton Animator class.

That covers my work towards animating the skeleton. The next parts will focus on animating a model from the skeleton animation.
MilchoPenchev
Ahem, so my first entry on what's essentially going to be my long and tedious work towards a small custom game engine.

The first few entries are just going to be me, playing catchup on what I've done so far.

The first thing I've decided to try to do is animate a human character. For this purpose I created my own custom human skeleton, using Da Vinci's Vitruvian Man to obtain the measurements and positioning of each bone. In case you don't know it, the Vitruvian Man is this drawing of a supposedly ideal man:

300px-Uomo_Vitruviano.jpg




The initial skeleton design was done in Google's free program, SketchUp, where I traced over the general bone structure in that drawing:

skeletonofman.jpg

After a somewhat painful manual entry of the positions, lengths, and orientations, I had the base for creating a set of bones. However, a set of bones isn't enough for proper skeleton animation.

Each bone can have one parent (potentially NULL) and many children. The next step was figuring out how to represent the rotations. I wanted to be able to access a bone's position and end position without recursive calls. I also wanted rotations to be relative, and to have the ability to move a point "with" the bone. The last requirement was because I knew that I would have to animate a model based on that skeleton.

To solve all this, I wrote a class, called Axes. Axes contained 3 vectors, which started off as the XYZ world space vectors. I also wrote several functions for rotation around the Axes in the class, all of which eventually came down to 1 function: rotation of a point around an arbitrary axis.

The rotation of a point around an arbitrary axis, in brief, is done like so:

1. Translate your axis to pass through the origin. In my case, I always assumed my axis was at the origin, though it was actually at the bone's Starting position. To properly rotate then, all you have to do is perform a move of -sp, where sp is a vector representing the bone's starting position.

2. Align your axis with one of the three world axes (X- Y- or Z-) This essentially means undoing the rotation applied to that axis.

3. Perform your desired rotation, now around the world axis with which you aligned.

4. Undo step 2. (apply the rotation of the bone)

5. Undo step 1. (apply the translation to the bone's position)

That was it. After each bone was rotated, all I had to do was update its child bones with their new position (the current bone's end position) and their delta-rotation matrix (however much the bone was rotated). (A sketch of the per-point rotation is below.)
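
For illustration, the same point-around-arbitrary-axis rotation can also be written in closed form with Rodrigues' rotation formula (this isn't the translate-align-rotate code I actually use, just an equivalent sketch; Vec3f stands in for my vector class):

#include <cmath>

// Rotate 'point' by 'angle' radians around the axis through 'origin' with unit direction 'axis'.
Vec3f RotateAroundAxis( const Vec3f &point, const Vec3f &origin, const Vec3f &axis, float angle )
{
    Vec3f v = point - origin;                        // step 1: move the axis to the origin
    float c = std::cos( angle ), s = std::sin( angle );

    // Rodrigues' formula: v*cos + (axis x v)*sin + axis*(axis . v)*(1 - cos)
    Vec3f rotated = v * c + axis.cross( v ) * s + axis * ( axis.dot( v ) * ( 1.f - c ) );

    return rotated + origin;                         // step 5: move back
}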

The final result was this:

firstskeleton.jpg

The actual work took a little less than 2 weeks - 4 days or so to write all the necessary code, and about another week to get rid of all the bugs.

All the graphics are drawn with OpenGL, and I didn't quite start from scratch, I had a somewhat ready framework from a previous project.

The next step was to animate the skeleton.