
Milkshape Exporter


[WARNING - this post is very rambling]



My basic Milkshape exporter is working. The above is being rendered in a normal D3D application, having loaded the model from my own exported file format, including all the vertex normals for lighting.

The format is text at the moment, using my XCS format for ease of loading and to aid debugging, so it is pretty slow to load. Once everything is working, I'll switch to a binary format to speed up loading time.

I'm very impressed with how easy the Milkshape SDK is to use, even with the total lack of documentation.

The only strange thing is that, at the moment, the model is facing the wrong way. I'm wondering if Milkshape uses the opposite-handed coordinate system for its axes. This seems likely to me, as I believe OpenGL uses the opposite convention to Direct3D and, if memory serves, Milkshape uses OpenGL.

I'm guessing this will be trivial to fix - I just need to modify the texture for the object so that I can see whether it is a mirror image of the Milkshape version or the correct one, then have a quick refresher read-up on the left- and right-handed axis stuff. Hopefully I just invert the Z, but we'll see.

[EDIT]

Got that sorted. It was a bit strange - it turns out that, for some weird reason, Milkshape was reversing my texture across the X dimension. Not many people seem to be complaining about this on the internet, so the only solution was to do a left-to-right flip of all my vertices within Milkshape.

I then had to modify my exporter to negate the Z component of each vertex and swap the vertex order around so that the winding order (the clockwiseness) of the triangles was preserved.

From there, it was just a case of negating the Z component of the normals in the exporter. All the data in my exported file is now correct to load directly into a D3D vertex and index buffer and appear the same as it does in Milkshape.
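The conversion described above can be sketched like this (a minimal illustration, not the actual exporter code - `Vec3` and `ConvertRHtoLH` are made-up names):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

// Right-handed to left-handed: negate Z on positions and normals.
// Negating Z flips each triangle's winding, so swap two indices per
// triangle to restore the original clockwise order.
void ConvertRHtoLH(std::vector<Vec3>& positions,
                   std::vector<Vec3>& normals,
                   std::vector<unsigned short>& indices)
{
    for (Vec3& p : positions) p.z = -p.z; // mirror geometry across the XY plane
    for (Vec3& n : normals)   n.z = -n.z; // keep the lighting consistent
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3)
        std::swap(indices[i + 1], indices[i + 2]);
}
```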

In case you are wondering, the whole point of having my own format file is to implement skeletal animation myself, mainly for fun but also for flexibility and simplicity over the D3D animation system.

The first step from here is to export the skeleton details to my model file along with the model, then come up with a way to programmatically rotate and move the joints of a skeleton.

Once that is working and all making sense (if ever [smile]), I can then write another exporter for Milkshape that will export just the keyframes of animation from a model.

I touched briefly on this in the last post, but the problem I am trying to overcome is that Milkshape seems to only let you store one animation per model and can only export one animation per model to an X file.

As well as the fun of the exercise (since all my game projects are dead in the water at the moment), I am hoping to be able to eventually have a system where I have a model file format that defines the model and a skeleton, then another file format that defines multiple named animations.

In theory, as long as the skeletons of two different models define the same joints - the same in terms of function and in the same order - it should then be possible to apply the same animations to both models.
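As a rough sketch of that rule (the representation here is hypothetical - a real skeleton would carry more than joint names):

```cpp
#include <string>
#include <vector>

// An animation authored for one skeleton can drive another as long as
// the joints match in function and order - approximated here by
// comparing joint names in order.
bool SkeletonsCompatible(const std::vector<std::string>& jointsA,
                         const std::vector<std::string>& jointsB)
{
    return jointsA == jointsB; // same joint names, in the same order
}
```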

From there, I want to work on running multiple animations on the same model but on different bones - e.g. lower body walking, upper body waving arms like an idiot - via some nice flexible system.

Finally, I want to look at blending smoothly from one animation to another in a way that is compatible with the multiple animations thing described above.

I appreciate that this is all a bit of a monstrous undertaking but I'm a hobbyist and currently enthused, so that is not really a problem. It's the process I really enjoy over whether it works or not.

I need to do a bit of research next - I'm not sure how existing animation systems work. As I see it, there are two options: manually rotate and move the vertices attached to bones in code, then construct a single mesh based on the pose; or render the mesh in stages on a per-bone basis, using the world matrix to move each set of vertices into the correct position based on the current pose.

I can see pros and cons to each. The first would reduce the number of DrawPrimitive calls, as the final posed mesh could be rendered in one go - but then all the vertex transformations would be done in software.

The second approach would mean multiple calls to draw primitive per mesh, but would move the work of transforming the vertices onto the graphics hardware, which I guess is where it should be.
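A minimal sketch of the first option, assuming one bone per vertex (all the types and names here are illustrative, not real D3D code):

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// 3x4 affine transform: rotation/scale in m, translation in t.
struct Mat34 {
    float m[3][3];
    Vec3  t;
    Vec3 apply(const Vec3& v) const {
        return { m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + t.x,
                 m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + t.y,
                 m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + t.z };
    }
};

// Option one: transform every vertex on the CPU by its single bone's
// posed matrix, producing one posed mesh that can then be uploaded to a
// dynamic vertex buffer and drawn in a single call.
std::vector<Vec3> PoseMesh(const std::vector<Vec3>& bindPositions,
                           const std::vector<int>& boneOfVertex,
                           const std::vector<Mat34>& bonePose)
{
    std::vector<Vec3> posed(bindPositions.size());
    for (std::size_t i = 0; i < bindPositions.size(); ++i)
        posed[i] = bonePose[boneOfVertex[i]].apply(bindPositions[i]);
    return posed;
}
```

The second option would instead sort vertices by bone, set the world matrix per bone, and issue one draw call per segment.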

If anyone has any insights on this subject, I'm all ears (well, all eyes really since this is a text-based forum).

[EDIT]

Hmm. Flipping the vertices in Milkshape buggers up my skeleton - the vertices all go kapooey. This is no good.

I've flipped them back and the model still renders in my app the same as in Milkshape, but the texture is still reversed.

I don't see a simple way to solve this in code. All I can do is flip my textures before and after editing in PaintShop and work with right-to-left textures. How daft.

I can't understand what is causing this - perhaps I'm just being thick.

[EDIT] Hmm. Fixed it, but only by drawing the region from right to left before doing a remap. Weirdly, this didn't happen on another test model I made. Something's going screwy somewhere. Oh well.

[EDIT] Ramble, ramble.

Thinking about it, as we prep for our New Year festivities, I need to transform as I render, otherwise I have to store two copies of the model. Daft.

On this basis, here's how I see it. Corresponding to the skeleton will be a tree of matrices. The vertices will either be stored as separate meshes, one per bone, or sorted by bone index with an indexing table so I can call DrawPrimitive on contiguous segments.

For speed, I'll store an array of bones each of which will contain a list of all the child bones so I don't have to do any complex recursion to find all the children.

For each bone, I'll apply the rotation to the bone's matrix and all of its child matrices.

Actually, I'm going to have to recurse through a tree anyway to make sure I apply the rotations in the correct order, so perhaps I will use a tree structure to define the skeleton of matrices. Dunno.

Anyway, somehow I'll iterate through the tree, accumulating each rotation into each matrix, until the whole set has been visited.

Now I should be able to do a simple linear iteration through each section of the mesh, apply the relevant matrix to the overall scale/rotation/translation of the whole mesh and render the section.
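That accumulation step might look something like this (a sketch with made-up types; the 3x4 matrix stands in for whatever matrix class actually gets used):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// 3x4 affine transform; combine() composes transforms: result = this * other.
struct Mat34 {
    float m[3][3];
    Vec3  t;
    Mat34 combine(const Mat34& o) const {
        Mat34 r{};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                r.m[i][j] = m[i][0]*o.m[0][j] + m[i][1]*o.m[1][j] + m[i][2]*o.m[2][j];
        r.t = { m[0][0]*o.t.x + m[0][1]*o.t.y + m[0][2]*o.t.z + t.x,
                m[1][0]*o.t.x + m[1][1]*o.t.y + m[1][2]*o.t.z + t.y,
                m[2][0]*o.t.x + m[2][1]*o.t.y + m[2][2]*o.t.z + t.z };
        return r;
    }
};

struct Bone {
    Mat34 local;               // this bone's own rotation/translation
    std::vector<int> children; // indices into the bone array
};

// Walk the tree parent-first, so each bone's absolute matrix accumulates
// every ancestor's transform in the correct order.
void AccumulatePose(const std::vector<Bone>& bones, int index,
                    const Mat34& parentAbs, std::vector<Mat34>& absolute)
{
    absolute[index] = parentAbs.combine(bones[index].local);
    for (int child : bones[index].children)
        AccumulatePose(bones, child, absolute[index], absolute);
}
```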

Sound plausible? Doubt anyone is still reading at this point [smile].


2 Comments



I'm still reading [smile]

A helpful term to use in your research is "skinning", which can be performed either on the CPU or the GPU.

You mention associating vertices with a single bone each, but it is common to associate a vertex with several bones, say 2-3 on average. The final position of the vertex is given by transforming it (from its bind-pose position) by all its associated bones, then weighting/scaling the results and combining them into one final position. The weighting applied to each bone is also associated with the vertex (rather than the bone), so that different vertices can be influenced by the same bone by different amounts. Think about vertices near joints, like elbows: as you follow vertices from the forearm to the upper arm, you want both bones to influence their positions, but vertices further from the forearm bone should be influenced more by the upper-arm bone.
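That blending can be sketched like so (a minimal CPU-side illustration with hypothetical types, not anyone's actual code):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// 3x4 affine transform: rotation/scale in m, translation in t.
struct Mat34 {
    float m[3][3];
    Vec3  t;
    Vec3 apply(const Vec3& v) const {
        return { m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + t.x,
                 m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + t.y,
                 m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + t.z };
    }
};

struct Influence { int bone; float weight; }; // a vertex's weights sum to 1

// Transform the bind-pose position by each influencing bone's matrix,
// then combine the results, weighted per influence.
Vec3 SkinVertex(const Vec3& bindPos,
                const std::vector<Influence>& influences,
                const std::vector<Mat34>& bonePose)
{
    Vec3 out{0.0f, 0.0f, 0.0f};
    for (const Influence& inf : influences) {
        Vec3 p = bonePose[inf.bone].apply(bindPos);
        out.x += inf.weight * p.x;
        out.y += inf.weight * p.y;
        out.z += inf.weight * p.z;
    }
    return out;
}
```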

Quote:
Original post by dmatter
I'm still reading [smile] [...]


Thanks for the comment and the pointers.

I've read a little about this before, but for the time being I don't intend to worry about it. If I can just get a very simple system up and running with one bone per vertex, where any transformation of a bone also affects its child bones correctly, perhaps then I'll look into this more.

The animation I did in Milkshape looks okay with just one bone per vertex so it may be that for my purposes, this is all that is required.

Either way, I'll think about it once the basic system is working.

