Managing Meshes

Started by
1 comment, last by akuda 14 years, 5 months ago
Hi,

As part of a Bachelor's degree project I'm writing a computer game (with 3 other students). We write in OpenGL and C++. I have a question about how models are managed in RAM: I fully understand the idea of re-using geometry with VBOs, and I also know how skeletal animation works (we're about to implement it).

We were asked to keep the OpenGL-specific code in a separate module, so after frustum culling we need to create a command list for the rendering device (we think a tree-based data structure would be the best choice for this). My question is: how should skeletal-animated objects be organized (in classes) in memory, so that a set of commands can easily be generated for the device?

I found that most engines define a "Mesh" structure to contain geometry data, but the meaning of the Mesh class varies between engines: some contain only raw geometry, while others also contain material and texture IDs. Some contain bone information, some do not. Considering skinning as part of the skeletal animation process, I have trouble dividing a complete textured and animated model into any kind of "meshes" (as most faces are bound to more than one bone), "bones" and "animations", other than "everything in one", which probably isn't the best idea.

What do I want to do with this data representation? Be able to animate it with shaders on the GPU, attach other objects to specific bones of the skeleton, and efficiently encode it into a set of instructions independent of the rendering system. For concreteness, assume the model format is similar to Milkshape's ms3d.

What are your suggestions?

Regards, akuda
I've been writing an engine using OpenGL and C++ for two years now, in my spare time. I've gathered a bit of experience and I can tell you what I do in my engine, though it probably isn't the best way to do things.

I have an abstract class "Renderer" which is subclassed by "OpenGLRenderer" and others. The abstract class has virtual methods for common rendering operations (draw a chunk of vertices, change the alpha blend mode, set the current shader, etc.). I implement these methods for each renderer I want to have, and when starting the program I instantiate a "Renderer* rend" variable as one of the concrete renderers, so that the rest of the engine is independent of the rendering API.

I also have a Mesh class that stores a bunch of vertices, normals, and UVs, plus a method that can blend Meshes together. Then I have a Node class which stores spatial transformations and can have other Nodes as children. This way you can have a hierarchy of nodes which add their transform to their parent's. Nodes can also have Animations applied, which are just sequences of transforms over time.
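The Node hierarchy described above could be sketched like this. This is a hypothetical reduction: the local transform is shrunk to a translation (a full engine would use 4x4 matrices), and the names are illustrative only:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Each Node stores a local transform (reduced here to a translation) and
// composes it with its parent's world transform.
struct Node {
    Vec3 local{0, 0, 0};
    Node* parent = nullptr;
    std::vector<Node*> children;

    void addChild(Node* c) { c->parent = this; children.push_back(c); }

    // World transform = parent's world transform composed with our local one.
    Vec3 worldPosition() const {
        if (!parent) return local;
        Vec3 p = parent->worldPosition();
        return {p.x + local.x, p.y + local.y, p.z + local.z};
    }
};
```

An Animation would then just rewrite `local` each frame; every descendant picks the change up automatically through `worldPosition()`.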

To attach a Mesh to a Node when there is no skinning (a static entity like a table, for example), I just draw the Mesh after setting the Node's transform as the current one. If I want a hierarchy of nodes to act as a skeleton for a mesh, I use a vertex shader to transform the Mesh vertices accordingly: for each vertex I select the affecting Nodes' transforms and blend them together using the vertex's bone weights.

To me this has proven to be quite flexible, since you can have Animations for each node and blend animations together, change the animation for just some bones of the character, and attach weapons and such to the character's bones by simply attaching a new Node to their bone (Node) hierarchy.

[Edited by - ArKano22 on November 5, 2009 12:42:03 PM]
This is very similar to what I have planned (I guess we read the same books :) ), but I wonder: does that mean you attach information saying "the subtree below is a skeleton" to some node, and that information applies the shader? And to which node is the geometry data (for skinning) attached: to the root of the animation?

This topic is closed to new replies.
