API-independent mesh class

Started by
10 comments, last by Basiror 18 years, 9 months ago
I'm at the point where I'm designing the mesh class for my API-independent 3D engine, and I need some help. Currently I've designed it so that the mesh contains a transform, a vertex buffer, an index buffer, and a list of subsets. Each subset holds material and texture information as well as the range of indices in the index buffer it applies to. Is this a good design? Also, I haven't thought about animation yet... is it very complicated to implement?

www.marklightforunity.com | MarkLight: Markup Extension Framework for Unity

I would move animations into a separate class which is only used for models.

As for the mesh class, you could overload operator<(...) and compare the resource indices (the indices of your subsets?).

That way you could sort the frustum-culled elements of your scene by texture and such, and reduce state switching. Just a thought; a quick sketch of the idea follows.
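A minimal sketch of that sorting idea, assuming each mesh keeps indices into the engine's material and texture lists (all names here are illustrative, not from anyone's actual engine):

```cpp
// Sort frustum-culled meshes by material, then texture, so equal render
// states end up adjacent and the draw loop switches state less often.
#include <algorithm>
#include <vector>

struct Mesh {
    int materialIndex;  // index into the engine's material list (assumed)
    int textureIndex;   // index into the engine's texture list (assumed)

    bool operator<(const Mesh& other) const {
        if (materialIndex != other.materialIndex)
            return materialIndex < other.materialIndex;
        return textureIndex < other.textureIndex;
    }
};

// Comparator for sorting a list of mesh pointers by the operator< above.
bool MeshPtrLess(const Mesh* a, const Mesh* b) { return *a < *b; }

void SortVisibleMeshes(std::vector<Mesh*>& visible) {
    std::sort(visible.begin(), visible.end(), MeshPtrLess);
}
```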

Also consider building indexed triangle strips and merging them with the tunneling algorithm.

With a decent triangle strip implementation you can get the cost down to roughly 1.1-1.3 vertices per triangle if the mesh allows it, which spares a lot of transformations on the graphics card.
http://www.8ung.at/basiror/theironcross.html
I can tell you that you'll redesign that class a couple of times as your engine grows. Just today I added full support for triangle strips to my engine, and I'll have to redo some of its internals.

I also don't know what your particular definition of "mesh" is. You see, some people pack a whole model as a "mesh", when in fact it is made up of arms, legs, torso, head, etc...

To me, a group of meshes that are hierarchically linked together is an "Object".

In my view, there is a one-to-many relationship between "Objects" and "Meshes". An Object, which can be a Soldier, can have any number of defining meshes, as in the arms, legs, etc... you get the picture.


So, here is what I would have in a Mesh Class (a rough sketch in code follows the list):

* Vertex:
** Positions - XYZ
** Normals - XYZ
** Texture Coordinates - UV
** Colors - RGB
* Faces - Defined in either unsigned short format (Word), 2 bytes, or unsigned int format (DWord), 4 bytes. I still haven't come to a conclusion on which is better or faster.
* Support for Strips: If you're using degenerate triangles, then you'll have just one strip sequence per mesh. Otherwise you should have a dynamic structure that can hold any number of TriStrip sequences, and your render path should be ready to render them.
* Support for Bones: If you're using bone-based animation, with quaternions or matrices to represent rotation of various meshes.
* Textures: This should be an array of pointers to this mesh's textures. Nowadays we just don't have a single texture per model, so, in my engine I have an array of 16 ints that serve as pointers to the various textures this model is using.
* Bounding Volume - A bounding box, or another volume of your choice, for collision detection.
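Pulling the list above together, here is a rough, illustrative C++ sketch of such a mesh class; the fixed array sizes and member names are assumptions, and bone data is left as a comment:

```cpp
// Illustrative layout only; sizes and names are assumptions.
struct Vec3  { float x, y, z; };
struct Vec2  { float u, v; };
struct Color { unsigned char r, g, b, a; };

struct Mesh {
    // Vertex data
    Vec3*  positions;
    Vec3*  normals;
    Vec2*  texCoords;
    Color* colors;
    unsigned int vertexCount;

    // Faces: 16-bit indices save memory but cap the mesh at 65536 vertices
    unsigned short* indices;
    unsigned int    indexCount;

    // Triangle strips: start offset and length of each strip in the index list
    unsigned int* stripOffsets;
    unsigned int* stripLengths;
    unsigned int  stripCount;

    // Textures: slots referencing the engine's texture manager
    int textures[16];

    // Bone data for skeletal animation would hang off here as well

    // Bounding volume: an axis-aligned box for culling/collision
    Vec3 boundsMin, boundsMax;

    // "Extender" hook discussed below: plugins/backends attach data here
    void* extension;
};
```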

If your animation is vertex interpolation-based, a la MD2, then all you need to do is save all vertex positions and normals per frame. All the other data stays the same, even when using triangle strips.
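A hedged sketch of that MD2-style scheme: every keyframe carries a full copy of positions and normals, and rendering blends two adjacent frames (names are made up for illustration):

```cpp
// Each keyframe stores a full copy of positions and normals; indices,
// texture coordinates, and strip data stay the same across frames.
struct Vec3 { float x, y, z; };

struct KeyFrame {
    Vec3* positions;
    Vec3* normals;
};

// Blend frame a toward frame b by t in [0,1] and write the result to outPos.
void BlendPositions(const KeyFrame& a, const KeyFrame& b, float t,
                    Vec3* outPos, unsigned int vertexCount)
{
    for (unsigned int i = 0; i < vertexCount; ++i) {
        outPos[i].x = a.positions[i].x + (b.positions[i].x - a.positions[i].x) * t;
        outPos[i].y = a.positions[i].y + (b.positions[i].y - a.positions[i].y) * t;
        outPos[i].z = a.positions[i].z + (b.positions[i].z - a.positions[i].z) * t;
    }
    // Normals would be blended the same way (and renormalized if needed).
}
```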

If your animation is bone-based, then you should save it on the "Object" structure, which should have a hierarchical representation of the various meshes, and the transformation matrices per frame, per bone.

Also try to design "external connectors", or "extenders". Basically it's just a dummy pointer that is not being used, but if in the future a plugin or an engine extension has to enhance the Mesh class, it can use the dummy pointer to point to further information.

Using the above point, you should also have a per-API Mesh Extension.
As a practical example, imagine that you're rendering a teapot under OpenGL, using display lists. You'll need to request an ID for the new display list from OpenGL, and you'll need to store it someplace, so it should somehow be attached to the Mesh class. With this dummy-pointer method, it is (see the sketch below).
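A small sketch of how that could look under OpenGL, assuming a hypothetical void* extension member on the mesh; glGenLists is the real GL call, everything else is illustrative:

```cpp
#include <GL/gl.h>

// API-specific data the GL backend wants to attach to a mesh.
struct GLMeshExt {
    GLuint displayList;  // ID handed out by glGenLists
};

struct Mesh {
    // ...geometry, textures, and so on...
    void* extension;     // the dummy pointer: unused until someone claims it
};

void AttachDisplayList(Mesh& mesh)
{
    GLMeshExt* ext = new GLMeshExt;
    ext->displayList = glGenLists(1);  // request a fresh display list ID
    mesh.extension = ext;              // GL-specific state now rides with the mesh
}
```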

You'll also find it very valuable when you add a physics layer to your engine. Meshes will suddenly stop being collections of vertices and faces, and you'll need to store their weight, mass distribution, etc, etc...

Continuing in this line of thinking, one can even extend it into Shaders. Each mesh can have an Effect assigned to it. An Effect is usually a pair, a Vertex Shader and a Pixel Shader. So perhaps you'll want to create a dummy pointer for that too.

Your Mesh structure might also have 2 states. The "Just Loaded" state, which stores the mesh information in an engine-friendly format, and the "Ready to Render" state, which is the format you put the mesh in just before sending it off to the card. I have multiple filters in my engine that work upon the mesh, like generating vertex normals for it in case it doesn't have them (MD2), and it is much easier for filters to work on the "Just Loaded", engine-friendly version than on the API-specific format, like VBOs.
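One possible shape for those two states, as a sketch; the type names and the Compile step are assumptions, not Prozak's actual code:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// "Just Loaded": plain arrays that engine-side filters can freely rewrite,
// e.g. a filter that generates vertex normals for formats lacking them (MD2).
struct MeshSource {
    std::vector<Vec3> positions;
    std::vector<Vec3> normals;   // may start empty; a filter can fill it in
    std::vector<unsigned int> indices;
};

// "Ready to Render": whatever the API wants, e.g. a VBO handle.
struct MeshRenderable {
    unsigned int vboId;          // API-side buffer object ID
    unsigned int indexCount;
};

// Filters run on MeshSource; only afterwards is the mesh compiled for the card.
MeshRenderable Compile(const MeshSource& src);  // upload happens in here
```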

Also don't forget to save how many frames of animation the mesh contains, which one you're currently drawing, etc...

I wrote this as I took the occasional peek at my own engine, so I hope it was helpful. Good luck! [wink]
Prozak, that was quite interesting to read; I always like reading about other people's engines :)

Anyway I have some questions and comments on what you describe.
1) You store your per-vertex colors in RGB format, so how do you implement alpha-blended vertices - through your textures?

2) You store 16 ints to reference textures. Do your objects track how many textures they currently have, or are the unused ints just == -1? Any reason why you chose to limit them to a max of 16 textures, and why don't you just use normal pointers?

3) Do your 'extenders' offer any advantage over typical techniques such as inheritance?


Finally, you say you're not sure about using unsigned shorts or unsigned ints... For my static geometry I use unsigned shorts, because my spatial partitioning ensures that the number of vertices is < 65536. My dynamic geometry uses unsigned ints.


To the OP, don't worry too much about good design at the moment; it's more important just to get ideas down, code clearly and design your classes to be flexible from the start. Eventually a 'good' design will emerge from the madness, but probably not before you have several attempts at it first :)
My perspectives on what a mesh class ought to have:

Basic geometry (that's XYZ, normals, tex coords, tangents, and maybe bitangents, as well as the indices for the mesh, and any info needed to render that)
Bounding volume for the geometry


That's it! Animation*, transforms, textures, shaders... all of that should be separated out into their own objects and controllers. This will ease all sorts of things later on (e.g. instancing).

* Depending on the exact format and layout of your model files, separating animations from geometrical data, though strongly advised, may be a difficult proposition. But it is imperative not to bind animation info such as the currently running sequence, current frame, etc (anything that can change for multiple instances of the same model) to the mesh class.
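A minimal sketch of that separation, with all names invented for illustration: the mesh stays shared and immutable, while each instance owns its animation state:

```cpp
// Shared, immutable geometry: one copy, no per-instance state inside.
struct Mesh {
    /* positions, normals, tangents, indices, bounding volume, ... */
};

// Everything that changes per instance of the same model.
struct AnimationState {
    int   currentSequence;
    int   currentFrame;
    float frameBlend;        // interpolation factor between keyframes
};

// The game-world object ties the two together.
struct Entity {
    const Mesh*    mesh;     // many entities can point at the same mesh
    AnimationState anim;     // each entity animates independently
};
```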

[Edited by - Promit on August 2, 2005 6:56:40 PM]
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
Quote:Original post by Prozak
I wrote this as I took the occasional peek at my own engine, so I hope it was helpful. Good luck! [wink]


Very helpful, thank you. If I understand your design correctly, you use one material per mesh and the list of textures applies to all faces? That would simplify things...

www.marklightforunity.com | MarkLight: Markup Extension Framework for Unity

I was going to post but realized that I agree with Promit 100% so there is no need. His advice is very good.
Quote:Original post by Promit
My perspectives on what a mesh class ought to have:

Basic geometry (that's XYZ, normals, tex coords, tangents, and maybe bitangents, as well as the indices for the mesh, and any info needed to render that)
Bounding volume for the geometry


That's it! Animation*, transforms, textures, shaders... all of that should be separated out into their own objects and controllers. This will ease all sorts of things later on (e.g. instancing).

If I understand this design correctly, you handle render-state changes that apply to the mesh outside the actual mesh drawing method (e.g. via transform and render-state nodes higher up in some sort of scene tree hierarchy that is traversed depth-first when drawing)?

I've chosen a different approach. I organize my scene objects spatially. Each scene object contains information about its transform and bounding volume... I have a separate transform hierarchy, and I'm not quite sure how to handle render-state hierarchies yet (it may or may not break my design). Anyway, I can't separate transform and texture (and other render-state changes) from the mesh.

www.marklightforunity.com | MarkLight: Markup Extension Framework for Unity

dmatter:
1) I do store them in RGBA format, sorry, my mistake back there.

2) I do store pointers, and not ints; again, sorry, I think I made a mistake back there. Basically, we all know that at save time we should save IDs, not pointers, because the next time you do a load the resources won't fall on exactly the same spots in memory, and therefore your pointers would become invalid.

I do track how many textures the object uses. Basically, texture units 0 and 1 are the primary units, T2 is for bump mapping, and anything above that isn't used in the fixed pipeline. For the dynamic pipeline (shaders), the shader can call up any of the 16 texture units and do with them as it pleases. The engine's job is just to make sure the texture is loaded correctly and is in memory when it's called upon; it's the shader's job to actually use the texture in some form, for parallax mapping for example.

I limit it to 16 textures because I cannot foresee how a mesh could ever use more than that amount... it would have to be an insane number of passes, or a highly complex shader, and both are virtually nonexistent in an actual released game, although there is a place for such things in the lab.

3) I'm more of a C coder than a C++ coder. I'm not much into inheritance and all that, and much of my engine's internals revolve around function pointers. Inheritance is a compile-time concept, is it not? You could create a class that inherits from the base Mesh class and expand it with more methods and variables, but the "extender dummy pointer" concept is meant to be used by plugins, and a plugin is something that has already been compiled. Consider it a "hook" of sorts. Imagine you have 1,000 asteroids in a single level, and you're simulating the best asteroids game ever. Now imagine someone codes a plugin that makes the game more interesting. It plugs itself into the mesh resources and sets one of the dummy pointers to point to a specialized structure that tells whether a certain asteroid is made of iron, rock or ice. Then it plugs itself into the AI part of the engine and makes it more difficult to shoot down iron asteroids, for example.

It is much simpler to have Mesh[945].dummy_pointer point to an "Asteroid_Type" structure than to have a standalone structure all by itself.

I guess this is a concept that becomes clearer when you're "in the field", coding...

Promit, I agree. Whenever you have a situation where you see that various systems in your engine could be using a resource, it's better to make it so that there is a pointer to the resource. Never glue a resource and the parameters that define it together.

As an example, don't have the 3D geometry of the asteroid and its mass, speed, current position in the level, etc., all together. If you want to define another asteroid, you would have to duplicate all of the 3D geometry. Instead, define the asteroid structure and have it hold a pointer to a mesh structure. That way you can have 2 asteroid structures and a single asteroid mesh structure.
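As an illustrative sketch of that asteroid example (names are mine, not Prozak's):

```cpp
struct Mesh { /* vertices, faces, ... loaded once */ };

struct Asteroid {
    Mesh* mesh;         // shared geometry
    float mass;
    float speed;
    float position[3];  // per-instance parameters live here, not in the mesh
};

// Two asteroids, one copy of the geometry:
//   Asteroid a = { &rockMesh, 10.0f, 1.5f, {  0.0f, 0.0f, 0.0f } };
//   Asteroid b = { &rockMesh,  4.0f, 3.0f, { 50.0f, 0.0f, 0.0f } };
```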

As always, I hope I could make myself clear...
Quote:Original post by Opwiz
Anyway, I can't separate transform and texture (and other render-state changes) from the mesh.


Of course you can. There is a critical point here.

A mesh is not an object that exists in the world of your game.

A mesh is a geometrical description of the visual representation of an object. A mesh is exactly the same as a transform, a material (textures, shaders, et al), or anything else. It's an attribute of a parent object. Not only that, a mesh shouldn't even have a render method. (Some kind of visitor double-dispatch render function maybe, but it shouldn't actually be drawing itself.) It merely needs to make the rendering data available to whoever is responsible for actually drawing it.

If I understand correctly, what's happening in your specific case is that the scene graph is a bit mangled and the object relationships aren't quite clear. First of all, you need to construct some kind of entity class. These entities are the real things that are part of your game world, and have a material and mesh (but not a transform) associated with them. Put these entities in a transform hierarchy, where transforms are internal nodes and entities are leaf nodes. Construct a separate hierarchy that deals with rendering states. Attempting to merge these two trees will turn out badly, trust me.
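A rough sketch of that two-tree layout, under the assumption of invented node types; the render-state hierarchy would be a separate structure built along the same lines:

```cpp
#include <vector>

struct Mesh;      // geometry only
struct Material;  // textures, shaders, and friends

// Leaf node: the real game-world thing, with no transform of its own.
struct Entity {
    Mesh*     mesh;
    Material* material;
};

// Internal node: positions and orients everything beneath it.
struct TransformNode {
    float matrix[16];                       // local transform
    std::vector<TransformNode*> children;   // nested transforms
    std::vector<Entity*>        entities;   // leaves under this transform
};

// Rendering walks the tree depth-first, concatenating matrices on the way
// down; a separate render-state hierarchy decides draw order and state.
```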
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.

