Using One Vertex Format

Started by
6 comments, last by rubicondev 14 years, 2 months ago
I'm thinking about using a single vertex format for every mesh in the game. Are there any major downsides to this? I want the following inside the vertex:

Position
Normal
Texture
Diffuse
Specular
BlendIndices
BlendWeights

I hate the idea of having something like bones inside a level mesh, but I also hate having different classes that do practically the same thing with different vertex formats.
In general, that model doesn't seem efficient in terms of rendering. It suggests, for starters, that you are using immediate mode (glBegin() glVertex3f() glEnd()) instead of VBOs or display lists.

A model format that's suited to fast rendering generally looks something more like:

class Model
{
private:
    GLfloat *mVertices;
    GLfloat *mNormals;
    GLfloat *mTexIds;
    GLushort *mIndices;
};


You pack all "same" data together so that it can be sent to the GPU in batches through VBOs, or at least through glVertexPointer (or the DirectX equivalent).

Taking something like that as a starting data model, it's easier to compose object hierarchies, since child objects would simply call the base class initVBO method and append a few extra VBOs for things like specular/blendIndex/whatever. Your draw routine would likewise call the base class draw and then add whatever extra parameters are necessary for the shader that will render the current object.

[EDIT: although I suppose you could be using interleaved arrays with your current model... In that case, though, it seems the only thing needed would be a virtual function call to initialize the VBO (or whatever) with the correct interleaving.]

[EDIT2: If you're using OpenGL, it might be a good exercise to grab an OpenGL 3.0+ header and re-author your drawing code so that it compiles. A lot of the "slow" rendering modes have been completely removed from the spec. If nothing else it'll force you to rewrite your rendering logic to use faster methods.]

-me
I'm actually using XNA/DirectX.

The problem with that model is that I'm not loading my geometry through a modeling program; I'm building it myself, and there are classes inside the model, like meshes and polygons, with vertices embedded inside the polygon class.

I read that it might be possible to use one vertex format because GPUs are now capable of picking out selected attributes from a vertex instead of fetching the entire thing.
Quote:I hate the idea of having something like bones inside a level mesh but I also hate having different classes that do practically the same thing with different vertex formats.

By "level" mesh, do you mean non-animated (static)?

Also, by "every mesh in the game," do you mean a mixture of static and animated meshes? If so, then the downside is memory allocation (as you seem to understand) as you don't need indices, weights and an array of bone matrices for non-animated meshes.

Is there a specific reason you don't want to use one vertex structure for static meshes and another for animated? The only downside would be 2 shaders, which you'll likely want, in any case.

Please don't PM me with questions. Post them in the forums for everyone's benefit, and I can embarrass myself publicly.

You don't forget how to play when you grow old; you grow old when you forget how to play.

Quote:By "level" mesh, do you mean non-animated (static)?

Also, by "every mesh in the game," do you mean a mixture of static and animated meshes? If so, then the downside is memory allocation (as you seem to understand) as you don't need indices, weights and an array of bone matrices for non-animated meshes.


Yes, I think that should be the only real difference between them at the moment. Weights and indices are the only things I wouldn't use at all.

Quote:Is there a specific reason you don't want to use one vertex structure for static meshes and another for animated? The only downside would be 2 shaders, which you'll likely want, in any case.


Practicality reasons, but really I just wanted to see if what I read was true. I'm sorta obsessed with organization and that seemed like a nice scapegoat.
There's nothing stopping you from using a "fat vertex" that contains every possible field you could ever want; however, you end up wasting a ton of space and bandwidth on unused fields that any given shader doesn't need. That's basically the trade-off. Considering just how much information you need to cram into a vertex to get even basic effects, combining them all together and using the result for everything certainly isn't efficient.
I thought that might be the case, though the trade-off is steeper than I anticipated. But that's all I wanted to know.

Thanks for the tips!
I have two vertex formats in my engine - skinned and not skinned.

The non-skinned, default vertex has

Position
Normal
BiNormal
Colour
2x UV

And all this fits into a 32-byte structure, which is perfectly optimal for the post-transform cache. Most of these fields are compressed in some way, but other than that, I can do practically every effect needed and still be small.

The skinned version became an exception because the code to handle it will always be an exception anyway, and the extra data means it doesn't fit nicely in the cache anymore, hence being special-cased. The rest of the format is the same as the non-skinned one, so it can do all the same effects and use the same widgets to manipulate verts generally.

This model works perfectly for me and I'll probably stick with it forever. Ish.

------------------------------
Great Little War Game

This topic is closed to new replies.
