# Catering for multiple vertex definitions


## Recommended Posts

Hi guys,

I've been wondering how I can expand the number of different vertex structures I support, to increase the flexibility of the data I can load in. Currently my data is structured something like this:

```cpp
struct Vertex
{
    vec3 pos;
    vec3 norm;
    vec2 uv;
    vec3 tan;
    vec3 binorm;
};

// glVertexAttribPointer(...) x 5
```

But what happens if I load some data that doesn't contain tangents and binormals? It'd be wasting precious space.

I was thinking a possible solution would be something like this:

```cpp
enum VertexFormat
{
    POS_NORM_TEX         = 0,
    POS_NORM_COL         = 1,
    POS_NORM_TEX_TAN_BIT = 2,
};

class Vertex
{
public:
    Vertex();
    ~Vertex();

    void PushAttribute(float val) { m_data.push_back(val); }

    void PushAttribute(vec3& val)
    {
        m_data.push_back(val.x);
        // etc.
    }

    void PushAttribute(vec2& val)
    {
        // same as above
    }

    float* GetFloatBuffer() { return &m_data[0]; }

private:
    std::vector<float> m_data;
};
```

The VBO then knows the format of all of the vertices added to it:

```cpp
class VertexBufferObject
{
public:
    VertexBufferObject(VertexFormat format) : m_format(format) {}

    float* GetArray();
    int    GetSize();

private:
    std::vector<Vertex> m_vertices;
    VertexFormat        m_format;
};
```

This would allow the graphics module to bind the correct attributes based on the VertexFormat of the buffer. At the same time, it doesn't guarantee that each Vertex actually matches the format specified, though that won't be a problem if I'm controlling the data coming in from my own asset pipeline.
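As a sketch of what that binding-by-format step might look like, here is one way the graphics module could map each enum value to an attribute layout. The names (`AttribDesc`, `Layout`, `StrideBytes`) are illustrative, not from any particular engine; the actual `glVertexAttribPointer` calls would loop over the returned list.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical per-attribute description: how many floats it occupies.
struct AttribDesc { int components; };

enum VertexFormat { POS_NORM_TEX, POS_NORM_COL, POS_NORM_TEX_TAN_BIT };

// Return the attribute list for a format; the graphics module can walk
// this and issue one glVertexAttribPointer call per entry.
std::vector<AttribDesc> Layout(VertexFormat f)
{
    switch (f) {
    case POS_NORM_TEX:         return {{3}, {3}, {2}};          // pos, norm, uv
    case POS_NORM_COL:         return {{3}, {3}, {3}};          // pos, norm, col
    case POS_NORM_TEX_TAN_BIT: return {{3}, {3}, {2}, {3}, {3}}; // + tan, binorm
    }
    return {};
}

// Stride in bytes of one tightly packed, interleaved vertex of this format.
std::size_t StrideBytes(VertexFormat f)
{
    std::size_t floats = 0;
    for (const AttribDesc& a : Layout(f))
        floats += a.components;
    return floats * sizeof(float);
}
```

The stride computed here is what you would pass as the `stride` argument when binding each attribute of the interleaved buffer.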

Is this a good way to go about things? What have other people done, and is there a clearly better way that I'm missing?

Thanks guys,

Rocklobster

---
Well, you only need `struct Vertex` for programmatically created vertex data. Vertex data that is loaded from a file can be referred to via `void*`, and can use any layout.

For vertex formats, you can either have a hard-coded enum/list like in your example and have the file specify a value from that list (usually you don't have too many unique formats, so this will be fairly maintainable), or the file can actually encode the vertex format itself, with e.g. `struct { int offset, type, size, stride, etc; } elements[numElements];` (which would allow people to use new formats without editing your code; useful on bigger projects with more artists / tech-artists).
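A self-describing header along those lines might look like the following sketch. The struct and helper names are illustrative; a real header would also carry a type field (`GL_FLOAT`, etc.) and a semantic tag per element.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical self-describing vertex element, stored one-per-attribute
// in the file header instead of a single enum value.
struct VertexElement {
    std::uint32_t offset;  // byte offset within one interleaved vertex
    std::uint32_t size;    // component count, e.g. 3 for a vec3
};

// Build element descriptors for an interleaved float layout from a list
// of per-attribute component counts (e.g. {3, 3, 2} for pos/norm/uv).
std::vector<VertexElement> DescribeLayout(const std::vector<std::uint32_t>& counts)
{
    std::vector<VertexElement> elems;
    std::uint32_t offset = 0;
    for (std::uint32_t c : counts) {
        elems.push_back({offset, c});
        offset += c * static_cast<std::uint32_t>(sizeof(float));
    }
    return elems;
}
// The loader can then loop over elems and call, per element i:
//   glVertexAttribPointer(i, e.size, GL_FLOAT, GL_FALSE, stride, (void*)(uintptr_t)e.offset);
```

Because the descriptors live in the file, artists can introduce a new layout without any engine-side code change, at the cost of a slightly more involved loader.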

Yep, malicious/corrupt data will do bad things, but you can put the error-checking code into the tool that generates your files.

---
Thanks,

I'll probably stick with the hard-coded enum for now because I'm not doing anything large scale. But I'll keep in mind allowing the files to specify their own format for the future.

---
This is a balancing act.

If you get too hung-up about "wasting precious space" then you're going to miss other avenues for optimization; memory usage is not the be-all-end-all and by focussing on that to the exclusion of everything else, you may actually end up running significantly slower. It's easy to fall into this trap because memory usage is something that's directly measurable and not influenced by (many) outside factors, but the reality is quite a bit more complex.

Out of the theoretical and into the practical - let's look at your specific example here.

Changing your vertex format can be expensive. You may need to unbind buffers, switch shaders, break current batches, upload new uniforms, etc. All of these actions will interrupt the pipeline and - while they won't directly cause pipeline stalls - they will cause a break in the flow of commands and data from the CPU to the GPU. You've got one state change that potentially requires a lot of other state changes as a knock-on, and things can pile up pretty badly. Do it too often per frame and you'll see your benchmark numbers go in the wrong direction.

How often is too often? There's no right-in-all-situations answer to that one; it depends on your program and data.

I'm not saying that you should deliberately set out to unnecessarily use huge chunks of memory here. Quite the opposite; you should instead be making decisions like this based on information obtained through profiling and benchmarking - questing to reduce memory usage in cases like this and without this info is premature optimization. If accepting some extra memory usage is not causing any problems for your program, then just accept it as a tradeoff for reduced state changes - it may well be the right decision for you.

---
Note that you can calculate the binormal with cross(normal, tangent).
Personally, I make a vertex struct for every usage type, since many things have different needs.
For example, a fullscreen shader needs only vertices in [0, 1], and the tex coords can be derived from the vertices!
Example:

```glsl
// tex coords directly
texCoord = in_vertex.xy;
// map [0, 1] to clip space, equivalent to an orthographic projection (0, 1, 0, 1, zmin, zmax)
gl_Position = vec4(in_vertex.xy * 2.0 - 1.0, 0.0, 1.0);
```
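The binormal reconstruction mentioned above can be sketched in C++ as a plain cross product. The `vec3` type here is a minimal stand-in; note that a real pipeline usually also stores a handedness sign, since mirrored UVs can flip the bitangent.

```cpp
#include <cassert>

struct vec3 { float x, y, z; };

// Reconstruct the binormal (bitangent) from normal and tangent, saving a
// third vec3 per vertex: binorm = cross(norm, tan).
vec3 Cross(const vec3& a, const vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
```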

Unless you are confident this is your bottleneck, you should be in gDEBugger to find out where your performance bottlenecks really are!
One neat feature in gDEBugger is rendering in slow motion, so you can see the exact order of operations, including the order your models are rendered in.
That way you can, for example, get a real estimate of how effective your z-culling and samples-passed queries (query objects) will be.

---
I'm going to second mhagain's comment: use a vertex definition that is a superset of all you expect to render, and use as few vertex/pixel shaders as possible, even if you have to superset them. Creating arbitrary state buckets just to save a few thousand bytes of "precious space" is going to slow down your render.

---
> This is a balancing act. [...] If accepting some extra memory usage is not causing any problems for your program, then just accept it as a tradeoff for reduced state changes - it may well be the right decision for you.

I honestly wasn't trying to do premature optimization, if it seemed like that; I just wanted better flexibility. Wouldn't I be required to swap shaders anyway if I had some geometry coming through that didn't use the shader that handles normal maps (or other things requiring tangent-space data)? For example, say half of my geometry has normal maps or bump maps, and then I want to render some geometry that doesn't have these things; I'd run into errors trying to process that geometry in a shader that samples a normal map that isn't there. So wouldn't it be pointless for me to have bound the extra attributes anyway?

Sorry if I'm way off here; still trying to get the hang of shaders and how this whole process should work.

---
If you do have a need to use the same shader on both types of objects, it's possible to just use a "flat" 1x1 pixel normal map for objects that don't require normal mapping.
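As a sketch of that "flat" pixel idea: a tangent-space normal of (0, 0, 1) encoded into RGB8 via the usual `n * 0.5 + 0.5` mapping gives the pixel value (128, 128, 255). The encoding helper below is illustrative; uploading the single pixel as a 1x1 texture (`glTexImage2D` with `GL_RGB`) makes un-normal-mapped objects render flat under the normal-mapping shader.

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Encode a unit tangent-space normal into an RGB8 pixel using n * 0.5 + 0.5,
// the same convention normal-map textures use.
std::array<std::uint8_t, 3> EncodeNormal(float x, float y, float z)
{
    auto enc = [](float v) {
        return static_cast<std::uint8_t>((v * 0.5f + 0.5f) * 255.0f + 0.5f);
    };
    return { enc(x), enc(y), enc(z) };
}

// EncodeNormal(0, 0, 1) yields {128, 128, 255}: the classic "flat" normal-map blue.
```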

---
> If you do have a need to use the same shader on both types of objects, it's possible to just use a "flat" 1x1 pixel normal map for objects that don't require normal mapping.

I know you can't really comment on whether or not that would be the best solution for me, but is that somewhat of a band-aid solution, even if it's there to prevent shader swapping and setting new uniforms?
