Catering for multiple vertex definitions

rocklobster    415
Hi guys,

I've been wondering how I can expand the number of different vertex structures I support, to increase the flexibility of the data I can load in. Currently my data is structured something like this:

[CODE]
struct Vertex
{
    vec3 pos;
    vec3 norm;
    vec2 uv;
    vec3 tan;
    vec3 binorm;
};

// one glVertexAttribPointer(...) call per attribute, five in total
[/CODE]

But what happens if I load some data which doesn't contain tangents and binormals? I'd be wasting precious space.

I was thinking a possible solution would be something like this:

[CODE]

enum VertexFormat
{
    POS_NORM_TEX = 0,
    POS_NORM_COL = 1,
    POS_NORM_TEX_TAN_BIT = 2,
};

class Vertex
{
public:
    Vertex();
    ~Vertex();
    void PushAttribute(float val) { m_data.push_back(val); }
    void PushAttribute(const vec3& val)
    {
        m_data.push_back(val.x);
        // etc.
    }
    void PushAttribute(const vec2& val) { /* same as above */ }
    float* GetFloatBuffer() { return &m_data[0]; }
private:
    std::vector<float> m_data;
};
[/CODE]

Then the VBO will know the format of all of its vertices as they are added:

[CODE]
class VertexBufferObject
{
public:
    VertexBufferObject(VertexFormat format)
        : m_format(format) {}
    float* GetArray();
    int GetSize();
private:
    std::vector<Vertex> m_vertices;
    VertexFormat m_format;
};
[/CODE]

This would allow the graphics module to bind the correct attributes based on the VertexFormat of the buffer. At the same time, it doesn't guarantee that each Vertex actually matches the specified format, though that won't be a problem if I'm controlling the data coming in from my own asset pipeline.

I want to know if this is a good way to go about things? What have other people done and is there a clearly better way that I'm missing?

Thanks guys,

Rocklobster

Hodgman    51234
Well, you only need [font=courier new,courier,monospace]struct Vertex[/font] for programmatically created vertex data. Vertex data that is loaded from a file can be referred to via [font=courier new,courier,monospace]void*[/font], and can use any layout.

For vertex formats, you can either have a hard-coded enum/list like in your example, and have the file specify a value from that list ([i]usually you don't have too many unique formats, so this will be fairly maintainable[/i]), [i]or[/i], the file can actually encode the vertex format itself, with e.g.[font=courier new,courier,monospace] struct { int offset, type, size, stride, etc; } elements[numElements];[/font] ([i]which would allow people to use new formats without editing your code -- useful on bigger projects with more artists / tech-artists[/i]).

Yep, malicious/corrupt data will do bad things, but you can put the error checking code into the tool that generates your files.

rocklobster    415
Thanks,

I'll probably stick with the hard-coded enum for now because I'm not doing anything large scale. But I'll keep in mind allowing the files to specify their own format for the future.

mhagain    13430
This is a balancing act.

If you get too hung-up about "wasting precious space" then you're going to miss other avenues for optimization; memory usage is not the be-all-end-all and by focussing on that to the exclusion of everything else, you may actually end up running significantly slower. It's easy to fall into this trap because memory usage is something that's directly measurable and not influenced by (many) outside factors, but the reality is quite a bit more complex.

Out of the theoretical and into the practical - let's look at your specific example here.

Changing your vertex format can be [i]expensive[/i]. You may need to unbind buffers, switch shaders, break current batches, upload new uniforms, etc. All of these actions will interrupt the pipeline and - while they won't directly cause pipeline stalls - they will cause a break in the flow of commands and data from the CPU to the GPU. You've got one state change that potentially requires a lot of other state changes as a knock-on, and things can pile up pretty badly. Do it too often per frame and you'll see your benchmark numbers go in the wrong direction.

How often is too often? There's no right-in-all-situations answer to that one; it depends on your program and data.

I'm not saying that you should deliberately set out to unnecessarily use huge chunks of memory here. Quite the opposite; you should instead be making decisions like this based on information obtained through profiling and benchmarking - trying to reduce memory usage in cases like this without that info [i]is[/i] premature optimization. If accepting some extra memory usage is not causing any problems for your program, then just accept it as a tradeoff for reduced state changes - it may well be the right decision for you.

Kaptein    2224
Note that you can calculate the binormal with cross(normal, tangent).
Personally, I make a vertex struct for every usage type, since many things have different needs. For example, a fullscreen shader needs only vertices in [0, 1], and the tex coords can be derived from the vertices! Example:
[CODE]
// tex coords directly
texCoord = in_vertex.xy;
// transformation to screenspace in orthographic projection (0, 1, 0, 1, zmin, zmax)
gl_Position = vec4(in_vertex.xy * 2.0 - 1.0, 0.0, 1.0);
[/CODE]

Unless you are confident this is your bottleneck, you should be in gDEBugger to find out where your performance bottlenecks really are! One neat feature in gDEBugger is rendering in slow motion, so you can see the exact order of operations, including the order your models are rendered in. That way you can, for example, get a real estimate of how effective your z-culling and samples-passed queries (query objects) will be.

Steve_Segreto    2080
I'm going to second mhagain's comment: use a vertex definition that is a superset of all you expect to render, and use as few vertex/pixel shaders as possible, even if you have to superset them too. Creating arbitrary state buckets just to save a few thousand bytes of "precious space" is going to slow down your rendering.

rocklobster    415
[quote name='mhagain' timestamp='1353254851' post='5002055']
This is a balancing act.

[...]

If accepting some extra memory usage is not causing any problems for your program, then just accept it as a tradeoff for reduced state changes - it may well be the right decision for you.
[/quote]

I honestly wasn't trying to prematurely optimize, if it came across that way; I just wanted better flexibility. Wouldn't I be required to swap shaders anyway if I had some geometry coming through that didn't use the shader that handles normal maps (or other things requiring tangent-space data)? For example, say half of my geometry has normal maps or bump maps associated with it, and then I want to render some geometry which doesn't have these things; I'd run into errors trying to process that geometry in a shader that requires a normal map that isn't there. So wouldn't that make it pointless to have bound the extra attributes anyway?

Sorry if I'm way off here; I'm still trying to get the hang of shaders and how this whole process should work.

rocklobster    415
[quote name='Hodgman' timestamp='1353307945' post='5002267']
If you do have a need to use the same shader on both types of objects, it's possible to just use a "flat" 1x1 pixel normal map for objects that don't require normal mapping.
[/quote]

I know you can't really comment on whether or not that would be the best solution for me, but isn't that somewhat of a band-aid solution, even if it's there to prevent shader swaps and setting new uniforms?

DracoLacertae    518
This is the format I use for VBO data, in a plain C application:
[source lang="cpp"]
#define POS_COUNT 3
#define NORMAL_COUNT 3
#define TEXCOORD_COUNT 2
#define COLOR_COUNT 4

typedef struct vertex_data_s {
    int this_size;     // size of this structure
    float* databuffer; // holds all data

    // pointers into data buffer:
    float* pos;        // pointer to pos for 0th vertex
    float* normal;     // pointer to normal for 0th vertex
    float* color;      // pointer to color for 0th vertex
    float* texcoord;   // pointer to 0th texcoord for 0th vertex
    int texcoord_count;
    int stride;        // number of floats to next vertex
    int count;         // number of vertices
} vertex_data_t;
[/source]

In the above, pos, normal, color and texcoord are all staggered pointers into the same databuffer. To access a particular type of data (assume 'd' is a vertex_data_t*), it's just:

[source lang="cpp"]
&d->pos[ d->stride * n ];    // pointer to nth vertex position
&d->normal[ d->stride * n ]; // pointer to nth normal
[/source]

Suppose I have data that has positions and normals, but no colors or texcoords. d->color and d->texcoord will be null, and d->stride will be set to POS_COUNT + NORMAL_COUNT.

What if I want 3 textures? Then d->stride can be POS_COUNT + NORMAL_COUNT + 3 * TEXCOORD_COUNT, and the three texcoords are:

[source lang="cpp"]
d->texcoord[ d->stride * n ]
d->texcoord[ d->stride * n + TEXCOORD_COUNT ]
d->texcoord[ d->stride * n + TEXCOORD_COUNT * 2 ]

// in general:
d->texcoord[ d->stride * n + TEXCOORD_COUNT * tn ]     // the 's'
d->texcoord[ d->stride * n + TEXCOORD_COUNT * tn + 1 ] // the 't'
[/source]

These all get simplified by some macros, so I don't have to think about it. I just have macros like vertex_position( my_vbo, n) to get a pointer to the nth 3d vector (pointer to 3 floats).

With macros, it's easy for me to change the underlying structure and recompile without rewriting everything else. For C++, you could use accessor functions instead of macros; you can also return the correct data types if you do this. For instance, when I use the vertex_position macro, the macro casts the float* into a vertex_3*, which in my app is just 3 floats. So I can do vertex_position(my_vbo, n)->y or vertex_texcoord(my_vbo, n, tn)->y if I want. But in C++, you can do all kinds of other things with accessor functions, such as check for valid values of n, tn, etc., so you can take the same idea, refine it a bit and make it more crash-proof.

This structure also translates easily to OpenGL: call glBufferData with a pointer to d->databuffer and a size of d->stride * sizeof(float) * d->count. All the data is transferred to GL (and probably the graphics card's VRAM) in one quick call. Then when it's time to draw, just call glVertexPointer / glVertexAttribPointer with these pointers, give each one d->stride, and then draw. Fast and easy. If a particular buffer has no normals, colors, etc., its d->normal or d->color pointer will be null and the stride will be smaller: no wasted space.

rocklobster    415
[quote name='DracoLacertae' timestamp='1353367015' post='5002492']
This is the format I use for VBO data, in a plain C application:
[/quote]

Looks very cool, might give that a shot actually.

DracoLacertae    518
I have an alternate form that does structure-of-arrays, but it uses different accessor macros (because access is d->position[POSITION_COUNT*n] instead of d->position[d->stride*n]), and I don't have much use for it anymore because having two layouts is confusing. The idea was that if all the positions are contiguous in memory, they can be updated by the CPU without having to retransfer everything else (like texcoords). But it didn't take long to realize that it was better just to have two vertex_data_t structs: one with positions/normals updated with GL_DYNAMIC_DRAW, and the other with everything else as GL_STATIC_DRAW. All I needed to do was allow multiple of these to be bound to one model, which was not hard at all.

DracoLacertae    518
[quote]Wouldn't this be an ideal situation for vertex streams?[/quote]

I suppose it is. When I was implementing this stuff, I considered many options:

[ ] denotes one VBO, P = position N = normal T = texcoord

interleaved: [PNTPNTPNTPNTPNT] (if model is static, I suspect this is ideal)

separate: [PPPPP] [NNNNN] [TTTTT] (I think this is the worst. The advantage is that T can be kept static, and P or N can be dynamic/streamed; I don't know when you would want to change P without changing N, though. This was also my first VBO implementation, just to get things working!)

sequential: [ PPPPP NNNNN TTTTT] (not sure about this one. I would guess it's the same or worse than interleaved. Not sure how it could be better.)

'twin' [PNPNPNPNPNP] [TTTTTTTTTTT] (If PN are dynamic and T is static, I suspect this is the best for CPU animated models)

Are there any other formats anyone uses? I suppose everyone has some custom attributes they like to pass, but you can just shove those into their own VBO, or extend my struct to allow custom0, custom1, custom2, etc to be part of the stride.

I suppose another interesting question is: Is the cost of having to bind 2 or more VBOs per model almost always less than the saving produced by tagging one VBO as static and the other as dynamic? For instance, given the choice between:

A. dynamic/streaming: [PNPNPNPNPNPNP] static:[TTTTTTTTTTT] ; each render needs to bind 2 VBOs

versus:

B. dynamic/streaming: [PNPNPNPNPNPNPNPTTTTTTTTTTTTTTTT] ; only update PNPNPN area with glBufferSubData; render only binds 1 VBO

will A almost always beat out B? I think so, unless the VBOs being drawn are so short (only a handful of triangles) that you're spending all your time thrashing and binding, in which case you should re-evaluate what's going on.

Hodgman    51234
[quote name='DracoLacertae' timestamp='1353525113' post='5002989']will A almost always beat-out B?[/quote]Depends how your GL driver implements buffer management, but probably ;)

Another reason to use split (non-interleaved) streams is when you need to use the same mesh data with different vertex shaders.
e.g. your render-to-shadow-map shader only requires positions, not normals or tex-coords.
In that case, you might want to use [PPPPPPPP][NTNTNTNT] so that the shadow-shader doesn't unnecessarily have to skip over wasted normal/tex-coord bytes.
I've also seen other engines that simply export [PPPPPPPP] and [PNTPNTPNTPNTPNTPNTPNTPNT] -- so that both shaders are optimal, at the cost of memory ;)


In my engine, the model/shader compilers read in a Lua configuration file, describing their options for laying out vertex data. For any sub-mesh, it first determines which vertex shaders might be used on that sub-mesh, and collects a list of "vertex formats" ([i]i.e. vertex shader input structures[/i]) that the sub-mesh needs to be compatible with. It then uses that list, along with the list of attributes that the artist has authored ([i]e.g. have they actually authored tex-coords?[/i]) and selects an appropriate stream storage format for the data to be exported in.
e.g.
[code]StreamFormat("basicStream",
{
{--stream 0
{ Float, 3, Position },
},
{--stream 1
{ Half, 3, Normal },
{ Half, 2, TexCoord, 0 },
},
})

--these are also used to auto-generate the 'struct Vertex {...}' HLSL code, etc..
VertexFormat("basicVertex",
{
{ "position", float3, Position },
{ "texcoord", float2, TexCoord, 0 },
{ "normal", float3, Normal },
})
VertexFormat("shadowVertex",
{
{ "position", float3, Position },
})

--which stream formats are allowed to be used with which vertex formats
--A.K.A. VertexDeclaration, VAO format descriptor...
InputLayout( "basicStream", "basicVertex" )
InputLayout( "basicStream", "shadowVertex" )[/code]

