
Vertex buffer management.


Is there any sensible way to determine a reasonable number of vertex buffers to expect to be able to host? Basically, it seems that it would be crazy to (for example) allocate one for every single tree object on a map containing 100,000 trees. So some sort of least recently used allocation of them would seem sensible. I'm just wondering how to size that pool. Obviously it needs to be greater than the number of renderable objects in the scene at any given point in time to be efficient -- but is there any way to find out if the GL implementation is going to struggle if I allocate 1000 or 100,000 or a million or however many?
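One way to attack the sizing question is to cap the pool at a fixed budget and recycle buffer slots least-recently-used, as the post suggests. Below is a minimal C++ sketch of just the LRU bookkeeping (BufferPool, the mesh ids and slot indices are all hypothetical names; the actual GL buffer creation is omitted, and the real capacity would come from profiling vertex-data memory use rather than from anything the GL reports, since OpenGL doesn't expose a hard limit on the number of buffer objects):

```cpp
#include <cstddef>
#include <list>
#include <unordered_map>
#include <utility>

// Hypothetical fixed-size LRU pool mapping a mesh id to a buffer slot.
// acquire() returns the slot for a mesh, evicting the least recently
// used entry when the pool is full.
class BufferPool {
public:
    explicit BufferPool(std::size_t capacity) : capacity_(capacity) {}

    // Returns the slot index for meshId, marking it most recently used.
    std::size_t acquire(int meshId) {
        auto it = index_.find(meshId);
        if (it != index_.end()) {
            // Already resident: move to the front (most recently used).
            lru_.splice(lru_.begin(), lru_, it->second.first);
            return it->second.second;
        }
        std::size_t slot;
        if (index_.size() < capacity_) {
            slot = index_.size();          // an unused slot is available
        } else {
            int victim = lru_.back();      // least recently used mesh
            slot = index_[victim].second;  // reuse its slot
            index_.erase(victim);
            lru_.pop_back();
            ++evictions_;
        }
        lru_.push_front(meshId);
        index_[meshId] = { lru_.begin(), slot };
        return slot;
    }

    std::size_t evictions() const { return evictions_; }

private:
    std::size_t capacity_;
    std::size_t evictions_ = 0;
    std::list<int> lru_;  // front = most recently used
    std::unordered_map<int,
        std::pair<std::list<int>::iterator, std::size_t>> index_;
};
```

In practice an eviction would mean re-uploading that mesh's vertex data into the reused buffer before drawing; a high eviction count per frame is the signal that the budget is too small.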

You don't need a different vertex buffer for every rendered object! I doubt you will have 100,000 different trees on your map.

Mm. I'm sort of thinking that after I've transformed the object into worldspace, surely it would be sensible to cache that transformed data in a VBO and re-use it next frame.

Otherwise, I don't really see any advantage over arrays if I end up just repeatedly sending the vertex data to the render engine. I thought the whole point of this was to have the data cached (preferably on the video card) so one could just say "Oi, render that again please".

I may be misunderstanding VBs though.

You are sort of correct in what you are assuming, but drawing the wrong conclusion from it. Vertex buffers are for storing vertex data on the graphics card (the optimal place depends on usage; generally speaking, you can place them in system memory as well if the usage pattern requires it), and the point is that you just render them again whenever you need them.

But; if you have the same object at different places (two identical trees at different places in the world, for example) you use the matrices to transform the vertices into their desired location. Same vertex data and different matrices gives you the object at two places. So you draw it twice, but with different matrices.
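To make the "same vertex data, different matrices" point concrete, here is a minimal column-major point transform in C++ (Mat4, translation() and transformPoint() are illustrative helpers, not GL calls; the layout matches what glLoadMatrixf/glMultMatrixf expect). The same local-space vertex lands at two different world positions under two world matrices:

```cpp
#include <array>

// Column-major 4x4 matrix, element at m[col*4 + row] -- the memory
// layout OpenGL's matrix functions expect.
using Mat4 = std::array<float, 16>;

// Build a pure translation matrix (the relevant part of a world
// matrix for this illustration).
Mat4 translation(float tx, float ty, float tz) {
    Mat4 m = { 1,0,0,0,  0,1,0,0,  0,0,1,0,  tx,ty,tz,1 };
    return m;
}

// Transform a point (w = 1) by a column-major matrix.
std::array<float, 3> transformPoint(const Mat4& m,
                                    float x, float y, float z) {
    return {
        m[0]*x + m[4]*y + m[8]*z  + m[12],
        m[1]*x + m[5]*y + m[9]*z  + m[13],
        m[2]*x + m[6]*y + m[10]*z + m[14],
    };
}
```

One copy of the mesh in a VBO, one world matrix per instance, one draw call per instance: the vertex data is never duplicated or re-uploaded.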

Ah... I see. I've been doing the drawing previously by doing full model->worldspace transforms on my objects, and then just rendering the thing into the world -- I've not been pushing/popping matrices at all.

I thought everyone did it like that?

OK, I see: if you're storing source vertex data, then you need one VB per static model and one per instance of each poseable model.

You can even get away with one VB per poseable model (one shared across multiple instances) if you use hardware skinning. The vertex data is then the same and is just transformed in the vertex shader for each instance.

Should I be thinking, these days, that I ought to be using the OpenGL transform matrices then? On the basis that the GL driver can decide where to do the sums? (pick hardware transforms if they're available, for example).

I'm not sure I fully understand your question, but I'll give it a try.

For static models that don't move, you should calculate the world transform yourself and upload this via glLoadMatrix(). Thus you'd only need to calculate the matrix once and reuse it every frame.

For rigid models that do move (i.e. whose world matrix changes) you could either use the OpenGL functions or calculate the matrices yourself. That depends on what you need to do: if your state sorting requires you to work with world matrices only (no stacks), you should calculate them yourself; if you can work with the stack, you'd probably let OpenGL do the work.

For skinning you should calculate the matrices yourself and upload them to the shader as uniforms (or use them in software transformation). You should not recalculate the matrices in the vertex shader, as you'd redo the same calculations for every vertex.

There's also the possibility of letting OpenGL do the matrix calculations and reading back the result, but I doubt that would give a performance boost.
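As a sketch of what "calculate the matrices yourself and upload them as uniforms" feeds into, here is linear-blend skinning reduced to its core in C++. The bones are simplified to plain translation offsets for brevity (real code would use full bone matrices, computed once per frame on the CPU); SkinnedVertex and skin() are hypothetical names:

```cpp
#include <array>
#include <vector>

// Each vertex stores its position plus the indices and weights of the
// bones that influence it; here exactly two bones per vertex.
struct SkinnedVertex {
    std::array<float, 3> position;
    std::array<int, 2>   bone;    // indices of the influencing bones
    std::array<float, 2> weight;  // weights summing to 1
};

// Linear-blend skinning: transform the vertex by each influencing
// bone and blend the results by weight. Bones are reduced to
// translation offsets in this sketch.
std::array<float, 3> skin(
        const SkinnedVertex& v,
        const std::vector<std::array<float, 3>>& boneOffsets) {
    std::array<float, 3> out = { 0, 0, 0 };
    for (int i = 0; i < 2; ++i) {
        const auto& off = boneOffsets[v.bone[i]];
        for (int c = 0; c < 3; ++c)
            out[c] += v.weight[i] * (v.position[c] + off[c]);
    }
    return out;
}
```

With hardware skinning this same blend runs in the vertex shader instead, with the per-frame bone transforms passed in as uniforms; the VBO holding positions, indices and weights never changes.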

"you should calculate the world transform yourself and upload this via glLoadMatrix(). Thus you'd only need to calculate the matrix once and reuse it every frame."

Right, this is what I'm thinking -- in previous projects, I've been doing the multiplication of the model vertices by the matrix myself as well, so what I send via the vertex arrays are world co-ordinates.

So I'm going to need to pull back the initial modelview matrix (which contains the camera transform) and combine it with my model->world transform. But that's only a once-per-frame read operation.

Then I don't have to push/pop the matrices; it's just a straight upload of the result, bind the vertex buffer, bind the index buffer, bind a texture and render the object.

OK. I can see how that's going to be reasonably clean, and it offloads everything possible to the driver so it can pick optimal paths.
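The upload-one-matrix-per-object approach amounts to composing modelview = view * model on the CPU and handing the result to glLoadMatrixf(). A small C++ sketch of just that composition (multiply() and translate() are illustrative helpers; the layout is column-major, as OpenGL expects):

```cpp
#include <array>

// Column-major 4x4 matrix, element at m[col*4 + row].
using Mat4 = std::array<float, 16>;

// r = a * b, column-major.
Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r[col*4 + row] += a[k*4 + row] * b[col*4 + k];
    return r;
}

// Pure translation matrix, standing in for a full view/world matrix.
Mat4 translate(float tx, float ty, float tz) {
    Mat4 m = { 1,0,0,0,  0,1,0,0,  0,0,1,0,  tx,ty,tz,1 };
    return m;
}
```

Each frame you'd compute the modelview once per object and call glLoadMatrixf(mv.data()) before drawing -- no push/pop and no read-back.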

What I do is:

glLoadMatrix(<view matrix>);
for each object
  glPushMatrix();
  glMultMatrix(<world matrix>);
  <render>
  glPopMatrix();

That involves no read-back operation at all.
You could also upload the view matrix only when it has changed, to reduce operations further.
