Double-buffered VBO vs updating parts of a VBO

6 comments, last by guywithknife 12 years, 11 months ago
Hi,

I am currently working on my rendering code (using OpenGL 3.2, in case it matters) and I am wondering which of these approaches would be best (or if any better alternatives exist).

In both methods I store data about each object (position, colour, etc.) in a VBO; let's call it objectsVBO.

Method A, double-buffered VBO:


objectsVBO[0] and objectsVBO[1] are the two VBO buffers.
unsigned int buf = 0;
updatedObjects is an array of all objects, after having been updated by gameplay & physics code

every frame:
    sort updatedObjects back to front, remove objects not in view
    glBufferData to put updatedObjects into objectsVBO[buf]
    glDrawArrays(objectsVBO[buf], 0, num_objects_in_view);
    buf = 1 - buf;

EDIT: Looks like this method may be a better way to accomplish the above.
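
In rough C++, Method A might look something like this sketch (illustrative only; the Vertex layout, the visibleObjects name and the use of GLEW are just my assumptions, and VAO/attribute setup is omitted):

#include <GL/glew.h>
#include <vector>

struct Vertex { float x, y, z; float r, g, b, a; };   // assumed per-object layout

GLuint objectsVBO[2];                                  // the two buffers
unsigned int buf = 0;

void initBuffers()
{
    glGenBuffers(2, objectsVBO);
}

// visibleObjects is already culled and sorted back to front
void drawFrame(const std::vector<Vertex>& visibleObjects)
{
    glBindBuffer(GL_ARRAY_BUFFER, objectsVBO[buf]);

    // Re-specify the whole buffer each frame; GL_STREAM_DRAW hints that the
    // data is used once and then replaced.
    glBufferData(GL_ARRAY_BUFFER,
                 visibleObjects.size() * sizeof(Vertex),
                 visibleObjects.data(),
                 GL_STREAM_DRAW);

    // Attribute/VAO setup omitted; GL_POINTS assumed for point sprites.
    glDrawArrays(GL_POINTS, 0, (GLsizei)visibleObjects.size());

    buf = 1 - buf;                                     // use the other VBO next frame
}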

Method B, updating part of VBO:


objectsVBO is a VBO storing all objects
updatedObjects is an array of objects that have changed, after having been updated by gameplay & physics code

every frame:
    glBufferSubData to put updatedObjects into objectsVBO (possibly only objects that are on screen)
    indices = index of each object in objectsVBO that is on screen, sorted back to front
    glDrawElements(objectsVBO, indices);
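
And Method B might look roughly like this (again only a sketch; changedObjects, the Vertex layout and the separate index VBO are placeholders of mine, and attribute setup is omitted):

#include <GL/glew.h>
#include <utility>
#include <vector>

struct Vertex { float x, y, z; float r, g, b, a; };    // assumed per-object layout

GLuint objectsVBO = 0;   // allocated once with glBufferData for the maximum object count
GLuint indexVBO   = 0;   // element buffer for the visible, sorted objects

// changedObjects pairs each changed object's slot index with its new data;
// visibleIndices is already culled and sorted back to front.
void updateAndDraw(const std::vector<std::pair<GLuint, Vertex>>& changedObjects,
                   const std::vector<GLuint>& visibleIndices)
{
    glBindBuffer(GL_ARRAY_BUFFER, objectsVBO);
    for (const auto& change : changedObjects)
    {
        // Overwrite just this object's slot in place.
        glBufferSubData(GL_ARRAY_BUFFER,
                        change.first * sizeof(Vertex),
                        sizeof(Vertex),
                        &change.second);
    }

    // Upload the sorted visible indices and draw them in one call.
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBO);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                 visibleIndices.size() * sizeof(GLuint),
                 visibleIndices.data(),
                 GL_STREAM_DRAW);

    glDrawElements(GL_POINTS,
                   (GLsizei)visibleIndices.size(),
                   GL_UNSIGNED_INT,
                   nullptr);
}

Batching contiguous changes into fewer glBufferSubData calls would presumably be better than one call per object, but this shows the idea.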


Method A means only the objects to be drawn ever need to be pushed to the GPU, and since it's double buffered, hopefully the next frame can be prepared before the current frame is fully rendered.

Method B means only objects which have actually changed need to be pushed to the GPU, while unchanged objects are still there from the previous frame, but I do not have double buffering.

Both methods only render what's on screen and render back to front as required. Static geometry will be kept in a separate VBO in both cases.

Any help or ideas appreciated as I don't have an awful lot of experience with 3D rendering yet.
Or, store exactly one copy of each model in your VBO and just loop through your models setting up matrices. The point of VBOs is to stop talking to the GPU so much. You can also store everything in one bigger VBO so that you don't have to bind separate VBOs for different models.
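
For example, something like this sketch (ModelRange, Instance and the uniform location are hypothetical names; matrix building and VAO setup are omitted):

#include <GL/glew.h>
#include <vector>

struct ModelRange { GLint firstVertex; GLsizei vertexCount; };  // location of one model in the shared VBO

struct Instance { const ModelRange* model; float modelMatrix[16]; };

// Assumes the program with modelMatrixUniform is already in use.
void drawScene(GLuint sharedVBO, GLint modelMatrixUniform,
               const std::vector<Instance>& instances)
{
    glBindBuffer(GL_ARRAY_BUFFER, sharedVBO);   // one bind serves every model
    // Attribute setup omitted (VAO assumed to be configured already).

    for (const Instance& inst : instances)
    {
        // Only the matrix changes per object; the geometry stays on the GPU.
        glUniformMatrix4fv(modelMatrixUniform, 1, GL_FALSE, inst.modelMatrix);
        glDrawArrays(GL_TRIANGLES, inst.model->firstVertex, inst.model->vertexCount);
    }
}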


"just loop through your models setting up matrices"

Or pass the matrices in through vertex attributes to avoid modifying uniforms, which is more expensive.
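
For example (a rough sketch; the matrixVBO name and attribute locations 4 to 7 are assumptions, and glVertexAttribDivisor is core in GL 3.3 or available via ARB_instanced_arrays on 3.2):

#include <GL/glew.h>

// Set up a per-instance mat4 attribute: a mat4 occupies four consecutive vec4
// attribute slots, and the divisor makes it advance once per instance.
void setupInstanceMatrixAttribute(GLuint matrixVBO)
{
    glBindBuffer(GL_ARRAY_BUFFER, matrixVBO);       // tightly packed 4x4 float matrices

    for (int col = 0; col < 4; ++col)
    {
        // Assumes the shader's mat4 attribute was bound to location 4
        // (e.g. via glBindAttribLocation before linking).
        GLuint loc = 4 + col;
        glEnableVertexAttribArray(loc);
        glVertexAttribPointer(loc, 4, GL_FLOAT, GL_FALSE,
                              16 * sizeof(float),
                              (const void*)(col * 4 * sizeof(float)));
        glVertexAttribDivisor(loc, 1);              // advance per instance, not per vertex
    }
}

The draw is then a single glDrawArraysInstanced or glDrawElementsInstanced call with the instance count.
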
Well, that works fine for any static or almost-static geometry. I was planning to do that in those cases since, like you said, the geometry then only needs to be uploaded once.
I am looking at these two options for dynamic stuff that will change almost every frame, like sprites and particles, where the geometry is being modified from frame to frame instead of just the matrices. In this case I was thinking of tightly packed VBOs (e.g. positions of point sprites/billboards) which then get batch-rendered by a single call to glDrawArrays or glDrawElements.
If doing it for particles, then keep a VBO for each instance. So, say, 50 particles max for your fire effect. The buffer would be 50 positions and 50 sizes (in world coordinates) for each fire in your game. For any particles that are dead, mark their size as 0 and just draw them anyway. Just upload to one buffer and then use it to draw. I don't see a benefit to double buffering unless someone says otherwise, but I highly doubt it.
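
In code that might look something like this (Particle, FireEmitter and MAX_PARTICLES are just illustrative names; attribute setup is omitted):

#include <GL/glew.h>

struct Particle { float position[3]; float size; };   // world-space position and size

const int MAX_PARTICLES = 50;

struct FireEmitter
{
    GLuint   vbo;                                      // one VBO per emitter instance
    Particle particles[MAX_PARTICLES];
};

void uploadAndDraw(FireEmitter& fire)
{
    // Dead particles keep size 0, so they rasterize to nothing but can still
    // be drawn in the same call without compacting the array.
    glBindBuffer(GL_ARRAY_BUFFER, fire.vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(fire.particles), fire.particles,
                 GL_STREAM_DRAW);

    // The vertex shader would write the size attribute to gl_PointSize.
    glDrawArrays(GL_POINTS, 0, MAX_PARTICLES);
}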


Double buffering isn't needed; you just have to make sure to call glBufferData with NULL first so the driver can allocate new memory and doesn't have to stall.

With OpenGL 3 you have access to transform feedback, which lets you "render" vertices from a vertex shader into a buffer object.
This lets you calculate all the particle movement on the GPU, and several million particles are no problem (sorting on the GPU is also possible, but a bit more complicated).
PS: When using OpenGL 3 you can use buffer objects for your shader uniforms; take a look at http://www.opengl.org/wiki/Uniform_Buffer_Object.
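
For example, the orphaning trick would look roughly like this (the buffer target, size and usage hint are just placeholders):

#include <GL/glew.h>

// Re-specify the buffer with NULL so the driver can hand back fresh storage
// instead of stalling on data the GPU may still be reading, then refill it.
void orphanAndRefill(GLuint vbo, const void* data, GLsizeiptr sizeInBytes)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeInBytes, NULL, GL_STREAM_DRAW);   // orphan the old storage
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeInBytes, data);             // upload this frame's data
}
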
Have a look at some examples of hardware instancing on the GPU via GLSL shaders.
Basically, you can pass a page's worth of matrices to the GPU in one call, representing a bunch of unique object instances based on a single static VBO.
Try to use transforms to manipulate your geometry instead of manipulating the vertex buffer.

Double buffering isn't needed; you just have to make sure to call glBufferData with NULL first so the driver can allocate new memory and doesn't have to stall.

With OpenGL 3 you have access to transform feedback, which lets you "render" vertices from a vertex shader into a buffer object.
This lets you calculate all the particle movement on the GPU, and several million particles are no problem (sorting on the GPU is also possible, but a bit more complicated).
PS: When using OpenGL 3 you can use buffer objects for your shader uniforms; take a look at http://www.opengl.org/wiki/Uniform_Buffer_Object.

Thanks! This is the kind of information I was hoping to find.


Have a look at some examples of hardware instancing on the GPU via GLSL shaders.
Basically, you can pass a page's worth of matrices to the GPU in one call, representing a bunch of unique object instances based on a single static VBO.
Try to use transforms to manipulate your geometry instead of manipulating the vertex buffer.

Yes, this seems like a good idea for complex dynamic objects. Thanks. So, basically I would store the matrices in a buffer object (like what Danny02 suggests) and then render the VBO with glDrawElementsInstanced for however many instances I need, using the instance ID to index the buffer object of matrices? Is that correct?
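
Something like the following is what I have in mind (just a sketch; the uniform block layout, the 128-matrix array size, the binding point and the shader names are my own guesses, and the glGetUniformBlockIndex/glUniformBlockBinding setup is omitted):

#include <GL/glew.h>

// GLSL 1.50 vertex shader fragment (illustrative). The array size is bounded
// by GL_MAX_UNIFORM_BLOCK_SIZE (guaranteed at least 16 KB, i.e. 256 mat4s).
static const char* vertexShaderSource = R"GLSL(
#version 150
layout(std140) uniform InstanceData {
    mat4 modelMatrices[128];
};
uniform mat4 viewProjection;
in vec3 position;
void main() {
    gl_Position = viewProjection * modelMatrices[gl_InstanceID] * vec4(position, 1.0);
}
)GLSL";

void drawInstances(GLuint matrixUBO, GLsizei indexCount, GLsizei instanceCount)
{
    // Bind the matrix buffer to the uniform block's binding point (0 assumed).
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, matrixUBO);

    // One call draws every instance; gl_InstanceID selects the matrix.
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                            nullptr, instanceCount);
}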

Finally, what about simple objects like point sprites (i.e. the VBO only contains a small amount of information, e.g. position and colour, per vertex/object, but stores many objects instead of a lot of geometry for a single model)? For particles, Danny02's suggestion of handling everything on the GPU sounds appealing, but what about things that cannot be handled entirely on the GPU, e.g. because they are physics-enabled? I take it I can use transform feedback here too?

I'm thinking something like the following:

Each vertex in a VBO is one object, containing whatever parameters are required.
Store events (generated from user interaction, collisions, etc.) into a buffer object
BeginTransformFeedback()
DrawArrays() - the vertex shader now calculates positions, animation and so on for each object, using the events buffer object to add external stimuli where needed.
EndTransformFeedback()
DrawArrays with the written buffer to render it
MapBuffer to read back the written values (for input to collision detection code, user interaction, or whatever)
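
In code I imagine the update pass looking roughly like this (a sketch only; it assumes the update program's vertex shader outputs were registered with glTransformFeedbackVaryings before linking, and attribute setup for the input buffer is omitted):

#include <GL/glew.h>

void updateObjectsOnGPU(GLuint updateProgram,
                        GLuint currentStateVBO,    // read: last frame's per-object state
                        GLuint nextStateVBO,       // written via transform feedback
                        GLsizei objectCount)
{
    glUseProgram(updateProgram);
    glEnable(GL_RASTERIZER_DISCARD);               // update pass only, no fragments needed

    glBindBuffer(GL_ARRAY_BUFFER, currentStateVBO);            // attribute setup omitted
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, nextStateVBO);

    glBeginTransformFeedback(GL_POINTS);
    glDrawArrays(GL_POINTS, 0, objectCount);       // vertex shader computes the new state
    glEndTransformFeedback();

    glDisable(GL_RASTERIZER_DISCARD);

    // Read back the written values, e.g. for collision detection.
    glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, nextStateVBO);
    const void* results = glMapBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, GL_READ_ONLY);
    // ... use results ...
    glUnmapBuffer(GL_TRANSFORM_FEEDBACK_BUFFER);
}

The rendering pass would then bind nextStateVBO as the vertex source and draw it as usual, swapping the two buffers each frame.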


Does that sound like a reasonable procedure? I'm especially wondering how best to handle the external stimuli. Perhaps a good way would be to use glDrawElementsInstanced so I can use the instance ID to look up stimuli/events for a single object. My first thought was to use the instance ID as the y coordinate in a 2D sampler (where the x coordinates contain the event information for that object) and then make sure that unused event texels are set in a way that applying them does nothing (an identity function). Is that a reasonable way to do this, or is that crazy (and potentially slow)?

Thanks for all the help everyone, I'm learning lots of new things, which is what I was hoping for.

