Working with Sprite Batches

1 comment, last by Vincent_M 9 years, 12 months ago

I'm working with OpenGL ES 2.0 and OpenGL 2.1, and I've run into an issue where I need to sort my vertices for my sprite batch. For example, if I set the depth too high for a sprite whose vertices are at the beginning of the vertex buffer, it'll draw on top (expected), but any alpha borders will draw on top of it too. I did post a thread on this months ago, but wasn't able to get to it at the time and forgot. The solution that was proposed was to sort the indices, which I think is a good idea. Currently, I'm not using a VBO or IBO for vertices or indices. Would VBOs/IBOs be useful at all? I currently use an STL vector for my vertex array and another for my index array, since they're mutable whenever I need to add, remove, reorder, or transform the sprites in any way. I'd like to give each sprite its own matrix to pass to the shader, but then I'd be uploading huge amounts of uniform matrices when I have many sprites onscreen.

So, would it be wise to re-sort my entire index buffer whenever a sprite is added or its depth changes? This would work well for an axis-aligned orthographic camera, but what if the camera rotates? This would happen quite a bit if I ever wanted to use my sprites with a perspective camera. I'd also like to support billboarding with my sprite batch for perspective purposes. If I were to billboard, wouldn't I need to ensure that each billboard is drawn back-to-front each frame (assuming glDepthFunc is set up that way)? What would be a decent way to sort my sprites by depth? Should I just re-order the index buffer (not the vertices) based on signed distance to the camera before drawing them? A dynamic camera could easily change a sprite's (aka, "billboard's") depth on a frame-to-frame basis simply by moving and re-orienting.
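For what it's worth, the "sort the indices, not the vertices" idea can be sketched roughly like this. The `Vec3` and `Sprite` types here are hypothetical stand-ins, not from any actual engine; the depth key is the signed distance of each sprite along the camera's forward axis, so it works for a rotated or perspective camera too:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical minimal types for the sketch.
struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

struct Sprite {
    Vec3 position;
    uint16_t firstVertex; // index of the sprite's first vertex in the batch
};

// Sort sprites back-to-front along the camera's forward axis, then rebuild
// the index buffer. The vertex data itself never moves.
void SortIndices(std::vector<Sprite>& sprites,
                 const Vec3& camPos, const Vec3& camForward,
                 std::vector<uint16_t>& indices)
{
    std::sort(sprites.begin(), sprites.end(),
        [&](const Sprite& a, const Sprite& b) {
            // Signed distance along the view direction; larger = farther.
            Vec3 da = { a.position.x - camPos.x, a.position.y - camPos.y,
                        a.position.z - camPos.z };
            Vec3 db = { b.position.x - camPos.x, b.position.y - camPos.y,
                        b.position.z - camPos.z };
            return Dot(da, camForward) > Dot(db, camForward); // farthest first
        });

    indices.clear();
    for (const Sprite& s : sprites) {
        // Two triangles per quad, relative to the sprite's first vertex.
        const uint16_t v = s.firstVertex;
        uint16_t quad[6] = { v, uint16_t(v + 1), uint16_t(v + 2),
                             uint16_t(v + 2), uint16_t(v + 1), uint16_t(v + 3) };
        indices.insert(indices.end(), quad, quad + 6);
    }
}
```

The rebuilt index vector would then be uploaded (or passed to `glDrawElements` from client memory) each frame the order changes.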

Another issue I'm running into is disabling sprites. What if I don't want to render a sprite? I treat all sprites as a Transform subclass in my code, like models, particle systems, and other entities my scene manager updates and renders. The difference is that a sprite doesn't own its own geometry data, because it's grouped together in a huge batch vector. Would it be wise to set the alpha of a disabled sprite's vertices to zero, or to move its vertices out of the vertex/index vectors until it's re-enabled? Setting alpha to zero can be heavy on mobile hardware, especially for a large sprite, since the fully transparent pixels still get rasterized and blended.
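The "move its vertices out" option can be done cheaply with a swap-and-pop on the batch vector, so the GPU never sees the hidden quad at all. This is only a sketch with a stand-in `Vertex` type; note that it reorders sprites, so any per-sprite bookkeeping (like the index buffer above, or each sprite's recorded vertex offset) has to be updated afterwards:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in vertex layout.
struct Vertex { float x, y, u, v; };

// Remove sprite `spriteIndex` (6 vertices, two triangles) by overwriting it
// with the last sprite's vertices and shrinking the vector.
void RemoveSprite(std::vector<Vertex>& vertices, std::size_t spriteIndex)
{
    const std::size_t first = spriteIndex * 6;
    const std::size_t lastFirst = vertices.size() - 6;
    for (std::size_t i = 0; i < 6; ++i)
        vertices[first + i] = vertices[lastFirst + i]; // self-copy if it's already last
    vertices.resize(lastFirst);
}
```

Re-enabling the sprite is then just appending its 6 vertices back onto the vector.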

Any thoughts on the matter would be really helpful!


The best way to work with sprite batches is to create a huge vertex buffer to put your sprites in. Each frame, you iterate over all the sprites that need to be displayed and insert them into the vertex buffer at an open index. If your buffer gets full, you render and flush the buffer, then rinse and repeat. To have each sprite use a different texture, you can use either a texture array or a texture atlas. Any transformation you want to apply to a sprite should happen at the insertion level, before putting it into the buffer.
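The fill-then-flush loop described above might look something like this sketch. The `Vertex` layout and the `SpriteBatch` class are assumptions for illustration; `Flush()` is where the actual GL calls (`glBufferSubData` followed by `glDrawArrays`) would go, stubbed out here as a counter so the CPU-side logic stands alone:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical vertex layout: position + texture coordinates.
struct Vertex { float x, y, u, v; };

class SpriteBatch {
public:
    explicit SpriteBatch(std::size_t maxSprites)
        : capacity(maxSprites * 6) { vertices.reserve(capacity); }

    // Transforms are applied at insertion (here just position + size);
    // each sprite becomes two triangles, so no index buffer is needed.
    void Add(float x, float y, float w, float h)
    {
        if (vertices.size() + 6 > capacity)
            Flush(); // buffer is full: draw what we have, then reuse it
        Vertex quad[6] = {
            { x,     y,     0.0f, 0.0f },
            { x + w, y,     1.0f, 0.0f },
            { x,     y + h, 0.0f, 1.0f },
            { x,     y + h, 0.0f, 1.0f },
            { x + w, y,     1.0f, 0.0f },
            { x + w, y + h, 1.0f, 1.0f },
        };
        vertices.insert(vertices.end(), quad, quad + 6);
    }

    void Flush()
    {
        // Real code: glBufferSubData(...), glDrawArrays(GL_TRIANGLES, ...).
        ++flushCount;
        vertices.clear();
    }

    std::vector<Vertex> vertices;
    std::size_t capacity;
    int flushCount = 0;
};
```

A `Flush()` at the end of the frame then draws whatever is left in the buffer.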

If some of your sprites will be alpha blended, you can have two vertex buffers: one that holds transparent sprites and one that holds opaque ones.

The same algorithm applies to both. The only difference is that you will want to draw all the opaque sprites first, then draw all the transparent ones on top. To preserve your sprite order, you can turn the z-buffer on and apply a z value to each sprite as you put it in the buffer. That way your sprite order will be preserved regardless of the order in which they are drawn into the framebuffer.
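One way to sketch the opaque/transparent split on the CPU side, before filling the two buffers (the `SpriteRef` type is hypothetical): partition the frame's sprite list so opaque sprites come first, then sort only the blended tail back-to-front by its z value.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-frame draw record for one sprite.
struct SpriteRef { int id; bool transparent; float z; };

void SplitForDrawing(std::vector<SpriteRef>& sprites)
{
    // Opaque sprites move to the front; relative order within each group
    // is kept (stable), which matters if some sprites share a z value.
    auto firstBlended = std::stable_partition(sprites.begin(), sprites.end(),
        [](const SpriteRef& s) { return !s.transparent; });

    // Blended sprites still need back-to-front order (largest z first here,
    // assuming larger z means farther from the camera).
    std::sort(firstBlended, sprites.end(),
        [](const SpriteRef& a, const SpriteRef& b) { return a.z > b.z; });
}
```

The opaque group can then be drawn in any order with depth writes on, and the blended tail drawn afterwards with depth writes off.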

Another thing I forgot to mention: the GPU usually executes commands one or two frames behind the CPU. So one thing you can do is make the vertex buffer double the size. That way you fill the first portion of the vertex buffer on one frame and the second portion on the next. The CPU then does not stall while the GPU is still reading the first portion of the buffer.
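The bookkeeping for that double-sized buffer reduces to picking which half to write each frame; a trivial sketch:

```cpp
#include <cstddef>

// The CPU writes to alternating halves of a buffer allocated at twice the
// per-frame size, so it never touches the half the GPU may still be reading.
// The offset returned here would be passed to glBufferSubData (and used as
// the first-vertex offset when drawing).
std::size_t WriteOffset(unsigned frameIndex, std::size_t halfSizeBytes)
{
    return (frameIndex % 2) * halfSizeBytes; // even frames: 0, odd: second half
}
```

An alternative with the same goal is buffer orphaning (calling `glBufferData` with a null pointer to get fresh storage each frame), which avoids the manual offset tracking.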

Usually for a sprite batch you do not need to waste memory on allocating an index buffer, since every frame you fill the vertex buffer with exactly what you need to display for that frame, especially if you have dynamic sprites that change every frame.

I see, so I should be using a pre-allocated STL vector acting as a memory pool for just the indices? For example, as I add/remove sprites in the scene, I'd modify the sprite batch's vertex buffer accordingly, but I'd just have a pre-allocated pool for the indices. What I could also do is build a vector of valid sprites as I'm frustum culling, and then just sort that. I'd set the threshold somewhat high (say, 128 sprites to start), and then increase it programmatically if needed. This would be done on a per-batch basis as well. Then it's just a matter of sorting.
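Since quad indices follow a fixed pattern, such a pool can be generated once up front and regenerated only when the sprite count passes the current threshold. This is a sketch of that idea (the `IndexPool` class is hypothetical, with the 128-sprite starting threshold from the post; 16-bit indices cap it at 16384 quads):

```cpp
#include <cstdint>
#include <vector>

class IndexPool {
public:
    explicit IndexPool(std::size_t startSprites = 128) { Grow(startSprites); }

    // Returns indices covering at least `spriteCount` quads, growing if needed.
    const std::vector<uint16_t>& For(std::size_t spriteCount)
    {
        if (spriteCount > capacity)
            Grow(capacity * 2 > spriteCount ? capacity * 2 : spriteCount);
        return indices; // draw with the first spriteCount * 6 of these
    }

    std::size_t capacity = 0;

private:
    void Grow(std::size_t sprites)
    {
        capacity = sprites;
        indices.clear();
        indices.reserve(sprites * 6);
        // Fixed pattern: each quad is 4 vertices, indexed as two triangles.
        for (std::size_t q = 0; q < sprites; ++q) {
            uint16_t v = uint16_t(q * 4);
            uint16_t quad[6] = { v, uint16_t(v + 1), uint16_t(v + 2),
                                 uint16_t(v + 2), uint16_t(v + 1), uint16_t(v + 3) };
            indices.insert(indices.end(), quad, quad + 6);
        }
    }

    std::vector<uint16_t> indices;
};
```

This works when sprites are drawn in vertex order; for depth-sorted drawing the pool would instead be rewritten per frame as in the sorting sketch earlier in the thread.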

This topic is closed to new replies.
