OpenGL Slow Sprite Batch Rendering

Dekowta    269

I'm fairly new to OpenGL but have been working on it for a little while now.

I'm mainly working on a 2D renderer that uses modern OpenGL rather than immediate mode.

I've managed to create a sprite batching system, but I'm not really getting the results I would expect: roughly 30fps when rendering 5,000 of the same sprite object, and around 15fps when rendering 10,000.

At the moment it works as follows:

- The sprite batch is created, which creates the shader, the indices for glMultiDrawArrays, two VBOs and a VAO

- begin is called and sets the alpha mode

- draw is called and checks whether the texture has already been batched; if so, it adds the new points to that batch, otherwise it creates a new batch item and adds the points to it. If the maximum sprite count is reached, the batch is drawn

- end is called and all the currently batched sprites are rendered
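The draw step above amounts to a texture-keyed batch map. A minimal sketch of that idea (the names `SpriteBatch`, `BatchItem` and `Vertex` are illustrative assumptions, not the actual code from the attached file, and the GL texture handle is stood in for by a plain `unsigned`):

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_map>
#include <vector>

// Hypothetical vertex: position + UV, matching the two streams described above.
struct Vertex { float x, y, u, v; };

// One batch item per texture; draw() appends four corner vertices per sprite.
struct BatchItem { std::vector<Vertex> vertices; };

class SpriteBatch {
public:
    // Adds a quad for `texture`; creates a new batch item on first sight.
    void draw(unsigned texture, float x, float y, float w, float h) {
        BatchItem &item = m_batches[texture];   // inserts a new item if absent
        item.vertices.push_back({x,     y,     0.0f, 0.0f});
        item.vertices.push_back({x + w, y,     1.0f, 0.0f});
        item.vertices.push_back({x + w, y + h, 1.0f, 1.0f});
        item.vertices.push_back({x,     y + h, 0.0f, 1.0f});
    }
    std::size_t batchCount() const { return m_batches.size(); }
    std::size_t spriteCount(unsigned texture) const {
        auto it = m_batches.find(texture);
        return it == m_batches.end() ? 0 : it->second.vertices.size() / 4;
    }
private:
    std::unordered_map<unsigned, BatchItem> m_batches;
};
```

At end (or when the maximum sprite count is hit), each batch item's vertices would be uploaded and drawn with one call per texture.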


Cpp file

I have tried a few things to improve performance, but with no success. I suspect it's the method I'm using, and the way I'm using OpenGL, that's causing the issue.

Thanks, Dekowta

ingramb    440
I think your problem may be glMultiDrawArrays. My understanding is that this doesn't really map to the GPU: you will still get one draw call per entry you send to glMultiDrawArrays, and the only benefit is reduced driver overhead. You should be able to send all the sprites as a list of quads, just using GL_QUADS and glDrawArrays. I think this will be much faster.
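The suggestion above boils down to building one contiguous vertex array. A sketch of that (the `Vertex` struct and `buildQuadList` helper are illustrative assumptions):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Vertex { float x, y; };

// Appends one quad (4 vertices) per sprite into a single contiguous array,
// so the whole batch can be submitted with one call:
//   glDrawArrays(GL_QUADS, 0, (GLsizei)vertices.size());
// instead of one implicit draw per entry in glMultiDrawArrays' arrays.
std::vector<Vertex> buildQuadList(const std::vector<Vertex> &positions,
                                  float w, float h) {
    std::vector<Vertex> vertices;
    vertices.reserve(positions.size() * 4);
    for (const Vertex &p : positions) {
        vertices.push_back({p.x,     p.y});
        vertices.push_back({p.x + w, p.y});
        vertices.push_back({p.x + w, p.y + h});
        vertices.push_back({p.x,     p.y + h});
    }
    return vertices;
}
```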

mhagain    13430
Your main problem with this code is the way you're updating your VBO. Calling glBufferSubData in this manner is going to lead to pipeline stalls and flushes, and doing it potentially so many times per frame will make things worse. You need to implement proper buffer streaming to get this working well; have a read of [url=""]this post[/url] for a description of the technique.

Sample code:[code]GLuint bufferid;
int buffersize = 0x400000;
int bufferoffset = 0;

void StreamBuffer (void *data, int batchsize)
{
    glBindBuffer (GL_ARRAY_BUFFER, bufferid);

    // no room left for this batch - orphan the buffer and start over from 0
    if (bufferoffset + batchsize >= buffersize)
    {
        glBufferData (GL_ARRAY_BUFFER, buffersize, NULL, GL_STREAM_DRAW);
        bufferoffset = 0;
    }

    // write-only, and no need to synchronize with ranges already in flight
    GLbitfield access = GL_MAP_WRITE_BIT | GL_MAP_UNSYNCHRONIZED_BIT;

    // note: the third parameter is the LENGTH of the range in bytes
    void *mappeddata = glMapBufferRange (GL_ARRAY_BUFFER, bufferoffset, batchsize, access);

    if (mappeddata)
    {
        memcpy (mappeddata, data, batchsize);
        glUnmapBuffer (GL_ARRAY_BUFFER);
        glDrawArrays ( .... );
        bufferoffset += batchsize;
    }
}[/code]
Ideally you wouldn't memcpy here; you'd generate the batch data directly into the pointer returned from glMapBufferRange instead. That's not always possible though, and memcpy is fine for many (if not most) use cases - the key is avoiding pipeline stalls, and the CPU-side overhead of memcpy is going to be very low by comparison.
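Generating the batch data directly into the mapped pointer might look like this (a sketch; `dst` stands in for the pointer returned by glMapBufferRange, and the `Vertex`/`emitSprite` names are illustrative assumptions):

```cpp
#include <cassert>
#include <cstddef>

struct Vertex { float x, y, u, v; };

// Emits one sprite's quad straight into `dst` (imagined to be the write-mapped
// pointer from glMapBufferRange), avoiding a separate staging array + memcpy.
// Returns the number of vertices written.
std::size_t emitSprite(Vertex *dst, float x, float y, float w, float h) {
    dst[0] = {x,     y,     0.0f, 0.0f};
    dst[1] = {x + w, y,     1.0f, 0.0f};
    dst[2] = {x + w, y + h, 1.0f, 1.0f};
    dst[3] = {x,     y + h, 0.0f, 1.0f};
    return 4;
}
```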

Dekowta    269
Thanks for the reply

I'm having a bit of a hard time understanding how to get buffer streaming set up. So far this is what I understand of what I have to do.

In the section where I loop through the entries in the map I will:

- Bind the texture
- Bind the vertex position buffer
- check to see if the bufferoffset + Batch Count is greater than the buffer size
->if so set the buffer to the size of the batch
->reset the bufferoffset to 0
- get the mapbuffer range pointer
- copy over the batch vertex data (if the vertex data is a pointer, can I just make mappeddata point to it, or do I need to copy the data into mappeddata's address using memcpy?)
- unmap the buffer

- Bind the UV buffer
- do the same as the vertex buffer but with the UV coordinates
- unmap the buffer

- then call draw array with GL_QUADS
- increase the bufferoffset by the batch count

- unbind the texture

Is this still fine to use with the vertex array as well?

Also, the post you linked mentioned buffer orphaning; from what it explained, this happens in the if-statement check?

mhagain    13430
You've pretty much got it, yes. It's really just a simple circular buffer; there's no voodoo in it and the only tricky thing is knowing the correct GL calls to use.

You're going to have some added complexity if you're using separate VBOs for positions and texcoords - I'd recommend that you define a vertex struct containing both and interleave them using a single VBO; it'll perform better and make your code much simpler.
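An interleaved layout might look like this (a minimal sketch; the attribute indices 0 and 1 are assumptions about the shader's layout, not taken from the posted code):

```cpp
#include <cassert>
#include <cstddef>

// One struct holds both attributes, so a single VBO carries everything.
struct Vertex {
    float position[2];
    float texcoord[2];
};

// With one interleaved VBO bound, the attribute pointers would be set up as:
//   glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
//                         (void *)offsetof(Vertex, position));
//   glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
//                         (void *)offsetof(Vertex, texcoord));
// i.e. the stride is the whole struct and each offset is the member's offset.
```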

Yes, buffer orphaning is what happens in the first "if" check; there are also flags on glMapBufferRange that you can use to accomplish this, but I prefer to use glBufferData (... NULL, ...) - not for any technical reason, just so that I can more easily add a fallback to glBufferSubData if the MapBufferRange call fails (using the same offset and size params). I omitted that from the sample code I posted just for clarity.

Not sure what you mean by vertex arrays here. Old-style vertex arrays or newer VAOs? If the former, there's no need to do this kind of process - you're using memory owned and managed by your program (rather than by the GPU) so there's no resource contention to speak of, and you don't need anything special to handle it.

Dekowta    269
Hmm, I don't know what I'm doing wrong at the moment, but it's not drawing correctly, and if I leave it running for a bit glMapBufferRange returns 0 with error 1281 (GL_INVALID_VALUE).

The modified files are below, though I have only really changed the initialise and render functions, as well as packing the vertex data into a single struct.

What it's rendering:

What it should look like (excluding the orange and green character):

Oh, and I meant to say vertex array object rather than just vertex array.

mhagain    13430
You're getting your offsets/etc wrong here - this in particular is not what you want:[code]void* mappedData = glMapBufferRange(GL_ARRAY_BUFFER, m_BufferOffset, m_BufferOffset + currentBatch->spriteCount, access);[/code]

I should probably have stated explicitly that the sample code I gave above works in byte sizes, not numbers of vertexes or numbers of sprites, so as a result things need to be adjusted accordingly if you're going to use other units.

So, the second parameter to glMapBufferRange is an offset in bytes, so make it m_BufferOffset * sizeof (MLBatchItem::Vertex) * 4 instead. The third parameter is the size of the range to map (also in bytes), not the end offset of the full range, so it becomes currentBatch->spriteCount * sizeof (MLBatchItem::Vertex) * 4.

You should also make sure that your value of m_BufferSize is equal to (however many sprites fit in your buffer) * sizeof (MLBatchItem::Vertex) * 4; likewise, memcpy needs a size in bytes, glDrawArrays takes a number of vertexes (not a number of sprites) as its third param, and your offset needs to be incremented by the number of vertexes, not the number of sprites. There may be a few other places I've missed.
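The unit conversions above can be collected into small helpers (a sketch; the `Vertex` struct and the 4-vertices-per-sprite factor mirror the earlier posts, the helper names are illustrative):

```cpp
#include <cassert>
#include <cstddef>

struct Vertex { float x, y, u, v; };          // 16 bytes

const std::size_t VERTS_PER_SPRITE = 4;

// glMapBufferRange's 2nd param: offset in BYTES from the start of the buffer.
std::size_t mapOffsetBytes(std::size_t spriteOffset) {
    return spriteOffset * VERTS_PER_SPRITE * sizeof(Vertex);
}

// glMapBufferRange's 3rd param: LENGTH of the range in bytes, not an end offset.
std::size_t mapLengthBytes(std::size_t spriteCount) {
    return spriteCount * VERTS_PER_SPRITE * sizeof(Vertex);
}

// glDrawArrays' 3rd param: a count of VERTICES, not sprites.
std::size_t drawVertexCount(std::size_t spriteCount) {
    return spriteCount * VERTS_PER_SPRITE;
}
```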

Dekowta    269
Hmm, I seem to have it working, but there are still a few issues.

I have two sprites: A, 128x128 at (100, 100), and B, 230x230 at (400, 100).

When I render just A and B, B renders at A's coordinates; when I render A B B, it does the same again, but the second B renders normally.

I checked the data in the VBO and it seems fine; it holds the data I would expect.

The modified files

Rendering A and B

Rendering A B B

Sorry to keep having issues with it; you've helped so much so far.

