
OpenGL Slow Sprite Batch Rendering

Recommended Posts

Dekowta    269

I'm fairly new to OpenGL but have been working on it for a little while now.

I'm mainly working on a 2D renderer which uses the more up to date style of OpenGL rather than Immediate mode.

I've managed to create a sprite batching system; however, I'm not really getting the results that I would expect. I'm getting roughly 30fps when rendering 5,000 of the same sprite object and around 15fps when rendering 10,000.

At the moment the way it works is as follows:

- Sprite batch is created, which creates the shader, the indices for MultiDrawArrays, 2 VBOs and a VAO

- begin is called and sets the alpha mode

- draw is called and checks whether the texture has been batched already; if so, it adds the new points to that batch, otherwise it creates a new batch item and adds the points to it. If the maximum sprite count is reached, the batch is drawn

- end is called and all the current batched sprites are rendered
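The steps above can be sketched as a minimal batch accumulator. This is only the bookkeeping side: the names are hypothetical and the actual GL upload/draw calls are omitted so the sketch stands alone.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

struct Vertex { float x, y, u, v; };

struct SpriteBatch {
    static const std::size_t MaxSprites = 5000;
    std::map<std::uint32_t, std::vector<Vertex>> batches; // texture id -> quad vertices
    std::size_t spriteCount = 0;
    std::size_t flushes = 0;

    void Draw(std::uint32_t texture, const Vertex quad[4]) {
        // reuse the batch for this texture, or create a new one
        std::vector<Vertex> &batch = batches[texture];
        batch.insert(batch.end(), quad, quad + 4);
        if (++spriteCount >= MaxSprites)
            Flush();
    }

    void Flush() {
        // a real renderer would upload each batch and issue its draw call here
        ++flushes;
        batches.clear();
        spriteCount = 0;
    }

    void End() { if (spriteCount > 0) Flush(); } // render whatever is still pending
};
```

Note that one flush per texture per frame is the goal: drawing many sprites that share a texture should collapse into a single batch.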


Cpp file

I have tried a few things to improve performance but with no success. I expect it's the method I'm using and the way that I'm using OpenGL that's causing the issue.

Thanks, Dekowta

ingramb    440
I think your problem may be glMultiDrawArrays. My understanding is that this doesn't really map to the GPU: you will still get one draw call per entry you send to glMultiDrawArrays, and the only benefit is reduced driver overhead. You should be able to send all the sprites as a list of quads using just GL_QUADS and glDrawArrays. I think this will be much faster.

mhagain    13430
Your main problem with this code is the way you're updating your VBO. Calling glBufferSubData in this manner is going to lead to pipeline stalls and flushes, and doing it potentially so many times per frame will make things worse. You need to implement proper buffer streaming to get this working well; have a read of [url=""]this post[/url] for a description of the technique.

Sample code:[code]GLuint bufferid;
int buffersize = 0x400000;
int bufferoffset = 0;

void StreamBuffer (void *data, int batchsize)
{
    // write-only, unsynchronized access: we never rewrite a region the GPU may still be using
    GLbitfield access = GL_MAP_WRITE_BIT | GL_MAP_UNSYNCHRONIZED_BIT;

    glBindBuffer (GL_ARRAY_BUFFER, bufferid);

    // no room left for this batch: orphan the buffer and start again from 0
    if (bufferoffset + batchsize >= buffersize)
    {
        glBufferData (GL_ARRAY_BUFFER, buffersize, NULL, GL_STREAM_DRAW);
        bufferoffset = 0;
    }

    // note: the third parameter is the size of the range to map, not its end offset
    void *mappeddata = glMapBufferRange (GL_ARRAY_BUFFER, bufferoffset, batchsize, access);

    if (mappeddata)
    {
        memcpy (mappeddata, data, batchsize);
        glUnmapBuffer (GL_ARRAY_BUFFER);
        glDrawArrays ( .... );
        bufferoffset += batchsize;
    }
}[/code]
Ideally you wouldn't memcpy here; you'd generate the batch data directly into the pointer returned from glMapBufferRange instead. That's not always possible though, and memcpy is fine for many (if not most) use cases - the key is in avoiding pipeline stalls, and the CPU-side overhead from memcpy is going to be very low by comparison.

Dekowta    269
Thanks for the reply

I'm having a bit of a hard time understanding how to get buffer streaming set up. So far this is what I understand on what I have to do.

in the section where I loop through the iterations in the map I will

- Bind the texture
- Bind the vertex position buffer
- check to see if the bufferoffset + Batch Count is greater than the buffer size
->if so set the buffer to the size of the batch
->reset the bufferoffset to 0
- get the mapbuffer range pointer
- copy over the batch vertex data (if the vertex data is a pointer, can I just make mappeddata point to it, or do I need to copy the data into mappeddata's address using memcpy?)
- unmap the buffer

- Bind the UV buffer
- do the same as the vertex buffer but with the UV coordinates
- unmap the buffer

- then call glDrawArrays with GL_QUADS
- increase the bufferoffset by the batch count

- unbind the texture

This is still fine to use with the vertex array as well?

Also, the post that you linked mentioned buffer orphaning; from what it explained, this happens in the if-statement check?

mhagain    13430
You've pretty much got it, yes. It's really just a simple circular buffer; there's no voodoo in it and the only tricky thing is knowing the correct GL calls to use.

You're going to have some added complexity if you're using separate VBOs for positions and texcoords - I'd recommend that you define a vertex struct containing both and interleave them using a single VBO; it'll perform better and make your code much simpler.
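As a minimal sketch of that suggestion (field names are hypothetical), the interleaved vertex might look like this:

```cpp
#include <cassert>
#include <cstddef>

// Position and texcoord interleaved in one struct, so a single VBO carries
// both attribute streams with one stride.
struct SpriteVertex {
    float x, y;   // position
    float u, v;   // texture coordinates
};

// Both attributes then point into the same buffer using the struct's stride
// and field offsets; the setup calls would look something like:
//   glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(SpriteVertex),
//                         (void *) offsetof(SpriteVertex, x));
//   glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(SpriteVertex),
//                         (void *) offsetof(SpriteVertex, u));
```

With this layout there is only one buffer to orphan, map and fill per batch, instead of keeping two VBOs in sync.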

Yes, buffer orphaning is what happens in the first "if" check; there are also flags on glMapBufferRange that you can use to accomplish this, but I prefer to use glBufferData (... NULL, ...) - not for any technical reason, just so that I can more easily add a fallback to glBufferSubData if the MapBufferRange call fails (using the same offset and size params). I omitted that from the sample code I posted just for clarity.

Not sure what you mean by vertex arrays here. Old-style vertex arrays or newer VAOs? If the former, there's no need to do this kind of process - you're using memory owned and managed by your program (rather than by the GPU) so there's no resource contention to speak of, and you don't need anything special to handle it.

Dekowta    269
Hmm, I don't know what I'm doing wrong at the moment, but it's not drawing correctly, and if I leave it for a bit glMapBufferRange returns 0 with error 1281 (GL_INVALID_VALUE).

The modified files are as follows, though I have only really changed the initialise and render functions, as well as packing the vertex data into a single struct.

What it's rendering:

What it should look like (excluding the orange and green character):

Oh, and I meant to say Vertex Array Object rather than just vertex array.

mhagain    13430
You're getting your offsets/etc wrong here - this in particular is not what you want:[code]void* mappedData = glMapBufferRange(GL_ARRAY_BUFFER, m_BufferOffset, m_BufferOffset + currentBatch->spriteCount, access);[/code]

I should probably have stated explicitly that the sample code I gave above works in byte sizes, not numbers of vertexes or numbers of sprites, so as a result things need to be adjusted accordingly if you're going to use other units.

So, the second parameter to glMapBufferRange is an offset in bytes, so make it m_BufferOffset * sizeof (MLBatchItem::Vertex) * 4 instead. The third parameter is the size of the range to map (also in bytes), not the end offset of the full range, so it becomes currentBatch->spriteCount * sizeof (MLBatchItem::Vertex) * 4.

You should also make sure that your value of m_BufferSize is equal to (however many sprites fit in your buffer) * sizeof (MLBatchItem::Vertex) * 4; likewise memcpy needs a size in bytes, glDrawArrays takes a number of vertexes (not a number of sprites) as its third param, and your offset needs to be incremented by the number of vertexes, not the number of sprites. There may be a few other places I've missed.
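To make those unit conversions concrete, here is a minimal sketch; the Vertex struct and helper names are hypothetical stand-ins, assuming 4 vertexes per sprite:

```cpp
#include <cassert>
#include <cstddef>

struct Vertex { float x, y, u, v; }; // stand-in for MLBatchItem::Vertex

const int VertsPerSprite = 4;

// Byte offset for glMapBufferRange's second parameter, when the running
// offset is counted in sprites.
std::size_t MapOffsetBytes(int spriteOffset) {
    return spriteOffset * VertsPerSprite * sizeof(Vertex);
}

// Size of the range to map (glMapBufferRange's third parameter), in bytes;
// this is the size of the current batch, not the end offset of the range.
std::size_t MapSizeBytes(int spriteCount) {
    return spriteCount * VertsPerSprite * sizeof(Vertex);
}

// glDrawArrays wants a first-vertex index and a vertex count, not sprites.
int DrawFirstVertex(int spriteOffset) { return spriteOffset * VertsPerSprite; }
int DrawVertexCount(int spriteCount)  { return spriteCount * VertsPerSprite; }
```

The same rule applies everywhere a GL call wants bytes (buffer sizes, memcpy) versus vertexes (glDrawArrays, the running offset).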

Dekowta    269
Hmm, I seem to have it working, but there are still a few issues.

I have two sprites: A - 128x128 at (100, 100) and B - 230x230 at (400, 100).

When I render just A and B, B will render at the coordinates of A; but when I render A B B, it does the same again, except the second B is rendered normally.

I checked the data in the VBO and it seems to be fine and holds the data that I would expect.

The modified files

Rendering A and B

Rendering A B B

Sorry to keep having issues with it; you've helped so much so far.
