LWJGL FBO performance

OK, my friends and I finally got the drive, after years of thought, to start making a game for the hell of it. We aren't super worried about performance as long as the frame rate stays around 60.

We've got a nice height map generator running; it's rendered with color, vertex, and index arrays and works great. I then added a GUI to give us easy control over the game and to show the frame rate and other details. Currently the GUI goes through a VBO generation step, and that VBO is then used to render all GUI elements.
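
(For reference, the generation step for one of those buffers looks roughly like the sketch below. This is illustrative rather than our exact code; the vertex_data contents are made up, and it assumes the same arbb/arbv LWJGL aliases used in the snippets further down.)

require 'java'
java_import 'org.lwjgl.BufferUtils'

# Hypothetical 2D integer vertex data (x0, y0, x1, y1, ...).
vertex_data = [0, 0, 0, 32, 128, 0, 128, 32]
buf = BufferUtils.createIntBuffer(vertex_data.length)
buf.put(vertex_data.to_java(:int)).flip

# Create a buffer object and upload the data; GL_DYNAMIC_DRAW_ARB because
# the GUI contents get regenerated as values like the fps readout change.
@vbo = arbb.glGenBuffersARB
arbb.glBindBufferARB(arbv::GL_ARRAY_BUFFER_ARB, @vbo)
arbb.glBufferDataARB(arbv::GL_ARRAY_BUFFER_ARB, buf, arbv::GL_DYNAMIC_DRAW_ARB)
arbb.glBindBufferARB(arbv::GL_ARRAY_BUFFER_ARB, 0)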

I decided it would be neat to render the GUI to an FBO so we can do some fancy effects with it. When I did this, I noticed a significant drop in performance: with only ~2000 vertices drawn for the GUI, the average went from about 650 fps down by about 75 fps. Is it normal for an FBO to render slower than a VBO, even though the FBO quad is only 4 vertices? I have the quad's vertices/colors/texture coords stored at the beginning of the GUI's VBO and draw just those first 4 vertices when putting the FBO texture on screen.

Also, I noticed that Display.update (or swapBuffers) slows things down significantly. I know this is somewhat normal, but is there any way to speed it up in LWJGL? I was hoping for the fps to be in the thousands with so little rendered on screen, but even rendering nothing it caps at ~750 fps.
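
(For anyone wanting to reproduce the measurement: here's a rough sketch of how the swap itself can be timed, and vsync ruled out, using LWJGL's Display and Sys classes. Illustrative only, not our exact loop.)

require 'java'
java_import 'org.lwjgl.opengl.Display'
java_import 'org.lwjgl.Sys'

Display.setVSyncEnabled(false)   # make sure the swap isn't waiting on vsync

t0 = Sys.getTime
Display.update                   # swap buffers + pump window messages
t1 = Sys.getTime

# Sys.getTime is in ticks; getTimerResolution is ticks per second.
ms = (t1 - t0) * 1000.0 / Sys.getTimerResolution
puts format('Display.update took %.3f ms', ms)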

(We are doing all of this in JRuby, by the way. I know that will cost some performance, but none of us really care for Java and we wanted to stay away from C/C++ if possible.)

Code that draws the FBO's texture to the screen (@guiVertices = 4):



# Bind the FBO's color texture and draw it to the screen as a quad,
# using the first 4 vertices of the shared GUI buffers.
glc.glBindTexture(glc::GL_TEXTURE_2D, @fboTexture)
arbb.glBindBufferARB(arbv::GL_ARRAY_BUFFER_ARB, @vbo)
glc.glVertexPointer(2, glc::GL_INT, 0, 0)
arbb.glBindBufferARB(arbv::GL_ARRAY_BUFFER_ARB, @cbo)
glc.glColorPointer(4, glc::GL_UNSIGNED_BYTE, 0, 0)
arbb.glBindBufferARB(arbv::GL_ARRAY_BUFFER_ARB, @tbo)
glc.glTexCoordPointer(2, glc::GL_FLOAT, 0, 0)
glc.glDrawArrays(glc::GL_TRIANGLE_STRIP, 0, @guiVertices)
glc.glBindTexture(glc::GL_TEXTURE_2D, 0)



Code that renders the GUI into the FBO:




# Save state that will change, then retarget rendering at the FBO.
glc.glEnable(glc::GL_TEXTURE_2D)
glc.glBindTexture(glc::GL_TEXTURE_2D, 0)
glc.glPushAttrib(glc::GL_VIEWPORT_BIT | glc::GL_ENABLE_BIT | glc::GL_COLOR_BUFFER_BIT | glc::GL_DEPTH_BUFFER_BIT)
glc.glViewport(0, 0, 1600, 900)   # match the FBO texture size
arbf.glBindFramebuffer(arbf::GL_FRAMEBUFFER, @fbo)

# Clear to transparent white so untouched GUI areas stay see-through.
glc.glClearColor(1.0, 1.0, 1.0, 0.0)
glc.glClear(glc::GL_COLOR_BUFFER_BIT | glc::GL_DEPTH_BUFFER_BIT)
glc.glLoadIdentity

# Draw the GUI geometry into the FBO, textured with the font atlas.
# The draw starts at vertex 4, skipping the screen quad stored at the
# front of the shared buffers.
glc.glEnable(glc::GL_TEXTURE_2D)
arbb.glBindBufferARB(arbv::GL_ARRAY_BUFFER_ARB, @vbo)
glc.glVertexPointer(2, glc::GL_INT, 0, 0)
arbb.glBindBufferARB(arbv::GL_ARRAY_BUFFER_ARB, @cbo)
glc.glColorPointer(4, glc::GL_UNSIGNED_BYTE, 0, 0)
arbb.glBindBufferARB(arbv::GL_ARRAY_BUFFER_ARB, @tbo)
glc.glTexCoordPointer(2, glc::GL_FLOAT, 0, 0)
glc.glBindTexture(glc::GL_TEXTURE_2D, TextureGL.textures["font"].id)
glc.glDrawArrays(glc::GL_TRIANGLE_STRIP, 4, @vboLength)
glc.glBindTexture(glc::GL_TEXTURE_2D, 0)

# Back to the default framebuffer; restore the saved state.
arbf.glBindFramebuffer(arbf::GL_FRAMEBUFFER, 0)
glc.glPopAttrib

# Rebuild the mipmap chain for the FBO texture after each update.
glc.glBindTexture(glc::GL_TEXTURE_2D, @fboTexture)
arbf.glGenerateMipmap(glc::GL_TEXTURE_2D)
glc.glBindTexture(glc::GL_TEXTURE_2D, 0)
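
The actual FBO allocation isn't shown above; for completeness, the setup for @fbo and @fboTexture looks roughly like this sketch (same glc/arbf aliases; the 1600x900 size matches the glViewport call and the mipmapped min filter matches the glGenerateMipmap call; not our exact code):

require 'java'
java_import 'org.lwjgl.BufferUtils'

# Allocate the color texture the FBO renders into (RGBA8, 1600x900).
@fboTexture = glc.glGenTextures
glc.glBindTexture(glc::GL_TEXTURE_2D, @fboTexture)
glc.glTexParameteri(glc::GL_TEXTURE_2D, glc::GL_TEXTURE_MIN_FILTER,
                    glc::GL_LINEAR_MIPMAP_LINEAR)
glc.glTexParameteri(glc::GL_TEXTURE_2D, glc::GL_TEXTURE_MAG_FILTER, glc::GL_LINEAR)
glc.glTexImage2D(glc::GL_TEXTURE_2D, 0, glc::GL_RGBA8, 1600, 900, 0,
                 glc::GL_RGBA, glc::GL_UNSIGNED_BYTE,
                 BufferUtils.createByteBuffer(1600 * 900 * 4))

# Create the framebuffer and attach the texture as its color buffer.
@fbo = arbf.glGenFramebuffers
arbf.glBindFramebuffer(arbf::GL_FRAMEBUFFER, @fbo)
arbf.glFramebufferTexture2D(arbf::GL_FRAMEBUFFER, arbf::GL_COLOR_ATTACHMENT0,
                            glc::GL_TEXTURE_2D, @fboTexture, 0)
status = arbf.glCheckFramebufferStatus(arbf::GL_FRAMEBUFFER)
raise "FBO incomplete: #{status}" if status != arbf::GL_FRAMEBUFFER_COMPLETE
arbf.glBindFramebuffer(arbf::GL_FRAMEBUFFER, 0)
glc.glBindTexture(glc::GL_TEXTURE_2D, 0)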

> When I [render the GUI to an FBO] I noticed a significant drop in performance (average of 650 fps when just drawing ~2000 vertices for the GUI, dropping by about 75 fps).

Don't use FPS to measure performance... use milliseconds:
650 fps == 1.54 ms
575 fps == 1.74 ms

You're talking about a 0.2 ms difference here; against your 60 fps target (a ~16.7 ms frame budget), that's only about 1.2% of a frame.
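
Written out, the conversion is just the reciprocal:

# Frame time in milliseconds for a given fps figure.
def frame_ms(fps)
  1000.0 / fps
end

frame_ms(650)                    # => ~1.54 ms
frame_ms(575)                    # => ~1.74 ms
frame_ms(575) - frame_ms(650)    # => ~0.20 ms

# Relative to a 60 fps frame budget of ~16.7 ms:
(frame_ms(575) - frame_ms(650)) / frame_ms(60)   # => ~0.012, i.e. ~1.2%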

> Is this normal for an FBO to render slower than a VBO?

That doesn't really make sense... you're not using an FBO instead of a VBO. What you're doing is rendering some data into an FBO, and then blitting that rendered data to the screen. 0.2 ms sounds like the kind of overhead you could expect from the blit.
