Hello Gamedev community.
I'm writing an application that needs to run on cheap boards (single buffering, no FBO support, etc.). To avoid buffer corruption during the swap, I'm thinking of making a copy of the back buffer right before the swap and then copying it back into the back buffer after the swap. I currently have an FBO set up as the main draw buffer, and right before the swap its contents are copied to the back buffer.
To support cheap boards I'll have to save the back buffer contents somewhere other than a framebuffer object, since FBOs aren't supported - probably in a plain array.
My question is, how do I copy the back buffer contents to an array, and then copy the array contents back to the back buffer?
Using glBlitFramebuffer won't work, since it's an EXT feature and it also needs the target to be a framebuffer object (which isn't supported).
Thanks in advance.
Copying buffers to guarantee integrity
I highly doubt that they're really single-buffered unless they're some kind of weird embedded/specialist hardware; even the lowest-end hardware supports double-buffering and has done for donkey's years. I'm also confused that you say FBO is not supported in one sentence, then in the next say you've got an FBO set up.
Anyway, two methods spring to mind. glReadPixels will transfer the current draw buffer contents to system memory, then glDrawPixels will transfer them back. Ugly, slow, needs to stall the pipeline, but will work quite robustly. glCopyTexSubImage2D will copy the buffer to a texture, which must be sufficiently large of course, then you could just draw it as a textured quad post-swap. Fast copy with no pipeline stall, everything stays on the GPU, video RAM overhead, fillrate overhead for the post-swap draw operation.
Thanks for the help, mhagain.
The main problem with cheap boards is back buffer corruption after the swap and no support for FBOs. I've had reports complaining about single-buffered boards, but I don't think I can find a solution for that case.
Yes I have a FBO set up (I've already finished the application) but I want to change it so my application can run on boards without FBO support. I've only mentioned that I have it set up because I thought someone could have a way to just change the FBO swap system that I implemented instead of making another swap system from scratch.
Regarding the texture method, it won't keep the back buffer from getting corrupted after the swap. Unless I had a custom buffer (not an FBO) and my application drew directly into it - but I don't know how to do that in a way that works on boards without FBO support.
The last time I tried to use glReadPixels, the last parameter (the data structure that receives the buffer contents) always ended up being of the wrong type. In some Java examples the data structure used was a 2-dimensional array, but that won't work for me (I'm using Python). Do I need to use a special OpenGL data type?
I still can't manage to use glReadPixels correctly for some reason.
Also, suggestions regarding more efficient methods to make a back buffer backup and then restore it are welcome.
You are probably talking about old Intel boards. They support p-buffers (pixel buffers), which is what came before FBOs.
http://www.opengl.org/registry/specs/ARB/wgl_pbuffer.txt (Windows)
http://www.opengl.org/registry/specs/SGIX/pbuffer.txt (Unix)
The last time I tried to use glReadPixels the last parameter (data to receive the buffer contents) always ended up being of the wrong type. On some Java examples the data structure used was a 2-dimensional array but it won't work for me (I'm using Python). Do I need to use a special OpenGL data type?
I really don't know much about Java or Python data types or constraints so unfortunately I'm not able to give any info there.
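To fill the gap on the Python side: the last parameter of glReadPixels needs a flat, C-contiguous byte array, and in Python that's typically a ctypes array (GLubyte in pyglet/PyOpenGL is just ctypes.c_ubyte underneath). A minimal sketch, with the 4x2 size and the `pixel` helper being purely illustrative:

```python
# Allocating a buffer that glReadPixels can write into from Python.
# GLubyte is ctypes.c_ubyte underneath, so plain ctypes is shown here.
from ctypes import c_ubyte

width, height = 4, 2
components = 3                      # GL_RGB: one byte each for R, G, B

# A flat, zero-initialized byte array - NOT a 2-D Python list.
buffer = (c_ubyte * (components * width * height))()

# glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer)
# would fill it; an individual pixel is then found at a computed offset:
def pixel(buf, x, y, w, c=3):
    i = c * (y * w + x)
    return buf[i], buf[i + 1], buf[i + 2]

print(len(buffer))   # 24
```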
Thanks for the input everyone.
I managed to get the glReadPixels method running, but it isn't really working.
buffer = (GLubyte * (3 * width * height))()
glReadBuffer(GL_BACK)
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer)
flip()
glDrawBuffer(GL_BACK)
glDrawPixels(width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer)
For some reason only the first frame is shown, continually. The flip() does the swap itself (imported from pyglet).
I tried printing the buffer to see whether the back buffer was actually being read, but it also remains constant.
Any ideas?
1) There hasn't been a single-buffered video card since 1998 (on PCs). And the ones that were single-buffered hardly supported OpenGL anyway.
2) There's no back buffer in single buffering. Just the front buffer.
3) Back buffer corruption? a. Why do you care? Do you actually need the results for something important after they've been displayed on the screen? Does it happen once in a while, or very often? b. Back buffer "corruption" is EXTREMELY rare, even on very old and crappy GPUs. You're either confusing your problem with something else, or you're doing something you shouldn't be doing, very, very wrong.
Cheers
Dark Sylinc
1) I'm aware of that, but as I said in my second post, I've had users complaining that my application doesn't run on single-buffered cards. If a user complains, I'll find a way to work around it if I can.
2) I never said there was. Check my second post and you'll see that I said there's probably no way to support single-buffered cards well.
3) a. If you have a slow card that swaps by flipping and an application that draws progressively, the contents of the back buffer after the swap (which are the contents of the front buffer from before the swap) are of no value to you, since you've lost the last frame you just created. In cases where redrawing the scene is unnecessary, you'll get glitches and some flickering. b. Sorry, but no, it isn't rare. Every back buffer can be called "corrupted" after a flip-style swap; you just won't notice it if you redraw your whole scene every time. Do some research.
And thanks everybody for the help - the image was frozen because I forgot to reset the raster position after the glReadPixels. Surprisingly, the pixel transfer rate isn't as slow as I expected, and for small applications I only saw a 3~4 fps drop.
glReadPixels can be made MUCH faster using GL_BGRA and GL_UNSIGNED_INT_8_8_8_8_REV as params if perf ever becomes an issue. However, this needs GL 1.2, and if you're targeting hardware that really doesn't support a double-buffered context (which I remain suspicious about), it's unlikely to support GL 1.2 either. Put it this way - if it can run Quake then it supports double-buffering. That's the original Quake from 1996, not any of the more recent ones.
In cases where redrawing the scene is unnecessary I'd still just redraw the scene. It's a lot simpler that way, you don't have to resort to hacks like this, and it's guaranteed to run on anything.
This topic is closed to new replies.