screen capturing / buggy glReadPixels [solved]

Started by
17 comments, last by Falken42 18 years, 1 month ago
Hello all,

My application uses tiled rendering (simply shifting the frustum by subpixels) to upscale the display DPI to the printer's DPI when printing. This works fine, but I seem to have a few weird problems with glClear sometimes not working and glReadPixels acting strangely on various video cards. The problem only shows up if a window obstructs the OpenGL window (for example, my printing status dialog), or if the user switches to a different application while capturing the display.

I presently render the entire scene, then call glReadPixels before calling SwapBuffers. glReadBuffer is also set to GL_BACK, so I don't really understand what the problem is...

On a GeForce 2 Ultra, the area covered by another window was not cleared at all -- almost as if glClear was being ignored for that area of screen space. I managed to work around this problem by manually drawing a quad with the clear color at the farthest Z in orthographic mode instead of calling glClear.

When I tried this on a GeForce 6600 GT, that workaround had no effect, and the capture only contained the contents of the window _before_ the dialog was shown at that location. None of the pixels seemed to have been updated at all by the tiled rendering -- almost as if glReadBuffer were being set to GL_FRONT regardless.

Has anyone had any experience with this? I'm guessing that FBOs might fix the issue, but my application has to work with older cards that don't have FBO support as well, so I'm open to any suggestions...

[Edited by - bpoint on March 5, 2006 10:58:04 PM]
I realize my explanation is a bit hard to understand, so here's what a single screen capture looks like:
[ Edit: problem solved -- removed screenshot ]

The outline of the print status dialog is clearly visible in the frame buffer. When printing, I raise the brightness of the polygon colors during capturing, hence the change in colors.

I tried not calling SwapBuffers during capturing, but that seems to make no difference.
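For reference, the normal capture path (with SwapBuffers) is roughly the following. This is only a simplified sketch: the tile sizes, the pixel buffer, and the RenderSceneForTile helper are placeholders rather than my actual code.

    // Capture one tile: render with the frustum shifted for this tile,
    // then read the back buffer before presenting it.
    void CaptureTile(HDC hDC, int tileW, int tileH, unsigned char *tilePixels)
    {
        RenderSceneForTile();                    // hypothetical helper: draws the shifted view

        glReadBuffer(GL_BACK);                   // read from the back buffer
        glPixelStorei(GL_PACK_ALIGNMENT, 1);     // tightly packed rows
        glReadPixels(0, 0, tileW, tileH, GL_RGB, GL_UNSIGNED_BYTE, tilePixels);

        SwapBuffers(hDC);                        // present (skipping this made no difference)
    }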

I also tried using glCopyTexSubImage2D with glGetTexImage, and that still resulted in the same problem.
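That variant was essentially this (again just a sketch; the texture object and sizes are placeholders):

    // Copy the back buffer into an existing texture, then read the texture
    // back instead of calling glReadPixels on the frame buffer directly.
    glBindTexture(GL_TEXTURE_2D, captureTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, tileW, tileH);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, tilePixels);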

Can anyone explain why another window covering the OpenGL window would affect the reading of the frame buffer? Any ideas on how to work around this?

[Edited by - bpoint on March 5, 2006 11:35:08 PM]
If on Windows, can't you just create another render context, draw to a bitmap, and print the bitmap instead? That should be supported even on GL 1.1.
Pixel values are only well defined for pixels that the rendering context owns. Any part of a window occluded by another window is not owned by the rendering context, and OpenGL does not specify the content of those pixels. Only the content of visible pixels is guaranteed.

If you need to guarantee the content of the entire image at all times, you must, as the AP suggested, render to an off-screen buffer.
So you're suggesting I create a separate render context with the PIXELFORMATDESCRIPTOR flags set to PFD_DRAW_TO_BITMAP instead of PFD_DRAW_TO_WINDOW? I've never done this with OpenGL, but I'll at least give it a try.

I still don't see why it would make a difference since I'm requesting the back buffer and not the front buffer, though...
It doesn't matter what buffer you're reading from; if the rendering context doesn't own the pixel, its content is undefined. Forcing a certain behaviour could potentially force driver developers to implement a sub-optimal driver, sacrificing performance in a situation where other and better solutions exist.
What attributes are you using to create the window?
Quote:Original post by Brother Bob
It doesn't matter what buffer you're reading from; if the rendering context doesn't own the pixel, its content is undefined.

It would seem logical to me that the back buffer of the rendering context wouldn't know whether it owned a pixel or not. :) The back buffer is just an area of memory somewhere (i.e., on the video card) that render commands draw to. It's only when this back buffer is copied into the front buffer that the pixels it owns and doesn't own would matter.

So if this really is true, you're telling me that _any_ application that renders to a texture (similar to what I'm doing) and then uses that texture in a scene will also suffer from the same problem? So, for example, placing a window in front of an application that uses offscreen rendering will disrupt the textures being created within the scene?

Sounds more like a bug (read: bad implementation) to me, though...

I'm going to give the PFD_DRAW_TO_BITMAP a try now and see how it goes.

Deception666: I presume you mean the format descriptor? Nothing special, though:
	// locate a new pixel format
	memset(&pfd, 0, sizeof(pfd));
	pfd.nSize        = sizeof(PIXELFORMATDESCRIPTOR);
	pfd.nVersion     = 1;
	pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
	pfd.dwLayerMask  = PFD_MAIN_PLANE;
	pfd.iPixelType   = PFD_TYPE_RGBA;
	pfd.cColorBits   = bpp;
	pfd.cDepthBits   = zbufBits;
	pfd.cStencilBits = stencilBits;

bpp is usually 32, zbufBits is 24, and stencilBits is zero. And of course, the window class has CS_OWNDC set for the style.
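For completeness, the bitmap path I mentioned above (and am about to try) looks roughly like this. It's just a sketch of the usual CreateDIBSection setup: the tile sizes are placeholders and all error checking is omitted.

    // Render into a DIB section selected into a memory DC.
    BITMAPINFO bmi;
    memset(&bmi, 0, sizeof(bmi));
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = tileW;
    bmi.bmiHeader.biHeight      = tileH;
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void    *bits  = NULL;
    HDC      memDC = CreateCompatibleDC(NULL);
    HBITMAP  dib   = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
    SelectObject(memDC, dib);

    PIXELFORMATDESCRIPTOR pfd;
    memset(&pfd, 0, sizeof(pfd));
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_BITMAP | PFD_SUPPORT_OPENGL;  // single-buffered
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    int fmt = ChoosePixelFormat(memDC, &pfd);
    SetPixelFormat(memDC, fmt, &pfd);
    HGLRC rc = wglCreateContext(memDC);
    wglMakeCurrent(memDC, rc);
    // ...render the tile, then read the pixels straight out of 'bits'...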
Quote:Original post by bpoint
It would seem logical to me that the back buffer of the rendering context wouldn't know whether it owned a pixel or not. :) The back buffer is just an area of memory somewhere (i.e., on the video card) that render commands draw to. It's only when this back buffer is copied into the front buffer that the pixels it owns and doesn't own would matter.

Consider this scenario. You have an application that uses many windows. It could, for example, be a design application where each window provides a different view of the scene, material previews, object previews, high-detail previews, anything. Since you have many windows, chances are there will be a lot of overlap and a lot of memory needed for all the frame buffers.

Now, say you have a driver that wants to optimize memory usage for these kinds of applications and wants to use unified frame buffers: a single front buffer for all windows, a single back buffer for all windows, a single depth buffer, and so on. You can have any number of windows of any size, but the buffers do not use more memory than a single fullscreen window.

This is a perfectly valid thing to do, since the frame buffers are for displaying things, and at most one window can be active at any given pixel at any given time. If the specification required correct content in such situations, unified frame buffers would not be possible to implement.

Quote:Original post by bpoint
So if this really is true, you're telling me that _any_ application that renders to a texture (similar to what I'm doing) and then uses that texture in a scene will also suffer from the same problem? So, for example, placing a window in front of an application that uses offscreen rendering will disrupt the textures being created within the scene?

Any render-to-texture application that relies on the frame buffer would suffer, yes. However, not all render-to-texture applications rely on the frame buffer.

Quote:Original post by bpoint
Sounds more like a bug (read: bad implementation) to me, though...

It is explicitly stated in the specification. It's a feature, not a bug or a bad implementation.

Quote:Original post by bpoint
I'm going to give the PFD_DRAW_TO_BITMAP a try now and see how it goes.

There are other off-screen rendering techniques as well which I forgot to mention in earlier posts, for example pbuffers and PBOs.
I'll be honest, I didn't consider a unified frame buffer. I had presumed that the back buffer was really a separate portion of memory, not even connected to the display.

In any case, I implemented PFD_DRAW_TO_BITMAP -- but there's only one problem. Regardless of how I create my pixel format, I can only get the Microsoft GDI Generic OpenGL renderer. This is worse than before, because I use extensions (i.e., VBOs, multitexturing, etc.) that the generic renderer does not support.

I tried adding PFD_GENERIC_ACCELERATED, but still got the generic renderer anyway.
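For what it's worth, the way I'm checking which renderer I actually ended up with is just a glGetString call; the "GDI Generic" substring test below is my own heuristic, nothing official:

    // After wglMakeCurrent: see which implementation we actually got.
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    if (renderer && strstr(renderer, "GDI Generic"))
    {
        // Microsoft software renderer: no VBOs, no multitexturing, etc.
    }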

I had considered using FBOs in my initial post, but you mentioned that pbuffers and PBOs would work as well. If I remember right, pbuffers are supported on some fairly older cards (my Geforce 2 Ultra seems to have it), so maybe I'll give that a shot...
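For anyone who finds this thread later, the pbuffer path I'm going to try looks roughly like the following. This is only a sketch: it assumes the WGL_ARB_pixel_format / WGL_ARB_pbuffer function pointers have already been loaded with wglGetProcAddress, the window DC and main rendering context names are placeholders, and error checking is omitted.

    // Pick a pbuffer-capable pixel format.
    int pixAttribs[] = {
        WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
        WGL_ACCELERATION_ARB,    WGL_FULL_ACCELERATION_ARB,
        WGL_COLOR_BITS_ARB,      32,
        WGL_DEPTH_BITS_ARB,      24,
        0
    };
    int  format;
    UINT numFormats;
    wglChoosePixelFormatARB(hWindowDC, pixAttribs, NULL, 1, &format, &numFormats);

    // Create the pbuffer and a context for it, sharing objects with the main context.
    int pbAttribs[] = { 0 };
    HPBUFFERARB pbuffer   = wglCreatePbufferARB(hWindowDC, format, tileW, tileH, pbAttribs);
    HDC         pbufferDC = wglGetPbufferDCARB(pbuffer);
    HGLRC       pbufferRC = wglCreateContext(pbufferDC);
    wglShareLists(mainRC, pbufferRC);    // mainRC: the window's rendering context
    wglMakeCurrent(pbufferDC, pbufferRC);
    // ...render the tile and glReadPixels as before; every pixel is owned here...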

What a pain just to capture the screen. *sigh*

