GeForce4 Ti4200 and glReadPixels errors...

Hi, I just bought a new graphics card (a GeForce4 Ti4200) and noticed that something no longer works properly: in my game I compute the clipping planes from sampled depth values read back with glReadPixels. With my old graphics card it worked fine, but with the new one the pixels read back are sometimes invalid... and it happens only when I read a pixel that is covered by a child dialog!!!

Is it possible that my new graphics card performs a "scissor" around windows lying on top of my rendering window?? I can't find any other explanation for this strange phenomenon. If I remove all child windows, glReadPixels works fine. Is there any workaround? I sometimes need my child dialogs to lie on top of my rendering window... Thanks

[edited by - Floating on March 16, 2003 8:39:07 PM]
The spec says that reading pixel values from occluded areas (and that includes any buffer) gives you undefined values.

As for a workaround, you have already answered your own question: remove everything occluding the window.
Assuming you're using double buffering, you can use glReadBuffer( GL_BACK ) to indicate you'd like to read pixels out of the back buffer. If you render your scene and then call glReadPixels *before* you swap buffers (the GDI SwapBuffers call on Windows), your image will be pristine in the back buffer, unobscured by anything on your desktop.
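A minimal sketch of that ordering, assuming a typical Win32 setup (RenderScene is a placeholder for the application's own draw code):

#include <windows.h>
#include <GL/gl.h>

extern void RenderScene();   // placeholder: the application's draw code

// Sample one depth value from the back buffer *before* swapping,
// so nothing on the desktop has had a chance to touch the pixels.
float SampleDepthBeforeSwap(HDC hdc, int x, int y)
{
    RenderScene();
    glReadBuffer(GL_BACK);           // select the back buffer for pixel reads
                                     // (depth comes from the shared depth buffer)
    GLfloat depth = 1.0f;
    // Note: glReadPixels uses a bottom-left origin, unlike Win32 client coords
    glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
    SwapBuffers(hdc);                // the GDI swap, done after the read
    return depth;
}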

Having said that, glReadPixels is slow, slow, slow. If you do it on any kind of regular basis it will kill your performance. If you still want to do it periodically, consider setting up a very small -- 64x64 or something -- pbuffer (an off-screen rendering context) and render the scene into that. Disable texturing, color writes, alpha blending, and anything else you can. It will render faster, glReadPixels will be way faster, and the depth values should be very representative of those in your full-sized image.
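A sketch of such a stripped-down pass, assuming the small off-screen context is already current (its creation is a separate WGL exercise, sketched at the end of the thread); the state toggles are the ones listed above:

#include <GL/gl.h>

extern void RenderSceneGeometryOnly();  // placeholder: geometry only, no effects

// Render a cheap depth-only version of the scene into a small buffer,
// then read the whole depth block back in one call.
void DepthOnlyPass(float* depthOut, int w, int h)
{
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // no color writes
    glDisable(GL_TEXTURE_2D);                             // no texturing
    glDisable(GL_BLEND);                                  // no alpha blending
    glDisable(GL_LIGHTING);                               // no shading needed
    glEnable(GL_DEPTH_TEST);

    glClear(GL_DEPTH_BUFFER_BIT);
    RenderSceneGeometryOnly();

    // Reading a 64x64-ish block is far cheaper than reading a full frame
    glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, depthOut);

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);      // restore color writes
}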

[edited by - kronq on March 17, 2003 6:22:47 AM]
As I said in my post, the content of any buffer is undefined for occluded areas, and that includes the back buffer. There are no guarantees that the back buffer contains correct values for occluded areas. If you get correct values out of the back buffer, then you're just lucky, nothing else.

[edited by - Brother Bob on March 17, 2003 6:38:21 AM]
Thank you for the replies.
I am not reading a lot of values (just a sample of 30 points spread over the rendering surface). I read one depth value at each of those 30 points, and that's fast enough.
As for the workaround, I cannot remove every dialog every time I render the scene and then put them back. Remember, I use glReadPixels to compute "dynamic" near and far clipping planes on every rendering cycle...

Anyway, I can't understand the purpose of leaving these covered zones undefined... is it for speed?

I can imagine a lot of situations where you would need to read pixel values back (color-coded selection, for example, which I am also using). Is there a real workaround for my case?

[edited by - Floating on March 17, 2003 7:20:01 AM]
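For reference, a sketch of the computation Floating describes: recover eye-space distances from the sampled window-space depths and tighten the planes around them. The inversion assumes a standard perspective projection built with (oldNear, oldFar); the 10% padding is an arbitrary choice:

#include <algorithm>
#include <cfloat>

// Derive tightened near/far planes from depth samples in [0,1]
// (the values glReadPixels returns for GL_DEPTH_COMPONENT).
void FitClipPlanes(const float* depths, int count,
                   float oldNear, float oldFar,
                   float* newNear, float* newFar)
{
    float minZ = FLT_MAX, maxZ = -FLT_MAX;
    for (int i = 0; i < count; ++i)
    {
        if (depths[i] >= 1.0f)
            continue;                       // background / cleared pixel
        // Invert the perspective depth mapping to get eye-space distance
        float ndc  = 2.0f * depths[i] - 1.0f;
        float eyeZ = 2.0f * oldNear * oldFar /
                     (oldFar + oldNear - ndc * (oldFar - oldNear));
        minZ = std::min(minZ, eyeZ);
        maxZ = std::max(maxZ, eyeZ);
    }
    if (minZ > maxZ)                        // no valid samples: keep old planes
    {
        *newNear = oldNear;
        *newFar  = oldFar;
        return;
    }
    *newNear = minZ * 0.9f;                 // 10% padding, arbitrary
    *newFar  = maxZ * 1.1f;
}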
Among other things, it's to allow the driver to use a unified buffer architecture.

Say you have a huge modelling package with lots of windows for different views of the scene, materials, preview windows, and so on. Every window needs two frame buffers (front and back), a depth buffer, a stencil buffer, and a few windows may need an accumulation buffer; giving each window its own set would consume a HUGE amount of memory just to serve the windows with buffers. Since only one window can occupy a given pixel, keeping one buffer per window is a waste of memory. A unified buffer architecture instead has one large buffer that serves all windows, which can sometimes save a huge amount of memory. That's why reading from occluded areas is undefined: you don't know whether the implementation uses a unified buffer, and OpenGL has no way to tell you.
Forgot to say: you can render to an off-screen buffer to determine the near and far clip planes. Off-screen buffers can't be occluded by other windows.
It looks like Brother Bob is right about obscured pixels being undefined -- I guess I have just been lucky.
If you don't like the pbuffer idea, you can keep your existing approach and use the Windows GDI call PtVisible to check whether a sample point is within the clipping region of the device context (and thus unobscured). You may need to create your window with WS_CLIPCHILDREN and/or WS_CLIPSIBLINGS for this to work.
If you decide to call PtVisible, watch out for the following warning (from MSDN): "OpenGL and GDI graphics cannot be mixed in a double-buffered window. An application can directly draw both OpenGL graphics and GDI graphics into a single-buffered window, but not into a double-buffered window."
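A sketch of that test, assuming the GL window's device context and a window created with those clip styles; note that PtVisible takes GDI client coordinates (top-left origin), while glReadPixels expects a bottom-left origin:

#include <windows.h>
#include <GL/gl.h>

// Only trust glReadPixels at points GDI reports as unobscured.
// clientHeight is the client-area height, used to flip the y axis
// between GDI (top-left origin) and OpenGL (bottom-left origin).
bool SampleDepthIfVisible(HDC hdc, int x, int yGdi,
                          int clientHeight, float* depthOut)
{
    if (!PtVisible(hdc, x, yGdi))
        return false;                       // covered by a child dialog

    int yGl = clientHeight - 1 - yGdi;      // convert to GL's origin
    glReadPixels(x, yGl, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, depthOut);
    return true;
}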
Thanks for all your replies

The unified buffer architecture explanation makes sense. As suggested, I could use PtVisible, but how would I render to an off-screen buffer?
Does this mean I have to render my scene twice (once to the off-screen buffer, once to the screen), or can I also display the off-screen buffer?

How do I initialize the off-screen buffer?
(Sorry, this might be a very easy question...)
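On hardware of this era, an off-screen buffer means a pbuffer created through the WGL_ARB_pbuffer and WGL_ARB_pixel_format extensions. A rough sketch of the initialization, assuming wglext.h is available and a normal GL context is already current (all error handling omitted); and yes, under this scheme the scene is rendered twice: once into the pbuffer for the depth samples, once on screen for display, since a pbuffer itself is never visible.

#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"   // WGL_ARB_pbuffer / WGL_ARB_pixel_format declarations

// Create a small 64x64 pbuffer with a depth buffer and return its DC;
// call wglMakeCurrent(pbufferDC, *outRC) before rendering into it.
HDC CreateDepthPbuffer(HDC windowDC, HGLRC* outRC)
{
    // The extension entry points must be fetched at runtime
    PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB =
        (PFNWGLCHOOSEPIXELFORMATARBPROC)wglGetProcAddress("wglChoosePixelFormatARB");
    PFNWGLCREATEPBUFFERARBPROC wglCreatePbufferARB =
        (PFNWGLCREATEPBUFFERARBPROC)wglGetProcAddress("wglCreatePbufferARB");
    PFNWGLGETPBUFFERDCARBPROC wglGetPbufferDCARB =
        (PFNWGLGETPBUFFERDCARBPROC)wglGetProcAddress("wglGetPbufferDCARB");

    const int formatAttribs[] = {
        WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
        WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
        WGL_DEPTH_BITS_ARB,      24,
        0
    };
    int  format     = 0;
    UINT numFormats = 0;
    wglChoosePixelFormatARB(windowDC, formatAttribs, NULL, 1,
                            &format, &numFormats);

    const int pbufferAttribs[] = { 0 };     // no special pbuffer attributes
    HPBUFFERARB pbuffer = wglCreatePbufferARB(windowDC, format,
                                              64, 64, pbufferAttribs);
    HDC pbufferDC = wglGetPbufferDCARB(pbuffer);
    *outRC = wglCreateContext(pbufferDC);   // a separate GL context for it
    return pbufferDC;
}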

