Floating

GeForce4 Ti4200 and glReadPixels errors...


Hi, I just bought a new graphics card (GeForce4 Ti4200) and noticed that something no longer works properly: in my game I compute the clipping planes from sampled depth values read with glReadPixels. With my old graphics card it worked fine, but with the new one the pixels I read back are sometimes invalid... and it only happens when I read a pixel that is covered by a child dialog! Is it possible that my new graphics card performs a "scissor" around windows which lie on top of my rendering window? I can't find any other explanation for this strange phenomenon. If I remove all child windows, glReadPixels works fine.

Is there any workaround? I sometimes need my child dialogs to lie on top of my rendering window... Thanks

[edited by - Floating on March 16, 2003 8:39:07 PM]
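For reference, a minimal sketch of the kind of sampling described above, assuming a standard perspective projection and the default glDepthRange(0, 1); SampleEyeDepth and the sample coordinates are illustrative names, not the poster's actual code:

    // Read one depth value at GL window coordinates (x, y) and convert it back
    // to an eye-space distance. Assumes glDepthRange(0, 1) and a standard
    // perspective projection built with the given zNear/zFar planes.
    float SampleEyeDepth(int x, int y, float zNear, float zFar)
    {
        GLfloat d = 1.0f;                   // window-space depth in [0, 1]
        glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &d);

        // d == 1.0 means nothing was drawn at that pixel (it maps to zFar).
        float zNdc = 2.0f * d - 1.0f;       // [0, 1] -> [-1, 1]
        return (2.0f * zNear * zFar) / (zFar + zNear - zNdc * (zFar - zNear));
    }

Scanning the min and max over the ~30 samples then gives tighter near/far values for the next frame's projection matrix.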

The spec says that reading pixel values from occluded areas gives you undefined values, and that applies to any buffer.

As for a workaround, you have already answered your own question: remove everything occluding the window.

Assuming you're using double buffering, you can call glReadBuffer(GL_BACK) to indicate that you'd like to read pixels from the back buffer. If you render your scene and then call glReadPixels *before* you swap buffers (SwapBuffers on Windows), your image will still be pristine in the back buffer, unobscured by anything on your desktop.
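A minimal sketch of that ordering, assuming a typical Win32/WGL setup where hdc is the window's device context and RenderScene is a hypothetical scene-drawing call:

    RenderScene();                          // draw the frame

    glReadBuffer(GL_BACK);                  // read from the (unobscured) back buffer
    GLfloat depth;
    glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

    SwapBuffers(hdc);                       // Win32 call that presents the back buffer

Note that glReadBuffer selects the source for color reads; depth reads always come from the depth buffer, but the important point is the same either way: read before SwapBuffers.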

Having said that, glReadPixels is slow, slow, slow. If you do it on any kind of regular basis it will kill your performance. If you still want to do it periodically, consider setting up a very small -- 64x64 or something -- pbuffer (off-screen rendering context) and rendering the scene into that. Disable texturing, color writes, alpha blending, and anything else you can. It will render faster, glReadPixels will be far faster, and the depth values should still be representative of those in your full-sized image.

[edited by - kronq on March 17, 2003 6:22:47 AM]

As I said in my post, the content of any buffer is undefined for occluded areas, and that includes the back buffer. There are no guarantees that the back buffer contains correct values for occluded areas. If you get correct values out of the back buffer, then you're just lucky, nothing else.

[edited by - Brother Bob on March 17, 2003 6:38:21 AM]

Thank you for the replies.

I am not reading a lot of values (just a sample of 30 points spread over the rendering surface). I read one depth value 30 times, and that is fast enough. As for the workaround, I cannot remove every dialog each time I render the scene and then put them back; remember, I use glReadPixels to compute "dynamic near and far clipping planes" for every rendering cycle...

Anyway, I can't understand the purpose of leaving these covered zones undefined... is it for speed?

I can imagine a lot of situations where you would need to read pixel values (color-coded selection, which I am also using). Is there a real workaround for my case?

[edited by - Floating on March 17, 2003 7:20:01 AM]
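Since color-coded selection also comes up here, a minimal sketch of that technique, assuming the buffer is cleared to black and that objectCount, DrawObject, mouseX, mouseY and viewportHeight are illustrative names:

    // Color-coded picking: draw each object in a flat, unique color, read the
    // pixel under the cursor, and map the color back to an object index.
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_DITHER);                   // dithering would corrupt the encoded IDs

    for (unsigned id = 0; id < objectCount; ++id)
    {
        unsigned key = id + 1;              // reserve 0 for "nothing hit"
        glColor3ub((GLubyte)(key & 0xFF),
                   (GLubyte)((key >> 8) & 0xFF),
                   (GLubyte)((key >> 16) & 0xFF));
        DrawObject(id);                     // hypothetical per-object draw call
    }

    GLubyte p[3];
    glReadPixels(mouseX, viewportHeight - 1 - mouseY,  // GL's origin is bottom-left
                 1, 1, GL_RGB, GL_UNSIGNED_BYTE, p);
    unsigned hit = (unsigned)p[0] | ((unsigned)p[1] << 8) | ((unsigned)p[2] << 16);
    int pickedObject = (hit == 0) ? -1 : (int)(hit - 1);

Reserving 0 for "nothing hit" avoids confusing an object with the cleared background, and of course the same occluded-pixel caveat discussed above applies to this color read as well.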

Among other things, it's to allow the driver to use a unified buffer architecture.

Say you have a huge modelling application with lots of windows for different views of the scene, material editors, preview windows, and so on. Every window needs two frame buffers (front and back), a depth buffer, a stencil buffer, maybe an accumulation buffer for a few of them, and so on; it would consume a HUGE amount of memory just to serve all the windows with buffers. Since only one window can occupy a given pixel, it's a waste of memory to keep one set of buffers per window. A unified buffer architecture instead has one large buffer that serves all windows, which can save a huge amount of memory. That's why reading from occluded areas is undefined: you don't know whether the implementation uses a unified buffer, and OpenGL makes no promises either way.

It looks like Brother Bob is right about obscured pixels being undefined -- I guess I have just been lucky.

If you don't like the pbuffer idea, you can combine your existing approach with the Windows GDI PtVisible call to check whether each sample point lies within the clipping region of the device context (and is thus unobscured). You may need to create your window with WS_CLIPCHILDREN and/or WS_CLIPSIBLINGS for this to work.
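A minimal sketch of that check, assuming hwnd is the rendering window and (glX, glY) is a sample point in GL window coordinates; SampleIsVisible is an illustrative name:

    #include <windows.h>

    // Returns true if the sample point is inside the visible (unclipped) part
    // of the rendering window, i.e. not covered by a child dialog or sibling.
    // The window should be created with WS_CLIPCHILDREN | WS_CLIPSIBLINGS.
    bool SampleIsVisible(HWND hwnd, int glX, int glY)
    {
        RECT client;
        GetClientRect(hwnd, &client);

        HDC hdc = GetDC(hwnd);
        // PtVisible works in client coordinates (origin top-left), while GL
        // window coordinates have their origin at the bottom-left, so flip y.
        BOOL visible = PtVisible(hdc, glX, client.bottom - 1 - glY);
        ReleaseDC(hwnd, hdc);
        return visible == TRUE;
    }

Any sample that fails this test could simply be skipped (or moved) before calling glReadPixels.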

If you decide to call PtVisible, watch out for the following warning (from MSDN): "OpenGL and GDI graphics cannot be mixed in a double-buffered window. An application can directly draw both OpenGL graphics and GDI graphics into a single-buffered window, but not into a double-buffered window."

Thanks for all your replies.

The unified buffer architecture makes sense. As suggested I could use PtVisible, but how would I render to an off-screen buffer? Does this mean I have to render my scene twice (once to the off-screen buffer, once to the screen), or can I also display the off-screen buffer?

How do I initialize the off-screen buffer? (Sorry, this might be a very easy question...)

On PtVisible: it should be safe because it doesn't set or query any actual framebuffer-level data; it simply walks the description of the clipping region (which can be empty, a single rect, or multiple rects) that Windows constructs to keep you from drawing over neighboring windows (and over your own child/sibling windows, if you want that).

On using a pbuffer:
http://www.nvidia.com/view.asp?IO=PBuffers_for_OffScreen
It's an NVIDIA doc, but pbuffers are an ARB extension supported by NVIDIA, ATI, and likely most other major players (if there really are any).

Basically, you do some initialization similar to creating a regular on-screen rendering context (color bits, depth bits, size, etc.), then create a device context for the pbuffer. You can then pick whether you want to render to the screen or to the pbuffer using wglMakeCurrent().
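A rough sketch of that setup, assuming the WGL_ARB_pbuffer and WGL_ARB_pixel_format extensions are available and their entry points have already been fetched with wglGetProcAddress; hWindowDC/hWindowRC stand for the existing window's DC and GL context, and error handling is omitted:

    // Pick a pbuffer-capable pixel format on the window's DC, create a small
    // pbuffer with a depth buffer, and give it its own DC and GL context.
    // The tokens and typedefs come from <GL/wglext.h>.
    int attribs[] = {
        WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
        WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
        WGL_COLOR_BITS_ARB,      16,
        WGL_DEPTH_BITS_ARB,      24,
        0
    };
    int  format = 0;
    UINT count  = 0;
    wglChoosePixelFormatARB(hWindowDC, attribs, NULL, 1, &format, &count);

    HPBUFFERARB hPbuffer   = wglCreatePbufferARB(hWindowDC, format, 64, 64, NULL);
    HDC         hPbufferDC = wglGetPbufferDCARB(hPbuffer);
    HGLRC       hPbufferRC = wglCreateContext(hPbufferDC);

    wglMakeCurrent(hPbufferDC, hPbufferRC);   // render into the pbuffer...
    // ...and switch back when you want to draw to the window again:
    wglMakeCurrent(hWindowDC, hWindowRC);

If the pbuffer context needs the same textures and display lists as the window context, wglShareLists can be used to share them between the two.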

Using a pbuffer would mean rendering the scene twice (if you need to do it every frame), but if you use a tiny buffer with color writes, texturing, etc. turned off, the fill-rate overhead will be insignificant. The limiting factor would then be the geometric complexity of your scene and the efficiency of your rendering pipeline; for any reasonable scene it will be much faster than rendering to a full-sized display.
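As a sketch of that stripped-down second pass, assuming the same scene-drawing function can be reused and sampleX/sampleY are illustrative sample coordinates:

    // Depth-only pass into the small pbuffer: only the depth values matter,
    // so skip everything that only affects color.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_BLEND);
    glDisable(GL_LIGHTING);

    glClear(GL_DEPTH_BUFFER_BIT);
    RenderScene();                            // same geometry as the main pass

    GLfloat depth;                            // read the sparse samples back
    glReadPixels(sampleX, sampleY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);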

Currently you can't copy pixels directly between contexts (the way glCopyPixels does within one context), but you can (at least with NVIDIA drivers) bind a pbuffer to a texture, which you could then render to your display on a single quad. Whether this is faster than doing the extra render into a tiny pbuffer would be something to test.
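If you go the bind-to-texture route, that is the WGL_ARB_render_texture extension; a rough sketch, assuming the pbuffer was created with the WGL_TEXTURE_FORMAT_ARB/WGL_TEXTURE_TARGET_ARB attributes set for a 2D RGBA texture, and that pbufferTexture and DrawFullScreenQuad are illustrative names:

    glBindTexture(GL_TEXTURE_2D, pbufferTexture);    // texture object matching the pbuffer
    wglBindTexImageARB(hPbuffer, WGL_FRONT_LEFT_ARB);

    DrawFullScreenQuad();                            // hypothetical textured-quad helper

    // Release the color buffer before rendering into the pbuffer again.
    wglReleaseTexImageARB(hPbuffer, WGL_FRONT_LEFT_ARB);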

[edited by - kronq on March 17, 2003 11:42:46 PM]
