glDrawPixels and GL_DEPTH_TEST


Hi guys, I'm just wondering about the mechanism of glDrawPixels. As I understand it, glDrawPixels renders directly to the framebuffer. Currently I use it to draw a 2D background in an ortho projection. How can I enable depth testing? When I enable GL_DEPTH_TEST, every object is gone. When I disable it, the objects render normally on top of the background. Hope you guys can help. :) Thanks a lot.

glDrawPixels is a 2D blit operation. It is similar to a memcpy from one block of memory to another, except that it uses the GPU's DMA controller. Because of that, depth testing is not supported in 2D mode. To solve the problem you can simply draw the background first.

Hi, I don't quite understand. For instance, I draw a ball on a field (in ortho projection mode). I draw the field with glDrawPixels and then draw the ball (a textured quad) on top of it.

When I enable GL_DEPTH_TEST I cannot see the ball (the ball's Z is actually 0). No matter how I move it in Z toward the near or far plane, the ball cannot be seen.

If I disable GL_DEPTH_TEST then everything is OK.

Therefore, I think glDrawPixels may have some issue with GL_DEPTH_TEST.

The obvious solution is to use a quad as the background and drop glDrawPixels. Are there any other solutions?

glDrawPixels generates fragments, which are subject to all tests and per-fragment operations like any other fragments.

That means that if the depth test is enabled, all fragments generated by glDrawPixels are depth tested according to the current depth function. Fragments that pass are drawn and the depth buffer is updated (unless the depth mask disables writes to it); fragments that fail are discarded and never reach the framebuffer.

That also means that if texturing is enabled, all fragments are colored by the textures on the enabled texture units, according to the current texture coordinates and the current combiner functions, just like any quad, triangle, line, or point you draw.

That also means that if lighting is enabled, fragments are lit according to the current normal. If a positional light is used, the raster position in object space is used as the fragments' position.

This also allows pixel rectangles to use the alpha test to reject transparent parts, or the blending stage to blend the image with the framebuffer. Neither would be possible if it were a direct memory copy.

In short, anything that affects a fragment when drawing a triangle will also affect a fragment generated by glDrawPixels. That is why it's important not to leave states enabled that you're not using (for example depth test, texturing, and lighting, as mentioned above), because they will all come back and bite you when you don't want them while drawing pixel rectangles. The depth used by glDrawPixels is the depth set by the raster position.

I see, so in this case we need to call glRasterPos3f(0.0f, 0.0f, -1.0f) and then glDrawPixels to draw the background at z = -1 first, then draw the ball at z = 0. Is that the idea?

That is correct.

And of course, if all you need is a background image, then disabling the depth test also works if you draw the background first. No depth test will be performed, and the image won't mess up the depth buffer either.

Quote:
Original post by Brother Bob
I think you're confused about pixel transfers in general. glCopyPixels is subject to the same per-fragment tests and operations as glDrawPixels.


Yea, I'm used to low-level graphics APIs that don't do horrible things behind your back. Just read about glDrawPixels and glCopyPixels. What they do internally is truly disgusting.

If you don't want them to do these things, then disable the states you enable along the way, and don't leave arbitrary states enabled at arbitrary points in your code. The default OpenGL state results in a pixel transfer logically equivalent to a direct memory copy. If you need something else, like blending, you're free to enable those states, something that is not possible with a direct memory copy.

Another reason for not doing direct memory copies is a mismatch between the source and destination formats. OpenGL doesn't specify the particular format of the framebuffer anywhere, so it must be ready to convert between formats: swapping the channel order if you supply RGBA data to what is internally a BGRA buffer, expanding intensity data to RGBA, or expanding 16-bit data to 32-bit. A plain memory copy just doesn't work in OpenGL.

Personally, I don't see why pixel-transfer fragments should be treated differently from primitive-generated fragments when the operations and tests are disabled by default and you have the option to enable them if you want them. In my opinion, NOT having the option to use them would be the disgusting thing, if anything.

