24-bit texture fetch in shader

5 comments, last by dpadam450 10 years, 10 months ago

Hi,
I want to get the depth value from a depth texture, which is initialized like this:

------------------------------------------------------------

glGenTextures(1,&pass2_depthImage);
glBindTexture(GL_TEXTURE_2D, pass2_depthImage);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
------------------------------------------------------------
I wonder how to get the depth value from this image in a shader. I use this:
float3 sampleZ = tex2D(depthMap, projCoord.xy).xyz;
float3 bitShifts = float3(1.0/(256.0*256.0), 1.0/256.0, 1);
float depth = dot(sampleZ,bitShifts);
based on the idea that the 24-bit depth value is spread across the channels. However, it does not work, as it may use a 24-bit float value or something.
I also tried initializing the texture like this:
glGenTextures(1,&pass2_depthImage);
glBindTexture(GL_TEXTURE_2D, pass2_depthImage);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
and I get the depth like this:
float depth = tex2D(depthMap, projCoord.xy).x;
It still does not work.
The RGB channel values are the same as when I initialize the depth texture with the GL_DEPTH_COMPONENT24 internal format.
The depth image is attached for rendering like this:
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, pass2_depthImage, 0);
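
For reference, a minimal sketch of the depth-only FBO setup assumed throughout this thread; the handle name fbo is hypothetical, everything else follows the code above:
------------------------------------------------------------
// Needs <stdio.h> and an FBO-capable GL header (e.g. GLEW).
GLuint fbo;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

// Attach the 1024x1024 depth texture created above.
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, pass2_depthImage, 0);

// No color attachment: tell GL we only render depth.
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
    fprintf(stderr, "depth FBO is incomplete\n");
------------------------------------------------------------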


Attach a color buffer as well and display that to verify that the scene is rendering properly into the depth texture, and is not pointed in some random direction/camera location that is viewing nothing.

If that works then you know the depth is being written properly and that it is actually rendering into that depth texture.
float3 sampleZ = tex2D(depthMap, projCoord.xy).xyz;

float3 bitShifts = float3(1.0/(256.0*256.0), 1.0/256.0, 1);
float depth = dot(sampleZ,bitShifts);

Why are you performing a dot product on a depth value? sampleZ is the depth texture's z component; what are you trying to achieve here?
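
The color-buffer check suggested above could look roughly like the following sketch; colorDebugTex and fbo are hypothetical names for a GL_RGBA8 texture of the same size and for the FBO holding the depth texture:
------------------------------------------------------------
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, colorDebugTex, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);

// Render the depth pass as usual, then draw colorDebugTex on a
// fullscreen quad to confirm the camera is actually seeing the scene.
------------------------------------------------------------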

NBA2K, Madden, Maneater, Killing Floor, Sims http://www.pawlowskipinball.com/pinballeternal

The scene renders well and the color buffer is reasonable; there are no depth errors such as occluded objects showing through.

I used that based on a naive understanding of the GL_DEPTH_COMPONENT24 format. I took it as: depth = (r << 16) + (g << 8) + b, which is wrong.

And I still wonder how to fetch a texture with a 24-bit internal format.

I use the plain GL_DEPTH_COMPONENT internal format as well (I think that means an 8-bit internal format, is that right?), and it still does not work. The depth image looks the same as with GL_DEPTH_COMPONENT24.
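
One way to inspect what actually lands in the depth attachment is to read it back on the CPU; a debug sketch, assuming the FBO holding pass2_depthImage is bound for reading:
------------------------------------------------------------
// Needs <stdio.h>.
float depths[16];

// Read a 4x4 block from the middle of the 1024x1024 depth attachment.
// With GL_DEPTH_COMPONENT24 the values come back as floats in [0,1].
glReadPixels(510, 510, 4, 4, GL_DEPTH_COMPONENT, GL_FLOAT, depths);

for (int i = 0; i < 16; ++i)
    printf("%f\n", depths[i]);
------------------------------------------------------------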

There's no such thing as a 24-bit texture.

When you request a 24-bit internal format, what you actually get is 32 bits with the extra 8 either unused or set to 255, depending on context.

In the case of a depth texture, you are also getting a 32-bit texture; the extra 8 bits this time are either unused or used for stencil.

So I'm going to suggest trying (r << 24) + (g << 16) + (b << 8) first. If that works - happy days. If it doesn't, try (b << 24) + (g << 16) + (r << 8), because your GPU may prefer to store the components in that order.
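
One way to check what the driver actually allocated for the texture is to query it directly; a sketch, with variable names following the first post:
------------------------------------------------------------
// Needs <stdio.h>.
GLint depthBits = 0, internalFormat = 0;
glBindTexture(GL_TEXTURE_2D, pass2_depthImage);

glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_DEPTH_SIZE, &depthBits);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);

printf("depth bits: %d, internal format: 0x%x\n", depthBits, internalFormat);
------------------------------------------------------------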

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

No, there is no bit shifting. I use GL_DEPTH_COMPONENT, which I believe matches the internal format requested at OpenGL context creation time; mine is 24 (I may have changed it to 32 recently).

The fetch is aware of the 24 bits; the fetch takes care of converting it to a float. If you have a texture that you upload as 4 bytes, a single byte per R, G, B, A channel, you obviously don't convert a char to a float in your shader; it just happens.

NBA2K, Madden, Maneater, Killing Floor, Sims http://www.pawlowskipinball.com/pinballeternal

You mean

float depth = tex2D(depthMap, projCoord.xy).r;

is enough to get the depth value?

Actually, I fetched the R, G, B channel values of that texture. Each value is different, but each value is continuous across a scene object.

Your return value is a float; it takes care of it for you. If you declared it as a double (double depth = tex....), it's not like you would have to bit-shift anything either, though by default I don't even think that would compile. Every texture sample should come back as a float.

R, G, B for that depth texture should all be the same (I think); either that, or only take the 'R' component into account, which is why there is a GL_COMPARE_R_TO_TEXTURE texture parameter.
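
For context, the compare-to-R parameter mentioned above refers to the fixed-function shadow comparison; a sketch of that setup, only needed if you want the hardware comparison rather than the raw depth value:
------------------------------------------------------------
glBindTexture(GL_TEXTURE_2D, pass2_depthImage);

// Return the stored depth in all channels when sampled normally.
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_INTENSITY);

// Or let the hardware compare the R texture coordinate against the
// stored depth; a shadow sampler (e.g. shadow2DProj) then returns 0/1.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
------------------------------------------------------------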

NBA2K, Madden, Maneater, Killing Floor, Sims http://www.pawlowskipinball.com/pinballeternal

