Rendering to and using a depth texture

I'm currently playing around with OpenGL, trying to learn how to use it in addition to general graphical techniques.

I have reached a point where I want to use the depth buffer of the currently rendered scene as a texture, so I can produce different effects in a fragment shader. While I can render the colors to a texture and use them just fine, the depth texture does not seem to receive any information at all; when I try to use its values as a color, the result is pure black.

The code I use for the frame- and renderbuffer objects:


GLuint frameBuffer;
glGenFramebuffers(1, &frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, this->screenTexture, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, this->depthTexture, 0);

// Create render buffer
GLuint renderBuffer;
glGenRenderbuffers(1, &renderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, 1024, 768);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, renderBuffer);

// Render to texture
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    printf("Framebuffer configuration error ..\n");
}
else
{
    glUseProgram(r->landscapeShaderProgram);
    // Draw landscape to depth buffer
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    glClearColor(0, 1, 1, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Drawing code here...
}


And the code I use for setting up the texture:


glGenTextures(1, &this->depthTexture);
glBindTexture(GL_TEXTURE_2D, this->depthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, depthWidth, depthHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);


Thanks in advance for any help.
Hi,

I think it's not necessary to allocate the renderbuffer at all, because you've already allocated memory for the depth texture.
By calling both glFramebufferTexture2D and glFramebufferRenderbuffer on GL_DEPTH_ATTACHMENT, you are probably overriding the texture binding with the renderbuffer binding, so the depth values end up in the renderbuffer instead of the texture.
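
If you want to verify that, you can query what is currently bound to the depth attachment. A minimal sketch, using the frameBuffer variable from your snippet above:

GLint type = GL_NONE;
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                      GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE, &type);
// After the glFramebufferRenderbuffer call this should report GL_RENDERBUFFER
// rather than GL_TEXTURE, which would explain why the depth texture stays empty.
if (type == GL_RENDERBUFFER)
    printf("Depth attachment is the renderbuffer, not the depth texture.\n");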

This is how I create the Framebuffer for my Shadowmap:

glGenFramebuffers(1, &m_shadowMapFboId);
glGenTextures(1, &m_shadowMapTextureId);

// Allocate GPU memory for the depth texture.
glBindTexture(GL_TEXTURE_2D, m_shadowMapTextureId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32,
             m_shadowMapSize, m_shadowMapSize, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);

// Bind the texture to the framebuffer's depth attachment.
glBindFramebuffer(GL_FRAMEBUFFER, m_shadowMapFboId);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, m_shadowMapTextureId, 0);

// Tell the framebuffer we won't provide any color attachments.
glDrawBuffer(GL_NONE); // For depth-only rendering; if you also need a fragment color, don't use this.

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{ ... }


What do the shaders look like that you use to read from and write to the depth texture?
When reading from the depth texture you only need the first component: since you allocated a float texture with glTexImage2D, even with GL_DEPTH_COMPONENT32 you don't have to reassemble 8-bit color channels; you can read the 32-bit float directly from the first component:
float depth = texture(texDepth, vPos2D).x;
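
On the application side, making the depth texture available to the shader of the second pass could look roughly like this; secondPassProgram is a placeholder name for whatever program samples the texture, and texDepth is the sampler uniform from the line above:

// Bind the depth texture to texture unit 0 and point the sampler uniform at it.
glUseProgram(secondPassProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, depthTexture);
glUniform1i(glGetUniformLocation(secondPassProgram, "texDepth"), 0);

Also make sure the FBO the depth texture is attached to is no longer bound for drawing while you sample from it.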

Hope I could help.

Thanks a lot, removing the renderbuffer completely did the trick. I now get values in the depth texture that seem to correspond to fragment depth. Am I right in assuming they are computed as something like 1/z? If so, does gl_FragCoord.z hold the same value, or do I have to do that division manually in the fragment shader if I want to compare against these values?

Edit: this post looked like it was lost in the crash...
The z-buffer depth is non-linear in the actual depth of the fragment, so there is more precision for near objects and less for objects further away.
I hope someone will correct me if I'm wrong, but I think the z-buffer depth with a perspective projection is calculated as follows:

z_buffer_value = a + b / z
Where:
a = zFar / ( zFar - zNear )
b = zFar * zNear / ( zNear - zFar )
z = distance from the eye to the object

This happens through the multiplication of the vertex with the projection matrix in the vertex shader, followed by the perspective divide by w (v /= v.w) that is performed afterwards.

My perspective projection matrix, for example, looks like this:

hFov  0     0     0
0     vFov  0     0
0     0     p1    p2
0     0     -1    0

vFov = 1.0 / tan(vFovDegree * (PI / 360.0));
hFov = vFov / ratio;
p1 = (farPlaneDistance + nearPlaneDistance) / (nearPlaneDistance - farPlaneDistance);
p2 = (2.0 * farPlaneDistance * nearPlaneDistance) / (nearPlaneDistance - farPlaneDistance);
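
If you want to get back from the stored depth value to a linear eye-space distance, you can invert this mapping. A minimal sketch, assuming the default [0,1] depth range and the same zNear/zFar that were used to build the projection matrix:

// Convert a [0,1] depth-buffer value back to linear eye-space distance.
float linearizeDepth(float depth, float zNear, float zFar)
{
    float zNdc = depth * 2.0f - 1.0f; // back to NDC [-1,1]
    return (2.0f * zNear * zFar) / (zFar + zNear - zNdc * (zFar - zNear));
}

The same few lines also work in GLSL inside the fragment shader if you want to do the comparison there.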


I've never used gl_FragCoord, so I can't answer that, but you can try comparing both variants in your fragment shader:
pass the projected, z-divided vertex position from the vertex shader and check in the fragment shader how much its interpolated value differs from gl_FragCoord.z.
