I'm currently playing around with OpenGL, trying to learn how to use it in addition to general graphical techniques.
I have reached the point where I want to use the depth buffer of the currently rendered scene as a texture, in order to produce different effects in a fragment shader. While I can render the colors to a texture and use them just fine, the depth texture does not seem to receive any information at all: when I try to use its values as a color, the result is pure black.
I think it's not necessary to allocate memory for your renderbuffer, because you've already allocated memory for the texture.
With both glFramebufferRenderbuffer and glFramebufferTexture2D you probably override the binding of the texture to GL_DEPTH_ATTACHMENT with the binding of the renderbuffer, which probably leads to rendering inside the renderbuffer instead of the texture.
This is how I create the Framebuffer for my Shadowmap:
glGenFramebuffers(1, &m_shadowMapFboId);
glGenTextures(1, &m_shadowMapTextureId);
// Allocate GPU-memory for the depth-texture.
glBindTexture(GL_TEXTURE_2D, m_shadowMapTextureId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32,
m_shadowMapSize, m_shadowMapSize, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
// Bind the texture to the framebuffer's depth attachment.
glBindFramebuffer(GL_FRAMEBUFFER, m_shadowMapFboId);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, m_shadowMapTextureId, 0);
// Tell the framebuffer we won't provide any color attachments.
glDrawBuffer(GL_NONE); // For depth-only rendering; if you also need a fragment color, don't use this.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{ ... }
What do your shaders look like, the ones you use to read from and write to the depth texture?
When reading from the depth texture you only need the first "color" component. As far as I know, because you allocated a float texture with glTexImage2D, even with GL_DEPTH_COMPONENT32 you don't need to reassemble 8-bit color channels; you can access the 32-bit float directly through the first component: float depth = texture(texDepth, vPos2D).x;
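If it helps, a minimal fragment shader for visualizing such a depth texture could look like this (texDepth and the texture-coordinate name are my own assumptions, not from your code):

```glsl
#version 330 core

uniform sampler2D texDepth; // the depth texture attached to the FBO

in vec2 vTexCoord;          // assumed full-screen-quad texture coordinate
out vec4 fragColor;

void main()
{
    // Depth textures return their value in the first component.
    float depth = texture(texDepth, vTexCoord).x;

    // Output the raw (non-linear) depth as grayscale. Expect most of the
    // scene to look near-white, because of the non-linear depth mapping.
    fragColor = vec4(vec3(depth), 1.0);
}
```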
Hope I could help.
Thanks a lot, removing the renderbuffer completely did help. I now get values in the depth texture that seem to correspond to fragment depth. Am I right in assuming that they are calculated from 1/z? If so, does gl_FragCoord.z also hold this value, or will I have to do that division manually in the fragment shader if I want to compare against these values?
The z-buffer depth is non-linear in the actual depth of the fragment, so there is more precision for near objects and less precision for objects further away.
I hope someone will correct me if I'm wrong, but I think the z-buffer depth with a perspective projection is calculated as follows:
z_buffer_value = a + b / z
Where:
a = zFar / ( zFar - zNear )
b = zFar * zNear / ( zNear - zFar )
z = distance from the eye to the object
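To make the formula above concrete, here is a small C sketch of it (the function names and the zNear/zFar values are my own, chosen only for illustration). It also inverts the mapping, which is what you'd do to recover a linear eye-space depth from a stored depth-buffer value:

```c
#include <assert.h>
#include <math.h>

/* Maps an eye-space distance z in [zNear, zFar] to the non-linear
   depth-buffer value in [0, 1], using
   a = zFar / (zFar - zNear) and b = zFar * zNear / (zNear - zFar). */
double depth_buffer_value(double z, double zNear, double zFar)
{
    double a = zFar / (zFar - zNear);
    double b = zFar * zNear / (zNear - zFar);
    return a + b / z;
}

/* Inverse mapping: recovers the eye-space distance z from a
   depth-buffer value d. */
double eye_z_from_depth(double d, double zNear, double zFar)
{
    double a = zFar / (zFar - zNear);
    double b = zFar * zNear / (zNear - zFar);
    return b / (d - a);
}
```

With zNear = 0.1 and zFar = 100, the near plane maps to 0 and the far plane to 1, but already at z = 1 the buffer value is above 0.9, which is exactly the "most precision near the camera" behavior described above.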
This happens through the multiplication of the vertex with the projection matrix in the vertex shader, followed by the perspective divide (v /= v.w) that is applied afterwards.
My perspective projection matrix, for example, looks like this:
hFov    0       0       0
0       vFov    0       0
0       0       p1      p2
0       0       -1      0
I've never used gl_FragCoord, so I can't answer that, but you can compare both variants in your fragment shader.
For example, in the vertex shader pass along the projected, z-divided vertex, and check in the fragment shader how much its interpolation differs from gl_FragCoord.z.
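A sketch of that comparison could look like this (uMVP, aPosition, and the amplification factor are my own assumptions; the two stages are shown together but belong in separate shader files):

```glsl
// --- Vertex shader: pass along the manually z-divided depth. ---
#version 330 core
layout(location = 0) in vec3 aPosition;
uniform mat4 uMVP;            // assumed combined model-view-projection matrix
out float vDepth;             // our manually computed NDC depth

void main()
{
    vec4 clip = uMVP * vec4(aPosition, 1.0);
    vDepth = clip.z / clip.w; // in [-1, 1] for the default depth range
    gl_Position = clip;
}

// --- Fragment shader: compare with gl_FragCoord.z. ---
// Note gl_FragCoord.z lies in [0, 1], so remap our value first.
#version 330 core
in float vDepth;
out vec4 fragColor;

void main()
{
    float ours = vDepth * 0.5 + 0.5;
    float diff = abs(ours - gl_FragCoord.z);
    fragColor = vec4(vec3(diff * 1000.0), 1.0); // amplify tiny differences
}
```

Any visible difference would come from the interpolation: vDepth is interpolated perspective-correctly across the triangle, while gl_FragCoord.z is interpolated in screen space.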