Rendering the depth buffer?

8 comments, last by swiftcoder 12 years, 1 month ago
Well, the question is very simple, so I hope that the answer might also be (I am going for "glass half full" mentality): I want to see my OpenGL graphics rendered not by their color or textures, but by their z-buffer value. I.e. closer pixels are darker, distant are brighter (or vice versa if easier). Like this image from Wikipedia: http://en.wikipedia....le:Z_buffer.svg (the lower one, of course). I already have the code to draw the things I want, but it draws them in color and textures. I want to see the Z-buffer instead.

What are my options?
Are you using shaders? If so, you'll need to get the Z value. Usually you can pass this in from the vertex shader. Then you can do gl_FragColor.rgb=vec3(pow(Z,50)); Not sure if it's possible without shaders.
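
Something along these lines might do it (untested sketch, old-style GLSL; I'm remapping eye-space Z linearly between near and far instead of the pow() trick, and zNear/zFar are placeholder uniforms you'd set yourself):

const char *depth_vs =
    "varying float eyeZ;\n"
    "void main() {\n"
    "    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;\n"
    "    eyeZ = -eyePos.z;  /* distance in front of the camera */\n"
    "    gl_Position = gl_ProjectionMatrix * eyePos;\n"
    "}\n";

const char *depth_fs =
    "varying float eyeZ;\n"
    "uniform float zNear;\n"
    "uniform float zFar;\n"
    "void main() {\n"
    "    float d = clamp((eyeZ - zNear) / (zFar - zNear), 0.0, 1.0);\n"
    "    gl_FragColor = vec4(vec3(d), 1.0);  /* near = black, far = white */\n"
    "}\n";

/* Compile and link as usual, then render the scene with the program bound. */
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &depth_vs, NULL);
glCompileShader(vs);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &depth_fs, NULL);
glCompileShader(fs);
GLuint prog = glCreateProgram();
glAttachShader(prog, vs);
glAttachShader(prog, fs);
glLinkProgram(prog);
glUseProgram(prog);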
I'm not using shaders. I thought that since the Z-buffer exists somewhere (I assume, or things would overlap, and my clearing the depth buffer would affect nothing), I should be able to simply(??) swap it for the color buffer. I clear the two things in the same line (glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)), so they should be using similar logic... or am I making wild assumptions?
You're probably going to need to somehow copy the depth buffer into the color buffer or a texture. I'm not an expert with non-shader OpenGL, so I can't really help you there.
Fog.

Set some fog, either linear or exp, then draw your objects untextured (or with white textures). It won't be an exact replica of what the depth buffer contains, but it will give you the visual result you want.
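
Something like this, roughly (untested; the start/end values are placeholders you'd match to your own near/far planes):

/* Draw the geometry in black with white linear fog, so near fragments
   come out dark and far ones bright. */
glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_LINEAR);
const GLfloat fogColor[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
glFogfv(GL_FOG_COLOR, fogColor);
glFogf(GL_FOG_START, 1.0f);    /* roughly your near plane */
glFogf(GL_FOG_END, 100.0f);    /* roughly your far plane */
glHint(GL_FOG_HINT, GL_NICEST);

glDisable(GL_TEXTURE_2D);
glDisable(GL_LIGHTING);
glColor3f(0.0f, 0.0f, 0.0f);   /* base colour: black */
/* ... draw the scene as usual ... */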

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

The buffer copy or swap is something I've looked into, but I couldn't find any syntax for it. If anyone has the slightest thread on this, I'm all ears!

The fog is not going to do it, since it is not just to fake the look, it's an actual depiction of distances that I need :-/ However, the basic idea miiiight have something to it, if I can find no other option...
You should just be able to allocate a texture with an internal format of GL_DEPTH_COMPONENT, and call glCopyTexSubImage2D to copy the contents of the depth buffer into that texture. After that, you can render a fullscreen quad with the texture, or whatever else you want...
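
Roughly like this (off the top of my head, untested; width and height are assumed to match your window):

/* One-time setup: allocate a depth texture the size of the window. */
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE); /* sample as greyscale */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

/* Each frame: render the scene normally, then copy the depth buffer across. */
glBindTexture(GL_TEXTURE_2D, depthTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

/* Finally, disable depth testing and draw a fullscreen quad with depthTex
   bound and texturing enabled; it should come out as a greyscale depth image. */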

However, unless you are restricted to an ancient version of OpenGL, I would recommend using an FBO (Frame Buffer Object), to render the depth buffer directly to a texture in the first place. Or even a shader, to render the z coordinate into the colour buffer, and save having to render multiple passes at all.
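
If you do go the FBO route, the skeleton is something like this (untested, GL 3.0-style entry points; older drivers use the EXT-suffixed equivalents, and width/height are again placeholders):

/* Create a depth texture and attach it to an FBO with no colour buffer. */
GLuint fbo, depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
glDrawBuffer(GL_NONE);   /* depth-only pass */
glReadBuffer(GL_NONE);

/* Render the scene into the FBO, then switch back and display the texture. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
/* ... draw a fullscreen quad textured with depthTex ... */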

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]


You should just be able to allocate a texture with an internal format of GL_DEPTH_COMPONENT, and call glCopyTexSubImage2D to copy the contents of the depth buffer into that texture. After that, you can render a fullscreen quad with the texture, or whatever else you want...


This looks like exactly what I was thinking. However, I cannot find the proper code for it, and I fear guessing it from scratch will end in utter failure (I am not a veteran). Is there an example or perhaps even a tutorial of this exact method somewhere?

However, unless you are restricted to an ancient version of OpenGL, I would recommend using an FBO (Frame Buffer Object), to render the depth buffer directly to a texture in the first place. Or even a shader, to render the z coordinate into the colour buffer, and save having to render multiple passes at all.

Partly due to my not being a veteran, but also because of a long and boring story about what I'm doing (which I'll spare you), I would like to at least try without FBOs or shaders first, if there is a way?

I thought that since the Z-buffer exists somewhere (I assume, or things would overlap, and my clearing the depth buffer would affect nothing), I should be able to simply(??) swap it for the color buffer.


The thing is that, as far as I'm aware, the z-buffer is implemented (either in hardware or software) on the GPU. This means that the only direct way to get access to it is via shaders.

The thing is that, as far as I'm aware, the z-buffer is implemented (either in hardware or software) on the GPU. This means that the only direct way to get access to it is via shaders.

AFAIK, shaders can't access the depth buffer directly, either. FBOs are the only way to directly render to a texture (and that texture can then be used by a shader).

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

This topic is closed to new replies.
