Depth to Color

3 comments, last by rosewell 15 years, 10 months ago
I'm trying to render my depth buffer into my color buffer for a compositing effect (briefly described here: http://www.opengl.org/registry/specs/NV/copy_depth_to_color.txt), but the edges come out jagged. So I would like to use multisampling as well, but when I turn it on the result is not what I expected: the edges are smooth, but I get extra garbage in the center that I don't want.

[screenshot originally hosted on ImageShack]

I'm using glReadPixels to read my z-buffer, then glDrawPixels to render it:

 glEnable(GL_MULTISAMPLE_ARB);
 RenderScene();
 glDisable(GL_MULTISAMPLE_ARB);

 glReadPixels( 0,0, w,h, GL_DEPTH_COMPONENT, GL_FLOAT, depth_buffer );

 // in ortho mode
 glDrawPixels( w,h, GL_DEPTH_COMPONENT, GL_FLOAT, depth_buffer );
Is this the proper way to get an (anti-aliased) composite outline of the rendered scene? glDepthFunc() is always set to GL_LEQUAL, and GL_DEPTH_TEST is enabled. Any suggestions?
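
For reference, here is a fuller sketch of that readback path with the ortho and raster state written out. The glOrtho bounds, the glColor3f value, the buffer allocation, and the function name CompositeDepth are illustrative assumptions, not the exact code from the post; glDrawPixels with GL_DEPTH_COMPONENT produces fragments whose depth comes from the array and whose color is the current raster color, and those fragments are depth-tested like any others.

 #include <GL/gl.h>
 #include <GL/glext.h>   /* GL_MULTISAMPLE_ARB on most platforms */
 #include <stdlib.h>

 extern void RenderScene(void);   /* provided elsewhere, as in the post */

 void CompositeDepth(int w, int h)
 {
     GLfloat *depth_buffer = (GLfloat *)malloc(w * h * sizeof(GLfloat));

     glEnable(GL_MULTISAMPLE_ARB);
     RenderScene();
     glDisable(GL_MULTISAMPLE_ARB);

     /* read the whole depth buffer back to client memory */
     glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, depth_buffer);

     /* pixel-aligned ortho projection for the draw */
     glMatrixMode(GL_PROJECTION);
     glPushMatrix();
     glLoadIdentity();
     glOrtho(0, w, 0, h, -1, 1);
     glMatrixMode(GL_MODELVIEW);
     glPushMatrix();
     glLoadIdentity();

     /* the raster color is latched by glRasterPos and applied to every
        fragment generated by the GL_DEPTH_COMPONENT draw */
     glColor3f(1.0f, 1.0f, 1.0f);      /* hypothetical composite color */
     glRasterPos2i(0, 0);
     glDrawPixels(w, h, GL_DEPTH_COMPONENT, GL_FLOAT, depth_buffer);

     glPopMatrix();                    /* restore modelview */
     glMatrixMode(GL_PROJECTION);
     glPopMatrix();                    /* restore projection */
     glMatrixMode(GL_MODELVIEW);

     free(depth_buffer);
 }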
Looks to me more like some vertices are not identical, so some pixels are missed (not rendered to). Example: 1.0000 != 0.999999.
Try using glutSolidSphere or some other person's code.
Quote: Original post by V-man
Looks to me more like some vertices are not identical, so some pixels are missed (not rendered to). Example: 1.0000 != 0.999999.
Try using glutSolidSphere or some other person's code.


Thanks, but I'm not sure what you mean by the vertices not being identical. Are you saying this method has accuracy problems if I use GL_FLOAT?
I just tried using GL_UNSIGNED_INT and GL_INT to store my zbuffer, but it didn't help. Nothing changed.
Quote: Thanks, but I'm not sure what you mean by the vertices not being identical. Are you saying this method has accuracy problems if I use GL_FLOAT?
I just tried using GL_UNSIGNED_INT and GL_INT to store my zbuffer, but it didn't help. Nothing changed.


Not sure what you mean. Are you using floats for your vertices or not?

Anyway, there are seams between the faces of your sphere. Perhaps you are rendering each face as a separate quad (GL_QUADS). This means some vertices are shared between a pair of quads:

1  2  3
|--|--|
|__|__|
4  5  6

If you are using two different vertices at point 2, then you will get a crack. You can search for "T-junction" on the net. The GL manual also explains the rules for polygon filling when an edge is shared.
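
To make that concrete, here is a small sketch (vertex numbering as in the diagram above, coordinates invented for illustration): both quads reuse exactly the same floats for the shared vertices 2 and 5, so the rasterizer cannot leave a one-pixel crack along the shared edge.

 /* vertex layout matching the diagram; coordinates are made up */
 GLfloat v1[3] = { -1.0f,  1.0f, 0.0f };
 GLfloat v2[3] = {  0.0f,  1.0f, 0.0f };   /* shared by both quads */
 GLfloat v3[3] = {  1.0f,  1.0f, 0.0f };
 GLfloat v4[3] = { -1.0f, -1.0f, 0.0f };
 GLfloat v5[3] = {  0.0f, -1.0f, 0.0f };   /* shared by both quads */
 GLfloat v6[3] = {  1.0f, -1.0f, 0.0f };

 glBegin(GL_QUADS);
     /* left quad: 4-5-2-1 */
     glVertex3fv(v4); glVertex3fv(v5); glVertex3fv(v2); glVertex3fv(v1);
     /* right quad: 5-6-3-2 -- reuses v5 and v2 bit-for-bit, so the
        shared edge rasterizes without a crack */
     glVertex3fv(v5); glVertex3fv(v6); glVertex3fv(v3); glVertex3fv(v2);
 glEnd();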

What code are you using to render the sphere? Your own, someone else's, or glutSolidSphere?
Ok, you are talking about my sphere's geometry? There aren't any duplicate vertices, so no open edges.

I draw my own sphere using:

 glBegin(GL_POLYGON);
     glNormal3f(...);
     glVertex3f(...);
 glEnd();


I don't think the problem is related to the geometry or how it is drawn, as I'm just using the sphere as an example. I have already tried it with many different types of geometry, and they all give the same bad result when multisampling is turned on.


Quote:
Not sure what you mean. Are you using floats for your vertices or not?


I'm talking about the GL format that is chosen to read/write the depth buffer (GL_FLOAT, GL_UNSIGNED_INT, etc.). You brought up a good point without knowing it, but it didn't help. :)

OpenGL stores its depth buffer in some internal format (I don't know which), and when I specify GL_FLOAT, that internal format is converted to floating point for the readback. But I think GL_FLOAT has the highest accuracy of all the formats (1E+37), so I don't think that is the source of the problem.
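
One quick check worth doing here (a standard query, not something from the original thread) is to ask GL how many bits per sample the depth buffer actually stores and compare that against the precision of the readback format:

 /* how many bits per depth sample does the current framebuffer store?
    Typically 16 or 24 on a default framebuffer of this era. */
 GLint depth_bits = 0;
 glGetIntegerv(GL_DEPTH_BITS, &depth_bits);
 /* depth_bits can be printed or checked in a debugger; a GL_FLOAT
    readback carries roughly 24 bits of precision, so it should not be
    the limiting factor for a 16- or 24-bit depth buffer. */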

[Edited by - rosewell on June 18, 2008 5:34:46 PM]

