Lewa

OpenGL Calculate depth of front sphere from back side


I'm currently stuck on an issue which I can't solve due to my lack of mathematical knowledge.

Here is a picture I made which sums up what I'm trying to do:

depthissue.png

 

To put it simply:

I render spheres (simple 3D meshes) into the scene on a separate FBO using front-face culling, so that only the back side is rendered (the red part of the sphere in the screenshot).

In the fragment shader I can access the depth value of the rendered pixel via "gl_FragCoord.z". What I want to do now is calculate the depth value of the front-facing side of the sphere at the exact same pixel, so that I have a min and max depth value telling me where the sphere starts and ends at that pixel. I need those values for post-processing purposes.

 

My attempt to solve this was:

  1. Pass the current vertex position (in view space) into the fragment shader.
  2. Subtract the origin point from the vertex position to get an offset vector pointing from the origin to the back-face point.
  3. Mirror the z-component of this offset (as we are in view space).
  4. Add the mirrored offset to the origin point, which gives us the front-facing (vertex) position of the sphere.
  5. Use this position to calculate a depth value like the one in an OpenGL depth buffer (haven't done this part properly).

depthissue2.png

 

I may have an error in my shader code (maybe the way I multiply matrices is wrong?).

Here is my current code (a bit messy, but I tried to comment it):

//-------------- Vertex Shader --------------------
#version 330

layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec4 color;
layout (location = 3) in vec2 uv;

uniform mat4 uProjectionMatrix;
uniform mat4 uModelViewMatrix;

out vec4 oColor;
out vec2 vTexcoord;

out vec4 vFragWorldPos;
out vec4 vOriginWorldPos;
out mat4 vProjectionMatrix;

void main()
{
    oColor = color;
    vTexcoord = uv;

    //both positions are in view space (despite the "WorldPos" names)
    vFragWorldPos = uModelViewMatrix * vec4(position, 1.0);
    vOriginWorldPos = uModelViewMatrix * vec4(0.0, 0.0, 0.0, 1.0);

    gl_Position = uProjectionMatrix * uModelViewMatrix * vec4(position, 1.0);

    //pass the projection matrix on to the fragment shader
    vProjectionMatrix = uProjectionMatrix;
}
//---------------- Fragment Shader --------------
#version 330

in vec4 oColor;
in vec2 vTexcoord;
out vec4 outputF;

uniform sampler2D sGeometryDepth;

in vec4 vFragWorldPos;
in vec4 vOriginWorldPos;
in mat4 vProjectionMatrix;

void main()
{
    //get texture coordinates into the screen-space depth buffer
    //(1280x720 is the resolution of the FBO, hardcoded for now)
    vec2 relativeTexCoord = gl_FragCoord.xy + 0.5;
    relativeTexCoord.x /= 1280.0;
    relativeTexCoord.y /= 720.0;

    //depth
    float backDepth = gl_FragCoord.z; //back-face depth
    float geometryDepth = texture(sGeometryDepth, relativeTexCoord).r; //geometry depth

    //--------------- Calculation of front depth ---------------

    //vector from the sphere origin to the fragment (in view space)
    vec3 offsetNormal = (vFragWorldPos / vFragWorldPos.w).xyz - (vOriginWorldPos / vOriginWorldPos.w).xyz;

    //mirror the z-component of this offset (as we are in view space)
    offsetNormal.z *= -1.0;

    //add the mirrored offset to the origin point to get the mirrored point on the sphere
    vec4 sphereMirrorPos = vOriginWorldPos;
    sphereMirrorPos.xyz += offsetNormal.xyz;

    //apply perspective projection
    vec4 projectedMirrorPos = vProjectionMatrix * sphereMirrorPos;
    projectedMirrorPos /= projectedMirrorPos.w;

    //TODO: CALCULATE PROPERLY
    float frontDepth = projectedMirrorPos.z * 0.5 + 0.5; //no idea what to do here further

    //color only the pixels where the scene depth (depth buffer of a different FBO)
    //lies exactly between the min/max depth values of the sphere
    if (backDepth > geometryDepth && frontDepth < geometryDepth) {
        outputF = vec4(1.0, 1.0, 1.0, 1.0);
    } else {
        discard;
    }
}

 

I suspect that transforming the coordinates into view space in the vertex shader (and working with those coordinates) may be the issue (no idea where/when to divide by "w", for example). Also, I'm currently stuck at the part where I have to calculate the depth values in the same range as the OpenGL depth buffer, so that I can compare them in the if statement at the end of the fragment shader.
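As far as I can tell, the mapping from a view-space position to the depth-buffer range should look roughly like this (assuming the default glDepthRange(0.0, 1.0)), but I'm not sure I'm applying it correctly:

//sketch: map a view-space position to the same [0,1] range
//as gl_FragCoord.z and the depth texture
//(assumes a perspective projection and the default glDepthRange(0.0, 1.0))
float viewSpaceToWindowDepth(vec4 viewPos, mat4 projectionMatrix)
{
    vec4 clipPos = projectionMatrix * viewPos; //view space -> clip space
    float ndcZ = clipPos.z / clipPos.w;        //perspective divide -> NDC z in [-1,1]
    return ndcZ * 0.5 + 0.5;                   //NDC -> window-space depth in [0,1]
}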

Hints/help would be greatly appreciated. 

Edited by Lewa


I nearly got it working.

But there still seems to be a minor error in the calculation of the mirrored depth value.

Here is the current fragment shader (the vertex shader is the same as above):

#version 330

in vec4 oColor;
in vec2 vTexcoord;
out vec4 outputF;

uniform sampler2D sGeometryDepth;

in vec4 vFragWorldPos;
in vec4 vOriginWorldPos;
in mat4 vProjectionMatrix;

float linearizeDepth(float depthVal, float zNear, float zFar)
{
    float n = zNear; // camera z near
    float f = zFar;  // camera z far
    float z = depthVal;
    return (2.0 * n) / (f + n - z * (f - n));
}

void main()
{
    float uZnear = 0.01;
    float uZfar = 500.0;

    //get texture coordinates into the screen-space depth buffer
    //(1280x720 is the resolution of the FBO, hardcoded for now)
    vec2 relativeTexCoord = gl_FragCoord.xy + 0.5;
    relativeTexCoord.x /= 1280.0;
    relativeTexCoord.y /= 720.0;

    //depth of the back-facing sphere pixels and of the level geometry (depth texture of a different FBO)
    float backDepth = linearizeDepth(gl_FragCoord.z, uZnear, uZfar); //back depth
    float geometryDepth = linearizeDepth(texture(sGeometryDepth, relativeTexCoord).r, uZnear, uZfar); //geometry depth

    //Now we have to calculate the front depth

    //--------------- Calculation of front depth ---------------

    //z-distance from the sphere origin to the fragment (in view space)
    float depthDiff = vFragWorldPos.z - vOriginWorldPos.z;

    //subtract the depth difference from the origin point to get the mirrored point on the sphere
    vec4 sphereMirrorPos = vec4(vFragWorldPos.xy, vOriginWorldPos.z - depthDiff, vOriginWorldPos.w);

    //apply perspective projection
    vec4 projectedMirrorPos = vProjectionMatrix * sphereMirrorPos;
    projectedMirrorPos /= projectedMirrorPos.w;

    //depth calculation
    float frontDepth = (projectedMirrorPos.z + 1.0) / 2.0;
    frontDepth = linearizeDepth(frontDepth, uZnear, uZfar);

    //color only the pixels where the scene depth (depth buffer of a different FBO)
    //lies exactly between the min/max depth values of the sphere
    if (backDepth > geometryDepth && frontDepth < geometryDepth) {
        outputF = vec4(1.0, 1.0, 1.0, 1.0);
    } else {
        discard;
    }
}

 

I believe the issue is somewhere here:

float depthDiff = vFragWorldPos.z - vOriginWorldPos.z;

//subtract the depth difference from the origin point to get the mirrored point on the sphere
vec4 sphereMirrorPos = vec4(vFragWorldPos.xy, vOriginWorldPos.z - depthDiff, vOriginWorldPos.w);

"vFragWorldPos" is the position of the current vertex in modelview (view) space. "vOriginWorldPos" is the origin of the sphere in modelview (view) space.

I simply calculate the z-difference of both points by subtracting their z-components.

Then I reconstruct the mirrored vertex coordinate by using the xy-coordinates of "vFragWorldPos", while the z-coordinate is calculated by subtracting the depth difference from the origin's z-coordinate.

 

The issue is that this doesn't seem to give correct results.

I tested this reconstruction method by changing this line:

vec4 sphereMirrorPos = vec4(vFragWorldPos.xy, vOriginWorldPos.z - depthDiff, vOriginWorldPos.w);

to this:

vec4 sphereMirrorPos = vec4(vFragWorldPos.xy, vOriginWorldPos.z + depthDiff, vOriginWorldPos.w);

which effectively calculates the depth of the back side of the sphere, which I then compared with the values in the depth buffer. They are exactly the same (which is correct). But subtracting the depthDiff value doesn't yield correct results.

 

Is there something I'm missing? Maybe the z-coordinates of the vertices transformed into modelview space aren't linear?

 

Edited by Lewa


This sounds like an XY problem to me. Try backing up a step or two and describe what you're trying to accomplish. I suspect someone on these forums will be able to suggest a different approach that might work better.


I only looked over the code quickly, so apologies if I'm misunderstanding something. But assuming your description of how you're trying to solve this is accurate, and that you really are doing all the math in view space as you said... then it won't work, because negating the z value won't give you what you think it gives you. It won't give you a point backwards along the line of sight.

You can solve this by getting the point of intersection between the line of sight to the pixel and the line from the sphere origin that intersects that line at a right angle. Once you have this point (let's call it midPoint), you're basically home free, as you can just use the pixel position and midPoint to get the point you're looking for.

Here's some pseudo code:

vec3 pixelPos;	// position in view space of the pixel on the sphere back face.  Known.
vec3 spherePos;	// position in view space of sphere origin.  Known.

vec3 tempDir = CrossProduct(spherePos, pixelPos);	// vector pointing up/down from plane
tempDir = CrossProduct(pixelPos, tempDir);		// vector pointing from line-of-sight towards spherePos
tempDir = Normalize(tempDir);

float d = DotProduct(tempDir, spherePos);	// distance along the "right" vector towards the sphere origin

vec3 midPoint = spherePos - tempDir * d;	// midPoint = point midway from front to backface along line-of-sight

vec3 frontFacePos = 2 * midPoint - pixelPos;	// the frontface view-space point you want

Of course, this code doesn't check whether pixelPos and spherePos are parallel. You'll need to check for that and handle that situation accordingly. But I think this will give you what you want: a point backwards along the line of sight (in view space) from the back face of the sphere towards the front face.
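In GLSL that would be something like this (untested; I'm assuming vFragWorldPos and vOriginWorldPos from your vertex shader hold the view-space positions):

//untested GLSL version of the pseudo code above
vec3 pixelPos  = vFragWorldPos.xyz / vFragWorldPos.w;     //back-face point in view space
vec3 spherePos = vOriginWorldPos.xyz / vOriginWorldPos.w; //sphere origin in view space

vec3 tempDir = cross(spherePos, pixelPos);      //normal of the plane through camera, origin and pixel
tempDir = normalize(cross(pixelPos, tempDir));  //in that plane, perpendicular to the line of sight

float d = dot(tempDir, spherePos);              //distance of spherePos from the line of sight

vec3 midPoint = spherePos - tempDir * d;        //closest point on the line of sight to the sphere origin
vec3 frontFacePos = 2.0 * midPoint - pixelPos;  //mirror the back-face point through midPoint

You can then run frontFacePos through your projection matrix the same way you already project sphereMirrorPos in order to get the front depth.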

On 7.08.2017 at 11:51 AM, 0r0d said:

You can solve this by getting the point of intersection between the line of sight to the pixel and the line from the sphere origin that intersects that line at a right angle. Once you have this point (let's call it midPoint), you're basically home free, as you can just use the pixel position and midPoint to get the point you're looking for. [...]

That is EXACTLY what I needed/what I was looking for. Thank you!

Although I'm having a hard time understanding why this formula works (as I seem to misunderstand how view space works).

Quote

Of course, this code doesn't check whether pixelPos and spherePos are parallel.

What exactly do you mean by "parallel"? That they are axis-aligned in view space?

Edited by Lewa

6 hours ago, Lewa said:

That is EXACTLY what I needed/what I was looking for. Thank you!

Although I'm having a hard time understanding why this formula works (as I seem to misunderstand how view space works).

What exactly do you mean by "parallel"? That they are axis-aligned in view space?

Here's a diagram that might help:

Untitled-1.thumb.jpg.02ffe96245f5107f535fe1c5c9e6e9a2.jpg

I think the problem is that you're assuming view space means the Z values point along the line of sight to the camera. But view space is just camera space; the Z axis is fixed and not aligned with the line of sight to each individual pixel. So when you take the vector (pixelPos - spherePos) and then negate the Z component, you get the "incorrect midPoint" seen above.

So what you need is to find the correct midPoint by first finding the "tempDir" in the image: first take the vector normal to the plane, then do a second cross product to get the new vector that is orthogonal to both the plane normal and the line-of-sight vector. Once you have that vector, you easily get the midPoint, and from that the frontFacePos.

Does that make sense?

As for why it matters to check whether spherePos and pixelPos are parallel: if they are parallel (i.e. they both lie on the line from the camera to spherePos), then the two cross products will give you a zero-length vector, and the normalize operation will cause a divide by zero. So you need to check whether pixelPos is parallel to spherePos, in which case the frontFacePos would just be

frontFacePos = 2 * spherePos - pixelPos;
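Putting it together, the whole reconstruction with that guard could look roughly like this (just a sketch; the epsilon is arbitrary):

//sketch: reconstruct the front-face point, falling back to a simple mirror through
//the sphere origin when pixelPos lies (almost) on the camera-spherePos line
vec3 reconstructFrontFacePos(vec3 pixelPos, vec3 spherePos)
{
    vec3 planeNormal = cross(spherePos, pixelPos);
    if (dot(planeNormal, planeNormal) < 1e-8)   //arbitrary epsilon, tune as needed
        return 2.0 * spherePos - pixelPos;      //degenerate case: mirror through the sphere origin

    vec3 tempDir = normalize(cross(pixelPos, planeNormal));
    vec3 midPoint = spherePos - tempDir * dot(tempDir, spherePos);
    return 2.0 * midPoint - pixelPos;
}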

 

