Karlos

OpenGL: World Position From Depth

This topic is 427 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.


Recommended Posts

Hi,

I am having a lot of trouble trying to recover the world-space position from depth.

I swear I have managed to get this to work in another project, but I have been stuck on it for ages.

I am using OpenGL with a deferred pipeline. I am not modifying the depth in any special way, just whatever OpenGL does by default, and I have been trying to recover the world-space position with this (I don't care about performance at this point, I just want it to work):

vec4 getWorldSpacePositionFromDepth(
	sampler2D depthSampler,
	mat4 proj,
	mat4 view,
	vec2 screenUVs)
{
	mat4 inverseProjectionView = inverse(proj * view);

	// Depth buffer stores [0, 1]; remap to NDC [-1, 1].
	// Note: this assumes the default glDepthRange(0.0, 1.0).
	float pixelDepth = texture(depthSampler, screenUVs).r * 2.0 - 1.0;

	// Rebuild the clip-space position (w = 1 before unprojecting).
	vec4 clipSpacePosition = vec4(screenUVs * 2.0 - 1.0, pixelDepth, 1.0);

	// Unproject, then divide by w to land back in world space.
	vec4 worldPosition = inverseProjectionView * clipSpacePosition;
	worldPosition = vec4(worldPosition.xyz / worldPosition.w, 1.0);
	return worldPosition;
}

Which I am sure is how many other sources do it...

But the positions seem distorted, and they get worse as I move the camera away from the origin, which of course breaks all of my lighting...

Please see the attached image for the difference between the depth-reconstructed world-space position and the actual world-space position.

Any help would be much appreciated!

K
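For reference, the round trip that function performs can be sanity-checked on the CPU. A minimal pure-Python sketch (the matrices and the test point are made up, and it assumes the default glDepthRange(0, 1)):

```python
import math

def mat_mul(a, b):
    # 4x4 * 4x4, row-major storage, column-vector convention.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def mat_inv(m):
    # Gauss-Jordan inversion of a 4x4 matrix.
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(4)]
           for i, row in enumerate(m)]
    for col in range(4):
        pivot = max(range(col, 4), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(4):
            if r != col:
                fac = aug[r][col]
                aug[r] = [x - fac * y for x, y in zip(aug[r], aug[col])]
    return [row[4:] for row in aug]

def perspective(fovy, aspect, near, far):
    f = 1.0 / math.tan(fovy / 2.0)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
            [0, 0, -1, 0]]

proj = perspective(math.radians(60.0), 16.0 / 9.0, 0.1, 100.0)
view = [[1, 0, 0, -3],            # camera translated to (3, 0, 5)
        [0, 1, 0, 0],
        [0, 0, 1, -5],
        [0, 0, 0, 1]]
world = [1.0, 2.0, -10.0, 1.0]    # the point we want to recover

# Forward pass: what the rasterizer would store (default glDepthRange(0, 1)).
clip = mat_vec(mat_mul(proj, view), world)
ndc = [c / clip[3] for c in clip]
uv = [(ndc[0] + 1.0) / 2.0, (ndc[1] + 1.0) / 2.0]   # screen UVs in [0, 1]
depth = (ndc[2] + 1.0) / 2.0                         # depth-buffer value

# Backward pass: exactly what the shader does.
clip2 = [uv[0] * 2.0 - 1.0, uv[1] * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0]
h = mat_vec(mat_inv(mat_mul(proj, view)), clip2)
recon = [h[i] / h[3] for i in range(3)]
print(recon)  # should be very close to [1.0, 2.0, -10.0]
```

If a round trip like this passes on paper but the render is still wrong, the value actually stored in the depth buffer is not what the unmapping assumes (e.g. a non-default glDepthRange).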

Which is not what you want. Since you're in a shader, you could precompute the z coordinate or use gl_FragCoord.z.

Anyway, with OpenGL's column-vector convention:

projection * view * model = mat1
a = mat1 * vertex
a.xyz = a.xyz / a.w
a = a * 0.5 + 0.5

a.z should be your depth
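The steps above can be checked numerically. A small sketch with made-up near/far values, for a vertex already in view space (so view and model drop out):

```python
def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

# Bare symmetric perspective matrix (hypothetical near = 1, far = 10,
# focal scale of 1 to keep the numbers simple).
near, far = 1.0, 10.0
proj = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0, 0, -1, 0]]

vertex = [0.0, 0.0, -5.0, 1.0]     # view-space position

a = mat_vec(proj, vertex)          # clip space
ndc_z = a[2] / a[3]                # perspective divide
depth = ndc_z * 0.5 + 0.5          # the "* 0.5 + 0.5" step from the reply
print(depth)                       # ~0.889, what the depth buffer would store
```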

Your code looks slow but sane. How are screenUVs computed?
Are you passing the exact same view matrix that the scene was rendered with? Maybe try computing the inverse view-projection matrix on the CPU and passing it in.

> Which is not what you want. Since you're in a shader, you could precompute the z coordinate or use gl_FragCoord.z.

Reconstructing position from the depth buffer is common in post-processing / deferred shading systems, where gl_FragCoord is not available and there is no other way to determine the per-pixel world-space coordinates.


screenUVs are just 0-1 UVs for a screen-space quad, like the usual deferred setup.

When I use an explicit world-space position buffer, everything renders perfectly, so I am sure it is reading from the correct UVs.

I tried the CPU-side inverse view-projection and it didn't make any difference.

It is so weird, because it looks almost okay until the camera is moved far away!


I can't believe this... it was something so simple:

my glDepthRangef was set to 0.1f and 1000.0f; setting it to 0.0f and 1000.0f fixed the position reconstruction.

Argh!

 

Thanks for the help anyway! :)

glDepthRange specifies the NDC-to-depth-buffer mapping (its arguments are clamped to [0, 1], so your 1000.0f was really 1.0f). Generally you want to leave it at (0, 1) and never touch it.
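To make the failure mode concrete, here is a sketch (with a made-up NDC value) of what the non-default range does to the stored depth versus what the shader's "* 2.0 - 1.0" unmapping assumes:

```python
# Hypothetical numbers showing why a non-default glDepthRange breaks
# the shader's "depth * 2.0 - 1.0" unmapping, which assumes range (0, 1).

n, f = 0.1, 1.0                    # effective glDepthRange after clamping
ndc_z = 0.25                       # some NDC z in [-1, 1] after the divide

# What ends up in the depth buffer: window z = n + z01 * (f - n).
stored = n + (ndc_z * 0.5 + 0.5) * (f - n)

# What the shader reconstructs, wrongly assuming the (0, 1) range:
recovered = stored * 2.0 - 1.0     # 0.325 instead of 0.25

# Undoing the actual range first gives the right NDC z back:
fixed = ((stored - n) / (f - n)) * 2.0 - 1.0
print(ndc_z, recovered, fixed)
```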


"The setting of (0,1) maps the near plane to 0 and the far plane to 1. With this mapping, the depth buffer range is fully utilized."

This says you have no need to touch it.
