OpenGL World Position From Depth


Hi,

I am having a lot of trouble trying to recover world-space position from depth.

I swear I have managed to get this to work before in another project, but I have been stuck on this for ages.

I am using OpenGL and a deferred pipeline.

I am not modifying the depth in any special way, just whatever OpenGL does, and I have been trying to recover world-space position with this (I don't care about performance at this time, I just want it to work):


vec4 getWorldSpacePositionFromDepth(
	sampler2D depthSampler,
	mat4 proj,
	mat4 view,
	vec2 screenUVs)
{
	mat4 inverseProjectionView = inverse(proj * view);

	// Stored depth is in [0, 1]; remap it to NDC z in [-1, 1]
	// (this assumes the default glDepthRange of (0, 1)).
	float pixelDepth = texture(depthSampler, screenUVs).r * 2.0 - 1.0;

	// Remap the 0-1 screen UVs to NDC xy and build a clip-space point.
	vec4 clipSpacePosition = vec4(screenUVs * 2.0 - 1.0, pixelDepth, 1.0);

	// Unproject and undo the perspective divide.
	vec4 worldPosition = inverseProjectionView * clipSpacePosition;
	worldPosition = vec4(worldPosition.xyz / worldPosition.w, 1.0);
	return worldPosition;
}

Which I am sure is how many other sources do it...

But the positions seem distorted, and they seem to get worse as I move the camera away from the origin, which of course then breaks all of my lighting...

Please see attached image to see the difference between depth reconstructed world space position and the actual world space position

Any help would be much appreciated!

K

Which is not what you want; since you're in a shader, you could precompute the z coord or use gl_FragCoord's z value.


Anyway:
mat1 = (model * view) * projection
a = vertex * mat1
a.xyz = a.xyz / a.w
a = a * 0.5 + 0.5
a.z should be your depth
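
For reference, here is a minimal GLSL sketch of those steps (not from the original post), written with the usual column-vector convention; the uniform names are assumptions:

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

// Depth value a vertex would end up with, assuming the default glDepthRange of (0, 1).
float depthForVertex(vec3 vertexPos)
{
	vec4 a = projection * view * model * vec4(vertexPos, 1.0); // clip space
	float ndcZ = a.z / a.w;                                    // perspective divide -> [-1, 1]
	return ndcZ * 0.5 + 0.5;                                   // map to [0, 1]
}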
Your code looks slow but sane. How are screenUVs computed?
Are you passing the exact same view matrix that the scene was rendered with? Maybe try computing the inverse-view-proj matrix on the CPU and passing it in.
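
As a hedged sketch of that suggestion, the shader side would simply receive the precomputed matrix as a uniform (invViewProj is an assumed name, set to inverse(proj * view) on the CPU once per frame):

// invViewProj is assumed to be inverse(proj * view), computed on the CPU.
uniform mat4 invViewProj;

vec3 worldFromDepth(vec2 screenUVs, float storedDepth)
{
	vec4 clip = vec4(screenUVs * 2.0 - 1.0, storedDepth * 2.0 - 1.0, 1.0);
	vec4 world = invViewProj * clip;
	return world.xyz / world.w; // perspective divide back to world space
}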

Which is not what you want; since you're in a shader, you could precompute the z coord or use gl_FragCoord's z value.

Reconstructing position from the depth buffer is common in post-processing / deferred shading systems, as gl_FragCoord is not available and there's no other way to determine the per-pixel world-space coordinates.

screenUVs are just 0-1 UVs for a screen-space quad, like the usual deferred setup.

When I use an explicit world-space position buffer, everything renders perfectly, so I am sure it is reading from the correct UVs.

I tried the CPU-side inverse view-proj and it didn't make any difference.

It is so weird, because it looks almost okay until the camera is moved far away!

I can't believe this - it was something so simple:

my glDepthRangef was set to 0.1f and 1000.0f; setting it to 0.0f and 1000.0f fixed the position reconstruction.

argh!

Thanks for the help anyway! :)

glDepthRange specifies the NDC -> depth-texture transform. Generally you want to leave it at (0, 1) and never touch it.
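
To make that concrete, here is a small sketch of how the stored depth relates to NDC z, and why the reconstruction function above only works with the default range; rangeNear and rangeFar are hypothetical uniforms mirroring whatever was passed to glDepthRange:

uniform float rangeNear; // default 0.0
uniform float rangeFar;  // default 1.0; glDepthRange clamps both values to [0, 1]

float ndcZFromStoredDepth(float storedDepth)
{
	// The GL stores: storedDepth = ndcZ * (rangeFar - rangeNear) * 0.5 + (rangeFar + rangeNear) * 0.5,
	// so the reconstruction has to invert that:
	return (2.0 * storedDepth - (rangeFar + rangeNear)) / (rangeFar - rangeNear);
	// With the default range (0, 1) this reduces to storedDepth * 2.0 - 1.0,
	// which is exactly what getWorldSpacePositionFromDepth assumes.
}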

Oh, so that explains it then.

Thank you!

"The setting of (0,1) maps the near plane to 0 and the far plane to 1. With this mapping, the depth buffer range is fully utilized."

This says you have no need to touch it.

