BlueSpud

Reconstructing Position From Depth Buffer

8 posts in this topic

Hey,

So I'm trying to optimize my G-buffer for my deferred shading. Right now I'm rendering color, normal, and position each to their own attachment of a framebuffer. I've seen it hinted in places that you can reconstruct the position from the depth buffer, which would reduce the size of the G-buffer by about a third. I haven't really been able to find anything that explains it well, just code, and I want to know the actual math behind it. If anyone could point me in the right direction or offer some tips, it would be much appreciated.

 

Thanks.


To go with the explanation above, here's some HLSL example code showing how to construct the frustum corners in the vertex shader. This is just quick pseudocode, so it may have an issue or two. It's also better to compute this on the CPU and pass it in as a constant to the shader instead of computing it per vertex (you can also reuse it for all quads in a frame).

float3 vFrustumCornersWS[8] =
{
	float3(-1.0,  1.0, 0.0),	// near top left
	float3( 1.0,  1.0, 0.0),	// near top right
	float3(-1.0, -1.0, 0.0),	// near bottom left
	float3( 1.0, -1.0, 0.0),	// near bottom right
	float3(-1.0,  1.0, 1.0),	// far top left
	float3( 1.0,  1.0, 1.0),	// far top right
	float3(-1.0, -1.0, 1.0),	// far bottom left
	float3( 1.0, -1.0, 1.0)		// far bottom right
};

for(int i = 0; i < 8; ++i)
{
	float4 vCornerWS = mul(mViewProjInv, float4(vFrustumCornersWS[i].xyz, 1.0));
	vFrustumCornersWS[i].xyz = vCornerWS.xyz / vCornerWS.w;
}

for(int i = 0; i < 4; ++i)
{
	vFrustumCornersWS[i + 4].xyz -= vFrustumCornersWS[i].xyz;
}

// Passes to the pixel shader - there are two ways to do this:
// 1. Pass the corner with the vertex of your quad as part of the input stage.
// 2. If your quad is created dynamically with SV_VertexID, as in this example, just pick one from a constant array based on the position.
float3 vCamVecLerpT = (OUT.vPosition.x>0) ? vFrustumCornersWS[5].xyz : vFrustumCornersWS[4].xyz;
float3 vCamVecLerpB = (OUT.vPosition.x>0) ? vFrustumCornersWS[7].xyz : vFrustumCornersWS[6].xyz;	
OUT.vCamVec.xyz = (OUT.vPosition.y<0) ? vCamVecLerpB.xyz : vCamVecLerpT.xyz;

To get the world-space position in the pixel shader (where fLinearDepth is the depth remapped to [0, 1]):

float3 vWorldPos = fLinearDepth * IN.vCamVec.xyz + vCameraPos;
Edited by Styves

Hi BlueSpud,

 

It's actually quite simple to understand if you think of it this way, to retrieve a view-space coordinate:

- The screen coordinate comes in as 0 to 1 in x and y.
- You remap that to -1 to 1 like so: screencoord.xy * 2 - 1 (you might have to flip the sign of the results depending on the API).
- You now have an xy value you can picture as lying on the near plane of your camera, so the z coordinate of the vector is whatever your near-plane value is.
- You now have to figure out how to scale the -1 to 1 xy values to represent the dimensions of the near-plane "quad" in view space. This is easy:
- You just use some trig to figure out the width and height of the "quad" at the near plane. Basically it's tan(FOV * 0.5) * near.
- You'll also probably have to multiply that by the aspect ratio for x.

- After you do all this, you will have a vector from the eye to the near plane.
- You now just have to scale this vector by whatever the depth of that pixel is; you'll have to account for the ratio of how close your near plane is.

Basically, that's how you think of it. If you draw some pictures you should be able to get it. To recover the position in another space, it's basically the same, but you move along that space's axis directions in x/y/z and then add the eye position.
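Here's a quick GLSL sketch of those steps (untested; the uniform names fov, aspect, near, far, and depthTex are placeholders, and the stored depth is assumed to be linear view-space depth scaled to [0, 1]):

uniform sampler2D depthTex; // linear view-space depth (-z / far), in [0, 1]
uniform float fov;          // vertical field of view, in radians
uniform float aspect;       // viewport width / height
uniform float near;         // near-plane distance
uniform float far;          // far-plane distance

vec3 viewSpacePosition(vec2 screencoord) // screencoord in [0, 1]
{
	// remap to [-1, 1]
	vec2 ndc = screencoord * 2.0 - 1.0;

	// half width/height of the near-plane "quad" in view space
	float halfHeight = tan(fov * 0.5) * near;
	float halfWidth  = halfHeight * aspect;

	// vector from the eye to this pixel's point on the near plane
	// (negative z because OpenGL view space looks down -z)
	vec3 eyeToNear = vec3(ndc.x * halfWidth, ndc.y * halfHeight, -near);

	// scale by the pixel's view-space depth; the division by near
	// accounts for how close the near plane is to the eye
	float viewDepth = texture2D(depthTex, screencoord).r * far;
	return eyeToNear * (viewDepth / near);
}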

 

I've been doing my lighting in world space. Would it be easier to get the position in world space directly, or will I just have to multiply the view-space position by the inverse camera matrix to get it into world space?


I'm using linear depth rendered into a texture (a 32-bit floating-point texture, so it's already a little overhead reduction). From linear depth it's easy to calculate the world-space position with the following steps:

- calculate an eye ray, which points from the eye position to the corresponding position on the far plane
- once you have this ray, you can simply calculate the world-space position as eyePosition + eyeRay * depth, if the depth is in the [0, 1] range

This method is the same as the one Styves described above. There are variations of this technique where the linear depth is stored in the range [0, far - near] or [near, far] or something like that, but the "algorithm" is the same.
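As a minimal GLSL sketch of the linear-depth version (untested; eyePosition, farCorner, and depthTex are placeholder names):

// Vertex shader: give each fullscreen-quad vertex a ray from the eye
// to its matching world-space far-plane corner (computed on the CPU).
uniform vec3 eyePosition;
attribute vec3 farCorner;   // world-space far-plane corner for this vertex
varying vec3 eyeRay;
// ... in main(): eyeRay = farCorner - eyePosition;

// Fragment shader: the interpolated ray reaches the far plane at this
// pixel, so scaling by depth in [0, 1] lands back on the original surface.
float depth = texture2D(depthTex, texcoord).r;
vec3 worldPos = eyePosition + eyeRay * depth;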

 

However, the "basic" stored depth is non-linear, so if you'd like to use that, there's a really simple (but not so cost-effective) method:

- if you have a texcoord in the range [0, 1], convert it into [-1, 1] with "texcoord.xy * 2 - 1"
- use the depth value as the z coordinate
- then apply a homogeneous transformation with the inverse of view * projection

 

Something like this (GLSL - note: I didn't test it, just wrote it here):

// read depth at the coordinate
float depth = getDepthValue(texcoord);

// get screen-space position
vec4 pos;
pos.xy = texcoord * 2.0 - 1.0;
pos.z = depth; // with a GL depth buffer this may need to be depth * 2.0 - 1.0 (see below)
pos.w = 1.0;

// get world-space position
pos = invViewProj * pos; // or pos * mat, depends on the matrix-representation
pos /= pos.w;

vec3 worldPos = pos.xyz;

Since you have to do this for each pixel, this method can be slower than the previous one.

Edited by csisy


csisy wrote: "Since you have to do this for each pixel, this method can be slower than the previous one."

 

There's no need for an entire matrix mul per pixel if you do it correctly.

 

I've explained reconstructing linear z from the depth buffer a few times before. You can look at this thread from a month ago, which explains the reasoning behind the math required to reconstruct linear z from a depth buffer. I've used this method in my deferred renderers in the past and it's always worked well.
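As a rough GLSL illustration of why no matrix multiply is needed (a sketch assuming a standard OpenGL perspective projection, with near/far as uniforms; not necessarily the exact derivation in that thread):

// Recover view-space linear z from the non-linear depth buffer using
// only the projection's near and far planes.
float hwDepth = texture2D(depthTex, texcoord).r;  // window space, [0, 1]
float ndcZ = hwDepth * 2.0 - 1.0;                 // NDC, [-1, 1]
float linearZ = (2.0 * near * far) / (far + near - ndcZ * (far - near));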


This is also a solution:

http://www.opengl.org/wiki/Compute_eye_space_from_window_space#Optimized_method_from_XYZ_of_gl_FragCoord

 

read from your depth-texture, then:

vec3 eyePos = eye_direction * linearDepth(texCoord);  // which uses window-space depth-texture as source

 

from eye space to world space (which can probably be optimized if matview has a specific form):

vec3 wpos = (vec4(eyePos, 1.0) * matview).xyz;

 

and the camera-to-point direction is simply:

vec3 ray = normalize(wpos);

 

from this you can do cool stuff, like adding sun color to the fog:

float sunAmount = max(0.0, dot(ray, sunDirection));
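For example, tinting the fog toward the sun (a quick sketch; fogColor and sunColor are hypothetical values from the rest of your shader):

// push the fog toward the sun color when looking near the sun
vec3 foggedColor = mix(fogColor, sunColor, pow(sunAmount, 8.0));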
 


(Quoting csisy's post above.)

So I tried your method because it was written in GLSL, so I understood it best and it was fairly simple, but I got some odd results: when I move the camera, the reconstructed position seems to shift a bit. I don't really know how to describe it. Here's my code:

vec4 getWorldSpacePositionFromLinearDepth()
{
	float depth = texture2D(positionMap, texcoord).r;

	//linearize the depth
	//depth = ((2.0 + 0.1) / (1000.0 + 0.1 - depth * (1000.0 - 0.1)));

	vec4 position = vec4(texcoord * 2.0 - 1.0, depth, 1.0);

	position = (inverseView * inverseProj) * position;
	return position / position.w;
}

and on the CPU:

RenderEngine.translateToCameraSpace(Player->PlayerCamera);

glm::mat4 modelView, projection;
glGetFloatv(GL_MODELVIEW_MATRIX, &modelView[0][0]);
glGetFloatv(GL_PROJECTION_MATRIX, &projection[0][0]);

glUniformMatrix4fv(glGetUniformLocation(LightingShader.Shader, "inverseProj"), 1, GL_FALSE, &glm::inverse(projection)[0][0]);
glUniformMatrix4fv(glGetUniformLocation(LightingShader.Shader, "inverseView"), 1, GL_FALSE, &glm::inverse(modelView)[0][0]);

I know it's not the most efficient method, but I'm not using linear depth, so I want to try this first before I mess with linear depth.

 

EDIT:

Stupid me. I was multiplying the matrices wrong. It should be this on the CPU:

glUniformMatrix4fv(glGetUniformLocation(LightingShader.Shader, "inverseProj"), 1, GL_FALSE, &glm::inverse(projection*modelView)[0][0]);

and then in the shader:

position = inverseProj * position;

I also found somewhere that I had to do this:

float depth = texture2D(positionMap, texcoord).r * 2.0 - 1.0;

And it works great. Thanks everyone!
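For reference, folding those fixes back into the function gives something like this (a sketch assembled from the edits above; renamed since the input depth isn't linear):

vec4 getWorldSpacePositionFromDepth()
{
	// remap GL window-space depth from [0, 1] to NDC [-1, 1]
	float depth = texture2D(positionMap, texcoord).r * 2.0 - 1.0;

	vec4 position = vec4(texcoord * 2.0 - 1.0, depth, 1.0);

	// "inverseProj" now holds glm::inverse(projection * modelView)
	position = inverseProj * position;
	return position / position.w;
}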

Edited by BlueSpud

 


(Quoting Samith's post above.)

 

 

Nice one :) I talked about the simpler solution (which requires doing that calculation per pixel), but of course you can "extract" the behind-the-scenes math as you did :) Btw, thanks for the information, I'll store it on my HDD :D

 

@BlueSpud:

Sorry, I wasn't here for a few days, but I'm glad it's working :)

 

You should also check Samith's post about reconstructing linear depth from the depth buffer. With linear depth, the world-space position reconstruction is faster, since you just need the eye ray and the eye position (and of course the linear depth).

