Hi All,
I am migrating my pipeline from forward to deferred rendering because I want precomputed atmospheric scattering. If I do not learn how to achieve the technique I will never die a happy man (and vice versa). The goal is the atmospheric scattering effect from Proland.
The accompanying paper is interesting reading, as is Proland's example source code; however, Proland's demo does not actually use a G-buffer, so I need to adapt the high-level behaviour of this DX port to achieve the effect in GLSL 330.
I am binding my depth render buffer as a texture (24-bit depth, 8-bit stencil). I can read it and draw it to a quad on screen, I know the non-linear depth values it writes are valid, and my shader/model binding process is tried and true (right-handed). My current goal is to work out all the fiddly, unpleasant transforms required for G-buffer post-processing, and to better understand the most practical way to handle transforms around depth. My first realisation was that if I can compute the post-process surfacePos below, I will be home free: all the lighting is similar, it is just the sources used to look up values that have changed.
Here is the code I am attempting to port. I have a loose grasp of the transforms through the various coordinate spaces, but the part I do not get is how SV_POSITION translates to GLSL. Changing gl_FragDepth to try and mimic its screen-space behaviour ends badly.
GBUFFER GEOMETRY___________________________________________________
struct VS_OUT {
float4 posH : SV_POSITION;
float3 posW : POSITION;
float3 tangent : TANGENT0;
float3 bitangent : TANGENT1;
float2 texC : TEXCOORD0;
};
...
Vertex shader snippet of interest for position:
output.posH = mul(float4(posWorld, 1.0f), g_viewProj);
output.posH.z = output.posH.z * output.posH.w * g_invFarPlane;
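For my own notes while porting: SV_POSITION corresponds to gl_Position in the vertex shader (and gl_FragCoord on the fragment side), and the line above pre-multiplies by w so that after the hardware perspective divide the depth buffer stores, if I am reading the projection matrix right, (viewZ - near) / (far - near), a linear 0..1 value. A minimal GLSL 330 sketch of the same trick, assuming my own uniform names (u_viewProj, u_zNear, u_zFar) and GL's default [-1,1] NDC depth range:

#version 330

uniform mat4 u_viewProj;  // assumed: combined view-projection matrix
uniform float u_zNear;    // assumed: near plane distance
uniform float u_zFar;     // assumed: far plane distance

in vec3 a_posWorld;

void main() {
    // gl_Position is the SV_POSITION counterpart; the hardware divides
    // xyz by w after this stage.
    gl_Position = u_viewProj * vec4(a_posWorld, 1.0);
    // For a standard perspective matrix, clip-space w is the view-space
    // distance along the camera axis, so this is linear depth in [0,1]:
    float linear01 = (gl_Position.w - u_zNear) / (u_zFar - u_zNear);
    // Remap to GL's [-1,1] NDC z and pre-multiply by w so the divide
    // lands on a window depth of exactly linear01.
    gl_Position.z = (2.0 * linear01 - 1.0) * gl_Position.w;
}

If I understand the pipeline right, this is also why my gl_FragDepth experiments ended badly: the DX trick relies on the divide by w happening after the shader, so the vertex shader, before the divide, seems to be the natural place for it.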
POST PROCESSING_____________________________________________________
Shared declarations and vertex shader:
static const float EPSILON_ATMOSPHERE = 0.002f;
static const float EPSILON_INSCATTER = 0.004f;
Texture2D g_depth;
Texture2D g_color;
Texture2D g_normal;
Texture2D g_texIrradiance;
Texture3D g_texInscatter;
float3 g_cameraPos;
float3 g_sunVector;
float4x4 g_cameraWorld;
float4 g_frustumFar[4];
float4 g_frustumNear[4];
struct VS_IN {
float3 posL : POSITION;
float2 texC : TEXCOORD0;
uint index : TEXCOORD1;
};
struct VS_OUT {
float4 posH : SV_POSITION;
float2 texC : TEXCOORD0;
float3 nearToFar : TEXCOORD2;
float3 cameraToNear : TEXCOORD3;
};
VS_OUT VS(VS_IN input) {
VS_OUT output;
output.posH = float4(input.posL,1.0f);
output.texC = input.texC;
float3 frustumFarWorld = mul(float4(g_frustumFar[input.index].xyz, 1.0f), g_cameraWorld).xyz;
float3 frustumNearWorld = mul(float4(g_frustumNear[input.index].xyz, 1.0f), g_cameraWorld).xyz;
output.cameraToNear = frustumNearWorld - g_cameraPos;
output.nearToFar = frustumFarWorld - frustumNearWorld;
return output;
}
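My working GLSL 330 translation attempt for this vertex shader. Instead of uploading the precomputed float4[4] corner arrays, this sketch unprojects the quad's own NDC corners through an assumed inverse-projection uniform (u_invProj) and the camera's world matrix (u_cameraWorld); all names here are mine, not from the port:

#version 330

uniform mat4 u_invProj;      // assumed: inverse of the projection matrix
uniform mat4 u_cameraWorld;  // assumed: camera-to-world (inverse view)
uniform vec3 u_cameraPos;

in vec2 a_posNDC;  // fullscreen quad corners in [-1,1]
in vec2 a_texC;

out vec2 v_texC;
out vec3 v_cameraToNear;
out vec3 v_nearToFar;

// Undo the projection: take an NDC point back to view space.
vec3 unproject(vec2 ndcXY, float ndcZ) {
    vec4 p = u_invProj * vec4(ndcXY, ndcZ, 1.0);
    return p.xyz / p.w;
}

void main() {
    gl_Position = vec4(a_posNDC, 0.0, 1.0);
    v_texC = a_texC;

    // Near/far plane corners (GL NDC z: -1 near, +1 far) in view space,
    // then carried into world space by the camera matrix.
    vec3 nearWorld = (u_cameraWorld * vec4(unproject(a_posNDC, -1.0), 1.0)).xyz;
    vec3 farWorld  = (u_cameraWorld * vec4(unproject(a_posNDC,  1.0), 1.0)).xyz;

    v_cameraToNear = nearWorld - u_cameraPos;
    v_nearToFar    = farWorld - nearWorld;
}

With this approach the uint index attribute becomes unnecessary, since each quad vertex unprojects its own corner.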
Pixel Shader:
float4 PS_PLANET_DEFERRED(VS_OUT input) : SV_TARGET0 {
// reconstructing world space position by interpolation
float depthVal = g_depth.SampleLevel( PointSamplerClamp, input.texC, 0 ).r;
float3 surfacePos = g_cameraPos + input.cameraToNear + depthVal * input.nearToFar;
// obtaining the view direction vector
float3 viewDir = normalize(input.nearToFar);
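And my GLSL 330 counterpart of the reconstruction itself. Note this only works if the geometry pass wrote the linear 0..1 depth described above, because depthVal is used directly as the lerp factor along the near-to-far ray; u_depth stands in for g_depth and should be a non-compare depth texture with GL_NEAREST filtering (the PointSamplerClamp counterpart):

#version 330

uniform sampler2D u_depth;  // linear 0..1 depth from the geometry pass
uniform vec3 u_cameraPos;

in vec2 v_texC;
in vec3 v_cameraToNear;
in vec3 v_nearToFar;

out vec4 fragColor;

void main() {
    // Reconstruct the world-space position by lerping along the
    // per-pixel frustum ray with the stored linear depth.
    float depthVal = texture(u_depth, v_texC).r;
    vec3 surfacePos = u_cameraPos + v_cameraToNear + depthVal * v_nearToFar;
    vec3 viewDir = normalize(v_nearToFar);
    // ... lighting / inscatter lookups would go here ...
    fragColor = vec4(surfacePos, 1.0);  // debug output for now
}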
- Can anyone confirm how the posH value affects the encoding of the depth buffer, and what the OpenGL equivalent would be?
- Can anyone tell me the real values of those float4[4] frustum arrays and where they are derived from? I have no problem adding an index to my screen quad to link to them, or building the near or far clipping plane of a frustum; the problem is that this step is glossed over, and I am worried my deductive reasoning will be slower than my deadline. I really want my GBuffer ready to start assembling this post-processing effect before the Easter holiday is done. My current guess is sketched below.
I believe g_cameraWorld is the camera's world matrix. Judging by the w = 1.0 in the multiplies above, it must include translation as well as rotation, i.e. the full camera-to-world (inverse view) transform.
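For the frustum arrays, my best deduction so far: they hold the four view-space corners of the near and far planes, which fall straight out of the projection parameters. A hypothetical helper (my names, right-handed view space with the camera looking down -Z; a left-handed DX port would flip the z sign):

// cornerSign is one of (-1,-1), (1,-1), (1,1), (-1,1), matching the
// quad's index order; planeDist is the near or far plane distance.
vec3 frustumCorner(vec2 cornerSign, float planeDist, float fovY, float aspect) {
    float halfH = planeDist * tan(fovY * 0.5);
    float halfW = halfH * aspect;
    return vec3(cornerSign * vec2(halfW, halfH), -planeDist);
}
// g_frustumNear[i] = frustumCorner(corner[i], nearDist, fovY, aspect);
// g_frustumFar[i]  = frustumCorner(corner[i], farDist,  fovY, aspect);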
http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/
http://web.archive.org/web/20130416194336/http://olivers.posterous.com/linear-depth-in-glsl-for-real
http://stackoverflow.com/questions/6652253/getting-the-true-z-value-from-the-depth-buffer
http://www.opengl.org/discussion_boards/showthread.php/164734-Deferred-shading/page5
http://www.geeks3d.com/20091216/geexlab-how-to-visualize-the-depth-buffer-in-glsl/
So far I have had no success reconstructing position from raw depth by blending input from these snippets or by testing them in relative isolation. I have a feeling that everyone is tampering with the depth output in the geometry pass, but many of the snippets I have found are not very clear about exactly which parameters they use to do this, or why. I am going to focus on filling in the gaps in the model above, because the resources above suggest it is still an efficient mechanism for reconstructing the desirable spaces in most post-processing.
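For reference, the standard linearization I have been testing against for an untouched (non-linear) GL depth buffer, in case the geometry pass is left alone; zNear/zFar are assumed parameters:

// Recover view-space distance from a raw 0..1 GL depth sample.
float linearizeDepth(float d, float zNear, float zFar) {
    float ndcZ = d * 2.0 - 1.0;  // window depth -> [-1,1] NDC
    return (2.0 * zNear * zFar) / (zFar + zNear - ndcZ * (zFar - zNear));
}
// To drive the ray lerp above, normalize the result:
// t = (viewZ - zNear) / (zFar - zNear);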
Does anyone have a tutorial where this reconstruction process is applied as a holistic piece of functioning code? I would love to see the implementation of these frustum corners on the near and far planes. I just need to see a proper GBuffer pipeline that uses depth to reconstruct position and linear depth, so I can reverse-engineer it, inspect its properties, understand the bugs in my own code, and move on.
I really cannot wait to play with that effect. If I get my shaders reconstructing from depth, I will post them up and describe the parts that have so far confounded me.
I welcome any input.
Many Thanks,
Enlightened One