PSOutput ps_main(in float2 uv : TEXCOORD0,
                 in float3 viewdir : TEXCOORD1) {
    PSOutput output;
    // -- reconstruct the view-space position from the stored depth
    float depth = tex2D(depthsampler, uv).x;
    float3 viewpos = viewdir * depth;
    // -- bring the light position into view space
    float3 lposview = mul(lightpos, matView);
    float3 lightdir = normalize(lposview - viewpos);
    float3 v = normalize(viewdir);
    float3 n = normalize(tex2D(normalsampler, uv).xyz);
    float ndotl = max(0.0f, dot(n, lightdir));
    float3 r = reflect(lightdir, n);
    float rdotv = max(0.0f, dot(r, -v));
    // -- output light properties to the light buffer
    float4 lightbuffer;
    lightbuffer.xyz = lightcolor.xyz * ndotl;
    lightbuffer.w = rdotv * ndotl;
    output.lbuffer1 = lightbuffer;
    output.lbuffer2 = rdotv * ndotl;
    return output;
}
Reconstructing position from depth
Hello,
I've read the threads on reconstructing position from depth, tried the techniques, and gone over the math, but I'm still getting incorrect results. I could use another pair of eyes.
I'm pretty sure the problem is the view vector, because the diffuse looks correct but the specular doesn't: it has a pinch at the center of the screen. When I render the view vector out to a full-screen quad I get green at the top, red on the right, and black at the bottom left, with a pinch of the three colors in the middle. There should be blue at the center of the screen, where the camera is looking straight ahead, but there isn't, which leads me to believe my view-direction calculation is incorrect.
The depth is stored in view space, normalized to the far clip plane. The normals are also stored in view space.
// -- computing the stored linear depth
float4 viewpos = mul(inpos, matView);
float depth = length(viewpos.xyz) / fFarClipPlane;
olineardepth = float4(depth, 0.0f, 0.0f, 0.0f);
My pixel shader is at the top of this post.
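For comparison, a common variant of this technique stores view-space z instead of Euclidean distance; the interpolated ray then reaches the far plane exactly when the stored depth is 1, and must not be normalized before the multiply. This is only a sketch, and `frustumCorner` (the view-space far-plane corner fed in per full-screen-quad vertex) is an assumed input, not something from the code above:

```hlsl
// -- depth pass: store view-space z over the far plane (not length())
float4 viewpos = mul(inpos, matView);
olineardepth = float4(viewpos.z / fFarClipPlane, 0.0f, 0.0f, 0.0f);

// -- full-screen quad vertex shader: pass the view-space far-plane
//    frustum corner for this vertex as the ray (assumed input)
output.viewdir = frustumCorner.xyz;

// -- pixel shader: scale the UNNORMALIZED interpolated ray
float depth = tex2D(depthsampler, uv).x;
float3 viewpos = viewdir * depth;
```

With distance-based depth, the equivalent reconstruction would instead be `normalize(viewdir) * depth * fFarClipPlane`; mixing up which of the two forms the stored depth expects (normalizing a ray that should stay unnormalized, or vice versa) can produce center-of-screen artifacts like the pinch described above.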
Thank you,
-= Dave
I can't see where you unpack your normals: you just normalize them. But in view space only z is always positive; x and y can be negative, and you're dropping them.
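The usual remap, sketched here on the assumption that the normals pass through an unsigned [0, 1] texture format (`onormal` is a hypothetical G-buffer output name):

```hlsl
// -- G-buffer write: map [-1, 1] into [0, 1]
onormal.xyz = n * 0.5f + 0.5f;

// -- G-buffer read: map [0, 1] back into [-1, 1], then renormalize
float3 n = normalize(tex2D(normalsampler, uv).xyz * 2.0f - 1.0f);
```

With a signed or floating-point render target no remap is needed, but with a plain unsigned format, normalizing without the `* 2.0f - 1.0f` step clamps every negative x and y to zero, exactly as the reply points out.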