Deferred Shading question

Started by
26 comments, last by Rompa 16 years, 6 months ago
Hi all, I managed to implement a deferred renderer, but I store position as a vec3. Now I want to store only the distance in a render target. How do I reconstruct the world position from that? I've searched and found that some people interpolate the camera direction vector in the pixel shader, but what do I do next? Here's what I do, and the result is wrong.

In the vertex shader I draw a screen quad in normalized device coordinates, then take this as a camera direction and output it in the texcoords. In the pixel shader I read the view-space distance from the texture (the previous render target) and do something like this:

// Sample the distance at the current texture coords [0..1]
float d = tex2D(sDist, uv);
camDir.z = 1;
float3 ViewPos = camDir * d;

I would appreciate some help with this math trick. Thanks in advance.
If you have the depth, then all you need are the post-perspective X and Y. Luckily those are trivial to obtain, since they're part of the input position passed into the pixel shader when you render the full-screen quad. All you do is take the X and Y passed in as the post-perspective pixel position, replace their Z with the depth you have, and multiply it by the inverse perspective matrix to get the point back in view space. The code might look like this:
float4 perspective_position = float4(in.Pos.x, in.Pos.y, tex2D(sDist, in.UV).x * in.Pos.w, in.Pos.w);
float4 view_position = mul(perspective_position, invPerspective);

And there's not much else to it [smile] If you want it in world space, just multiply by the inverse view matrix as well.
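Put together as a full pixel-shader fragment, the approach might look like this (a sketch: it assumes the vertex shader passes the quad's clip-space position through a TEXCOORD interpolator so it is not divided by w, and that `invView` is the inverse view matrix; the one step left implicit above is the homogeneous divide after the inverse transform):

```hlsl
// in.Pos: clip-space position of the full-screen quad, passed from the
// vertex shader in a TEXCOORD interpolator (so it keeps its w).
float depth  = tex2D(sDist, in.UV).x;                 // stored z/w
float4 persp = float4(in.Pos.xy, depth * in.Pos.w, in.Pos.w);
float4 view  = mul(persp, invPerspective);            // back toward view space
float3 viewPos = view.xyz / view.w;                   // homogeneous divide
// For world space, multiply by the inverse view matrix as well:
float3 worldPos = mul(float4(viewPos, 1.0f), invView).xyz;
```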
Quote:Original post by Zipster
float4 perspective_position = float4(in.Pos.x, in.Pos.y, tex2D(sDist, in.UV).x * in.Pos.w, in.Pos.w);
float4 view_position = mul(perspective_position, invPerspective);

And there's not much else to it [smile] If you want it in world space, just multiply by the inverse view matrix as well.

Okay, thanks, but what do I put in in.Pos.w? I only have x and y.
Quote:Original post by nini
Quote:Original post by Zipster
float4 perspective_position = float4(in.Pos.x, in.Pos.y, tex2D(sDist, in.UV).x * in.Pos.w, in.Pos.w);
float4 view_position = mul(perspective_position, invPerspective);

And there's not much else to it [smile] If you want it in world space, just multiply by the inverse view matrix as well.

Okay, thanks, but what do I put in in.Pos.w? I only have x and y.

I think he means you pass the position vector into the pixel shader as a float4.

Am I wrong in how I write the distance into the render target?

I do this in a vertex shader to encode the distance into an R32F render target:

Output.pos = mul(matWorldViewProjection, Input.pos);
Output.distance = Output.pos.z / Output.pos.w;

But when I move the camera, the dot product dot(N, L) changes, as if the reconstruction of WorldPos isn't stable. Here's what I do in the final pixel shader:

float4 Screen_p = float4(Screenpos.x, Screenpos.y, Distance, 1);
float3 Worldpos = mul(Screen_p, matViewProjectionInverse);

L = LightPos - Worldpos;
L = normalize(L);
N = normalize(N);
IDiffuse = dot(N, L);

Well, I'm pretty sure that both my distance encoding and my world-position reconstruction are wrong.

Can someone explain this to me like I'm three years old?

Thanks in advance.
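For reference, here is a sketch of how both halves of the code above could be fixed, reusing the names from the posts (the key missing step is the divide by w after the inverse view-projection transform; passing z and w separately and dividing per-pixel is an assumption to avoid interpolation error across the triangle):

```hlsl
// --- G-buffer pass, vertex shader ---
// Pass clip-space z and w separately; interpolating a per-vertex z/w
// through a perspective-correct interpolator gives wrong values, so the
// divide is done per-pixel instead.
Output.pos   = mul(matWorldViewProjection, Input.pos);
Output.depth = Output.pos.zw;

// --- G-buffer pass, pixel shader ---
return Input.depth.x / Input.depth.y;     // store z/w in the R32F target

// --- Lighting pass, pixel shader ---
// Screenpos.xy is the quad position in [-1,1]; Distance is the stored z/w.
float4 Screen_p = float4(Screenpos.x, Screenpos.y, Distance, 1.0f);
float4 p = mul(Screen_p, matViewProjectionInverse);
float3 Worldpos = p.xyz / p.w;            // the missing homogeneous divide

float3 L = normalize(LightPos - Worldpos);
float  IDiffuse = dot(normalize(N), L);
```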
There are lots of papers out there that describe this. This is a good one:

http://fabio.policarpo.nom.br/docs/Deferred_Shading_Tutorial_SBGAMES2005.pdf
Quote:Original post by wolf
There are lots of papers out there that describe this. This is a good one:

http://fabio.policarpo.nom.br/docs/Deferred_Shading_Tutorial_SBGAMES2005.pdf


I've read it, but unfortunately it doesn't explain the trick for reconstructing the position...
There is a simpler and faster way to do it. Page 11 of http://www.ati.com/developer/siggraph06/Wenzel-Real-time_Atmospheric_Effects_in_Games.pdf tells you how.
Quote:Original post by Kenneth Gorking
There is a simpler and faster way to do it. Page 11 of http://www.ati.com/developer/siggraph06/Wenzel-Real-time_Atmospheric_Effects_in_Games.pdf tells you how.


Okay, this paper is very interesting, and yes, that's exactly what I was searching for. But when he says: "For full screen effects have the distance from the camera's position to its four corner points at the far clipping plane interpolated",

how do I calculate this vector? I have the normalized coordinates (e.g. -1..1 for x and y), but that's not the coordinate at the far plane, is it?
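One way to build those corner vectors is to unproject the NDC corners (a sketch, assuming a D3D-style clip space where the far plane sits at z = 1; note that for this reconstruction the depth target must store linear view distance divided by the far distance, not z/w):

```hlsl
// Unproject an NDC corner at the far plane and subtract the camera
// position to get the eye-to-far-corner vector.
float3 FarCorner(float2 ndcXY, float4x4 matViewProjectionInverse, float3 camPos)
{
    float4 corner = mul(float4(ndcXY, 1.0f, 1.0f), matViewProjectionInverse);
    return corner.xyz / corner.w - camPos;
}

// Full-screen quad vertex shader: output the corner vector so the
// rasterizer interpolates it per-pixel.
// Output.eyeRay = FarCorner(Input.pos.xy, matViewProjectionInverse, camPos);

// Pixel shader: with depth stored as (view distance / far distance),
// reconstruction collapses to a single multiply-add:
// float3 Worldpos = camPos + Input.eyeRay * tex2D(sDist, uv).x;
```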
