Deferred Shading question

nini


Hi all, I managed to implement a deferred renderer, but I store the position as a vec3. Now I want to store only the distance in a render target. How do I reconstruct the world position from that? I've searched around and found that some people interpolate the camera direction vector in the pixel shader, but what do I do next? Here's what I do, and the result is wrong...

In the vertex shader I draw a screen quad in normalized device coordinates, take that position as a camera direction, and output it in the texcoord. In the pixel shader I read the view-space distance from the texture (the previous render target) and do something like this:

// Sample the distance according to the current texture coords [0..1]
float d = tex2D(sDist, uv).x;
camDir.z = 1;
float3 Viewpos = camDir * d;

I would appreciate some help with this math trick... Thanks in advance.

If you have the depth, then all you need are the post-perspective X and Y. Luckily those are trivial to obtain, since they're part of the input position passed into the pixel shader when you render the full-screen quad. All you do is take the X and Y passed in as the post-perspective pixel position, replace their Z with the depth you have, and multiply it by the inverse perspective matrix to get the point back in view space. The code might look like this:

// Rebuild the clip-space (post-perspective) position: keep the interpolated x, y
// and w, and replace z with the stored depth scaled back up by w.
float4 perspective_position = float4(in.Pos.x, in.Pos.y, tex2D(sDist, in.UV).x * in.Pos.w, in.Pos.w);
// Undo the projection to get back to view space.
float4 view_position = mul(perspective_position, invPerspective);

And there's not much else to it [smile] If you want it in world space, just multiply by the inverse view matrix as well.
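For example, that last step could look like the following minimal sketch; invView is an assumed name for the inverse of the view matrix (it isn't defined in the post), and the homogeneous divide is my addition as a safety step:

// view_position comes from the snippet above
float4 view_pos_w1    = view_position / view_position.w;  // homogeneous divide (w may not be 1)
float4 world_position = mul(view_pos_w1, invView);        // inverse view takes view space to world space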

Quote:
Original post by Zipster
float4 perspective_position = float4(in.Pos.x, in.Pos.y, tex2D(sDist, in.UV).x * in.Pos.w, in.Pos.w);
float4 view_position = mul(perspective_position, invPerspective);
And there's not much else to it [smile] If you want it in world space, just multiply by the inverse view matrix as well.


Okay, thanks, but what do I put in in.Pos.w? I only have x and y.

Quote:
Original post by nini
Quote:
Original post by Zipster
float4 perspective_position = float4(in.Pos.x, in.Pos.y, tex2D(sDist, in.UV).x * in.Pos.w, in.Pos.w);
float4 view_position = mul(perspective_position, invPerspective);
And there's not much else to it [smile] If you want it in world space, just multiply by the inverse view matrix as well.


Okay, thanks, but what do I put in in.Pos.w? I only have x and y.


I think he means you pass the position vector into the pixel shader as a float4.
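To make that concrete, here is a minimal sketch of a full-screen quad vertex shader that passes its clip-space position through an extra interpolator so the pixel shader can read x, y and w. The structure and semantic names are my own, not from the posts above; for a quad specified directly in clip space, w is simply 1:

struct QuadVSOut
{
    float4 HPos : POSITION;   // consumed by the rasterizer
    float2 UV   : TEXCOORD0;  // used to sample the depth texture
    float4 Pos  : TEXCOORD1;  // same clip-space position, readable in the pixel shader
};

QuadVSOut QuadVS(float4 inPos : POSITION, float2 inUV : TEXCOORD0)
{
    QuadVSOut o;
    o.HPos = inPos;  // quad vertices already given in clip space: (+/-1, +/-1, 0, 1)
    o.Pos  = inPos;  // pass the full float4 along so in.Pos.w is available
    o.UV   = inUV;
    return o;
}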

Am I doing something wrong when writing the distance into the render target?

To encode the distance into the R32F render target I do this in the vertex shader:

Output.pos = mul(matWorldViewProjection, Input.pos);
Output.distance = Output.pos.z / Output.pos.w;

But when I change the view, the dot product dot(N, L) changes, as if the reconstructed world position is not the same. For example, in the last pixel shader I do:

float4 Screen_p = float4(Screenpos.x, Screenpos.y, Distance, 1);
float3 Worldpos = mul(Screen_p, matViewProjectionInverse);

L = LightPos - Worldpos;
normalize(L);
normalize(N);
IDiffuse = dot(N, L);
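One detail worth calling out here (my note, not from the thread): multiplying a point built from normalized device coordinates by the inverse view-projection matrix gives a homogeneous result whose w is generally not 1, so the xyz must be divided by that w before being used as a world position; also, HLSL's normalize returns a value rather than modifying its argument. A minimal sketch of the last pixel shader with those two fixes, assuming Distance holds the z/w written above and Screenpos is in the -1..1 range:

float4 Screen_p = float4(Screenpos.x, Screenpos.y, Distance, 1.0f);
float4 Homog    = mul(Screen_p, matViewProjectionInverse);
float3 Worldpos = Homog.xyz / Homog.w;        // homogeneous divide -- skipping this
                                              // makes the result depend on the view

float3 Ln = normalize(LightPos - Worldpos);   // normalize returns the result,
float3 Nn = normalize(N);                     // it does not modify in place
float  IDiffuse = dot(Nn, Ln);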

Well, I'm pretty sure the encoding step for the distance is wrong, and the reconstruction of the world position too...

Can someone explain this to me like I'm three years old?

Thanks in advance.

There are lots of papers out there that describe this. This is a good one:

http://fabio.policarpo.nom.br/docs/Deferred_Shading_Tutorial_SBGAMES2005.pdf

Quote:
Original post by wolf
There are lots of papers out there that describe this. This is a good one:

http://fabio.policarpo.nom.br/docs/Deferred_Shading_Tutorial_SBGAMES2005.pdf


I've read it, but unfortunately it doesn't explain the trick for reconstructing the position...

Quote:
Original post by Kenneth Gorking
There is a simpler and faster way to do it. Page 11 of http://www.ati.com/developer/siggraph06/Wenzel-Real-time_Atmospheric_Effects_in_Games.pdf tells you how.


Okay, this paper is very interesting, and yes, that's what I was searching for... But when it says "For full screen effects have the distance from the camera's position to its four corner points at the far clipping plane interpolated",

how do I calculate this vector? I have the normalized coordinates (e.g. -1..1 for x and y), but those aren't the coordinates at the far plane, are they?
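For what it's worth, here is one way to build that interpolated ray (a sketch only, with assumed names such as cameraPosWS, matViewProjectionInverse and sDepth): un-project the quad's corner from NDC onto the far clipping plane (z = 1 in Direct3D NDC) with the inverse view-projection matrix, subtract the camera position, and let the rasterizer interpolate the result. Note that this variant assumes the render target stores linear view-space depth divided by the far-plane distance (viewPos.z / farPlane), which is a different quantity from the z/w discussed earlier in the thread.

float4x4 matViewProjectionInverse;
float3   cameraPosWS;   // camera position in world space
sampler  sDepth;        // R32F target holding viewPos.z / farPlane

struct RayVSOut
{
    float4 HPos : POSITION;
    float2 UV   : TEXCOORD0;
    float3 Ray  : TEXCOORD1;   // camera -> far-plane point for this corner
};

RayVSOut QuadVS(float4 inPos : POSITION, float2 inUV : TEXCOORD0)
{
    RayVSOut o;
    o.HPos = inPos;
    o.UV   = inUV;

    // Un-project this corner onto the far clipping plane.
    float4 farCorner = mul(float4(inPos.xy, 1.0f, 1.0f), matViewProjectionInverse);
    farCorner /= farCorner.w;              // homogeneous divide
    o.Ray = farCorner.xyz - cameraPosWS;   // its view-space z equals the far-plane distance
    return o;
}

float4 QuadPS(RayVSOut i) : COLOR
{
    float  linearDepth = tex2D(sDepth, i.UV).x;            // viewPos.z / farPlane
    float3 worldPos    = cameraPosWS + i.Ray * linearDepth;
    return float4(worldPos, 1.0f);                         // or feed this into lighting
}

Because the interpolated ray's view-space z component is exactly the far-plane distance, scaling it by viewZ / farPlane lands on the correct point along that pixel's view ray.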
