Deferred Shading question


Hi all, I managed to implement a deferred renderer, but I currently store the position as a vec3. Now I want to store only the distance in a render target. How do I reconstruct the world position from that? I've searched around and found that people interpolate a camera direction vector in the pixel shader, but what do I do next? Here's what I do, and the result is wrong.

In the vertex shader I draw a screen quad in normalized device coordinates and output those coordinates as a camera direction in the texcoords. In the pixel shader I read the view-space distance from the previous render target and do something like this:

// Sample the distance according to the current texture coords [0..1]
d = tex2D(sDist, uv);
camDir.z = 1;
float3 Viewpos = camDir * d;

I would appreciate some help with this math trick. Thanks in advance.

If you have the depth, then all you need are the post-perspective X and Y. Luckily those are trivial to obtain, since they're part of the input position passed into the pixel shader when you render the full-screen quad. All you do is take the X and Y passed in as the post-perspective pixel position, replace their Z with the depth you have, and multiply it by the inverse perspective matrix to get the point back in view space. The code might look like this:

float4 perspective_position = float4(in.Pos.x, in.Pos.y, tex2D(sDist, in.UV).x * in.Pos.w, in.Pos.w);
float4 view_position = mul(perspective_position, invPerspective);

And there's not much else to it [smile] If you want it in world space, just multiply by the inverse view matrix as well.
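For completeness, here is a minimal sketch of that reconstruction as a full pixel-shader helper (the sampler, matrix and struct names are assumptions, not from the original post); note the divide by w after the inverse transform, which takes the point out of homogeneous coordinates:

// Sketch: reconstruct the view-space position of a pixel, assuming sDist
// stores post-perspective depth (z'/w') and ClipPos carries the full-screen
// quad's clip-space position.
sampler2D sDist;
float4x4 invPerspective;    // inverse of the projection matrix

struct PS_INPUT
{
    float4 ClipPos : TEXCOORD0;   // clip-space position passed from the VS
    float2 UV      : TEXCOORD1;   // texcoords for sampling the depth texture
};

float3 ReconstructViewPos(PS_INPUT input)
{
    float depth = tex2D(sDist, input.UV).x;                  // stored z'/w'
    float4 clip = float4(input.ClipPos.xy,
                         depth * input.ClipPos.w,
                         input.ClipPos.w);
    float4 view = mul(clip, invPerspective);                 // row-vector convention
    return view.xyz / view.w;                                // homogeneous divide
}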

Quote:
Original post by Zipster
float4 perspective_position = float4(in.Pos.x, in.Pos.y, tex2D(sDist, in.UV).x * in.Pos.w, in.Pos.w);
float4 view_position = mul(perspective_position, invPerspective);
And there's not much else to it [smile] If you want it in world space, just multiply by the inverse view matrix as well.


Okay, thanks, but what do I put in in.Pos.w? I only have x and y.

Quote:
Original post by nini
Okay, thanks, but what do I put in in.Pos.w? I only have x and y.


I think he means you pass the position vector into the pixel shader as a float4.
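For example (a small sketch, assuming a full-screen quad whose vertices are already in normalized device coordinates; the struct and semantic names are illustrative), the vertex shader can copy its output position into a spare interpolator so the pixel shader sees x, y and w:

// Sketch: pass the clip-space position through a texcoord so the pixel
// shader can read x, y and w, not just the rasterized position.
struct QUAD_VS_OUTPUT
{
    float4 Pos     : POSITION;    // consumed by the rasterizer
    float2 UV      : TEXCOORD0;   // for sampling the depth texture
    float4 ClipPos : TEXCOORD1;   // same value as Pos, readable in the PS
};

QUAD_VS_OUTPUT FullScreenQuadVS(float3 inPos : POSITION)
{
    QUAD_VS_OUTPUT Out;
    Out.Pos     = float4(inPos.xy, 0.0, 1.0);                 // quad already in NDC
    Out.UV      = float2(inPos.x + 1.0, 1.0 - inPos.y) * 0.5; // [-1,1] -> [0,1], y flipped
    Out.ClipPos = Out.Pos;
    return Out;
}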

Am I wrong in how I write the distance into the render target?

This is what I do in the vertex shader to encode the distance into an R32F render target:

Output.pos = mul(matWorldViewProjection, Input.pos);
Output.distance = Output.pos.z / Output.pos.w;

But when I move the view, the dot product dot(N, L) seems to change, as if the reconstructed world position is not the same. In the final pixel shader I do:

float4 Screen_p = float4(Screenpos.x, Screenpos.y, Distance, 1);
float3 Worldpos = mul(Screen_p, matViewProjectionInverse).xyz;

L = LightPos - Worldpos;
L = normalize(L);
N = normalize(N);
IDiffuse = dot(N, L);

Well, I'm pretty sure that both my encoding of the distance and my reconstruction of the world position are wrong...

Can someone explain this to me like I'm three years old?

Thanks in advance.

There are lots of papers out there that describe this. This is a good one:

http://fabio.policarpo.nom.br/docs/Deferred_Shading_Tutorial_SBGAMES2005.pdf

Quote:
Original post by wolf
There are lots of papers out there that describe this. This is a good one:

http://fabio.policarpo.nom.br/docs/Deferred_Shading_Tutorial_SBGAMES2005.pdf


I've read it, but unfortunately it doesn't explain the trick for reconstructing the position...

Quote:
Original post by Kenneth Gorking
There is a simpler and faster way to do it. Page 11 of http://www.ati.com/developer/siggraph06/Wenzel-Real-time_Atmospheric_Effects_in_Games.pdf tells you how.


Okay, this paper is very interesting, and yes, that's what I was searching for...
But where it says: "For full screen effects have the distance from the camera's position to its four corner points at the far clipping plane interpolated"

How do I calculate that vector? I have the normalized coordinates (e.g. -1, 1 for x and y), but that isn't the coordinate at the far plane, is it?

I haven't tried this, but from what I understand:

Calculate 4 vectors, each from the camera position to the far (or near, for that matter) frustum corner points, and normalize them. Pass these vectors as texcoords or whatever into the vertex shader and pass them through to the pixel shader (as v3FrustumRay below) so that they get interpolated for you. DO NOT normalize the vectors after interpolation!

Also, pass through simple texcoords for your scene depth texture for each of the corners, as (0,0), (1,0), (1,1) and (0,1). Sample your linear eye depth texture using these interpolated texcoords to get fLinearSceneDepth.

In the pixel shader, you should then be able to use something like this to get the world position of the pixel:

float3 v3PixelWorldPosition = v3FrustumRay * fLinearSceneDepth + v3CameraPos;

Hope this helps (and makes sense)!
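As a rough sketch of one way to get those corner rays (not code from this thread; it builds the ray per vertex by unprojecting the far-plane corner, and assumes matViewProjectionInverse and vViewPosition are supplied by the app, with the mul order matching how the app uploads its matrices):

// Sketch: build the frustum corner ray in the full-screen quad's vertex
// shader by unprojecting the corner at the far clipping plane (z = 1 in
// D3D clip space) back to world space.
float4x4 matViewProjectionInverse;
float4   vViewPosition;           // camera position in world space

struct RAY_VS_OUTPUT
{
    float4 Pos        : POSITION;
    float2 UV         : TEXCOORD0;
    float3 FrustumRay : TEXCOORD1;   // interpolated across the quad
};

RAY_VS_OUTPUT FrustumRayVS(float3 inPos : POSITION)
{
    RAY_VS_OUTPUT Out;
    Out.Pos = float4(inPos.xy, 0.0, 1.0);                     // quad in NDC
    Out.UV  = float2(inPos.x + 1.0, 1.0 - inPos.y) * 0.5;

    // Far-plane corner in world space.
    float4 farCorner = mul(float4(inPos.xy, 1.0, 1.0), matViewProjectionInverse);
    farCorner.xyz /= farCorner.w;

    Out.FrustumRay = normalize(farCorner.xyz - vViewPosition.xyz);
    return Out;
}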

Quote:
Original post by Rompa
I haven't tried this, but from what I understand:

Calculate 4 vectors, each from the camera position to the far (or near, for that matter) frustum corner points, and normalize them.


Okay, where do you calculate those vectors (in the app or in the vertex shader)?

Here's what I do in HLSL at the moment:

float4 vViewPosition;

struct VS_INPUT
{
    float3 Pos: POSITION;
};

struct VS_OUTPUT
{
    float4 Pos: POSITION;
    float2 TexCoord: TEXCOORD0;
    float4 ScreenDir : TEXCOORD1;
};

VS_OUTPUT vs_main(VS_INPUT In)   // full-screen quad vertex shader
{
    VS_OUTPUT Out;

    Out.Pos = float4(sign(In.Pos.xy), 0.0, 1.0);
    Out.TexCoord.x = (Out.Pos.x + 1.0) * 0.5;
    Out.TexCoord.y = 1.0 - ((Out.Pos.y + 1.0) * 0.5);
    Out.ScreenDir = Out.Pos - vViewPosition;

    return Out;
}

This doesn't work...

You can do either: precalculate them on the CPU, or generate them per-vertex like you do. I precalculate them once per camera view in mine for other reasons, but either will do.

In regards to your code, the calculation of the texcoords for sampling your scene depth looks fine, as does calculating the clip-space coords for the vertex positions. I think there's a problem with the ray calculation code, though... it seems to be mixing a clip-space vertex position with a world-space camera position. I assume you want the ray in world space? You may either need to project Out.Pos back into world space (multiply by the inverse of the view-projection matrix) or use world-space vertices. It all depends on which 'space' you want your per-pixel position in.

Hope this helps...?

Quote:
Original post by Rompa

Hope this helps...?


Okay, thank you very much, it helped greatly!
The job is done...

I'm posting the shader here so that others can have this math trick!

VS_OUTPUT vs_main(VS_INPUT In)
{
    VS_OUTPUT Out;

    Out.Pos = float4(sign(In.Pos.xy), 0.0, 1.0);
    Out.TexCoord.x = (Out.Pos.x + 1.0) * 0.5;
    Out.TexCoord.y = 1.0 - ((Out.Pos.y + 1.0) * 0.5);
    Out.ScreenDir = mul(matViewProjectionInverse, Out.Pos) - vViewPosition;

    return Out;
}

This works for world-position reconstruction. Don't forget to encode the length (distance) in the distance buffer, not the z/w ratio.

Thanks for everything.

One more question: why do you use it in clip space (to save framerate)?

"Once per camera view" means once for a camera's rendering pass. If you only have one camera, it effectively means once per frame. I'd just pass in the world-space ray in a float3 texcoord, as you're only rendering 4 vertices and are hardly going to be vertex bound. Sorry for the confusion.

A question I had for you is: why use the sign(In.Pos.xy) function and not just pass the normalized clip coordinates from the app? eg. (-1, 1, 0) for top-left-near ...

Anyway, I'm glad you got it working. By the way if you do happen to have a depth texture with z'/w' stored in it (a certain DirectX based console for example), then you can calculate the linear eye depth using just a few constants from the projection matrix, I think.

Something like if z'/w' is in the depth texture, then it was calculated with
z'/w' = (linear_eye_z * m33 + m43) / linear_eye_z
where m33 and m43 are the projection matrix components.

So we can rearrange and get:
linear_eye_z = -m43 / (m33 - (z'/w'))
and only need a division and subtraction to recover linear_eye_depth from a depth texture storing z'/w' (assuming I haven't stuffed up the calculations!)

Cheers...
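In shader terms that recovery is a one-liner; a minimal sketch (the sampler name and the packing of the two projection constants into a float2 are assumptions):

// Sketch: recover linear eye-space depth from a stored z'/w' value.
// vProjConstants.x = m33, vProjConstants.y = m43 of the projection matrix.
sampler2D sceneDepthTex;
float2    vProjConstants;

float LinearEyeDepth(float2 uv)
{
    float zOverW = tex2D(sceneDepthTex, uv).x;                // stored z'/w'
    return -vProjConstants.y / (vProjConstants.x - zOverW);
}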

Quote:
Original post by Rompa

A question I had for you: why use the sign(In.Pos.xy) function and not just pass the normalized clip coordinates from the app? e.g. (-1, 1, 0) for top-left-near...


In fact this portion of code comes from ATI's RenderMonkey, but when the shader is bound to my app I will use clip coordinates...

Again, thanks for your suggestion, you are totally right about it; I will give it a try tomorrow!

Thanks again...


You can pretty much use it for anything, as long as you have a depth texture representing the scene from the current camera's point of view... it would mean you'd need to render your light volumes into the depth texture, though, which I'm not sure would be viable.

I implemented this last night on the 360, where I render a shadowmap as a depth buffer and then render the scene as usual. I then capture the depth buffer as a texture, use the depth reconstruction to get linear eye depth, calculate the world position of the pixel, and then proceed to shadowmap it as usual. It works a treat and means I do my shadowing as a deferred pass rather than modifying any of my shaders to perform shadowing. Actually, that's not quite true, as it only shadows what's in the depth buffer, so you still need a shadowing shader for your translucent stuff if you want it to receive shadows. I'm going to add shadow receiving to my particles in this manner.

The other thing, for multiple viewports (i.e. a 4-player racing game), is that you can't render a single full-screen quad - you need to render a quad for each viewport, as each has its own frustum (rays) and projection matrix.
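As a rough illustration of that deferred shadow pass (a sketch only, not the actual 360 code; the sampler and matrix names are assumptions and the console-specific depth-buffer resolve is omitted): reconstruct the pixel's world position from scene depth, project it into light space, and compare against the shadow map.

// Sketch of a deferred shadowing pass: scene depth -> world position ->
// light-space depth compare. All names are illustrative.
sampler2D sceneDepthTex;       // linear eye-space scene depth
sampler2D shadowMapTex;        // depth rendered from the light
float4x4  matLightViewProj;    // world space -> light clip space
float3    vCameraPos;
float     fShadowBias;

float4 DeferredShadowPS(float2 uv         : TEXCOORD0,
                        float3 frustumRay : TEXCOORD1) : COLOR0
{
    // World position of the pixel, using the reconstruction discussed above.
    float  eyeDepth = tex2D(sceneDepthTex, uv).x;
    float3 worldPos = vCameraPos + frustumRay * eyeDepth;

    // Project into the light's clip space and derive shadow-map coordinates.
    float4 lightClip  = mul(float4(worldPos, 1.0), matLightViewProj);
    float2 shadowUV   = lightClip.xy / lightClip.w * float2(0.5, -0.5) + 0.5;
    float  lightDepth = lightClip.z / lightClip.w;

    // Shadowed if the pixel is farther from the light than the stored depth.
    float shadowed = (lightDepth - fShadowBias) > tex2D(shadowMapTex, shadowUV).x;
    return 1.0 - shadowed;     // 1 = lit, 0 = in shadow
}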

Hi all,

I have implemented deferred shadows too, but I have a problem:

When you move away from the objects, the shadows begin to flicker a lot. I think it's a precision problem, because if I render only the depth texture (half float) it seems to have visible bands (something like dithering issues). Have any of you had similar problems with this technique?

Thanks in advance.

Quote:
Original post by lyonsbane
When you move away from the objects, the shadows begin to flicker a lot. I think it's a precision problem, because if I render only the depth texture (half float) it seems to have visible bands (something like dithering issues).


Simple solution: use more than a 16F value for depth. It's not enough; you'll get banding (just like in the old days when video cards only had 16-bit Z).

Use a 24-32 bit packed depth, a 32F format, etc.
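If a 32F format isn't available, one common workaround (a sketch of the general packing technique, not code from this thread) is to spread a [0,1) depth value across the four channels of an RGBA8 target:

// Sketch: pack a depth value in [0,1) into an RGBA8 render target and
// unpack it again when reconstructing position. The depth must be
// pre-normalized, e.g. view-space z divided by the far plane distance.
float4 PackDepth(float depth)
{
    float4 enc = frac(depth * float4(1.0, 255.0, 65025.0, 16581375.0));
    // Remove the bits that were carried into the next channel.
    enc -= enc.yzww * float4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);
    return enc;
}

float UnpackDepth(float4 enc)
{
    return dot(enc, float4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0));
}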

Okay, now I want to implement Rompa's method ;-) world-space reconstruction with z'/w' stored in a texture...

So here are the steps I take in HLSL:

==================
1) Encoding z'/w'
==================

In the vertex shader:

z' is the z value after projection, w' is the w value after projection.

If my maths are not too bad, the projected z value is:

z' = z*m22 + m32; (with m22 = matProjection[2][2] and m32 = matProjection[3][2])
w' = z;

So I propagate those values to the pixel shader.

In the pixel shader:

return z'/w' in an R32F texture.

===========================
2) Worldpos reconstruction
===========================

Vertex shader:

Propagate the screen direction to the pixel shader:

Out.screendir = float3(Output.pos.x, Output.pos.y, 1);

with Output.pos.xy being the full-screen coords in projection space (-1,1  1,1 etc.).

Pixel shader:

We must find the linear eye-space depth. To do this I compute:

linear_eye_z = m32 / ((z'/w') - m22); because z'/w' = (z*m22 + m32) / z

and Worldpos = camerapos + linear_eye_z * screendir;

The results are wrong and I can't find out why... I debug this by outputting the reconstructed worldpos to a texture, generating another texture by outputting worldpos directly from the vertex data, and the two textures are different, which proves to me that it doesn't work.

I think it's because of my screendir calculation, but I can't figure it out...
Please, I need some help with this...

Here is what I'm confused about:
does linear_eye_space_depth equal the world-space Z value?
do we need to compute z/w with matProjection or matViewProjection?
how do I calculate screendir from clip-space coords (i.e. after projection)?

Thanks in advance.

Hi,

On the PC I would recommend using linear eye-space depth, not z'/w', as it makes the pixel shader cheaper (no divide!) and the accuracy a bit more predictable. I only use z'/w' because I get it for free by resolving the depth buffer on the 360.

As I mentioned in my previous posts, linear eye z is

linear_eye_z = -m43 / (m33 - (z'/w'))

but if you encode linear_eye_z in your shadowmap texture then there's no need to reconstruct it like this. Note that I use 1-based indices here, like the D3DMATRIX struct.

Linear eye-space z is the z value of a coordinate in camera (view) space, but I guess you could also use the homogeneous z after the perspective matrix has been applied, as it just rescales z from between z_near and z_far to between 0 and 1.

If you use view-space depth (values ranging from 0 to far), then you reconstruct the world position with

world_pos = camera_pos + frustum_ray * linear_eye_depth

where frustum_ray is a ray of unit length towards the frustum corner vertices. Note that this ray is only of unit length in the vertex shader and should NOT be normalized in the pixel shader, as you want interpolation like the edge of a triangle rather than an arc. Hopefully you get what I mean here.

You could probably also use homogeneous z; then frustum_ray would be a vector of NON-unit length from the near frustum vertices to the far frustum vertices (so that multiplying by 0..1 takes you from near to far), or something like that. I've not tested this, but it follows the same principles.

z'/w' is the result of the combined world-view-projection matrix, so if you're inverting the math to reconstruct view-space depth, you only need to use the projection matrix. If you want world-space z (why?), then use the view-projection matrix. Essentially you're transforming from 'projection' space back to 'view' space, so you just need to invert the projection part of the transform. I like to think of matrix transforms as a transform from one space to another - as to how accurate this view is, I don't know, but it has served me well enough :-)

I recommend reading "Real-Time Fog using Post-processing in OpenGL" to gain a little more insight into these methods, even though their projection math is slightly different. I can vouch for the linear-eye-space-from-z'/w' math, though, as I use it in a working deferred shadow system.

I hope this answers at least some of your questions.
Cheers.
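Putting that together, a minimal pixel-shader sketch of the linear-eye-depth path described above (the sampler, interpolator and constant names are assumptions, not code posted in this thread):

// Sketch: world-position reconstruction from a linear eye-space depth
// texture and an interpolated frustum ray, as described above.
sampler2D linearDepthTex;      // stores view-space depth (0..far), e.g. R32F
float3    vCameraPos;          // camera position in world space

float4 ReconstructWorldPosPS(float2 uv         : TEXCOORD0,
                             float3 frustumRay : TEXCOORD1) : COLOR0
{
    // Do NOT renormalize frustumRay here; it must stay linearly interpolated.
    float  linearEyeDepth = tex2D(linearDepthTex, uv).x;
    float3 worldPos = vCameraPos + frustumRay * linearEyeDepth;
    return float4(worldPos, 1.0);
}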

Thank you very much, Rompa, I really appreciate your help, but when you say:

Quote:
Original post by Rompa

world_pos = camera_pos + frustum_ray * linear_eye_depth
where frustum_ray is a ray of unit length towards the frustum corner vertices.


What are the frustum vertices? The (-1, 1, 1) xyz coords of the full-screen quad?

I'm still stuck and I don't know why... Here is the vertex-shader output code I use for encoding linear z:

Output.linear_z = mul(Input.Position, matView).z;

Then, in the vertex shader of the full-screen quad:

Out.Pos = float4(sign(In.Position.xy), 0.0, 1.0);
Out.ScreenDir = normalize(float3(Out.Pos.xy, 1));
Out.uv = In.uv;

Then, in the pixel shader (screendir is a float3 in TEXCOORD0):

float linear_z = tex2D(Texture0, uv).x;
float3 worldpos = vViewPosition + (linear_z * screendir);

I've already tested my UV coords.

If you have an email address, I can send you a RenderMonkey 1.71 workspace with the shader testbed; if you display the different passes, the original worldpos texture comes out different from the reconstructed worldpos texture...

This is giving me a lot of headaches.
If you can help, send me your email address in a private message so I can send you the .rfx.

Of course I will publish the solution here, because it will save time for others who want to play with deferred shading...

