Deferred Shading question

I haven't tried this, but from what I understand:

Calculate 4 vectors, one from the camera position to each of the far (or near, for that matter) frustum corners, and normalize them. Pass these vectors as texcoords (or whatever) into the vertex shader and pass them through to the pixel shader (as v3FrustumRay below) so they get interpolated for you. DO NOT normalize the vectors after interpolation!

Also, pass through simple texcoords for your scene depth texture for each of the corners, as (0,0), (1,0), (1,1) and (0,1). Sample your linear eye depth texture using these interpolated texcoords to get fLinearSceneDepth.

In the pixel shader, you should be able to use something like this to get your world position of the pixel:

float3 v3PixelWorldPosition = v3FrustumRay * fLinearSceneDepth + v3CameraPos;
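
Putting that together, a rough sketch of what the shader pair might look like (untested, and the constant/sampler names are just placeholders I've made up):

float3 v3CameraPos;          // world-space camera position
sampler2D sLinearDepthTex;   // depth as distance along the normalized ray

struct VS_IN
{
    float4 Pos        : POSITION;   // fullscreen quad vertex, already in clip space
    float2 TexCoord   : TEXCOORD0;  // (0,0) (1,0) (1,1) (0,1) corner texcoords
    float3 FrustumRay : TEXCOORD1;  // normalized camera-to-corner ray
};

struct VS_OUT
{
    float4 Pos        : POSITION;
    float2 TexCoord   : TEXCOORD0;
    float3 FrustumRay : TEXCOORD1;
};

VS_OUT FullscreenVS(VS_IN In)
{
    VS_OUT Out;
    Out.Pos        = In.Pos;        // just pass everything through...
    Out.TexCoord   = In.TexCoord;
    Out.FrustumRay = In.FrustumRay; // ...the rasterizer interpolates the ray
    return Out;
}

float4 ReconstructPS(float2 TexCoord : TEXCOORD0,
                     float3 v3FrustumRay : TEXCOORD1) : COLOR
{
    // Remember: do NOT normalize the interpolated ray here!
    float fLinearSceneDepth = tex2D(sLinearDepthTex, TexCoord).r;
    float3 v3PixelWorldPosition = v3FrustumRay * fLinearSceneDepth + v3CameraPos;
    return float4(v3PixelWorldPosition, 1.0);
}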

Hope this helps (and makes sense)!
Quote:Original post by Rompa
I haven't tried this, but from what I understand:

Calculate 4 vectors, each from the camera position to the far (or near for that matter) frustum points, and normalize them.



Okay, where do you calculate those vectors (in the app or in the vertex shader)?

Here's what I do in HLSL at the moment:

float4 vViewPosition;


struct VS_INPUT
{
float3 Pos: POSITION;
};

struct VS_OUTPUT
{
float4 Pos: POSITION;
float2 TexCoord: TEXCOORD0;
float4 ScreenDir : TEXCOORD1;
};

VS_OUTPUT main(VS_INPUT In)
{
    VS_OUTPUT Out;

    Out.Pos = float4(sign(In.Pos.xy), 0.0, 1.0);
    Out.TexCoord.x = (Out.Pos.x + 1.0) * 0.5;
    Out.TexCoord.y = 1.0 - ((Out.Pos.y + 1.0) * 0.5);
    Out.ScreenDir = Out.Pos - vViewPosition;

    return Out;
}


This doesn't work...
You can do either: precalculate them on the CPU or generate them per-vertex like you do. I precalculate them once per camera view in mine for other reasons, but either will do.

In regards to your code, the calculation of the texcoords for sampling your scene depth looks fine, as does calculating the clip-space coords for the vertex positions. I think there's a problem with the ray calculation though: it mixes a clip-space vertex position with a world-space camera position. I assume you want the ray in world space? You'll either need to project Out.Pos back into world space (multiply by the inverse of the view-projection matrix) or use world-space vertices. It all depends on what space you want your per-pixel position to be in.

Hope this helps...?
Quote:Original post by Rompa

Hope this helps...?



Okay, thank you very much, that helped greatly!
The job is done...

I'm posting the shader here so that others can use this math trick!

VS_OUTPUT main(VS_INPUT In)
{
    VS_OUTPUT Out;

    Out.Pos = float4(sign(In.Pos.xy), 0.0, 1.0);
    Out.TexCoord.x = (Out.Pos.x + 1.0) * 0.5;
    Out.TexCoord.y = 1.0 - ((Out.Pos.y + 1.0) * 0.5);
    // Project the clip-space position back into world space before
    // subtracting the world-space camera position.
    Out.ScreenDir = mul(matViewProjectionInverse, Out.Pos) - vViewPosition;

    return Out;
}

This works for world position reconstruction. Don't forget to encode the ray length (camera-to-pixel distance) in the depth buffer, not the z/w ratio.
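
For the depth pass, that just means writing out something like this (a quick sketch; v3WorldPos would come down from the vertex shader):

float4 vViewPosition; // world-space camera position, as above

float4 DistancePS(float3 v3WorldPos : TEXCOORD0) : COLOR
{
    // Store the camera-to-pixel distance, not z'/w'.
    float fDistance = length(v3WorldPos - vViewPosition.xyz);
    return float4(fDistance, 0.0, 0.0, 1.0); // e.g. into a half-float target
}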

Thanks for all the help.

One more question: why do you use it in clip space (to save framerate)?
"Once per camera view" means once for a camera's rendering pass. If you only have one camera, it effectively means once per frame. I'd just pass in the world-space ray in a float3 texcoord, as you're only rendering 4 vertices and are hardly going to be vertex bound. Sorry for the confusion.

A question I had for you: why use the sign(In.Pos.xy) function and not just pass the normalized clip coordinates from the app? e.g. (-1, 1, 0) for top-left-near...

Anyway, I'm glad you got it working. By the way, if you do happen to have a depth texture with z'/w' stored in it (a certain DirectX-based console, for example), then you can calculate the linear eye depth using just a few constants from the projection matrix, I think.

If z'/w' is what's stored in the depth texture, then it was calculated as

z'/w' = (linear_eye_z * m33 + m43) / linear_eye_z

where m33 and m43 are components of the projection matrix.

So we can rearrange and get:

linear_eye_z = -m43 / (m33 - (z'/w'))

and we only need a subtraction and a division to recover linear_eye_z from a depth texture storing z'/w' (assuming I haven't stuffed up the calculations!)
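
In HLSL the recovery boils down to something like this (just a sketch; how you get m33 and m43 into the shader is up to you, and the constant name is made up):

float2 g_ProjConsts; // x = m33, y = m43, pulled from the projection matrix

float LinearEyeDepth(float fZOverW)
{
    // linear_eye_z = -m43 / (m33 - z'/w')
    return -g_ProjConsts.y / (g_ProjConsts.x - fZOverW);
}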

Cheers...
Quote:Original post by Rompa

A question I had for you is: why use the sign(In.Pos.xy) function and not just pass the normalized clip coordinates from the app? eg. (-1, 1, 0) for top-left-near ...



In fact this portion of code comes from ATI's RenderMonkey, but once the shader is bound into my app I will use clip coordinates...

Thanks again for your suggestion; you're totally right about it. I will give it a try tomorrow!

Thanks again...


Hmm... that's an interesting technique, using the view-space depth like that. Is it possible to make it work for arbitrary vertex locations (for when using light volumes in the deferred pass)?
You can pretty much use it for anything as long as you have a depth texture representing the scene from the current camera's point of view... it would mean you'd need to render your light volumes into the depth texture though, which I'm not sure would be viable.

I implemented this last night on the 360 where I render a shadowmap as a depth buffer and then render the scene as per usual. I then capture the depth buffer as a texture and use the depth reconstruction to get linear eye depth, calculate the world position of the pixel, and then proceed to shadowmap it as per usual. It works a treat and means I do my shadowing as a deferred pass rather than modifying any of my shaders to perform shadowing. Actually, that's not quite true as it only shadows stuff in the depth buffer, so you need to have a shader for your translucent stuff if you want it to receive shadows. I'm going to add shadow receiving to my particles in this manner.
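
The deferred shadow pass itself boils down to something like this (a generic sketch rather than my actual 360 code; matLightViewProj and the sampler names are made up):

float3 v3CameraPos;        // world-space camera position
float4x4 matLightViewProj; // world space -> light clip space
sampler2D sSceneDepth;     // captured scene depth (linear eye depth)
sampler2D sShadowMap;      // light-space depth map

float4 DeferredShadowPS(float2 TexCoord : TEXCOORD0,
                        float3 v3FrustumRay : TEXCOORD1) : COLOR
{
    // Reconstruct the world position exactly as before.
    float fDepth = tex2D(sSceneDepth, TexCoord).r;
    float3 v3WorldPos = v3FrustumRay * fDepth + v3CameraPos;

    // Project into light space and compare against the shadowmap.
    float4 v4LightPos = mul(matLightViewProj, float4(v3WorldPos, 1.0));
    v4LightPos.xyz /= v4LightPos.w;
    float2 v2ShadowUV = v4LightPos.xy * float2(0.5, -0.5) + 0.5;

    float fShadowDepth = tex2D(sShadowMap, v2ShadowUV).r;
    float fLit = (v4LightPos.z <= fShadowDepth + 0.001) ? 1.0 : 0.0; // crude constant bias
    return fLit.xxxx;
}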

The other thing, for multiple viewports (i.e. a 4-player racing game), is that you can't render one fullscreen quad: you need to render a quad for each viewport, as each viewport has its own frustum (rays) and projection matrix.
Ahh right, deferred shadowing. I've heard that it can be a big win in the general case.
Hi all,

I have implemented deferred shadows too, but I have a problem:

When you move away from the objects, the shadows begin to flicker a lot. I think it's a precision problem, because if I render only the depth texture (half float) it seems to have visible bands (something like dithering issues). Have any of you had similar problems with this technique?

Thanks in advance.
