General questions on deferred lighting

14 comments, last by GameDevGoro 13 years, 8 months ago
I wrote a deferred renderer before. As for your concerns about reconstructing the positions of the fragments in screen space... when you are in the fragment shader, the fragment coordinate is in screen space.

The screen-space coordinate S of a fragment is given by
S = T * P, where T is the combined model-view-projection matrix and P is the position vector (usually in world coordinates). So, to get P from S and T, you "divide" S by T. Of course, when dealing with matrices, you instead multiply S by the inverse of T (and then divide by the resulting w component to undo the perspective divide). In OpenGL, you can use gl_ModelViewProjectionMatrixInverse, which is the built-in inverse of that transformation matrix.
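Here is a CPU-side sketch of that round trip in Python. The matrix T is an illustrative perspective projection with an identity model-view (standing in for gl_ModelViewProjection), and the point P is made up; nothing here comes from a specific engine.

```python
# Pure-Python sketch of S = T * P and P = inverse(T) * S.

def mat_mul_vec(m, v):
    """Multiply a 4x4 matrix (row-major list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def mat_inverse(m):
    """Invert a 4x4 matrix with Gauss-Jordan elimination."""
    n = 4
    aug = [list(m[r]) + [1.0 if r == c else 0.0 for c in range(n)]
           for r in range(n)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

# Illustrative perspective projection (near=1, far=100), identity model-view.
near, far = 1.0, 100.0
T = [[1.0, 0.0,  0.0,                          0.0],
     [0.0, 1.0,  0.0,                          0.0],
     [0.0, 0.0, -(far + near) / (far - near), -2.0 * far * near / (far - near)],
     [0.0, 0.0, -1.0,                          0.0]]

P = [3.0, -2.0, -10.0, 1.0]         # made-up world-space position, w = 1
S = mat_mul_vec(T, P)               # clip-space position
ndc = [s / S[3] for s in S]         # perspective divide -> screen space

# Going back: multiply by the inverse, then divide by w again.
back = mat_mul_vec(mat_inverse(T), ndc)
reconstructed = [x / back[3] for x in back]   # recovers P (up to rounding)
```

The second divide by w is the step that's easy to forget: inverse(T) applied to the divided coordinate gives P scaled by 1/w, so the w component tells you exactly what to divide out.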

Of course, you can always convert your world coordinates into screenspace coordinates then do your lighting calculations like that.
Hi!

I'll try that, thanks for explaining it. :)

I've also read something interesting in the StarCraft 2 technology paper (or whatever it was): under PS 3.0, in HLSL, you can use the VPOS semantic to make the position reconstruction easier.
But since I'm working in Cg, there's a similar, if not identical, semantic called WPOS, I believe. Could I use that?
Man I hate to do this but: *Bump*

I need to know whether that VPOS thing is actually worth considering, or if I should just ignore it.

Furthermore, I'm still confused about how to store depth properly, because I've seen many variations on that too. I've even seen the "float-packing" trick, where the depth is packed across the color channels and produces that weird-looking depth output. What's that for?
Packing a float into all four RGBA channels isn't really needed anymore; it was generally used to make sure that GPUs which couldn't handle the R32F format could still run the game. Just about every GPU out there now can handle a float format, so I wouldn't worry about it, certainly not if this is just for your own work.
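For reference, the legacy trick being discussed usually looks like the following. This is a pure-Python sketch of the common shader version; the 255-based weights are the standard ones and the sample value is arbitrary, not anything from the poster's engine.

```python
# A value in [0, 1) is spread across four channels so it survives an 8-bit
# render target; decoding is a dot product with the matching weights.

def pack_rgba(v):
    """Split a float in [0, 1) across four channels (each still in [0, 1))."""
    enc = [(v * f) % 1.0 for f in (1.0, 255.0, 65025.0, 16581375.0)]
    # Subtract what the next channel stores so the channels don't overlap.
    return [enc[0] - enc[1] / 255.0,
            enc[1] - enc[2] / 255.0,
            enc[2] - enc[3] / 255.0,
            enc[3]]

def unpack_rgba(rgba):
    """Recombine the four channels back into one float."""
    weights = (1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0)
    return sum(c * w for c, w in zip(rgba, weights))

depth = 0.7365                            # arbitrary sample value
decoded = unpack_rgba(pack_rgba(depth))   # round-trips back to ~0.7365
```

Each channel holds successively finer fractional bits of the value, which is also why the unpacked result looks like "weird" banded depth if you view the raw channels.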

I'm working on a deferred renderer now, my method for reconstructing world position is as follows:

Store the depth linearly in the G-buffer in an R32F render target. This is pretty simple; you just need the distance from your camera position to the 3D position of the pixel, like this:

OUT.Depth = length(IN.Pos3D - CamPos);

Then to reconstruct the position in your lighting shader you do something like the following:

//IN YOUR CAMERA CLASS:
BoundingFrustum frustum = new BoundingFrustum(Proj);
corners = frustum.GetCorners();
Matrix world = Matrix.CreateFromQuaternion(Rotation);
Vector3.Transform(corners, ref world, corners);

//IN YOUR RENDERING CLASS:
deferredLighting.Parameters["CornerPositions"].Elements[0].SetValue(Camera.Corners[4]);
deferredLighting.Parameters["CornerPositions"].Elements[1].SetValue(Camera.Corners[5]);
deferredLighting.Parameters["CornerPositions"].Elements[2].SetValue(Camera.Corners[7]);
deferredLighting.Parameters["CornerPositions"].Elements[3].SetValue(Camera.Corners[6]);

//IN YOUR VERTEX SHADER:
OUT.BackRay = CornerPositions[IN.TexCoords.x + IN.TexCoords.y * 2];

//IN YOUR PIXEL SHADER:
float depth = tex2D(DepthSampler, IN.TexCoords).r;
if(depth == 0) { discard; } //IT'S A BACKGROUND SKY PIXEL
float3 viewDir = normalize(IN.BackRay);
float3 pos3D = CamPos + depth * viewDir;
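To make the geometry of that reconstruction concrete, here is a small CPU-side Python sketch, assuming a made-up camera and surface point: store the camera-to-pixel distance as depth, then rebuild the position as CamPos + depth * normalize(ray), as the pixel shader above does.

```python
import math

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def length(v):
    return math.sqrt(sum(x * x for x in v))

def normalize(v):
    l = length(v)
    return [x / l for x in v]

cam = [2.0, 5.0, -3.0]              # assumed camera position
point = [10.0, 1.0, 14.0]           # assumed world-space surface point

# G-buffer pass: OUT.Depth = length(IN.Pos3D - CamPos)
depth = length(sub(point, cam))

# Lighting pass: the interpolated frustum-corner ray points from the camera
# through this pixel; here we take the true direction directly, since
# interpolating the four corner rays across the quad yields it per pixel.
back_ray = sub(point, cam)          # stands in for the interpolated IN.BackRay
view_dir = normalize(back_ray)
pos3d = [c + depth * d for c, d in zip(cam, view_dir)]   # equals point again
```

Note that this only works exactly because the stored depth is the Euclidean distance; if you store view-space z instead, you scale the un-normalized ray rather than normalizing it.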
Portfolio & Blog:http://scgamedev.tumblr.com/
Hello. :D

I will try my best at making this work. Thanks for the reference, that will surely help a lot.

Which space is this in? I've seen this way of reconstruction before but I can't recall what space it works in.
Since all my other stuff is in world space (like I've mentioned before) does that matter at all?
The reconstructed pixel position has to be in world space since my light is in it, right?

Thanks.

Edit* Oh crap, nevermind, you said 'world position' which I can only assume is what I'm after. Sorry!

[Edited by - GameDevGoro on September 4, 2010 11:44:37 AM]
OK, I'm just going to post here again since I'm making some progress (anti-progress, really).

I have my light showing up now, but it's sort of 'lopsided' depending on the angle the camera is looking at it from, top to bottom that is.

Looking at it directly from above turns it into a line aligned horizontally with the screen. I'm guessing this has something to do with depth.

Any ideas of what might be wrong or do I need to supply more info?

This topic is closed to new replies.
