General questions on deferred lighting


Hello! :)

I *hope* this thread belongs here. I've searched for answers to these questions, but I haven't had any luck finding them on my own, so I turn to you.
I was in here a while ago asking for help with my deferred lighting implementation, and I got it working.

But after all that I still have some (I think) very general questions about a few of the things commonly used in these sorts of shaders.
So on with the questions:

1) What is the biggest hazard/concern with storing positions directly in the texture, as opposed to storing depth and reconstructing the position from it?
(As far as I can tell it just uses more GPU memory?)

2) I've read many times, in many implementations, about a "frustum corner". Is this *really* just the frustum corner of the far clip plane? Or have I missed something?

3) As for rendering multiple lights: that part goes in my application, not the shader, right? Since I can use the same shader to render all the lights?

Thanks for your time. :)

Quote:
Original post by GameDevGoro
1) What is the biggest hazard/concern with storing positions directly in the texture, as opposed to storing depth and reconstructing the position from it?
(As far as I can tell it just uses more GPU memory?)

Yes, it just uses more memory...

Quote:
Original post by GameDevGoro
2) I've read many times, in many implementations, about a "frustum corner". Is this *really* just the frustum corner of the far clip plane? Or have I missed something?

It's not just the frustum corner... it's a ray from the eye position to the far plane.
You interpolate between the four corners of the far frustum plane across the full-screen quad; then you multiply the interpolated ray by the pixel's depth and you get the view-space position.
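For example, something like this (just a rough HLSL/Cg sketch; the FarCorners array, the DepthSampler, and a depth buffer storing linear view-space depth divided by the far-plane distance are all assumptions, not from any particular engine):

float3 FarCorners[4];   // view-space far-plane corners (assumed order below)
sampler2D DepthSampler; // assumed: linear view-space depth / far-plane distance

// Vertex shader: pick the corner matching this quad vertex; the rasterizer
// then interpolates the ray across the whole screen.
float3 GetFarRay(float2 texCoord)
{
    // Assumed order: [0] top-left, [1] top-right, [2] bottom-left, [3] bottom-right
    return FarCorners[(int)(texCoord.x + texCoord.y * 2)];
}

// Pixel shader: scale the interpolated ray by the normalized linear depth.
float3 ReconstructViewPos(float3 interpolatedRay, float2 texCoord)
{
    float depth = tex2D(DepthSampler, texCoord).r; // 0 at the eye, 1 at the far plane
    return interpolatedRay * depth;                // view-space position
}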

Quote:
Original post by GameDevGoro
3) As for rendering multiple lights: that part goes in my application, not the shader, right? Since I can use the same shader to render all the lights?

Are you just using one shader?
You need 3 shaders... one for the geometry pass, another for the lighting pass, and another for the material pass (or second geometry pass). You render all the lights in the full-screen quad lighting pass...
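To be clear, the lighting shader itself only ever handles one light: the application draws the full-screen quad once per light with additive blending, so the same shader accumulates all of them. A rough per-light pixel shader sketch (the G-buffer samplers and light uniforms here are made-up names):

sampler2D NormalSampler;   // world-space normals from the geometry pass, stored in [0,1]
sampler2D PositionSampler; // world-space positions (or reconstruct from depth instead)
float3 LightPos;
float3 LightColor;
float LightRadius;

float4 PointLightPS(float2 texCoord : TEXCOORD0) : COLOR0
{
    float3 pos    = tex2D(PositionSampler, texCoord).xyz;
    float3 normal = normalize(tex2D(NormalSampler, texCoord).xyz * 2 - 1);

    float3 toLight = LightPos - pos;
    float  dist    = length(toLight);
    float  atten   = saturate(1 - dist / LightRadius);
    float  nDotL   = saturate(dot(normal, toLight / dist));

    // Additive blending accumulates this into the light buffer,
    // one full-screen pass per light.
    return float4(LightColor * nDotL * atten, 1);
}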

Hi, Aqua Costa! :D

I guess that adequately answers the questions so I'm happy with that. :)

And I am using separate shaders for the geometry pass and so on. I meant the lighting pass only: using that same vertex and fragment (pixel) shader to render all of the lights.

And oh. I thought of another question that I missed asking in the first post. (Sorry!)

Question: I'm a tad bit confused about the resolution to use for the render-buffers, because there are still some graphics cards that don't support non-POT texture resolutions, right? So if I create my render-buffer at screen resolution (which is 1024x768 right now), that's not a POT resolution (I hope I'm right), so how is that going to behave across hardware? Should I cram my render-buffers into 512x512 instead?

Thanks again! :)

I've never heard anything about POT resolutions... but I believe it's OK to use a resolution of 1024x768, because if you use 512x512 you will probably get lots of artifacts and reduce the quality...
(In my engine I'm using a resolution of 1440x900 for my render targets)

Don't worry about POT textures; that's a limitation of old hardware, and unless you're aiming at really ancient cards it won't matter. In fact, if the hardware doesn't support NPOT textures, it probably doesn't support the other functionality necessary for deferred rendering either.

Your render buffers must always be of the same resolution that you use for rendering.

(Off topic: Did my last reply really double post or am I seeing things? :O)

OK, fair enough. I'm not targeting ancient hardware, since my own computer isn't.
And yeah, I guess I sort of have to use the same resolution; anything else just looks (understandably) muddy. :)

Quote:
Original post by GameDevGoro
1) What is the biggest hazard/concern with storing positions directly in the texture, as opposed to storing depth and reconstructing the position from it?
(As far as I can tell it just uses more GPU memory?)


Not just more memory but, more importantly, more bandwidth.

The other problem is precision. To get the same precision as reconstructing the position from a 24- or 32-bit depth value, you need to use 32-bit floating point. That means at minimum 12 bytes per pixel instead of 4, unless you go for 16-bit floating point and live with the artifacts (things can get really ugly once shadow mapping is involved).
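To put rough numbers on it, the two G-buffer layouts look something like this (just a sketch; the formats, uniforms, and function names here are assumptions, and the actual cost depends on what your hardware exposes):

// Option A: write the full world-space position. With a typical RGBA32F
// target that's 16 bytes per pixel, written in the geometry pass and read
// back in every lighting pass.
float4 PS_StorePosition(float3 worldPos : TEXCOORD0) : COLOR0
{
    return float4(worldPos, 1);
}

// Option B: write one linear depth value to an R32F target (4 bytes per
// pixel) and reconstruct the position in the lighting pass from a camera ray.
float3 CamPos;
float FarPlane;

float4 PS_StoreDepth(float3 worldPos : TEXCOORD0) : COLOR0
{
    return float4(length(worldPos - CamPos) / FarPlane, 0, 0, 0);
}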

Huh. OK. I guess I'll have to invest some time in learning how to reconstruct the position from depth, then.

I'm still a bit confused by the different spaces and such...
But I guess that if I have everything else in world space, and the depth is stored in... uh... normalized view space (I've heard something about this), then I need to convert the reconstructed position from view space to world space?

I wrote a deferred renderer before. As for your concerns about reconstructing the positions of the fragments in screen space: when you are in the fragment shader, the fragment coordinate is in screen space.

The screen-space coordinate S of the fragment is given by S = T*P, where T is the combined view-projection transformation matrix and P is the position vector (usually in world coordinates). So, to get P from S and T, you "divide" S by T. Of course, when dealing with matrices, you instead multiply S by the inverse of T, and then divide by the resulting w component to undo the perspective divide. In OpenGL, you can use gl_ModelViewProjectionMatrixInverse, which is the inverse of that transformation matrix.

Of course, you can always convert your world coordinates into screen-space coordinates and then do your lighting calculations that way.
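In HLSL/Cg terms that works out to something like this (a minimal sketch; InvViewProj is an assumed uniform equivalent to GLSL's gl_ModelViewProjectionMatrixInverse, and a D3D-style [0,1] depth and row-vector matrix convention are assumed):

float4x4 InvViewProj;   // assumed uniform: inverse view-projection matrix
sampler2D DepthSampler; // hardware depth in [0,1] (D3D convention; OpenGL also maps z to [-1,1])

float3 ReconstructWorldPos(float2 texCoord)
{
    float depth = tex2D(DepthSampler, texCoord).r;

    // Texture coords -> normalized device coords ([-1,1], y flipped).
    float4 clip = float4(texCoord.x * 2 - 1,
                         (1 - texCoord.y) * 2 - 1,
                         depth, 1);

    // "Divide by T": multiply by the inverse matrix, then undo the
    // perspective divide with the resulting w.
    float4 world = mul(clip, InvViewProj);
    return world.xyz / world.w;
}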

Hi!

I'll try that, thanks for explaining it. :)

I've also read something interesting in the StarCraft 2 technical paper (or whatever it was): that under PS 3.0 you can, in HLSL, use the VPOS semantic to make the position reconstruction easier.
But since I'm working in Cg, there's a similar (if not identical) semantic called WPOS, I believe. Could I use that?
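From what I understand the usage would look something like this (just my guess at a sketch; ScreenSize is an assumed uniform holding the render-target dimensions):

sampler2D DepthSampler;
float2 ScreenSize; // assumed uniform: render target width/height in pixels

float4 LightPS(float2 vpos : VPOS) : COLOR0 // the semantic is WPOS in Cg
{
    // VPOS gives the pixel's window coordinates, so the G-buffer texcoords
    // can be derived without passing them down from the vertex shader.
    float2 texCoord = (vpos + 0.5) / ScreenSize;
    float depth = tex2D(DepthSampler, texCoord).r;
    return float4(depth, depth, depth, 1); // just visualize the depth for now
}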

Man, I hate to do this, but: *Bump*

I need to know whether that VPOS thing is actually worth considering, or if I should just ignore it.

And furthermore, I'm still confused about how to store depth properly, because I've seen many variations of that too. I've even seen the "float-packing" thing, where the depth is packed and produces that weird-looking depth texture. What's that for?

Packing a float into all four RGBA channels isn't really needed anymore; it was generally used to make sure that GPUs which couldn't handle the R32F format could still run the game. Just about every GPU out there now can handle a float format, so I wouldn't worry about it, certainly not if this is just for your own work.
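For reference, the packing trick usually looks roughly like this (a commonly seen sketch; the exact constants vary between implementations). The weird-looking depth you mentioned is just a side effect: each channel stores progressively finer fractional bits, so viewed as a color image it looks like banded noise.

// Pack a float in [0,1) into four 8-bit channels.
float4 PackFloatToRGBA(float value)
{
    float4 packed = value * float4(1.0, 255.0, 65025.0, 16581375.0);
    packed = frac(packed);
    packed -= packed.yzww * float4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);
    return packed;
}

// Recover the original float from the four channels.
float UnpackFloatFromRGBA(float4 packed)
{
    return dot(packed, float4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0));
}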

I'm working on a deferred renderer now; my method for reconstructing the world position is as follows:

Store the depth linearly in the G-buffer in an R32F render target. This is pretty simple; you just need the distance from your camera position to the 3D position of the pixel, like this:

OUT.Depth = length(IN.Pos3D - CamPos); // world-space distance from the camera, not view-space z

Then to reconstruct the position in your lighting shader you do something like the following:


//IN YOUR CAMERA CLASS:
//Corners 0-3 of an XNA BoundingFrustum are the near plane, 4-7 the far plane.
BoundingFrustum frustum = new BoundingFrustum(Proj);
corners = frustum.GetCorners();
//Rotate the corner rays into world space (directions relative to the camera).
Matrix world = Matrix.CreateFromQuaternion(Rotation);
Vector3.Transform(corners, ref world, corners);

//IN YOUR RENDERING CLASS:
//The far-plane corners, reordered so the index matches the texcoord lookup below.
deferredLighting.Parameters["CornerPositions"].Elements[0].SetValue(Camera.Corners[4]);
deferredLighting.Parameters["CornerPositions"].Elements[1].SetValue(Camera.Corners[5]);
deferredLighting.Parameters["CornerPositions"].Elements[2].SetValue(Camera.Corners[7]);
deferredLighting.Parameters["CornerPositions"].Elements[3].SetValue(Camera.Corners[6]);

//IN YOUR VERTEX SHADER:
//Quad texcoords are 0 or 1 at each corner, so this picks one of the four rays;
//the rasterizer interpolates it across the screen.
OUT.BackRay = CornerPositions[(int)(IN.TexCoords.x + IN.TexCoords.y * 2)];

//IN YOUR PIXEL SHADER:
float depth = tex2D(DepthSampler, IN.TexCoords).r;
if(depth == 0) { discard; } //IT'S A BACKGROUND SKY PIXEL
//Depth holds a world-space distance, so scale the normalized ray by it.
float3 viewDir = normalize(IN.BackRay);
float3 pos3D = CamPos + depth * viewDir;



Hello. :D

I will try my best at making this work. Thanks for the reference, that will surely help a lot.

Which space does this work in? I've seen this way of reconstructing before, but I can't recall which space it works in.
Since all my other stuff is in world space (like I've mentioned before), does that matter at all?
The reconstructed pixel position has to be in world space, since my light is, right?

Thanks.

Edit: Oh crap, never mind, you said 'world position', which I can only assume is what I'm after. Sorry!


OK, I'm just going to post here again since I'm making some progress (anti-progress, really).

I have my light showing up now, but it's sort of 'lopsided' depending on the angle the camera is looking at it from, top to bottom that is.

Looking at it directly from above turns it into a line aligned horizontally with the screen; I'm guessing this has something to do with depth.

Any ideas about what might be wrong, or do I need to supply more info?

