General questions on deferred lighting

Hello! :)

I *hope* this thread belongs here. I have searched for answers to these questions, but I haven't had any luck answering them on my own, so I turn to you.
I was in here a while ago asking for help on my deferred lighting implementation, and I got it working.

But after all that I still have some fairly general (I think) questions about a few of the things commonly seen in these sorts of shaders.
So on with the questions:

1) What is the biggest hazard/concern with storing positions directly in a texture, as opposed to storing depth and reconstructing the position from it?
(As far as I can tell it's just that it uses more GPU memory?)

2) I've read about a "frustum corner" in many implementations. Is this *really* just the frustum corner, i.e. of the far clip plane? Or have I missed something?

3) As for rendering multiple lights: that part goes on my application end, not in the shader, right? Since I can use the same shader to render all the lights?

Thanks for your time. :)
Quote:Original post by GameDevGoro
1) What is the biggest hazard/concern with storing positions directly in a texture, as opposed to storing depth and reconstructing the position from it?
(As far as I can tell it's just that it uses more GPU memory?)

Yes, it just uses more memory...

Quote:Original post by GameDevGoro
2) I've read about a "frustum corner" in many implementations. Is this *really* just the frustum corner, i.e. of the far clip plane? Or have I missed something?

It's not just the frustum corner... it's a ray from the eye position to the far plane.
You pass the four corners of the far frustum plane as vertex attributes of the full-screen quad, so the rasterizer interpolates them into a per-pixel ray. Then you multiply that interpolated ray by the pixel's (linear, normalized) depth and you get the view-space position.
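
To make that concrete, here is a minimal GLSL sketch of the technique Aqua Costa describes. All names (farCorner, depthTexture) are illustrative assumptions, and it assumes the geometry pass stored linear view-space depth divided by the far-plane distance:

```glsl
// ---- vertex shader (full-screen quad) ----
// Each quad vertex carries the matching far-plane frustum corner in view
// space; the rasterizer interpolates it into a per-pixel ray.
attribute vec3 farCorner;      // hypothetical attribute filled in by the app
varying vec3 viewRay;
varying vec2 texCoord;

void main()
{
    viewRay     = farCorner;
    texCoord    = gl_MultiTexCoord0.xy;
    gl_Position = gl_Vertex;   // quad vertices are already in clip space
}

// ---- fragment shader ----
// Assumes depthTexture holds linear view-space depth over the far-plane
// distance, i.e. a value in 0..1.
uniform sampler2D depthTexture;
varying vec3 viewRay;
varying vec2 texCoord;

void main()
{
    float linearDepth = texture2D(depthTexture, texCoord).r;
    vec3  viewPos     = viewRay * linearDepth;  // view-space position
    gl_FragColor      = vec4(viewPos, 1.0);     // placeholder: light it here
}
```

The scaling works by similar triangles: the interpolated ray points through the pixel to the far plane, so multiplying it by depth/far lands exactly on the surface point.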

Quote:Original post by GameDevGoro
3) As for rendering multiple lights: that part goes on my application end, not in the shader, right? Since I can use the same shader to render all the lights?

Are you just using one shader?
You need 3 shaders... one for the geometry pass, another for the lighting pass, and another for the material pass (or second geometry pass). You render all the lights in the full-screen-quad lighting pass...
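
For question 3, a common pattern is indeed to bind the lighting shader once and draw the full-screen quad once per light with additive blending, updating only the light uniforms between draws. A hedged GLSL sketch of such a per-light fragment shader (the uniform names and the linear attenuation model are assumptions, not anything from this thread):

```glsl
// Lighting-pass fragment shader for ONE point light. The application
// enables additive blending (glBlendFunc(GL_ONE, GL_ONE)) and draws the
// full-screen quad once per light, changing only these uniforms.
uniform sampler2D normalTexture;  // view-space normals from the geometry pass
uniform sampler2D depthTexture;   // normalized linear depth
uniform vec3  lightPosView;       // light position in view space
uniform vec3  lightColor;
uniform float lightRadius;
varying vec3  viewRay;            // interpolated far-plane corner (see above)
varying vec2  texCoord;

void main()
{
    float linearDepth = texture2D(depthTexture, texCoord).r;
    vec3  pos = viewRay * linearDepth;   // reconstructed view-space position
    vec3  n   = normalize(texture2D(normalTexture, texCoord).xyz * 2.0 - 1.0);

    vec3  toLight = lightPosView - pos;
    float dist    = length(toLight);
    float atten   = clamp(1.0 - dist / lightRadius, 0.0, 1.0);
    float ndotl   = max(dot(n, toLight / dist), 0.0);

    gl_FragColor = vec4(lightColor * ndotl * atten, 1.0);
}
```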

Hi, Aqua Costa! :D

I guess that adequately answers the questions, so I'm happy with that. :)

And I am using separate shaders for the geometry pass and so on. I meant for the lighting pass only: using that same vertex and fragment (pixel) shader to render all of the lights.

And oh. I thought of another question that I missed asking in the first post. (Sorry!)

Question: I'm a tad bit confused about the resolution to use for the render buffers, because there are still some graphics cards that don't support non-POT texture resolutions, right? So if I create my render buffer at screen resolution (which is 1024x768 right now), that's not a POT resolution (I hope I'm right), so how is that going to work across hardware? Should I cram my render buffers into 512x512 instead?

Thanks again! :)
I've never heard anything about POT resolutions... but I believe it's OK to use a resolution of 1024x768, because if you use 512x512 you will probably get lots of artifacts and reduced quality...
(In my engine I'm using a resolution of 1440x900 for my render targets.)
OK I see... I'll just keep it as it is then. :)

Thank you for taking the time to answer, Aqua Costa, you've been really helpful! :D
Don't worry about POT textures; that's a limitation of old hardware. Unless you're aiming at really ancient hardware, it shouldn't be an issue. In fact, if the hardware doesn't support NPOT textures, then it probably doesn't support the other functionality necessary for deferred rendering anyway.

Your render buffers should always be the same resolution that you render at.

OK, fair enough. I'm not targeting ancient hardware, since my own computer isn't ancient either.
And yeah, I guess I sort of have to use the same resolution; anything else just looks (understandably) muddy. :)
Quote:Original post by GameDevGoro
1) What is the biggest hazard/concern with storing positions directly in a texture, as opposed to storing depth and reconstructing the position from it?
(As far as I can tell it's just that it uses more GPU memory?)


Not just more memory, but more importantly more bandwidth.

The other problem is precision. To get the same precision as reconstructing the position from a 24- or 32-bit depth value, you need to store the position as 32-bit floating point. This means at minimum 12 bytes per pixel instead of 4, unless you go for 16-bit floating point and live with the artifacts (things can get really ugly once shadow mapping is involved).
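
By contrast, storing depth takes a single channel. A minimal geometry-pass sketch of writing normalized linear depth, assuming a floating-point render target and a farPlane uniform (both names are assumptions for illustration):

```glsl
// Geometry-pass fragment shader: instead of a 12-byte position, write one
// channel of normalized linear depth (view-space z over the far plane).
uniform float farPlane;
varying vec3  viewPos;  // view-space position passed from the vertex shader

void main()
{
    // Negated because OpenGL view space looks down the negative z axis.
    gl_FragData[0] = vec4(-viewPos.z / farPlane);
    // gl_FragData[1], gl_FragData[2], ... would hold normals, albedo, etc.
}
```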

Huh. OK. I guess I'll have to invest some time in learning this reconstruct-the-position-from-depth thing.

I'm still a bit confused by the different spaces and such...
But I guess that if I have everything else in world space, and the depth needs to be... uh... normalized view space (I've heard something about this), then I need to convert the reconstructed position from view space back to world space?
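
For what it's worth, that is the usual answer: reconstruct in view space, then move the result into world space with the inverse of the view (camera) matrix, uploaded by the application. A minimal sketch; invViewMatrix is an assumed uniform name:

```glsl
// Converts a position reconstructed in view space into world space.
uniform mat4 invViewMatrix;  // inverse of the camera/view matrix

vec3 viewToWorld(vec3 viewPos)
{
    return (invViewMatrix * vec4(viewPos, 1.0)).xyz;
}
```

Alternatively, the frustum-corner rays can be supplied in world space (relative to the camera position), which reconstructs the world-space position directly and skips the per-pixel matrix multiply.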
