Deferred Rendering - handling spot lights


Hi, I posted a message a few days ago on the subject of deferred rendering and the lighting stage. Thinking a little more about my specific problem, it comes down to this: I am seeking clarification / example code on implementing the lighting stage of a deferred renderer for spot lights.

Some background: my simple deferred renderer uses a fat G-buffer built from FBOs with F16 RGBA targets. I store the eye-space vertex positions (x, y, z) in the G-buffer for later use in the lighting stage, but note that I do not store any depth information. (Should I be?)

The chosen spot-light implementation: the lighting stage renders a full-screen quad under a 2D orthographic projection. (This could be optimized to a sub-region of the screen based on a projected bounding volume of the spot-light area, but for now, in debug mode, I'm using the full screen.) In preparation for the lighting stage, the engine establishes the render targets and the spot-light materials and shader; this includes setting up a vertexToLightSpaceTextureProjection matrix and an assortment of other shader parameters. I then draw the full-screen quad, so each covered pixel runs my spot-light vertex and pixel shaders. The vertex shader is a pass-through; all the real work is done in the pixel shader.

To correctly light the geometry (whose vertices are stored in eye space in the G-buffer), I first need to perform a lookup. Is this correct? Since the data entering the pixel shader is the interpolated 2D quad data, you cannot directly use the quad's current vertex value or any texture values. You must first (somehow?) obtain a 3D coordinate, use it as the basis for the transform by the vertexToLightSpaceTextureProjection matrix, and the resulting texture coordinates can then be used to look up the eye-space vertex from the G-buffer. So can anyone help out with an example of how they did this mapping?
Can we simply send, as 3-valued texture coordinates, a gluUnProject of the current screen (x, y) and a nominal z?

Suppose we have the final 3D vertex value from the G-buffer. How do we know whether a pixel that might be influenced by the spot light is actually blocked by, say, a wall or something? I.e., how do we compare its depth value to the spot light's? Currently my spot lights (before I broke my own code) were seemingly influencing the front pixels of a pillar even when the spot light was attached to a wall behind the back of the pillar. This can't be right, can it? God, I hope all that made some sense!
