Please check my approach to a deferred shading algorithm (DX11) and see if it can be optimized.
- A. Generate the G-Buffer, using a separate depth-stencil buffer. Since this buffer is needed again during the lighting pass and can't be bound as a depth-stencil target and a shader resource at the same time, store view-space depth as a 16-bit float in one of the render targets (see the first two sketches after this list).
- B. Switch to a single render target, still using the same depth-stencil buffer that was used for scene rendering.
- C. Draw a screen-aligned quad and output 0.25 * albedo to simulate scene ambient lighting, where albedo is the diffuse color from the G-Buffer.
- D. For each light in the list (assuming the list is already optimized), draw the light volume mesh in two passes to mark, via the depth-stencil buffer, the pixels that lie inside the light volume.
- E. Draw a screen-aligned quad again, but this time run a lighting algorithm appropriate for the current light type (using the G-Buffer to retrieve the view-space depth). The stencil test restricts the calculations to the pixels where they should be applied, but this step requires switching between meshes, shaders, and render states for every light (light volume <=> quad, plus the shaders associated with each). See the state-setup sketch after this list.
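
For reference, here is a minimal sketch of how the G-Buffer in step A could be laid out, assuming two render targets (albedo in RT0, view-space normal plus view-space depth in RT1) and a D24S8 depth-stencil buffer. All names and the exact formats are illustrative, not a prescription:

```cpp
// Hypothetical G-Buffer setup for step A. RT0 = albedo, RT1 = view-space
// normal in .xyz and view-space depth as a 16-bit float in .w.
#include <d3d11.h>

struct GBuffer
{
    ID3D11Texture2D*          albedoTex       = nullptr;
    ID3D11Texture2D*          normalDepthTex  = nullptr;
    ID3D11Texture2D*          depthStencilTex = nullptr;
    ID3D11RenderTargetView*   rtv[2]          = {};
    ID3D11ShaderResourceView* srv[2]          = {};
    ID3D11DepthStencilView*   dsv             = nullptr;
};

HRESULT CreateGBuffer(ID3D11Device* dev, UINT w, UINT h, GBuffer& gb)
{
    D3D11_TEXTURE2D_DESC td = {};
    td.Width = w;  td.Height = h;
    td.MipLevels = 1;  td.ArraySize = 1;
    td.SampleDesc.Count = 1;
    td.Usage = D3D11_USAGE_DEFAULT;
    td.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

    // RT0: diffuse albedo.
    td.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    HRESULT hr = dev->CreateTexture2D(&td, nullptr, &gb.albedoTex);
    if (FAILED(hr)) return hr;
    dev->CreateRenderTargetView(gb.albedoTex, nullptr, &gb.rtv[0]);
    dev->CreateShaderResourceView(gb.albedoTex, nullptr, &gb.srv[0]);

    // RT1: normal.xyz + view-space depth in .w (16-bit float per channel).
    td.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;
    hr = dev->CreateTexture2D(&td, nullptr, &gb.normalDepthTex);
    if (FAILED(hr)) return hr;
    dev->CreateRenderTargetView(gb.normalDepthTex, nullptr, &gb.rtv[1]);
    dev->CreateShaderResourceView(gb.normalDepthTex, nullptr, &gb.srv[1]);

    // Depth-stencil buffer; reused later for the stencil-marking light pass.
    td.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
    td.BindFlags = D3D11_BIND_DEPTH_STENCIL;
    hr = dev->CreateTexture2D(&td, nullptr, &gb.depthStencilTex);
    if (FAILED(hr)) return hr;
    return dev->CreateDepthStencilView(gb.depthStencilTex, nullptr, &gb.dsv);
}
```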
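
The only G-Buffer detail step A really depends on is writing the view-space depth. A minimal sketch of the pixel shader, with the HLSL embedded as a string for D3DCompile; the input signature and register assignments are assumptions:

```cpp
// Hypothetical G-Buffer pixel shader for step A, embedded as HLSL source.
// Writes albedo to RT0 and view-space normal + view-space depth to RT1.
static const char kGBufferPS[] = R"hlsl(
Texture2D    gDiffuse : register(t0);
SamplerState gSamp    : register(s0);

struct PSIn
{
    float4 posH : SV_Position;
    float3 posV : TEXCOORD0;   // view-space position from the vertex shader
    float3 nrmV : TEXCOORD1;   // view-space normal
    float2 uv   : TEXCOORD2;
};

struct PSOut
{
    float4 albedo   : SV_Target0;   // RT0: diffuse color
    float4 nrmDepth : SV_Target1;   // RT1: normal.xyz, view-space depth in .w
};

PSOut main(PSIn i)
{
    PSOut o;
    o.albedo   = gDiffuse.Sample(gSamp, i.uv);
    o.nrmDepth = float4(normalize(i.nrmV), i.posV.z);  // depth into the .w slot
    return o;
}
)hlsl";
// Compile with D3DCompile(kGBufferPS, sizeof(kGBufferPS) - 1, ..., "main", "ps_5_0", ...).
```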
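
And a sketch of the state setup for steps B and E: bind the accumulation target together with the depth-stencil buffer from the G-Buffer pass, and build the state that lets the fullscreen quad run only where the marking pass touched the stencil. Again, the names are hypothetical:

```cpp
// Hypothetical setup for steps B and E.
#include <d3d11.h>

void BeginLightingPass(ID3D11DeviceContext* ctx,
                       ID3D11RenderTargetView* litRTV,
                       ID3D11DepthStencilView* gbufferDSV)
{
    // Step B: single render target, same depth-stencil buffer as the scene.
    ctx->OMSetRenderTargets(1, &litRTV, gbufferDSV);
}

HRESULT CreateQuadStencilTestState(ID3D11Device* dev,
                                   ID3D11DepthStencilState** outState)
{
    // Step E: depth test off for the fullscreen quad; the stencil test passes
    // only where the marking pass left a non-zero value.
    D3D11_DEPTH_STENCIL_DESC d = {};
    d.DepthEnable = FALSE;
    d.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
    d.StencilEnable = TRUE;
    d.StencilReadMask = 0xFF;
    d.StencilWriteMask = 0;                       // stencil is read-only here

    d.FrontFace.StencilFunc = D3D11_COMPARISON_NOT_EQUAL;  // ref = 0
    d.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
    d.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
    d.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
    d.BackFace = d.FrontFace;

    return dev->CreateDepthStencilState(&d, outState);
}
// Usage: ctx->OMSetDepthStencilState(quadTestState, 0 /* stencil ref */);
```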
I also have a problem with step D: if the camera is inside the light volume, the front faces of the mesh are clipped by the near plane and discarded, and this ruins everything.
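
For what it's worth, a widely used variant of the stencil marking (the depth-fail approach, using two-sided stencil in a single draw with culling disabled) does not depend on front faces at all, so it keeps working when the camera is inside the volume, and it also folds the two marking passes into one. A sketch, assuming the stencil is cleared to 0 before each light and the quad then tests for stencil != 0:

```cpp
// Hypothetical depth-fail stencil marking for step D, done in one draw call:
// cull nothing, write no color or depth, and count depth-test failures with
// wrapping increments/decrements. Pixels inside the volume end up non-zero.
#include <d3d11.h>

HRESULT CreateVolumeMarkState(ID3D11Device* dev,
                              ID3D11DepthStencilState** outState)
{
    D3D11_DEPTH_STENCIL_DESC d = {};
    d.DepthEnable = TRUE;
    d.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;  // depth is read-only
    d.DepthFunc = D3D11_COMPARISON_LESS;
    d.StencilEnable = TRUE;
    d.StencilReadMask = 0xFF;
    d.StencilWriteMask = 0xFF;

    // Back faces: scene geometry in front of the back face -> increment.
    d.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
    d.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_INCR;  // wraps, not saturating
    d.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
    d.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;

    // Front faces: scene geometry in front of the front face -> decrement.
    // If the camera is inside the volume these faces get near-plane clipped,
    // which is fine: the back-face increments alone give the right answer.
    d.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
    d.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_DECR;
    d.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
    d.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;

    return dev->CreateDepthStencilState(&d, outState);
}
// Draw the light volume with a D3D11_CULL_NONE rasterizer state and color
// writes disabled, then draw the quad with the stencil-test state from above.
```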
Thank you.