Thanks Hodgman, that sounds like a pretty nifty method. However, I have some queries about the two methods you just described.
For the first method, where I would draw a smaller screen-space quad, is it right to say I have to do some calculations to get the tex coords of the smaller quad, since I don't think setting the texcoords from (0,0) to (1,1) is going to work in all cases?
For the second method, how does it work, and what exactly do you mean by drawing it in 3D? Do you mean I draw the fullscreen quad as normal, then for each spotlight I draw a 3D cone?
Lighting in Deferred Shading
As for rendering light volumes, you simply create a mesh representing the shape of the given light type. For a point light, that would be a sphere, not a cone.
In the lighting pass, just render the light volume. A 3D volume: [1] automatically limits the processed pixels to only those which lie in the light's range; [2] automatically rejects those pixels in the light's range which won't be visible to the observer because they are hidden behind geometry between the light and the observer (they are simply culled based on the z-buffer contents).
In my deferred renderer I store depth in camera space and I perform all lighting calculations in camera space. Computing a pixel's position in camera space takes only one MAD in the pixel shader...
There are a few papers covering deferred shading and 3D light volumes + the stencil technique used in that case; however, I can always present you a simple sketch of all the necessary steps, if you want.
Sure, thanks, I would be glad if you could outline those steps. I am still pondering how drawing the cone/sphere allows rejection of the unwanted pixels, since my z-buffer contains only the z-values of the screen quad, which are all at 0.0f.
Sure, I can outline it (however - you've read 6800_Leagues and it's all in there ;)).
But first of all - why do you say that your z-buffer contains only 0.0 (from a fullscreen quad)? Normally your z-buffer should contain the scene's depth information after the g-buffer pass, and that's what you use for all the lighting steps... Well, I might not be all that experienced with deferred lighting, since I'm still working on some features (in parallel: a simple app for raw tests plus one complete engine library with all caps and features).
But here's what I do in the g-pass:
g-pass: keep the z-buffer filled with depth values of the whole scene;
- RT#0 = albedo (RGB) + shininess (A)
- RT#1 = normal (RGB) + spare space (A)
- RT#2 = depth in camera space
Use R32F for depth if possible. I chose camera space for all lighting calculations (it seemed to be the simplest choice).
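For example, the g-pass pixel shader can look more or less like this (just a sketch; the input layout, the [0,1] normal packing and the division of depth by the far plane are one possible choice, not the only one):

struct PSIn
{
    float3 NormalCS : TEXCOORD0; // camera-space normal from the vertex shader
    float  DepthCS  : TEXCOORD1; // camera-space Z of the fragment
    float2 UV       : TEXCOORD2; // material texture coordinates
};

struct PSOut
{
    float4 RT0 : COLOR0; // albedo (RGB) + shininess (A)
    float4 RT1 : COLOR1; // normal (RGB) + spare space (A)
    float4 RT2 : COLOR2; // depth in camera space (R32F)
};

sampler g_DiffuseMap;
float   g_Shininess;
float   g_FarZ; // far clipping plane Z in camera space

PSOut ps_gpass(PSIn In)
{
    PSOut Out;
    Out.RT0 = float4(tex2D(g_DiffuseMap, In.UV).rgb, g_Shininess);
    // Pack the normal into [0,1]; with a float RT you could store it unpacked.
    Out.RT1 = float4(normalize(In.NormalCS) * 0.5f + 0.5f, 0.0f);
    // Depth normalized by the far plane, so position reconstruction in the
    // lighting pass is a single MAD (see below).
    Out.RT2 = float4(In.DepthCS / g_FarZ, 0.0f, 0.0f, 0.0f);
    return Out;
}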
Now - I have a unit sphere for point lights. In the lighting pass I just scale it by the light's range, transform it to the correct place in the scene, and render...
l-pass, for each light:
- if the camera lies outside the light volume: render the front faces of the light volume and mask them out in the stencil buffer; then render the back faces of the light volume, comparing against the stencil
- if the camera lies inside the light volume: render the back faces of the light volume (ignore stenciling)
Option #1 (camera outside the light volume): the z-buffer culls all those pixels which lie behind the geometry and should not be visible - those pixels will not take part in further lighting. The pixels that pass you mask in the stencil buffer (set the stencil reference value to Light.ID - you get 255 lights before a stencil clear is required :)). Next, you render the back faces. The z-buffer culls all those pixels which do not "touch" any geometry (they float in space), and then the stencil comparison culls all those pixels which are hidden behind visible geometry (after the first step).
Option #2 (camera inside the light volume): you cannot render the front faces, so you don't bother with stenciling. Instead, you render the back faces and cull all those pixels which float in space and don't touch any geometry.
Hmmm... I think I'll prepare some screenshots and come back a bit later...
OK. First of all, the results of the g-pass:
1) diffuse only (RT#0)
2) perturbed normal (RT#1)
3) depth in camera space (RT#2)
4) final product for 1 point light (no shadowing though)
5) final product for 4 point lights (no shadowing though)
And now, as for the lighting pass... The most important is Option #1 (camera outside the light volume).
- render front faces
- disable color writes
- disable z-buffer writes
- z-function = LESS
- stencil enabled
- stencilfunc = ALWAYS
- stencilops = all KEEP but STENCILPASS = REPLACE
- stencil reference = Light.ID (<> 0)
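Expressed as a D3D9 effect pass, that would be something like this sketch (shader bindings omitted; the stencil reference is set per light from the application):

technique LightVolume_MarkStencil
{
    pass P0
    {
        CullMode         = CCW;     // D3D9 default: back faces culled, front faces drawn
        ColorWriteEnable = 0;       // no color writes
        ZEnable          = true;
        ZWriteEnable     = false;
        ZFunc            = Less;
        StencilEnable    = true;
        StencilFunc      = Always;
        StencilFail      = Keep;
        StencilZFail     = Keep;
        StencilPass      = Replace; // mark surviving front-face pixels with Light.ID
        // Stencil reference: SetRenderState(D3DRS_STENCILREF, Light.ID) from the app.
    }
}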
This pass culls all those pixels which lie behind occluding geometry. Only the visible part of the light volume will be marked in the stencil buffer.
- render back faces
- color writes enabled
- z-buffer writes disabled
- z-function = GREATEREQUAL
- stencilfunc = EQUAL
- stencilops = all KEEP
- stencil reference = Light.ID
- alpha blending enabled for lighting accumulation
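And the matching sketch for this second pass (the One/One additive blend is one way to accumulate the lighting):

technique LightVolume_Shade
{
    pass P0
    {
        CullMode         = CW;           // front faces culled, back faces drawn
        ColorWriteEnable = 0x0F;         // RGBA writes enabled
        ZEnable          = true;
        ZWriteEnable     = false;
        ZFunc            = GreaterEqual;
        StencilEnable    = true;
        StencilFunc      = Equal;        // pass only where pass 1 wrote Light.ID
        StencilFail      = Keep;
        StencilZFail     = Keep;
        StencilPass      = Keep;
        AlphaBlendEnable = true;         // accumulate light additively
        SrcBlend         = One;
        DestBlend        = One;
    }
}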
This is the real lighting... We've masked out all the pixels of the light volume occluded by geometry. Now we eliminate those which are not affected by the light volume (those lying inside the area of the projected light volume, but "behind" /deeper/ than the range of the light).
Here you go:
It's the result of the culling (one light, diffuse factor /perturbed normal dot light vector/, no attenuation). Note how the middle column in the room occludes the light...
As for Option #2 - since the camera is inside the light volume, you should render back faces only (no stenciling, just z-func = GREATEREQUAL)...
Camera space seems to be very accommodating for all lighting calculations... Perturbed normals are stored in camera space (a simple transformation). Depth is stored in camera space... For each vertex of the light volume you can calculate an eye vector (the light volume's vertex position in camera space is the eye vector). Normalize the eye vector to the maximum depth (the far clipping plane's Z in camera space) and pass it as a texcoord to the pixel shader... Now you only have to do vEye * pixDepth and you have the position in camera space... Add one more operation (light.pos - (vEye * pixDepth)) and you have the pixel->light vector for diffuse lighting... That is a single MAD instruction:
MAD vLit, -pixDepth, vEye, light.pos

which should be:

MAD r1.xyz, -r0.r, t1, c0
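In HLSL the whole thing might look like this sketch (assuming, as above, that depth is stored normalized by the far plane and the eye vector is rescaled per vertex so its Z equals the far plane Z; all names are placeholders):

float4x4 g_WorldView;     // light volume -> camera space
float4x4 g_WorldViewProj; // light volume -> clip space
float    g_FarZ;          // far clipping plane Z in camera space
float3   g_LightPosCS;    // light position in camera space

void vs_light_volume(float4 Pos : POSITION0,
                     out float4 oPos : POSITION0,
                     out float3 oEye : TEXCOORD0)
{
    oPos = mul(Pos, g_WorldViewProj);
    // Eye vector = vertex position in camera space, rescaled so oEye.z == g_FarZ.
    float3 posCS = mul(Pos, g_WorldView).xyz;
    oEye = posCS * (g_FarZ / posCS.z);
}

// In the pixel shader, with pixDepth sampled from RT#2:
//   float3 posCS = vEye * pixDepth;                // pixel position in camera space
//   float3 vLit  = g_LightPosCS - vEye * pixDepth; // the single MAD above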
And your other question (reconstruction of pixel position) has been answered...
And all in PS2.0... No higher version required... (at least at this stage of complexity)...
As for the framerates - the screenshots were taken at 800x600, but usually I work at 1280x1024 (native for my LCD) and I get ~170 FPS for 4 point lights (good old GF 7900GS)... And these lights are huge... Since I added linear attenuation, I had to increase the range of each light... Even with geometry culling, these lights cover almost the whole screen (the last, sixth screenshot was taken with the light's range decreased by half, in order to show the culling better).
The images are blocky due to poor compression quality. The real output is smooth and not pixelated... :)
Wow, I will have to slowly digest all this information.
A question about a light volume outside the camera: why can't we just not draw that light, since it is not in the camera volume anyway?
What I mean is that I have a spatial tree that contains all the light volumes; before doing any light pass, I cull this spatial tree against my view frustum, and those lights that get drawn are automatically within the camera frustum.
Will that work?
Well... It's not a question of the light being outside the camera volume (frustum) - such lights should be rejected as soon as possible, of course. It's a question of the camera lying inside the light volume. Think about the position of the camera, not the whole frustum. In other words: the camera position lies within the light's range (the light's sphere, in the case of point lights). We should be clear now :)
If the camera (observer) is within the light's volume, we don't draw the front faces (since they would be partly or totally clipped by the near plane and thus not visible, and we would have some pixels not lit at all, though they should be).
As a matter of fact, you don't have to differentiate these states. You can skip stenciling and render back faces in both situations (whether you are inside or outside the light's volume). But usually the differentiation is a win whenever there is occluding geometry between the camera and the light (stenciling works here like a depth-only pass before drawing to the backbuffer).
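The test itself is trivial - something like this hypothetical sketch (app-side logic, written here in HLSL-style syntax; padding the range by the near-plane distance is one way to guarantee the front faces can never cross the near plane):

bool CameraInsideLight(float3 cameraPos, float3 lightPos, float lightRange, float nearPlane)
{
    // Compare squared distances to avoid a square root.
    float3 d = cameraPos - lightPos;
    float  r = lightRange + nearPlane; // pad by the near-plane distance
    return dot(d, d) < r * r;
}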
Ok, let me see if I got the whole process right.
Generate the 4 g-buffers (depth or position, diffuse, normal, others).
For each directional light:
- draw a fullscreen quad, with lighting and stuff.
For each point or spot light:
- draw the light volume, back faces only; if a back face's z-value is less than the current g-buffer z-value, update the stencil value, else ignore;
- draw the fullscreen quad; this time we set the comparison against the stencil value to EQUAL, and only those pixels that match the stencil value get to the shader; the result is additively blended into the buffer.
Is it something along this line? Or am I missing something?
A side-track question: if I want to do HDR, is it correct that instead of drawing the result to the default backbuffer, I create another floating-point render target to store all the final lighting results, and then proceed to do HDR with this RT?
I think you're still missing a part of the idea :)
As for directional lights - yup, you still need a fullscreen quad. That's correct.
As for point and spot lights - you partly got it, but don't draw a fullscreen quad in the second step!
Look at the sketches:
1) You have a light volume and three pieces of geometry. One lies behind the light, beyond its range. One lies inside the light volume, and one (on the right side) occludes the light. Obviously, you want to shade and light only those pixels whose geometry lies within the light's range, inside the light's volume. The blue line at the top represents the backbuffer, and the region which should be processed is marked in red:
2) You draw the light volume's front faces with stenciling and the depth function set to LESS (color writes disabled this time, so you don't affect the backbuffer contents). All the pixels occluded by the geometry on the right side (LESS = false) are left unaffected. All front faces on the left are marked positive in the stencil buffer, because LESS = true.
But that's still too much. A significant number of pixels would be lit for nothing (some of the geometry lies behind the front faces of the light's volume, but not within the range of the light!). The truth is, you could leave it at that, as long as you compute attenuation (the attenuation factor will be < 0, so those pixels won't be lit anyway), but they would still be processed! And that's what this is all about - don't process things you don't need to process...
3) This time you draw the back faces of the light's volume (depth function = GREATEREQUAL) and filter by the stencil buffer contents. You need the stenciling because of the occluding geometry (the right part of the picture would be lit if not stenciled, because for that geometry the back faces of the light's volume pass the GREATER test):
And just remember - you have the light's volume, so you don't want to draw a fullscreen quad in the second step!
As for HDR rendering - yes. If you want to do HDR, you don't draw to the backbuffer directly; you render to an additional (accumulation) render target with, possibly, higher precision (16+ bits per component).
Good luck! :)
OK, but I am kind of stuck when you say we draw the light volume in the second step and not the fullscreen quad. If I draw the light volume, how do I get the texture coordinates to sample the g-buffers for the position/normal/diffuse information, etc.?
float4x4 g_LightVolumeWorldViewProj;
sampler  g_DiffuseBuffer; // g-buffer with the diffuse/albedo data
float3   Color;           // light color

void vs_draw_lvolume(float4 Position : POSITION0,
                     out float4 oPosition : POSITION0)
{
    oPosition = mul(Position, g_LightVolumeWorldViewProj);
}

void ps_draw_lvolume(out float4 oColor0 : COLOR0)
{
    float2 GBufferTexCoord; // How do I get these coordinates?
    float4 Diffuse = tex2D(g_DiffuseBuffer, GBufferTexCoord);
    // ... other lighting calculations, e.g. Phong shading etc.
    oColor0 = float4(Diffuse.rgb * Color, 1.0f);
}
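For reference, the usual D3D9-era answer is to derive the g-buffer texcoord from the interpolated clip-space position; a hypothetical sketch:

// Pass the clip-space position to the pixel shader as an extra TEXCOORD
// interpolator (a copy of oPosition), then:
float2 GetGBufferTexCoord(float4 posProj, float2 halfPixel)
{
    // Perspective divide -> NDC in [-1,1], then map to [0,1] texture space.
    // Y is flipped, and D3D9 needs a half-texel offset (halfPixel = 0.5 / RT size).
    float2 ndc = posProj.xy / posProj.w;
    return float2(0.5f, -0.5f) * ndc + 0.5f + halfPixel;
}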