Yours3!f

deferred rendering: lighting stage



Hi,

I want to do lighting in a deferred pipeline. I have all the data reconstructed in the lighting stage (colors, normals, position).
For the normals I use the store-x-and-y, reconstruct-z method; for the position I store linearized depth and reconstruct the view-space position from it. For the colors I just store the RGB values.
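
For reference, a minimal GLSL sketch of that reconstruction, assuming a full-screen quad that carries the far-plane frustum corners as a "view ray" attribute; the sampler and uniform names, and the depth being stored as -view_z / far, are assumptions, not taken from the post:

[code]
// Fragment-shader sketch of the reconstruction (GLSL 3.30). Assumed, not from
// the post: the sampler names, and depth stored as -view_z / far in [0, 1].
in vec2 v_texcoord;
in vec3 v_view_ray;                  // far-plane frustum corner, interpolated
                                     // across the full-screen quad

uniform sampler2D normal_tex;        // RG: view-space normal x and y
uniform sampler2D depth_tex;         // R : linearized depth in [0, 1]

vec3 reconstruct_normal(vec2 uv)
{
    vec2 nxy = texture(normal_tex, uv).xy;
    // naive reconstruction: assumes the normal points towards the camera,
    // i.e. its view-space z is non-negative
    float nz = sqrt(max(0.0, 1.0 - dot(nxy, nxy)));
    return vec3(nxy, nz);
}

vec3 reconstruct_position(vec2 uv)
{
    // the stored depth is the fraction of the way along the view ray to the
    // far plane, so scaling the interpolated far-plane ray by it gives the
    // view-space position
    float d = texture(depth_tex, uv).x;
    return v_view_ray * d;
}
[/code]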

I looked at lighting methods and chose the Blinn-Phong model. I read the Wikipedia page about it, but I don't know how to implement it in view space. On top of that, I'm not sure whether the normals actually are in view space; please take a look at the screenshot.
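
Blinn-Phong works in any space as long as the position, the normal and the light position are all expressed in the same one. In view space the camera sits at the origin, so the view vector is simply the direction from the shaded point back to the origin, and the light positions have to be multiplied by the view matrix on the CPU first. A minimal GLSL sketch; the uniform names and the attenuation term are assumptions:

[code]
// Blinn-Phong for a single point light, everything in view space.
// In view space the camera is at the origin, so the view vector is just the
// normalized direction from the shaded point back to the origin.
uniform vec3  light_pos_vs;      // light position pre-multiplied by the view matrix
uniform vec3  light_color;
uniform float light_radius;
uniform float shininess;

vec3 blinn_phong(vec3 position_vs, vec3 normal_vs, vec3 albedo)
{
    vec3  to_light  = light_pos_vs - position_vs;
    float dist      = length(to_light);
    vec3  light_dir = to_light / dist;
    vec3  view_dir  = normalize(-position_vs);
    vec3  half_dir  = normalize(light_dir + view_dir);

    float ndotl = max(dot(normal_vs, light_dir), 0.0);
    float ndoth = max(dot(normal_vs, half_dir), 0.0);
    float spec  = (ndotl > 0.0) ? pow(ndoth, shininess) : 0.0;

    // simple linear falloff towards light_radius (an assumption, pick your own)
    float atten = clamp(1.0 - dist / light_radius, 0.0, 1.0);

    return (albedo * ndotl + vec3(spec)) * light_color * atten;
}
[/code]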

Another thing: how do I make sure that I can draw an unlimited number of lights? I can only upload fixed-size arrays of values to the shaders.

Best regards,
Yours3!f


[quote]
I looked at lighting methods and chose the Blinn-Phong model. I read the Wikipedia page about it, but I don't know how to implement it in view space. On top of that, I'm not sure whether the normals actually are in view space; please take a look at the screenshot.
[/quote]

The normals look OK, but beware that the z value can be negative; a simple z = sqrt(1 - x*x - y*y) will eventually lead to artifacts. Take a look at Aras' page about other normal compression algorithms.

[quote]
Another thing: how do I make sure that I can draw an unlimited number of lights? I can only upload fixed-size arrays of values to the shaders.
[/quote]

For an unlimited number of lights you will need multiple passes. Each pass should render up to X lights, and exactly one pass should handle the general lighting terms such as self-illuminating materials or ambient light.
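
To make that concrete, a sketch of what a single lighting pass could look like, assuming the application draws the full-screen quad once per batch with additive blending (glBlendFunc(GL_ONE, GL_ONE)) and refills the light uniforms between batches; the names and the batch size are illustrative:

[code]
#version 330
// One lighting pass: shades up to MAX_LIGHTS_PER_PASS lights. The application
// draws the full-screen quad once per batch with additive blending and
// refills the light uniforms in between, so the batches sum up.
const int MAX_LIGHTS_PER_PASS = 8;

in vec2 v_texcoord;
out vec4 frag_color;

uniform sampler2D color_tex;
uniform int   light_count;                          // lights used in this pass
uniform vec3  light_pos_vs[MAX_LIGHTS_PER_PASS];    // view-space positions
uniform vec3  light_color[MAX_LIGHTS_PER_PASS];
uniform float light_radius[MAX_LIGHTS_PER_PASS];

// implemented as in the reconstruction sketch above (same shader, or a second
// shader object linked into the program)
vec3 reconstruct_position(vec2 uv);
vec3 reconstruct_normal(vec2 uv);

void main()
{
    vec3 p      = reconstruct_position(v_texcoord);
    vec3 n      = reconstruct_normal(v_texcoord);
    vec3 albedo = texture(color_tex, v_texcoord).rgb;

    vec3 result = vec3(0.0);
    for (int i = 0; i < light_count; ++i)
    {
        vec3  to_light = light_pos_vs[i] - p;
        float dist     = length(to_light);
        float ndotl    = max(dot(n, to_light / dist), 0.0);
        float atten    = clamp(1.0 - dist / light_radius[i], 0.0, 1.0);
        result += albedo * light_color[i] * ndotl * atten;  // + specular as above
    }

    // ambient / emissive terms belong in exactly one of the passes
    frag_color = vec4(result, 1.0);
}
[/code]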


[quote name='Yours3!f' timestamp='1317207588' post='4866744']
I looked at lighting methods and chose the Blinn-Phong model. I read the Wikipedia page about it, but I don't know how to implement it in view space. On top of that, I'm not sure whether the normals actually are in view space; please take a look at the screenshot.

The normals look OK, but beware that the z value can be negative; a simple z = sqrt(1 - x*x - y*y) will eventually lead to artifacts. Take a look at Aras' page about other normal compression algorithms.

Another thing: how do I make sure that I can draw an unlimited number of lights? I can only upload fixed-size arrays of values to the shaders.

For an unlimited number of lights you will need multiple passes. Each pass should render up to X lights, and exactly one pass should handle the general lighting terms such as self-illuminating materials or ambient light.
[/quote]

Yeah, I know. I actually took the algorithm from Aras' page; I just wanted to keep it simple. I plan to implement the spheremap one later.
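
For reference, one common spheremap-transform variant (as discussed on Aras' page) covers the whole sphere except n = (0, 0, -1), so negative view-space z is handled; a straightforward GLSL sketch of it:

[code]
// Spheremap transform sketch. Covers the whole sphere except n = (0, 0, -1),
// so view-space normals with negative z encode and decode correctly.
vec2 encode_normal(vec3 n)          // n must be normalized
{
    float f = sqrt(8.0 * n.z + 8.0);
    return n.xy / f + 0.5;          // in [0, 1]; fits an RG16 / RG16F target
}

vec3 decode_normal(vec2 enc)
{
    vec2  fenc = enc * 4.0 - 2.0;
    float f    = dot(fenc, fenc);
    float g    = sqrt(1.0 - f / 4.0);
    return vec3(fenc * g, 1.0 - f / 2.0);
}
[/code]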

But then for each light I would have to draw a full-screen quad? That must be really expensive...

There are two methods I know of at the moment:

  1. Use light volumes (sphere, box, whatever) and the stencil buffer to mask only the pixels that are affected by the light (just search for stencil + deferred lighting; it's the same technique as in stencil shadows, I think).
  2. Use a grid over the screen and merge the lights into those screen cells, so that one cell might render, say, 18 lights while another renders only 2 (there are some presentations from DICE; they used this in Frostbite 2).


[quote]
There are two methods I know of at the moment:

  1. Use light volumes (sphere, box, whatever) and the stencil buffer to mask only the pixels that are affected by the light (just search for stencil + deferred lighting; it's the same technique as in stencil shadows, I think).
  2. Use a grid over the screen and merge the lights into those screen cells, so that one cell might render, say, 18 lights while another renders only 2 (there are some presentations from DICE; they used this in Frostbite 2).
[/quote]



There is a third method, very similar to the stencil one mentioned above, which is to simply render the bounding shape of the light with the depth test set to GEQUAL. The fragment shader for the bounding shape will then run for all fragments that might be affected by the light; you just need to work out which screen fragment you're rendering.

There will be some wasted pixel computations (e.g. you look directly at a wall, and 10m behind the wall there is a light with only a 5m radius: the stencil test would basically reject this light, but the depth test will not), but it only requires a single pass, whereas the stencil test requires either 2 or 3 passes (I haven't done it myself, so I can't be certain; I do know you can remove a pass by doing the two parts of the stencil test in one go), so it might balance out. Obviously, one light per pass should really only be done for shadow-casting lights in a final version; non-shadow-casting lights should probably be batched together into a single pass (as in option 2 mentioned by Danny above).
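
A sketch of what the fragment shader for such a bounding shape could look like: the screen UV comes from gl_FragCoord instead of a full-screen quad's texture coordinate, and the view ray is rebuilt from the projection parameters. All names and the depth/projection conventions below are assumptions:

[code]
#version 330
// Fragment shader for a light bounding volume (sphere/box mesh), drawn with
// depth test GEQUAL, depth writes off and additive blending.
uniform sampler2D depth_tex;      // linearized depth, -view_z / far, in [0, 1]
uniform vec2  screen_size;
uniform float tan_half_fov;       // tan(vertical field of view / 2)
uniform float aspect;             // width / height
uniform float far_plane;

out vec4 frag_color;

void main()
{
    // which screen fragment are we shading?
    vec2 uv = gl_FragCoord.xy / screen_size;

    // rebuild the ray through this pixel to the far plane
    // (GL view space, camera looking down -z)
    vec2 ndc = uv * 2.0 - 1.0;
    vec3 ray_far = vec3(ndc.x * aspect * tan_half_fov,
                        ndc.y * tan_half_fov,
                        -1.0) * far_plane;

    // scale the far-plane ray by the stored depth to get the view-space position
    float d = texture(depth_tex, uv).x;
    vec3 position_vs = ray_far * d;

    // ...sample the normal and albedo at `uv` and run the same Blinn-Phong as
    // in the full-screen-quad version...
    frag_color = vec4(0.0);   // replace with the lighting result
}
[/code]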

Thanks for the replies :)

1st method: according to this it could be done, but it requires a lot of checking, plus using stencils isn't really a win...
2nd method: doesn't Frostbite use compute-shader-based rendering? see
3rd method: I see your point :)

One more thing: to do lighting I need the view-space position, right? Right now I reconstruct it using a far-plane-sized rectangle that covers the whole screen. So how do I reconstruct the view-space position when I'm drawing a light mesh?

+ I heard about a technique called indexed deferred lighting
+ a presentation from Intel, which presents a similar technique to the one used in Frostbite

I tried out indexed deferred lighting and Intel's technique too, and Intel's was faster. I don't know about the 1st method's performance, but I don't think it's by accident that Frostbite uses something else...

They did it not only with compute shaders, because it also runs on DX10 hardware.

My first thought would be to upload all the light data in one huge uniform block (or a texture, if you need more than you can fit in uniforms), then render the whole grid at once and use the vertex IDs to index into another uniform array that gives you the offset into the light data array and the number of lights in the cell.
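
A sketch of what the lookup could look like on the shader side. The reply above suggests indexing the per-cell data via the grid's vertex IDs; the sketch below does the equivalent lookup per fragment from gl_FragCoord and pulls the light data from buffer textures, so the layout, the names and the cell size are all assumptions:

[code]
// Per-cell light lists, sketched with buffer textures. The CPU (or an OpenCL
// kernel) builds three buffers:
//   cell_info_tex   : one texel per screen cell, x = offset, y = light count
//   light_index_buf : flattened light indices, grouped per cell
//   light_data_buf  : two texels per light (position + radius, color)
uniform isampler2D     cell_info_tex;
uniform isamplerBuffer light_index_buf;
uniform samplerBuffer  light_data_buf;
uniform ivec2          cell_size;       // e.g. 32x32 pixels per cell

vec3 shade_cell(vec3 position_vs, vec3 normal_vs, vec3 albedo)
{
    ivec2 cell = ivec2(gl_FragCoord.xy) / cell_size;
    ivec2 info = texelFetch(cell_info_tex, cell, 0).xy;

    vec3 result = vec3(0.0);
    for (int i = 0; i < info.y; ++i)
    {
        int  light      = texelFetch(light_index_buf, info.x + i).x;
        vec4 pos_radius = texelFetch(light_data_buf, light * 2 + 0);
        vec4 color      = texelFetch(light_data_buf, light * 2 + 1);

        vec3  to_light = pos_radius.xyz - position_vs;
        float dist     = length(to_light);
        float ndotl    = max(dot(normal_vs, to_light / dist), 0.0);
        float atten    = clamp(1.0 - dist / pos_radius.w, 0.0, 1.0);
        result += albedo * color.rgb * ndotl * atten;   // + specular as needed
    }
    return result;
}
[/code]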


[quote]
They did it not only with compute shaders, because it also runs on DX10 hardware.

My first thought would be to upload all the light data in one huge uniform block (or a texture, if you need more than you can fit in uniforms), then render the whole grid at once and use the vertex IDs to index into another uniform array that gives you the offset into the light data array and the number of lights in the cell.
[/quote]

I don't really care about DX10-class hardware :) DX11 or nothing (especially because by the time I finish my engine there will be DX14 or so...).

I see your point, and it is used here with pixel buffer objects.

I've implemented the spheremap stuff; it was just a matter of copy-paste, though... Is there any specific data format I should store the encoded normals in? (They're currently in RG16F format.)

I started looking at OpenCL so that I can do the whole compute-shader-based stuff on Linux too, and it seems pretty easy to me. It's very similar to OpenGL and GLSL.
