
Light Prepass output color differences


6 replies to this topic

#1 ~0ul   Members   -  Reputation: 630


Posted 17 March 2013 - 10:09 PM

I was considering switching my engine from deferred shading to a light pre-pass (deferred lighting) approach. From my initial reading on deferred lighting, it seems that this method will not generate the same output as deferred shading, since we are not taking the diffuse + specular colors of the materials into account during light-buffer generation. So if an object is affected by multiple lights, it will only apply the surface color to the output once, vs. the deferred shading approach, which multiplies in the surface color for each light (I am talking about the Phong model specifically).

 

I assume that to generate the same output as before, I would have to modify the properties of each light, or modify the deferred shading implementation to only apply the surface color once. Another option is to add surface data to the G-buffer, but that brings us back to deferred shading. In my current implementation I can switch between deferred and forward shading and the output is about the same; however, this will no longer be the case with deferred lighting.

 

Is there something I am missing, or is this indeed the case? How are other engines that have switched to deferred lighting handling this? Are you just ignoring the differences and sticking with one lighting method, or applying some function in code to modify the light properties in a pre-pass renderer? I would assume this transition would be a bigger issue in large projects with many scenes and lights.

 

 




#2 Hodgman   Moderators   -  Reputation: 28615


Posted 17 March 2013 - 10:16 PM

To get the same results with deferred lighting and deferred shading, your deferred-lighting "lighting accumulation buffer" has to accumulate diffuse and specular light separately, so that later you can multiply them with the diffuse surface colour and specular surface colour, respectively. As long as you do that, they'll be the same.

 

The reason that many deferred-lighting systems are different is that the above setup requires 6 channels in the accumulation buffer. As an optimization, you can instead accumulate just 4 channels -- the diffuse light RGB, and the specular light without any colour information. Later on, you can either treat all specular light as monochromatic, or 'guess' its colour by looking at the accumulated diffuse colour.
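A minimal Python sketch of that 4-channel layout, with hypothetical names and plain lists standing in for render-target texels (this is an illustration of the idea, not code from the thread): diffuse light is kept as RGB, specular is collapsed to a single luminance, and the specular colour is "guessed" from the diffuse chroma when materials are applied.

```python
# Hypothetical sketch of the 4-channel light accumulation buffer described
# above: diffuse RGB plus a single specular luminance channel.

REC709_LUMA = (0.2126, 0.7152, 0.0722)  # standard luma weights

def luma(rgb):
    return sum(w * c for w, c in zip(REC709_LUMA, rgb))

def accumulate(lights):
    """lights: list of (diffuse_rgb, specular_rgb) per-light contributions."""
    diffuse = [0.0, 0.0, 0.0]
    spec = 0.0
    for d, s in lights:
        diffuse = [a + b for a, b in zip(diffuse, d)]
        spec += luma(s)          # drop the specular colour, keep intensity
    return diffuse, spec

def resolve(diffuse, spec):
    """Reconstruct a specular RGB by reusing the diffuse chromaticity."""
    l = luma(diffuse)
    if l == 0.0:
        return [spec] * 3        # no diffuse light: fall back to monochrome
    return [spec * (c / l) for c in diffuse]
```

With a single white light both paths agree exactly; with strongly coloured lights the guessed specular tint is only an approximation, which is the trade-off the optimization makes.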

[edit]

So if an object is affected by multiple lights, it will only apply the surface color to the output once vs the deferred shading approach which multiplies in the surface color for each light

Deferred shading does:

light0 * material + light1 * material + light2 * material

Deferred lighting does:

(light0 + light1 + light2) * material

 

Both of these are exactly equivalent.
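The equivalence can also be checked numerically. This toy Python snippet (made-up light and material values) evaluates both orders from the formulas above for a single pixel:

```python
# Toy single-pixel check that the two evaluation orders agree.
# Each light contributes NdotL * lightColour; values are made up.

material = (0.8, 0.5, 0.2)            # diffuse surface colour
lights = [(0.9, (1.0, 0.9, 0.8)),     # (NdotL, light colour) pairs
          (0.4, (0.2, 0.3, 1.0)),
          (0.7, (0.5, 0.5, 0.5))]

# Deferred shading: multiply the material into every light, then sum.
shading = [sum(n * c[i] * material[i] for n, c in lights) for i in range(3)]

# Deferred lighting: sum the lights first, multiply the material once.
accum = [sum(n * c[i] for n, c in lights) for i in range(3)]
lighting = [a * m for a, m in zip(accum, material)]

assert all(abs(a - b) < 1e-12 for a, b in zip(shading, lighting))
```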


Edited by Hodgman, 17 March 2013 - 11:18 PM.


#3 ~0ul   Members   -  Reputation: 630


Posted 17 March 2013 - 11:18 PM

Splitting diffuse + specular seems like a good idea. Wouldn't that still produce a different output than deferred shading, though? For example, let's just take diffuse into account, ignoring specular.

 

my current deferred shading approach (which I believe is how everyone does it?):

 

float4 color = float4(0, 0, 0, 0);
for each light
    color += surfaceColor * (NdotL * lightDiffuseColor);
frameBuffer = color;

 

deferred lighting approach:

 

lightAccumBuffer = float4(0, 0, 0, 0);
for each light
    lightAccumBuffer += (NdotL * lightDiffuseColor);

scene render pass:
    frameBuffer = lightAccumBuffer * surfaceColor;

 

In the examples above, deferred shading multiplies the surfaceColor into the lighting contribution for each light and adds that result to the frame buffer, whereas deferred lighting only multiplies the surface color in once. Wouldn't this produce a different result?



#4 Hodgman   Moderators   -  Reputation: 28615


Posted 17 March 2013 - 11:40 PM

I edited my post above at the same time that you replied:

A*x + B*x + C*x == (A + B + C)*x

 

Both of these expressions are mathematically equal (just ask Wolfram). If you can rearrange the left one into the right one, it may be more efficient to implement because there are fewer operations.



#5 ~0ul   Members   -  Reputation: 630


Posted 18 March 2013 - 12:25 AM

Well, now that you put it like that, I can see that they are in fact identical :) (as long as we keep diffuse and specular separate)

 

I guess I just got tripped up by thinking in terms of lights and passes and missed the simple equation.

 

Thanks!



#6 MJP   Moderators   -  Reputation: 10637


Posted 18 March 2013 - 12:29 AM

Splitting diffuse + specular seems like a good idea. Wouldn't that still produce a different output than deferred shading, though? For example, let's just take diffuse into account, ignoring specular.

 

my current deferred shading approach (which I believe is how everyone does it?):

 

float4 color = float4(0, 0, 0, 0);
for each light
    color += surfaceColor * (NdotL * lightDiffuseColor);
frameBuffer = color;

 

deferred lighting approach:

 

lightAccumBuffer = float4(0, 0, 0, 0);
for each light
    lightAccumBuffer += (NdotL * lightDiffuseColor);

scene render pass:
    frameBuffer = lightAccumBuffer * surfaceColor;

 

In the examples above, deferred shading multiplies the surfaceColor into the lighting contribution for each light and adds that result to the frame buffer, whereas deferred lighting only multiplies the surface color in once. Wouldn't this produce a different result?

 

Mathematically the results are the same due to the distributive property, as Hodgman has already pointed out. In practice there can be differences due to precision and conversion behavior of render target formats. If you're using floating-point formats then it's not likely to be a significant issue.
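That precision caveat can be illustrated with a toy Python model (made-up values, and `q8` is a hypothetical stand-in for storing a value in an 8-bit UNORM render-target channel): quantising after every additive pass loses the small per-light contributions that the multiply-last order preserves.

```python
# Toy model of the precision point: q8 simulates round-tripping a value
# through one channel of an 8-bit UNORM render target.

def q8(x):
    return round(min(max(x, 0.0), 1.0) * 255) / 255

material = 0.1                 # one channel of the diffuse surface colour
lights = [0.05, 0.05, 0.05]    # per-light NdotL * lightColour, one channel

# Deferred shading into an 8-bit target: quantise after each additive pass.
shading = 0.0
for l in lights:
    shading = q8(shading + l * material)

# Deferred lighting: accumulate light in 8 bits, multiply the material once.
accum = 0.0
for l in lights:
    accum = q8(accum + l)
lighting = accum * material

# The exact answer is 0.015; the two quantised paths land on different values.
```

With floating-point accumulation targets the rounding error is far below anything visible, which is why the difference is usually not significant in practice.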

However, I feel I should ask why you're considering a "light pre-pass" approach in the first place. It can be useful if you *really* don't want to use multiple render targets (which can be beneficial on a certain current-gen console), but outside of that it doesn't really have any advantages: it forces you to render your geometry twice (both times with a pixel shader), it's harder than regular deferred shading to handle MSAA (at least if you want to do it correctly), and the second pass doesn't really give you much more material flexibility, since it happens after the BRDF has been applied.


Edited by MJP, 18 March 2013 - 12:30 AM.


#7 ~0ul   Members   -  Reputation: 630


Posted 18 March 2013 - 01:26 AM

The main reason was material flexibility without having to store additional parameters in the G-buffer, but thinking about it more, it doesn't seem that useful since, as you said, the BRDF has already been applied by the time the second pass runs. Basically I was just wondering whether there would be any discrepancies in the output if I switched. I probably won't switch unless I find a good reason.





