
MJP

Posted 18 March 2013 - 12:30 AM

Splitting diffuse + specular seems like a good idea. Wouldn't that still produce a different output than deferred shading, though? For example, let's just take diffuse into account, ignoring specular.

 

my current deferred shading approach (which I believe is how everyone does it?):

 

float4 color = float4(0, 0, 0, 0);
for each light
    color += surfaceColor * (NdotL * lightDiffuseColor);
frameBuffer = color;

 

deferred lighting approach:

 

lightAccumBuffer = float4(0, 0, 0, 0);
for each light
    lightAccumBuffer += (NdotL * lightDiffuseColor);

scene render pass
    frameBuffer = lightAccumBuffer * surfaceColor;

 

In the examples above, the surfaceColor is multiplied into the lighting contribution for each light, and that result is added to the frame buffer, whereas in deferred lighting the surface color is only multiplied in once. Wouldn't this produce a different result?

 

Mathematically the results are the same due to the distributive property, as Hodgman has already pointed out. In practice there can be differences due to precision and conversion behavior of render target formats. If you're using floating-point formats then it's not likely to be a significant issue.
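
To spell that out for the diffuse-only case above, here's a minimal HLSL-style sketch comparing the two accumulation orders. The per-light terms are passed in as arrays and NUM_LIGHTS is an arbitrary count, purely so both loops can sit side by side; this isn't meant as an actual renderer's code.

#define NUM_LIGHTS 4

float3 CompareAccumulationOrders(float3 surfaceColor,
                                 float3 lightDiffuseColor[NUM_LIGHTS],
                                 float  NdotL[NUM_LIGHTS])
{
    // Deferred shading order: albedo multiplied inside the loop, per light.
    float3 shaded = 0.0f;
    for (uint i = 0; i < NUM_LIGHTS; ++i)
        shaded += surfaceColor * (NdotL[i] * lightDiffuseColor[i]);

    // Deferred lighting order: accumulate lighting first, multiply albedo once.
    float3 lightAccum = 0.0f;
    for (uint i = 0; i < NUM_LIGHTS; ++i)
        lightAccum += NdotL[i] * lightDiffuseColor[i];
    float3 lit = lightAccum * surfaceColor;

    // In exact arithmetic, shaded == lit (distributive property). They only
    // diverge if lightAccum gets quantized by a low-precision render target
    // between the lighting pass and the final multiply.
    return lit;
}

Keeping the accumulation buffer in a floating-point format keeps that divergence negligible, which is the precision point above.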

However, I feel I should ask why you're considering using a "light pre-pass" approach in the first place. It can be useful if you *really* don't want to use multiple render targets (which can be beneficial on a certain current-gen console), but outside of that it doesn't really have any advantages. It forces you to render your geometry twice (both times with a pixel shader), it's harder than regular deferred to handle MSAA (at least if you want to do it correctly), and the second pass doesn't really give you much more material flexibility since it happens after applying the BRDF.
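
To make the "render your geometry twice" point concrete, here's a rough diffuse-only sketch of what the second geometry pass ends up doing in a light pre-pass renderer. The texture and variable names are purely illustrative, not any particular engine's, and a real implementation would also reconstruct specular somehow.

Texture2D    lightAccumBuffer;   // output of the lighting pass
Texture2D    albedoMap;          // per-material surface color
SamplerState linearSampler;

float4 SecondGeometryPassPS(float4 pos : SV_Position,
                            float2 uv  : TEXCOORD0) : SV_Target
{
    // The lighting pass has already accumulated its result (BRDF applied),
    // so this pass just fetches it for the current pixel.
    float3 lighting = lightAccumBuffer.Load(int3(pos.xy, 0)).rgb;

    // The remaining per-material freedom is essentially what you multiply
    // (or add) on top of that already-lit value.
    float3 surfaceColor = albedoMap.Sample(linearSampler, uv).rgb;
    return float4(lighting * surfaceColor, 1.0f);
}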

