#ActualBacterius

Posted 10 April 2013 - 01:03 AM

And that dot product lighting in video games bypasses the concept of radiance and defines lights in terms of irradiance

What? No, the solid angle is always taken into account in direct light sampling (which is what games usually do). See that division by the squared distance to the point light? That's the solid-angle part. The cosine term (the dot product) does happen to cancel out of many BRDF's, though. Radiance and irradiance are two completely different concepts: irradiance does not take direction into account; it is simply the radiant flux density (power per unit area) arriving at a surface, integrated over every incoming direction.
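To make the inverse-square part concrete, here's a minimal sketch of where the 1/d^2 comes from, assuming an isotropic point light of a given total power (the function name is mine, just for illustration):

```python
import math

def point_light_irradiance(power_watts, distance, cos_theta):
    """Irradiance (W/m^2) at a surface point due to an isotropic point light.

    The light's total power spreads over a sphere of area 4*pi*d^2
    (the inverse-square law); the cosine term accounts for the surface
    tilting away from the light, clamped so back-facing light is zero.
    """
    return power_watts / (4.0 * math.pi * distance ** 2) * max(cos_theta, 0.0)

# Doubling the distance quarters the irradiance:
e1 = point_light_irradiance(100.0, 1.0, 1.0)
e2 = point_light_irradiance(100.0, 2.0, 1.0)
```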

Your calculation using irradiance is technically correct for diffuse surfaces, but only because such surfaces have constant reflectance: for a diffuse surface the outgoing radiance is the same in every direction and simply proportional to the irradiance. That is what light baking exploits, and it only works for diffuse-only surfaces, or more generally for surfaces whose reflectance happens to be independent of view angle. It does not work for more complex reflectance functions (BRDF's). Here is what simple pixel shaders do:
set the lighting term to zero
calculate the surface normal N

for every point light in the world:
-- calculate the distance D from the light to this pixel
-- calculate the view vector V (camera to pixel) and the light vector L (pixel to light source)
-- evaluate the BRDF for (L, V) as a reflectance ratio R (covering both diffuse and specular)
-- divide by D^2 (inverse-square law) and multiply by dot(N, L) and 1/PI as the BRDF requires
-- add this to your "lighting term"

add ambient and other light-independent contributions
modify your pixel's color depending on the lighting term calculated
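The loop above might look like this in plain Python (a sketch only: the names are mine, and a simple Lambertian BRDF stands in for a full diffuse + specular evaluation, so the camera position goes unused here):

```python
import math

def shade_pixel(pixel_pos, normal, camera_pos, lights, albedo, ambient):
    """Direct lighting for one pixel, following the pseudocode above.

    Each light is a (position, intensity) pair, where 'intensity' is the
    light's radiant intensity (power per steradian), so dividing by the
    squared distance gives the radiance arriving at the pixel.
    camera_pos would feed a specular term; this Lambertian-only sketch
    never needs it.
    """
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def norm(v):
        length = math.sqrt(dot(v, v))
        return tuple(x / length for x in v)

    lighting = ambient                        # light-independent term
    for light_pos, intensity in lights:
        to_light = sub(light_pos, pixel_pos)
        d2 = dot(to_light, to_light)          # D^2, inverse-square law
        L = norm(to_light)                    # pixel -> light
        R = albedo / math.pi                  # Lambertian reflectance ratio
        cos_term = max(dot(normal, L), 0.0)   # dot(N, L), clamped
        lighting += intensity * R * cos_term / d2
    return lighting
```

For a light one unit directly above a point whose normal faces it, this returns intensity * albedo / pi, as expected for a Lambertian surface.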
 
Computer graphics shaders aren't a role model of clarity: generally they are not concerned with making sense, they just need to be fast and correct. Sometimes you'll see things like the inverse-square division missing because it was accounted for elsewhere, or the division by PI baked in outside the shader, and so on. Shaders probably aren't the best way to learn radiometry.

I think it's really the notion of a BRDF that you are missing here. Diffuse surfaces aren't everything, and they simplify away a lot of what matters for every other type of surface. As for the BRDF, the concept is fairly simple, if a bit abstract: for a light vector L and a view vector V, the BRDF returns the fraction of light incident on the surface along L (note: this is radiance) that is reflected along V.
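To make that "fraction of light" definition concrete, here's the simplest possible BRDF, the Lambertian one (names are mine). Numerically integrating brdf * cos(theta) over the outgoing hemisphere recovers the albedo, which is exactly why the 1/pi factor has to be there:

```python
import math

def lambertian_brdf(L, V, albedo):
    """A Lambertian BRDF: the reflected fraction is the same for every
    (L, V) pair. The 1/pi keeps the surface from reflecting more energy
    than it receives."""
    return albedo / math.pi

def hemisphere_reflectance(brdf, albedo, n=400):
    """Integrate brdf * cos(theta) over the outgoing hemisphere
    (midpoint rule over theta). For an energy-conserving BRDF the
    result is at most 1; for a Lambertian BRDF it equals the albedo."""
    step = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * step
        # 2*pi*sin(theta)*step is the solid angle of the ring at theta
        total += brdf(None, None, albedo) * math.cos(theta) \
                 * 2 * math.pi * math.sin(theta) * step
    return total
```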

PS: in the pseudocode above you also need to check whether the point light is occluded by geometry; if it is, the pixel is in shadow and that light contributes nothing. And you will notice this is not physically accurate: it ignores light bouncing around the world and considers only the direct contribution from the point light sources themselves.

Area lights are, in theory, handled like an infinite number of point lights clumped together (though in practice computer graphics uses different methods, obviously).
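That "infinite number of point lights" idea is exactly what naive area-light sampling does: pick N points on the light, treat each as a point light carrying 1/N of the power, and sum their contributions. A hypothetical sketch for a square light (names and the axis-aligned layout are my own assumptions):

```python
import random

def sample_square_light(center, half_size, total_power, n):
    """Approximate a square area light, lying in the z = center[2] plane,
    as n point lights that each carry 1/n of the total power. Feeding
    these to a per-light shading loop converges to the area light's
    contribution as n grows (ignoring per-sample visibility)."""
    points = []
    for _ in range(n):
        x = center[0] + random.uniform(-half_size, half_size)
        y = center[1] + random.uniform(-half_size, half_size)
        points.append(((x, y, center[2]), total_power / n))
    return points
```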
