
Parentheses in HLSL cause light attenuation function to not work correctly?



#1 eowdaoc   Members   -  Reputation: 119


Posted 13 September 2012 - 03:10 PM

I have a diffuse+specular equation in my pixel shader, and it works pretty well except for this one issue:

When I change this:
float attenuation = 1.0f / d*d;
To this:
float attenuation = 1.0f / ( d*d );

My model is no longer lit, and is instead the color of my ambient intensity. I find this extremely strange. The reason I want parentheses is so I can use a different attenuation function such as ( 1 + 0.045*d + 0.0075*d*d ).

It also messes up if I try this:
float denominator = d*d;
float attenuation = 1.0f / denominator;


Here is my entire pixel shader. The reason for all of the "tmp_stuff" variables is to make everything "politically correct" since I have no choice but to input them as float4 due to constant buffer alignment properties. I would rather convert everything to temporary float3 rather than risk the chance of something not working right because of float4. Anyways, it shouldn't have anything to do with my current problem. I've doubled, tripled, and quad-checked my lighting equations with four different books, and I'm just baffled with this problem.

void ps( in v2p input, out float4 final_color : SV_TARGET )
{
	float3 ambient_intensity = float3( 0.1f, 0.1f, 0.1f );
	float3 diffuse_color = float3( 0.8f, 0.8f, 0.8f);
	float3 specular_color = float3( 1.0f, 1.0f , 1.0f );

	float3 tmp_light;
	tmp_light.x = light_vector.x;
	tmp_light.y = light_vector.y;
	tmp_light.z = light_vector.z;

	float3 norm_light = normalize( tmp_light );

	float3 tmp_pos;
	tmp_pos.x = input.pos.x;
	tmp_pos.y =  input.pos.y;
	tmp_pos.z = input.pos.z;

	float3 tmp_norm;
	tmp_norm.x = input.norm.x;
	tmp_norm.y = input.norm.y;
	tmp_norm.z = input.norm.z;

	float3 tmp_cam = float3( 0.0f, 0.0f, -20.0f ); // fixed view camera position

	// light intensity
	float d = distance( tmp_pos, tmp_light );
	float attenuation = 1.0f/d*d; // HERE IS THE PROBLEM AREA

	float3 pointlight = attenuation*float3( light_color.x, light_color.y, light_color.z );

	// diffuse lighting
	float diffuse = max( dot( tmp_norm, norm_light) , 0.0f );
	float3 diffuse_final = diffuse_color*ambient_intensity + diffuse_color*pointlight*diffuse;

	// specular lighting
	float3 reflect_vect = 2*dot( tmp_norm, norm_light )*tmp_norm - norm_light;
	float ref_max = max( dot( reflect_vect, normalize(tmp_cam) ), 0.0f );
	float spec_exponent = pow ( ref_max, 50.0f );
	float3 spec_final;

	if( dot( tmp_norm, norm_light ) <= 0 )
	{
	  spec_final = float3( 0.0f, 0.0f, 0.0f );
	}
	if( dot( tmp_norm, norm_light ) > 0 )
	{
	  spec_final = specular_color*pointlight*spec_exponent;
	}
	final_color = float4(  diffuse_final + spec_final, 1.0f );
}

Without parentheses (model lit as expected):
[screenshot]

With parentheses (model only shows the ambient color):
[screenshot]

Edited by eowdaoc, 13 September 2012 - 03:23 PM.



#2 Seabolt   Members   -  Reputation: 633


Posted 13 September 2012 - 04:28 PM

That's because your first statement doesn't respect the order of operations: `1.0f / d*d` parses as `(1.0f / d) * d`. Any tweaks you made to your attenuation values to look right with the old setup were most likely compensating for that bug.
Perception is when one imagination clashes with another

#3 Tsus   Members   -  Reputation: 1049


Posted 13 September 2012 - 04:43 PM

Hi!

If you write
float attenuation = 1.0f / d*d;
it will evaluate to (1.0f / d) * d = 1.0f, because division and multiplication have the same precedence and associate left to right. So you haven't had any attenuation at all.

If you change it to
float attenuation = 1.0f / (d*d);
you get what you want: attenuation = 1 / d².

By the way, the other attenuation parameters (constant and linear) are just for artistic purposes. Squared attenuation would be most correct, since the irradiance of a point light is:
E = max(0, cosAngle) * vLightIntensity / (squaredDistance);
For a diffuse surface the radiance then becomes:
L = E * vSurfaceColor / Pi;
You got it right, except for the division by Pi. (It is there for the energy conservation.)

Anyway, the reason things are getting dark is that your light source is probably too far away. Consider a distance of 10 units: squared and divided, that leaves you 1/100 of your un-attenuated intensity. I think things should work for you if you make your light brighter, i.e. increase your variable "lightColor". The lightColor actually stores the luminous intensity in candela (cd). A candle has about 1 candela; a 100-watt light bulb has about 130 cd. So I guess you need rather large values. :-)

By the way, your test model is quite beautiful. Is it available online? I'd like to use it in my thesis.

Cheers!

Edited by Tsus, 13 September 2012 - 04:44 PM.


#4 eowdaoc   Members   -  Reputation: 119


Posted 13 September 2012 - 07:29 PM

Anyway, the reason why things are getting dark is, that your light source is probably too far away. Consider a distance of 10 units in space. Squared and divided gives you 1/100 of your un-attenuated intensity. I think, things should work for you, if you make your light brighter, i.e. your variable “lightColor”. The lightColor actually stores the luminous intensity in candela (cd). A candle has about 1 candela. A 100 watt light bulb has about 130 cd. So, I guess you need rather large values. :-)

By the way, your test model is quite beautiful. Is it available online? I'd like to use it in my thesis.

Cheers!


So my light color, which is currently ( 1.0f, 1.0f, 1.0f ), needs to be something like ( 100.0f, 100.0f, 100.0f )? I tried this but it is still the same result. I am apparently missing some of the logic here. I'm not sure what you are talking about dividing by PI. I haven't seen any diffuse/specular calculations that use PI, unless you're talking about the actual physical model, not the real time phong model. I guess I should have specified.

Also my light position is at ( 5, 5, -5 ) so it's not a crazy distance away or anything. I have it coded so that I can move the light around so I should be able to move it close enough.

I got the 3d model a while ago so I have no clue where it is online but I can upload it somewhere for you (no texture, just v/vn obj file). What's a good free upload place?

Update: I can only get it to light the object if I make the equation attenuation = 1.0f / ( d*d*0.00000075 ) or make the light color something ridiculous like ( 999999, 9999999, 99999999 ), but that's kind of silly; there has to be a better way, right? Also, using that, I can't get the light to completely "fall off" no matter how far I move it away from the object.

Edited by eowdaoc, 13 September 2012 - 10:18 PM.


#5 Nik02   Crossbones+   -  Reputation: 2881


Posted 14 September 2012 - 12:35 AM

Actual light does not fall off completely either.

Niko Suni


#6 Tsus   Members   -  Reputation: 1049


Posted 14 September 2012 - 02:47 AM

Hi again,

I'm not sure what you are talking about dividing by PI. I haven't seen any diffuse/specular calculations that use PI, unless you're talking about the actual physical model, not the real time phong model. I guess I should have specified.

The BRDF tells you, for an incoming direction, how much light is reflected into an outgoing direction. If you sum up the BRDF over all outgoing directions (i.e., integrate the BRDF over the hemisphere), it tells you how much energy is reflected in total. This shouldn't be more than one, so for energy conservation we just divide by that integral (making the result exactly one). For a diffuse reflection, this integral happens to be Pi.

Here are some slides on reflectance models for games, including physically plausible BRDFs (though, they approximate quite a lot).

So my light color, which is currently ( 1.0f, 1.0f, 1.0f ), needs to be something like ( 100.0f, 100.0f, 100.0f )? I tried this but it is still the same result. I am apparently missing some of the logic here.
Update: I can only get it to light the object if I make the equation attenuation = 1.0f / ( d*d*0.00000075 ) or make the light color something ridiculous like ( 999999, 9999999, 99999999 ), but that's kind of silly; there has to be a better way, right? Also, using that, I can't get the light to completely "fall off" no matter how far I move it away from the object.

To get back to your problem, in what spaces are light_vector and input.pos? Are they both in world space? The values you reported to see anything at all are indeed a little extreme. :)
And yes, in theory the attenuation won't fall down to zero. In practice, however, you just cut it off somewhere.

I got the 3d model a while ago so I have no clue where it is online but I can upload it somewhere for you (no texture, just v/vn obj file). What's a good free upload place?

Sounds good! Dropbox perhaps? When you have installed it, you get a "public" folder (which is synced with a server somewhere). You could copy the file in there, do a right click, select "Copy public link" and then paste the link here. What’s the size of the file?

Actual light does not fall off completely either.

The photons don’t lose energy, that’s true. But the solid angle of the receiver area (the pixel / your eye) gets smaller the farer you are away, when viewed from the light source.
Photometric distance law:
E = I * dot(N,L) / (squaredDistance)
E=irradiance, I=intensity, N=normal, L=light direction.

#7 CryZe   Members   -  Reputation: 768


Posted 14 September 2012 - 03:19 AM

By the way, the other attenuation parameters (constant and linear) are just for artistic purposes. Squared attenuation would be most correct, since the irradiance of a point light is:
E = max(0, cosAngle) * vLightIntensity / (squaredDistance);
For a diffuse surface the radiance then becomes:
L = E * vSurfaceColor / Pi;
You got it right, except for the division by Pi. (It is there for the energy conservation.)

If anything it would be:





The Pi does not appear in the shader code though, as it cancels out with most BRDFs. For example Lambert: f_r = diffuseColor / Pi.

Whether Pi needs to be in the shader code depends on the implementation, i.e. on what his light is storing. It could be either radiance or radiant intensity. The way he is doing it now, he is storing radiant intensity. Depending on whether that radiant intensity already contains the multiplication by Pi, the division by Pi needs to be in the shader code or not.

Edited by CryZe, 14 September 2012 - 06:01 AM.


#8 Tsus   Members   -  Reputation: 1049


Posted 14 September 2012 - 06:34 AM

Hi CryZe!

Alright, let me show you how I got to my equations. I'm still convinced that they are correct.

The intensity is the flux per solid angle: I = dΦ/dω.
The solid angle is the area of a cone cap divided by the squared distance: dω = dA_viewed / d².
Now, I assume the solid angle's tip sits at the point light position and opens up toward the viewed receiver area. The viewed receiver area is the receiver area as *viewed* from the light, i.e. the receiver area times the cosine of the angle between the normal and the direction to the light: dA_viewed = dA · cos(θ). (That's what makes the difference to your equation: your second line uses the full receiver area instead of only the *viewed* part.)

Now, using that the irradiance is the flux per receiver area, E = dΦ/dA,
this gives: dΦ = I · dω = I · dA · cos(θ) / d².
Rearranging leads to: E = I · cos(θ) / d², which is my line from before.

Your first line is slightly wrong, too. From the definition of the radiance, L = d²Φ / (dω · dA · cos(θ)), and the definition of the intensity, I = dΦ/dω, we get: dI = L · dA · cos(θ).
Rearranging and using E = dΦ/dA gives: dE = L · cos(θ) · dω. (Your first line misses the cosine.)
You pulled the cosine in later, in the third line (which is therefore wrong, too). Note that a BRDF is f_r = dL_o / dE_i; the cosine isn't in there either.
Eventually you get the correct result, since your mistakes in lines one and three cancel each other out. Summarizing: your cosine should move from line three to line one and everything is fine.

As for the thing about Pi, I usually just compute the irradiance for a light like that:
E = I · max(0, cos(θ)) / d²
and then just use L_o = f_r · E to get the radiance for that light.
I don't explicitly compute the incoming radiance (L_i), therefore the Pi in my BRDF doesn't cancel out.

Best regards!

Edited by Tsus, 14 September 2012 - 06:41 AM.


#9 CryZe   Members   -  Reputation: 768


Posted 14 September 2012 - 07:47 AM

Ok, thanks for the clarification.
I wasn't sure when to include the cosine; I just knew that it's part of the rendering equation. Good to know that it belongs to the conversion from radiance to radiant intensity. In a few articles I've read it wasn't part of the first two equations either, which is why I wondered.
Good to know though ^^

Edited by CryZe, 14 September 2012 - 07:55 AM.


#10 eowdaoc   Members   -  Reputation: 119


Posted 14 September 2012 - 11:10 AM

Wow, those posts are way over my head!


Hi again,

To get back to your problem, in what spaces are light_vector and input.pos? Are they both in world space? The values you reported to see anything at all are indeed a little extreme.
And yes, in theory the attenuation won't fall down to zero. In practice, however, you just cut it off somewhere.


input.pos is in world space (transformed in the vertex shader), but light_vector is just the raw input, no transformations. Is that correct?

Here are the lighting equations I'm using, from the book "Mathematics for 3D Game Programming and Computer Graphics":

K_diffuse = D*A + D*C*max( N dot L, 0 )

D = diffuse reflection color
A = ambient intensity (should this be a float or a float3?)
C = light attenuation * light_color
N = surface normal
L = normalized light vector

K_specular = S*C*max( R dot V, 0 )^m

S = specular reflection color
C = light attenuation * light_color
R = 2*( N dot L )*N - L
V = camera vector
m = specular exponent
N = surface normal
L = normalized light vector

Here is the dropbox link to the model: https://www.dropbox....pl6jls/kit2.obj

Edited by eowdaoc, 14 September 2012 - 01:20 PM.


#11 Tsus   Members   -  Reputation: 1049


Posted 14 September 2012 - 01:33 PM

Hi again!

First of all, thank you very, very much for the model!

Sorry if we lost you along the way with all the units (not all of them were explained). Photometrically correct lighting can be a little overwhelming when you're not working with it every day.

I looked a little longer at your code and noticed that you should probably normalize the normal in your pixel shader. Even if the normals are normalized in the vertex shader, the interpolated normal may no longer be unit length when it arrives in the pixel shader. Apart from that, it all looks fine. What happens if you move the light closer? Any chance of getting it brighter?
Perhaps a squared fall-off isn't a good idea if you only have one bounce of light. Would a linear fall-off work better in your scene?

To answer your other question: the ambient light A can be a color (float3). It models the indirect light that has bounced between surfaces. If you're in a room with many red walls, the ambient light would be mostly red.

The reflection model in your book looks almost like standard Phong, except for the ambient part, but it's okay to model it that way. The fixed-function pipelines did it a little differently, but that's a matter of taste. Games often change the tone of the lighting or add extra terms to achieve a certain look and feel, even though it might not be "correct" in any way. A nice example is John Edwards' talk on the sand rendering in Journey (second-to-last section of that page).
Probably it's best not to worry too much about the lighting, as long as it looks good.

Best regards!

Edited by Tsus, 14 September 2012 - 01:34 PM.


#12 Tsus   Members   -  Reputation: 1049


Posted 14 September 2012 - 03:38 PM

There she is.
Nice model, thanks again!

Attached Thumbnails

  • CornellKit2.jpg


#13 Bacterius   Crossbones+   -  Reputation: 9066


Posted 15 September 2012 - 07:07 PM

I couldn't resist and passed this great model through my path tracer (took forever though). Thanks again for the .obj, I will keep it as a reference model.

Attached Thumbnails

  • Kit2.png

Edited by Bacterius, 15 September 2012 - 07:08 PM.

The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#14 Tsus   Members   -  Reputation: 1049


Posted 16 September 2012 - 05:19 AM

Nice! I should put in area lights. It looks so much better.
Btw, I've used stochastic progressive photon mapping for the image above, implemented on the GPU using stochastic spatial hashing. So, a consistent rendering, too.

#15 Bacterius   Crossbones+   -  Reputation: 9066


Posted 16 September 2012 - 08:59 AM

Nice! I should put in area lights. It looks so much better.
Btw, I’ve used stochastic progressive photon mapping for the image above, implemented on the GPU using stochastic spatial hashing. So, a consistent rendering, too.

Haha, very nice! Mine is just basic path tracing with an area light, a "wet plastic" material for the clothes, and everything else perfectly matte. I need to get working on ray tracing again, but I have so much stuff to do... sigh. Perhaps one day I'll find time to implement my pipe dream: a fast, physically correct ray tracer (preferably on the GPU).





