B_old

Model-scale and specular highlights


Hi,

after writing some forward-shading code (for use with transparent stuff) and comparing it with the deferred-shaded models, I noticed that the specular highlights differ when the model is scaled. The bigger the scale, the more apparent the difference. I believe the forward-shaded model looks wrong, a little as if the highlights were calculated for the unscaled model.
All lighting calculations happen in object space, so scale should be accounted for.
I'll post some shader code, just in case someone notices a faulty piece right away.


//vertex shader
VS_Output_Nm output;

output.pos = mul(float4(pos, 1.f), g_worldViewProj);

output.tex = tex;

//unpack normal and tangent from [0,1] texture range to [-1,1]
float3 pnorm = 2.f * (normal.xyz - 0.5f);
float3 ptan = 2.f * (tan.xyz - 0.5f);
//rows are the tangent, bitangent and normal basis vectors
float3x3 tangentSpace = float3x3(ptan, normalize(tan.w * cross(pnorm, ptan)), pnorm);

//g_lightDir and g_eyePos are in object-space
output.light = mul(tangentSpace, g_lightDir);

float3 eyeDir = mul(tangentSpace, normalize(g_eyePos - pos));
output.halfVec = eyeDir + output.light;

return output;


//pixel shader

//unpack the tangent-space normal from [0,1] to [-1,1]
float3 normal = 2.f * (g_texture1.Sample(g_anisoSampler, input.tex).rgb - 0.5f);

return lighting(normalize(normal), normalize(input.light), normalize(input.halfVec), input.tex);

float3 lighting(float3 normal, float3 light, float3 halfAngle, float2 tex)
{
    float diffuse = dot(normal, light);
    float backDiffuse = max(-diffuse, 0.f) * g_translucency;
    diffuse = max(diffuse, 0.f) + backDiffuse;

    float reflection = max(dot(normal, halfAngle), 0.f);
    float3 specular = pow(reflection, g_specularExponent * 255.f) * g_specularColor;

    float3 baseColor = g_opacity * g_texture0.Sample(g_anisoSampler, tex).rgb;
    return g_lightColor * (diffuse * baseColor + specular);
}

You should really light in world space.

If you light in object space, you need to invert the model matrix and apply it to the light at some point. But inverting a scaled matrix brings the light closer to the object, which affects reflection angles etc.
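To make the effect concrete, here is a minimal CPU-side sketch of the transform in question (scaleMatrix and inverse are hypothetical helpers, purely for illustration):

//model scaled uniformly by 2
float4x4 world = scaleMatrix(2.f);
//the inverse of a pure scale-by-2 is a scale-by-0.5
float4x4 worldInv = inverse(world);
//a light 10 units away in world space...
float3 lightWorld = float3(0.f, 0.f, 10.f);
//...ends up only 5 units away in object space
float3 lightObj = mul(float4(lightWorld, 1.f), worldInv).xyz;

Any distance-based term computed from lightObj now sees half the real distance.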

Thanks for the answer.
I don't understand it though. :) So maybe you could explain some more.
As you say, I transform the light position by the inverted model matrix. I was under the impression that bringing the light closer for bigger objects should FIX the angle problem, not the other way round.

Also, you make me a little bit nervous. Are you suggesting that world-space lighting is always preferable? For example, I'm doing all lighting in view space for the deferred part of the pipeline. Not good?

Quote:
Original post by B_old
Thanks for the answer.
I don't understand it though. :) So maybe you could explain some more.
As you say, I transform the light position by the inverted model matrix. I was under the impression that bringing the light closer for bigger objects should FIX the angle problem, not the other way round.

Also, you make me a little bit nervous. Are you suggesting that world-space lighting is always preferable? For example, I'm doing all lighting in view space for the deferred part of the pipeline. Not good?

Calm down: using view space in a deferred shader is the way to go. But on the other hand, it is better to use world space than object space (it avoids some inverse matrices). :)

Do you use uniform scaling? If not, check whether you scale your normal and tangent vectors.
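If the scale does turn out to be non-uniform, the usual fix is to transform normals by the inverse-transpose of the world matrix. A minimal sketch, assuming a g_worldInvTranspose constant computed on the CPU (normalOS and tangentOS are placeholder names):

//normals need the inverse-transpose under non-uniform scale,
//otherwise they tilt away from the true surface normal
float3 normalWS = normalize(mul(normalOS, (float3x3)g_worldInvTranspose));
//tangents follow the surface, so the plain world matrix is fine
float3 tangentWS = normalize(mul(tangentOS, (float3x3)g_world));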

Do you have any screenshots of the issue?

Maybe your deferred implementation contains a bug.

Quote:
Original post by B_old
Thanks for the answer.
I don't understand it though. :) So maybe you could explain some more.
As you say, I transform the light position by the inverted model matrix. I was under the impression that bringing the light closer for bigger objects should FIX the angle problem, not the other way round.

Also, you make me a little bit nervous. Are you suggesting that world-space lighting is always preferable? For example, I'm doing all lighting in view space for the deferred part of the pipeline. Not good?
Firstly, I'm only talking about forward rendering here, based on your OP. World space is just easier generally, and it avoids the scaling problem.

If you have a sphere of radius 1 and a light 10 metres away, there is a 9m gap for attenuation. If you scale the sphere by 100%, it's now radius 2 and the gap is 8m.

If you instead invert that "doubling" scale and apply it to the light, the light will move to 5m away, leaving a 4m gap instead of 8m, half its relative distance. That will make it too bright and will change the incidence angles too.

Hope that helps.
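Working through those numbers, assuming a simple 1/d^2 falloff (the thread doesn't specify the attenuation model):

world space:  light at 10m, sphere scaled to radius 2 -> gap = 10 - 2 = 8m
object space: light moved to 10/2 = 5m, sphere radius 1 -> gap = 5 - 1 = 4m
error: (1/4^2) / (1/8^2) = 4, i.e. the surface comes out four times too bright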

Object space lighting should be correct then?
The reason I'm using it is that I figured it's easier to invert one matrix per model on the CPU and then transform the light position to tangent space per vertex, instead of transforming every normal to world space per pixel.
I'll give it a try though, to see whether it changes the image.

I tried both uniform and non-uniform scaling and the result seemed to vary slightly, which really confuses me, because, as I said, the normals and tangents never get touched by the scale.

Thinking about it now, it occurs to me that in a case where you have vertex positions of zero on a certain axis and scale along that axis, I could see how the object-space eye/light position could end up wrong.

Anyway, looking forward to your feedback.

Quote:
Original post by Rubicon
If you instead invert that "doubling" scale and apply it to the light, the light will move to 5m away, leaving a 4m gap instead of 8m, half its relative distance. That will make it too bright and will change the incidence angles too.

Hmm. I'm still having some trouble following.
I scale the attenuation according to the model scale. It should work, because the visible attenuation radius is consistent across differently scaled models.
About the incidence angles: I tried visualizing it on a piece of paper with a scaled cube and came to the conclusion that the angles should be the same whether you scale the model in world space or scale the light distance in object space. Are you saying that's wrong?

EDIT:
I now do the forward lighting in world space and can see virtually no difference from the deferred part. Still not understanding the angle issue, though.

[Edited by - B_old on September 28, 2010 9:43:12 AM]

I don't really know how to explain it better tbh. It's a well known problem and I never researched it fully, as I know that doing stuff in world space makes everything "just work". (As well as making other stuff trivial like instancing. Likewise most demo source on teh internets is usually in ws too).

If applying a 3x3 transform to a normal in a VS gives you a bottleneck, I'll give you a million bucks. It's soooo not a problem. I'd advise you to just bite the bullet, stop trying to figure out why it doesn't work, accept that everyone says it doesn't work, and just work around it. :)

EDIT: Don't forget that just scaling the attenuation isn't the whole problem. You mentioned specular, which also involves a light position and a camera position.

Quote:
Original post by Rubicon
If applying a 3x3 transform to a normal in a VS gives you a bottleneck, I'll give you a million bucks.

But in the case of normal mapping I don't have the normal before the PS. So I am transforming the normal from tangent space to world space in the PS, not the VS. Or am I going about it the wrong way?
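For reference, a common arrangement looks something like this. It is only a sketch: g_world is an assumed per-object constant, input.tangentSign stands in for your tan.w, and using the plain world matrix is only valid for rotation/uniform scale (use the inverse-transpose for the normal otherwise):

//vertex shader: rotate the unpacked frame into world space
output.normalWS = mul(pnorm, (float3x3)g_world);
output.tangentWS = mul(ptan, (float3x3)g_world);

//pixel shader: rebuild the basis and bring the sampled normal to world space
float3 n = normalize(input.normalWS);
float3 t = normalize(input.tangentWS);
float3 b = cross(n, t) * input.tangentSign;
float3 nTS = 2.f * (g_texture1.Sample(g_anisoSampler, input.tex).rgb - 0.5f);
float3 normalWS = normalize(nTS.x * t + nTS.y * b + nTS.z * n);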

It's what I do, yes.

FYI, my pipeline used to work in object space exactly how you have your forward path. What made me change was adding instancing support, which *has* to be in world space if you want sane code.

Because of the need to support instancing, a few more things had to move from the VS to the PS, but it was a price well worth paying in terms of the general perf payoff. Even a fairly old card can eat this stuff for breakfast.

There's an interesting paper somewhere about maximising the GPU by adding vertices to meshes for detail rather than leaving it sat idle. Their example was a fairy model which, fullscreen in their sample pic, still looked solid in wireframe! Not a practical thing, but a good eye-opener for when to start worrying! :)

