D3D lighting and scaling - TECHY STUFF!

If you scale an object's vertices using the WORLD matrix transformation (e.g. a world matrix built like the sketch below), then directional lighting gets messed up because the vertex normals appear to get scaled too! Scaling the [normalized] lighting direction vector by the same amount as in the world matrix would seem to counteract any mis-calculation - is this the general solution (I guess it's only a minor inconvenience)?

I'm not a maths expert, so I'm not entirely clear on why vertex normals are not kept normalized - either by doing an extra (and perhaps costly) length check and renormalization of each normal after transformation, or by using a second matrix for the normals [or the normal component of the vertices] which doesn't contain the scaling parts, just the rotations. Would anyone care to explain the DirectX approach and any alternatives used in other engines?

A-
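(Just an illustrative sketch of the setup I mean - assuming DX8 with the D3DX helpers and a hypothetical 'device' pointer:)

    // needs d3dx8.h / d3dx8.lib for the D3DX matrix helpers
    D3DXMATRIX scale, rotation, world;
    D3DXMatrixScaling(&scale, 3.0f, 3.0f, 3.0f);      // uniform scale by 3
    D3DXMatrixRotationY(&rotation, D3DX_PI * 0.25f);  // some rotation
    D3DXMatrixMultiply(&world, &scale, &rotation);    // world = scale * rotation
    device->SetTransform(D3DTS_WORLD, &world);
    // the fixed-function pipe transforms the normals by the same upper 3x3,
    // so they come out with length 3 instead of 1 and the lighting goes wrong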
Turn on the D3DRS_NORMALIZENORMALS renderstate...
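(A one-liner sketch, assuming a DX8-style device pointer called 'device':)

    // tells the fixed-function pipe to renormalize normals after the
    // world-matrix transform, at a small per-vertex cost
    device->SetRenderState(D3DRS_NORMALIZENORMALS, TRUE);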

...if you're using vertex shaders, you'll be controlling the transform yourself. Transform vertex positions by the usual concatenated world and view matrices (aka modelview/worldview).

Transform normals by the *inverse transpose* of the world*view matrix, i.e. take the inverse, then transpose it.
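(Sketch of that matrix setup, assuming the D3DX helpers and that 'world' and 'view' hold the current world and view matrices:)

    D3DXMATRIX worldView, inv, invTranspose;
    D3DXMatrixMultiply(&worldView, &world, &view);  // concatenate world * view
    D3DXMatrixInverse(&inv, NULL, &worldView);      // take the inverse...
    D3DXMatrixTranspose(&invTranspose, &inv);       // ...then transpose it
    // transform positions by worldView (then projection) as usual;
    // upload invTranspose as vertex shader constants and use it for the normals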

AFAIK, scaling the light direction vector won't always work properly since the dot product at the core of D3D lighting gives:

a.b = |a||b|cos(t)

Which, if a and b (L and N) *are* normalised, gives just the cosine of the angle between them (i.e. perfect for the lighting intensity). If one or both aren't normalised there's an extra length being multiplied by the cosine, so the intensity will either be too dark (|a||b| < 1) or too light (|a||b| > 1)... so AFAIK you'd need to correct the colour after lighting, which could be a major pain.
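(A tiny illustration of that, assuming the D3DX vector helpers:)

    D3DXVECTOR3 N(0.0f, 1.0f, 0.0f);          // unit normal
    D3DXVECTOR3 L(0.0f, 1.0f, 0.0f);          // unit light direction
    float good = D3DXVec3Dot(&N, &L);         // 1.0 - the pure cosine term
    D3DXVECTOR3 scaledN = N * 3.0f;           // what a scale-by-3 world matrix produces
    float bad = D3DXVec3Dot(&scaledN, &L);    // 3.0 - three times too bright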


My preference would be to use a vertex shader to avoid all the hassle, particularly on a chip with no hardware T&L. Or, even simpler, ensure you don't have any scales (the "hold large stick over artist" method). On hardware with fixed-function T&L only, NORMALIZENORMALS is fairly cheap.

--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com


Wow!

When I first started using DX7, I had a problem with scaled objects getting darker. I have also written my own software engines, so I understand the math. Reading the docs gave me the formulas, but I guess it went past me that other things must be considered.

My question to S1CA (which is not the first - thanks for the help in the past!) is whether you prefer vertex shaders even if there is no hardware support? I have another post asking this same question. I am getting higher frame rates on the Skinned Mesh example not using vertex shaders than using them.

But your point is that by using them we can bypass a lot of DX hassle. Heck, I am still getting 80 FPS using them, and if I did stick with them, it would be future-hardware compatible.

S1CA, do you have any other thoughts on this? I would love to hear them.
Thank you Simon (should have looked at the renderstates!) - I'll leave the vertex shaders alone for now (extra complexity).
A

[edited by - andym on April 17, 2002 11:02:21 AM]

I wrote:
quote: so AFAIK you'd need to correct the colour after lighting, which could be a major pain.


Actually on second thoughts, if the scales in your matrices were uniform and you knew what the scale was (say by tracking the construction of the matrices rather than a totally arbitrary method), you might be able to make it work:

|N||L|cos(t)

If you knew that the source normals were always normalised (|N| = 1), and you knew that the scale was, say, 3, then you could shorten the L vector so that its length was 1/3, so that 3 * 1/3 = 1.

Of course things would get messy for anything other than a plain directional light - a spotlight term and attenuation would probably get tricky. Also you might get precision issues if L becomes too short.
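(Rough sketch of that workaround, assuming the D3DX helpers and that you've tracked the uniform scale yourself in a hypothetical 'uniformScale' variable:)

    float uniformScale = 3.0f;                   // whatever scale you put in the world matrix
    D3DXVECTOR3 lightDir(0.0f, -1.0f, 0.0f);
    D3DXVec3Normalize(&lightDir, &lightDir);     // |L| = 1
    lightDir *= 1.0f / uniformScale;             // now |L| = 1/3, so |N||L| = 3 * 1/3 = 1
    // feed this shortened lightDir to the lighting calculation instead of the unit vector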


Taulin wrote:
quote: My question to S1CA (which is not the first - thanks for the help in the past!) is whether you prefer vertex shaders even if there is no hardware support?


A) On a new chip which supports shaders in hardware (GeForce3, Radeon 8500 etc.), I'd always prefer shaders.
Knowing that there's no redundant processing happening and that you don't need to fudge anything is a nice feeling.

B) On a chip with *NO* hardware vertex processing at all (not even fixed function), such as most older hardware, the Kyro series, the Matrox G400 etc., I'd prefer shaders too!
The software T&L is always going to be done on the CPU by the D3D runtime whether you use shaders or the fixed-function pipe on that type of hardware. AFAIK (from conferences etc.), vertex shaders get "compiled" into native, CPU-specific x86 code by the D3D runtime and simply executed inline, so their speed is very similar to the fixed-function pipe.
The extra benefit you get is being able to remove parts of the pipeline you know your app doesn't need.

C) The tricky hardware is DX7-level fixed-function hardware like the GeForce 1 & 2 - in that case, what is best becomes very application-specific. To simplify my code I'd prefer shaders; however, the speed from hardware fixed function may be worth the extra work.
Particularly if the only reason for using a shader was to avoid the scaled-normals problem, because NORMALIZENORMALS in hardware is pretty fast.

--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com


I guess that is why, on my GeForce2, I get faster frame rates using the fixed pipe than vertex shaders (at least in the Skinned Mesh demo).

Thanks for the reply!

