My point is that lighting in world space will be different to lighting in view space.
distance(vertex*view, light*view) != distance(vertex, light)
This is clear if we put the light at the origin and consider a view matrix with no translation; then we have:
length(vertex*view) != length(vertex)
which holds whenever the matrix contains a scale (a pure rotation or translation preserves distances).
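This is easy to check numerically. A minimal sketch in plain Python (the matrices and the point are made up; the light is assumed to sit at the origin, so the light distance is just the vector length):

```python
import math

def mat_vec(m, v):
    """Multiply a 3x3 row-major matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def length(v):
    return math.sqrt(sum(c * c for c in v))

vertex = [1.0, 2.0, 3.0]  # made-up point; the light sits at the origin

# Pure rotation: 90 degrees about the Z axis. Lengths are preserved,
# so view-space lighting matches world-space lighting here.
rotation = [[0.0, -1.0, 0.0],
            [1.0,  0.0, 0.0],
            [0.0,  0.0, 1.0]]

# Uniform scale by 2: lengths (and therefore light distances) change.
scale = [[2.0, 0.0, 0.0],
         [0.0, 2.0, 0.0],
         [0.0, 0.0, 2.0]]

print(math.isclose(length(mat_vec(rotation, vertex)), length(vertex)))  # True
print(math.isclose(length(mat_vec(scale, vertex)), length(vertex)))     # False
```

So the inequality only bites when the matrix carries a scale; for a rigid view transform the two spaces agree.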
In other words, the transformation distorts the lighting
So I wonder why OpenGL does lighting in view space and not in world space.
Is it for the sake of having a single modelview matrix in the pipeline instead of separate model and view matrices?
OK, I see. I did a test, and yes: the distance differs depending on which space you work in when scale is applied.
This looks like a problem for point lights and spot lights, and whenever you need attenuation.
However, I don't see this as a problem for fixed-function GL, because fixed-function GL doesn't do its lighting computation in object space.
But for someone using shaders who, say, does bump mapping on certain objects and Phong lighting on something else, the attenuation will come out different and they might wonder why; the bump-mapped object would appear brighter.
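To see that brightness difference concretely, here is a sketch using the classic distance-attenuation formula 1/(kc + kl*d + kq*d^2). The coefficients, the world-space distance, and the 0.5 view scale are all made-up numbers; the point is only that two shading paths measuring the same light distance in differently scaled spaces attenuate differently:

```python
def attenuation(d, kc=1.0, kl=0.09, kq=0.032):
    """Classic point-light falloff 1/(kc + kl*d + kq*d^2); coefficients are made up."""
    return 1.0 / (kc + kl * d + kq * d * d)

d_world = 10.0    # distance from surface point to light, measured in world space
view_scale = 0.5  # hypothetical uniform scale carried by the view matrix
d_view = d_world * view_scale

print(attenuation(d_world))  # shading path that measures in world space
print(attenuation(d_view))   # shading path that measures in the scaled space: brighter
```

With a shrinking scale the view-space distance is shorter, so that path gets more light; with an enlarging scale it would be the other way around.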
Is the above correct? Can someone verify it?