BRDF related question...

Started by gunslinger77
1 comment, last by EvilDecl81 19 years, 5 months ago
hi all, i am trying to play a little with BRDF lighting, so i've read a nice paper from nvidia explaining a bit of the math underneath this technique. I am a bit confused though: when indexing the BRDF textures, the nvidia guy uses the light ray/eye ray in TBN (tangent space) coordinates. Why is that? Why can't I use the usual object space coordinates? thanks to you all... the G.
er, I'm not sure I understood what your problem is, and if I did, I don't think you've quite understood what a BRDF is...

I haven't read the paper you're referring to; maybe they changed some things compared to the general BRDF concept. A BRDF describes how light reacts when it hits a surface: how much of the incoming light is reflected in each outgoing direction (the light distribution). All of this is defined relative to the surface itself, that is, in tangent space. Object space hasn't got anything to do with it.
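To make the "relative to the surface" part concrete, here is a minimal sketch (not from the paper, just an illustration) of how you'd move a world/object-space direction into tangent space: dot it against the tangent, bitangent, and normal, i.e., multiply by the TBN matrix whose rows are those basis vectors.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_tangent_space(v, tangent, bitangent, normal):
    # Rows of the TBN matrix; each dot product is one component
    # of v expressed in the surface's own coordinate frame.
    return (dot(v, tangent), dot(v, bitangent), dot(v, normal))

# Example: a surface whose tangent frame happens to match the world axes.
T, B, N = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
L = normalize((0.0, 0.0, 1.0))
print(to_tangent_space(L, T, B, N))  # (0.0, 0.0, 1.0): light along the normal
```

In a shader you'd do the same thing by multiplying the light/eye vectors by the per-vertex TBN matrix before using them to index the BRDF textures.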
Quote:Original post by gunslinger77
hi all,
i am trying to play a little with BRDF lighting,
so i've read a nice paper from nvidia,
explaining a bit of the math underneath this technique.
I am a bit confused though:
when indexing the BRDF textures the nvidia guy
uses the light ray/eye ray in TBN coordinate space.
Why is that? why can't i use the simple usual
object space coordinates?

thanks to you all...

the G.


Certain BRDF formulations work only in tangent space, e.g., Lafortune's, since it assumes that the normal is (0,0,1) and the tangents are (1,0,0) and (0,1,0). The advantage of this is that you can parameterize the BRDF with only two vectors (the light direction and the eye direction). I'd guess this is what they are doing.
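For illustration, here's a minimal sketch of a single Lafortune cosine lobe evaluated in tangent space (my own simplified sketch, not code from the nvidia paper; the constants are hypothetical). With the frame fixed as above, the lobe is just a weighted dot product of the light and eye vectors raised to a power:

```python
def lafortune_lobe(L, V, Cx, Cy, Cz, n):
    """One Lafortune lobe. L and V are unit light/eye directions in
    tangent space (z is the surface normal). Cx, Cy, Cz weight the
    component-wise product; n is the specular exponent."""
    s = Cx * L[0] * V[0] + Cy * L[1] * V[1] + Cz * L[2] * V[2]
    return max(s, 0.0) ** n

# With Cx = Cy = -1, Cz = 1 the lobe peaks when V is the mirror
# reflection of L, giving Phong-like specular behavior.
L = (0.6, 0.0, 0.8)
V_mirror = (-0.6, 0.0, 0.8)
print(lafortune_lobe(L, V_mirror, -1.0, -1.0, 1.0, 10.0))  # 1.0 at the mirror direction
```

Because everything is expressed in the surface's own frame, no normal or tangent vectors appear in the formula at all, which is exactly why the light/eye vectors have to be transformed into tangent space first.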

If you are only varying the BRDF parameters in texture space, this can be much more efficient, since the tangent-space transform happens only in the vertex shader rather than per pixel.
EvilDecl81

