Normals and camera/eye direction computations (HLSL)

Started by
11 comments, last by Meltac 12 years ago
Wow, that sounds interesting. I've thought of something like that, but I didn't know about the ddx/ddy functions. I guess they should help me out; it will take me some trial-and-error cycles, though.
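For reference, the idea can be sketched as follows: ddx/ddy give the screen-space derivatives of any pixel-shader value, so taking them on a view-space position yields two surface tangents whose cross product is the face normal. A minimal sketch, assuming viewPos is the view-space position sampled for the current pixel (names are illustrative, not from the original engine):

```hlsl
// Sketch: reconstruct the face normal under this pixel from
// screen-space derivatives of the view-space position.
float3 ReconstructNormal(float3 viewPos)
{
    float3 dx = ddx(viewPos);   // change of position along screen x
    float3 dy = ddy(viewPos);   // change of position along screen y
    // The cross product of the two tangents is the surface normal.
    // Swap the operands if the normals come out facing the wrong way;
    // the sign depends on your handedness and winding conventions.
    return normalize(cross(dy, dx));
}
```

Note this gives flat, per-triangle normals (derivatives are constant across a primitive), which is usually fine for effects like SSAO but not for smooth shading.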

Thanks for your help, guys!
Can you verify the surface format of the normal texture? There are lots of ways to store inaccurate normals.

Also, I'm not 100% convinced that the matrix you used for the transform is the correct one. Typically the mView matrix is used for world-to-view space transform and what you need is the inverse of that matrix.

Cheers!

Can you verify the surface format of the normal texture? There are lots of ways to store inaccurate normals.


How would I verify the surface format? In the shader I only see the values I get when sampling a position, normal, or color — where would the "format" show up?


Also, I'm not 100% convinced that the matrix you used for the transform is the correct one. Typically the mView matrix is used for world-to-view space transform and what you need is the inverse of that matrix.


I also thought of that. Given that my engine doesn't provide that inverse matrix, is there a way to compute it from the mView matrix?
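If mView is a rigid transform (rotation plus translation, no scale), its inverse can be built cheaply in the shader without a general matrix inversion: transpose the rotation block and rotate the negated translation. A sketch, assuming the row-vector convention (pos is multiplied as mul(pos, mView)) with the translation in the fourth row — adjust if your engine uses column vectors:

```hlsl
// Sketch: invert a rigid view matrix (rotation + translation, no scale).
// Assumes row-vector convention with translation stored in row 3.
float4x4 InvertRigid(float4x4 m)
{
    // For a pure rotation, the transpose equals the inverse.
    float3x3 rotT = transpose((float3x3)m);
    float3 t     = m[3].xyz;          // translation row
    float3 invT  = -mul(t, rotT);     // inverse translation

    float4x4 inv;
    inv[0] = float4(rotT[0], 0.0f);
    inv[1] = float4(rotT[1], 0.0f);
    inv[2] = float4(rotT[2], 0.0f);
    inv[3] = float4(invT,    1.0f);
    return inv;
}
```

Multiplying a view-space position by this matrix takes it back to world space. If the engine lets you set shader constants from the CPU side, it is cheaper to invert there once per frame and pass the result in, rather than inverting per pixel.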

