Normals and camera/eye direction computations (HLSL)

11 comments, last by Meltac 12 years ago
Hi all!

I'm not experienced in vector maths related computations so I need some help here.

In the game engine I'm working with (xRay 1.0) I can sample the normal vector of a pixel in a pixel shader like this:

float3 N = tex2D(s_normal, texCoord);

However, it seems the result is not the absolute surface normal, but something that depends on the camera/eye direction. For example, when I visualize the normal as a color, horizontal surfaces are green while the camera points straight ahead, but as soon as I tilt the camera toward the sky or the ground, the color changes to blue or red, respectively. That indicates the "normal" N sampled above is relative to the camera direction rather than absolute. The same thing happens with vertical surfaces when I turn the camera left or right.

I have the camera/eye direction vector, so it should be easily possible to get the absolute surface normal with some vector math, but I don't know which formula or HLSL function(s) to use. I tried several combinations of scalar multiplications, divisions, subtractions, and additions, as well as the "dot" and "mul" functions, but couldn't get a correct result.

So, how do I get the absolute surface normal independent from the camera direction out of the above values?
It seems that your normals are stored in view space, i.e. they have been transformed from world space to view space.
It is absolutely possible to calculate lighting in view space. There may even be some performance advantages.

Just transform the light position and direction vectors to view space and the math for lighting is exactly the same as if the lighting was done in the world space. The advantage is that the camera position is always at 0,0,0 in the view space. Also, storing normals in view space is good in deferred shading implementations.
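As an illustration, here's a minimal sketch of view-space diffuse lighting. The names m_V (world-to-view matrix), light_pos_w (world-space light position) and pos_v (view-space pixel position) are placeholders, not engine-provided symbols:

```hlsl
// Placeholder names: m_V = 4x4 world-to-view matrix, light_pos_w = world-space
// light position, pos_v = this pixel's view-space position (e.g. from a G-buffer).
float3 light_pos_v = mul(float4(light_pos_w, 1.0), m_V).xyz; // light into view space
float3 N = tex2D(s_normal, texCoord).xyz;  // view-space normal from the G-buffer
float3 L = normalize(light_pos_v - pos_v); // direction from pixel to light
float3 V = normalize(-pos_v);              // camera sits at (0,0,0) in view space
float NdotL = saturate(dot(N, L));         // same Lambert term as in world space
```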

Practically, you can also transform the normals back to world space using the 3x3 part of the view matrix, but "why would you?".

Cheers!
Sorry, but could you give me a concrete formula or function call? I'm stuck here and have no idea how to practically do this transformation. And yes, I forgot to mention that the xRay engine uses deferred rendering.
anybody please?
All the math functions you threw at the conversation gave me the impression that handling a few transformations isn't an issue for you.

Simple normal transformation in HLSL goes something like:

float3 WorldNormal = mul(ViewSpaceNormal, (float3x3)mView);

where ViewSpaceNormal contains the normal you sampled from your normal texture and mView holds a 4x4 matrix which transforms from view/camera space to world space (it is the inverse of the typical camera matrix, which transforms from world space to view space).

The WorldNormal can then be used for the lighting in worldspace.

Still, I'd stay with the view space lighting. If vector/matrix multiplies give you a hard time, I'd spend a few hours studying them.

[edit] you'll also need to make sure that the normal you sample from the normal texture isn't packed.
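For example, if the G-buffer stores normals packed into the [0,1] color range (a common convention, though you'd have to check how this particular engine writes them), they need to be unpacked before the transform:

```hlsl
// Assumes the [0,1] packing convention; verify against the engine's G-buffer code.
float3 N_packed = tex2D(s_normal, texCoord).xyz;
float3 N = normalize(N_packed * 2.0 - 1.0); // remap [0,1] back to [-1,1]
```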
Now THAT helped me, many thanks! Taking your formula with the game's transformation matrix (which I had to dig up out of the existing shader code) gave me the eye-direction-independent normals.

Now there's one issue with this method: the results do not seem to be very precise. Verticals can easily be distinguished from horizontals now, but planes inclined by about 30-40 degrees still deliver a surface normal equal to that of an entirely flat (0 degrees) plane. Might some part of that formula be responsible for this lack of precision?
That's not a precision issue.

Make sure your inverse transform is exactly what kauna wrote.

float3 WorldNormal = mul(ViewSpaceNormal,(float3x3)mView);




mView (inverse of world -> view) HAS to be cast to a 3x3 matrix (for beginner safety reasons).

ViewSpaceNormal should ideally be a float3 (for beginner safety reasons).

Also, the components (x, y, z) of your normal can be negative. So when debugging them, either abs() them or negate them to find out what's happening.
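A simple way to visualize this while debugging (a sketch, not engine code):

```hlsl
// Negative components render as black; abs() or remap them to see what's there.
float3 N = tex2D(s_normal, texCoord).xyz;
return float4(abs(N), 1.0);              // magnitudes only, sign is lost
// or: return float4(N * 0.5 + 0.5, 1.0); // remaps [-1,1] to [0,1], keeps sign
```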

What makes you sure that it's NOT a precision issue? I double-checked that the transform is exactly as kauna suggested, including the data types / casts. Here's my code:

float3 viewspace_N = tex2D(s_normal, uv); // s_normal is the engine-provided normal sampler state, uv the float2 tex-coord
float3 N = mul(viewspace_N,(float3x3)m_WV); // m_WV is the engine-provided transformation matrix

Note that I've also tried both normalizing and unpacking the vectors without success.

Might the transformation matrix m_WV given by the engine happen to be responsible for the incorrect (or rather, imprecise) result?
After further investigation it turned out that the surface normals themselves are returned imprecisely by the engine. That makes the values somewhat useless for my purposes, where I need to distinguish flat areas from those with a noticeable slope.

So I have to find some approach to detect whether a pixel lies on a flat surface without relying on surface normals. I tried to take the depth/distance value of the position sampler into account for that purpose, but it seems that this one is no more precise than the normal sampler. So what other options do I have in this case, if any?
If your engine is giving you flat-out incorrect normals, you can also compute them in the pixel shader with

ddx/ddy (the partial derivative, i.e. the 'change', of a value with respect to viewport X, Y).

To get world space normals, try the following

zdx = ddx(pixel world space z)
zdy = ddy(pixel world space z)

ydx = ddx(pixel world space y)
ydy = ...


v0 = xdx, ydx, zdx (or v0 = ddx(pixel world space position))
v1 = xdy, ydy, zdy (...)

normal = cross product of v0 and v1

You gotta decide what you want to do at edges (and inside dynamic branches/loops, where derivatives are undefined). Also be careful of the normal's direction.
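Put together, the steps above come out roughly as follows in HLSL. worldPos is a placeholder for the pixel's world-space position (from a position G-buffer, or reconstructed from depth):

```hlsl
// Face normal from screen-space derivatives of the world-space position.
float3 dpdx = ddx(worldPos);             // change of position along viewport X
float3 dpdy = ddy(worldPos);             // change of position along viewport Y
float3 N = normalize(cross(dpdy, dpdx)); // swap the arguments if it faces the wrong way
```

Note that this yields a faceted (per-face) normal, not a smooth interpolated one.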

