
Normals and camera/eye direction computations (HLSL)


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

12 replies to this topic

#1 Meltac   Members   -  Reputation: 277


Posted 05 April 2012 - 04:45 AM

Hi all!

I'm not experienced in vector maths related computations so I need some help here.

In the game engine I'm working with (xRay 1.0), I can sample the normal vector of a pixel in a pixel shader like this:

float3 N = tex2D(s_normal, texCoord);

However, it seems the result is not the absolute surface normal vector, but something depending on the camera/eye direction. For example, when I use the normal vector as a color, horizontal surfaces are green while the camera points straight ahead, but as soon as I tilt the camera toward the sky or the ground, the color changes to blue or red, respectively. This indicates that the "normal" N sampled above is relative to the camera direction rather than absolute. The same happens with vertical surfaces when I turn the camera left or right.

I have the camera/eye direction vector, so it should be possible to get the absolute surface normal with some vector math, but I don't know which formula or HLSL function(s) to use. I've tried several combinations of scalar multiplications, divisions, subtractions, and additions, as well as the "dot" and "mul" functions, but did not get a correct result.

So, how do I get the absolute surface normal independent from the camera direction out of the above values?


#2 kauna   Crossbones+   -  Reputation: 2142


Posted 05 April 2012 - 05:38 AM

It seems that your normals are stored in view space, i.e. they have been transformed from world space to view space.
It is perfectly possible to calculate lighting in view space. There may even be some performance advantages.

Just transform the light position and direction vectors to view space, and the lighting math is exactly the same as if the lighting were done in world space. One advantage is that the camera position is always at (0,0,0) in view space. Also, storing normals in view space is common in deferred shading implementations.

In practice you can also transform the normals back to world space using the 3x3 part of the view matrix, but why would you?
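A minimal sketch of what view-space lighting could look like (the names mView, lightPosWorld and posView are illustrative placeholders, not actual xRay engine variables):

```hlsl
// Transform the light position into view space (ideally once on the CPU),
// using the same row-vector mul convention as the rest of the thread.
float3 lightPosView = mul(float4(lightPosWorld, 1.0), mView).xyz;

// In the pixel shader: N is the sampled view-space normal, posView the
// view-space pixel position reconstructed from the G-buffer.
float3 L = normalize(lightPosView - posView);
float  diffuse = saturate(dot(N, L));

// The camera sits at the origin in view space, so the view vector is simply:
float3 V = normalize(-posView);
```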

Cheers!

#3 Meltac   Members   -  Reputation: 277


Posted 05 April 2012 - 07:02 AM

Sorry, but could you give me a concrete formula or function call? I'm stuck here and have no idea how to practically do this transformation. And yes, I forgot to mention that the xRay engine uses deferred rendering.

#4 Meltac   Members   -  Reputation: 277


Posted 05 April 2012 - 09:38 AM

anybody please?

#5 kauna   Crossbones+   -  Reputation: 2142


Posted 05 April 2012 - 12:30 PM

All the math functions you threw into the conversation gave me the impression that handling a few transformations wouldn't be an issue for you.

A simple normal transformation in HLSL goes something like this:

float3 WorldNormal = mul(ViewSpaceNormal, (float3x3)mView);

where ViewSpaceNormal contains the normal you sample from your normal texture, and mView holds a 4x4 matrix which transforms from view/camera space to world space (it is the inverse of the typical camera matrix, which transforms from world space to view space).

The WorldNormal can then be used for the lighting in worldspace.

Still, I'd stay with the view-space lighting. If vector/matrix multiplies give you a hard time, I'd spend a few hours studying them.

[edit] you'll also need to make sure that the normal you sample from the normal texture isn't packed.
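One common packing convention (whether xRay uses it depends on its G-buffer layout, so treat this as a guess to verify) stores each component remapped from [-1,1] into the [0,1] range of an unsigned texture format. Unpacking would then look like:

```hlsl
float3 packedN = tex2D(s_normal, texCoord).xyz; // components in [0,1]
float3 N = normalize(packedN * 2.0 - 1.0);      // expand back to [-1,1]
```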

#6 Meltac   Members   -  Reputation: 277


Posted 06 April 2012 - 11:08 AM

Now THAT helped me, many thanks! Using your formula with the game's transformation matrix (which I had to dig up from the existing shader code) gave me eye-direction-independent normals.

There's one issue with this method, though: the results do not seem very precise. Verticals can easily be distinguished from horizontals now, but planes inclined at about 30-40 degrees still deliver a surface normal equal to that of an entirely flat (0 degree) plane. Might some part of that formula be responsible for this lack of precision?

#7 jameszhao00   Members   -  Reputation: 267


Posted 07 April 2012 - 12:22 AM

That's not a precision issue.

Make sure your inverse transform is exactly what kauna wrote.

float3 WorldNormal = mul(ViewSpaceNormal,(float3x3)mView);



mView (the inverse of the world -> view matrix) HAS to be cast to a 3x3 matrix (for beginner safety reasons).

ViewSpaceNormal should ideally be a float3 (for beginner safety reasons).



Also, components (x, y, z) of your normal can be negative. So when debugging them, either abs() them, or negate them to find out what's happening.
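For example, since negative components get clamped to black when written out as a color, a quick debugging sketch could be:

```hlsl
// Visualize component magnitudes (sign information is lost):
return float4(abs(N), 1.0);

// Or remap [-1,1] to [0,1] so the sign survives as brightness:
return float4(N * 0.5 + 0.5, 1.0);
```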



#8 Meltac   Members   -  Reputation: 277


Posted 07 April 2012 - 01:25 PM

What makes you sure that it's NOT a precision issue? I double-checked that the transform is exactly as kauna suggested, including the data types / casts. Here's my code:

float3 viewspace_N = tex2D(s_normal, uv); // s_normal is the engine-provided normal sampler state, uv the float2 tex-coord
float3 N = mul(viewspace_N,(float3x3)m_WV); // m_WV is the engine-provided transformation matrix

Note that I've also tried both normalizing and unpacking the vectors without success.

Might the transformation matrix m_WV given by the engine happen to be responsible for the incorrect (or better, imprecise) result?

#9 Meltac   Members   -  Reputation: 277


Posted 07 April 2012 - 02:10 PM

After further investigation it turned out that the surface normals themselves are returned imprecisely by the engine. That makes the values somewhat useless for my purposes, where I need to distinguish flat areas from those with a notable incline.

So I have to find some approach to detect whether a pixel is on a flat surface without relying on surface normals. I tried taking the depth/distance value from the position sampler into account for that purpose, but it seems that it is no more precise than the normal sampler. So what other options do I have in this case, if any?

#10 jameszhao00   Members   -  Reputation: 267


Posted 07 April 2012 - 05:08 PM

If your engine is giving you flat-out incorrect normals, you can also compute them in the pixel shader with ddx/ddy (the partial derivative, i.e. the 'change' of a value with respect to viewport X and Y).

To get world-space normals, try the following:

float3 v0 = ddx(worldPos); // worldPos = the pixel's world-space position
float3 v1 = ddy(worldPos); // derivative along the other screen axis
float3 normal = normalize(cross(v0, v1));

You've got to decide what you want to do at edges (and inside dynamic branches, where derivatives are undefined). Also be careful of the normal's direction: the sign of the cross product depends on the argument order.

#11 Meltac   Members   -  Reputation: 277


Posted 08 April 2012 - 02:38 PM

Wow, that sounds interesting. I've thought of something like that, but didn't know the ddx/ddy functions. I guess that will help me out, though it will take me some trial-and-error cycles.

Thanks for your help, guys!

#12 kauna   Crossbones+   -  Reputation: 2142


Posted 10 April 2012 - 01:47 AM

Can you verify the surface format of the normal texture? There are lots of ways to store inaccurate normals.

Also, I'm not 100% convinced that the matrix you used for the transform is the correct one. Typically the mView matrix is used for world-to-view space transform and what you need is the inverse of that matrix.

Cheers!

#13 Meltac   Members   -  Reputation: 277


Posted 12 April 2012 - 09:16 AM

> Can you verify the surface format of the normal texture? There are lots of ways to store inaccurate normals.

How would I verify the surface format? I only see what I get when sampling some position, normal, or color value, but what "format" would it have?

> Also, I'm not 100% convinced that the matrix you used for the transform is the correct one. Typically the mView matrix is used for the world-to-view space transform, and what you need is the inverse of that matrix.

I also thought of that. Given that my engine doesn't provide that inverse matrix, is there a way to compute it from the mView matrix?
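If the view matrix contains only rotation and translation (no scaling), a cheap shortcut is available: the inverse of a pure rotation matrix is its transpose. Assuming that holds for xRay's matrix (mView here is a placeholder name for the engine's world-to-view matrix), the view-to-world transform for normals could be sketched as:

```hlsl
// For a rigid transform, the transpose of the 3x3 rotation part
// equals its inverse, so no full matrix inversion is needed.
float3x3 viewToWorld = transpose((float3x3)mView);
float3 worldNormal = normalize(mul(viewSpaceNormal, viewToWorld));
```

Note that with HLSL's mul, mul(v, transpose(M)) equals mul(M, v), so swapping the argument order is another way to get the same effect.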



