Converting screen-space normals to world-space normals

Hi everyone,

While building a deferred renderer, I'm trying to handle 3D objects that have no normals embedded in their vertex buffer. My approach is described below.

For these objects I don't touch normals at all in the vertex shader; I defer that work entirely to the pixel shader. Since this is a deferred renderer, the pixel shader exports depth, normals, albedo and various other things to a g-buffer in the conventional way. I obtain the screen-space normal of the pixel by using ddx and ddy on the Z component of the depth (the depth is made of two components, Z and W, and the "real depth" is Z / W). This is very fast and the results are accurate. But I can't manage to convert these screen-space normals back to world space. I tried several approaches:

First, I multiplied them by the inverse of my view-projection matrix. The result was not view-independent: as I moved the camera position or orientation, the normals changed, so they were not actually in world space.

Since a normal should have nothing to do with camera translation, I then tried to isolate the rotation component of the view-projection matrix, invert it, and multiply the screen-space normal by that. Same kind of problem: still view-dependent (although, as expected, the result no longer depended on camera translation).

I gave up on the inverse-matrix approach and tried a combination of vectors instead. If a screen-space normal is 100% "green" (i.e. 0 1 0), it is exactly the inverse of the camera vector. If it is 100% "red" (1 0 0), it is a vector I called A, the cross product of the camera vector and the up vector. If it is 100% "blue" (0 0 1), it is a vector I called B, the cross product of the camera vector and A. So the equation is: A = cross(vecCam, vecUp); B = cross(vecCam, A); worldspaceN = InvVecCam * screenspaceN.g + A * screenspaceN.r + B * screenspaceN.b; (written out as a shader snippet after this post).

Once again it didn't work; the result was view-dependent. I thought using the camera vector might be the mistake, so I replaced it with the vector from the pixel to the camera. The results were slightly different but still view-dependent.

I'm out of ideas. Can anyone spot an error in my attempts, or suggest a different, effective approach?

Thank you for reading.

Clément ELBAZ
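
For reference, here is the vector-combination attempt from the post above written out as a small HLSL function. vecCam, vecUp and screenspaceN are assumed to be the camera forward vector, the camera up vector and the g-buffer normal sample from the surrounding code; this is only a transcription of the (view-dependent) attempt, reproduced to make the question concrete:

    // Transcription of the vector-combination attempt described above (HLSL).
    // vecCam and vecUp are assumed to be the camera forward and up vectors;
    // "InvVecCam" is taken here as -vecCam (the "inverse of the camera vector").
    // As noted in the post, this reconstruction turned out to be view-dependent.
    float3 ReconstructWorldNormal_Attempt(float3 screenspaceN, float3 vecCam, float3 vecUp)
    {
        float3 A = cross(vecCam, vecUp);   // axis mapped to the red channel
        float3 B = cross(vecCam, A);       // axis mapped to the blue channel
        float3 invVecCam = -vecCam;        // axis mapped to the green channel
        return invVecCam * screenspaceN.g
             + A         * screenspaceN.r
             + B         * screenspaceN.b;
    }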

Have you tried interpolating the world-space position, using ddx/ddy on that, and then taking the cross product of the two results? That should be all you need to do.
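
A minimal HLSL sketch of that suggestion, assuming the vertex shader already passes a world-space position down to the pixel shader (the worldPos parameter name is hypothetical); depending on winding and screen-space conventions the cross-product order may need to be swapped:

    // Face normal in world space from an interpolated world-space position (HLSL).
    // ddx/ddy return the per-pixel deltas of worldPos across the screen;
    // their cross product is perpendicular to the surface, i.e. the normal.
    float3 ComputeWorldNormal(float3 worldPos)
    {
        float3 dpdx = ddx(worldPos);
        float3 dpdy = ddy(worldPos);
        // Swap the operands (or negate the result) if the normal comes out
        // flipped for your winding / screen-space y convention.
        return normalize(cross(dpdy, dpdx));
    }

Since the derivatives of a linearly interpolated position are constant across a triangle, this gives faceted (per-face) normals, the same limitation as deriving them from depth, but the result is already in world space with no back-projection required.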


