How to determine if a surface is facing the camera

20 comments, last by byhisdeeds 16 years, 4 months ago
I am rendering a cube and wish to limit the drawing to those sides facing the camera. I figure that for each face I need to compute the normal using any three vertices of the face. I think then that I must take the dot product of the normal and the view direction from the camera, which should give me the angle between the view direction and the face of the cube. However, when I try this method the numbers don't look right. Can anyone tell me where I'm going wrong?

John
[ jROAM ]
If I remember correctly, if the dot product of the face normal and the camera's forward vector (assuming both are normalized) is negative, the face is back-facing, and anything positive means it is facing the camera. Hope this helps.

Just in case you were wondering, dot product gives you the cosine of the angle not the angle itself.

Thuan
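As a minimal sketch of what Thuan means (illustrative Python, not from the thread; all names are made up):

```python
import math

def dot(a, b):
    # Component-wise dot product of two vectors.
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    # Scale a vector to unit length.
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def cos_angle(a, b):
    """For unit vectors, the dot product is the cosine of the angle
    between them -- not the angle itself. Use math.acos to get the angle."""
    return dot(normalize(a), normalize(b))
```

So perpendicular vectors give 0.0, parallel vectors give 1.0, and opposite vectors give -1.0; only the sign matters for a facing test.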
Your method is fine. Show us your implementation if you aren't getting the right results. You didn't say so, but I assume that you're writing some kind of software renderer 'cause otherwise, back-face culling has been hardware accelerated for years now, and attempting to do it yourself would be a criminal waste of CPU-time.
Ring3 Circus - Diary of a programmer, journal of a hacker.
Backface culling is done by the hardware, so you would not have to do this yourself. If you are writing a software renderer, backface culling is normally done in screen space. This means you wouldn't need the view direction, nor the normal of the face (that would even give inconsistent results when using a perspective projection). You would simply determine whether a triangle is clockwise or counterclockwise by examining the vertices themselves. In the end this boils down to a single cross product.
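The screen-space winding test described above can be sketched like this (illustrative Python; vertices are assumed to be (x, y) pairs after projection):

```python
def signed_area_2x(a, b, c):
    """Twice the signed area of triangle abc: the z component of the
    2D cross product (b - a) x (c - a)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def is_counterclockwise(a, b, c):
    # In a y-up screen coordinate system, a positive signed area
    # means the vertices wind counterclockwise; which winding counts
    # as "front-facing" is a convention you choose.
    return signed_area_2x(a, b, c) > 0
```

Because the test runs on the projected vertices, it stays consistent under perspective projection, unlike the normal-vs-view-direction test.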
I use the dot product and don't have any problems with the results, except when the object is scaled, so I avoid that or pre-scale the object. At the moment I transform the camera to object space and do the backface culling there, so I can avoid unnecessary vertex (EDIT: and normal) transforms.

Stonemonkey.
Yes I'm software rendering with LOD. At a certain point I iterate through all the quads, and cull those that are outside the view frustum. I wish also to ignore those that are facing away from the view direction. I expect that there will be some that are at various angles to the view direction, and these I will need to render. However I could still probably drop some percentage of those that are really facing away and that will reduce my rendering load.

I had forgotten to normalize the vectors, so maybe that was why the numbers were looking strange.

If anybody has any ideas of how I could get what I want, I'd be grateful.

John
[ jROAM ]
The sign of the result of a dot product should still be the same whether the vectors are normalised or not. If you're pre-calculating the normals of tris/quads then I'd suggest normalising them, but the camera-vertex vector you use for the backface culling doesn't need to be normalised unless you're going to be doing something like calculating the specular component for lighting.

What you need is to have one vertex transformed to camera space, and to transform the normal vector to camera space. The sign of the dot product of those will tell you whether it's facing the camera or not.

Stonemonkey.
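A minimal sketch of this camera-space test (illustrative Python; how the vertex and normal get into camera space is assumed to happen elsewhere):

```python
def dot(a, b):
    # Component-wise dot product of two vectors.
    return sum(x * y for x, y in zip(a, b))

def is_backfacing(vertex_cam, normal_cam):
    """vertex_cam: any vertex of the face in camera space. Since the camera
    sits at the origin of camera space, the vertex position doubles as the
    camera->vertex vector.
    normal_cam: the face normal in camera space.
    Positive dot product -> the normal points away from the camera."""
    return dot(vertex_cam, normal_cam) > 0
```

For example, with the camera looking down -z, a face at z = -5 whose normal points toward the camera (0, 0, 1) gives a negative dot product and is kept; the same face with normal (0, 0, -1) is culled.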
When you say one vertex, and the normal of the face, transformed to camera space, what do you mean? I keep the values for the faces fixed in the range -1 to +1 (faces of a cube projected onto a sphere). I have a camera that uses quaternions to manage the view direction and orientation, which then calls the OpenGL gluLookAt(..) function to set the modelview matrix, which I save. My projection matrix is left as the identity matrix.

Aren't the face and view direction vectors already in camera space?

John
P.S. Please forgive me if my questions are somewhat daft.


[ jROAM ]
Ah sorry, I thought you were software rendering. Using the view vector (direction the camera is pointing) for the test will give errors as Trenki said, it has to be done using the camera->vertex vector.

What could be done is to transform the camera into object space, calculate the dot product of the quad normal and the camera->vertex vector (using any vertex of the quad), and test the sign of the result.

I doubt there'd be any advantage in doing that over letting OpenGL/hardware take care of the backface culling, though.

Stonemonkey.
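The object-space variant described above could look something like this (illustrative Python; the step that inverse-transforms the camera position into the object's local space is assumed to happen elsewhere):

```python
def dot(a, b):
    # Component-wise dot product of two vectors.
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    # Component-wise vector subtraction a - b.
    return tuple(x - y for x, y in zip(a, b))

def quad_faces_camera(quad_vertex, quad_normal, camera_pos_object_space):
    """True if the quad faces the camera: the camera->vertex vector and
    the quad normal point in opposite directions (negative dot product).
    quad_vertex may be any vertex of the quad."""
    cam_to_vertex = sub(quad_vertex, camera_pos_object_space)
    return dot(cam_to_vertex, quad_normal) < 0
```

The appeal of doing it in object space is that the precomputed normals never need to be transformed; only the camera position is brought into local space once per object.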
I take it that by the camera->vertex vector you mean the vector from the position of the camera. I used the dot product of the quad normal and that vector, and this seems to give good results.

This helps my rendering because, when I need to refine my mesh and load textures, I can filter out the quads that can be ignored, which speeds things up and is easier on memory.
[ jROAM ]

