How to determine if a surface is facing the camera


byhisdeeds    122
I am rendering a cube and wish to limit the drawing to those sides facing the camera. I figure that for each face I need to compute the normal, using any three vertices of the face. I think I must then take the dot product of the normal and the view direction from the camera, which should give me the angle between the view direction and the face of the cube. However, when I try this method the numbers don't look right. Can anyone tell me where I'm going wrong?

John
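A minimal sketch of that test in Java (plain float[] vectors; the class and method names are just illustrative). Note that, as later replies point out, dotting against the camera's forward direction is only valid for an orthographic view; for a perspective view the camera-to-vertex vector should be used instead.

[code]
// Sketch: face normal from three vertices, then a sign test.
// Assumes counter-clockwise winding when viewed from outside the cube.
final class BackfaceSketch {
    // n = (b - a) x (c - a)
    static float[] faceNormal(float[] a, float[] b, float[] c) {
        float[] u = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
        float[] v = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
        return new float[] {
            u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]
        };
    }

    // Negative dot product: the normal points back toward the camera,
    // i.e. the face is front facing (orthographic view only).
    static boolean isFrontFacing(float[] normal, float[] viewDir) {
        return normal[0] * viewDir[0]
             + normal[1] * viewDir[1]
             + normal[2] * viewDir[2] < 0.0f;
    }
}
[/code]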

tts1980    146
If I remember correctly, if the dot product of the face normal and the camera's forward vector (assuming both are normalized) is negative, the face is facing the camera, and anything positive means it is facing away. Hope this helps.

Just in case you were wondering, the dot product gives you the cosine of the angle, not the angle itself.
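For completeness, a small sketch of recovering the angle itself (names illustrative; the clamp guards against rounding error before acos):

[code]
// cos(theta) = dot(p, q) / (|p| * |q|)
final class AngleSketch {
    static double angleBetween(float[] p, float[] q) {
        double dot = p[0] * q[0] + p[1] * q[1] + p[2] * q[2];
        double lp = Math.sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]);
        double lq = Math.sqrt(q[0] * q[0] + q[1] * q[1] + q[2] * q[2]);
        double c = Math.max(-1.0, Math.min(1.0, dot / (lp * lq)));
        return Math.acos(c); // radians
    }
}
[/code]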

Thuan

TheAdmiral    1122
Your method is fine. Show us your implementation if you aren't getting the right results. You didn't say so, but I assume that you're writing some kind of software renderer, because otherwise back-face culling has been hardware accelerated for years now, and attempting to do it yourself would be a criminal waste of CPU time.

Trenki    345
Backface culling is done by the hardware, so you don't have to do it yourself. If you are writing a software renderer, backface culling is normally done in screen space. This means you need neither the view direction nor the normal of the face (the normal would even give inconsistent results under a perspective projection). You simply determine whether a triangle is wound clockwise or counter-clockwise by examining the vertices themselves. In the end this boils down to a single cross product.
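A minimal sketch of that screen-space winding test, assuming the vertices have already been projected to 2D screen coordinates (which sign counts as front facing depends on your winding convention and on whether y points up or down):

[code]
// Sketch: screen-space winding test via the 2D cross product.
// The value is the z component of (b - a) x (c - a), i.e. twice the
// signed area of the triangle.
final class WindingSketch {
    static boolean isCounterClockwise(float ax, float ay,
                                      float bx, float by,
                                      float cx, float cy) {
        float cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
        return cross > 0.0f; // sign convention: y up, x right
    }
}
[/code]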

Stonemonkey    142
I use the dot product and don't have any problems with the results, except when the object is scaled, so I avoid that or pre-scale the object. At the moment I transform the camera to object space and do the backface culling there, so I can avoid unnecessary vertex (EDIT: and normal) transforms.

Stonemonkey.

byhisdeeds    122
Yes, I'm software rendering with LOD. At a certain point I iterate through all the quads and cull those that are outside the view frustum. I also wish to ignore those that are facing away from the view direction. I expect that there will be some at various angles to the view direction, and those I will need to render. However, I could still drop some percentage of those that are really facing away, which would reduce my rendering load.

I had forgotten to normalize the vectors, so maybe that was why the numbers were looking strange.

If anybody has any ideas about how I could get what I want, I'd be grateful.

John

Stonemonkey    142
The sign of the result of a dot product is the same whether or not the vectors are normalised. If you're pre-calculating the normals of tris/quads then I'd suggest normalising those, but the camera->vertex vector you use for the backface culling doesn't need to be normalised unless you're going to do something like calculating the specular component for lighting.

What you need is to have one vertex transformed to camera space, and to transform the normal vector to camera space as well. The sign of the dot product of those two will tell you whether the face is facing the camera or not.
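A minimal sketch of that camera-space test (names illustrative); the key point is that in camera space the eye sits at the origin, so the camera->vertex vector is simply the transformed vertex itself:

[code]
// Sketch: backface test in camera space.
// 'vertexCam' is any vertex of the face after the modelview transform;
// 'normalCam' is the face normal transformed by the same matrix
// (rotation part only, no translation).
final class CameraSpaceSketch {
    static boolean facesCamera(float[] vertexCam, float[] normalCam) {
        float d = vertexCam[0] * normalCam[0]
                + vertexCam[1] * normalCam[1]
                + vertexCam[2] * normalCam[2];
        return d < 0.0f; // negative: the normal points back toward the eye
    }
}
[/code]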

Stonemonkey.

byhisdeeds    122
When you say one vertex, and the normal of the face, transformed to camera space, what do you mean? I keep the values for the faces fixed in the range -1 to +1 (faces of a cube projected onto a sphere). I have a camera that uses quaternions to manage the view direction and orientation, which then calls the OpenGL gluLookAt(..) function to set the modelview matrix, which I save. My projection matrix is left as the identity matrix.

Aren't the face and the view direction vector already in camera space?

John
P.S. Please forgive me if my questions are somewhat daft.


Stonemonkey    142
Ah, sorry, I thought you were software rendering. Using the view vector (the direction the camera is pointing) for the test will give errors, as Trenki said; it has to be done using the camera->vertex vector.

What you could do is transform the camera into object space, calculate the dot product of the quad normal and the camera->vertex vector (using any vertex of the quad), and test the sign of the result.
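A minimal sketch of that object-space version, assuming the camera position has already been transformed into the object's local space (names illustrative):

[code]
// Sketch: backface test in object space.
// 'cameraObj' is the camera position in the object's local space;
// 'vertex' is any vertex of the quad and 'normal' its face normal.
final class ObjectSpaceSketch {
    static boolean isBackFacing(float[] cameraObj, float[] vertex, float[] normal) {
        // camera->vertex vector
        float vx = vertex[0] - cameraObj[0];
        float vy = vertex[1] - cameraObj[1];
        float vz = vertex[2] - cameraObj[2];
        // positive: the normal points away from the camera
        return vx * normal[0] + vy * normal[1] + vz * normal[2] > 0.0f;
    }
}
[/code]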

I doubt there'd be any advantage in doing that over letting OpenGL/the hardware take care of the backface culling, though.

Stonemonkey.

byhisdeeds    122
I take it that by the camera vertex you mean the position of the camera. I used the dot product of the quad normal and the camera position, and this seems to give good results.

This helps me in my rendering because when I need to refine my mesh and load textures, I can filter out those quads that can be ignored, which speeds things up and is easier on memory.

Trenki    345
It is USELESS to do your own backface culling when you are going to use a hardware renderer through OpenGL or Direct3D. You won't optimize anything; you'll only make things worse. And if you were to program a software renderer, you would implement it in SCREEN SPACE, which is the most efficient way.

The only thing you could do for a static scene is to remove any geometry that you absolutely know the camera will never see.

byhisdeeds    122
Maybe I didn't make myself clear. As you put it:

"The only thing you could do for a static scene is to remove any geometry that you absolutely know the camera will never see."

That's what I'm trying to do.

Trenki    345
OK, but then the thread's title is misleading, and backface culling is not the correct term either. You probably can't even assume the camera to be static, so a completely different algorithm is required.

byhisdeeds    122
Not sure I follow, so let me start from scratch. I have a mesh of quads which completely covers a cube and is projected onto a sphere for rendering. A globe, if you will. When I move the camera towards the globe mesh, I refine the mesh quads (splitting quads) based on their estimated screen size when rendered. Then I cull the mesh quads against the viewing frustum, to cut down the number of quads that I must load textures for and render.

I wish to further cull the mesh quads, dropping those that are back facing (maybe that's the wrong term) and will never be seen.
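A minimal sketch of what such a pass might look like, assuming each quad stores one vertex and an outward face normal in the same space as the camera position (the Quad type and all names here are hypothetical):

[code]
import java.util.ArrayList;
import java.util.List;

// Sketch: keep only quads whose outward normal points toward the camera.
final class QuadCullSketch {
    static final class Quad {
        float[] vertex; // any one vertex of the quad
        float[] normal; // outward face normal
    }

    static List<Quad> frontFacing(List<Quad> quads, float[] cameraPos) {
        List<Quad> kept = new ArrayList<>();
        for (Quad q : quads) {
            // camera->vertex vector
            float dx = q.vertex[0] - cameraPos[0];
            float dy = q.vertex[1] - cameraPos[1];
            float dz = q.vertex[2] - cameraPos[2];
            if (dx * q.normal[0] + dy * q.normal[1] + dz * q.normal[2] < 0.0f) {
                kept.add(q); // facing the camera; still subject to frustum culling
            }
        }
        return kept;
    }
}
[/code]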

John

Stonemonkey    142
Sorry about this being slightly off the original topic:

[quote="Trenki"]
And if you would program a software renderer you would implement it in SCREEN SPACE which is the most efficient way.
[/quote]

I do it in object space, and all that's required is to transform the camera to object space; then for each poly a dot product determines whether it's front or back facing, meaning no transforms are done for the vertices of backfacing polys. How would doing it in screen space be more efficient?

byhisdeeds:
I didn't say 'camera vertex', I said 'camera->vertex vector', by which I mean the vector from the camera coordinates to the coordinates of one of the vertices of the poly (both must be in the same space).

Stonemonkey.

TheAdmiral    1122
Trenki is asking whether this culling is to be done on a regular basis or as a precomputation pass. If the mesh and camera never move (a static scene), then it may be worth doing, but if either moves then the cull's results will be invalidated each frame, and you'd end up doing a colossal amount of work on the CPU that really belongs with the GPU.

Under no realistic circumstances is it computationally beneficial to backface-cull your own primitives when the GPU can do it for you.

But just how many quads are we talking about? Unless the number of back-facing primitives is on the order of thousands, it's not even worth thinking about wasting valuable CPU time on them. And even then, a per-primitive approach will bring the program to its knees.

DobarDabar2    127
Hi,

I assume you are doing labs with a software renderer. Transform the object into camera space before transforming with the projection matrix. Then the view vector is simply (0,0,1) (depending on the handedness of your system). You don't have to normalize the cross product of the edges; just look at the sign of the dot product: if it is positive (the vectors point in the same direction), the face is "looking" away from the camera. Make sure you order your vertices in a consistent clockwise or counter-clockwise order.
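In that setup the dot product with (0,0,1) collapses to just the z component of the unnormalized face normal; a minimal sketch (the sign convention depends on your winding order and handedness):

[code]
// Sketch: with the view vector fixed at (0,0,1) in camera space, only
// the z component of the edge cross product matters.
final class ZSignSketch {
    static boolean isBackFacing(float[] a, float[] b, float[] c) {
        // z component of (b - a) x (c - a)
        float nz = (b[0] - a[0]) * (c[1] - a[1])
                 - (b[1] - a[1]) * (c[0] - a[0]);
        return nz > 0.0f; // positive: normal points along +z, away from the eye
    }
}
[/code]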

Hope this helps. Regards.

byhisdeeds    122
The culling is done whenever the camera moves. The number of quads can be in the couple-of-thousands range. If I can trim this based on back-facing quads, then I save time and memory on loading textures and on further refining the mesh (back-facing quads don't need to be refined).

John

Einstone    126
If you are using an orthographic camera view, you can use the camera's direction to determine whether a face is facing the camera.
But if you are using a perspective camera view, the camera's direction cannot be used. Instead, you can use the vector formed by the camera's eye point and the center point of the face (or simply any vertex of the face).
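A small sketch contrasting the two cases (names illustrative):

[code]
// Sketch: orthographic vs. perspective facing tests.
final class FacingSketch {
    static float dot(float[] p, float[] q) {
        return p[0] * q[0] + p[1] * q[1] + p[2] * q[2];
    }

    // Orthographic: all view rays are parallel, so the camera's
    // forward direction works for every face.
    static boolean frontFacingOrtho(float[] normal, float[] viewDir) {
        return dot(normal, viewDir) < 0.0f;
    }

    // Perspective: view rays diverge from the eye, so use the vector
    // from the eye to a point on the face instead.
    static boolean frontFacingPersp(float[] normal, float[] eye, float[] point) {
        float[] toFace = { point[0] - eye[0], point[1] - eye[1], point[2] - eye[2] };
        return dot(normal, toFace) < 0.0f;
    }
}
[/code]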


TheAdmiral    1122
Quote:
Original post by byhisdeeds
The culling is done whenever the camera moves. The number of quads can be in the couple-of-thousands range. If I can trim this based on back-facing quads, then I save time and memory on loading textures and on further refining the mesh (back-facing quads don't need to be refined).

If the camera is liable to move on a per-frame basis, then there's no way you'll benefit from culling each quad every frame. Some kind of spatial partitioning could be viable, but for a few thousand quads it simply isn't worth it.

Just how many textures are on this mesh anyway? Surely not enough to warrant a texture-paging algorithm. Loading textures on the fly is not an easy task to do smoothly. Unless there is something terribly complex about this mesh, you should create all of the textures at load time so you don't cripple performance when the camera brings new geometry into view.

If by 'texture loading' you mean fill rate wasted on invisible geometry, then you don't need to worry about that: backface culling eliminates all invisible geometry before the rasteriser goes anywhere near the data (so the texture isn't cached, let alone sampled). I don't mean to sound rude, but it sounds like you're optimising all the wrong things, in all the wrong ways.

We know there are a few thousand quads in the mesh. Tell us how many textures (and their sizes) feature on it, and how you plan to do the mesh refinement.

Your graphics card can process billions of vertices per second, and has a pixel throughput and caching mechanism that could blow your socks off. Unless there's something very special about your program that you aren't telling us, I doubt you can improve GPU performance by even the tiniest fraction by taking the load onto the CPU.

For the record, even if you could determine visibility of each quad without hurting the CPU, there would be no way to tell the card which ones to draw without either submitting them individually or recreating the vertex buffer (you are using a vertex buffer, right?) each frame. Both of these will slow you down tremendously.

byhisdeeds    122
The mesh represents terrain textures on a planetary scale, at say 1 m resolution. The textures are 128x128. Lots of quads. The refinement is handled frame to frame, based on which quads are in the view and which require refinement to keep the visible resolution acceptable. Frame-to-frame coherence is maintained, so only changes due to view refinement are rendered. As I fly in from space the mesh can get rather fine, and it contains both the quads in the view and those on the other side of the earth. I'm trying to exclude those on the other side of the earth, to save mesh memory and texture memory.

However, at certain viewing angles, say near to the surface and looking along it, back-facing quads will not be an issue. Here I will need to check that I don't discard viewable quads.

John

byhisdeeds    122
To visualise what I'm talking about, if anyone is running Linux (the Windows version has issues I'm trying to tie down), then try this link:

http://www.ejamaica.org/jroam/jroam.jnlp

When it loads (if it loads), use the mouse wheel to zoom in and out; grabbing the planet lets you drag it around. Please forgive the parts that don't work, whenever you encounter them.

John

