How to determine if a surface is facing the camera

Started by
20 comments, last by byhisdeeds 16 years, 4 months ago
It is USELESS to do your own backface culling when you are going to use a hardware renderer through OpenGL or Direct3D. You won't optimize anything; you'll only make it worse. And if you were to write a software renderer, you would implement it in SCREEN SPACE, which is the most efficient way.

The only thing you could do for a static scene is to remove any geometry that you absolutely know the camera will never see.
Maybe I didn't make myself clear. As you put it:

"The only thing you could do for a static scene is to remove any geometry that you absolutely know the camera will never see."

That's what I'm trying to do.
[ jROAM ]
Ok, but then the thread's title is misleading, and backface culling is not the correct term either. You probably can't even assume the camera is static, so a completely different algorithm is required.
Not sure I follow, so let me start from scratch. I have a mesh of quads which completely covers a cube and is projected to a sphere for rendering (a globe, if you will). When I move the camera towards the globe mesh, I refine the mesh quads (splitting quads) based on their estimated screen size when rendered. Then I cull the mesh down to the quads inside the viewing frustum, to cut down the number of quads that I must load textures for and render.

I wish to further cull the mesh quads based on those that are back facing (maybe that's the wrong term) and will never be seen.

John
[ jROAM ]
Sorry about this being slightly off the original topic:

Trenki said:

And if you would program a software renderer you would implement it in SCREEN SPACE which is the most efficient way.


I do it in object space, and all that's required is to transform the camera into object space; then a single dot product per poly determines whether it's front- or back-facing, which means no vertex transforms are done at all for back-facing polys. How would doing it in screen space be more efficient?

byhisdeeds:
I didn't say 'camera vertex', I said 'camera->vertex vector', by which I mean the vector from the camera coordinates to the coordinates of one of the vertices of the poly (both must be in the same space).

Stonemonkey.
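The object-space test Stonemonkey describes can be sketched as follows (a minimal illustration, assuming counter-clockwise winding; the `Vec3` helpers and function names are mine, not from the thread):

```c
typedef struct { float x, y, z; } Vec3;

static Vec3 v_sub(Vec3 a, Vec3 b) { return (Vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 v_cross(Vec3 a, Vec3 b) {
    return (Vec3){ a.y * b.z - a.z * b.y,
                   a.z * b.x - a.x * b.z,
                   a.x * b.y - a.y * b.x };
}
static float v_dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Returns 1 if the triangle (v0, v1, v2), wound counter-clockwise,
   faces the camera at cam_obj (the camera position already
   transformed into object space). Only the sign of the dot product
   matters, so nothing needs normalising. */
int faces_camera(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 cam_obj)
{
    Vec3 normal = v_cross(v_sub(v1, v0), v_sub(v2, v0));
    return v_dot(normal, v_sub(cam_obj, v0)) > 0.0f;
}
```

Here `v_sub(cam_obj, v0)` is exactly the camera->vertex vector (negated) mentioned above; only its sign relative to the face normal matters.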
Trenki is asking if this culling is to be done on a regular basis or as a pre-computational pass. If the mesh and camera never move (a static scene), then it may be worth doing, but if either does move then the cull's results will be invalidated each frame and so you'd end up doing a colossal amount of work on the CPU that really belongs with the GPU.

Under no realistic circumstances is it computationally beneficial to backface-cull your own primitives, when the GPU can do it for you.

But just how many quads are we talking about? Unless the number of back-facing primitives is on the order of thousands, it's not even worth thinking about wasting valuable CPU time on them. And even then, a per-primitive approach will bring the program to its knees.
Ring3 Circus - Diary of a programmer, journal of a hacker.
Hi,

I assume you are doing labs with a software renderer. Transform the object into camera space before transforming with the projection matrix. Then the view vector is (depending on the hand system) simply (0,0,1). You don't have to normalize the cross product of the edges; just look at the sign of its dot product with the view vector: if it is positive (they point in the same direction), the face is "looking" away from the camera. Make sure you order your vertices in a consistent clockwise or counter-clockwise order.

Hope this helps, Regards.
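With the view vector fixed at (0,0,1), the dot product with the unnormalised face normal reduces to just the z component of the cross product, so the test above collapses to one 2D expression. A sketch under those assumptions (the sign convention flips with handedness and winding order):

```c
typedef struct { float x, y, z; } Vec3;

/* Backface test for vertices already in camera space, view vector
   (0, 0, 1): dot(view, cross(e1, e2)) is simply the z component of
   the cross product, so no normalisation is needed. A positive sign
   means the normal points along +z, i.e. away from a camera looking
   down +z; whether that counts as "back" depends on your winding
   and handedness conventions. */
int is_backfacing(Vec3 v0, Vec3 v1, Vec3 v2)
{
    float cross_z = (v1.x - v0.x) * (v2.y - v0.y)
                  - (v1.y - v0.y) * (v2.x - v0.x);
    return cross_z > 0.0f;
}
```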
The culling is done whenever the camera moves. The number of quads can be in the couple-of-thousands range. If I can trim this down by removing back-facing quads, then I save time and memory on loading textures and on further refining the mesh (back-facing quads don't need to be refined).

John
[ jROAM ]
If you are using an orthographic camera view, you can use the camera's direction to determine whether the face is facing the camera.
But if you are using a perspective camera view, the camera's direction cannot be used. Instead, use the vector from the camera eye point to the center point of the face (or simply any vertex of the face).

[Edited by - Einstone on November 26, 2007 6:33:16 PM]
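Einstone's perspective-camera test might look like this in code. A minimal sketch: the struct, helper, and parameter names are illustrative, and it assumes the face normal and eye position are expressed in the same space.

```c
typedef struct { float x, y, z; } Vec3;

static float v_dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Perspective camera: the effective view direction differs per face,
   so test the vector from the eye to any point on the face (e.g. its
   center or any vertex) against the face normal. A non-negative dot
   product means the face points away from the eye. For an
   orthographic camera, the constant camera direction could be used
   in place of to_face. */
int backfacing_perspective(Vec3 point_on_face, Vec3 normal, Vec3 eye)
{
    Vec3 to_face = { point_on_face.x - eye.x,
                     point_on_face.y - eye.y,
                     point_on_face.z - eye.z };
    return v_dot(to_face, normal) >= 0.0f;
}
```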
Quote:Original post by byhisdeeds
The culling is done whenever the camera moves. The number of quads can be in the couple of thousands range. If I can trim this based on back facing quads then I save time and memory on loading textures, and further refining the mesh (those back facing quads don't need to be refined).

If the camera is liable to move on a per-frame basis, then there's no way you'll benefit from culling each quad each frame. Some kind of spatial partitioning could be viable, but for a few thousand quads, it simply isn't worth it.

Just how many textures are on this mesh anyway? Surely not enough to warrant a texture-paging algorithm. Loading textures on-the-fly is not an easy task to do smoothly. Unless there is something terribly complex about this mesh, you should create all of the textures at load-time so you don't cripple performance when the camera brings some new geometry into view.

If by 'texture loading' you mean fillrate wasted on invisible geometry, then you don't need to worry about that: backface culling eliminates all invisible geometry before the rasteriser goes anywhere near the data (so the texture isn't cached, let alone sampled). I don't mean to sound rude, but it sounds like you're optimising all the wrong things, all the wrong way.

We know there are a few thousand quads in the mesh. Tell us how many textures (and their sizes) feature on it, and how you plan to do the mesh refinement.

Your graphics card can process billions of vertices per second, and has a pixel throughput and caching mechanism that could blow your socks off. Unless there's something very special about your program that you aren't telling us, I doubt you can improve GPU performance by even the tiniest fraction by taking the load onto the CPU.

For the record, even if you could determine visibility of each quad without hurting the CPU, there would be no way to tell the card which ones to draw without either submitting them individually or recreating the vertex buffer (you are using a vertex buffer, right?) each frame. Both of these will slow you down tremendously.
Ring3 Circus - Diary of a programmer, journal of a hacker.

