backface culling

Today we use OpenGL, DirectX and even pixel and fragment shaders, so we have forgotten the ancient art [smile]

You are right!!! And I'm wrong [bawling] ...well, not totally wrong, so be nice.

The deciding quantity is the sign of the signed distance between the observer position (i.e. O = (0,0,0) after the modelview transform) and the polygon's plane.

So if the normal is

N = (B-A)^(C-A) in CCW notation

the plane is described by the implicit equation

N * P + d = 0

where d = - N * A ( A is one of the points in the plane )

Now, the signed distance from the origin to the plane is simply d = - N * A (up to the length of N, which doesn't affect the sign).

If this distance is positive the polygon is front-facing hence 'visible'

float d = -A * ((B - A) ^ (C - A));
bool back_faced = CCW ? d < 0 : d > 0;
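
For completeness, a rough self-contained sketch of the whole test in C++ (the Vec3 type and the helper functions here are just made up for illustration; vertices are assumed to be in eye space, listed CCW when viewed from the front):

struct Vec3 { float x, y, z; };

static Vec3  sub  (const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(const Vec3& a, const Vec3& b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot  (const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// A, B, C: triangle vertices in eye space (observer at the origin), CCW order
bool backFacing(const Vec3& A, const Vec3& B, const Vec3& C)
{
    Vec3  N = cross(sub(B, A), sub(C, A)); // geometric normal of the polygon plane
    float d = -dot(N, A);                  // value of the plane equation at the origin
    return d < 0.0f;                       // positive: front-facing, negative: back-facing
}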


This is what you explained. Thanks for the correction. [smile]
blizzard999, never mind! )))

I only know it because, once upon a time )), when I was a young boy, I wrote a complete software renderer. The engine was Z-buffer based and had lighting, texturing and environment mapping. It had no transparency support at all )))
And, of course, I implemented back-face culling. The very first implementation was the same one you described first ))) And it didn't work properly ))) So I had to think a little bit more )))

GL!
Whoa, this thread has too much misinformation!

In OpenGL, the culling is done in screen-space. That is, the vertices of the triangle are transformed into clip space and divided by w. Then, based on their winding, OpenGL determines which side of the face (front or back) you're looking at. It uses that result to do the culling if GL_CULL_FACE is enabled, or to determine which material parameters (front or back) will be used if you have two-sided lighting enabled. There are no normals involved or anything like that.

There is the EXT_CULL_VERTEX extension that allows you to cull vertices based on their normals (which you must supply as always). When all of a face's vertices are culled, the whole face is culled. You may gain some performance because vertices can be culled without going through the transformation to screen-space.
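
For reference, turning both features on from the application side looks roughly like this (classic fixed-function GL; GL_CCW front faces and GL_BACK culling are the defaults anyway, the explicit calls are just for clarity):

glEnable(GL_CULL_FACE);                          // discard faces based on their window-space winding
glFrontFace(GL_CCW);                             // counter-clockwise winding counts as front-facing (default)
glCullFace(GL_BACK);                             // cull back faces (default)

glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE); // back faces get the GL_BACK material and a flipped normal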
mikeman
I never said here that I know how GL does back-face culling )))
I just made a small correction to blizzard999's algorithm )))

Anyway, thnx for the info! Very informative!
If it doesn't use normals, then how does it determine their winding in screen space?

Quote:Original post by mikeman
In OpenGL, the culling is done in screen-space. [...] There are no normals involved or anything like that.


Mike C.
http://www.coolgroups.com/zoomer/
http://www.coolgroups.com/ez/
Quote:Original post by mike74
If it doesn't use normals, then how does it determine their winding in screen space?


it checks the sign of the z-component of the cross-product between the two edge-vectors relative to one of the vertices.

something like this:

float x0 = v[2].x - v[0].x;   // edge from v0 to v2, in window (screen) coordinates
float y0 = v[2].y - v[0].y;

float x1 = v[2].x - v[1].x;   // edge from v1 to v2
float y1 = v[2].y - v[1].y;

float cz = x0 * y1 - y0 * x1; // z-component of the cross product; same sign as the signed area

// OpenGL window coordinates have y pointing up, so a positive signed area means CCW
if (cz > 0.f) return GL_CCW;
else return GL_CW;
Quote:Original post by mikeman
In OpenGL, the culling is done in screen-space. [...] There are no normals involved or anything like that.


The same result can be achieved in eye space or in screen coordinates; however, as the algorithm shows, it is more efficient to compute it in eye space because you can cull a polygon right after the modelview transform.
You can also deduce which side is visible.
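
Just to sketch where that fits in a software pipeline (the helper names here are hypothetical; backFacing is the eye-space test sketched earlier in the thread):

// per-triangle, assuming hypothetical transformToEyeSpace() and projectAndRasterize() helpers
void drawTriangle(const Vec3& a0, const Vec3& b0, const Vec3& c0)
{
    // modelview transform only: positions are now in eye space
    Vec3 a = transformToEyeSpace(a0);
    Vec3 b = transformToEyeSpace(b0);
    Vec3 c = transformToEyeSpace(c0);

    // cull before spending any work on projection, clipping or rasterization
    if (backFacing(a, b, c))
        return;

    projectAndRasterize(a, b, c);
}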

Note that the normal we are talking about is the geometric normal of the polygon plane and not the normal(s) used for lighting.

In other words, if you use this algorithm (and it's now correct [smile]) you will see no difference from the result provided by GL...
It is also probable that different GL implementations (video cards, drivers, ...) use different algorithms to produce the same result. Why not?
Quote:
In other words, if you use this algorithm (and it's now correct) you will see no difference from the result provided by GL...
It is also probable that different GL implementations (video cards, drivers, ...) use different algorithms to produce the same result. Why not?


It's not a matter of implementation. The OpenGL specs clearly state otherwise:

Quote:
The first step of polygon rasterization is to determine if the polygon is back facing or front facing. This determination is made by examining the sign of the area computed by equation 2.7 of section 2.13.1 (including the possible reversal of this sign as indicated by the last call to FrontFace). If this sign is positive, the polygon is frontfacing; otherwise, it is back facing. This determination is used in conjunction with the CullFace enable bit and mode value to decide whether or not a particular polygon is rasterized.


and in section 2.13.1:

Quote:
The selection between back color and front color depends on the primitive of which the vertex being lit is a part. If the primitive is a point or a line segment, the front color is always selected. If it is a polygon, then the selection is based on the sign of the (clipped or unclipped) polygon's signed area computed in window coordinates.
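
(For context, the signed area in equation 2.7 is essentially the shoelace formula evaluated on the window coordinates of the vertices, something like a = 1/2 * sum over i of ( x_i * y_{i+1} - x_{i+1} * y_i ) with the index taken modulo the vertex count; its sign flips when the winding flips.)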


Of course using normals will have the same result; it's just not the behaviour the OpenGL specs define. As I said, it can be accomplished through the EXT_CULL_VERTEX extension mentioned above.

Also note that the "normals algorithm" would require the normals to be supplied somehow. If the user had to supply precalculated normals, that would mean that backface culling requires normals in order to work right, which definitely breaks the OpenGL interface. If we assume that the card implicitly calculated the normals for each and every face, I don't think it would be faster, since it would have to do it every time a primitive was rendered. Usually, calculating normals is an expensive procedure and you don't do it on the fly.

What's that stuff about back color and front color? I thought it would just use the glColor3f colors if the polygon is visible. Otherwise, it doesn't show it.

Mike C.
http://www.coolgroups.com/zoomer/
http://www.coolgroups.com/ez/
Quote:Original post by mike74
What's that stuff about back color and front color? I thought it would just use the glColor3f colors if the polygon is visible. Otherwise, it doesn't show it.


Haven't you seen the first parameter of glMaterial, which is GL_FRONT, GL_BACK or GL_FRONT_AND_BACK? You can specify different materials for front and back faces. If you have two-sided lighting enabled (using glLightModel), then OpenGL uses the front or back material in the lighting equation based on which side of the polygon you're seeing (and reverses the normal for back faces).
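
Something along these lines (fixed-function GL; the colour values are just placeholders):

GLfloat frontDiffuse[] = { 1.0f, 0.0f, 0.0f, 1.0f };  // red on the front side
GLfloat backDiffuse[]  = { 0.0f, 0.0f, 1.0f, 1.0f };  // blue on the back side

glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);      // light back faces with the back material
glMaterialfv(GL_FRONT, GL_DIFFUSE, frontDiffuse);
glMaterialfv(GL_BACK,  GL_DIFFUSE, backDiffuse);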

