mike74

OpenGL backface culling


I was just wondering if anyone knows exactly how OpenGL performs backface culling. I'm guessing that it looks at the 2D projection and checks whether the points are in clockwise or counterclockwise order. If they're counterclockwise, and it's in GL_CW mode, then I think it removes the face. What is the fastest way to tell whether a set of points is clockwise or counterclockwise, though? Thanks. mike http://www.coolgroups.com/

I don't know but my guess would be to calculate something similar to a normal for the points and determine whether the resulting vector is pointing towards or away from the viewer.

To add to your question: performance-wise, is there any difference between switching between back and front culling, versus leaving the cull mode as it is and switching between GL_CW and GL_CCW? Apart from having to specify the vertices in a different order, I mean.
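(For concreteness, here's a rough sketch of the two alternatives I mean; the glCullFace/glFrontFace calls are standard OpenGL, the wrapper functions are just for illustration.)

#include <GL/gl.h>

/* Alternative 1: keep the winding convention, flip which side gets culled. */
void cull_by_face(void)
{
    glEnable(GL_CULL_FACE);
    glFrontFace(GL_CCW);   /* counterclockwise = front (the default)        */
    glCullFace(GL_FRONT);  /* cull the front faces instead of the back ones */
}

/* Alternative 2: keep culling back faces, flip what counts as "front". */
void cull_by_winding(void)
{
    glEnable(GL_CULL_FACE);
    glFrontFace(GL_CW);    /* clockwise = front                 */
    glCullFace(GL_BACK);   /* cull the back faces (the default) */
}

Both end up culling the same (counterclockwise) triangles; the question is whether one of them is cheaper to switch at runtime.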

Your guess is correct. As far as I know, that's exactly how OpenGL performs backface culling.
As for your second question: I believe there is no other way than to arrange your vertices in the desired order yourself.


//Counterclockwise (as seen from the default viewpoint looking down -z, with y up)
glBegin(GL_TRIANGLES);
glVertex3f(-1.0f,-1.0f,0);
glVertex3f( 1.0f,-1.0f,0);
glVertex3f( 1.0f, 1.0f,0);
glEnd();

//Clockwise
glBegin(GL_TRIANGLES);
glVertex3f(-1.0f,-1.0f,0);
glVertex3f( 1.0f, 1.0f,0);
glVertex3f( 1.0f,-1.0f,0);
glEnd();



Greets

Chris

Guest Anonymous Poster
Backface culling is done in hardware, not by OpenGL. I assume the rasterizer can figure out the orientation when it's calculating deltas and such anyway, in preparation for filling the triangle.

DrewGreen: No, there is no speed difference.

Well, as far as I know, the algorithm is the same as the one used for lighting: it computes the triangle's normal vector, then the viewer vector, and then the dot product between them. If the angle between them is greater than 90 degrees, the triangle is invisible to the user.

If this is true, then why is it that OpenGL makes you supply normals yourself for lighting but computes them for you for backface culling?

Quote:
Original post by meeshoo
Well, as far as I know, the algorithm is the same as the one used for lighting: it computes the triangle's normal vector, then the viewer vector, and then the dot product between them. If the angle between them is greater than 90 degrees, the triangle is invisible to the user.


Quote:
Original post by mike74
If this is true, then why is it that OpenGL makes you supply normals yourself for lighting but computes them for you for backface culling?

Quote:
Original post by meeshoo
Well, as far as I know, the algorithm is the same as the one used for lighting: it computes the triangle's normal vector, then the viewer vector, and then the dot product between them. If the angle between them is greater than 90 degrees, the triangle is invisible to the user.


For flexibility, and because a single triangle can have three different normals for surface approximation, all pointing in different directions.

The winding order of the vertices - in the context of graphics APIs - is almost exclusively used for backface culling.

The algorithm is very simple.

Given a polygon A, B, C, ... you can compute the normal from the first three points:

normal = ( B-A ) ^ ( C-A )    // ^ denotes the cross product


The observer always looks down the negative z direction (that is how things stand after the modelview transform).

Now, if the polygon is facing the observer, the scalar product

normal * observer_direction

is negative; otherwise it is positive.

// since you only need the z component you can 'optimize' the code
bool back_faced = CCW ? ((B-A)^(C-A)).z > 0 : ((B-A)^(C-A)).z < 0;


It's performed by OpenGL before rasterization.

EDIT: I had confused the sign in back_faced (as I wrote before, if the sign is < 0 the polygon is front-facing...)

[Edited by - blizzard999 on September 6, 2005 4:05:34 AM]

Quote:
Original post by blizzard999
The algorithm is very simple.
Given a polygon A, B, C, ... you can compute the normal from the first three points.
The observer always looks down the negative z direction (that is how things stand after the modelview transform).
Now, if the polygon is facing the observer, the scalar product
is negative; otherwise it is positive.


I'm sorry, but you are absolutely wrong )))
Imagine the situation: you have a wide viewport with a horizontal field of view of about 120-150 degrees. Imagine a cube in front of you, with one of its faces pointing straight at you. Only the cube's front side is visible to you; the others get culled by your algorithm because they have a zero dot product (the back side has a negative dot product, so it is culled anyway).
That is the right situation and the right result, but only until the cube begins to move to the right. It moves, and moves... the dot products stay the same... and now we must see its left side!!! Imagine it, we MUST see it! But it is still culled by your algorithm.

The only correct method is to build the plane containing the 3 triangle vertices and check whether the viewpoint lies in the positive half-space of that plane.

So, the algorithm is:
---------------

A, B, C - points of the triangle after the modelview transformation

Calculate the triangle's plane:
N = (B - A) x (C - A)
d = A dot N

Now, the equation of the plane is: N dot X - d = 0
The equation of the positive half-space is: N dot X - d > 0


So, put X = (0,0,0) in there.
Now we need -d > 0 for the test to pass, i.e. d < 0 - the triangle is visible.

So, if A dot ((B - A) x (C - A)) >= 0, the triangle gets culled.
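
A minimal sketch of that test in C, assuming eye-space (post-modelview) vertices listed counterclockwise when seen from the front; the vec3 type and helpers are mine, not something OpenGL provides:

#include <stdbool.h>

typedef struct { float x, y, z; } vec3;

static vec3  vsub(vec3 a, vec3 b) { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static float vdot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static vec3  vcross(vec3 a, vec3 b)
{
    vec3 r = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return r;
}

/* A, B, C: triangle vertices after the modelview transform; the eye sits at the origin. */
bool is_back_facing(vec3 A, vec3 B, vec3 C)
{
    vec3  N = vcross(vsub(B, A), vsub(C, A)); /* plane normal N = (B - A) x (C - A)           */
    float d = vdot(A, N);                     /* plane offset, so the plane is N dot X = d     */
    return d >= 0.0f;                         /* the origin is not in the positive half-space  */
}

Unlike the z-sign shortcut above, this stays correct for faces near the edge of a wide field of view, which is exactly the situation with the moving cube.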

Today we use OpenGL, DirectX and even pixel and fragment shaders, so we have forgotten the ancient art [smile]

You are right!!! And I'm wrong [bawling]... but be nice, not totally wrong.

The deciding factor is the sign of the distance between the observer position (i.e. O = (0,0,0) after the transform) and the polygon's plane.

So if the normal is

N = (B-A)^(C-A) in CCW notation

the plane is described by the implicit equation

N * P + d = 0

where d = - N * A (A is one of the points in the plane).

Now, the signed distance from the origin to the plane is simply... d = - N * A (up to division by |N|, which doesn't change the sign).

If this distance is positive, the polygon is front-facing, hence 'visible'.


float d = - A * ( (B-A)^(C-A) );   // * = dot product, ^ = cross product
bool back_faced = CCW ? d < 0 : d > 0;


This is what you explained. Thanks for the correction. [smile]

blizzard999, never mind! )))

I only know this because, once upon a time )), when I was a young boy, I wrote a complete software renderer. The engine was Z-buffer based and had lighting, texturing and environment mapping. It had no transparency support at all )))
And, of course, I implemented back-face culling. The very first version was the same one you described first ))) and it didn't work properly ))), so I had to think a little bit more )))

GL!

Whoa, this thread has too much misinformation!

In OpenGL, the culling is done in screen space. That is, the vertices of the triangle are transformed into clip space and divided by w. Then, based on their winding, OpenGL determines which side of the face (front or back) you're looking at. It uses that result to do the culling if GL_CULL_FACE is enabled, or to determine which material parameters (front or back) will be used if you have two-sided lighting enabled. There are no normals involved or anything like that.

There is the EXT_CULL_VERTEX extension that allows vertices to be culled based on their normals (which you must supply, as always). When all of a face's vertices are culled, the whole face is culled. You may gain some performance because vertices can be culled without going through the transformation to screen space.
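
The screen-space winding test described above boils down to the sign of the polygon's signed area in window coordinates; here's a rough sketch of that computation (the type and function names are mine, not OpenGL's):

/* Window-space vertex; y is assumed to point up, as in OpenGL window coordinates. */
typedef struct { float x, y; } vec2w;

/* Signed area of a polygon (the "shoelace" formula). A positive result means
   counterclockwise winding, i.e. front-facing under the default glFrontFace(GL_CCW). */
float signed_area(const vec2w *v, int n)
{
    float a = 0.0f;
    for (int i = 0; i < n; ++i) {
        int j = (i + 1) % n;                     /* next vertex, wrapping around */
        a += v[i].x * v[j].y - v[j].x * v[i].y;  /* contribution of edge i -> j  */
    }
    return 0.5f * a;
}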

mikeman
I never said here that I know how GL does back-face culling )))
I just made some corrections to blizzard999's algorithm )))

Anyway, thanks for the info! Very informative!

If it doesn't use normals, then how does it determine their winding in screen space?

Quote:
Original post by mikeman
Whoa, this thread has too much misinformation!

In OpenGL, the culling is done in screen space. That is, the vertices of the triangle are transformed into clip space and divided by w. Then, based on their winding, OpenGL determines which side of the face (front or back) you're looking at. It uses that result to do the culling if GL_CULL_FACE is enabled, or to determine which material parameters (front or back) will be used if you have two-sided lighting enabled. There are no normals involved or anything like that.

There is the EXT_CULL_VERTEX extension that allows vertices to be culled based on their normals (which you must supply, as always). When all of a face's vertices are culled, the whole face is culled. You may gain some performance because vertices can be culled without going through the transformation to screen space.


Quote:
Original post by mike74
If it doesn't use normals, then how does it determine their winding in screen space?


It checks the sign of the z component of the cross product of the two edge vectors taken relative to one of the vertices.

something like this:

/* v[0..2] are the triangle's vertices in window coordinates (y pointing up). */
float x0 = v[2].x - v[0].x;
float y0 = v[2].y - v[0].y;

float x1 = v[2].x - v[1].x;
float y1 = v[2].y - v[1].y;

/* z component of (v2 - v0) x (v2 - v1); positive means counterclockwise on screen */
float cz = x0 * y1 - y0 * x1;

if (cz > 0.f) return GL_CCW;
else return GL_CW;

Quote:
Original post by mikeman
Whoa, this thread has too much misinformation!

In OpenGL, the culling is done in screen space. That is, the vertices of the triangle are transformed into clip space and divided by w. Then, based on their winding, OpenGL determines which side of the face (front or back) you're looking at. It uses that result to do the culling if GL_CULL_FACE is enabled, or to determine which material parameters (front or back) will be used if you have two-sided lighting enabled. There are no normals involved or anything like that.

There is the EXT_CULL_VERTEX extension that allows vertices to be culled based on their normals (which you must supply, as always). When all of a face's vertices are culled, the whole face is culled. You may gain some performance because vertices can be culled without going through the transformation to screen space.


The same result can be achieved in eye space or in screen coordinates; however, as the algorithm shows, it is more efficient to compute it in eye space because you can cull a polygon right after the modelview transform.
You can also deduce which side is visible.

Note that the normal we are talking about is the geometric normal of the polygon's plane and not the normal(s) used for lighting.

In other words, if you use this algorithm (and it's correct now [smile]) you will see no difference from the result provided by GL...
It's also probable that different GL implementations (video cards, drivers, ...) use different algorithms to produce the same result. Why not?

Quote:

In other words, if you use this algorithm (and it's correct now) you will see no difference from the result provided by GL...
It's also probable that different GL implementations (video cards, drivers, ...) use different algorithms to produce the same result. Why not?


It's not a matter of implementation. The OpenGL specs clearly state otherwise:

Quote:

The first step of polygon rasterization is to determine if the polygon is
back facing or front facing. This determination is made by examining the
sign of the area computed by equation 2.7 of section 2.13.1 (including the
possible reversal of this sign as indicated by the last call to FrontFace). If
this sign is positive, the polygon is frontfacing; otherwise, it is back facing.
This determination is used in conjunction with the CullFace enable bit and
mode value to decide whether or not a particular polygon is rasterized.


and in section 2.13.1:

Quote:

The selection between back color and front color depends on the primitive
of which the vertex being lit is a part. If the primitive is a point or a line
segment, the front color is always selected. If it is a polygon, then the
selection is based on the sign of the (clipped or unclipped) polygon's signed
area computed in window coordinates.


Of course using normals would give the same result; it's just not the behaviour the OpenGL specs define. As I said, it can be accomplished through the extension mentioned above.

Also note that the "normals algorithm" would require the normals to be supplied somehow. If the user had to supply precalculated normals, that would mean backface culling requires normals in order to work right, which definitely breaks the OpenGL interface. If we assume the card implicitly calculated the normals for each and every face, I don't think it would be faster, since it would have to do it every time a primitive was rendered. Usually, calculating normals is an expensive procedure and you don't do it on the fly.

What's that stuff about back color and front color? I thought it would just use the glColor3f colors if the polygon is visible. Otherwise, it doesn't show it.

Quote:
Original post by mike74
What's that stuff about back color and front color? I thought it would just use the glColor3f colors if the polygon is visible. Otherwise, it doesn't show it.


Haven't you seen the first parameter of glMaterial, which is GL_FRONT, GL_BACK or GL_FRONT_AND_BACK? You can specify different materials for front and back faces. If you have two-sided lighting enabled (using glLightModel), then OpenGL uses the front or back material in the lighting equation based on which side of the polygon you're seeing (and reverses the normal for back faces).
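
A minimal fixed-function sketch of that setup (the material colors are just placeholder values):

#include <GL/gl.h>

void setup_two_sided_lighting(void)
{
    /* Placeholder material colors: reddish front faces, bluish back faces. */
    GLfloat front_diffuse[] = { 0.8f, 0.2f, 0.2f, 1.0f };
    GLfloat back_diffuse[]  = { 0.2f, 0.2f, 0.8f, 1.0f };

    glDisable(GL_CULL_FACE);                          /* back faces must survive to be lit */
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);  /* evaluate lighting for both sides  */

    glMaterialfv(GL_FRONT, GL_DIFFUSE, front_diffuse); /* used when the front side is seen */
    glMaterialfv(GL_BACK,  GL_DIFFUSE, back_diffuse);  /* used when the back side is seen  */
}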

I did not find the specification, but I found the GL state diagram and... yes... the culling is performed after screen projection.
Now I know why GL, before HW acceleration, was so crappy [smile]

Quote:
Original post by blizzard999
Today we use OpenGL, DirectX and even pixel and fragment shaders, so we have forgotten the ancient art [smile]


Don't fret, here you go.

http://www.devmaster.net/articles/software-rendering/part1.php

http://www.icarusindie.com/DoItYourSelf/rtsr/ <-- this one is awesome

Quote:
Original post by blizzard999
I did not find the specification, but I found the GL state diagram and... yes... the culling is performed after screen projection.
Now I know why GL, before HW acceleration, was so crappy [smile]

I don't know how one can not find the specification. A Google search for the, in my opinion, pretty obvious phrase opengl specification returns as its first link the place where you can download it. And opengl.org, in my opinion an obvious place to look, has a direct link in the left menu on the front page.

Anyway, now I have given you two ways of getting it, so now you know where it is [wink]

Quote:
Original post by Brother Bob
I don't know how one can not find the specification. A Google search for the, in my opinion, pretty obvious phrase opengl specification returns as its first link the place where you can download it. And opengl.org, in my opinion an obvious place to look, has a direct link in the left menu on the front page.

Anyway, now I have given you two ways of getting it, so now you know where it is [wink]


[headshake]

I'm sorry... obviously I found the specification as well as the state diagram (they are on the same page at gl.org!).
What I haven't found is the part of the specification about backface culling (probably in the new version it's no longer in section 2.13 as mikeman quoted...).
No problem, though, because if you follow the pipeline in the state diagram you can see where backface culling happens.

Quote:
Original post by blizzard999
I'm sorry... obviously I found the specification as well as the state diagram (they are on the same page at gl.org!).
What I haven't found is the part of the specification about backface culling (probably in the new version it's no longer in section 2.13 as mikeman quoted...).

If that's what you meant, then I'm sorry for the misunderstanding. If you still want to read about it, though, it's around equation (2.6) on page 63 of the OpenGL 2.0 specification. That particular part is about coloring with two-sided lighting, where you need to determine which side is visible (to choose front or back material properties). Determining whether the front or the back is visible is exactly what backface culling in OpenGL is about.

