# OpenGL backface culling

## Recommended Posts

I was just wondering if anyone knows exactly how OpenGL performs backface culling. I'm guessing that it looks at the 2D projection and checks whether the points are in clockwise or counterclockwise order. If they're counterclockwise and it's in GL_CW mode, then I think it removes the face. What is the fastest way to tell whether a set of points is clockwise or counterclockwise, though? Thanks. mike http://www.coolgroups.com/

##### Share on other sites
I don't know, but my guess would be that it calculates something like a normal for the points and determines whether the resulting vector points towards or away from the viewer.

To add to your question: performance-wise, is there a difference between switching between back and front culling versus leaving the cull mode alone and switching between GL_CW and GL_CCW? Apart from having to specify the vertices in a different order, I mean.

##### Share on other sites
Your guess is correct. As far as I know, that's exactly how OpenGL performs backface culling.
As for your second question: I believe there is no other way than to arrange your vertices in the desired order yourself.

```cpp
// Clockwise
glBegin(GL_TRIANGLES);
glVertex3f(-1.0f, -1.0f, 0);
glVertex3f( 1.0f, -1.0f, 0);
glVertex3f( 1.0f,  1.0f, 0);
glEnd();

// Counter-clockwise
glBegin(GL_TRIANGLES);
glVertex3f(-1.0f, -1.0f, 0);
glVertex3f( 1.0f,  1.0f, 0);
glVertex3f( 1.0f, -1.0f, 0);
glEnd();
```

Greets

Chris

##### Share on other sites
Backface culling is done in hardware, not by OpenGL. I assume the rasterizer can figure out the orientation while it's calculating deltas and such anyway in preparation for filling the triangle.

DrewGreen: No, there is no speed difference.

##### Share on other sites
Well, as far as I know, the algorithm is the same as the one used for lighting: it computes the triangle's normal vector, then the viewer vector, and then takes the dot product between them. If the angle between them is greater than 90 degrees, the triangle is invisible to the user.

##### Share on other sites
If this is true, then why is it that OpenGL makes you do normals yourself for lighting but does them for you for backface culling?

Quote:
Original post by meeshoo
well, as far as i know, the algorithm is the same as the one used by lighting. it computest the triangle normal vector, then it computes the viewer vector and then it computes a dotproduct between them. if the angle is > 90 degrees, the triangle is invisible to the user.

##### Share on other sites
Quote:
Original post by mike74
If this is true, then why is it that OpenGL makes you do normals yourself for lighting but does them for you for backface culling?

Quote:
Original post by meeshoo
well, as far as i know, the algorithm is the same as the one used by lighting. it computest the triangle normal vector, then it computes the viewer vector and then it computes a dotproduct between them. if the angle is > 90 degrees, the triangle is invisible to the user.

For flexibility, and because a single triangle can have three different normals for surface approximation, all pointing in different directions.

The winding order of the vertices - in the context of graphics APIs - is almost exclusively used for backface culling.

##### Share on other sites
The algorithm is very simple.

Given a polygon A, B, C, ... you can compute the normal from the first three points:

normal = ( B-A ) ^ ( C-A )

The observer always looks down the negative z direction (that is, after the modelview transform).

Now, if the polygon is facing the observer, the scalar product

normal * observer_direction

is negative; otherwise it is positive.

```cpp
// since you only need the z component you can 'optimize' the code
bool back_faced = CCW ? ((B-A)^(C-A)).z > 0 : ((B-A)^(C-A)).z < 0;
```

It's performed by OpenGL before rasterization

EDIT: I confused the sign in back_faced (as I wrote above, if the scalar product is < 0 the polygon is front-facing...)

[Edited by - blizzard999 on September 6, 2005 4:05:34 AM]
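The z-sign test above can be sketched in plain C. Note that, as pointed out later in the thread, this sign test is only reliable on coordinates after projection; for eye-space points under a perspective projection it can give wrong answers. `Vec3`, `cross_z` and the `ccw_front` flag are illustrative names, not OpenGL API:

```c
#include <assert.h>
#include <stdbool.h>

typedef struct { float x, y, z; } Vec3;

/* z component of the cross product (B-A) x (C-A); only x and y of the
   inputs matter for this component. */
static float cross_z(Vec3 a, Vec3 b, Vec3 c) {
    float abx = b.x - a.x, aby = b.y - a.y;
    float acx = c.x - a.x, acy = c.y - a.y;
    return abx * acy - aby * acx;
}

/* Back-face test with the corrected sign from the EDIT: when front faces
   are counter-clockwise, a negative z component means the projected
   triangle winds clockwise, i.e. it is back-facing. */
static bool back_faced(Vec3 a, Vec3 b, Vec3 c, bool ccw_front) {
    float cz = cross_z(a, b, c);
    return ccw_front ? (cz < 0.0f) : (cz > 0.0f);
}
```

With the counter-clockwise triangle from the earlier glBegin example, `back_faced` returns false; reversing two vertices makes it return true.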

##### Share on other sites
yes, this must be the algorithm.

##### Share on other sites
Quote:
Original post by blizzard999
The algorithm is very simple. Given a polygon A, B, C, ... you can compute the normal from the first three points. The observer looks always in the opposite z direction (that is after the modelview transform is like it is). Now, if the polygon is looking toward the observer the scalar product is negative otherwise is positive.

I'm sorry, but you are absolutely wrong )))
Imagine the situation: you have a wide viewport with a horizontal field of view of about 120-150 degrees. Imagine a cube in front of you, oriented towards you with one of its faces. Only the cube's front side is visible; the other sides get culled by your algorithm because they have a zero dot product (the back side has a negative dot product, so it is culled anyway).
That is the right situation and the right result - but only until the cube begins to move to the right. It moves, and moves... the dot products remain the same... but now we must see its left side!!! Imagine it, we MUST see it! Yet it is still culled by your algorithm.

The only correct method is to build the plane through the three triangle vertices and check whether the viewpoint lies in the positive half-space of that plane.

So, the algo is:
---------------

A, B, C - points of triangle after modelview transformation

calculate triangle plane:
N = (B - A) x (C - A)
d = A dot N

Now the equation of the plane is: N dot X - d = 0
The equation of the positive half-space is: N dot X - d > 0

So put X = (0,0,0) in here.
We get -d > 0 as the visibility test; in other words, if d < 0 the triangle is visible.

So, if A dot ((B - A) x (C - A)) >= 0, the triangle gets culled.
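A minimal C sketch of this half-space test, with the eye at the origin after the modelview transform (the `Vec3` type and function names are illustrative, not part of any API):

```c
#include <assert.h>
#include <stdbool.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { return (Vec3){a.x-b.x, a.y-b.y, a.z-b.z}; }

static Vec3 cross(Vec3 a, Vec3 b) {
    return (Vec3){ a.y*b.z - a.z*b.y,
                   a.z*b.x - a.x*b.z,
                   a.x*b.y - a.y*b.x };
}

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Half-space test from the post: the triangle A,B,C (eye-space,
   counter-clockwise winding) is culled when A . ((B-A) x (C-A)) >= 0,
   i.e. when the eye at the origin is not in the positive half-space
   of the triangle's plane. */
static bool culled(Vec3 A, Vec3 B, Vec3 C) {
    Vec3 N = cross(sub(B, A), sub(C, A));
    return dot(A, N) >= 0.0f;
}
```

For a counter-clockwise triangle at z = -5 directly in front of the eye, `culled` returns false; reversing the winding makes it return true, and unlike the pure z-sign test this stays correct for triangles far off to the side of a wide frustum.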

##### Share on other sites
Today we use OpenGL, DirectX and even pixel and fragment shaders, so we have forgotten the ancient art [smile]

You are right!!! And I'm wrong [bawling] but be nice... not totally wrong.

The discriminant is the sign of the signed distance between the observer position (i.e. O = (0,0,0) after the transform) and the polygon's plane.

So if the normal is

N = (B-A)^(C-A) in CCW notation

the plane is described by the implicit equation

N * P + d = 0

where d = - N * A ( A is one of the points in the plane )

Now, the distance from the origin to the plane is simply... d = - N * A

If this distance is positive, the polygon is front-facing and hence 'visible'.

```cpp
float d = - A * ( (B-A)^(C-A) );
bool back_faced = CCW ? d < 0 : d > 0;
```

This is what you explained. Thanks for the correction. [smile]

##### Share on other sites
blizzard999, never mind! )))

I only know this because, once )), when I was a young boy, I wrote a fully software renderer. The engine was Z-buffer based, with lighting, texturing and environment mapping. It had no transparency support at all )))
And, of course, I implemented back-face culling. The very first implementation was the same one you described first ))) and it didn't work properly ))) so I had to think a little bit more )))

GL!

##### Share on other sites
Whoa, this thread has too much misinformation!

In OpenGL, the culling is done in screen space. That is, the vertices of the triangle are transformed into clip space and divided by w. Then, based on their winding, it determines which side of the face (front or back) you're looking at. It uses that result to do the culling if GL_CULL_FACE is enabled, or to determine which material parameters (front or back) will be used if you have two-sided lighting enabled. There are no normals involved or anything like that.

There is the EXT_cull_vertex extension that allows vertices to be culled based on their normals (which you must supply, as always). When all vertices of a face are culled, the whole face is culled. You may gain some performance because vertices can be culled without going through the transformation to screen space.
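The window-space test described above can be sketched in C. The clip-space coordinates are assumed to be given already, and all names (`ClipVert`, `cull`, ...) are illustrative, not OpenGL API:

```c
#include <assert.h>
#include <stdbool.h>

/* Clip-space vertex; z is omitted because winding needs only x, y, w. */
typedef struct { float x, y, w; } ClipVert;

/* Twice the signed area of the triangle after the divide by w.
   Positive means counter-clockwise in a y-up coordinate system. */
static float signed_area2(ClipVert a, ClipVert b, ClipVert c) {
    float ax = a.x / a.w, ay = a.y / a.w;
    float bx = b.x / b.w, by = b.y / b.w;
    float cx = c.x / c.w, cy = c.y / c.w;
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

/* GL-style decision: front_face_ccw plays the role of glFrontFace(GL_CCW),
   cull_back the role of glCullFace(GL_BACK). */
static bool cull(ClipVert a, ClipVert b, ClipVert c,
                 bool front_face_ccw, bool cull_back) {
    bool front = front_face_ccw ? (signed_area2(a, b, c) > 0.0f)
                                : (signed_area2(a, b, c) < 0.0f);
    return cull_back ? !front : front;
}
```

Because the test happens after the perspective divide, it handles the wide-field-of-view cube case from earlier in the thread correctly.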

##### Share on other sites
mikeman,
I never said that I know how GL does its back-face culling )))
I just made a correction to blizzard999's algorithm )))

Anyway, thanks for the info! Very informative!

##### Share on other sites
If it doesn't use normals, then how does it determine their winding in screen space?

Quote:
Original post by mikeman
Whoa, this thread has too much misinformation! In OpenGL, the culling is done in screen-space. That is, the vertices of the triangle are transformed into clip space and divided by w. Then, based on their winding, determines which side of the face (front or back) you're looking at. It uses that result to do the culling if GL_CULL_FACE is enabled, or to determine which material parameters (front or back) will be used if you have two-sided lighting enabled. There is no normals involved or anything like that. There is the EXT_CULL_VERTEX extension that allows to cull vertices based on their normals (which you must supply as always). When all vertices are culled, then the whole face is culled. You may gain some performance because vertices can be culled without going through the transformation to screen-space.

##### Share on other sites
Quote:
Original post by mike74
If it doesn't use normals, then how does it determine their winding in screen space?

It checks the sign of the z component of the cross product of the two edge vectors taken relative to one of the vertices.

something like this:

```cpp
float x0 = v[2].x - v[0].x;
float y0 = v[2].y - v[0].y;

float x1 = v[2].x - v[1].x;
float y1 = v[2].y - v[1].y;

float cz = x0 * y1 - y0 * x1;

if (cz > 0.f) return GL_CW;
else return GL_CCW;
```

##### Share on other sites
Quote:
Original post by mikeman
Whoa, this thread has too much misinformation! In OpenGL, the culling is done in screen-space. That is, the vertices of the triangle are transformed into clip space and divided by w. Then, based on their winding, determines which side of the face (front or back) you're looking at. It uses that result to do the culling if GL_CULL_FACE is enabled, or to determine which material parameters (front or back) will be used if you have two-sided lighting enabled. There is no normals involved or anything like that. There is the EXT_CULL_VERTEX extension that allows to cull vertices based on their normals (which you must supply as always). When all vertices are culled, then the whole face is culled. You may gain some performance because vertices can be culled without going through the transformation to screen-space.

The same result can be achieved in eye-space or screen coordinates; however, as the algorithm shows, it is more efficient to compute it in eye-space coordinates because you can cull a polygon right after the modelview transform.
You can also deduce which side is visible.

Note that the normal we are talking about is the geometric normal of the polygon's plane, not the normal(s) used for lighting.

In other words, if you use this algorithm (and it's correct now [smile]) you will see no difference from the result produced by GL...
It's also probable that different GL implementations (video cards, drivers, ...) use different algorithms to produce the same result. Why not?

##### Share on other sites
Quote:
 In other words if you use this algorithm ( and it's now correct ) you will see no difference with the result provided by GL... It also probable that different GL implementations (video cards, drivers,...) use different algorithms to produce the same result. Why not?

It's not a matter of implementation. The OpenGL specs clearly state otherwise:

Quote:
 The first step of polygon rasterization is to determine if the polygon is back facing or front facing. This determination is made by examining the sign of the area computed by equation 2.7 of section 2.13.1 (including the possible reversal of this sign as indicated by the last call to FrontFace). If this sign is positive, the polygon is frontfacing; otherwise, it is back facing. This determination is used in conjunction with the CullFace enable bit and mode value to decide whether or not a particular polygon is rasterized.

and in section 2.13.1:

Quote:
 The selection between back color and front color depends on the primitive of which the vertex being lit is a part. If the primitive is a point or a line segment, the front color is always selected. If it is a polygon, then the selection is based on the sign of the (clipped or unclipped) polygon's signed area computed in window coordinates.

Of course using normals would give the same result; it's just not the behaviour the OpenGL specs define. As I said, that approach is available through an extension.
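For reference, the signed area the spec's equation 2.7 computes is (paraphrasing from memory of the spec) the standard signed-area formula over the polygon's n window coordinates $(x_i, y_i)$:

$$a = \frac{1}{2} \sum_{i=0}^{n-1} \left( x_i\, y_{i \oplus 1} - x_{i \oplus 1}\, y_i \right)$$

where $\oplus$ denotes addition modulo $n$. For a triangle this reduces to the same edge cross product discussed earlier in the thread; only the sign matters for the front/back decision.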

Also note that the "normals algorithm" would require the normals to be supplied somehow. If the user had to supply precalculated normals, backface culling would require normals in order to work right, which definitely breaks the OpenGL interface. If we assume the card implicitly calculated a normal for each and every face, I don't think it would be faster, since it would have to do so every time a primitive was rendered. Calculating normals is usually an expensive procedure and not something you do on the fly.

##### Share on other sites
What's that stuff about back color and front color? I thought it would just use the glColor3f colors if the polygon is visible, and otherwise not show it at all.

##### Share on other sites
Quote:
Original post by mike74
What's that stuff about back color and front color? I thought it would just use the glColor3f colors if the polygon is visible. Otherwise, it doesn't show it.

Haven't you seen the first parameter of glMaterial, which is GL_FRONT, GL_BACK or GL_FRONT_AND_BACK? You can specify different materials for front and back faces. If you have two-sided lighting enabled (via glLightModel), OpenGL uses the front or back material in the lighting equation based on which side of the polygon you're seeing (and reverses the normal for back faces).
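As a concrete illustration of that, a minimal fixed-function setup might look like this (a state fragment, not a complete program; it assumes a current GL context with lighting enabled, and the material colors are made-up values):

```c
/* Enable two-sided lighting: for back-facing polygons OpenGL flips
   the normal and uses the GL_BACK material. */
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);

/* Different diffuse materials for the two sides of each polygon. */
const GLfloat front_diffuse[] = { 0.8f, 0.1f, 0.1f, 1.0f };
const GLfloat back_diffuse[]  = { 0.1f, 0.1f, 0.8f, 1.0f };
glMaterialfv(GL_FRONT, GL_DIFFUSE, front_diffuse);
glMaterialfv(GL_BACK,  GL_DIFFUSE, back_diffuse);

/* Back faces only reach the lighting stage if they are not culled,
   so two-sided lighting is mainly useful with culling disabled. */
glDisable(GL_CULL_FACE);
```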

##### Share on other sites
I did not find the specification, but I found the GL state diagram and... yes... the culling is performed after screen projection.
Now I know why GL, before HW acceleration, was so crappy [smile]

##### Share on other sites
Quote:
Original post by blizzard999
Today we use OpenGL, DirectX and even pixel and fragment shaders so we have forgot the ancient art [smile]

don't fret, here you go.

http://www.devmaster.net/articles/software-rendering/part1.php

http://www.icarusindie.com/DoItYourSelf/rtsr/ <-- this one is awesome

##### Share on other sites
Quote:
Original post by blizzard999
I did not found the specification but I found the GL state diagram and...yes...the culling is performed after screen projection. Now I know why GL, before HW acceleration, was so crappy [smile]

I don't know how one can not find the specification. A Google search for, in my opinion, a pretty obvious phrase, opengl specification, returns as its first link the place where you can download it. And in, in my opinion, an obvious place to look, opengl.org has a direct link in the left menu on the front page.

Anyway, now I have given you two ways of getting it, so now you know where it is [wink]

##### Share on other sites
Quote:
Original post by Brother Bob
I don't know how one can not find the specification. A Google for, in my oppinion a pretty obvious search phrase, opengl specification returns, as first link, the place where you can download it. And on, in my oppinion an obvious place to look, opengl.org has a direct link in the left menu on the front page. Anyway, now I have given you two ways of getting it, so now you know where it is [wink]

I'm sorry... obviously I found the specification as well as the state diagram (they are on the same page at gl.org!).
What I had not found is the part of the specification about backface culling (probably in the new version it's no longer in section 2.13 as mikeman reported...).
No problem, because if you follow the pipeline in the state diagram you can see where backface culling happens.

##### Share on other sites
Quote:
Original post by blizzard999
I'm sorry...obviously I found the specification as well as the state diagram (they are on the same page at gl.org !) What I've not found is the specification about the backface culling (probably with the new version it's no more in the section 2.13 as mikeman reported...)

If that's what you meant, then I'm sorry for the misunderstanding. If you still want to read about it, it's around equation (2.6) on page 63 of the OpenGL 2.0 specification. That particular part is about coloring with two-sided lighting, where you need to determine which side is visible (to choose front or back material properties). Determining whether the back or the front is visible is what backface culling in OpenGL is all about.
