Problems with the camera and backface culling



I'm trying to write my own 3D engine. I successfully have a 3D spinning cube, with some simple lighting effects, that moves back and forth on the Z axis while spinning. However, I'm having difficulty getting the backface culling to work accurately. I realize I'm about to ask a very unspecific question, and I apologize for that, but I really don't know where specifically the problem is or how to fix it.

I have placed the camera in the center of the screen, at the 0 Z coordinate. When the cube is directly beneath the camera (same X and Y coordinates), there are no problems and everything works perfectly. However, when I move the cube away from the center of the screen, the backface culling becomes inaccurate. Only polygons facing toward the center of the screen are drawn, as desired, but the test becomes hyper-sensitive. Instead of drawing only the polygons that could be seen by someone sitting behind the screen, it draws those that would face someone sitting at the camera location who turns their head toward the cube. It's almost as if the Z axis isn't being taken into account properly: translating the cube along the Z axis has SOME effect on which polygons are drawn, but not as much as would seem appropriate. That is the best I can describe it. It's not as simple as the Z axis being ignored entirely, but it doesn't seem to be handled properly.

In the hope of communicating better than my feeble explanation can, I'm going to post the code I use to decide whether a polygon is facing the camera or not. I treat every object as having a central point (originX, originY, originZ) in space around which its vertices are placed; each vertex is stored relative to this origin. For example, the origin point may be (123, 456, 789) or (987, 654, 321), but wherever it is, a vertex keeps the same relative value (say, -5, 7, 0) at all times, unless rotated or scaled. I hope that makes sense. I don't mean to digress, but the code fragment wouldn't make sense without knowing that. Also, each object in my 3D engine takes the form of a class, called Object3D, and there is a global coordinate triple representing the camera. A Vector3D is merely a structure with an x, y, and z component. Any other values in the code, unless otherwise defined, are member variables of the class. Finally, here's the code:
    
// Perform backface culling.

void Object3D::UpdateVisibility(void)
{
	Vector3D view;
	float cameraX, cameraY, cameraZ;
	int index, counter;

	counter = polyCount;
	index = (polyCount * 3) - 2;

	// Offset from the camera to the object's origin.
	cameraX = originX - cameraPos.x;
	cameraY = originY - cameraPos.y;
	cameraZ = originZ - cameraPos.z;

	visiblePolygons = 0;

	while(counter--)
	{
		// View vector from the camera to one vertex of this polygon.
		view.x = vertices[links[index]].x + cameraX;
		view.y = vertices[links[index]].y + cameraY;
		view.z = vertices[links[index]].z + cameraZ;

		index -= 3;

		if( ((view.x * normals[counter].x) +
		     (view.y * normals[counter].y) +
		     (view.z * normals[counter].z)) < 0)
		{
			visibility[counter] = true;
			visiblePolygons++;
		}
		else
		{
			visibility[counter] = false;
		}
	}
}
    
Thanks for any help you can give. I realize that this is a poor way to ask such a complex question, but if I had a better grasp on what the problem was, I would have fixed it already. I'll be happy to post anything else that could help pinpoint the problem; uploading the .exe to my site and linking to it comes to mind, since it would certainly demonstrate the problem better than my explanation. Feel free to ask. And thanks again for any help you can give. Edited by - Carnivorous Duck on June 27, 2001 12:46:28 AM

Are you making a 3D engine with a software rasterizer? I mean, do you fill the polygons yourself, or do you pass the vertices to an API (DirectX/OpenGL)? In the latter case you don't need to do backface culling; the API does it automatically.

Heya,

good going!
Since you only have one cube for now, I would prefer
2D backface culling.

Formula:
// poly not visible -> continue
if ( ((v2->sx - v1->sx) * (v1->sy - v3->sy)) -
     ((v2->sy - v1->sy) * (v1->sx - v3->sx)) > 0 ) continue;

sx and sy are the projected vertices, so no view or camera vector is needed; those were already taken into account during projection.
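As a self-contained sketch of that 2D test (the struct SVertex and the function name IsBackFacing2D are my own, and the winding convention is an assumption, not something from the post):

```cpp
#include <cassert>

// Hypothetical projected-vertex struct; the sx/sy names follow the post.
struct SVertex { float sx, sy; };

// Returns true when the projected triangle (v1, v2, v3) should be culled.
// Assumes screen coordinates (y grows downward) with front faces wound
// clockwise; the sign convention matches the formula above, which culls
// when the 2D cross product is positive.
bool IsBackFacing2D(const SVertex& v1, const SVertex& v2, const SVertex& v3)
{
    return ((v2.sx - v1.sx) * (v1.sy - v3.sy)
          - (v2.sy - v1.sy) * (v1.sx - v3.sx)) > 0.0f;
}
```

Reversing the vertex order flips the sign, so the same triangle passed with opposite winding gets culled.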

What you were trying to do is 3D backface culling:

Formula:

// The polygon is visible if the following is true:
dotproduct(normal, CAM) >= cullpoint

where:
normal is the normal vector of the polygon (in 3D),
CAM is the inverse-transformed camera location vector, and
cullpoint is dotproduct(normal, any vertex of the poly).

Either the camera is inverse-transformed and the normals and vertices are in object space (which is a crude optimization), or the normals and vertices are in camera space (rotated, translated, scaled) and CAM is the plain camera vector.

This is almost what you did, I guess, but you forgot about the cullpoint, which is not just '0'.
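A minimal sketch of that 3D test, assuming normals and vertices share one space with the camera vector (Vector3D matches the original post; Dot and IsVisible3D are hypothetical helper names):

```cpp
#include <cassert>

// Minimal vector type mirroring the post's Vector3D.
struct Vector3D { float x, y, z; };

static float Dot(const Vector3D& a, const Vector3D& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// The polygon is visible when dot(normal, cam) >= dot(normal, anyVertex).
// normal and vertex must be in the same space as cam: object space with an
// inverse-transformed camera, or camera space with the plain camera vector.
bool IsVisible3D(const Vector3D& normal, const Vector3D& cam,
                 const Vector3D& vertex)
{
    float cullpoint = Dot(normal, vertex);  // plane-offset term, not just 0
    return Dot(normal, cam) >= cullpoint;
}
```

For example, a face at z = 5 with normal (0, 0, -1) is visible from a camera at the origin, while the opposite face with normal (0, 0, 1) is not.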

Gr,
BoRReL

AND NO, HE IS NOT USING DIRECT3D OR OPENGL!!!!
