Disappearing triangles when getting close to an object.

Started by Lionheartadi
7 comments, last by Lionheartadi 20 years, 9 months ago
Hi, I wrote some code that leaves out triangles that aren't facing towards the camera. In other words, the triangles on the far side of an object, which face away from the camera, are not passed to DirectX. This saves me 40-60 percent of a model's triangles. Pretty cool. The math and the technique are somewhat based on this article http://www.gamedev.net/reference/articles/article1088.asp and on the Real-Time Rendering, Second Edition book.

It works great, but with some models, when you get closer to an object, some of the triangles disappear even though they should still be visible. I don't know exactly what the problem is, but I think the angle between the face normal and the eye vector becomes too small or something. I'm not sure, which is why I need some help here. Below are some pictures showing models that work, like the teapot, geosphere and a box, and then more complex models like a statue (model data from one of the http://www.gametutorials.com/ tutorials) and another complex model made by my friend. Here are the pictures:

http://www.kyamk.fi/~oh1adsi/GeoSphere.JPG
http://www.kyamk.fi/~oh1adsi/box.JPG
http://www.kyamk.fi/~oh1adsi/Teapot.JPG
http://www.kyamk.fi/~oh1adsi/statue.JPG
http://www.kyamk.fi/~oh1adsi/Head.JPG

I'd like to point out that the box, teapot and geosphere also show disappearing triangles, but only in very rare situations, from camera positions very close to the object. Below is the core code that processes every face of an object and determines whether it needs to be rendered.

			pTempVertex = (RAWMOBJ->obj_assemblyData.objVertex_List.begin() + OBJFACE->a);          // Get a vertex from the object's current face
			pFaceNormal = *(RAWMOBJ->obj_normalData.objFaceNormal_List.begin() + OBJFACE->FaceID);  // Get the object's current face normal

			// Transform the face normal into world space
			D3DXVec3TransformNormal(&TempNormal, &pFaceNormal, &this->WorldViewProjectCameraData->matWorld);

			// Transform the vertex position into world space
			// (note: D3DXVec3TransformNormal ignores the translation part of matWorld)
			D3DXVec3TransformNormal(&TempVertexPos, &pTempVertex->position, &this->WorldViewProjectCameraData->matWorld);

			// Calculate the eye vector, from the vertex towards the camera
			D3DXVec3Subtract(&CameraVector, &this->WorldViewProjectCameraData->camDat.viewEyePosition, &TempVertexPos);

			// Do the dot product and see if we need to render this face (triangle)
			if (D3DXVec3Dot(&TempNormal, &CameraVector) > 0)
			{
				TriIndexBufferArray.push_back(*OBJFACE);
			}

If someone could help me here it would save me a lot of time, because I'm in a hurry to get this feature ready. If anyone has the knowledge to help me, please do. Thank you in advance for any help given by anyone.
Adrian Simionescu
quote:Original post by Lionheartadi
I wrote some code that leaves out triangles that aren't facing towards the camera.


Before spending more time on your code, you should know that modern cards with hardware transform & lighting -- like all GeForce and Radeon cards -- do backface culling in hardware much faster than you can do it in software. If you are targeting one of these modern cards, you will be able to render your models much faster by using a vertex buffer (or whatever it is called in D3D), rendering your entire model with a single call and letting the hardware do backface culling.

If you still want to fix your code, you should try transforming your face normals into screen space because then you can simply test the sign of the z component of the transformed normal to determine if it is back facing, skipping eye vector calculation and dot product.
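
A rough sketch of that test, reusing the variables from your snippet (matView is an assumed member name for your view matrix, and checking the normal's z in camera space is only an approximation of a true screen-space test under perspective):

			// Concatenate world and view once per frame, then test each face normal's z sign.
			D3DXMATRIX matWorldView;
			D3DXMatrixMultiply(&matWorldView,
			                   &this->WorldViewProjectCameraData->matWorld,
			                   &this->WorldViewProjectCameraData->matView);   // matView is an assumed name

			D3DXVECTOR3 viewNormal;
			D3DXVec3TransformNormal(&viewNormal, &pFaceNormal, &matWorldView);

			// In D3D's left-handed view space the camera looks down +z, so a face
			// whose normal points towards the camera ends up with a negative z.
			if (viewNormal.z < 0.0f)
			{
				TriIndexBufferArray.push_back(*OBJFACE);
			}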
The screenshots look like everything is seen inside-out. Did you try flipping your face normals in your test?

Y.
quote:Original post by kronq
quote:Original post by Lionheartadi
I wrote some code that leaves out triangles that aren't facing towards the camera.


Before spending more time on your code, you should know that modern cards with hardware transform & lighting -- like all GeForce and Radeon cards -- do backface culling in hardware much faster than you can do it in software. If you are targeting one of these modern cards, you will be able to render your models much faster by using a vertex buffer (or whatever it is called in D3D), rendering your entire model with a single call and letting the hardware do backface culling.

If you still want to fix your code, you should try transforming your face normals into screen space because then you can simply test the sign of the z component of the transformed normal to determine if it is back facing, skipping eye vector calculation and dot product.


Hi

Thanks for the tip, I'll try it. The reason I'm doing it this way is to minimize the amount of data that needs to be sent to the 3D card. I still have to test whether this helps at all, but theoretically you save bandwidth; the only setback is that CPU power is spent on these calculations. I need to test if it helps, but it's fun to learn it for yourself.

Adrian Simionescu
quote:Original post by Lionheartadi
The reason I'm doing it this way is to minimize the amount of data that needs to be sent to the 3D card. I still have to test whether this helps at all, but theoretically you save bandwidth; the only setback is that CPU power is spent on these calculations.


If you use a DirectX static vertex buffer to store your model data, it is very likely (depending on your card/driver) that the data will be stored on the card itself and require almost no bandwidth after the initial upload. By preprocessing every triangle on the host CPU you ensure that the 3D card will spend most of its time simply waiting for something to do.

quote:
I need to test if it helps, but it's fun to learn it for yourself.


Yes, it is fun to learn it for yourself -- and testing things yourself is essential for learning the best techniques -- but when it comes to 3D graphics there are lots of things to learn! Once you get your backface culling working, be sure to try out vertex buffers and test the difference...
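
To make the idea concrete, here is a rough D3D9-style sketch (the device pointer, the vertex struct, numVertices and modelVertices are illustrative names, not from your code):

			// Illustrative position-only vertex format, just to keep the sketch short.
			struct Vertex { float x, y, z; };
			const DWORD VERTEX_FVF = D3DFVF_XYZ;

			// Create a static, write-only vertex buffer; the driver is then free to keep
			// the data in video memory after the initial upload.
			IDirect3DVertexBuffer9* vb = NULL;
			device->CreateVertexBuffer(numVertices * sizeof(Vertex), D3DUSAGE_WRITEONLY,
			                           VERTEX_FVF, D3DPOOL_DEFAULT, &vb, NULL);

			// Fill it once.
			void* data = NULL;
			vb->Lock(0, 0, &data, 0);
			memcpy(data, modelVertices, numVertices * sizeof(Vertex));
			vb->Unlock();

			// Every frame: let the card do the backface culling and draw the whole
			// model with a single call.
			device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);
			device->SetFVF(VERTEX_FVF);
			device->SetStreamSource(0, vb, 0, sizeof(Vertex));
			device->DrawPrimitive(D3DPT_TRIANGLELIST, 0, numVertices / 3);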
quote:Original post by kronq
quote:Original post by Lionheartadi
The reason I'm doing it this way is to minimize the amount of data that needs to be sent to the 3D card. I still have to test whether this helps at all, but theoretically you save bandwidth; the only setback is that CPU power is spent on these calculations.


If you use a DirectX static vertex buffer to store your model data, it is very likely (depending on your card/driver) that the data will be stored on the card itself and require almost no bandwidth after the initial upload. By preprocessing every triangle on the host CPU you ensure that the 3D card will spend most of its time simply waiting for something to do.

quote:
I need to test if it helps, but it's fun to learn it for yourself.


Yes, it is fun to learn it for yourself -- and testing things yourself is essential for learning the best techniques -- but when it comes to 3D graphics there are lots of things to learn! Once you get your backface culling working, be sure to try out vertex buffers and test the difference...


Hmm... You may be 110 percent right, but I wonder how you do things in OpenGL. I mean, you have the same capabilities in both DX and OGL, but OGL doesn't have vertex or index buffers (well, not to my knowledge). You do pass data to OGL through the glBegin() function, but does that have the same effect as the DX buffers? I just wonder, I don't want to die entirely stupid. Knowledge is gooood. :D
Adrian Simionescu
quote:Original post by Lionheartadi
I mean, you have the same capabilities in both DX and OGL, but OGL doesn't have vertex or index buffers (well, not to my knowledge). You do pass data to OGL through the glBegin() function, but does that have the same effect as the DX buffers?


In OGL you have about half a dozen ways to do it. glBegin() etc. is the slowest way you can find, hand-feeding every single vertex. Display lists are fine but hardwired. Vertex arrays can be used with or without index arrays, either as single arrays for color, normal, position, etc. or as one interleaved array. With the vertex buffer object extension you get buffers that are pretty much like the ones you know from DirectX.
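
A rough sketch of the VBO path, for comparison (this assumes the ARB_vertex_buffer_object entry points have already been loaded with wglGetProcAddress or an extension loader; numVertices and vertexPositions are placeholder names):

			// Upload the vertex data once into a buffer object.
			GLuint vbo = 0;
			glGenBuffersARB(1, &vbo);
			glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
			glBufferDataARB(GL_ARRAY_BUFFER_ARB,
			                numVertices * 3 * sizeof(float),
			                vertexPositions,               // tightly packed x,y,z floats
			                GL_STATIC_DRAW_ARB);

			// Every frame: draw straight from the buffer and let the hardware cull back faces.
			glEnable(GL_CULL_FACE);
			glEnableClientState(GL_VERTEX_ARRAY);
			glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
			glVertexPointer(3, GL_FLOAT, 0, (const GLvoid*)0);   // offset 0 into the bound buffer
			glDrawArrays(GL_TRIANGLES, 0, numVertices);
			glDisableClientState(GL_VERTEX_ARRAY);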

f@dz http://festini.device-zero.de
quote:Original post by Lionheartadi
Hmm... You may be 110 percent right, but I wonder how you do things in OpenGL. I mean, you have the same capabilities in both DX and OGL, but OGL doesn't have vertex or index buffers (well, not to my knowledge). You do pass data to OGL through the glBegin() function, but does that have the same effect as the DX buffers? I just wonder, I don't want to die entirely stupid. Knowledge is gooood. :D
It has been a little long in coming, but ARB_vertex_buffer_object is now fully supported by most ATI/NVIDIA cards.

How appropriate. You fight like a cow.
quote:Original post by Trienco
quote:Original post by Lionheartadi
I mean, you have the same capabilities in both DX and OGL, but OGL doesn't have vertex or index buffers (well, not to my knowledge). You do pass data to OGL through the glBegin() function, but does that have the same effect as the DX buffers?


In OGL you have about half a dozen ways to do it. glBegin() etc. is the slowest way you can find, hand-feeding every single vertex. Display lists are fine but hardwired. Vertex arrays can be used with or without index arrays, either as single arrays for color, normal, position, etc. or as one interleaved array. With the vertex buffer object extension you get buffers that are pretty much like the ones you know from DirectX.



Aaaa... Cool, thanks, that clears my skies of dark clouds. It's just that sometimes I feel like screaming at DX and switching to OGL, but then after a while I cool down. Heh... someday I may go crazy and dump DX, before it shortens my life by 20%.
Adrian Simionescu

This topic is closed to new replies.
