BSP Trees or Normals Associated With Multiple Polygons

5 comments, last by Bob_Smith 20 years, 3 months ago
I am writing a software/OpenGL-based 3D graphics engine designed primarily for general gaming use. I was wondering whether it would be best to implement a BSP tree, or to associate a single normal with multiple polygons or vertices. By checking each of these normals at the start of rendering the scene, one could, if used properly, cull multiple polygons by comparing only a few normals to the camera. The downside is that this method could use a large amount of memory. I am struggling with the implementation: it is easy to set up by hand, but I am battling to find an easy way to build these groups automatically in the software. Can anyone help?
Hi,
I can't figure out how you can cull a polygon using only the normals. With the normals you can only cull the backfaces of the polygons.
What would happen if you have two separate triangles in front of the camera and the back triangle is taller than the front one?
J.Martin Garcia Rangel
OK, maybe I should rephrase the question: I was wondering if one could associate a single normal with a bunch of polygons rather than with a single polygon. (You would probably have to create the normal by hand and carry it through the calculations as if it were another vector.) But then you could cull all the backfacing polygons with one calculation, instead of calculating and comparing a normal for each polygon.

Consider this simple, maybe biased example:

You are in a room which has pillars and stairs, alcoves, etc. Say you have four normals, pointing north, east, south and west. (Very simple?) Then associate the normal pointing north with all the polygons facing a northerly direction, and similarly with the other polygons. By comparing the camera with the north-facing normal, you can then check whether the north-facing polygons face the camera or face away from it, and so get rid of a whole bunch of them pretty quickly.
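
If it helps to see it spelled out, here is roughly what I mean in code. This is only a rough, untested sketch (the Vec3/NormalGroup names are made up for illustration), and it treats the whole group as if it were one big flat polygon, so it really only holds up when the group is more or less planar, like a wall:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

struct NormalGroup
{
    Vec3 masterNormal;              // shared "master" normal for the group
    Vec3 anyPointOnGroup;           // a representative vertex of the group
    std::vector<int> triangleIds;   // indices into the mesh's triangle list
};

// Returns true if the whole group can be rejected as back-facing.
// Same test you would do per polygon, just done once for the lot.
// Only safe if every polygon in the group really shares (roughly) this normal
// and lies roughly in the same plane.
bool GroupIsBackFacing(const NormalGroup& g, const Vec3& cameraPos)
{
    Vec3 toGroup = { g.anyPointOnGroup.x - cameraPos.x,
                     g.anyPointOnGroup.y - cameraPos.y,
                     g.anyPointOnGroup.z - cameraPos.z };
    return Dot(g.masterNormal, toGroup) > 0.0f;
}
```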

Hi Bob. The two techniques you describe solve different problems. The "normals" technique tells you which polygons are backfacing. The BSP method tells you which polygons are invisible because they are occluded (they may be facing the camera, but you still want to avoid drawing them).
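
Just to make the distinction concrete, here is a bare-bones sketch of the classic back-to-front BSP traversal (types and names made up for illustration, untested). Note that the tree by itself only gives you a drawing order; engines usually layer a PVS or similar on top to actually skip occluded polygons:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Plane { Vec3 normal; float d; };   // plane: dot(normal, p) + d = 0
struct Polygon { /* vertices, material, ... */ };

struct BspNode
{
    Plane splitter;
    std::vector<Polygon> polygons;   // polygons lying on the splitter plane
    BspNode* front = nullptr;
    BspNode* back  = nullptr;
};

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

void DrawPolygons(const std::vector<Polygon>&) { /* hand off to the renderer */ }

// Painter's-algorithm style traversal: draw the far side of the splitter
// first, then the polygons on the splitter, then the near side.
void DrawBackToFront(const BspNode* node, const Vec3& cameraPos)
{
    if (!node)
        return;

    float side = Dot(node->splitter.normal, cameraPos) + node->splitter.d;

    if (side >= 0.0f)   // camera is in front of the splitter
    {
        DrawBackToFront(node->back, cameraPos);
        DrawPolygons(node->polygons);
        DrawBackToFront(node->front, cameraPos);
    }
    else                // camera is behind the splitter
    {
        DrawBackToFront(node->front, cameraPos);
        DrawPolygons(node->polygons);
        DrawBackToFront(node->back, cameraPos);
    }
}
```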

I understand what you mean. You want to backface-cull a whole wall, full of little geometry details, like protuberant bricks or pipes, by using a "master" normal for the whole wall.

I'm using this trick in my current project, where there are house blocks whose walls are covered in geometric details (windows, doors, and some other stuff). I tried a quick "average normal" calculation at load time, but it wasn't too accurate (maybe I should take the polygons' area into account and do a weighted average), so I ended up leaving it to the artist to flag which wall groups were going to use this kind of culling and to specify the normal for that group.

It works rather well, but it's not a good idea if you have geometry that extrudes too far away from the wall, which makes this trick usable only in certain situations (and is why I left it up to the artist to flag geometry groups for backface culling).
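
If you want to try the weighted version, something along these lines should do it (a rough, untested sketch; the names are just illustrative). The nice part is that the un-normalized cross product of two triangle edges already has a length proportional to the triangle's area, so you get the area weighting for free:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 Cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

struct Triangle { Vec3 v0, v1, v2; };

// Compute a "master" normal for a group of triangles, weighting each
// triangle by its area.
Vec3 AreaWeightedGroupNormal(const std::vector<Triangle>& tris)
{
    Vec3 sum = { 0.0f, 0.0f, 0.0f };
    for (const Triangle& t : tris)
    {
        // Cross product of two edges: direction = face normal,
        // length = 2 * triangle area.
        Vec3 n = Cross(Sub(t.v1, t.v0), Sub(t.v2, t.v0));
        sum.x += n.x;  sum.y += n.y;  sum.z += n.z;
    }

    float len = std::sqrt(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z);
    if (len > 0.0f) { sum.x /= len;  sum.y /= len;  sum.z /= len; }
    return sum;
}
```
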
My first thought was to leave it to the artist, as this would simplify the problem; however, I was wondering if there are any algorithms that would enable me to use this method on arbitrary polygon meshes. It just makes sense to me to use a single normal rather than calculating, or even carrying, many normals around with you the whole time.

I also find that when the model/scene/wall becomes very complex, with many different angles, it is very hard to determine which polygons should be included under which "master" normals, and whether it will actually speed up the rendering process and make it look good or not. Perhaps, as you do, it would be best to leave it up to the artist.
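
For what it's worth, one common automatic approach (just a sketch of the usual normal-clustering idea, not something I've proven on arbitrary meshes) is to pick a handful of candidate directions, bucket each polygon under the closest one within some cone angle, and fall back to per-polygon culling for anything that doesn't fit:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

struct Clusters
{
    std::vector<std::vector<int>> grouped;   // grouped[d] = triangles assigned to direction d
    std::vector<int> ungrouped;              // triangles that keep per-polygon culling
};

// faceNormals must be unit length; directions are the candidate master normals.
Clusters ClusterByNormal(const std::vector<Vec3>& faceNormals,
                         const std::vector<Vec3>& directions,
                         float maxConeAngleRadians)
{
    const float minCosine = std::cos(maxConeAngleRadians);

    Clusters out;
    out.grouped.resize(directions.size());

    for (int tri = 0; tri < static_cast<int>(faceNormals.size()); ++tri)
    {
        int   best    = -1;
        float bestCos = minCosine;   // must beat the cone threshold

        for (int d = 0; d < static_cast<int>(directions.size()); ++d)
        {
            float c = Dot(faceNormals[tri], directions[d]);
            if (c > bestCos) { bestCos = c; best = d; }
        }

        if (best >= 0) out.grouped[best].push_back(tri);
        else           out.ungrouped.push_back(tri);
    }
    return out;
}
```

Bear in mind that a group with a wide cone can only be rejected safely when even its most divergent member would be back-facing, so the group test has to be a bit more conservative than a plain dot product against the master normal.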
Pardon me for dipping a fly in the ointment here, but doesn't each vertex require its own normal anyway, for Gouraud shading? [Or have we magically come up with a method whereby we can blend between adjacent non-coplanar polygons in a pixel shader without knowing about vertex normals...?]

Also, don't modern cards (supporting pure HW VP, etc.) do backface culling on the GPU, and damn fast?

hmmn. I must be missing something.

It occurs to me that yes, you could effectively cull groups of polygons before sending them to the card, but that will require *extra* data storage, for the second normal...
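
(For reference, turning on the hardware path in OpenGL is just a couple of standard state calls, nothing engine-specific:)

```cpp
#include <GL/gl.h>   // or your platform's GL header

// Enable per-triangle backface culling in the driver/GPU.
void EnableBackfaceCulling()
{
    glEnable(GL_CULL_FACE);   // turn culling on
    glCullFace(GL_BACK);      // discard back-facing triangles
    glFrontFace(GL_CCW);      // counter-clockwise winding is front-facing (the GL default)
}
```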

I'll shut up now, before someone kindly removes me. Bye.

Chris
die or be died...i think

