
AxelF

"Backface-culling" for points?


Hi, is there a way in OpenGL to perform "backface culling" for points (for example, based on a given normal)? Could the GL_EXT_cull_vertex extension work? Thx [edited by - AxelF on April 29, 2003 6:22:51 AM]

Since points have no front or back face, backface culling has no real meaning for points (the same goes for lines).
You could probably send the points as a mesh (say, a set of triangles) and use glPolygonMode with GL_POINT. That way the triangles are rendered as points (i.e. only the triangle corners are rasterised), and you can perform backface culling on the triangles, so points get culled depending on the triangles' orientation.

Am I clear?
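A minimal sketch of that idea, assuming the cloud has already been triangulated into an indexed mesh (verts, indices and index_count are placeholders filled in elsewhere):

/* Sketch: draw a triangulated point cloud as points and let OpenGL's
 * backface culling drop the triangles (and hence their corner points)
 * that face away from the viewer. */
#include <GL/gl.h>

void draw_points_with_culling(const GLfloat *verts,      /* xyz per vertex   */
                              const GLuint  *indices,    /* 3 per triangle   */
                              GLsizei        index_count)
{
    glEnable(GL_CULL_FACE);                       /* cull back-facing triangles */
    glCullFace(GL_BACK);
    glPolygonMode(GL_FRONT_AND_BACK, GL_POINT);   /* rasterise only the corners */

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, indices);
    glDisableClientState(GL_VERTEX_ARRAY);

    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);    /* restore the default mode   */
}

Note that in GL_POINT polygon mode a vertex shared by several front-facing triangles is rasterised once per triangle, so this trades some redundant point draws for the culling.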

Absolutely clear, but that isn't appropriate for my problem, where I have an unorganized cloud of points.

Nevertheless, it makes sense to give points a normal direction and to perform backface culling on them, for example if you want to approximate a surface by a cloud of points. Each point would then carry a normal corresponding to the surface normal at that point.

Then if "surface" makes sense for your point cloud, I think that "mesh" also makes sense. And a mesh is just what you need for backface culling triangles.

(the con being, with mesh you represent points for "triangle corners" whereas surfaces'' points may rather represent "surface element centers")

Ok, but the case isn't so simple in my application. I know that the points belong to a surface, and I even have the normals, but I don't know the "connectivity" (I can't easily construct the mesh).

Just a few questions. What are these points going to be used for in this application? How many of them are there? Will the points move in relation to each other? Will points be created and deleted at runtime?

If you can't get that connectivity, then obviously this can't work. That's bad news, because it would have made the job much simpler, I think.

If you want to cull by normals, you could set up a vertex program that "sorts out" points whose normal is not facing the viewer. (I don't really know how to transform a normal to clip space, but I think it's a good approximation to just transform it to eye space.)

The first problem being: can you afford vertex programming? Is your target hardware ready for that?


CheeseGrater: why do you propose marching cubes? For connectivity?

quote:
Original post by vincoof
CheeseGrater: why do you propose marching cubes? For connectivity?


It's one of the better ways to generate a mesh from a field of 3D values, which seems to be the case here.

Alternatively, if the points are all guaranteed to lie on a convex surface, the Qhull algorithm would work nicely.

I'm not really sure that this will help because of the connectivity thing, but this is what I did in one of my programs:

Get the data from the vertex buffer.

Use Pythagoras to work out the distance between the camera and the first vertex of the face.

Use Pythagoras to work out the distance between the camera and (the first vertex of the face + its normal).

Compare the two results: if the first is smaller than the second, the face is facing away from the camera.

To speed this up you can skip the square root in the Pythagoras step, because it doesn't change the comparison in this example.

Hope that gives you some ideas even if it's not helpful.
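A small sketch of that comparison, with the square roots skipped as described (cam, vertex and normal are hypothetical 3-float arrays, all in the same space):

/* Sketch of the comparison above: the face counts as facing away when its
 * first vertex is closer to the camera than (vertex + normal). */
static int face_points_away(const float cam[3],
                            const float vertex[3],
                            const float normal[3])
{
    float d1 = 0.0f, d2 = 0.0f;
    int i;
    for (i = 0; i < 3; ++i) {
        float a = vertex[i] - cam[i];                  /* camera -> vertex          */
        float b = vertex[i] + normal[i] - cam[i];      /* camera -> vertex + normal */
        d1 += a * a;                                   /* squared distances only    */
        d2 += b * b;
    }
    return d1 < d2;    /* nearer than the tip of its normal => facing away */
}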

de_matt: in computer graphics there is a standard test that simply tells you whether a vertex+direction is facing the camera: compute the dot product between the eye-to-vertex vector (the vector from the camera to the vertex) and the normal. If the dot product is negative, then the vertex+direction is facing the camera.
Actually, that is what is done by OpenGL's face culling. (OpenGL computes a cross product over the face instead of using normals, though.)
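For reference, a minimal C version of that dot-product test (the parameter names are placeholders; all three vectors are assumed to be in the same space, e.g. world or eye space):

/* Sketch of the dot-product test above: the vector from the eye to the
 * point, dotted with the point's normal.  Negative => the point (and its
 * normal) faces the camera; positive or zero => it can be culled. */
static int point_faces_camera(const float eye[3],
                              const float point[3],
                              const float normal[3])
{
    float dot = 0.0f;
    int i;
    for (i = 0; i < 3; ++i)
        dot += (point[i] - eye[i]) * normal[i];
    return dot < 0.0f;
}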

That's true. That's what I have to do if OpenGL can't figure it out by itself.

Marching Cubes isn't an alternative because it requires a cube-like structure. And as I already mentioned, I don't have any structure on these points. Just a bunch of points.

And the whole thing has to be fast. I don't have time to compute something like a surface.

The rendering as it is now is OK, but I simply want to speed it up, because I render more points than necessary.

quote:
Original post by vincoof
Actually, that is what is done by OpenGL's face culling. (OpenGL computes a cross product over the face instead of using normals, though.)


No it's not: GL uses the winding order of the vertices, not the normals. Normals are not touched (or needed) for front/back face culling.

quote:
Original post by OrangyTang
Normals are not touched (or needed) for front/back face culling.


Indeed, that's what I'm saying.

The problem with a vertex program is that the overhead of program execution may cost more than just rendering the points themselves, unless the points are really big.

quote:
Original post by AxelF
Marching Cubes isn't an alternative because it requires a cube-like structure. And as I already mentioned, I don't have any structure on these points. Just a bunch of points.



That's actually not a requirement, but if you don't want to compute a surface the point is moot.

Anyhoo, I suspect that backface culling is not going to help you unless you have large point sprites that eat fillrate. Backface culling requires that every vertex be touched/processed; its primary purpose is to save on scan conversion/fillrate. When you're dealing with points, fillrate is pretty negligible compared to vertex transformation, so backface culling points doesn't make much sense.

You'll need to come up with a smarter method of point culling. If you have no structure at all to your points, you'll probably have to impose some, either by generating a surface or by sorting them somehow (see the sketch below).
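One hedged sketch of "imposing some structure" by sorting: bucket the points once by the dominant axis of their normal, so whole buckets can be skipped with a single test per frame. This is coarse and only approximate for perspective views, and all names here are hypothetical:

/* Hypothetical sketch: classify each point's normal into one of six
 * buckets (+/-X, +/-Y, +/-Z).  After sorting the points by bucket, each
 * bucket is a contiguous range that can be drawn or skipped with one
 * dot-product test of its representative direction against the view
 * direction. */
typedef struct { float x, y, z; } Vec3;

static int normal_bucket(Vec3 n)
{
    float ax = n.x < 0.0f ? -n.x : n.x;
    float ay = n.y < 0.0f ? -n.y : n.y;
    float az = n.z < 0.0f ? -n.z : n.z;
    if (ax >= ay && ax >= az) return n.x >= 0.0f ? 0 : 1;   /* +X / -X */
    if (ay >= az)             return n.y >= 0.0f ? 2 : 3;   /* +Y / -Y */
    return                           n.z >= 0.0f ? 4 : 5;   /* +Z / -Z */
}

After the one-time sort, each of the six ranges can be submitted with a single glDrawArrays call, so roughly half of the points never reach the GPU when the viewer sees the cloud from one side.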

[edited by - cheesegrater on April 29, 2003 1:36:15 PM]
