glDrawElements


I guess that depends what you mean...
To my understanding, vertices drawn with glDrawElements go through the graphics card's clipping and view transformations just like every other vertex. However, the point of frustum culling is to avoid overloading the card by sending it only the in-frustum vertices. If this is what you mean, then no, calling glDrawElements on a whole bunch of vertices will not frustum cull them automatically. You have to write your own culling code and pass only the geometry that falls inside the view frustum to OpenGL.

glDrawElements has nothing to do with frustum culling at all. glDrawElements is a way of batching draw calls to OpenGL, which is rather more efficient than rendering everything with immediate mode calls (glBegin/glEnd, etc.).

Believe it or not, every GL call is frustum-culling-aware, provided frustum culling is activated. Some drivers do that automatically, while in other cases you have to call glScissor(...) with the correct parameters.

Now, take this with a grain of salt. Hardware "frustum culling" as implemented in consumer video cards is very accurate but also very slow compared to what you can do on the CPU.

Hardware frustum culling operates on the generated fragments/pixels, but only AFTER the vertices have been transformed. This means you still have some geometric work to carry out.
Frustum culling on the CPU, on the other hand, can avoid this work completely.

In a transform-limited scenario from a program I built some time ago, hardware frustum culling gave me very limited speedups (less than 10%). I then discontinued the project, but I speculate CPU culling could have bought much greater savings, perhaps an order of magnitude more.

Quote:
Original post by Krohm
Believe it or not, every GL call is frustum-culling-aware, provided frustum culling is activated. Some drivers do that automatically, while in other cases you have to call glScissor(...) with the correct parameters.

Now, take this with a grain of salt. Hardware "frustum culling" as implemented in consumer video cards is very accurate but also very slow compared to what you can do on the CPU.

Hardware frustum culling operates on the generated fragments/pixels, but only AFTER the vertices have been transformed. This means you still have some geometric work to carry out.
Frustum culling on the CPU, on the other hand, can avoid this work completely.

In a transform-limited scenario from a program I built some time ago, hardware frustum culling gave me very limited speedups (less than 10%). I then discontinued the project, but I speculate CPU culling could have bought much greater savings, perhaps an order of magnitude more.


What you're talking about is clipping, not frustum culling. What do you mean, "provided it's activated"? Are you saying there's a token you can pass to glEnable to get it to work, something like glEnable( GL_FRUSTUM_CULLING )? You can't disable clipping; it is always active, and so it should be.

There was an extension called GL_EXT_clip_vertex (or similar) which clipped vertices, but it was never finished, and it is fairly pointless anyway.
