

Topics I've Started

Overhead of Using Degenerate Triangles

18 March 2013 - 10:40 AM

Hi everyone,


I have been wondering how big the overhead of using degenerate triangles with indexed triangle lists is.


I've been asking around on the NVIDIA DevZone forums, but haven't gotten any reply.




I also saw that old GD thread, which was interesting but did not give me a definite answer.





The situation is the following:

- Render a list of indexed triangles

- Do some fancy stuff in a vertex shader

- After the vertex shader, some vertices will be located at the same world position, meaning there will be some degenerate tris
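To make the setup concrete, here is a minimal CPU-side stand-in for the "fancy stuff" (the grid snapping is purely illustrative and not the actual shader): distinct input vertices can land on the same world position after the transform, which makes any triangle that uses two or more of them degenerate.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Illustrative stand-in for a vertex shader: snapping positions to a coarse
// grid means distinct input vertices can end up at identical positions,
// so some triangles become degenerate only after the transform.
Vec3 snapToGrid(Vec3 v, float cell) {
    return { std::round(v.x / cell) * cell,
             std::round(v.y / cell) * cell,
             std::round(v.z / cell) * cell };
}
```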



My Question:

Assume I knew beforehand which triangles would become degenerate, and that I could exclude them from rendering.

How big do you think the speedup would be in that case?
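For illustration, the exclusion I have in mind would be a CPU-side pass that rebuilds the index buffer; the `isKnownDegenerate` predicate here is hypothetical, standing in for whatever knowledge identifies those triangles in advance:

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Rebuild an indexed triangle list, skipping triangles that a (hypothetical)
// predicate flags as becoming degenerate after the vertex shader.
std::vector<uint32_t> filterIndices(
        const std::vector<uint32_t>& indices,
        const std::function<bool(uint32_t, uint32_t, uint32_t)>& isKnownDegenerate) {
    std::vector<uint32_t> filtered;
    filtered.reserve(indices.size());
    for (std::size_t t = 0; t + 2 < indices.size(); t += 3) {
        if (!isKnownDegenerate(indices[t], indices[t + 1], indices[t + 2])) {
            filtered.insert(filtered.end(),
                            { indices[t], indices[t + 1], indices[t + 2] });
        }
    }
    return filtered;
}
```

Whether a pass like this pays off is exactly what I'm unsure about.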


Please note that the indices of the coinciding vertices might still be different, so the GPU should not be able to discard a triangle before the vertex processing stage; it still has to transform every vertex before it can find out which triangles are degenerate. By the way, does the GPU even detect this case at all? Does anyone have a reference where a GPU manufacturer explains how the filtering of degenerate triangles works, and at which pipeline stage it is applied?
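My understanding (an assumption, not something I've seen documented) is that a triangle is degenerate in the rasterizer's sense when its post-transform screen-space area is zero, a check that can only happen at triangle setup, after vertex shading. A sketch of that test:

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Twice the signed area of a screen-space triangle (cross product of its
// edge vectors). Triangle setup can only evaluate this after the vertex
// shader has produced the final positions, which is why triangles that
// only become degenerate post-transform cannot be culled any earlier.
float signedArea2(Vec2 a, Vec2 b, Vec2 c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

bool isDegenerate(Vec2 a, Vec2 b, Vec2 c, float eps = 1e-6f) {
    return std::fabs(signedArea2(a, b, c)) < eps;
}
```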


Any help is appreciated. Thanks a lot in advance!