Hello,
I have stumbled across a behavior I do not understand when rendering many small triangles with an expensive fragment shader. I am currently writing a fairly expensive fragment shader (only computations, no texture fetches). The problem is that rendering gets considerably slower (by up to a factor of 5) when I rasterize the same number of fragments using many small primitives instead of a single large primitive. This happens even though I am almost certainly not vertex-shader limited (the vertex shader is trivial), and rendering the same number of primitives without the expensive fragment shader is very fast as well.
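To give an idea of the kind of shader I mean, structurally it is just an arithmetic loop along these lines (a simplified sketch, not my actual shader; the real computations are different, and NUM_LOOPS is the value I vary):

var fragSrc = [
  'precision highp float;',
  // NUM_LOOPS would be substituted into the source before compilation,
  // since ESSL 1.00 only guarantees loops with constant bounds.
  'const int NUM_LOOPS = 32;',
  'void main() {',
  '  vec3 acc = vec3(gl_FragCoord.xy / 512.0, 0.0);',
  '  for (int i = 0; i < NUM_LOOPS; ++i) {',
  '    acc = acc * 1.0001 + sin(acc.yzx);',  // pure arithmetic, no texture fetches
  '  }',
  '  gl_FragColor = vec4(acc, 1.0);',
  '}'
].join('\n');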
When I use a small number of primitives, the performance scales nicely linearly with the complexity of the shader. However, with a larger number of primitives (12000 quads, each approximately 6x6 pixels), the behavior becomes far less predictable and rendering gets much slower. The attached plot illustrates the performance: each line corresponds to a certain number of triangles (arranged as a regular x*x grid of quads rendered into a 512x512-pixel viewport), and the plot shows the rendering time as a function of the number of loop iterations performed in the shader.
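The geometry is simply a regular grid of screen-aligned quads, built roughly like this (buildQuadGrid is just an illustrative helper, not my actual code):

// Rough sketch of the n x n quad grid; each quad becomes two triangles.
// For example, n = 110 gives ~12000 quads covering the viewport.
function buildQuadGrid(n) {
  var positions = [];
  var step = 2.0 / n;  // clip-space size of one quad
  for (var y = 0; y < n; ++y) {
    for (var x = 0; x < n; ++x) {
      var x0 = -1.0 + x * step, y0 = -1.0 + y * step;
      var x1 = x0 + step,       y1 = y0 + step;
      // two triangles per quad
      positions.push(x0, y0,  x1, y0,  x1, y1,
                     x0, y0,  x1, y1,  x0, y1);
    }
  }
  return new Float32Array(positions);
}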
Can anyone explain this behavior? I am aware that smaller triangles cause some inefficiencies (e.g. the helper fragments along triangle edges that are shaded only so that finite differences can be computed), but can that explain a performance difference of more than a factor of 5 (for example at 32 loop iterations)? Is there some way to avoid this problem (apart from switching to deferred shading)?
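To make my intuition concrete: assuming the hardware shades fragments in 2x2 blocks, a 6x6-pixel quad aligned to that grid covers 9 such blocks; splitting it into two triangles along the diagonal means the roughly 3 blocks straddling the diagonal are shaded once per triangle, so about 12 blocks x 4 = 48 invocations for 36 visible pixels, an overhead of roughly 1.3x (perhaps around 2x with unfavorable alignment). That is still far from a factor of 5, which is why I suspect something else is going on.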
In case it is relevant: I am using WebGL in Chrome for the rendering, so the calls are translated by ANGLE to Direct3D. The experiments were performed on a GeForce GTX 680.
Best Regards,
Roland