Roland

  1. I have found the main reason for this behaviour. It seems it was due to the repeated upload of the vertex buffers prior to rendering. Astonishingly, this resulted in a drastic reduction of rendering performance when using an expensive pixel shader, even though there were no problems with a trivial one (as can be seen in the plot, where performance is good if no loops are performed in the shader).

     When I avoid the repeated uploads and only perform the rendering calls in the loop, the performance is nicely linear in the number of loops and scales with the number of triangles approximately as one would expect: rendering many small triangles takes about four times as long as rendering larger ones. A sketch of the difference follows after this list. Thank you for your help and suggestions.
  2. Ok, I would also assume that the GPU processes a larger number of fragments at the same time. At least in CUDA, as far as I know, you usually have groups of 32 threads that run in parallel (a warp), so I think this should also be the case for fragment shaders. What I am not sure about, however, is whether it is possible to fill these threads with pixels from different triangles. I would hope so, as otherwise GPUs would waste tremendously at the geometric complexity used today (e.g. with displacement mapping you get triangles of just a few pixels).

     I upload all vertices into one vertex buffer beforehand and then render the full batch of triangles in one drawArrays call (actually I render all triangles ten times, with ten identical calls, to increase the times for the benchmark; see the timing sketch after this list). So I would really hope that WebGL translates this in a straightforward fashion into corresponding DirectX calls. I might try the same experiment with OpenGL directly, but somehow I cannot imagine that this would make much of a difference.

     One other question I am not sure about is how the pixels are grouped into warps. Could it be that in the case of one primitive only neighbouring pixels are grouped, whereas in the case of many triangles pixels from different places are mixed together, reducing coherence and increasing the time lost to branching? However, I again cannot imagine that this would cause a difference by a factor of 5, as there is not that much branching in the shader (in the worst case, if all branches have to be executed, this should not cost much more than a factor of two).

     Best regards, Roland
  3. Thank you for the fast answer! I have been wondering whether it is this effect. As far as I could find out, GPUs render blocks of 2x2 pixels, as this facilitates the computation of derivatives via finite differences. If this is the case, the effect should be much smaller. For example, when rendering a 64x64 grid into 512x512 pixels, each quad is still 8x8 pixels. So in the worst case we should need 10x10 pixels plus those on the diagonal (altogether about 120 shaded pixels; the arithmetic is worked out in a sketch after this list). That should not be more than a factor of two. Furthermore, in my tests the quads should actually be perfectly aligned with the grid, as I have a power-of-two texture and a power-of-two grid of primitives. If the GPU actually uses even larger blocks, that might explain the effect, though.
  4. Hello,

     I have stumbled across a behavior I do not understand when rendering many small triangles with an expensive fragment shader. I am currently trying to write a quite expensive fragment shader (only computations, no texture fetches). The problem is that rendering gets considerably slower (by up to a factor of 5) when I rasterize the same number of fragments using a larger number of small primitives instead of a single larger primitive. This is the case even though I am probably not limited by the vertex shader (I use a trivial vertex shader), and rendering the same number of primitives without the expensive fragment shader is very fast, too.

     When I use a small number of primitives, the performance is nicely linear in the complexity of the shader. However, if I use a larger number of primitives (12000 quads, each approximately 6x6 pixels), the behavior is far more non-deterministic and the rendering gets much slower. Attached, I illustrate the performance in a plot. Each line corresponds to a certain number of triangles (ordered in a regular x*x grid of quads rendered into 512x512 pixels). The plot then shows the rendering time depending on the number of loops performed in the shader (a sketch of the kind of shader I use follows after this list).

     Can anyone explain this behavior? I am aware that smaller triangles may cause some inefficiencies (e.g. due to fragments at the boundaries of the triangles needed to compute the finite differences), but can that explain a difference in performance by a factor of more than 5 (for example at 32 loops)? Is there some way to avoid this problem (apart from using deferred shading)?

     In case it is relevant: I am using WebGL in Chrome for the rendering, so the calls are translated by ANGLE to DirectX. The experiments were performed on a GeForce GTX 680.

     Best regards, Roland
  5. A while ago, I tried to create properties using operator overloading. When you create a nested subclass for every property and overload its assignment and, say, its int conversion operator, you can create an effect quite similar to properties. I only had a problem in printf statements, because the int conversion operator wasn't called; you had to insert a cast manually, since printf's prototype (printf(const char *, ...), or something similar) doesn't expect an int value. Does anyone have an idea how to circumvent this? Is there a way to enforce the use of the int operator? Any idea whether this approach will lead to other problems? (A sketch of the idea follows after this list.)
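
Regarding post 1: a minimal sketch of the difference described there, assuming a WebGL context gl, a compiled shader program already bound, and placeholder names (vertexData, vertexCount) that are not from the original post.

    // Sketch (TypeScript/WebGL). Assumes gl, vertexData and vertexCount
    // exist; these names are placeholders, not from the original post.
    declare const gl: WebGLRenderingContext;
    declare const vertexData: Float32Array;
    declare const vertexCount: number;

    const buffer = gl.createBuffer();

    // Slow variant, as reported in post 1: re-uploading the vertex data
    // before every draw call, which became drastically slower once the
    // fragment shader was expensive.
    function renderWithReupload(): void {
      gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
      gl.bufferData(gl.ARRAY_BUFFER, vertexData, gl.STATIC_DRAW); // repeated upload
      gl.drawArrays(gl.TRIANGLES, 0, vertexCount);
    }

    // Fast variant: upload the vertex data once, then only issue draw
    // calls inside the benchmark loop.
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER, vertexData, gl.STATIC_DRAW); // one-time upload
    function renderUploadOnce(): void {
      gl.drawArrays(gl.TRIANGLES, 0, vertexCount);
    }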
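
Regarding post 2: WebGL calls return before the GPU has finished, so a benchmark over ten identical drawArrays calls needs a synchronization point before stopping the clock. A sketch of one common way to force that (a one-pixel readPixels); again the surrounding names are placeholders.

    // Sketch (TypeScript/WebGL): timing ten identical drawArrays calls,
    // as in the benchmark from post 2. readPixels forces the GPU to
    // finish; without a sync point we would only time call submission.
    declare const gl: WebGLRenderingContext;
    declare const vertexCount: number;

    function timeBatch(): number {
      const t0 = performance.now();
      for (let i = 0; i < 10; i++) {
        gl.drawArrays(gl.TRIANGLES, 0, vertexCount);
      }
      const pixel = new Uint8Array(4);
      gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel); // sync point
      return performance.now() - t0; // milliseconds
    }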
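
Regarding post 3: the worst-case estimate given there can be made concrete. This is my own back-of-the-envelope model, not from the posts, assuming 2x2-pixel shading quads and screen-space quads split into two triangles along the diagonal.

    // Sketch (TypeScript): rough worst-case count of shaded pixels for a
    // w-by-h pixel quad rasterized with 2x2-pixel shading quads. If the
    // quad is misaligned with the 2x2 grid, each dimension can touch
    // ceil((n + 1) / 2) * 2 pixels; the diagonal edge that splits the
    // quad into two triangles adds roughly one extra 2x2 block per row.
    function worstCaseShadedPixels(w: number, h: number): number {
      const touchedW = Math.ceil((w + 1) / 2) * 2; // e.g. 8 -> 10
      const touchedH = Math.ceil((h + 1) / 2) * 2;
      const diagonalExtra = 2 * touchedH;          // extra blocks along the diagonal
      return touchedW * touchedH + diagonalExtra;
    }

    // 64x64 grid over 512x512 pixels, i.e. 8x8-pixel quads (post 3):
    // 10 * 10 + 20 = 120 shaded pixels for 64 visible ones, under 2x.
    console.log(worstCaseShadedPixels(8, 8)); // 120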
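
Regarding post 4 (the original question): the shaders themselves were not posted, so the following is only a stand-in for the kind of setup described, a trivial vertex shader plus a fragment shader whose cost is controlled by a loop count. GLSL ES 1.00 wants constant loop bounds, hence the count is patched into the source per measurement.

    // Sketch (TypeScript with embedded GLSL). Trivial vertex shader, as
    // described in the question.
    const vsSource = `
      attribute vec2 aPosition;
      void main() {
        gl_Position = vec4(aPosition, 0.0, 1.0);
      }
    `;

    // Fragment shader whose cost grows with the loop count: pure
    // arithmetic, no texture fetches. The loop body is a made-up
    // placeholder for the expensive shader from the post.
    function fsSource(loops: number): string {
      return `
        precision highp float;
        void main() {
          vec2 v = gl_FragCoord.xy / 512.0;
          for (int i = 0; i < ${loops}; i++) {
            v = fract(v * 1.37 + 0.11); // arbitrary ALU work
          }
          gl_FragColor = vec4(v, 0.0, 1.0);
        }
      `;
    }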
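
Regarding post 5: this one is about C++, so the sketch below is C++ (class and member names are made up). The ellipsis of printf performs no user-defined conversions, which is why the int conversion operator is never considered there; as far as I know there is no way to force it, only to make the conversion explicit or go through a typed variable.

    // Sketch (C++): a "property" as a nested class overloading assignment
    // and conversion to int, as described in the post.
    #include <cstdio>

    class Widget {
    public:
        class IntProperty {
            int value = 0;
        public:
            IntProperty& operator=(int v) { value = v; return *this; }
            operator int() const { return value; }  // conversion operator
        };
        IntProperty width;
    };

    int main() {
        Widget w;
        w.width = 42;       // operator= is called
        int i = w.width;    // operator int() is called

        // printf's "..." does not invoke user-defined conversions, so
        // passing w.width directly is not valid; the cast is required.
        std::printf("%d\n", static_cast<int>(w.width));
        std::printf("%d\n", i);  // or go through a typed variable/accessor
        return 0;
    }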