About slarti

  1. The same question was posted by somebody on Stack Overflow recently. The top answer also makes an interesting suggestion: for any dependent texture reads, calculate the coordinates in the vertex shader rather than the fragment shader. This lets the GPU optimize the texture fetches in the fragment shader through prefetching and caching. [url]http://stackoverflow...rociously-slow/[/url]
  2. Thanks! I'd already looked into separable filters, but the other two methods seem promising, especially summed-area tables, which I think are the same as integral images. I was also wondering: since the fetch offsets are the same for every fragment, the same texels get fetched repeatedly by neighboring fragments. Is there any caching technique we can make use of? (I agree it's tough, because the fragments are processed in parallel.)
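The summed-area-table idea mentioned above can be sketched in plain Python (names and the pure-CPU formulation are illustrative; on the GPU the table would be built in a separate pass and sampled as a texture). Assuming the 16x16 neighborhood is weighted uniformly, four table lookups replace 256 fetches:

```python
def build_sat(img):
    """Build a summed-area table (integral image) for a 2D list of
    floats: sat[y][x] holds the sum of img over rows 0..y, cols 0..x."""
    h, w = len(img), len(img[0])
    sat = [[0.0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0.0
        for x in range(w):
            row_sum += img[y][x]
            # Add the running row sum to the column total from the row above.
            sat[y][x] = row_sum + (sat[y - 1][x] if y > 0 else 0.0)
    return sat

def box_sum(sat, x0, y0, x1, y1):
    """Sum of img over the inclusive rectangle [x0..x1] x [y0..y1]
    using at most four SAT lookups instead of one fetch per texel."""
    total = sat[y1][x1]
    if x0 > 0:
        total -= sat[y1][x0 - 1]
    if y0 > 0:
        total -= sat[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        total += sat[y0 - 1][x0 - 1]  # re-add the doubly subtracted corner
    return total
```

Note the limitation implied in the thread: this gives constant-time *box* sums (and, with a few extra lookups, piecewise-constant weights), but not arbitrary per-texel weighting.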
  3. Hi! I have a fragment shader that has to sample a grid of texels (about 16x16) around the center texel of a texture, and this happens for every texel in the texture. As expected, this many texture fetches per fragment hurts the performance of the program. I was wondering whether there are any ways to optimize these fetches. I understand that if the fetched texels feed a simple linear weighting operation (such as a Gaussian filter), one can reduce the number of fetches by using GL_LINEAR sampling and sampling between two texels rather than at their actual positions. But are there any other methods for operations more complicated than weighted sums?
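The GL_LINEAR trick described in the question can be simulated in plain Python to show why it works (function names are illustrative; `linear_fetch` stands in for the hardware's bilinear filter in one dimension). Two taps with weights w0 and w1 collapse into a single fetch at an offset of w1/(w0+w1) between the texels, scaled by w0+w1:

```python
import math

def linear_fetch(tex, pos):
    """Emulate GL_LINEAR on a 1D 'texture': blend the two texels
    nearest the fractional position `pos`."""
    i = math.floor(pos)
    f = pos - i
    i = max(0, min(i, len(tex) - 1))
    j = min(i + 1, len(tex) - 1)
    return tex[i] * (1.0 - f) + tex[j] * f

def two_tap_direct(tex, i, w0, w1):
    # Reference: two nearest-neighbor fetches, each weighted separately.
    return w0 * tex[i] + w1 * tex[i + 1]

def two_tap_linear(tex, i, w0, w1):
    # One linearly filtered fetch at a position chosen so the hardware
    # interpolation applies the same two weights for us.
    pos = i + w1 / (w0 + w1)
    return (w0 + w1) * linear_fetch(tex, pos)
```

Both functions return the same value, which is why an n-tap Gaussian can be done in roughly n/2 linearly filtered fetches. As the question notes, this only helps when the operation is a weighted sum; it cannot express nonlinear per-texel operations such as a median or bilateral filter.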