Intrinsics to improve performance of interpolation / mix functions


Hello all

In pixel shaders (SM3 to 5) I often do some mixing / interpolation between two or more vectors (e.g. colors) or scalars (e.g. luminances).

A simple example of such code, as used for example in a Gaussian-blur-like implementation, looks like this:

float4 mixColor = (tex2D(colorSampler, uv-blur)+tex2D(colorSampler, uv+blur)) / 2.f;

I would like to improve that, performance- and instruction-limit-wise, by making use of hardware-supported HLSL intrinsic functions, but I'm not sure what would work best here. lerp(), for example? I think I'd basically need the opposite of the mad() intrinsic. Are there better ways?
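For illustration, here is the same average written with lerp() - just a sketch using the same names as above, to show what I mean:

// Same average as above, expressed via the lerp() intrinsic:
// lerp(a, b, 0.5) expands to a + 0.5 * (b - a), so I'd expect roughly
// the same instruction count as (a + b) * 0.5.
float4 a = tex2D(colorSampler, uv - blur);
float4 b = tex2D(colorSampler, uv + blur);
float4 mixColor = lerp(a, b, 0.5f);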


It is pretty difficult to make a simple average of two values faster. Your compiler probably changes the division by 2 into a multiply by 0.5, and that's about the end of your optimization possibilities - at least with such a short piece of code.

Gaussian blur can be implemented better with a compute shader - with a different algorithm.

Cheers!

I don't think there's anything available that the compiler isn't already using, and even if there were, there's no guarantee that it would actually map to a single instruction once it's JIT-compiled for your GPU.

Thanks to you both.

I'm doing loads of such averages, not just a single one - that was just an example. Typically I'm processing e.g. 128 to 1024 iterations in a for-loop (depending on the effect I implement). That's why performance considerations come into play. Compute shaders are out of scope in my case; I have to do everything in plain old HLSL pixel shaders.

As for what the compiler already optimizes, this is primarily what interests me. My question might not have been clear enough. I'm less interested in what could theoretically improve performance than in which commands / functions or best practices lead to the best-optimized compiled code.

I am well aware that different GPU hardware (or firmware) might JIT-compile intermediate code differently, but I'm interested to know what *most* common GPUs / systems will do.

E.g. it wouldn't help to use mad() in my shaders if most systems would still compile that into two instructions (mul, add) instead of a single mad. The same goes for lerp() and the like. The noise() function, for example, is not supported by most GPUs although it is present in the HLSL standard.

Of course I can read what Microsoft writes on their HLSL intrinsic functions pages. But since reality differs from standard definitions such as HLSL's, I'm interested to learn how to write my HLSL shaders to benefit from hardware-supported features as much as possible.
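For what it's worth, one way I can check what a particular compiler does is to compare the two forms side by side and look at the assembly listing fxc emits (e.g. via its /Fc option). A minimal sketch with made-up function names - mad() is documented for Shader Model 5 targets, so this assumes compiling for something like ps_5_0:

// Hypothetical helpers purely for comparing compiler output
float4 MulAddExplicit(float4 x, float4 a, float4 b)
{
    return x * a + b;     // written as a separate multiply and add
}

float4 MulAddIntrinsic(float4 x, float4 a, float4 b)
{
    return mad(x, a, b);  // written with the intrinsic
}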

http://www.humus.name/index.php?page=Articles&ID=6

http://www.humus.name/index.php?page=Articles&ID=9

Cool, thanks!

There's a standard trick to improve the performance of blurs by using the bilinear filtering hardware to halve the number of texture fetch instructions required.

http://rastergrid.com/blog/2010/09/efficient-gaussian-blur-with-linear-sampling/
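Roughly, the idea looks like this - just a sketch rather than a drop-in implementation. The offsets and weights below are the commonly quoted values for a 9-tap Gaussian collapsed into 5 bilinear fetches, and texelSize is assumed to be 1.0 / texture width for a horizontal pass (colorSampler as in your earlier snippet):

// Offsets/weights for a 9-tap Gaussian done with 5 bilinear fetches
static const float offsets[3] = { 0.0, 1.3846153846, 3.2307692308 };
static const float weights[3] = { 0.2270270270, 0.3162162162, 0.0702702703 };

float4 BlurHorizontal(float2 uv, float texelSize)
{
    // Centre tap
    float4 color = tex2D(colorSampler, uv) * weights[0];
    for (int i = 1; i < 3; ++i)
    {
        // Each offset lands between two texels, so the bilinear filter
        // averages the neighbouring samples "for free"
        float2 offset = float2(offsets[i] * texelSize, 0.0);
        color += tex2D(colorSampler, uv + offset) * weights[i];
        color += tex2D(colorSampler, uv - offset) * weights[i];
    }
    return color;
}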

Thanks, I sort of knew that already, but it was a good reminder :)

On the topic: Is there a simple and fast way to get and compare the average of the components of a vector, e.g. luminance of a color? I'm often doing things like this:



float lum1 = (col1.r+col1.g+col1.b)/3.f;
float lum2 = (col2.r+col2.g+col2.b)/3.f;

if (lum2-lum1 > threshold) { ... }


Which of course generates loads of instructions for a simple average computation and comparison. Is there a simpler / leaner / more elegant way?

Well, you can easily sum the components (and do the division) in a single instruction:


float lum1 = dot(col1.rgb, float3(1.0f / 3.0f, 1.0f / 3.0f, 1.0f / 3.0f));
...
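And if you only need the difference of the two averages, you can fold the subtraction in as well, since the difference of the averages equals the average of the differences - a sketch, reusing the names from your snippet:

// (avg(col2) - avg(col1)) == avg(col2 - col1), so one dot product
// and one compare cover the whole test
static const float3 kThird = float3(1.0f / 3.0f, 1.0f / 3.0f, 1.0f / 3.0f);

if (dot(col2.rgb - col1.rgb, kThird) > threshold) { ... }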

This topic is closed to new replies.
