Intrinsics to improve performance of interpolation / mix functions

Some vector-based GPUs have a horizontal add instruction that does this. Writing x.r+x.g+x.b+x.a (or dot(x,(float4)1)) will cause FXC to emit a bunch of add instructions (or a dot), but the driver will optimize them into a single horizontal add.

Modern GPUs are scalar rather than vector, though, so a float4 is really processed as four independent scalar floats... So I'm not sure whether vector instructions like horizontal add still exist. They might, because vendors were making a big deal about supporting SAD, etc...
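For reference, the two equivalent spellings mentioned above would look roughly like this in HLSL (x here is just a placeholder float4):

    // Horizontal sum of a float4. FXC emits three adds for the first form
    // (or a dot for the second), and a vector GPU's driver can collapse
    // either into a single horizontal-add instruction.
    float sumAdds = x.r + x.g + x.b + x.a;
    float sumDot  = dot(x, float4(1, 1, 1, 1));   // same result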

Yeah - I mean dot(x, 1) is a horizontal add already. On older vector GPUs, that's pretty much the best you're going to do. On GCN (and other modern scalar GPUs), it basically doesn't matter - you're doing the same amount of math either way (as long as you factor out the common operations). Also, you can re-write the original code slightly to be faster (even on scalar GPUs):


float lum1 = (col1.r + col1.g + col1.b);
float lum2 = (col2.r + col2.g + col2.b);
if (lum2 - lum1 > (3 * threshold))  // Pre-compute 3 * threshold, if possible, too!
{ ... }

This is the most efficient option I could find - two instructions to compute the value to test, by rearranging it so you do the subtract first. Rolling the 1/3 into the dot also absorbs the factor of 3 on the threshold, so you compare against threshold directly. Unless I've messed up somewhere it should give you the same answer :)


    float test = dot(col2-col1, float3(1.0f / 3.0f, 1.0f / 3.0f, 1.0f / 3.0f));
    if (test > threshold) { ... }

Thank you guys! I'll try those tweaks.

Btw, when talking about "thinking low level while coding high level", how can I determine how many atomic GPU instructions a certain GPU's driver / JIT compiler produces from a given bytecode-compiled shader?

I mean, examining fxc's assembly output for performance reasons doesn't seem to make much sense if the GPU instructions produced by the JIT compiler differ much from it (e.g. if some assembly code contains, say, 10 "bytecode" instructions but the driver turns those into 15 GPU instructions, while a different bytecode with 15 instructions produces the same number of GPU instructions).

It's impossible to do without tools from the GPU vendor. AMD provides such a tool; Nvidia does not.


Thanks. But there seem to be at least some heuristics, rules of thumb, or basic information about a particular GPU's instruction set, right?

I mean, people seem to know whether a GPU has a horizontal add instruction or not, so I guess there are other instructions that I could specifically target with particular HLSL constructs, and I'm interested in learning about those.

Well, the difficulty in knowing that, and keeping track of which GPU can do which operations, is part of the reason that newer architectures aren't vectorized (in the same way). Rather than doing instruction-level parallelism (with things like dot product and multiply-add on a float4), all operations are standard scalar floating point. Instead, the GPU runs the same scalar math on more threads in parallel (clusters of pixels or vertices). The end result is that - in general - micro-optimizing the order of operations is less important, because you can really just think about the naturally correct math to get what you need.

Example: It used to make sense to accumulate lighting information from four lights in parallel, in a transposed fashion. Doing work per light meant that you were operating on three of the four channels (rgb/xyz) while doing your math. That was basically wasting 1/4 of your potential ALU. So you'd accumulate all of the red contribution in one variable, green in another, etc... And you'd do it for four lights (one in each channel of those variables). That got you up to a 33% speedup.
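A rough sketch of what that transposed layout looked like in HLSL (the light directions L0..L3, the normal N, and the per-channel light colours lightColorR/G/B are made-up names for illustration):

    // One float4 per colour channel; each component belongs to one of the four lights.
    float4 ndotl = saturate(float4(dot(N, L0), dot(N, L1), dot(N, L2), dot(N, L3)));

    float4 redAccum   = ndotl * lightColorR;   // red contribution of lights 0..3
    float4 greenAccum = ndotl * lightColorG;   // green contribution of lights 0..3
    float4 blueAccum  = ndotl * lightColorB;   // blue contribution of lights 0..3

    // Collapse each channel with a horizontal add (the dot-with-one trick from above).
    float3 diffuse = float3(dot(redAccum,   float4(1, 1, 1, 1)),
                            dot(greenAccum, float4(1, 1, 1, 1)),
                            dot(blueAccum,  float4(1, 1, 1, 1)));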

On the scalar GPUs, that kind of optimization does nothing - you're still doing the same number of basic operations to accumulate the final result, and because everything is broken down to scalar pieces (a 3 component dot product takes a multiply and two multiply-adds), the total number of operations doesn't change.
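In other words, the scalar sequence a modern GPU ends up executing for a float3 dot product is roughly this (a and b being placeholder float3s):

    // dot(a, b) for float3 on a scalar architecture:
    // one multiply followed by two multiply-adds, however the HLSL was written.
    float d = a.x * b.x;    // mul
    d = a.y * b.y + d;      // mad
    d = a.z * b.z + d;      // mad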

I think the presentations by Humus give a good level of insight into what the compiler can/can't do, and what's typically fast. The #1 thing to remember is that GPUs have a multiply-then-add instruction, but not add-then-multiply. Always move your constant biases through your scale factors to take advantage of that. And things like combining lots of linear operations into a single matrix operation is going to get you lots more performance than worrying about whether to use lerp or not - if lerp correctly describes the math you want to do, you need to trust that it's going to be implemented using the best available instructions on each GPU.
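As a small example of the multiply-add point, assuming x is an input and scale/bias are constants (names made up here), the second form maps onto a single mad per component while the first needs an add followed by a multiply:

    float y1 = (x + bias) * scale;          // add, then multiply: two instructions
    float y2 = x * scale + (bias * scale);  // bias * scale folds into one constant,
                                            // leaving a single multiply-add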

Hey thanks osmanb for those explanations and hints. I'll consider them.

What about branches (in the sense of if-then-else sections) and loops? Both are known to potentially hurt performance quite heavily, so are there guidelines for speeding up that kind of control flow, i.e. what sort of optimizations on the HLSL side help the compiler generate well-performing GPU instructions?

