

Intrinsics to improve performance of interpolation / mix functions



#1 Meltac   Members   -  Reputation: 424


Posted 04 September 2014 - 09:29 AM

Hello all

 

In pixel shaders (SM3 to 5) I often do some mixing / interpolation between two or more vectors (e.g. colors) or scalars (e.g. luminances).

 

A simple example of common code, as used for instance in a Gaussian-blur-like implementation, looks like this:

 

float4 mixColor = (tex2D(colorSampler, uv-blur)+tex2D(colorSampler, uv+blur)) / 2.f;

 

I would like to improve that, performance- and instruction-limit-wise, by making use of hardware-supported HLSL intrinsic functions, but I'm not sure what would work best here. lerp(), for example? I think I'd basically need the opposite of the mad() command. Are there better ways?
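For illustration, the lerp() form of that average would be the following (a sketch only; lerp(a, b, 0.5f) expands to a + 0.5f * (b - a), which is the same mean):

float4 mixColor = lerp(tex2D(colorSampler, uv - blur),
                       tex2D(colorSampler, uv + blur),
                       0.5f);  // lerp with t = 0.5 yields the average of the two fetches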




#2 kauna   Crossbones+   -  Reputation: 2744


Posted 04 September 2014 - 05:48 PM

It is pretty difficult to make a simple average of two values any faster. Your compiler probably changes the division by 2 into a multiplication by 0.5, and that's about the end of your optimization possibilities - at least with such short code.
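For illustration, a sketch of the strength-reduced form the compiler will typically emit (not verified against any particular driver):

// Division by the constant 2 becomes a multiply by its reciprocal;
// the add and the multiply can then usually be fused into a single mad.
float4 mixColor = (tex2D(colorSampler, uv - blur) + tex2D(colorSampler, uv + blur)) * 0.5f;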

 

Gaussian blur can be implemented better with a compute shader - with a different algorithm.

 

Cheers!



#3 MJP   Moderators   -  Reputation: 11569


Posted 04 September 2014 - 11:40 PM

I don't think there's anything available that the compiler won't already be using, and even if there were, there's no guarantee that it would actually map to a single instruction once it's JIT-compiled for your GPU.



#4 Meltac   Members   -  Reputation: 424


Posted 05 September 2014 - 01:59 AM

Thanks to you both.

 

I'm doing loads of such averages, not just a single one. That was just an example. Typically I'm processing e.g. 128 to 1024 iterations in a for-loop (depending on the effect I'm implementing). That's why performance considerations come into play. Compute shaders are out of scope in my case; I have to do everything in plain old HLSL pixel shaders.

 

As for what the compiler already optimizes, this is primarily what interests me. My question might not have been clear enough. I'm less interested in what could theoretically improve performance than in which commands / functions or best practices lead to the best-optimized compiled code.

 

I am well aware that different GPU hardware (or firmware) might JIT-compile intermediate code differently, but I'm interested to know what *most* common GPUs / systems will do.

 

E.g. it wouldn't help to use mad() in my shaders if most systems would still compile that command into two instructions (mul, add) instead of a single mad. The same goes for lerp() and the like. The noise() function, for example, is not supported by most GPUs although it is present in the HLSL standard.

 

Of course I can read what Microsoft writes on their HLSL intrinsic functions pages. But since reality differs from standard definitions such as HLSL's, I'm interested in learning how to write my HLSL shaders to benefit from hardware-supported features as much as possible.


Edited by Meltac, 05 September 2014 - 02:00 AM.


#5 Hodgman   Moderators   -  Reputation: 30965


Posted 05 September 2014 - 06:20 AM

http://www.humus.name/index.php?page=Articles&ID=6

http://www.humus.name/index.php?page=Articles&ID=9

#6 Meltac   Members   -  Reputation: 424


Posted 05 September 2014 - 07:20 AM

Cool, thanks!



#7 Adam_42   Crossbones+   -  Reputation: 2563


Posted 05 September 2014 - 06:09 PM

There's a standard trick to improve the performance of blurs by using the bilinear filtering hardware to halve the number of texture fetch instructions required.

 

http://rastergrid.com/blog/2010/09/efficient-gaussian-blur-with-linear-sampling/
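A sketch of that trick in ps_3_0-style HLSL, using the 9-tap weights and offsets from the linked article (GaussianBlur1D and texelStep are illustrative names; texelStep is one texel in the blur direction):

static const float weight[3] = { 0.2270270270f, 0.3162162162f, 0.0702702703f };
static const float offset[3] = { 0.0f, 1.3846153846f, 3.2307692308f };

float4 GaussianBlur1D(sampler2D s, float2 uv, float2 texelStep)
{
    // Five bilinear fetches stand in for nine discrete taps: each
    // off-center fetch lands between two texel centers, so the
    // filtering hardware blends them with the right Gaussian ratio.
    float4 result = tex2D(s, uv) * weight[0];
    for (int i = 1; i < 3; ++i)
    {
        result += tex2D(s, uv + offset[i] * texelStep) * weight[i];
        result += tex2D(s, uv - offset[i] * texelStep) * weight[i];
    }
    return result;
}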



#8 Meltac   Members   -  Reputation: 424


Posted 08 September 2014 - 02:15 PM

Thanks, I already sort of knew that, but it was a good reminder :)



#9 Meltac   Members   -  Reputation: 424


Posted 16 September 2014 - 07:15 AM

On the topic: Is there a simple and fast way to get and compare the average of the components of a vector, e.g. the luminance of a color? I'm often doing things like this:


float lum1 = (col1.r+col1.g+col1.b)/3.f;
float lum2 = (col2.r+col2.g+col2.b)/3.f;

if (lum2-lum1 > threshold) { ... }


Which of course generates loads of instructions for a simple average computation and comparison. Is there a simpler / leaner / more elegant way?


Edited by Meltac, 16 September 2014 - 07:15 AM.


#10 osmanb   Crossbones+   -  Reputation: 1608


Posted 16 September 2014 - 01:53 PM

Well, you can easily sum the components (and do the division) in a single instruction:

float lum1 = dot(col1.rgb, float3(1.0f / 3.0f, 1.0f / 3.0f, 1.0f / 3.0f));
...


#11 Hodgman   Moderators   -  Reputation: 30965


Posted 17 September 2014 - 04:46 AM

Some vector-based GPUs have a horizontal add instruction that does this. Writing x.r+x.g+x.b+x.a (or dot(x,(float4)1)) will cause FXC to emit a bunch of add instructions (or a dot), but the driver will optimize them into a single horizontal add.

Modern GPUs are scalar, not vector though, so a float4 is the same as a single float really... So I'm not sure if vector instructions like horizontal add exist. They might, because vendors were making a big deal about supporting SAD, etc...

#12 osmanb   Crossbones+   -  Reputation: 1608


Posted 17 September 2014 - 07:42 AM

Yeah - I mean dot(x, 1) is a horizontal add already. On older vector GPUs, that's pretty much the best you're going to do. On GCN (and other modern scalar GPUs), it basically doesn't matter - you're doing the same amount of math either way (as long as you factor out the common operations). Also, you can rewrite the original code slightly to be faster (even on scalar GPUs):

float lum1 = (col1.r + col1.g + col1.b);
float lum2 = (col2.r + col2.g + col2.b);
if (lum2 - lum1 > (3 * threshold))  // Pre-compute 3 * threshold, if possible, too!
{ ... }


#13 Adam_42   Crossbones+   -  Reputation: 2563


Posted 17 September 2014 - 01:52 PM

This is the most efficient option I could find. Two instructions to compute the value to test, by rearranging it so you do the subtract first. Unless I've messed up somewhere it should give you the same answer :)

    float test = dot(col2-col1, float3(1.0f / 3.0f, 1.0f / 3.0f, 1.0f / 3.0f));
    if (test > threshold) { ... }


#14 Meltac   Members   -  Reputation: 424


Posted 18 September 2014 - 04:04 PM

Thank you guys! I'll try those tweaks.

 

Btw, when talking about "thinking low level while coding high level", how can I determine how many atomic GPU instructions a certain GPU's driver / JIT compiler produces from a given bytecode-compiled shader?

 

I mean, examining fxc's assembly output for performance reasons doesn't seem to make much sense if the GPU instructions produced by the JIT compiler differ much from it (e.g. if some assembly code contains, say, 10 "bytecode" instructions but the driver makes 15 GPU instructions out of those, while a different bytecode with 15 instructions produces the same number of GPU instructions).


Edited by Meltac, 18 September 2014 - 04:05 PM.


#15 MJP   Moderators   -  Reputation: 11569


Posted 18 September 2014 - 04:24 PM

Thank you guys! I'll try those tweaks.

 

Btw, when talking about "thinking low level while coding high level", how can I determine how many atomic GPU instructions a certain GPU's driver / JIT compiler produces from a given bytecode-compiled shader?

 

I mean, examining fxc's assembly output for performance reasons doesn't seem to make much sense if the GPU instructions produced by the JIT compiler differ much from it (e.g. if some assembly code contains, say, 10 "bytecode" instructions but the driver makes 15 GPU instructions out of those, while a different bytecode with 15 instructions produces the same number of GPU instructions).

 

It's impossible to do without tools from the GPU vendor. AMD provides such a tool, Nvidia does not.



#16 Meltac   Members   -  Reputation: 424


Posted 19 September 2014 - 02:08 PM


It's impossible to do without tools from the GPU vendor. AMD provides such a tool, Nvidia does not.

 

Thanks. But there seem to be at least some heuristics, experience, or basic information about the particular instruction set of a GPU, right?

 

I mean, people seem to know whether a GPU has a horizontal add instruction or not, so I guess there are other instructions that I could specifically target using particular HLSL constructs, and I'm interested in learning about those.



#17 osmanb   Crossbones+   -  Reputation: 1608


Posted 20 September 2014 - 08:45 AM

Well, the difficulty in knowing that, and keeping track of which GPU can do which operations, is part of the reason that newer architectures aren't vectorized (in the same way). Rather than doing instruction-level parallelism (with things like dot product and multiply-add on a float4), all operations are standard scalar floating point. Instead, the GPU runs the same scalar math on more threads in parallel (clusters of pixels or vertices). The end result is that - in general - micro-optimizing the order of operations is less important, because you can really just think about the naturally correct math to get what you need.

 

Example: It used to make sense to accumulate lighting information from four lights in parallel, in a transposed fashion. Doing work per light meant that you were operating on three of the four channels (rgb/xyz) while doing your math. That was basically wasting 1/4 of your potential ALU. So you'd accumulate all of the red contribution in one variable, green in another, etc... And you'd do it for four lights (one in each channel of those variables). That got you up to a 33% speedup.
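A hedged sketch of that transposed layout (all names are illustrative, not from this thread; lightsR/G/B each hold one color channel of four lights, atten their four scalar attenuation terms):

float3 AccumulateFourLights(float4 lightsR, float4 lightsG, float4 lightsB, float4 atten)
{
    // Each dot sums one color channel across all four lights at once,
    // keeping every lane of a 4-wide vector ALU busy.
    return float3(dot(lightsR, atten),
                  dot(lightsG, atten),
                  dot(lightsB, atten));
}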

 

On the scalar GPUs, that kind of optimization does nothing - you're still doing the same number of basic operations to accumulate the final result, and because everything is broken down to scalar pieces (a 3 component dot product takes a multiply and two multiply-adds), the total number of operations doesn't change.

 

I think the presentations by Humus give a good level of insight into what the compiler can and can't do, and what's typically fast. The #1 thing to remember is that GPUs have a multiply-then-add instruction, but not add-then-multiply; always move your constant biases through your scale factors to take advantage of that. And things like combining lots of linear operations into a single matrix operation are going to get you much more performance than worrying about whether to use lerp or not - if lerp correctly describes the math you want to do, you need to trust that it's going to be implemented using the best available instructions on each GPU.
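As a sketch of that bias-through-scale rewrite (ScaleBias and its parameters are illustrative names; mad() is the SM5 intrinsic, and on earlier targets the plain x * scale + bias * scale form lets the compiler fuse it):

float4 ScaleBias(float4 x, float4 scale, float4 bias)
{
    // (x + bias) * scale would compile to an add followed by a mul.
    // Distributing the scale yields a single mad, and bias * scale
    // folds to a constant when both are literals or uniforms.
    return mad(x, scale, bias * scale);
}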


Edited by osmanb, 20 September 2014 - 08:46 AM.


#18 Meltac   Members   -  Reputation: 424


Posted 20 September 2014 - 01:20 PM

Hey thanks osmanb for those explanations and hints. I'll consider them.

 

What about branches (in the sense of if-then-else code sections) and loops? Both are known to potentially affect performance heavily, so are there guidelines on how to speed up such control-flow constructs, i.e. what sort of optimizations on the HLSL side help generate well-performing GPU code?


Edited by Meltac, 20 September 2014 - 01:21 PM.




