
Are square roots still really that evil?


13 replies to this topic

#1 cozzie   Members   -  Reputation: 1467


Posted 25 April 2014 - 03:59 AM

Was just wondering: a lot of topics, replies and articles state that you should avoid using square roots in your equations 'at all cost'. Of course, 'all cost' in my opinion depends on what you have to do to avoid the sqrt.

With today's hardware, is it still reasonable to believe that 3 to 6 multiplications and value assignments are cheaper than 1 sqrt? (Of course profiling would tell, but I'm just curious about experience and opinions.)

Examples I'm talking about are mainly distance comparisons on the CPU (point-to-point distance, point-in-sphere checks, etc.) but also on the GPU side (e.g. for light attenuation).


#2 Bacterius   Crossbones+   -  Reputation: 8162


Posted 25 April 2014 - 04:13 AM

Well, distance comparisons don't "require" square roots as the square root part of the Euclidean metric does not change the order of comparisons (since the square root function is strictly increasing). Similarly, for light attenuation, you typically don't need the distance but the distance squared, so what is the point of calculating the square root just to square the result immediately after? Unless you need the actual distance/radius at some point, I don't see what you gain by doing the computation.

 

So, where do the "3 to 6 multiplications and value assignments" come in when doing distance comparisons or distance squared computations? To me it just seems like taking the square root is straight up a waste of energy here. If you have a specific situation where avoiding doing a square root requires some extra work, please mention it, because the examples you give don't really seem relevant to your question.


The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#3 Erik Rufelt   Crossbones+   -  Reputation: 3041


Posted 25 April 2014 - 04:16 AM

Point in sphere and distance checks by themselves can be done by comparing the squared distance to the squared radius, thereby avoiding sqrt.

When you actually need sqrt... it's not very evil on newer desktop processors, but at the same time the other instructions have also gotten faster, so relative to them sqrt can still be slow.

There are also special instructions on many newer processors for calculating it. One reference I found puts sqrt for a single float in SSE at 19 clock cycles, while the 1/sqrt instruction, which is only an approximation accurate to some number of bits, takes only 3 cycles. If that approximation is good enough, it is probably the fastest way.



#4 Buster2000   Members   -  Reputation: 1419


Posted 25 April 2014 - 05:13 AM

It's not that evil, but the fastest code is the code that you never have to run. So if you don't need to do a sqrt, don't do it.



#5 mhagain   Crossbones+   -  Reputation: 7436


Posted 25 April 2014 - 05:27 AM

Normalization involves a sqrt, so since normalization is used so much, you can safely assume that GPUs are optimized for it.

 

Of course, using the built-in normalize instruction rather than writing your own normalization would be advised to take advantage of where it may be implemented directly in the hardware.

 

As the others said: for distance comparisons etc., where you can get away without doing a sqrt, do so.


It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#6 Ohforf sake   Members   -  Reputation: 1446


Posted 25 April 2014 - 06:16 AM

Latencies in cycles, straight from the Intel Intrinsics Guide:

_mm256_add_ps (add 8 pairs of floats): 3
_mm256_mul_ps (multiply 8 pairs of floats): 5
_mm256_rcp_ps (compute approx. reciprocals of 8 floats): 7
_mm256_rsqrt_ps (compute approx. reciprocals of square roots of 8 floats): 7
_mm256_div_ps (divide 8 pairs of floats): 29
_mm256_sqrt_ps (compute square roots of 8 floats): 29

For reference (from the intel vTune performance analysis guide), a L1 load is about 4 cycles, a L2 load about 10 cycles, a L3 load ranges from tens to hundreds of cycles, and a RAM access takes forever. This is assuming no TLB miss.

Today, precise square root has the same latency as precise division.

So essentially what others already said: don't do work you don't have to do, but other than that, sqrt has gotten pretty fast, and if it is used sparingly most of the latency will probably get pipelined away.

#7 agleed   Members   -  Reputation: 317


Posted 25 April 2014 - 11:41 AM

GPUs have a sqrt instruction which is a single instruction (edit: it's not just 1 cycle I think), so the x1*x1 + y1*y1 ... > x2*x2 + y2*y2 ... comparison can actually end up being slower than just comparing distance(vecn(...), vecn(...)) directly.

 

CPUs have a similar instruction as well, but it's (afaik) SIMD only. However, it wouldn't surprise me if compilers implemented std::sqrt simply by calling that simd sqrt.

 

edit: yes, seems I was wrong. Thanks for correcting me.


Edited by agleed, 26 April 2014 - 06:35 AM.


#8 cozzie   Members   -  Reputation: 1467


Posted 25 April 2014 - 12:02 PM

Thanks, this gives a good view of what (not to) do.

I'll keep in mind that every sqrt (or anything else :)) that isn't really necessary, shouldn't be done at all.

 

Two examples:

 

1. My CoordToCoord distance function:

float CoordToCoordDist(const D3DXVECTOR3 pv1, const D3DXVECTOR3 pv2)
{
	return sqrt(pow(pv1.x - pv2.x, 2) + pow(pv1.y - pv2.y, 2) + pow(pv1.z - pv2.z, 2));
}

How would you do that without the sqrt?

 

2. Point in sphere.

I currently save the radius of my bounding spheres as the 'normal' radius. I take the CoordToCoord distance from the world center of the sphere to the point I'm checking, and compare this distance to the radius. That would basically be solved if the above CoordToCoord distance function returned the squared distance. That way I could initially take the squared radius of the sphere and save that (and also keep the squared radius when updating).

 

Note: in my shaders I don't use any sqrt at the moment; I'll look into how I do my attenuation.

Of course there are some normalizations in my VS/PS, which I think are needed (and cannot be done without a square root).



#9 Ohforf sake   Members   -  Reputation: 1446


Posted 25 April 2014 - 01:07 PM

GPUs have a sqrt instruction which is a single instruction (edit: it's not just 1 cycle I think), so the x1*x1 + y1*y1 ... > x2*x2 + y2*y2 ... comparison can actually end up being slower than just comparing distance(vecn(...), vecn(...)) directly.
 
CPUs have a similar instruction as well, but it's (afaik) SIMD only. However, it wouldn't surprise me if compilers implemented std::sqrt simply by calling that simd sqrt.

 

GPUs have an approximate rsqrt and an approximate rcp function similar to the SIMD counterparts in my last post. They do not have a vector->length function which performs the squaring and adding of the components in addition to the sqrt in one instruction. So everything that was said for the CPU pretty much also holds for the GPU.
 
 

Thanks, this gives a good view of what (not to) do.
I'll keep in mind that every sqrt (or anything else :)) that isn't really necessary, shouldn't be done at all.
 
Two examples:
 
1. My CoordToCoord distance function:

float CoordToCoordDist(const D3DXVECTOR3 pv1, const D3DXVECTOR3 pv2)
{
	return sqrt(pow(pv1.x - pv2.x, 2) + pow(pv1.y - pv2.y, 2) + pow(pv1.z - pv2.z, 2));
}

How would you do that without the sqrt?
 
2. Point in sphere.
I currently save the radius of my bounding spheres as 'normal' radius. I take the CoordToCoord distance from world center of the sphere to the point I'm checking. This distance I compare to the radius. That would basically be solved if the above CoordToCoord distance function returns the squared distance. In that I could initially take the squared radius of the sphere and save that (and when updating also keep the squared radius).
 
Note: in my shaders I don't use any sqrt at the moment; I'll look into how I do my attenuation.
Of course there are some normalizations in my VS/PS, which I think are needed (and cannot be done without a square root).


Are you sure that pow(a, 2) is reduced to a*a and not exp(2*log(a)), which is significantly more expensive?

Also, sqrt and pow are the double-precision functions. The float functions are sqrtf and powf, and if you want the compiler to decide based on the parameters, use std::sqrt and std::pow.

Even if you don't store the squared radius, computing it is faster than computing the square root. You should have a sqrLength or a dot function so you get the squared distance as sqrDistance = (vec1-vec2).SqrLength(); then you check (sqrDistance < radius*radius). No need for sqrt.

Edit: And you should pass vectors by reference, not by value!

Edited by Ohforf sake, 25 April 2014 - 01:13 PM.


#10 mhagain   Crossbones+   -  Reputation: 7436


Posted 25 April 2014 - 01:16 PM

GPUs have a sqrt instruction which is a single instruction (edit: it's not just 1 cycle I think), so the x1*x1 + y1*y1 ... > x2*x2 + y2*y2 ... comparison can actually end up being slower than just comparing distance(vecn(...), vecn(...)) directly.

 

...good job they've also got a dot product instruction then...!

 

(dot (v1, v1) > dot (v2, v2))


Edited by mhagain, 25 April 2014 - 01:17 PM.

It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#11 cozzie   Members   -  Reputation: 1467


Posted 25 April 2014 - 03:44 PM

@Ohforf sake: thanks, I made a 2nd CoordToCoord distance function, now for squared distance:

float CoordToCoordDistSqr(const D3DXVECTOR3 &pv1, const D3DXVECTOR3 &pv2)
{
	const D3DXVECTOR3 diff = pv1 - pv2;  // named temporary: taking the address of an rvalue is non-standard
	return D3DXVec3LengthSq(&diff);
}

To be sure I use any possible optimizations, I also changed the non-squared version:

float CoordToCoordDist(const D3DXVECTOR3 &pv1, const D3DXVECTOR3 &pv2)
{
	const D3DXVECTOR3 diff = pv1 - pv2;  // named temporary: taking the address of an rvalue is non-standard
	return D3DXVec3Length(&diff);
}

Now that's done, I'll go through my code where I call the CoordToCoord distance function and see what I compare the result to. For example, for the radius of a sphere I can do "radius*radius" like you said. The same probably goes for checking the distance between mesh/renderable centers and point lights versus the point light radius (which would then become radius*radius).

 

Thanks for the help.



#12 Matias Goldberg   Crossbones+   -  Reputation: 3007


Posted 25 April 2014 - 07:36 PM

1. My CoordToCoord distance function:

float CoordToCoordDist(const D3DXVECTOR3 pv1, const D3DXVECTOR3 pv2)
{
return sqrt(pow(pv1.x - pv2.x, 2) + pow(pv1.y - pv2.y, 2) + pow(pv1.z - pv2.z, 2));
}

OMG! You have three pow calls and you worry about a sqrt?! A pow is significantly more expensive.
Just do
float tmp = pv1.x - pv2.x;
tmp = tmp * 2; //or tmp = tmp + tmp;
Whether "tmp * 2" is better than "tmp + tmp" depends on the architecture you're running. On one hand, you've got addition vs multiplication, and often addition has lower latency than multiplication. On the other hand, the multiplication is against a constant value, and some archs may have special optimizations for that (i.e. custom opcodes, better pipelining). However both of them will be a zillion times better than a pow( tmp, 2 ).

Second, to answer the OP: like others have said, work smarter (i.e. don't use sqrt if it's unnecessary); but if you're curious, yes, sqrt has gotten faster, and more importantly CPUs have gotten better at hiding its latency (this is called pipelining: executing later instructions that don't depend on the sqrt's result while the sqrt hasn't finished yet). Tricks like Carmack's famous "fast inverse square root" approximation actually hurt performance on today's hardware (they tend to hinder pipelining or involve memory roundtrips, and ALUs have gotten faster while memory latency hasn't changed much in the last 10 years).

Edited by Matias Goldberg, 25 April 2014 - 07:36 PM.


#13 cozzie   Members   -  Reputation: 1467


Posted 26 April 2014 - 03:00 AM


OMG! You have three pow calls and you worry about a sqrt?! A pow is significantly more expensive.
Just do
float tmp = pv1.x - pv2.x;
tmp = tmp * 2; //or tmp = tmp + tmp;Whether "tmp * 2" is better than "tmp + tmp" depends on the architecture you're running.

 

I'm afraid this is not correct, shouldn't it be tmp * tmp?

(tmp * 2 would only work if it were always 2 :))

 

I now have 2 coord to coord distance functions, one squared and one non-squared.

The next step is going through my codebase to see where I can use the squared one and multiply the other variable by itself (that, or saving the original value squared; the second option doesn't sound that good, because I would always have to keep track of it and rename all the member vars).



#14 Álvaro   Crossbones+   -  Reputation: 11886


Posted 26 April 2014 - 08:33 AM

OMG! You have three pow calls and you worry about a sqrt?! A pow is significantly more expensive.


I don't encourage people to rely on compiler optimizations too much, but gcc optimizes calls to pow where the exponent is a small positive integer, turning the computation into a sequence of multiplies. I don't know if Visual C++ does the same.






