# Are square roots still really that evil?


## Recommended Posts

Was just wondering: a lot of topics, replies and articles state that you should avoid using square roots in your equations 'at all cost'. Of course, what counts as 'all cost' in my opinion depends on what you have to do to avoid the sqrt.

With today's hardware, is it still reasonable to believe that 3 to 6 multiplications and value assignments are cheaper than one sqrt? (Of course profiling would tell, but I'm curious about experience and opinions.)

The examples I'm talking about are mainly distance comparisons on the CPU (point-to-point distance, point-in-sphere checks, etc.) but also on the GPU side (e.g. for light attenuation).

##### Share on other sites

It's not that evil, but the fastest code is the code you never have to run. So if you don't need to do a sqrt, don't do it.

##### Share on other sites

Normalization involves a sqrt, so since normalization is used so much, you can safely assume that GPUs are optimized for it.

Of course, using the built-in normalize instruction rather than writing your own normalization is advised, to take advantage of hardware where it's implemented directly.

As the others said: for distance comparisons etc., where you can get away without doing a sqrt, do so.
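A minimal sketch of that idea (Vec3 and the function names are illustrative, not from the thread): since sqrt is monotonic on non-negative numbers, comparing squared distances gives the same answer as comparing real distances, with no sqrt at all.

```cpp
// Hypothetical minimal 3D vector for illustration.
struct Vec3 { float x, y, z; };

// Squared distance: three subtractions, three multiplications,
// two additions -- and no sqrt.
inline float distSq(const Vec3& a, const Vec3& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// True if p is closer to a than to b. Same result as comparing the
// sqrt'd distances, because sqrt preserves ordering.
inline bool closerTo(const Vec3& p, const Vec3& a, const Vec3& b)
{
    return distSq(p, a) < distSq(p, b);
}
```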

##### Share on other sites

GPUs have a sqrt instruction which is a single instruction (edit: it's not just 1 cycle I think), so doing the x1*x1 + y1*y1 ... > x2*x2 + y2*y2 ... comparison can actually end up being slower than just using the built-in distance(vecn(...), vecn(...)).

CPUs have a similar instruction as well, but it's (afaik) SIMD only. However, it wouldn't surprise me if compilers implemented std::sqrt simply by calling that SIMD sqrt.

edit: yes, seems I was wrong. Thanks for correcting me.

Edited by agleed
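For the curious, on x86 that hardware square root (the SSE sqrtss instruction) can be reached directly through intrinsics. A sketch, not a recommendation: modern compilers usually emit exactly this for std::sqrt on a float anyway.

```cpp
#include <xmmintrin.h>  // SSE intrinsics

// Scalar square root via the SSE sqrtss instruction. The result is
// correctly rounded, just like std::sqrt.
inline float sseSqrt(float x)
{
    return _mm_cvtss_f32(_mm_sqrt_ss(_mm_set_ss(x)));
}
```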

##### Share on other sites

Thanks, this gives a good view of what (not) to do.

I'll keep in mind that every sqrt (or anything else :)) that isn't really necessary, shouldn't be done at all.

Two examples:

1. My CoordToCoord distance function:

```cpp
float CoordToCoordDist(const D3DXVECTOR3 pv1, const D3DXVECTOR3 pv2)
{
    return sqrt(pow(pv1.x - pv2.x, 2) + pow(pv1.y - pv2.y, 2) + pow(pv1.z - pv2.z, 2));
}
```



How would you do that without the sqrt?

2. Point in sphere.

I currently save the radius of my bounding spheres as the 'normal' radius. I take the CoordToCoord distance from the world center of the sphere to the point I'm checking and compare that distance to the radius. That would basically be solved if the above CoordToCoord distance function returned the squared distance. Then I could take the squared radius of the sphere and save that (and when updating, also keep the squared radius).

Note: in my shaders I don't use any sqrt at the moment; I'll look into how I do my attenuation.

Of course there are some normalizations in my VS/PS, which I think are needed (and cannot be done without a square root).

##### Share on other sites

> GPUs have a sqrt instruction which is a single instruction (edit: it's not just 1 cycle I think), so doing the x1*x1 + y1*y1 ... > x2*x2 + y2*y2 ... comparison can actually end up being slower than just using the built-in distance(vecn(...), vecn(...)).
>
> CPUs have a similar instruction as well, but it's (afaik) SIMD only. However, it wouldn't surprise me if compilers implemented std::sqrt simply by calling that SIMD sqrt.

GPUs have an approximate rsqrt and an approximate rcp function similar to the SIMD counterparts in my last post. They do not have a vector->length function which performs the squaring and adding of the components in addition to the sqrt in one instruction. So everything that was said for the CPU pretty much also holds for the GPU.
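To illustrate the approximate-rsqrt point: the SSE rsqrtss estimate is only accurate to roughly 12 bits, so it is commonly refined with one Newton-Raphson step. A sketch for illustration, not a recommendation over a plain sqrt on modern hardware:

```cpp
#include <cmath>
#include <xmmintrin.h>  // SSE intrinsics

// Fast approximate 1/sqrt(x): hardware estimate plus one
// Newton-Raphson iteration, y' = y * (1.5 - 0.5 * x * y * y).
inline float fastRsqrt(float x)
{
    float y = _mm_cvtss_f32(_mm_rsqrt_ss(_mm_set_ss(x)));  // ~12-bit estimate
    return y * (1.5f - 0.5f * x * y * y);                  // refine the estimate
}
```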

> Thanks, this gives a good view of what (not) to do.
> I'll keep in mind that every sqrt (or anything else :)) that isn't really necessary, shouldn't be done at all.
>
> Two examples:
>
> 1. My CoordToCoord distance function:
>
> ```cpp
> float CoordToCoordDist(const D3DXVECTOR3 pv1, const D3DXVECTOR3 pv2)
> {
>     return sqrt(pow(pv1.x - pv2.x, 2) + pow(pv1.y - pv2.y, 2) + pow(pv1.z - pv2.z, 2));
> }
> ```
>
> How would you do that without the sqrt?
>
> 2. Point in sphere.
> I currently save the radius of my bounding spheres as the 'normal' radius. I take the CoordToCoord distance from the world center of the sphere to the point I'm checking and compare that distance to the radius. That would basically be solved if the above CoordToCoord distance function returned the squared distance. Then I could take the squared radius of the sphere and save that (and when updating, also keep the squared radius).
>
> Note: in my shaders I don't use any sqrt at the moment; I'll look into how I do my attenuation.
> Of course there are some normalizations in my VS/PS, which I think are needed (and cannot be done without a square root).

Are you sure that pow(a, 2) is reduced to a*a and not exp(2*log(a)), which is significantly more expensive?

Also, sqrt and pow are the double-precision functions. The float versions are sqrtf and powf, and if you want the compiler to decide based on the argument types, use std::sqrt and std::pow.

Even if you don't store the squared radius, computing it is faster than computing the square root. You should have a sqrLength or a dot function so you can get the squared distance as sqrDistance = (vec1 - vec2).SqrLength(); then check (sqrDistance < radius * radius). No need for sqrt.

Edit: And you should pass vectors by reference, not by value! Edited by Ohforf sake
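The suggestion above might look like this in plain C++ (Vec3, dot, and insideSphere are illustrative names, not D3DX API):

```cpp
// Minimal vector type for illustration.
struct Vec3 { float x, y, z; };

inline Vec3 operator-(const Vec3& a, const Vec3& b)
{
    return { a.x - b.x, a.y - b.y, a.z - b.z };
}

// dot(v, v) is the squared length of v.
inline float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Sphere test exactly as described: squared distance vs squared radius,
// no sqrt anywhere.
inline bool insideSphere(const Vec3& point, const Vec3& center, float radius)
{
    const Vec3 d = point - center;
    return dot(d, d) < radius * radius;
}
```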

##### Share on other sites

> GPUs have a sqrt instruction which is a single instruction (edit: it's not just 1 cycle I think), so doing the x1*x1 + y1*y1 ... > x2*x2 + y2*y2 ... comparison can actually end up being slower than just using the built-in distance(vecn(...), vecn(...)).

...good job they've also got a dot product instruction then...!

(dot (v1, v1) > dot (v2, v2))

Edited by mhagain

##### Share on other sites

@Ohforf sake: thanks, I made a second CoordToCoord distance function, this time for squared distance:

```cpp
float CoordToCoordDistSqr(const D3DXVECTOR3 &pv1, const D3DXVECTOR3 &pv2)
{
    // Taking the address of a temporary is non-standard, so store the
    // difference in a named variable first.
    D3DXVECTOR3 diff = pv1 - pv2;
    return D3DXVec3LengthSq(&diff);
}
```



To make sure I use every possible optimization, I also changed the non-squared version:

```cpp
float CoordToCoordDist(const D3DXVECTOR3 &pv1, const D3DXVECTOR3 &pv2)
{
    D3DXVECTOR3 diff = pv1 - pv2;
    return D3DXVec3Length(&diff);
}
```



Now that's done, I'll go through my code where I call the CoordToCoord distance functions and see what I compare the result to. For example, for the radius of a sphere I can do "radius * radius" like you said. The same probably goes for checking the distance between mesh/renderable centers and point lights against the point light radius (which would then become radius * radius).

Thanks for the help.

##### Share on other sites

> 1. My CoordToCoord distance function:
>
> ```cpp
> float CoordToCoordDist(const D3DXVECTOR3 pv1, const D3DXVECTOR3 pv2)
> {
>     return sqrt(pow(pv1.x - pv2.x, 2) + pow(pv1.y - pv2.y, 2) + pow(pv1.z - pv2.z, 2));
> }
> ```

OMG! You have three pow calls and you worry about a sqrt?! A pow is significantly more expensive. Just do:

```cpp
float tmp = pv1.x - pv2.x;
tmp = tmp * 2; //or tmp = tmp + tmp;
```

Whether "tmp * 2" is better than "tmp + tmp" depends on the architecture you're running on. On one hand you've got addition vs multiplication, and addition often has lower latency than multiplication. On the other hand, the multiplication is by a constant, and some architectures may have special optimizations for that (i.e. custom opcodes, better pipelining). Either way, both are a zillion times better than pow(tmp, 2).

Second, to answer the OP: like the others have said, work smarter (i.e. don't use sqrt if it's unnecessary). But if you're curious: yes, sqrt has gotten faster, and more importantly, CPUs have gotten better at hiding its latency through pipelining (executing instructions that come after and don't depend on the sqrt's result while the sqrt hasn't finished yet). Tricks like Carmack's famous "fast inverse sqrt" approximation actually hurt performance on today's hardware, because they tend to hinder pipelining or involve RAM round trips, and ALU throughput has improved a lot while memory latency hasn't changed much in the last 10 years. Edited by Matias Goldberg
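Putting the pow-free squaring into the original function gives something like this (a sketch; Vec3 stands in for D3DXVECTOR3, and the vectors are passed by reference as suggested elsewhere in the thread):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };  // stand-in for D3DXVECTOR3

// Each pow(t, 2) is replaced by a plain t * t.
float coordToCoordDist(const Vec3& a, const Vec3& b)
{
    const float dx = a.x - b.x;
    const float dy = a.y - b.y;
    const float dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}
```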

##### Share on other sites

> OMG! You have three pow calls and you worry about a sqrt?! A pow is significantly more expensive. Just do:
>
> ```cpp
> float tmp = pv1.x - pv2.x;
> tmp = tmp * 2; //or tmp = tmp + tmp;
> ```
>
> Whether "tmp * 2" is better than "tmp + tmp" depends on the architecture you're running on.

I'm afraid this is not correct; shouldn't it be tmp * tmp?

(tmp * 2 would only work if it were always 2 :))

I now have 2 coord to coord distance functions, one squared and one non-squared.

Next step is going through my codebase to see where I can use the squared one and multiply the other variable by itself (that, or saving the original value squared; the second option doesn't sound that good, because I would then have to keep track of it everywhere and rename all member vars).

##### Share on other sites

> OMG! You have three pow calls and you worry about a sqrt?! A pow is significantly more expensive.

I don't encourage relying on compiler optimizations too much, but gcc optimizes calls to pow where the exponent is a small positive integer, turning the computation into a sequence of multiplies. I don't know if Visual C++ does the same.
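If you would rather not depend on that optimization at all, a tiny helper makes the intent explicit and is trivially inlined (illustrative, not from the thread):

```cpp
// Explicit squaring: no pow call, nothing for the optimizer to rescue.
template <typename T>
inline T square(T v)
{
    return v * v;
}
```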