Started 17 January 2012 - 08:50 AM

27 replies to this topic

Posted 17 January 2012 - 08:50 AM

I think I finally figured out a version that works. Here it is:

squaredvector = position.x * position.x + position.y * position.y + position.z * position.z

onediv = 1 / squaredvector

position.x *= signx

position.y *= signy

position.z *= signz

xper = position.x * onediv

yper = position.y * onediv

zper = position.z * onediv

posyz = position.y + position.z

posxz = position.x + position.z

posxy = position.x + position.y

xmul = 1.0 - (posyz * onediv)

ymul = 1.0 - (posxz * onediv)

zmul = 1.0 - (posxy * onediv)

normal.x = xper * (xmul * posyz) * signx

normal.y = yper * (ymul * posxz) * signy

normal.z = zper * (zmul * posxy) * signz

Think this is the final version.


Posted 17 January 2012 - 09:08 AM

Your code has 3 divs, 3 muls, and 6 adds.


Compare this to the code

float x, y, z;
float r = 1.0f / sqrt(x*x + y*y + z*z);
x *= r; y *= r; z *= r;

which has one sqrt, one div, six muls, and two adds. See here for an example.

Typically sqrt and divisions dominate in a code snippet like that. To see whether your version is a good approximation, I recommend you profile these two versions for runtime performance (clock cycles taken to normalize), and by how much the normalization is off in the worst case.

Me+PC=clb.demon.fi | C++ Math and Geometry library: MathGeoLib, test it live! | C++ Game Networking: kNet | 2D Bin Packing: RectangleBinPack | Use gcc/clang/emcc from VS: vs-tool | Resume+Portfolio | gfxapi, test it live!

Posted 17 January 2012 - 09:10 AM

The above formula is off by 15.47%, so if you multiply the above normals by 0.866025 (= sqrt(3)/2) you should get a completely accurate normal.

Posted 17 January 2012 - 09:12 AM

This code doesn't need a square root, and it's only 9 operations, because the 1.0 divided by the sum of the three squared components only needs to be computed once per vector.

1 division, 3 multiplies, and 5 additions.

As far as I've noticed, the most it's off by is about 15%.


Posted 17 January 2012 - 09:37 AM

Could you please "normalize" the two vectors <1,0,0> and <-1,1,0> for me?

To make it is hell. To fail is divine.

Posted 17 January 2012 - 12:42 PM

Typically sqrt and divisions dominate in a code snippet like that.

True for std::sqrt, but not for _mm_sqrt_ss. Unless you are using an Atom, div and sqrt will take just about as long as an add or sub. On an i3/i5/i7, doing 1/sqrt is actually slower (an additional dependency chain) and less accurate than simply dividing through by the length (since rcp + mul is less accurate than div). [As an aside, you could just use rsqrt instead.]

Of course, if you actually want to take account of the fact that normalization can cause a division by zero, then mul will probably still make more sense...

With AVX (sandy bridge) it is possible to normalise 8 x Vector3's with nothing more than: 6 muls, 2 adds, and an rsqrt. The OP's method produces nonsense results, and uses 9 adds, 3 mul, 3 div. That's 15 ops vs 9 for something that is at least 8 times slower.

Posted 17 January 2012 - 07:31 PM

On a GPU I wouldn't touch this code with a 10-foot pole, no matter how happy the OP may be with it as an approximation. Normalization on a GPU is just a dp3, an rsq and a mul - 1 instruction slot each, 3 total, and all the benefits of parallel processing.

It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.

Posted 17 January 2012 - 09:40 PM

*EDIT*

most recent version.

if (position.x < 0) {

position.x *= -1

signx = -1

}

addvector = position.x + position.y + position.z

positionerror = 1.0 - addvector * 0.01

onediv = 1.0 / addvector

normal.x = ((onediv * (position.x + position.x)) * positionerror) * signx;

At 20 operations: 1 divide, 13 muls, 5 adds, and 1 subtraction.


Posted 17 January 2012 - 09:41 PM

On a GPU I wouldn't touch this code with a 10-foot pole, no matter how happy the OP may be with it as an approximation. Normalization on a GPU is just a dp3, an rsq and a mul - 1 instruction slot each, 3 total, and all the benefits of parallel processing.

This is meant more for people who do software graphics.

Posted 17 January 2012 - 10:19 PM

So basically you are trying to invent a faster sqrt approximation? That has already been done; one such method is the Newton-Raphson method.

And people with software graphics will choose that over any other strange code (with strange limitations on max values) any day and any time.

One implementation of Newton-Raphson method (with smart choice of initial value) is "Carmack's inverse sqrt": http://en.wikipedia....iew_of_the_code

Anyway - for a long time now, any of these methods has been slower than a single assembler instruction: rsqrtss from the SSE instruction set. And that dates from 1999! Welcome to the future!

Here are some performance numbers on FPU sqrt vs Newton method vs Carmack's invsqrt: http://assemblyrequired.crashworks.org/2009/10/16/timing-square-root/


Posted 17 January 2012 - 10:36 PM

I knew about those. I'll test my code against those and see what happens. I'm guessing the SSE will go faster.

Although if new things are never tried, nothing is ever advanced.


Posted 18 January 2012 - 02:18 AM

Although if new things are never tried, nothing is ever advanced.

Trying arbitrary permutations of things without a decent mathematical foundation is not only unproductive, but counter-productive as you cannot really reason about the behavior and properties of the "approximation" you have produced.

This reflects in your confused statements (now some edited away) about just simply applying scaling factors and misuse of terms.

Things like the Carmack sqrt approximation have a foundation in mathematics and architecture (Newton-Raphson iterations and the IEEE 754 vs. integer bit-pattern interaction), while your approach just substitutes the function with something that has completely different properties and proclaims it to be The Next Best Thing, without a proper analysis of it.

Heck, a simple mesh plot of your function would have revealed the massive oscillations and regions of infinite results.


Posted 18 January 2012 - 05:15 AM

I agree with Zao.

Also, your last version even has an if in it... A very bad idea.


Posted 20 January 2012 - 09:23 PM

Please stop.

There is nothing about your operation that is "normalization". You do not preserve the direction, nor produce a magnitude remotely approaching unity.

The topic should be named "Mutilation in way too many operations", as it doesn't have any of the properties desirable by division by a norm.

Note that the first line in your "final" version is the square of the length of the vector. If you've gone to the length of computing that dot product already, you're just a square root away from a properly working regular normalization, with all the excellent properties it guarantees.


Posted 20 January 2012 - 09:36 PM

Also, please stop deleting your posts just because you've found something wrong with them. It only serves to harm discussion. I have restored your posts.

Posted 20 January 2012 - 10:10 PM

This is getting out of hand. I understand your motivation, I love to tweak the maximum speed out of my code, but in this instance your approximation is so different from normalisation it should be called something else. In addition, I counted 21 muls and 1 divide in your latest version. Since the canonical version (as posted by clb) uses 6 muls, a divide and a sqrt, what you're suggesting is that 15 muls and inaccuracy are a preferable alternative to using a sqrt. Unfortunately, due to how normalised vectors are typically used, inaccuracy is a big problem, potentially leading to all sorts of weird rendering artefacts.

It might be slow, but I'm not sure where you got the impression that the sqrt operation is the anti-christ.


Currently working on an open world survival RPG - For info check out my Development blog: ByteWrangler

Posted 20 January 2012 - 10:39 PM

So something that produces an approximate normal to what the real normal would be isn't normalizing... ok. If this was turned into SSE code it could be done in 11 operations.

This could work fine for lighting, generating normals as they are needed, but whatever, it's obvious no one here appreciates what I'm trying to do so I'm done.


Posted 20 January 2012 - 10:44 PM

Just delete the post since the community seems to think it's useless.

Posted 20 January 2012 - 10:45 PM

...This could work fine for lighting

Have you taken the time to try it? Actually demoing your method would be a nice thing to do, assuming it works.

Posted 21 January 2012 - 01:04 AM

Just delete the post since the community seems to think it's useless.

Sorry, that's not how things work here.