# Cross Product, Normalizing, and overflows


## Recommended Posts

Hello. I have two vectors, (128, 0, 0) and (128, 0, 128), represented as doubles in C++. They span a plane, and I need to find its normal. I have taken the cross product and the result is (0, -16384, 0) (I think :)). Now I need to normalize the normal vector by computing sqrt(x^2 + y^2 + z^2) and dividing each component of the normal vector by the result of this square root. My problem is: 16384^2 is a big, big number, and I may have vectors with components even larger than 128, so I am worried about overflow here. I am not a maths guru, so I ask: what can I do to "compress" this 16384 value down before normalizing, without affecting the computation? (Also, check my working if you can :)) Thanks

##### Share on other sites
I don't think this is a problem, but if you were having trouble with it you could multiply your vector by a small constant (<< 1) before normalizing, since multiplying a vector by a constant doesn't affect its direction. That way you'd ensure that the squared components are all small.

##### Share on other sites
Try normalizing the vectors before taking the cross product.

##### Share on other sites
Quote:
 Original post by timw
 I don't think this is a problem, but if you were having trouble with it you could multiply your vector by a small constant (<< 1) before normalizing, since multiplying a vector by a constant doesn't affect its direction. That way you'd ensure that the squared components are all small.
Ahh yes, scalar multiplication won't affect the direction of the vector. I am an idiot.

In fact, this is what normalizing is effectively doing, isn't it? (But I just need to divide by a scalar value before I normalize, so I don't run into that ^2 overflow.) :)

Thanks. Problem solved I think.

##### Share on other sites
If you work with doubles, they're fine for numbers up to around 10^308 (roughly 1.8 × 10^308), so with the cross product you might get overflow only when the components of the vector have more than about 75 digits. [grin][lol] (When you multiply two 75-digit numbers you get something like 150 digits, and when you multiply together two 150-digit numbers you get around 300 digits, which still fits in a double.) For values like yours you wouldn't even lose precision, since small integers and their products are represented exactly.

##### Share on other sites
Quote:
 In fact, this is what normalizing is effectively doing, isn't it?

Yup.

Yeah, like Dmytry said, I don't think it'll be a problem. But if it is, do the multiplication.

Tim

##### Share on other sites
With multiplication you can get underflow instead of overflow...

##### Share on other sites
Quote:
 Original post by Dmytry
 If you work with doubles, they're fine for numbers up to around 10^308 ... so with the cross product you might get overflow only when the components of the vector have more than about 75 digits.
I just realized that floating point gives you a much larger number range than int. I always assumed that a 32-bit int could hold larger numbers, and that floating point had less range but allowed for decimals. It seems it doesn't work that way: floating point also allows for much larger numbers than int.

##### Share on other sites
Floating point is designed to represent an approximation of real numbers over a very large range, but a float only has 32 bits. It represents very large numbers by keeping track of only the most significant bits, so you do get precision losses when dealing with larger numbers. The format has a certain number of bits for storing an exponent and a certain number of bits for storing the most significant bits of the number (the significand); the value represented is the significand multiplied by 2 raised to the exponent.

##### Share on other sites
"What Every Computer Scientist Should Know About Floating-Point Arithmetic"
http://docs.sun.com/source/806-3568/ncg_goldberg.html
