Moment Of Inertia

Suppose I have a local moment of inertia I_local and a global moment of inertia I_global. Since I store the local moment of inertia w.r.t. its principal axes, the tensor is diagonal and can be stored as a vector.

(1) What is the physical meaning of det( I_global )?
(2) Is this value constant in time?
(3) Is there some kind of identity like det( I_global ) == det( I_local )?
(4) Is the determinant of a diagonal matrix the same as the length of the corresponding vector? If not, are they at least nearly equal, and how much error would I introduce by treating them as equal?

Regards,
-Dirk
There's no physical meaning to this quantity.

Since the determinant of a matrix is invariant under similarity transform, we have:
det(I_global) = det(I_local) = Ixx * Iyy * Izz,
where Ixx, Iyy and Izz are the principal moments.

The principal moments are what you have on the diagonal of I_local, and a similarity transform is exactly what you do in order to find I_global, because given the rotation matrix R of the body:
I_global = R * I_local * transpose(R),
and
transpose(R) = inverse(R), because R is orthogonal.
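
If you want to convince yourself numerically, here is a minimal sketch (my own example, not from the thread, using NumPy with arbitrary principal moments and rotation angles) that builds I_global via the similarity transform and checks that its determinant equals the product of the principal moments:

import numpy as np

# Arbitrary principal moments: I_local is diagonal in the principal frame.
I_local = np.diag([1.0, 2.0, 3.0])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,  c,  -s],
                     [0.0,  s,   c]])

# An arbitrary orthogonal body rotation.
R = rot_z(0.7) @ rot_x(-1.2)

# Similarity transform into the world frame.
I_global = R @ I_local @ R.T

print(np.linalg.det(I_global))  # ~6.0
print(np.linalg.det(I_local))   # 6.0 = 1 * 2 * 3, the product of the principal moments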

I guess that by this point you probably realize that det(I_local) is hardly the same as the length of the corresponding vector. So the error you would introduce can be quite big, depending on the body's properties.

For instance, a sphere has identical principal moments:
Ixx = Iyy = Izz = 2/5 * M * R^2,
where: R - sphere radius, M - sphere mass.

So in case of a sphere, you'll get:
det(I_local) = 8/125 * M^3 * R^6.
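
And just to sanity-check the arithmetic, a throwaway snippet with made-up values for M and R:

M, R = 2.0, 0.5
Ixx = 2.0 / 5.0 * M * R**2          # = Iyy = Izz for a sphere
print(Ixx**3)                       # det(I_local)
print(8.0 / 125.0 * M**3 * R**6)    # prints the same value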

If you don't mind me asking, why do you need the determinant?

Quote:
If you don't mind me asking, why do you need the determinant?


Not at all :-)

In Bart Barenbrug's thesis (Dynamo), section 4.1.8, he applies a scaling by det(J)/M to his orientation constraint. This should make the orientation constraint error match the other constraint errors. I have no idea why, so I asked whether this value has a physical meaning.

Do you have the thesis? I can send you the link or a copy if you like...



-Dirk
I'd love to.

Thanks.
Here you are:

http://www.win.tue.nl/dynamo/publications/

The PhD thesis is "bartbthesis.pdf". Anyway, I found all the papers worth reading. You'll find his master's thesis there as well...

-Dirk
It says it is done that way to get the errors to be of the same magnitude as the errors of another type of constraint. That way, when the equations are solved to within a certain error tolerance, the errors of some constraints don't dominate the errors of other constraints, or something like that.

The determinant of the inertia tensor might not be entirely physically meaningless... it is the product of the principal moments, so higher principal moments will result in a higher determinant.
I think the reason he scales the vectors is that he is using an iterative method to solve the constraints in the system.

The convergence of most iterative methods, such as Conjugate Gradients, depends on the condition number of the problem's matrix. A smaller condition number usually means better convergence. So my guess is that the scaling is a means of preconditioning the problem.

The condition number of a symmetric matrix can be defined as the absolute ratio between its largest and smallest eigenvalues.
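
As a rough illustration of that definition (my own example with a made-up matrix, not something from the thesis), NumPy can compute the eigenvalue ratio directly; the matrix below is deliberately badly scaled, which is the kind of thing a preconditioner is meant to fix:

import numpy as np

A = np.array([[4.0, 1.0,   0.0],
              [1.0, 3.0,   0.5],
              [0.0, 0.5, 100.0]])   # symmetric, deliberately badly scaled

eigvals = np.linalg.eigvalsh(A)     # eigenvalues of a symmetric matrix, ascending
cond = abs(eigvals[-1]) / abs(eigvals[0])
print(cond)                         # large ratio -> poor iterative convergence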
What happens to the result when I scale the matrix by the preconditioner? Let's assume I have a simple linear system A*x = b. Now I scale the first row by some value so that the iterative solver converges faster. How do I get back the correct result for 'x'?

-Dirk
A preconditioning matrix M is applied like this:

Given:
Ax = b,
we multiply the expression by M and get:
MAx = Mb

The solution to the latter expression is the same as the solution to the former expression.

As a rule of thumb, you'd like your preconditioning matrix to be as "close" to the inverse of A as possible. A perfect M would be the inverse of A itself, because then MA = I, and the condition number of I is 1, which is optimal. Of course, computing the inverse of A is exactly what we are trying to avoid, so a cheaper matrix is needed.

A common and very cheap preconditioning matrix is the inverse of D, where D is the diagonal of A.
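
To make that concrete, here is a small sketch (again my own example with an arbitrary A and b) showing that left-multiplying the system by the diagonal (Jacobi) preconditioner M = inverse(diag(A)) leaves the solution x unchanged while improving the conditioning:

import numpy as np

A = np.array([[4.0, 1.0,   0.0],
              [1.0, 3.0,   0.5],
              [0.0, 0.5, 100.0]])
b = np.array([1.0, 2.0, 3.0])

M = np.diag(1.0 / np.diag(A))       # cheap Jacobi (diagonal) preconditioner
MA, Mb = M @ A, M @ b

x  = np.linalg.solve(A, b)
x2 = np.linalg.solve(MA, Mb)
print(np.allclose(x, x2))           # True: the solution is unchanged

def cond(S):
    s = np.linalg.svd(S, compute_uv=False)  # works even if S is not symmetric
    return s[0] / s[-1]

print(cond(A), cond(MA))            # the preconditioned system is much better conditioned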
I don't think this is exactly like preconditioning, though. I haven't read enough to see what the whole method is, but I think this is something specific to the algorithm developed in the paper: it uses these error quantities from each constraint to decide how close it is to an acceptable solution. It says this multiplication is used to make the errors in this constraint the same order of magnitude as the errors of other constraints.

