Collision detection/response in world made of ellipsoids

39 comments, last by Charles B 19 years, 7 months ago
Observations on oscillations :

- You are close to the solution at this point. But close enough to be sure to decide whether to reject contact or to say there is a contact (metric tolerance)? That's the question.

I can see quite well that you may fall into a quasi-symmetric scheme, especially if numerical precision comes into play and biases the recursion so that it gets stuck in a pseudo 'parallel case': (A,B) gives (C,D) gives (A,B).

The original picture that backed my intuition in 2D was two segments (AC, BD) (thus memorizing step n-1 at rank n) on both sides that would constantly become two subsegments of the previous two, up to the point where they become quasi-punctual.
"Coding math tricks in asm is more fun than Java"
Comparing methods : furthest point in one direction.

OK, I have verified your computation.

Point : P = M*A = M*Normalized(Transpose(M)*D)

Rewritten with my notations :

M = R*S
D = ~R*N, since you work in world coords, I simplified my case by working in local coords : M=S.
P = M*A

Let's see if it's coherent with my previous solution by simply reducing to the case where : R=Id, thus M=S

P = M*Normalized(Transpose(M)*D) <=> P = S*(S*N/|S*N|) = S^2*N/|S*N|.

The implementations are equivalent; both require two Mat3x3*Vec3 ops. Mine requires D transformed into N (to locals), then the contact point P transformed back to world coords (from locals). Yours works entirely in world space, but you can't escape applying M twice, because it implicitly contains the scaling part, which is very logical.

The only difference is how the ellipsoids are defined as structures. Mine could be scaling (3) + quaternion (4) (= 7). Yours requires a 3x3 matrix (9). Yours, with the concatenated matrix, is thus slightly faster in this case. But the separated R and S might be useful for other algos. We'll see it later in practice, maybe.
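For what it's worth, the furthest-point formula above can be checked numerically in 2D against a brute-force search over the ellipse. S = diag(2,1) and the 30-degree rotation below are assumed example values, not anything specific from the thread:

```python
import math

# 2D sketch of the furthest-point formula P = M*Normalized(Transpose(M)*D),
# with M = R*S.  S = diag(2, 1) and the 30-degree rotation are assumed
# example values.
phi = math.radians(30)
c, s = math.cos(phi), math.sin(phi)
S = (2.0, 1.0)
R = ((c, -s), (s, c))
M = ((R[0][0] * S[0], R[0][1] * S[1]),      # M = R*S (S scales the columns of R)
     (R[1][0] * S[0], R[1][1] * S[1]))

def mat_vec(m, v):
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def support(D):
    """Furthest point of the ellipse {M*u : |u| = 1} in world direction D."""
    Mt = ((M[0][0], M[1][0]), (M[0][1], M[1][1]))
    a = mat_vec(Mt, D)                       # Transpose(M)*D
    n = math.hypot(a[0], a[1])
    return mat_vec(M, (a[0] / n, a[1] / n))  # P = M*Normalized(...)
```

A brute-force check (maximizing D.x over densely sampled ellipse points) agrees with the formula to within sampling error.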

EDIT : obviously, for the iterative method it would be worth working in the local coords of one of the ellipsoids (scaling kept). This would cut half of the matrix*vector operations. You can't do it with concatenated matrices: doing M2'=(M1^-1)*M2 would not work, because you would be in a non-Euclidean 'unit sphere' space, so you could not check any metrics. With R and S split, you'd only need to transform back the contact point and normal when the algo ends.


[Edited by - Charles B on October 5, 2004 10:27:41 AM]
"Coding math tricks in asm is more fun than Java"
Quote:Original post by Charles B
Observations on oscillations :

- You are close to the solution at this point. But close enough to be sure to decide whether to reject contact or to say there is a contact (metric tolerance)? That's the question.

I can see quite well that you may fall into a quasi-symmetric scheme, especially if numerical precision comes into play and biases the recursion so that it gets stuck in a pseudo 'parallel case': (A,B) gives (C,D) gives (A,B).

The original picture that backed my intuition in 2D was two segments (AC, BD) (thus memorizing step n-1 at rank n) on both sides that would constantly become two subsegments of the previous two, up to the point where they become quasi-punctual.

Let's simplify the problem... say it's 2D, and we find the distance between a circle (an ellipse that is very much like a circle) and a point using that iterative method.

So if the circle center is C, the radius is r, and the point is at [0,0],
at step n we have
D(n+1)=D(n)*W1+P(n)*W2
P(n+1)=C-r*D(n)/|D(n)|

where W1 and W2 are the stability coefficients (both = 0.5 if we use the averaging trick).

So by solving for
D1=D0*W1+P0*W2
P1=C-r*D0/|D0|
D0=D1*W1+P1*W2
P0=C-r*D1/|D1|

we can find all the oscillations.
So, after substitution, we have to solve
D0=(D0*W1+P0*W2)*W1+(C-r*D0/|D0|)*W2
P0=C-r*(D0*W1+P0*W2)/|D0*W1+P0*W2|

I'm pretty sure it sometimes has several solutions.

Other method: make a drawing. I think that if we have found a closest point P whose normal has a small angular error a, the direction from [0,0] to that point has an angular error roughly equal to -a*r/(|C|-r) (|C|-r is the distance to the circle). At the next iteration we will have a direction with an error equal to
W1*a-W2*(a*r/(|C|-r))
and if its absolute value is bigger than a, we will have increasing oscillations.
So we need
a>|W1*a-W2*(a*r/(|C|-r))|

Let a be positive at that step, W1<1. The worst case is then
a>W2*(a*r/(|C|-r))-W1*a
1>W2*r/(|C|-r)-W1
W1>W2*r/(|C|-r)-1

Probably there was a mistake somewhere, but I'm gonna sleep.

where |C| is the length of C

As for the test code, it's very ugly, so I'll post it once I've made it look a bit better.
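In the meantime, here's a minimal numeric sketch of the circle-vs-point recursion above. C = (3,4), r = 1 and W1 = W2 = 0.5 are assumed values; with them r/(|C|-r) = 1/4, so the stability condition is comfortably met and the iteration should settle on the closest point C*(1 - r/|C|) = (2.4, 3.2):

```python
import math

# Numeric sketch of the recursion: D(n+1) = D(n)*W1 + P(n)*W2,
# P(n+1) = C - r*D(n)/|D(n)|.  C = (3, 4), r = 1, W1 = W2 = 0.5 are
# assumed example values; the fixed point is the closest point of the
# circle to the origin, C*(1 - r/|C|) = (2.4, 3.2).
C, r = (3.0, 4.0), 1.0
W1 = W2 = 0.5

D = (1.0, 0.0)       # arbitrary starting direction
P = (0.0, 0.0)
for _ in range(100):
    n = math.hypot(D[0], D[1])
    P_next = (C[0] - r * D[0] / n, C[1] - r * D[1] / n)  # P(n+1) = C - r*D(n)/|D(n)|
    D = (W1 * D[0] + W2 * P[0], W1 * D[1] + W2 * P[1])   # D(n+1) = D(n)*W1 + P(n)*W2
    P = P_next
```

With these weights P converges to (2.4, 3.2) without oscillating.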
First the conclusions of what I have done today :

I have tested the code in 3D (with GL and GLFW). With z=0, I could also test it in 2D. Halving does not work, but with weights 0.95/0.05 it converged in all my test cases to an error < 2^-24 in 7-9 iterations. It looks solid, but I haven't tried to prove it a.t.m.

Still in 3D (adding the z components), it does not converge. In most cases it gets close to the min distance and then more or less converges to an area of higher distance.

I am not really surprised: my intuition was for the 2D picture I had in mind. I anticipated that the 'segments' should be replaced by closed surface patches on both ellipsoids. It should most probably be based on steradians, here projections of a triangle onto a sphere. (For instance, start with the three local axes +X, +Y and +Z of one ellipsoid, when the center of the other lies in its all-positive canonical octant.) So a solid analogy of the 2D algo in 3D is certainly complex to create.

I'll try to comment on your last post tomorrow.

[Edited by - Charles B on October 6, 2004 4:29:17 AM]
"Coding math tricks in asm is more fun than Java"
What do you think of a more physics-based algo? A kind of spring constrained to the two surfaces. It would remain somewhat the same algo, except convergence would certainly be tackled more solidly.

It would be a recursion on points An, Bn.

The movement of the point would be constructed, by small steps, from the local tangent plane and the elongated 'spring' AB.

An+1 = constrain( An + k*project( AB, perp(N(A)) ) )

constrain(A) is some kind of renormalization so that the point remains on the surface.

N being the normal at A. k may be a constant or variable. Probably 1/2 would do :

*  *  *  *  *  *  *  *
*        B3 B4       *
*         x.x...     *
*          \|        *
*           |        *
*           |\       *
*        ...x.x      *
*          A4  A3    *
*  *  *  *  *  *  *  *

k = 1/2. When very close to the limit points, the surfaces are quasi-flat.
Obviously it would not oscillate, diverge or miss the limit point. Right?

Maybe at the start, far from the limit, when the steps would be too great for the surfaces to be well approximated by planes, k could be smaller. This needs some empirical tests. This time I am pretty confident it's solid and efficient.
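Here is a tiny sketch of this spring recursion on two circles, as assumed stand-ins for the ellipsoids: centers (0,0) and (5,0), both radius 1, so the closest pair is (1,0) and (4,0). k = 1/4 is used rather than 1/2, erring on the small side as suggested:

```python
import math

# Sketch of A(n+1) = constrain(A(n) + k*project(AB, perp(N(A)))) on two
# circles (assumed stand-ins for the ellipsoids): centers (0, 0) and (5, 0),
# both radius 1.  The closest pair should converge to (1, 0) and (4, 0).
CA, CB, rad = (0.0, 0.0), (5.0, 0.0), 1.0
k = 0.25   # conservatively small step factor

def step(p, center, target):
    nx, ny = (p[0] - center[0]) / rad, (p[1] - center[1]) / rad  # normal N(p)
    sx, sy = target[0] - p[0], target[1] - p[1]                  # 'spring' AB
    d = sx * nx + sy * ny
    tx, ty = sx - d * nx, sy - d * ny       # project(AB, perp(N)): tangent part
    qx, qy = p[0] + k * tx, p[1] + k * ty   # small step along the tangent plane
    n = math.hypot(qx - center[0], qy - center[1])
    return (center[0] + rad * (qx - center[0]) / n,  # constrain: back on the circle
            center[1] + rad * (qy - center[1]) / n)

A, B = (0.0, 1.0), (5.0, 1.0)
for _ in range(300):
    A, B = step(A, CA, B), step(B, CB, A)   # both points updated simultaneously
```

With these particular circles the pair settles on (1,0) and (4,0); how large k can safely be depends on the gap between the surfaces, which fits the remark that k may need to be smaller when far from the limit.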
"Coding math tricks in asm is more fun than Java"
Quote:Original post by Charles B
What do you think of a more physics-based algo? A kind of spring constrained to the two surfaces. It would remain somewhat the same algo, except convergence would certainly be tackled more solidly.

It would be a recursion on points An, Bn.

The movement of the point would be constructed, by small steps, from the local tangent plane and the elongated 'spring' AB.

An+1 = An + k*project( AB, perp(N(A)) )

N being the normal at A. k may be a constant or variable.

Yes, I thought about it.

I even got an idea: if after a spring step we get a distance bigger than the previous distance, we halve our step size and try again, so the distance [hopefully] always gets smaller. Or alternatively, we may try halving our "weight" in the previous algo until we get a smaller distance.

In fact, to find the minimal distance from the point (0,0,0) to the ellipsoid
M*A+V
we need to find a unit-length A such that
M*A+V
is 'anti'parallel to the normal of the ellipsoid at point A.

How to find the normal of the ellipsoid:
I've found before that for a given normal N
A=Normalized(Transpose(M)*N) = c*Transpose(M)*N

And from there the normal is given by
N = c*Transpose(M)^-1*A
so we need to find A such that
M*A+V = k*(Transpose(M)^-1*A)
so
Transpose(M)*M*A+Transpose(M)*V=k*A
so, with M2 = Transpose(M)*M,
M2*A+Transpose(M)*V=k*A
M2*A-k*A+Transpose(M)*V=0

(M2-k*Id)*A=-Transpose(M)*V

A=(M2-k*Id)^-1*(-Transpose(M)*V)
so we need to find k such that A is unit-length.
This can be done using binary search, or maybe even analytically.
So, how this can be used for ellipsoid-to-ellipsoid: we can do several steps for ellipsoid 2 against a point on ellipsoid 1, then for ellipsoid 1 against a point on ellipsoid 2, and so on, alternating.
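A 2D sketch of the binary-search recipe, with an assumed diagonal M = diag(2,1) and V = (3,1). With a diagonal M the solve is componentwise, and |A(k)| grows monotonically as k approaches the smallest eigenvalue of M2 from below, so plain bisection works; the smallest-k branch appears to give the nearest point:

```python
import math

# 2D sketch of the k-search: ellipse {M*A + V : |A| = 1} with assumed
# M = diag(2, 1) and V = (3, 1).  Solve A = (M2 - k*Id)^-1 * (-Transpose(M)*V)
# for the k (below the smallest eigenvalue of M2 = Transpose(M)*M) that makes
# A unit-length; then M*A + V is the closest point of the ellipse to the origin.
M = (2.0, 1.0)            # diagonal of M, so M2 = diag(4, 1)
V = (3.0, 1.0)

def A_of_k(k):
    # (M2 - k*Id) is diagonal here, so the solve is componentwise
    return (-M[0] * V[0] / (M[0] ** 2 - k),
            -M[1] * V[1] / (M[1] ** 2 - k))

# |A(k)| increases monotonically with k on (-inf, min eigenvalue), so bisect.
lo, hi = -1e6, min(M[0] ** 2, M[1] ** 2) - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    a = A_of_k(mid)
    if a[0] * a[0] + a[1] * a[1] < 1.0:
        lo = mid          # A too short: move k up toward the eigenvalue
    else:
        hi = mid
A = A_of_k(0.5 * (lo + hi))
P = (M[0] * A[0] + V[0], M[1] * A[1] + V[1])   # closest point, about (1.09, 0.70)
```

A brute-force scan over the ellipse confirms P minimizes the distance to the origin for these values.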
Quote:Original post by Charles B
What do you think of a more physics-based algo? A kind of spring constrained to the two surfaces. It would remain somewhat the same algo, except convergence would certainly be tackled more solidly.

It would be a recursion on points An, Bn.

The movement of the point would be constructed, by small steps, from the local tangent plane and the elongated 'spring' AB.

An+1 = constrain( An + k*project( AB, perp(N(A)) ) )

constrain(A) is some kind of renormalization so that the point remains on the surface.

N being the normal at A. k may be a constant or variable. Probably 1/2 would do :

*  *  *  *  *  *  *  *
*        B3 B4       *
*         x.x...     *
*          \|        *
*           |        *
*           |\       *
*        ...x.x      *
*          A4  A3    *
*  *  *  *  *  *  *  *

k = 1/2. When very close to the limit points, the surfaces are quasi-flat.
Obviously it would not oscillate, diverge or miss the limit point. Right?

Maybe at the start, far from the limit, when the steps would be too great for the surfaces to be well approximated by planes, k could be smaller. This needs some empirical tests. This time I am pretty confident it's solid and efficient.

As I understand it, you mean we just assume the ellipsoid is flat, find the closest point as if it were a flat plane, then halve the difference, then kind of renormalize to place the point back on the surface.
But I just realised it's _almost_ our previous oscillating method with stability constants, but in reverse!!!!
The prev. method worked like:
Have a point Pn, find the next point whose normal is antiparallel to a direction based on Pn.
Make a weighted average of that direction with the previous direction, for stability.

And the new one will work like:
Have a normal Nn. Find a point Pn+1 such that the closest line is [imprecisely] parallel to Nn. Make a weighted average of the new point with the previous one, for stability.

If something doesn't converge at all, and in fact does the opposite, we just need to do it in reverse. Heheheheehe [wink]
hey guys,

here's an interesting image of stacks of ellipsoids, and also of M&M's. However, I don't know which algorithm they've used. (Donev, Stillinger et al. in Science)

ellipsoids

[Edited by - Airo on October 6, 2004 10:58:20 AM]
@Airo :
Thanks. Looks impressive. But surely not runtime. Still, I now think it can be possible to use ellipsoids at runtime: great tight volume approximations (e.g. body parts), and possibly as efficient as OBBoxes or small convexes. If that's true, our efforts here could be paid off. I hope Oii, Eeco or Dmytry would like to make a demo. I could clean up the algo from a math POV and write some math routines, even SIMD-optimized. The rest is all about using sphere trees or any O(n*log(n)) partition algo.


@Dmytry:

But, I just realised it's _almost_ our previous oscillating method with stability constants, but in reverse!!!!

Yes in the sense that a duality is exploited in reverse order.


The previous method was recursing on axes (<=> planes, normals), which is analogous to the separating-axis search used for convex polyhedra.
Nn -> (An, Bn) -> Nn+1 -> (An+1, Bn+1)
It required the function FurthestPoint(Normal).

The second method recurses on pairs of points (<=> segments, pseudo spring).
(An, Bn) and (N(An), N(Bn)) -> (An+1, Bn+1)
It requires GetNormal(Point).


Both functions come from the gradient. Thus the duality also comes from the differentiation theory of implicit iso-surfaces. The point <-> tangent plane mapping P |-> P* is:
- invertible, thus Pa=Qa <=> Pa*=Qa* (same ellipsoid)

... so that theorems for the original figure can be immediately applied to the reciprocal figure after suitable modification:
- For two ellipsoids A and B, Pa and Pb are in contact <=> Pa*=Pb*. (viewed as planes, not simply normals)
It's also possible that order relations are preserved between distances and angles (not demonstrated) :
- dist(Pa, Pb) < dist(Qa, Qb) <=> Pa*.Pb* > Qa*.Qb*


Note that for a sphere (radius R, centered at the origin) the duality is immediate:
P(x,y,z) |-> N(P) = P/R
The tangent plane at P is the set of points Q such that:
P.Q = R*R
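A quick numeric check of that tangent-plane characterization in 2D (R = 2 and the sample angles are assumed values; Q is sampled along the tangent direction at several points P):

```python
import math

# Check that the tangent line at P on a circle of radius R (centered at the
# origin) is exactly the set of points Q with P.Q = R*R.  R = 2 and the
# sample angles/offsets are assumed values.
R = 2.0
dev = 0.0
for theta in (0.3, 1.1, 2.8):
    P = (R * math.cos(theta), R * math.sin(theta))
    T = (-math.sin(theta), math.cos(theta))      # tangent direction at P
    for t in (-5.0, 0.0, 7.0):
        Q = (P[0] + t * T[0], P[1] + t * T[1])   # a point of the tangent line
        dev = max(dev, abs(P[0] * Q[0] + P[1] * Q[1] - R * R))
```

The deviation stays at floating-point noise level, since P.Q = P.P + t*(P.T) = R*R exactly.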


But for ellipsoids, since in the end we want a min distance between two points, it's quite logical that the algo is based on points, and not on their duals (tangent planes).


For ellipsoids, the duality is given by the gradient. In the local axes (no R no T) :

Grad(P)=2*S^-2*P <=> N(P)=normalize(S^-2*P).

So it's not linear (because of the normalize), but it is invertible, and the square of the scaling matrix comes into play.
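A quick 2D illustration of that relation (assumed S = diag(2,1)): the normal computed as normalize(S^-2*P) is orthogonal to the parametric tangent of the ellipse:

```python
import math

# For the ellipse P(t) = (S1*cos t, S2*sin t) in local axes (assumed
# S = diag(2, 1)), the normal N(P) = normalize(S^-2 * P) should be
# orthogonal to the tangent P'(t) = (-S1*sin t, S2*cos t).
S = (2.0, 1.0)
worst = 0.0
for i in range(8):
    t = i * math.pi / 4 + 0.1
    P  = (S[0] * math.cos(t), S[1] * math.sin(t))
    Tg = (-S[0] * math.sin(t), S[1] * math.cos(t))   # tangent direction
    G  = (P[0] / S[0] ** 2, P[1] / S[1] ** 2)        # S^-2 * P (gradient direction)
    n  = math.hypot(G[0], G[1])
    N  = (G[0] / n, G[1] / n)                        # N(P) = normalize(S^-2 * P)
    worst = max(worst, abs(N[0] * Tg[0] + N[1] * Tg[1]))
```

The dot product N.Tg cancels exactly, so `worst` is at floating-point noise level.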
"Coding math tricks in asm is more fun than Java"
This physical method also occurred to me. I visualize it as two critically damped roller bearings on the insides of the ellipses, with a spring between them.

This would converge in all cases, indeed, but I'm not sure about its efficiency. As you noted Charles, if you find a new position, it will be off the surface of the ellipse, so your next iteration will start as if from a bigger ellipse if you don't normalize it, and of course normalizing is never fun.

However, I don't think it's necessary to normalize with high accuracy. You can use a very cheap sqrt approximation, and the error won't accumulate, since the point gets pulled back to the surface more or less each iteration.
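One classic cheap approximation of that kind is the bit-trick reciprocal square root with a single Newton step (the well-known 0x5f3759df constant). The Python below only illustrates the idea and its accuracy; in C it's a couple of float/int casts:

```python
import struct

def fast_inv_sqrt(x):
    """Approximate 1/sqrt(x): bit-level initial guess + one Newton-Raphson step."""
    i = struct.unpack('<I', struct.pack('<f', x))[0]   # reinterpret float bits as int
    i = 0x5f3759df - (i >> 1)                          # magic initial guess
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    return y * (1.5 - 0.5 * x * y * y)                 # one Newton step refines it

# e.g. renormalizing a vector with it:
vx, vy = 3.0, 4.0
inv = fast_inv_sqrt(vx * vx + vy * vy)
nx, ny = vx * inv, vy * inv
```

After the single Newton step the relative error stays well under 1%, which is indeed plenty for a renormalization that gets corrected on every iteration anyway.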

This topic is closed to new replies.
