Constraints and linear programming

76 comments, last by woodae 14 years, 8 months ago
Quote:Original post by Dave Eberly
Quote:Original post by DonDickieD
Look at Erin Catto's first presentation at the GDC and e.g. Anitescu PhD (I think it is not freely available - contact me if you cannot find it). Erin's presentation can be found here:

http://www.gphysics.com/files/IterativeDynamics.pdf


Thanks, I already have Catto's PDF. I have looked a long time at Anitescu's web site, but cannot see anything labeled as his PhD thesis (the university page for graduated students has no links). I do not mind paying to download the thesis, but I have to find a link and a way to do this first :)


This page has Anitescu's 1997 University of Iowa PhD thesis as "Multi-Rigid Body Dynamics with Contact and Friction" under Florian Potra.

So, I'm just guessing here, but probably any of Anitescu's paper with a similar title will contain the same results. That is, after all, how the "academic method" works: you publish the same thing in as many journals as possible, under a similar (but not quite identical) name, to get as much publishing credit as possible.


If you search for that title, you find e.g. this ppt by Anitescu which references 'Anitescu & Potra, 1996' for "Newton's law with impulses and velocities" (as well as Stewart & Trinkle, 1995).

Guessing again, that 1996 reference is either of these (both an off-by-one from that citation):

38. M. Anitescu and F. A. Potra, Formulating dynamic multi-rigid-body contact problems with friction as solvable Linear Complementarity Problems, Nonlinear Dynamics, 14, pp. 231–247, 1997.
39. M. Anitescu, J. F. Cremer and F. A. Potra, Formulating 3D Contact Dynamics Problems, Mechanics of Structures and Machines, 24(4), November 1996.

Assume you have a distance constraint between two particles. Also assume that we can only push the particles apart.

C(x1, x2) = |x2 - x1| - L = 0
dC/dt = (x2 - x1) / |x2 - x1| * (v2 - v1)

Define n = (x2 - x1) / |x2 - x1| -> dC/dt = n * (v2 - v1)


We apply impulses P1 and P2:

(1) v1' = v1 + P1 / m1
(2) v2' = v2 + P2 / m2

such that the post-velocities v1' and v2' should satisfy the velocity constraint dC/dt:

(3) n * (v2' - v1') = 0

We also know the impulse direction (results from the d'Alembert Principle):

(4) P1 = -lambda * n
(5) P2 = lambda * n

5 equations for 5 unknowns (v1', v2', P1, P2, lambda). Now we solve for lambda using a more compact vector notation:

v' = v + P / M
J * v' = 0
P = JT * lambda

<=> J * M^-1 * JT * lambda = -J * v

The above equation finds an impulse lambda such that the relative velocities along n are cancelled out. Now let's form an LCP from this:

v_rel' = J * M^-1 * JT * lambda + J * v >= 0 _and_
lambda >= 0 _and_
v_rel' * lambda = 0

If the particles are drifting apart, lambda becomes less than zero and we ignore the impulse. If the particles are approaching each other, we apply a positive impulse lambda such that v_rel' becomes zero.

v_rel' is the relative velocity after we apply the impulse. J * v is the relative velocity before we apply the impulse. If the relative velocity is greater than zero, lambda becomes zero. Otherwise we compute a lambda such that the relative velocity after applying the impulse becomes zero.

Note that LCPs and sequential impulses are mathematically equivalent. Sequential impulses are basically the physical interpretation of what projected Gauss-Seidel does when solving an MLCP: it applies impulses sequentially.
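To make the clamping concrete, here is a minimal Python sketch of the above for a single contact between two 2D particles (my illustration, not code from the thread; the function name and tuple-based vectors are just for demonstration):

```python
import math

def solve_contact(x1, x2, v1, v2, m1, m2):
    """One velocity-level contact constraint between two 2D particles.

    Returns the clamped impulse lambda and the post-impulse velocities.
    A sketch of the derivation above, not production code.
    """
    # Contact normal n = (x2 - x1) / |x2 - x1|
    dx, dy = x2[0] - x1[0], x2[1] - x1[1]
    length = math.hypot(dx, dy)
    n = (dx / length, dy / length)

    # Relative normal velocity J * v = n . (v2 - v1)
    jv = n[0] * (v2[0] - v1[0]) + n[1] * (v2[1] - v1[1])

    # Effective mass J * M^-1 * JT reduces to 1/m1 + 1/m2 here
    k = 1.0 / m1 + 1.0 / m2

    # Solve J * M^-1 * JT * lambda = -J * v, then clamp:
    # contacts can only push, so lambda >= 0
    lam = max(0.0, -jv / k)

    # Apply impulses P1 = -lambda * n and P2 = +lambda * n
    v1p = (v1[0] - lam * n[0] / m1, v1[1] - lam * n[1] / m1)
    v2p = (v2[0] + lam * n[0] / m2, v2[1] + lam * n[1] / m2)
    return lam, v1p, v2p
```

For this single-constraint case the effective mass J * M^-1 * JT collapses to the scalar 1/m1 + 1/m2, which is why no matrix code is needed.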


HTH,
-Dirk


Quote:Original post by Christer Ericson
This page has Anitescu's 1997 University of Iowa PhD thesis as "Multi-Rigid Body Dynamics with Contact and Friction" under Florian Potra.


Thanks. I had found the reference, just not a link to a copy of it. The PPT file at the link you posted is, well, a PPT file--not much detail. Usually, a thesis has a lot more details than you would find in a research publication or presentation, so I was hoping to read the original.

Quote:
So, I'm just guessing here, but probably any of Anitescu's paper with a similar title will contain the same results. That is, after all, how the "academic method" works: you publish the same thing in as many journals as possible, under a similar (but not quite identical) name, to get as much publishing credit as possible.


We called this practice "splitsmanship" :) I never practiced it and I don't miss the publish-or-perish environment. I'll track down one of Anitescu's earlier papers and see what I can find.
Quote:Original post by Numsgil
Gaussian Elimination, from my understanding, solves for the inverse of the matrix and then uses it to solve for the unknowns.


Thanks for your reply, Numsgil. But FYI -- and, apologies for this somewhat off-topic parenthetical discussion in this thread -- this is not correct. Solving a single system of equations with Gaussian elimination does not involve constructing an inverse. Just the opposite: You compute the inverse with GE by solving n equations (one for each element of the natural basis).
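To illustrate the point, here is a small pure-Python sketch of Gaussian elimination with partial pivoting that solves A x = b directly, never forming A^-1 (a hypothetical helper of my own, not something from the discussion):

```python
def gauss_solve(a, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.

    Works on copies of the inputs and never constructs the inverse.
    A plain-Python sketch, not a tuned numerical routine.
    """
    n = len(b)
    a = [row[:] for row in a]  # copy so the caller's data is untouched
    b = b[:]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        # Eliminate entries below the pivot
        for row in range(col + 1, n):
            f = a[row][col] / a[col][col]
            for k in range(col, n):
                a[row][k] -= f * a[col][k]
            b[row] -= f * b[col]
    # Back substitution on the resulting upper-triangular system
    x = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = b[row] - sum(a[row][k] * x[k] for k in range(row + 1, n))
        x[row] = s / a[row][row]
    return x
```

Computing the inverse with this routine would mean calling it n times, once per column of the identity, which is exactly the "solve n equations" construction described above.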

Quote:If you actually need the real inverse of the matrix, both methods are essentially the same. They're both O(n^3) operations (no free lunch), but it just so happens that if you just want to solve a system of equations, LU decomposition is a constant order of magnitude faster (3x faster) than finding the inverse.


AFAIK, using LU decomposition to compute an nxn inverse is O(n^3). Using Gaussian elimination to do so is O(n^4).

Quote:Chapter 2 of Numerical Recipes in C goes over all of this (plus way more I haven't even looked at yet) and provides some sample implementations.


Thanks for the reference. The Numerical Recipes people make me sad though: The books used to be available for free online as PDFs, but now you need to install some really suspicious spyware-looking plugin for Adobe Reader to enforce some kind of DRM.

----

Anyway, hopefully I haven't distracted too much from the discussion at hand. I'm observing because I'm curious about collision handling too. One problem that is really bothering me is the simulation of Newton's Cradle.
Quote:Original post by Emergent
Quote:Original post by Numsgil
Gaussian Elimination, from my understanding, solves for the inverse of the matrix and then uses it to solve for the unknowns.


Thanks for your reply, Numsgil. But FYI -- and, apologies for this somewhat off-topic parenthetical discussion in this thread -- this is not correct. Solving a single system of equations with Gaussian elimination does not involve constructing an inverse. Just the opposite: You compute the inverse with GE by solving n equations (one for each element of the natural basis).


Ah, sorry. I haven't played with Gaussian Elimination since high school. Sort of skipped the section on that in Numerical Recipes and went straight to LU.

Quote:
Anyway, hopefully I haven't distracted too much from the discussion at hand. I'm observing because I'm curious about collision handling too. One problem that is really bothering me is the simulation of Newton's Cradle.


It's what I'm playing with right now. I'm pretty sure you can construct a solver for Newton's cradle using just local information, and solve the system globally at one time. Here's a powerpoint of what I've done so far (you might need TexPoint to see the LaTeX render properly). The slides are wrong, though, in claiming that the method as presented will properly solve Newton's cradle. The result will be physically correct, but it basically underestimates the restitution of the bodies. My guess is that you need to modify the right hand side (the 'b' terms in the ppt) somehow.

Anyway, since you're working on the same thing, give it a look over and let me know any thoughts. That goes for anyone else interested as well. The 'j' terms in the ppt are (pretty sure) the same as the lambda terms for constraint solving, just arrived at from a different direction. Likewise the Omega terms are basically J * M^-1 * JT, and the 'b' terms are basically J * v.
Darwinbots - Artificial life simulation
Quote:Original post by DonDickieD
Assume you have a distance constraint between two particles. Also assume that we can only push the particles apart.

C(x1, x2) = |x2 - x1| - L = 0
dC/dt = (x2 - x1) / |x2 - x1| * (v2 - v1)

Define n = (x2 - x1) / |x2 - x1| -> dC/dt = n * (v2 - v1)


We apply impulses P1 and P2:

(1) v1' = v1 + P1 / m1
(2) v2' = v2 + P2 / m2

such that the post-velocities v1' and v2' should satisfy the velocity constraint dC/dt:

(3) n * (v2' - v1') = 0

We also know the impulse direction (results from the d'Alembert Principle):

(4) P1 = -lambda * n
(5) P2 = lambda * n

5 equations for 5 unknowns (v1', v2', P1, P2, lambda). Now we solve for lambda using a more compact vector notation:

v' = v + P / M
J * v' = 0
P = JT * lambda

<=> J * M^-1 * JT * lambda = -J * v

The above equation finds an impulse lambda such that the relative velocities along n are cancelled out. Now let's form an LCP from this:

v_rel' = J * M^-1 * JT * lambda + J * v >= 0 _and_
lambda >= 0 _and_
v_rel' * lambda = 0

If the particles are drifting apart, lambda becomes less than zero and we ignore the impulse. If the particles are approaching each other, we apply a positive impulse lambda such that v_rel' becomes zero.

v_rel' is the relative velocity after we apply the impulse. J * v is the relative velocity before we apply the impulse. If the relative velocity is greater than zero, lambda becomes zero. Otherwise we compute a lambda such that the relative velocity after applying the impulse becomes zero.

Note that LCPs and sequential impulses are mathematically equivalent. Sequential impulses are basically the physical interpretation of what projected Gauss-Seidel does when solving an MLCP: it applies impulses sequentially.


Right, I follow that. Basically the same thing as Erin Catto's GDC presentations.

So suppose you let Omega (hereafter O) = J * M^-1 * JT. And let b = -(J * v + bias(t)). And let's use j instead of lambda. You then have O * j = b. As long as this is the only constraint you want to solve, you can do something like j = b / O. Then apply the constraints to j as appropriate.

But suppose you want to solve constraints 2 at a time. Then you get something more like:

O11 * j0 + O12 * j1 = b0
O21 * j0 + O22 * j1 = b1

If the constraints are between the same bodies (that is, you're solving a contact manifold), O12 = O21. The O matrix is symmetric, so you can solve it as easy as finding its transpose.

But I've been using equalities so far on the 2x2 system. What I want to do now is apply arbitrary constraints on j0, j1. Maybe I want j0 <= 0 and j1 free. Or maybe j0, j1 >= 0. Or maybe 10 <= j0 <= 50 and -j0 <= j1 <= j0. You can't just solve the system presented here and then clamp j0, j1 at the end, because changing j0 will change j1, and vice versa.

LCP seems to be the answer here, but I don't get how to format the problem. I get how to solve a single equation, but its relationship to LCP eludes me. Probably a case of over familiarity.

[Edited by - Numsgil on July 14, 2009 11:38:19 AM]
Darwinbots - Artificial life simulation
First, the inverse of a symmetric matrix is NOT equal to its transpose! Also, the matrix omega, which is sometimes called the effective mass matrix, is positive definite.

Let's use a common notation so others can follow as well:

Let A be the (inverse) effective mass matrix: A = J * W * JT
Let b be the errors in the velocity constraints: b = -J * v
Let x be the impulses we need to correct the velocity errors: x = lambda

a11 * x1 + a12 * x2 = b1
a21 * x1 + a22 * x2 = b2

This would be a linear system that can easily be solved with your method of choice. Now assume that we actually have an LCP. The system to solve then becomes:

w1 = a11 * x1 + a12 * x2 - b1 >= 0
w2 = a21 * x1 + a22 * x2 - b2 >= 0

_and_

x1 >= 0, x2 >= 0

_and_

x1 * w1 = 0, x2 * w2 = 0


How to solve such a system is explained in the webbook I posted here (right in the first chapter). Look for the "Direct Enumeration Method". The author works through examples exactly like the one above, with pictures and everything. I couldn't do it any better here.

Once you understand this, you can look at Erin's contact block solver for another example. The code is very simple to follow and understand, and it gives you an implementation example. Maybe look at these links first and then ask some more concrete questions. I know this is a hard topic and it takes some time to learn this stuff. You are well on your way!

Cheers,
-Dirk

Quote:Original post by DonDickieD
First, the inverse of a symmetric matrix is NOT equal to its transpose!


Oops :P Don't know where I got that from.

Quote:Also, the matrix omega, which is sometimes called the effective mass matrix, is positive definite.

Let's use a common notation so others can follow as well:

Let A be the (inverse) effective mass matrix: A = J * W * JT
Let b be the errors in the velocity constraints: b = -J * v
Let x be the impulses we need to correct the velocity errors: x = lambda

a11 * x1 + a12 * x2 = b1
a21 * x1 + a22 * x2 = b2

This would be a linear system that can easily be solved with your method of choice. Now assume that we actually have an LCP. The system to solve then becomes:

w1 = a11 * x1 + a12 * x2 - b1 >= 0
w2 = a21 * x1 + a22 * x2 - b2 >= 0

_and_

x1 >= 0, x2 >= 0

_and_

x1 * w1 = 0, x2 * w2 = 0


How to solve such a system is explained in the webbook I posted here (right in the first chapter). Look for the "Direct Enumeration Method". The author works through examples exactly like the one above, with pictures and everything. I couldn't do it any better here.

Once you understand this, you can look at Erin's contact block solver for another example. The code is very simple to follow and understand, and it gives you an implementation example. Maybe look at these links first and then ask some more concrete questions. I know this is a hard topic and it takes some time to learn this stuff. You are well on your way!

Cheers,
-Dirk


Ah, okay, I'm starting to make sense of this.

Let me try and make a bridge between the webbook you linked and what you just posted.

Let:
M = A = inverse effective mass matrix
z = x = vector of final impulse terms to apply to the constraints
q = b = vector of initial conditions

From the webbook, LCP attempts to find w,z such that:

w - Mz = q
w, z complementary

Or, to put it in slightly more verbose notation:

Iw + (-M)z = q

that is, it's sort of a special lerp.

So far so good? Now let's suppose I've plugged in my M (A) and q (b) into the LCP solver I'm using (some monstrous black box contraption), and it has spit out either a (w,z) pair or indicated some sort of error.

Does w have any special significance? If I read things right, its sole purpose is to act as a slack variable to make the equation work. Once you have your z (x) vector of impulses, you just discard the w (w representing basically the degree to which the constraints had to be clamped).

Right?
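To make the role of w concrete, here is a small Python sketch (my own illustration, using the webbook's w = Mz + q convention): given M, q, and a candidate z, it recomputes w and checks the LCP conditions, showing that w carries no information beyond z and can indeed be discarded:

```python
def check_lcp(m, q, z, tol=1e-9):
    """Verify that z solves the LCP (M, q): with w = M z + q,
    require w >= 0, z >= 0, and w_i * z_i = 0 for every i.

    Since w is fully determined by z, it acts purely as a slack
    variable; once z checks out, w can be thrown away.
    """
    n = len(q)
    # Recompute the slack vector w = M z + q
    w = [sum(m[i][j] * z[j] for j in range(n)) + q[i] for i in range(n)]
    ok = all(wi >= -tol for wi in w) and all(zi >= -tol for zi in z)
    # Complementarity: at each index, w or z (or both) must vanish
    ok = ok and all(abs(wi * zi) <= tol for wi, zi in zip(w, z))
    return ok, w
```

A solver's (w, z) output can be validated this way, and a z alone is enough to reconstruct w.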

Okay, now a more open question. Let's take again a 2x2 system:

A x = b

And impose constraints of the form x0 <= 0 and -u * x0 <= x1 <= u * x0. If you graph the constraints (x0 = x, x1 = y), the first constraint says the answer must be in the second or third quadrant. The first part of the second constraint is a line with slope -u passing through the origin. The second part of the second constraint is a line with slope u passing through the origin as well.

Because all the constraints are inequalities, we can find their union and construct a "feasible region".

Here's my best attempt at a picture:

[ASCII-art sketch of the feasible region and the unconstrained solution x; garbled in the original post]


(edit: had the feasible region wrong in my picture)
(edit2: arg, the picture is wrong, I'll try again in a bit)

Here f's represent the feasible region. It extends to and includes the origin. x represents the solution to the unconstrained system.

Am I right that the LCP solver is basically finding the closest point in the feasible region to x? Both problems are special cases of quadratic programming, so my guess is that they're both just slightly different ways of addressing the same problem.

GJK is also a special case of quadratic programming. Ignoring for a moment the fact that the feasible region is unbounded along one direction, could you use GJK to find the closest point in the feasible region to x? Would that be the same answer as what came out of an LCP solver?

If so, that might be an interesting direction to present LCP solving from: as an extension of GJK (or vice versa). They're both pretty complex topics with lots of special vocabulary (support mapping, "feasible region", etc.), and combining them together would be pretty cool.

[Edited by - Numsgil on July 14, 2009 1:56:01 PM]
Darwinbots - Artificial life simulation
You might be interested in the presentations by David Wu concerning (sadly now defunct) PseudoInteractive's simulator:

http://www.pseudointeractive.com/files/DesignPatternsForInteractivePhysicsGDC2001.ppt
http://www.pseudointeractive.com/files/CelDamagePhysicsGDC2002.ppt
http://www.pseudointeractive.com/files/PhysicsAndAnimationGDC2003.doc
http://www.pseudointeractive.com/files/CharacterPhysicsAndAnimationGameTech2004.ppt

Of note is that the constraint solving is done "non-iteratively", for instance via the Conjugate Gradient method (he describes various methods which were attempted and which worked best on each platform). This is somewhat confusing since AFAIK Conjugate Gradient performs a series of steps, but isn't iterative in the same sense as something like Bullet/Box2D-style -- the system is solved as a big matrix rather than row-by-row.
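For reference, a bare-bones conjugate gradient solver is only a few lines. This Python sketch (mine, not from Wu's slides) solves A x = b for a symmetric positive-definite A, which the effective mass matrix is:

```python
def conjugate_gradient(a, b, iters=50, tol=1e-10):
    """Solve A x = b for symmetric positive-definite A by the
    conjugate gradient method. A pure-Python sketch of the kind of
    whole-system solve mentioned above; A is a dense list of lists.
    """
    n = len(b)
    x = [0.0] * n
    r = b[:]                 # residual b - A x (x starts at zero)
    p = r[:]                 # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        # A applied to the current search direction
        ap = [sum(a[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        # New direction, conjugate to the previous ones
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```

In exact arithmetic it terminates in at most n steps, which is the sense in which it is a "non-iterative" whole-matrix solve rather than a row-by-row relaxation.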

There are a lot of other interesting novel aspects to the system; it's definitely worth reading... and rereading, and rereading :)


Let's have another look at the system (w and x are complementary):


w1 = a11 * x1 + a12 * x2 - b1 >= 0
w2 = a21 * x1 + a22 * x2 - b2 >= 0
x1 >= 0, x2 >= 0
x1 * w1 = 0, x2 * w2 = 0

There are four possible solutions:

1) x1 > 0, w1 = 0, x2 > 0, w2 = 0
2) x1 > 0, w1 = 0, x2 = 0, w2 > 0
3) x1 = 0, w1 > 0, x2 > 0, w2 = 0
4) x1 = 0, w1 > 0, x2 = 0, w2 > 0

The "Direct Enumeration Method" enumerates all possible solutions. Hence the name :)

So you basically go through the four cases:

Case 1:
Set w1 = 0 and w2 = 0
0 = a11 * x1 + a12 * x2 - b1
0 = a21 * x1 + a22 * x2 - b2
if ( x1 > 0 and x2 > 0 ) -> DONE

Case 2:
Set w1 = 0 and x2 = 0:

We need to solve:
0 = a11 * x1 + a12 * 0 - b1
w2 = a21 * x1 + a22 * 0 - b2

x1 = b1 / a11
w2 = a21 * x1 - b2

if ( x1 > 0 and w2 > 0 ) -> DONE

Case 3 (see case 2):
Set x1 = 0 and w2 = 0:

Case 4:
Set x1 = 0 and x2 = 0
w1 = -b1
w2 = -b2

if ( w1 > 0 and w2 > 0 ) -> DONE

If you haven't found a solution by this point, it doesn't exist.
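The four cases above translate almost line-for-line into code. Here is a Python sketch of the direct enumeration method for the 2x2 LCP (my illustration; the function name and the Cramer's-rule solve in case 1 are just one way to do it):

```python
def solve_lcp_2x2(a11, a12, a21, a22, b1, b2):
    """Direct enumeration for the 2x2 LCP above:
        w1 = a11*x1 + a12*x2 - b1 >= 0
        w2 = a21*x1 + a22*x2 - b2 >= 0
        x1, x2 >= 0,  x1*w1 = 0,  x2*w2 = 0
    Tries the four cases in order and returns (x1, x2), or None if
    no case checks out. Assumes the 2x2 system is well-conditioned.
    """
    # Case 1: w1 = 0, w2 = 0 -> solve the full 2x2 system (Cramer)
    det = a11 * a22 - a12 * a21
    if det != 0.0:
        x1 = (b1 * a22 - b2 * a12) / det
        x2 = (a11 * b2 - a21 * b1) / det
        if x1 >= 0.0 and x2 >= 0.0:
            return (x1, x2)
    # Case 2: w1 = 0, x2 = 0
    if a11 != 0.0:
        x1 = b1 / a11
        w2 = a21 * x1 - b2
        if x1 >= 0.0 and w2 >= 0.0:
            return (x1, 0.0)
    # Case 3: x1 = 0, w2 = 0
    if a22 != 0.0:
        x2 = b2 / a22
        w1 = a12 * x2 - b1
        if x2 >= 0.0 and w1 >= 0.0:
            return (0.0, x2)
    # Case 4: x1 = 0, x2 = 0
    if -b1 >= 0.0 and -b2 >= 0.0:
        return (0.0, 0.0)
    return None  # no solution among the four cases
```

This is the same case analysis a 2x2 block solver performs for a two-point contact manifold, just written out explicitly.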


HTH,
-Dirk

