Constraints and linear programming

Started by
76 comments, last by woodae 14 years, 8 months ago
I agree with you about Featherstone. It is indeed an overly sophisticated solution for a much simpler problem. I am aware of a very simple solution, but it is not yet published, so I cannot talk about it.

Good luck. Let me know if you find a solution to the problem. This is really interesting. I lack the mathematical knowledge for this.


Cheers,
-Dirk
I got an interesting result when I tried to decompose it.

Suppose you have two rectangular matrices:

| B  |
| r0 |   and   | C  r1 |


That is, the first is (c x b) and the second is (b x c), and c > b. B is simply the b x b submatrix, and r0 is the remainder. Likewise with C and r1. Multiply them together, and you have a square matrix of c x c size, of the form:

| B*C   B*r1  |
| r0*C  r0*r1 |


If you decompose it into LU form, you get:

| 1        0 |   | B*C  B*r1 |
| r0*B^-1  1 | * | 0    0    |


The interesting part is that the bottom row of the upper triangular matrix is all 0s, which would seem to indicate that the system is singular. That turns out to be correct: I made absolutely no assumptions about the original rectangular matrices, but the product of a (c x b) matrix and a (b x c) matrix has rank at most b < c, so every square matrix formed this way really is singular.

[Edited by - Numsgil on July 23, 2009 5:19:25 PM]
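As a sanity check on the rank argument, here is a minimal NumPy sketch (the random matrices and sizes are my own assumed example, not from the post): the product of a (c x b) and a (b x c) matrix with c > b is always singular.

```python
import numpy as np

# Assumed example sizes: c = 5, b = 3 (c > b).
rng = np.random.default_rng(0)
b, c = 3, 5
tall = rng.standard_normal((c, b))   # plays the role of [B; r0]
wide = rng.standard_normal((b, c))   # plays the role of [C  r1]

P = tall @ wide                      # square c x c product

# rank(P) <= min(rank(tall), rank(wide)) <= b < c, so P is singular.
assert np.linalg.matrix_rank(P) == b
assert abs(np.linalg.det(P)) < 1e-9
```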
Darwinbots - Artificial life simulation
Okay, a little more progress. I think this might lead to a general linear time constraint solver, but it's a little too early to tell.

Say that you want to decompose your effective mass matrix, formed by J * M^-1 * J^T, into a lower and upper triangular decomposition (L * U).

Arbitrarily subdivide your Jacobian into a square submatrix and a remainder (for this example I'm assuming that there are more constraints than bodies), i.e.:

J = | B |
    | r |

Now multiply out:

J * M^-1 * J^T = | B*M^-1*B^T  B*M^-1*r^T |
                 | r*M^-1*B^T  r*M^-1*r^T |

B is a square matrix, and if we assume that it's not singular, you can decompose it into an upper and lower triangular form. Then the top-left block becomes L * U * M^-1 * U^T * L^T, which is a really nice form for solving systems.

I think you can then recursively block out the other submatrices like this, where you find the largest square block you can decompose, and recurse on the remaining matrices. I'm still working through the math, but let's say that you can.

At each level of the recursion, matrix B is sparse (since it comes right from the Jacobian), with a constant upper bound on the number of nonzero elements per row for all possible Jacobians, no matter how large, assuming only unary and binary constraints. Likewise, while an individual column might be dense, all other columns will be very sparse. So the average density per column is constant, no matter how large the matrix is. I haven't worked through the proof yet, but I think this means you can do things like add and multiply the matrices in linear time.

Which (I think) means you can decompose the matrix in linear time. Like I said, I'm still working through the math and seeing how it would work in practice. But I don't see anything which would indicate that this method wouldn't work.

edit: another interesting result. If the Jacobian has more rows than columns, and it has full row rank, the system is overconstrained and no solution can possibly exist. Which means the Jacobian of a system with a solution must always have the form J = | B  r | for some square matrix B and a rectangular matrix r, which means all of the above is sort of moot. :P

[Edited by - Numsgil on July 24, 2009 3:36:24 PM]
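The block expansion of J * M^-1 * J^T above can be verified numerically. A sketch with assumed random data (the sizes and the diagonal mass matrix are my own stand-ins, not from the post):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 4, 2                               # assumed sizes
B = rng.standard_normal((n, n))           # square sub-block of J
r = rng.standard_normal((k, n))           # remainder rows of J
Minv = np.diag(rng.uniform(0.5, 2.0, n))  # diagonal inverse-mass stand-in

J = np.vstack([B, r])                     # J = [B; r] stacked
lhs = J @ Minv @ J.T
rhs = np.block([[B @ Minv @ B.T, B @ Minv @ r.T],
                [r @ Minv @ B.T, r @ Minv @ r.T]])
assert np.allclose(lhs, rhs)              # the 2x2 block form holds
```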
Okay, trying again:

Let J be your Jacobian. You want to find, say, the inverse of J * M^-1 * J^T.

J must be a rectangular matrix with #rows <= #columns. So we can write it in block matrix form as J = | B  r |, where B is of size #rows x #rows, and r is the remainder of the matrix (note: it's entirely possible for r to have more elements than B).

Let's likewise subdivide the M^-1 matrix into block notation:

| Mcc  0   |
| 0    Mbb |

Multiplying out we get: J * M^-1 * J^T = B*Mcc*B^T + r*Mbb*r^T. Which at first doesn't seem to be much of an improvement, since there aren't many nice matrix identities we can use. However, it happens to be in a form we can use in the Woodbury matrix identity (http://en.wikipedia.org/wiki/Woodbury_matrix_identity).

Which is where I'm at right now. Still working through the math, but it looks promising so far :)
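The sum form can be checked the same way; a minimal sketch with assumed random blocks (the names Mcc and Mbb follow the post's notation, the sizes are my own):

```python
import numpy as np

rng = np.random.default_rng(2)
m, k = 3, 4                                # assumed sizes
B = rng.standard_normal((m, m))            # square block of J
r = rng.standard_normal((m, k))            # remainder of J
Mcc = np.diag(rng.uniform(0.5, 2.0, m))    # block-diagonal pieces of M^-1
Mbb = np.diag(rng.uniform(0.5, 2.0, k))

J = np.hstack([B, r])                      # J = [B  r]
Minv = np.block([[Mcc, np.zeros((m, k))],
                 [np.zeros((k, m)), Mbb]])
assert np.allclose(J @ Minv @ J.T, B @ Mcc @ B.T + r @ Mbb @ r.T)
```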
Sort of a roundabout, hacky way, but I think I figured out how to handle general-case constraints in linear time. I looked into the Woodbury formula, but it was sort of a dead end. However, I realized that if the system were critically constrained, so there weren't any free degrees of freedom for any of the bodies, the Jacobian would be a sparse square matrix. You should be able to decompose it into LU form in linear time. You can then form your system as L * U * M^-1 * U^T * L^T.

You need the Jacobian to be a square matrix, so you add constraints to the system until it's critically constrained and forms a square. The actual value of the constraints doesn't matter, so you can just add rows of 0s. Then you decompose the Jacobian.

Then you remove the rows/cols corresponding to the faux constraints. What you're left with are still triangular and diagonal matrices, but their sizes are back down to square matrices based on the number of constraints.

Because the square Jacobian you're deconstructing is so sparse, you can decompose it (I think) in linear time.

[Edited by - Numsgil on July 27, 2009 12:08:31 PM]
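For the "decompose a sparse square Jacobian" step, here's a hedged sketch using SciPy's SuperLU wrapper; the tridiagonal matrix is an assumed stand-in for a chain of binary constraints, not the thread's actual Jacobian:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

n = 1000
main = 4.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
# Tridiagonal stand-in: each row couples a body only to its neighbors.
J = sparse.diags([off, main, off], [-1, 0, 1], format="csc")

lu = splu(J)          # sparse LU factorization (SuperLU)
b = np.ones(n)
x = lu.solve(b)
assert np.allclose(J @ x, b)
```

One caveat with the padding idea: rows of exact 0s make the square Jacobian singular, so any LU with pivoting will reject it; rows with a 1 on the diagonal (identity rows for the unconstrained DOFs) would be a safer filler.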
Hmm, I think there might be a problem with that method. Still working through it...
Okay, so I have four postulates which, if true, mean you can solve the constraints in the physics step (ignoring any bounding on the variables) in linear time (I'm pretty sure). If anyone can help me with any of these, I'd appreciate it.

1. Given a single body with d degrees of freedom, and c existing constraints (where c < d), can you construct (and if so, how) d-c additional constraints (call them f) where for each c, c dot f == 0?

That is, how do you construct entirely orthogonal constraints to fill up any remaining degrees of freedom? Or can you?

2. Given two orthogonal constraints, c1 and c2, where c1 * c2 == 0 (as in the first postulate above), and the system:

| c1 |^2   | j0 |   | b0 |
| c2 |   * | j1 | = | b1 |

Is it the case that j0 == b0 / (c1 * c1)?
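Reading the squared matrix as C * C^T (the J * M^-1 * J^T form with M = I, which is my assumption about the intended notation), postulate 2 checks out, since orthogonal rows make C * C^T diagonal. A sketch with assumed numbers:

```python
import numpy as np

c1 = np.array([3.0, 4.0])
c2 = np.array([-4.0, 3.0])       # c1 . c2 == 0: orthogonal rows
C = np.vstack([c1, c2])
b = np.array([10.0, 5.0])

A = C @ C.T                      # diagonal: diag(c1.c1, c2.c2)
j = np.linalg.solve(A, b)
assert np.isclose(j[0], b[0] / (c1 @ c1))   # j0 = b0 / (c1 . c1)
assert np.isclose(j[1], b[1] / (c2 @ c2))
```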

3. Assuming that every degree of freedom in the Jacobian has a constraint associated with it, the Jacobian is square, of size b*d x b*d. Can you decompose it into a lower and upper triangular form in O(b*d) time? And if so, how?

I think the answer is yes, since the Jacobian is so sparse. Suppose that each constraint is binary (only acts between two bodies), and that you have the principle of no virtual work. That means that each constraint removes exactly two degrees of freedom from the global system. Because each body has exactly d DOFs, there is a constant upper bound on the number of nonzero elements in any constraint row, regardless of the size of the global system. Call this upper bound k.

Likewise, while any one column might be rather dense, the other columns will be extremely sparse. So the average number of nonzero elements per column is k.

4. Given the L * U decomposition from 3, what is the time complexity of solving L * U * x = b? For dense matrices it's quadratic time, and it's at least linear time, since you need to touch every element of x. My guess is that since the Jacobian was so sparse, the LU decomposition will be sparse as well, and the system can be solved in O(kc) time.
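On postulate 4: forward substitution on a row-sparse lower-triangular L touches each stored nonzero exactly once, so the work is proportional to the number of nonzeros; with O(k) entries per row that's linear in the number of rows. A sketch using an assumed (col, value) row representation of my own:

```python
import math

def sparse_forward_sub(L_rows, b):
    """Solve L x = b where L_rows[i] lists the (col, value) pairs of
    row i, including the diagonal. Work is O(total nonzeros)."""
    x = [0.0] * len(b)
    for i, row in enumerate(L_rows):
        s = b[i]
        diag = None
        for col, val in row:
            if col == i:
                diag = val          # remember the diagonal entry
            else:
                s -= val * x[col]   # subtract known contributions
        x[i] = s / diag
    return x

# Assumed 3x3 example: L = [[2,0,0],[1,3,0],[0,1,4]], b = [2,5,6]
L_rows = [[(0, 2.0)], [(0, 1.0), (1, 3.0)], [(1, 1.0), (2, 4.0)]]
x = sparse_forward_sub(L_rows, [2.0, 5.0, 6.0])
assert math.isclose(x[0], 1.0)
assert math.isclose(x[1], 4.0 / 3.0)
assert math.isclose(x[2], 7.0 / 6.0)
```

The same argument applied to back substitution on U covers the second half of the L * U * x = b solve.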



So that's where I am.

[Edited by - Numsgil on July 28, 2009 3:18:29 PM]
Need help in linear programming.

http://www.gamedev.net/community/forums/topic.asp?topic_id=543025

