Jacobian + QR decomposition

I'm hoping someone can show me why I can't do the following. Given a constraint Jacobian J, where each row of the Jacobian is a different constraint; a diagonal mass matrix M whose diagonal entries are the masses and moments of inertia; an initial velocity vector v; and a bias vector b, the constraint equation can be solved via:

J * M^-1 * J^T * λ = -J * v + b

(see Catto's GDC presentations).

I want to build a "global solver" for a high-fidelity physics engine. It's very possible that the J * M^-1 * J^T matrix will be rank deficient (when you have more constraints than degrees of freedom in your system, for instance). So I had the bright idea of doing a linear least squares, which means that in the over-constrained case the impulses would get "evenly" (in a least-squares sense) distributed across my constraints. This should work just fine: e.g. take a QR decomposition of the J * M^-1 * J^T matrix, and then solve for λ. (Haven't tested it yet, though.)

Then I had the idea that if I redefine the bias a bit, I can get the equation into this form:

J * M^-1 * J^T * λ = -J * (v + b)

Then use a QR decomposition to find the pseudoinverse of J. Call it JP. Then I can do:

JP * J * M^-1 * J^T * λ = -JP * J * (v + b)
M^-1 * J^T * λ = -(v + b)

The pseudoinverse of the inverse mass matrix is just the mass matrix with infinite values clamped to 0. So then I should be able to do:

J^T * λ = -M * (v + b)

and use the pseudoinverse of J^T to find λ in a least-squares sense. Or so I thought. But when I try it on paper, I don't get the correct answer for this simple example:

J := [1, -1]
b := [0, 0]
M := [1, 0] <-- diagonal matrix with these as diagonal entries
v := [-2, 1]

The correct answer should be λ = 3, which I get if I solve it normally. But when I try to use my QR decomposition of J (well, actually J^T), that's not what I'm getting at all:

J^T = [sqrt(2)/2, -sqrt(2)/2]^T * [sqrt(2)]

If I plug that into J^T * λ = -M * (v + b), I get λ = 1. So I feel like I'm missing something really important here.
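Here's the toy problem in numpy, in case that makes it clearer (just a sanity-check script, not solver code; I'm treating the second body as static, so its inverse mass is 0, and b = 0 throughout):

import numpy as np

# Toy problem from above: one constraint row, two bodies, second body static.
J = np.array([[1.0, -1.0]])      # constraint Jacobian
M_inv = np.diag([1.0, 0.0])      # inverse mass matrix ("infinite mass" clamped to 0)
v = np.array([-2.0, 1.0])

# Normal solve: J M^-1 J^T λ = -J v, via least squares so rank deficiency is fine.
K = J @ M_inv @ J.T
lam, *_ = np.linalg.lstsq(K, -(J @ v), rcond=None)
print(lam)        # [3.]  -- the answer I expect

# Pseudoinverse route: J^T λ = -M v, with M the clamped mass matrix.
M_clamped = np.diag([1.0, 0.0])
lam2, *_ = np.linalg.lstsq(J.T, -(M_clamped @ v), rcond=None)
print(lam2)       # [1.]  -- not the same, which is exactly my problem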
Darwinbots - Artificial life simulation
Usually you have to solve an MLCP and not a linear system. Are you only assuming holonomic constraints? In that case you usually have some kind of tree structure which can be solved in linear time. Baraff has written a lot about this; look at his linear-time solver. You can also look into Mirtich's papers and PhD thesis; he used an SVD to deal with the over-constrained system. If you have to solve an MLCP, look into Dantzig or Lemke.
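Roughly, the SVD route looks like this (just an illustration using numpy's pinv, which is SVD-based; it's not Mirtich's actual code):

import numpy as np

# K = J M^-1 J^T can be rank deficient; pinv() is computed via an SVD and
# gives the minimum-norm least-squares impulse in that case.
def solve_constraints_svd(J, M_inv, v, b):
    K = J @ M_inv @ J.T
    return np.linalg.pinv(K) @ (-(J @ v) + b)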

My personal opinion is that you will not achieve "high fidelity" physics with a direct solver though.
Quote:Original post by DonDickieD
Usually you have to solve an MLCP and not a linear system. Are you only assuming holonomic constraints?


I'm starting with the simple case where constraints can push and pull and have no limits on how large they can be (which I guess is what you mean by holonomic? I'm not familiar with the term). Once I get that working with a linear least-squares solver, I'll add back in box limits. I have a paper talking about how to do box constraints for LLS problems (hrm, "constraint" is an overloaded term in this context; I mean limiting λ so that l <= λ <= u).

I believe that for the general case like this it's a quadratic program and not an LCP, though. But yeah, same idea.
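Roughly what I have in mind for the box-limited version, as a sketch (scipy's lsq_linear is just a stand-in bounded least-squares solver here, not necessarily what I'll end up writing):

import numpy as np
from scipy.optimize import lsq_linear

# Minimize ||K λ - rhs||^2 subject to lo <= λ <= hi
# (e.g. lo = 0, hi = inf for contacts that can only push).
def solve_boxed(J, M_inv, v, b, lo, hi):
    K = J @ M_inv @ J.T
    rhs = -(J @ v) + b
    return lsq_linear(K, rhs, bounds=(lo, hi)).x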

Quote:
In that case you usually have some kind of tree structure which can be solved in linear time. Baraff has written a lot about this; look at his linear-time solver. You can also look into Mirtich's papers and PhD thesis; he used an SVD to deal with the over-constrained system. If you have to solve an MLCP, look into Dantzig or Lemke.


I've read both Mirtich's thesis and Baraff's linear-time-solver papers, actually. Baraff's is good, but I'm starting with the absolutely most general case first and working down to more specific optimizations like his. The SVD Mirtich does is good too, but it's more expensive than the QR. Plus, down the road I want to see if I can implement a sparse decomposition, and I think that'll be easier with QR than with SVD. Hmm, let me go through my toy problem above with the SVD instead of the QR and see if it makes a difference.
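Quick check with numpy (assuming I've set the toy problem up right):

import numpy as np

J = np.array([[1.0, -1.0]])
M_clamped = np.diag([1.0, 0.0])   # mass matrix with the infinite entry clamped to 0
v = np.array([-2.0, 1.0])
rhs = -(M_clamped @ v)            # J^T λ = -M (v + b), with b = 0

# QR route: least squares via the thin QR of J^T.
Q, R = np.linalg.qr(J.T)
lam_qr = np.linalg.solve(R, Q.T @ rhs)

# SVD route: minimum-norm least squares via the pseudoinverse.
lam_svd = np.linalg.pinv(J.T) @ rhs

print(lam_qr, lam_svd)            # both give 1

So it looks like the discrepancy isn't coming from QR vs. SVD.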

Quote:
My personal opinion is that you will not achieve "high fidelity" physics with a direct solver though.


A piece of the puzzle. I'm also setting it up so that I use conservative advancement to find all TOIs, plus a collision heap to sort potential collisions by time, à la Mirtich's paper.
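For the TOI scheduling piece, I'm picturing something like this (hypothetical names, just a sketch of the event heap):

import heapq

toi_events = []   # min-heap of (toi, pair_id) tuples

def schedule(pair_id, toi):
    # push a conservatively-advanced time of impact for a body pair
    heapq.heappush(toi_events, (toi, pair_id))

def pop_earliest():
    # process potential collisions in time order, earliest first
    return heapq.heappop(toi_events) if toi_events else None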
Darwinbots - Artificial life simulation

