migdalsky

LCP solution methods


Hello! I'm working on an LCP solver, using the JM^-1J^T method in Baraff's terminology (the LCP w = Mx + q, w, x >= 0, wx = 0, with the matrix M non-singular and strictly positive definite). I've made some pretty good progress, but I'm still far from a high-quality physics package. I wanted to ask for suggestions on how to implement an iterative solver with O(n) expected running time. I'm looking at running problems with up to 10000 contacts in real time on a modern high-end PC with a low memory footprint (don't ask me why :)). I'm pretty successful with this goal so far for frictionless balls :) but I hope to make it work for everything. Collision detection is not my focus now; with this number of constraints the biggest problem is collision response (that is, once I use something more complicated than balls).

The LCP matrix is pretty sparse, so it's relatively easy to represent, but I still have about 64000 float entries in the matrix itself, plus control structures. In essence, this means I'm limited to 50-100 sweeps through the matrix (matrix-vector multiplications). I tried constructing no-fill Cholesky factors for the matrix; that doesn't seem to work well, and obviously I can't afford any significant fill-in due to the prohibitive size of the problem.

For some simpler configurations (a pile of 1000 balls, tightly packed), the matrix is "almost diagonally dominant" in some sense, so simple Gauss-Seidel iteration works just fine: I only need 15-30 matrix-vector multiplications, and it runs beautifully fast. For more complicated situations, however, there are big off-diagonal elements, which brings the Gauss-Seidel relaxation scheme I'm using to solve the LCP to its knees, just as the theory suggests. I tried block Gauss-Seidel, but it has essentially the same problems with a non-diagonally-dominant matrix. Can anyone suggest a way to precondition the JM'J'-type matrix so that it becomes diagonally dominant? I really want to stick to a very simple iterative method, since it works so fast and so well for simple configurations.

Does anyone know of any projected iteration methods, besides projected SOR/Gauss-Seidel, that work for positive-definite LCPs? I'd kill for a projective analog of Conjugate Gradient for the LCP! CG solves the analogous linear problem (just find x: Mx + q = 0) in 30 steps even without preconditioning, and my budget is 50-100 steps. Has anyone tried multigrid/multilevel methods to solve for contact forces? The literature on multigrid for LCPs is sparse, to say the least; any references would be very much appreciated.

Another option is to use a simplex method. I'm not a specialist in this, but to the best of my knowledge simplex methods (Lemke, criss-cross) don't work on sparse matrices, because the matrix gets more and more filled in during pivots, and memory is an issue for me. Am I missing something here? Has anyone tried simplex methods to solve contact forces in big systems (i.e. where it's impossible to fit the whole dense LCP in memory)?

The Dantzig method requires keeping and updating a factor of variable size (depending on the active basis); I think I heard ODE uses or used this? Anyway, there's no way for me to keep a dense factor of a submatrix of a 10000x10000 JM'J' matrix within a budget of 50-100 matrix-vector multiplications. Again, am I missing something? I've never programmed updating a factor by adding/deleting a row/column, but I did program a few sparse (incomplete) factors for this problem, and even that was a huge pain: the quality of the preconditioner falls dramatically as I try to shrink memory usage by raising drop tolerances or using moderate fill-in heuristics. In any case, I suspect the Dantzig method requires too many steps for a big (10000-variable) LCP. Am I wrong?

Another way to solve the LCP is to use interior point methods. I experimented with them using alternative formulations (a QP with an objective such as energy minimization). It works pretty well, but again, I need at least 10 outer-loop steps in bad cases, each step consisting of forming a preconditioner and solving at least one or two linear problems (e.g. for a predictor-corrector type algorithm). I can only use cheap preconditioners, which means at least 20-30 steps (very optimistically!) for each linear problem. (I used MinRes, LSQR, QMR, SQMR, CG, and CGS to solve the linear problem, each with a few different preconditioners, all just to make sure they behave as the theory predicts; I tried both the Augmented System and Normal Equations approaches, and AS worked better for me.) So in total I'm looking at a minimum of 400 matrix-vector computations, which is out of budget. In reality it'll be closer to 1200 due to ill-conditioned systems, especially at smaller complementarity gaps as the interior point method converges to the solution. Plus the time to construct each preconditioner...

Oh yes, and to top it all off, the engine is quite dynamic by nature, so a warm start (using the solution from the previous frame) can hardly be exploited to an extent that would make a big difference...

I'm starting to run out of sane ideas... Any suggestions would be greatly appreciated.

Sincerely,
Sergiy
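For reference, here is a minimal sketch of the projected Gauss-Seidel sweep discussed throughout this thread. It uses dense storage for clarity only; a real solver at this scale would sweep a sparse representation of M. All names are illustrative, not from any particular engine.

// A minimal projected Gauss-Seidel sweep for the LCP
//   w = M x + q,  x >= 0,  w >= 0,  wx = 0,
// with M positive definite (e.g. M = J M^-1 J^T).
#include <algorithm>
#include <cstddef>
#include <vector>

void projectedGaussSeidel(const std::vector<std::vector<float> >& M,
                          const std::vector<float>& q,
                          std::vector<float>& x,   // in/out: current iterate
                          int sweeps)
{
    const std::size_t n = q.size();
    for (int s = 0; s < sweeps; ++s) {
        for (std::size_t i = 0; i < n; ++i) {
            // Residual of row i, excluding the diagonal term,
            // using the freshest values of x (the Gauss-Seidel part).
            float r = q[i];
            for (std::size_t j = 0; j < n; ++j)
                if (j != i) r += M[i][j] * x[j];
            // Solve the scalar subproblem for x[i] and clamp onto the
            // non-negative orthant (the "projected" part).
            x[i] = std::max(0.0f, -r / M[i][i]);
        }
    }
}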

lonesock
Sorry, this may be a stupid idea:

If the matrix is usually pretty sparse, and the reason is that objects can only come in contact with other objects in close physical proximity, I imagine you could use a space partitioning system just like in your collision detection. You'd then solve for the responses of only the objects in a given grid "bucket". The nasty part would be the boundary conditions (i.e. where there are more contacts, but they are outside the grid square and you're ignoring them). Since you are already iterating, you could alternate between two different grids, where the second is offset by 0.5 of the cell width in each direction:


Grid Odd
+---+---+---+
|   |   |   |
|   |   |   |
+---+---+---+
|   |   |   |
|   |   |   |
+---+---+---+
|   |   |   |
|   |   |   |
+---+---+---+

Grid Even
  |   |   |
--+---+---+--
  |   |   |
  |   |   |
--+---+---+--
  |   |   |
  |   |   |
--+---+---+--
  |   |   |


The setup cost would be higher, since you're doing it per bucket, but the memory requirement would be significantly reduced, and (I'd imagine) so would the overall solution cost, assuming the boundary effects don't kill the solution.

Please feel free to shoot me down: it's obvious from your post that you have put way more time into this than I have [8^)
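A minimal sketch of this bucketed idea, assuming 2D contacts binned by position; localSolve() is a placeholder standing in for any small per-bucket relaxation (e.g. a few projected Gauss-Seidel sweeps over just that bucket's contacts), and all names are illustrative.

// Bin 2D contacts into square buckets, relax each bucket's subproblem
// locally, and shift the grid origin by half a cell on alternating
// passes so contacts near a bucket boundary eventually fall in a
// bucket interior.
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Contact { float x, y; /* plus constraint data */ };

// Placeholder: relax the sub-LCP restricted to this bucket's contacts,
// e.g. a few projected Gauss-Seidel sweeps over just those rows of M.
static void localSolve(const std::vector<int>& /*bucketContacts*/) {}

void alternatingGridPass(const std::vector<Contact>& contacts,
                         float cellSize, bool oddPass)
{
    // "Grid Even" is "Grid Odd" shifted by half a cell in each direction.
    const float offset = oddPass ? 0.0f : 0.5f * cellSize;
    std::unordered_map<std::int64_t, std::vector<int> > buckets;
    for (int i = 0; i < (int)contacts.size(); ++i) {
        const std::int64_t cx =
            (std::int64_t)std::floor((contacts[i].x + offset) / cellSize);
        const std::int64_t cy =
            (std::int64_t)std::floor((contacts[i].y + offset) / cellSize);
        buckets[(cx << 32) ^ (cy & 0xffffffff)].push_back(i);
    }
    for (const auto& b : buckets)
        localSolve(b.second);  // boundary contacts get picked up next pass
}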

Hello lonesock:

Thank you for your input.

Yes, limiting the solver to fewer objects or contacts is one of the ways to stabilize the solver, and I experimented with it a bit. Mathematically it is equivalent to block Gauss-Seidel, where you solve a subproblem by ignoring all contacts except some local contact set. There are many possible strategies for selecting the contacts to work with; I tried a few that are feasible from a memory-access-pattern standpoint, since I have a very tight memory budget. I ended up believing that solving for everything simultaneously is the way to go, unless I want to give up on it altogether.

One interesting idea here (extending one of Baraff's ideas) is to select contacts in such a way that there are no closed loops. It's relatively straightforward to precondition such a system perfectly without any fill-in, that is, in linear time and linear space; I could compute the solution for those contacts with, for example, an interior-point method. Then I could choose another tree of contacts (again without loops) and solve for those... After my experiments with block Gauss-Seidel I don't have much faith in this method, but it's worth looking at nevertheless. Mathematically, it means splitting the initial LCP

Mx + q = w,  x, w >= 0,  xw = 0

into a series of easy-to-solve LCPs
A1 x + q = w,  A2 x + q = w,  ...
where M = A1 + A2 + ...

My math skills aren't nearly good enough to find a good sequence of such A1, A2, ... that would at least guarantee convergence in a finite number of steps; I don't even know where to begin.

Generally, my belief is that if an "obvious" splitting is possible, the Gauss-Seidel method will find it automatically. For example, if the matrix M is block-diagonal (that is, when I end up solving simultaneously for two object islands that aren't connected to each other, which would be quite wasteful of me), projected Gauss-Seidel will perform the same as if I had split the islands correctly (well, I'll have to make N iterations, where N is the maximum of the iteration counts required for the individual blocks/islands). It will keep both blocks in memory, yes, and they'll compete for cache, but the convergence rate won't be hurt. My problem at the moment is the number of steps, not cache behaviour, which I can address by blocking the system when/if I start optimizing.

This example is one of the reasons for my strong belief that I must solve for all constraints at once: they'll split themselves up in the good cases, and there's no other way in the bad cases...

Thank you,
Sergiy
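To make the island example above concrete, here is a sketch of finding independent contact islands with a standard union-find pass over the contact list; ContactPair and the surrounding names are illustrative. The point is only that a block-diagonal M corresponds to disconnected islands in the contact graph, which is the decomposition projected Gauss-Seidel effectively discovers on its own.

// If the contact graph splits into disconnected islands, M is
// block-diagonal under a suitable ordering, and projected Gauss-Seidel
// converges on each block independently.
#include <numeric>
#include <vector>

struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(int n) : parent(n) {
        std::iota(parent.begin(), parent.end(), 0);  // each node is its own root
    }
    int find(int a) {
        while (parent[a] != a) a = parent[a] = parent[parent[a]];  // path halving
        return a;
    }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

struct ContactPair { int bodyA, bodyB; };  // one contact joins two bodies

// Returns, for each body, a representative id of its island.
std::vector<int> findIslands(int numBodies,
                             const std::vector<ContactPair>& contacts)
{
    UnionFind uf(numBodies);
    for (const ContactPair& c : contacts)
        uf.unite(c.bodyA, c.bodyB);
    std::vector<int> island(numBodies);
    for (int i = 0; i < numBodies; ++i)
        island[i] = uf.find(i);
    return island;
}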

I'm writing a paper about this for the GDC. The main idea is that temporal coherence can be exploited effectively and cheaply. If your problem doesn't have high temporal coherence, then you shouldn't need very accurate solutions anyhow (at least for games).

I'll post a link to my paper here after the GDC.

Erin
