Impulse based simulation - Jacobson, Baltman, Guendelman (Jiggle)

Started by
30 comments, last by Dirk Gregorius 18 years, 6 months ago
Quote:
Unfortunately most academics litter their papers with unnecessary complexity just so that they can look smart or impress their famous advisors


I think I have read every paper and master's or PhD thesis regarding physically-based animation. It is sometimes amazing how even the simplest problems are turned into really sophisticated ones. So I can't agree more :-)

Anyway, Ury has a very thorough understanding of both mathematics and dynamics. And I am glad that he shares his knowledge and has the patience to answer all my silly questions. There is no such thing as only one way to go. I think there are some decent simulators out there that apply his technique, while others use a different approach. This has been a very good discussion so far and it would be sad if we end up in some kind of flame war here.

Regards,

-Dirk
Quote:Original post by billy_zelsnack
ury. I'm sorry.. But you are a math punk. The Guendelman paper is the 3rd most important paper in physics simulation. Sure his collision stuff is craziness, but he was also handling mesh-mesh. That paper is important because it finally showed the academic snobs that PLAUSIBLE is the key.

I cannot disagree more with this statement. I would not call the paper “the most important”; I would not even classify it as a paper worth reading. The only thing Guendelman demonstrates is a good example of what not to do.
Quote:Original post by ury
The most disappointing part in this paper is the iterative solver. He identifies his solver as the Gauss-Seidel method. In reality, his solver is a close relative of Jacobi's method with a bug. :) Jacobi's method on its own has a very poor convergence rate. I am sure that the bug doesn't improve that either.
I guess that this is one of the reasons why he needed contact caching. :)




Ury, so I'm really curious now - what is the bug?!

thanks
-Steve.
Steve Broumley
No flame war at all intended. I agree. This has been a very nice thread and one I'm sure others will reference in the future. Ury has presented some very good information. I just wanted to chime in on the stacking paper.

anony.. Get an account please.
Quote:
I cannot disagree more with this statement. I would not call the paper “the most important”; I would not even classify it as a paper worth reading. The only thing Guendelman demonstrates is a good example of what not to do


Could you give reasons for this? Without justification such a posting makes very little sense. I mean, what should one think after such a post? You could be a very experienced physics programmer, in which case it would be interesting to hear the justification for this statement, or you are simply not able to implement the approach described in the paper...
Quote:
ury. I'm sorry.. But you are a math punk.

"Why that's the nicest thing that.." *sheds a tear*
"You just wait till I tell it to my one and only best friend Billy".
"Billy, come here". *impatiently waits*
"Ahh... Billy, someone just... HEY! Billy! How many times did I tell you not to eat poo. Bad dog!"

Now seriously. I enjoy hearing some criticism once in a while, but please if you do, at least read the thread thoroughly. I was talking about Erin Catto's paper and not about Eran Guendelman's.

I must agree, Guendelman's paper is a very important paper. BTW, I find the way that he treats collisions, contact and most importantly friction to be excellent.
Unfortunately, falling rigid bodies are just not enough in today's gaming physics. In today's games you'd expect to find more than that. Articulated bodies, deformable objects, fluids, smoke are just a part of that. Guendelman's paper certainly lacks any consideration for those. Why do I say that? Although Guendelman's paper is still excellent, you always have to remember:

Quote:
There is no such thing as only one way to go. I think there are some decent simulators out there that apply his technique, while others use a different approach.

Dirk Gregorius - October, 2005.


As far as "unnecessary complexity" goes: sometimes it's necessary, but usually it's not. Please keep in mind that those papers are written for the academic community. If you are a 16-year-old kid looking to create a game, you read your stuff here.

Quote:
btw. Just because an algorithm has a faster convergence rate on paper does not make the algorithm take less wall clock time to execute on a computer.


Now that was a really unnecessary comment, don't you agree?

Quote:Yes please! Especially, I'd like to know where the bug in the projected Gauss-Seidel is. The friction method is very interesting as well. Since you mention boxed LCP - I am still looking at how to transform the classic LCP into a boxed LCP. Any reference?


EDIT: Actually, there's no bug in Erin's solver. See Erin's explanation here.

As for box constrained LCP, sadly I don't have any reference for that. Doom3 uses box constrained LCP to handle friction, so you can look it up in their SDK.

The idea behind it is very simple.
Box constraints allow you to dynamically assign the limits for one solution scalar based on another solution scalar.

Here's a brief explanation:

Define:
n - contact normal
u, v - orthonormal vectors that span the collision plane.
c - mu / sqrt(2)

Coulomb model:
Ff <= mu * Fn

Define:
Fn = lambda1 * n
Ff = lambda2 * u + lambda3 * v

In terms of the lambdas, the Coulomb model reads:
(2) sqrt(lambda2^2 + lambda3^2) <= mu * lambda1

This is almost identical to the "classic LCP". The only difference is that you have four vectors instead of two. I assume that you are already familiar with this technique so I hope that you know what I mean.

Approximate (2) with a box inscribed in the friction disc:
(3) -c * lambda1 <= lambda2 <= c * lambda1
(4) -c * lambda1 <= lambda3 <= c * lambda1

There we have our box constraint. The bounds for lambda2 and lambda3 should be dynamically adjusted by the solver, as it finds new values for lambda1.

The results that this technique gives are identical to the ones that the "classic LCP" gives when using two vectors.
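To make the box-bound idea concrete, here is a minimal projected Gauss-Seidel sketch for a single contact. This is only an illustration, not the Doom3 code: the function name `solve_contact_pgs`, the 3x3 system, and the iteration count are made up for the example, and clamping the normal row first and then the friction rows against the current lambda1 is just one common ordering.

```python
import numpy as np

def solve_contact_pgs(A, b, mu, iters=50):
    # x = [lambda1 (normal), lambda2, lambda3 (friction)].
    # lambda1 is clamped to [0, inf); lambda2 and lambda3 are clamped to
    # [-c*lambda1, c*lambda1] with c = mu/sqrt(2), and those bounds are
    # recomputed from the current lambda1 on every update.
    c = mu / np.sqrt(2.0)
    x = np.zeros(3)
    for _ in range(iters):
        for i in range(3):
            x[i] += (b[i] - A[i] @ x) / A[i, i]
            if i == 0:
                x[0] = max(0.0, x[0])            # one-sided normal impulse
            else:
                lim = c * x[0]                   # box bound follows lambda1
                x[i] = min(max(x[i], -lim), lim)
    return x
```

With a large tangential right-hand side the friction rows saturate at +/- c * lambda1, which is exactly constraints (3) and (4) being enforced with moving bounds.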

Quote:
Technically you are absolutely right, there is very little difference. What might be an issue is that you need to call the collision engine twice (or even more), while I get away with calling it only once. But I am not so sure here :-/


You are correct. IMHO, Guendelman iterates his collision process five times. THAT'S MORE THAN TWICE THE WORK I DO! Muhahaha! :)
Seriously, it all depends on how accurate you want to be. I found out that for my current project, two iterations suffice.

Quote:BTW: How do you generate contact points? Like in ODE and OpenTissue, or like proposed by Guendelman using signed distance maps? If you have any good references they would be highly appreciated...

For collision detection we perform continuous sweeps from the previous position to the current one. This is the only way for us to handle collisions, since we are simulating very thin objects like suturing threads. For our big objects we can't use signed distance maps because almost all of them are deformable.
Collision detection is too big a subject to just be giving references. Do you have anything specific in mind?

Quote:Regarding higher integrators:
The Newton-Euler equation in a constrained system has the form:

M * dv/dt = f_ext + f_c

Let's call F(x,v,t) = inv(M) * (f_ext + f_c(x,v,t) + f_int(x,v,t)).

When evaluating F, the parameters x,v,t are considered. Usually, the forces don't depend on time so t can be neglected.

The ODE that we have on our hands is:
dx/dt = v
dv/dt = F(x,v,t)

or if we define:
Y = (x,v)^T,
f(Y,t) = (v, F(Y,t))^T = (v, F(x,v,t))^T

and this is how we get:
dY/dt = f(Y,t)

Looks familiar?
Here's how we use it:

Second-order Runge-Kutta (the midpoint method) should be:
k1 = dt * f(Yn, tn)
k2 = dt * f(Yn + 0.5*k1, tn + 0.5*dt)
Yn+1 = Yn + k2
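As a sanity check, that midpoint step translates into a few lines of code. This is my own sketch, not from the thread: the function names and the unit-mass spring example are invented for illustration; only the k1/k2 update follows the formulas above.

```python
import numpy as np

def rk2_step(f, y, t, dt):
    # One midpoint (second-order Runge-Kutta) step for dY/dt = f(Y, t).
    k1 = dt * f(y, t)
    k2 = dt * f(y + 0.5 * k1, t + 0.5 * dt)
    return y + k2

def spring(y, t):
    # Y = (x, v) for a unit mass on a unit-stiffness spring: F(x, v) = -x.
    x, v = y
    return np.array([v, -x])
```

Stepping from Y = (1, 0) with a small dt tracks the exact solution x(t) = cos(t) to second-order accuracy.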

[Edited by - ury on October 26, 2005 5:15:17 AM]
"I was talking about Erin Catto's paper and not about Eran Guendelman's."

Well.. There you go. I take it all back.. Minus the 'math punk' part. :)
It might look like there is a bug in my Gauss-Seidel implementation because I present it in a form different from textbooks. I did this for two reasons:

1. To avoid presenting the mathematical baggage of matrix splittings and spectral radii.
2. To give a ready-to-implement, practical algorithm.

Here's my basic implementation of Gauss-Seidel (ripped straight from my paper):
for k = 1 to limit
  for i = 1 to n
    dx_i = (b_i - sum(j = 1 to n, A_ij x_j)) / A_ii
    x_i = x_i + dx_i
  end
end

The matrix A can be split as A = L + D + U, where L is the lower part, D is the diagonal, and U is the upper part. It might appear that I am using the splitting:

x(k+1) = x(k) + inv(D) (b - A x(k)) = x(k) + inv(D) (b - (L + D + U) x(k))

But notice that in the inner loop I am updating x_i before proceeding to the next row. I’ll write my algorithm in component form:

x_i(k+1) = x_i (k) + (b_i - sum_j(1, i-1, A_ij x_j(k+1)) - sum_j(i, n, A_ij x_j(k))) / A_ii

or

x_i(k+1) = (b_i - sum_j(1, i-1, A_ij x_j(k+1)) - sum_j(i+1, n, A_ij x_j(k))) / A_ii

Now multiply through by A_ii:

A_ii x_i(k+1) = b_i - sum_j(1, i-1, A_ij x_j(k+1)) - sum_j(i+1, n, A_ij x_j(k))

In matrix form this is:

D x(k+1) = b - L x(k+1) - U x(k)

Solve for x(k+1) to find:

x(k+1) = inv(D+L) (b - U x(k))

which is precisely the Gauss-Seidel splitting.
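For the record, Erin's delta-form loop translates almost line for line into runnable code. The following is my own sketch (NumPy assumed), not code from his paper:

```python
import numpy as np

def gauss_seidel(A, b, limit=50):
    # Delta form: the residual uses the full row (j = 1..n), but because
    # x[i] is overwritten in place before the next row is processed, each
    # sweep applies the (D + L) splitting, i.e. true Gauss-Seidel.
    n = len(b)
    x = np.zeros(n)
    for _ in range(limit):
        for i in range(n):
            x[i] += (b[i] - A[i] @ x) / A[i, i]
    return x
```

On a small diagonally dominant system this converges to the exact solution of A x = b in a handful of sweeps.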
Erin
http://gphysics.com
Quote:
Here's my basic implementation of Gauss-Seidel (ripped straight from my paper):

for k = 1 to limit
for i = 1 to n
dx_i = (b_i - sum(j = 1 to n, A_ij x_j)) / A_ii
x_i = x_i + dx_i
end
end


*gulp* Erin, I am terribly sorry. I totally overlooked the fact that you compute dx rather than x. Needless to say, everything I said about your method converging to (A+D)x=b is total crap. I'll update my previous post accordingly.
Congratulations, you have yourself a kosher Gauss-Seidel method. :)

Anyways, you wrote a very good paper. If you'll add some explanations and examples about Jacobian computations, we'll have ourselves a great tutorial.
Oh.. and you might wanna improve your friction model. :)

Ok, I am going to eat my hat now, c'ya.
Here is a question coming from an ignorant in these things.
If all these papers are so perfect and so correct, why is it that the physics is so bogus?
I mean, with all these advanced theories I would think that simple things like a 10-box stack would stay put without the help of auto sleep, velocity damping, or the incredible shock wave. If you think that is hard with current methods, try a stack of boxes with variable masses where the top one is heavier than the bottom one.

The other point is: how many times is the Gauss-Seidel iterative method for solving linear systems going to be rediscovered in all its glory? I think I get it, it is written in every PhD thesis of the past 4 to 5 years. It is also written in every elementary book on numerical analysis of the past 25 years.

Who would have thought the method would make such an incredible comeback in the 21st century? 25 years ago it was regarded as one of the methods with an unbounded convergence curve, not worth considering for anything serious.

I once ran some experiments with large sparse PSD systems (you get those not only in physics engines; they also show up in a very large range of solutions to partial differential equations with the exact same structure as a rigid body system), and we found out that for all the so-called stationary methods, anything with a condition number higher than 5 to 10 will have a very flat convergence curve.
I think that is what explains the assumption that mass ratios cannot be larger than 5 to 10. We ended up with sparse Cholesky factorization.
Was I wrong, and has there been some new development that improves the Gauss-Seidel convergence rate? I have not read anything in that area in any of the papers, just more of the same.
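The condition-number effect described above is easy to reproduce on a toy problem. This is a made-up experiment (helper names and sizes are mine, not from the thread): build two SPD systems with different condition numbers and measure how much error a fixed number of Gauss-Seidel sweeps removes.

```python
import numpy as np

def gs_error_after(A, x_true, sweeps):
    # Relative error left after a fixed number of Gauss-Seidel sweeps,
    # starting from x = 0, against a known solution x_true.
    b = A @ x_true
    x = np.zeros(len(b))
    for _ in range(sweeps):
        for i in range(len(b)):
            x[i] += (b[i] - A[i] @ x) / A[i, i]
    return np.linalg.norm(x - x_true) / np.linalg.norm(x_true)

def spd_with_condition(kappa, n=20, seed=0):
    # Random SPD matrix with eigenvalues spread geometrically from 1 to kappa.
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q @ np.diag(np.geomspace(1.0, kappa, n)) @ q.T
```

With condition number around 2, thirty sweeps essentially converge; with condition number 1e4, the error barely moves, which matches the "very flat convergence curve" observation (Gauss-Seidel does converge for SPD systems, just arbitrarily slowly).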

